# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 3 Simulated Annealing Using the Batch Size
# ## 3.1 Experimental Work
# In this section, three types of experiments are carried out. The first is an exact replication of the set-up of Smith et al. (1), which was used to demonstrate that an increasing batch size can be used instead of a decaying learning rate. In the second set of experiments, the factor for the batch size increase and the learning rate decay is reduced from the original five to two. In the last round of this section, the experiments are repeated with an initial batch size of 1024.
#
# (1) <NAME>, <NAME>, and <NAME>. "Don't Decay the Learning Rate, Increase the Batch Size". In: CoRR abs/1711.00489 (2017). arXiv: 1711.00489. url: http://arxiv.org/abs/1711.00489
# The test error can be visualized as a graph in the Jupyter notebook _VisualizationGraph.ipynb_.
# A short explanation of the supported options:
# <blockquote>
# <p>--batch_size initial batch size, default: 128</p>
#
# <p>--lr initial learning rate, default: 0.1</p>
#
# <p>--epochs number of epochs, default: 200</p>
#
# <p>--model the network that should be used for training, default: WideResNet 16-4 </p>
#
# <p>--dataset the data set on which the model should be trained, default: CIFAR-10</p>
#
# <p>--optimizer the optimizer that should be used, default: SGD</p>
#
# <p>--filename the folder in which the log file and files for the visualization should be saved</p>
#
# <p>--gpu the gpu that should be used for the training, default: 0</p>
#
# <p>--mini_batch_size the size of the mini batch used as part of the Ghost Batch Normalization, default: 128</p>
#
# <p>--weight_decay the weight decay for the optimizer, default: 0.0005</p>
#
# <p>--momentum the momentum coefficient for SGD, default: 0.9</p>
#
# <p>--factor the factor of the batch size increase/learning rate decay, default: 5</p>
#
# <p>--LRD if a learning rate decay should be used instead of a batch size increase, default: False</p>
#
# <p>--steady if a learning rate decay/batch size increase should be done, default: False</p>
#
# <p>--doubleEndFactor if the factor of the BSI should double for the last epochs, default: False</p>
#
# <p>--saveState if the states of the training should be saved to enable the visualization of the loss landscape later, default: False</p>
#
# <p>--max the maximal batch size to be reached, default: 50000 (CIFAR-10 and CIFAR-100)</p>
#
# </blockquote>
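# As a rough illustration of how a batch size increase (BSI) and a learning rate decay (LRD) mirror each other at milestone epochs, here is a hedged sketch. The function, the milestone epochs 60/120/160, and the defaults are illustrative assumptions, not the actual implementation in _files/main.py_:

```python
# Sketch of the two equivalent schedules at milestone epochs.
# Assumptions (not taken from files/main.py): milestones at epochs 60/120/160
# and the defaults batch_size=128, lr=0.1, factor=5 from the option list above.
def schedule(epoch, batch_size=128, lr=0.1, factor=5,
             milestones=(60, 120, 160), use_lrd=False):
    """Return (batch_size, lr) for a given epoch.

    At each milestone, either the batch size is multiplied by `factor` (BSI)
    or the learning rate is divided by `factor` (LRD); both move the ratio
    lr/batch_size along the same decay path.
    """
    n = sum(epoch >= m for m in milestones)  # milestones already passed
    if use_lrd:
        return batch_size, lr / factor ** n
    return batch_size * factor ** n, lr

# After the second milestone (factor 5 applied twice):
assert schedule(130) == (128 * 25, 0.1)
assert schedule(130, use_lrd=True) == (128, 0.1 / 25)
```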
# ### 3.1.1 Replication for different optimizers, networks, and data sets
# #### BSI (factor 5) and LRD (factor 5)
# !python files/main.py --filename 'smith/original/128_01_Adadelta_BSI' --optimizer 'adadelta'
# !python files/main.py --filename 'smith/original/128_01_Adadelta_LRD' --optimizer 'adadelta' --LRD True
# !python files/main.py --filename 'smith/original/128_01_Adagrad_BSI' --optimizer 'adagrad'
# !python files/main.py --filename 'smith/original/128_01_Adagrad_LRD' --optimizer 'adagrad' --LRD True
# !python files/main.py --filename 'smith/original/128_01_MNIST_BSI' --dataset 'mnist' --model 'mnist_f1'
# !python files/main.py --filename 'smith/original/128_01_MNIST_LRD' --dataset 'mnist' --model 'mnist_f1' --LRD True
# !python files/main.py --filename 'smith/original/128_01_R44_BSI' --model 'r44'
# !python files/main.py --filename 'smith/original/128_01_R44_LRD' --model 'r44' --LRD True
# !python files/main.py --filename 'smith/original/128_01_R44_Adadelta_BSI' --model 'r44' --optimizer 'adadelta'
# !python files/main.py --filename 'smith/original/128_01_R44_Adadelta_LRD' --model 'r44' --optimizer 'adadelta' --LRD True
# !python files/main.py --filename 'smith/original/128_01_R44_Adagrad_BSI' --model 'r44' --optimizer 'adagrad'
# !python files/main.py --filename 'smith/original/128_01_R44_Adagrad_LRD' --model 'r44' --optimizer 'adagrad' --LRD True
# ### 3.1.2 Replication with a Factor of Two for the Increase/Decay
# #### BSI (factor 2) and LRD (factor 2)
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_Adadelta_BSI' --optimizer 'adadelta' --factor 2 --saveState True
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_Adadelta_LRD' --optimizer 'adadelta' --factor 2 --LRD True
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_Adagrad_BSI' --optimizer 'adagrad' --factor 2
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_Adagrad_LRD' --optimizer 'adagrad' --factor 2 --LRD True
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_MNIST_BSI' --dataset 'mnist' --model 'mnist_f1' --factor 2
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_MNIST_LRD' --dataset 'mnist' --model 'mnist_f1' --factor 2 --LRD True
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_R44_BSI' --model 'r44' --factor 2
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_R44_LRD' --model 'r44' --factor 2 --LRD True
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_R44_Adadelta_BSI' --model 'r44' --optimizer 'adadelta' --factor 2
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_R44_Adadelta_LRD' --model 'r44' --optimizer 'adadelta' --factor 2 --LRD True
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_R44_Adagrad_BSI' --model 'r44' --optimizer 'adagrad' --factor 2
# !python files/main.py --lr 0.04 --filename 'smith/factor2/128_004_R44_Adagrad_LRD' --model 'r44' --optimizer 'adagrad' --factor 2 --LRD True
# ### 3.1.3 Replication with an Initial Batch Size of 1024
# #### BSI (factor 2) and LRD (factor 2) with an initial batch size of 1024
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_Adadelta_BSI' --optimizer 'adadelta' --factor 2 --saveState True
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_Adadelta_LRD' --optimizer 'adadelta' --factor 2 --LRD True
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_Adagrad_BSI' --optimizer 'adagrad' --factor 2
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_Adagrad_LRD' --optimizer 'adagrad' --factor 2 --LRD True
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_MNIST_BSI' --dataset 'mnist' --model 'mnist_f1' --factor 2
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_MNIST_LRD' --dataset 'mnist' --model 'mnist_f1' --factor 2 --LRD True
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_R44_BSI' --model 'r44' --factor 2
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_R44_LRD' --model 'r44' --factor 2 --LRD True
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_R44_Adadelta_BSI' --model 'r44' --optimizer 'adadelta' --factor 2
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_R44_Adadelta_LRD' --model 'r44' --optimizer 'adadelta' --factor 2 --LRD True
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_R44_Adagrad_BSI' --model 'r44' --optimizer 'adagrad' --factor 2
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_R44_Adagrad_LRD' --model 'r44' --optimizer 'adagrad' --factor 2 --LRD True
# 1024: Adadelta + BSI (factor 2) + resetting the gradients in epochs 60, 120, and 160
# !python files/mainResetGrad.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_R44_Adadelta_BSI_resetGrad' --model 'r44' --optimizer 'adadelta' --factor 2
# ## 3.2 Discussion
# #### Loss landscapes
# 128:
# !python files/main.py --filename 'smith/original/128_01_BSI' --saveState True
# !python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
# --model_file files/trained_nets/smith/original/128_01_BSI/model_200.t7 \
# --mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
# !python files/main.py --filename 'smith/original/128_01_LRD' --LRD True --saveState True
# !python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
# --model_file files/trained_nets/smith/original/128_01_LRD/model_200.t7 \
# --mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
# 1024:
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_BSI' --factor 2 --saveState True
# !python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
# --model_file files/trained_nets/smith/1024/1024_032_BSI/model_200.t7 \
# --mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
# !python files/main.py --batch_size 1024 --lr 0.32 --filename 'smith/1024/1024_032_LRD' --factor 2 --LRD True --saveState True
# !python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
# --model_file files/trained_nets/smith/1024/1024_032_LRD/model_200.t7 \
# --mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
# 1D visualizations:
# !python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
# --model_file files/trained_nets/smith/1024/1024_032_BSI/model_200.t7 \
# --dir_type weights --xnorm filter --xignore biasbn --plot
# !python files/plot_surface.py --mpi --cuda --model wrn_164 --x=-1:1:51 \
# --model_file files/trained_nets/smith/1024/1024_032_LRD/model_200.t7 \
# --dir_type weights --xnorm filter --xignore biasbn --plot
# Adadelta:
# !python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
# --model_file files/trained_nets/smith/factor2/128_004_Adadelta_BSI/model_200.t7 \
# --mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
# !python files/plot_surface.py --x=-1:1:51 --y=-1:1:51 --model wrn_164 \
# --model_file files/trained_nets/smith/1024/1024_032_Adadelta_BSI/model_200.t7 \
# --mpi --cuda --dir_type weights --xignore biasbn --xnorm filter --yignore biasbn --ynorm filter --plot
# _Source notebook: Chapter3.ipynb_
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
df = pd.read_csv(r'D:\ml_ineuron\modular\Ml_class_ineuron\data\iris.csv')
df.head()
# # EDA
df.info()
df.describe()
df.isnull().sum()
df['species'].value_counts()
sns.countplot(data=df,x='species')
sns.scatterplot(x = 'petal_length', y = 'petal_width', data = df, hue = 'species')
sns.scatterplot(x = 'sepal_length', y = 'sepal_width', data = df, hue = 'species')
sns.pairplot(df, hue = 'species')
sns.heatmap(df.corr(numeric_only=True), annot=True)  # restrict to numeric columns; 'species' is a string column
# # Data processing
X = df.drop('species', axis =1)
y = df['species']
# +
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=101)
scaler = StandardScaler()
scaled_X_train = scaler.fit_transform(X_train)
scaled_X_test = scaler.transform(X_test)
# -
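# Note that the scaler above is fit on the training split only and merely applied to the test split, so no test-set statistics leak into training. A small illustration on synthetic data (not the iris frame):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
train = rng.normal(loc=5.0, scale=2.0, size=(100, 1))
test = rng.normal(loc=5.0, scale=2.0, size=(50, 1))

scaler = StandardScaler()
scaled_train = scaler.fit_transform(train)  # statistics estimated from train only
scaled_test = scaler.transform(test)        # the same train statistics are reused

# The train split is exactly standardized; the test split only approximately,
# because it is scaled with the training mean and standard deviation.
print(scaled_train.mean(), scaled_train.std())  # ~0 and ~1
print(scaled_test.mean(), scaled_test.std())    # close to, but not exactly, 0 and 1
```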
# # Modeling
# +
from sklearn.linear_model import LogisticRegression
log_model = LogisticRegression(solver = 'saga', multi_class = 'ovr', max_iter = 10000)
log_model.fit(scaled_X_train,y_train)
pred = log_model.predict(scaled_X_test)
# -
from sklearn.metrics import ConfusionMatrixDisplay, accuracy_score  # plot_confusion_matrix was removed in scikit-learn 1.2
print(accuracy_score(y_test, pred))
ConfusionMatrixDisplay.from_estimator(log_model, scaled_X_test, y_test)
# # Hyperparameter Tuning
import warnings
warnings.filterwarnings('ignore')
# +
from sklearn.model_selection import GridSearchCV
penalty = ['l1', 'l2', 'elasticnet']
l1_ratio = np.linspace(0,1,20)
C = np.logspace(0,10,20)
params = {'penalty': penalty, 'l1_ratio': l1_ratio, 'C': C}
grid_model = GridSearchCV(log_model,param_grid=params,verbose=1)
# -
grid_model.fit(scaled_X_train,y_train)
# +
from sklearn.metrics import accuracy_score, classification_report, ConfusionMatrixDisplay
y_pred = grid_model.predict(scaled_X_test)
ConfusionMatrixDisplay.from_estimator(grid_model, scaled_X_test, y_test)
# -
accuracy_score(y_test,y_pred)
print(classification_report(y_test,y_pred))
# +
# Why does classification work without encoding the target despite it being categorical?
# scikit-learn label-encodes string class labels internally, so `y` can stay as species names.
from sklearn.metrics import roc_curve, auc
def plot_multiclass_roc(clf, X_test, y_test, n_classes, figsize=(5, 5)):
    y_score = clf.decision_function(X_test)
    # structures
    fpr = dict()
    tpr = dict()
    roc_auc = dict()
    # calculate dummies once
    y_test_dummies = pd.get_dummies(y_test, drop_first=False).values
    for i in range(n_classes):
        fpr[i], tpr[i], _ = roc_curve(y_test_dummies[:, i], y_score[:, i])
        roc_auc[i] = auc(fpr[i], tpr[i])
    # roc for each class
    fig, ax = plt.subplots(figsize=figsize)
    ax.plot([0, 1], [0, 1], 'k--')
    ax.set_xlim([0.0, 1.0])
    ax.set_ylim([0.0, 1.05])
    ax.set_xlabel('False Positive Rate')
    ax.set_ylabel('True Positive Rate')
    ax.set_title('Receiver operating characteristic example')
    for i in range(n_classes):
        ax.plot(fpr[i], tpr[i], label='ROC curve (area = %0.2f) for label %i' % (roc_auc[i], i))
    ax.legend(loc="best")
    ax.grid(alpha=.4)
    sns.despine()
    plt.show()
# -
plot_multiclass_roc(grid_model, scaled_X_test, y_test, n_classes =3)
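# Regarding the question in the comment above: scikit-learn classifiers label-encode string targets internally, so the target column does not need manual encoding. A minimal self-contained check (toy data, not the iris frame):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

X_demo = np.array([[0.0], [0.1], [2.0], [2.1], [4.0], [4.1]])
y_demo = np.array(['setosa', 'setosa', 'versicolor', 'versicolor',
                   'virginica', 'virginica'])

clf = LogisticRegression().fit(X_demo, y_demo)
# The classifier stores the sorted class labels and predicts strings directly.
print(clf.classes_)  # ['setosa' 'versicolor' 'virginica']
print(clf.predict([[4.05]]))
```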
# _Source notebook: Practise_realdata/iris_multclass.ipynb_
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ferdouszislam/Android-Malware-Detection-ML/blob/main/jupyter_notebooks/3A.%20Apply%20Decision%20Tree.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="mJp_Y19HWxNI"
# # Apply Decision Tree
#
# ### This notebook applies the 'Decision Tree' classifier from Section III(C) of the paper
# + id="q-xRkxrEcLRq"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier
# + colab={"base_uri": "https://localhost:8080/"} id="cWV1ZdrecLR2" outputId="d7095f8b-3a98-46c6-97c3-70e45a2f08dd"
df = pd.read_csv('https://raw.githubusercontent.com/ferdouszislam/Android-Malware-Detection-ML/main/datasets/Feature-Selected_Dataset/Main_Dataset-Weka_Feature_Selected.csv?token=AKGHTOZCFCA62MER45KW3HLAUEPP4')
df.info()
# + id="v5S1TxEVcLR3"
X = df.drop('class', axis = 1)
y = df['class']
# + [markdown] id="DOYXpqcXcLR3"
# ### Finding the hyperparameter value for which the decision tree achieves the best combination of accuracy, F1 score, and AUC
# + id="-939RLxecLR3"
from sklearn.model_selection import StratifiedKFold,cross_val_score
cv = StratifiedKFold(n_splits=10, random_state=42, shuffle= True)
max_depth = [x for x in range(1,30)]
accuracies = []
f1s = []
aucs = []
for depth in max_depth:
    model = DecisionTreeClassifier(criterion='gini', max_depth=depth)
    accuracy_segments = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=1)
    f1_segments = cross_val_score(model, X, y, scoring='f1', cv=cv, n_jobs=1)
    auc_segments = cross_val_score(model, X, y, scoring='roc_auc', cv=cv, n_jobs=1)
    accuracies.append(np.mean(accuracy_segments))
    f1s.append(np.mean(f1_segments))
    aucs.append(np.mean(auc_segments))
# + colab={"base_uri": "https://localhost:8080/", "height": 567} id="y9VCgmechIkg" outputId="14eb0fe2-c35a-4544-b3c4-84840027b0b7"
plt.figure(figsize =(15,9))
# plt.title('Accuracy,F1 Score, and Area Under ROC-curve VS max depth', fontdict = {'fontsize' : 18})
plt.plot(max_depth, accuracies, 'ro-', max_depth, f1s ,'bv-', max_depth, aucs,'yo-')
plt.axvline(x=17, color='k', linestyle='--')
plt.legend(['Accuracies','F1 Scores','AUC', 'selected value (max_depth=17)'], fontsize=16)
plt.xlabel('Maximum depths', fontsize=18)
plt.ylabel('Accuracy,F1 Score,Area Under ROC-curve', fontsize=18)
plt.xticks(fontsize=18)
plt.yticks(fontsize=18)
plt.show()
# + [markdown] id="HooEltt_c6Np"
# ### Hyperparameter(max_depth, criterion) selection using GridSearchCV
# + colab={"base_uri": "https://localhost:8080/"} id="9K0HHL30dHUf" outputId="b3d1ac7c-0b2b-41b8-93c0-4df2010b5ff0"
'''
courtesy-
<https://towardsdatascience.com/building-a-k-nearest-neighbors-k-nn-model-with-scikit-learn-51209555453a>
<https://towardsdatascience.com/gridsearchcv-for-beginners-db48a90114ee>
'''
from sklearn.model_selection import GridSearchCV
# create decision tree model
decisionTree_model_gscv = DecisionTreeClassifier(random_state=42)
# create a dictionary of all parameter values we want to exhaustively search
param_grid = {'max_depth': np.arange(1, 30), 'criterion': ['gini', 'entropy']}
# use gridsearch to check all values in param_grid
decisionTree_gscv = GridSearchCV(decisionTree_model_gscv, param_grid, scoring=['accuracy', 'f1', 'roc_auc'], refit='accuracy', cv=cv)
# fit model to data
decisionTree_gscv.fit(X, y)
# + colab={"base_uri": "https://localhost:8080/"} id="_8DqTiKNdnWY" outputId="e7912da4-44c0-41a3-d3ed-1c6798764410"
#check top performing parameter values
decisionTree_gscv.best_params_
# + colab={"base_uri": "https://localhost:8080/"} id="NxX28YOPdsYf" outputId="fea26eef-9858-4981-f4dd-2eefaaf78cd3"
decisionTree_gscv.best_score_
# + [markdown] id="IWD7qiE8cLR4"
# ### So we can confirm that max_depth=17 and criterion='gini' give the best outcome.
# + [markdown] id="t1j-0WdLkMa0"
# ## Evaluate the model on experimenting/training set
# + id="ArZ0_UoBkTol"
y_predict = decisionTree_gscv.predict(X)
# + colab={"base_uri": "https://localhost:8080/"} id="5R8C7EdMkihj" outputId="a95a3509-03cf-4b67-8cfa-230c4c97f906"
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Evaluation on experimenting/training dataset')
print('Accuracy:', round(accuracy_score(y, y_predict), 3))
print('Precision:', round(precision_score(y, y_predict), 3))
print('Recall:', round(recall_score(y, y_predict), 3))
print('F1-score:', round(f1_score(y, y_predict), 3))
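# The scores above are computed on the same data the grid search was fit to, so they are optimistic. Out-of-fold predictions give a less biased estimate; here is a sketch using `cross_val_predict` with the selected hyperparameters, on synthetic data (the dataset and exact settings are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X_demo, y_demo = make_classification(n_samples=500, random_state=42)
cv_demo = StratifiedKFold(n_splits=10, random_state=42, shuffle=True)
model = DecisionTreeClassifier(max_depth=17, criterion='gini', random_state=42)

# Each sample is predicted by a model that never saw it during fitting.
y_oof = cross_val_predict(model, X_demo, y_demo, cv=cv_demo)
in_sample = accuracy_score(y_demo, model.fit(X_demo, y_demo).predict(X_demo))
out_of_fold = accuracy_score(y_demo, y_oof)
print(in_sample, out_of_fold)  # in-sample accuracy is typically the higher of the two
```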
# + [markdown] id="I-xvteNOzUfQ"
# ### ROC curve
# + colab={"base_uri": "https://localhost:8080/", "height": 303} id="A2enlr87zZgS" outputId="10a7805d-129b-4172-a788-001eaac322ad"
from sklearn.metrics import roc_curve, roc_auc_score
# calculate the fpr and tpr for all thresholds of the classification
y_probs = decisionTree_gscv.predict_proba(X)
y_probs = y_probs[:,1]
fpr, tpr, threshold = roc_curve(y, y_probs)
roc_auc = roc_auc_score(y, y_probs)
plt.title('Decision Tree Train-set ROC-Curve')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.3f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.ylabel('True Positive Rate', fontsize=18)
plt.xlabel('False Positive Rate', fontsize=18)
plt.show()
# + [markdown] id="y4m-vVH4k06y"
# ## Train set evaluation
# accuracy = 0.965, precision = 0.983, recall = 0.92, f1 = 0.951, auc = 0.994
# + [markdown] id="bhfoE5QpOEoM"
# ## Evaluate the model on holdout/test set
# + colab={"base_uri": "https://localhost:8080/"} id="zD_cyzYURSnT" outputId="c0c972fe-df76-41b5-ef57-8405b6be1f12"
holdout_df = pd.read_csv('https://raw.githubusercontent.com/ferdouszislam/Android-Malware-Detection-ML/main/datasets/Feature-Selected_Dataset/Holdout_Dataset-Weka_Feature_Selected.csv?token=<KEY>')
holdout_df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="aYusg3ieR_Rg" outputId="f8cb5af0-a0bf-4dce-ac54-395d443896cb"
holdout_df.info()
# + id="bU9U__JGSDgC"
X_holdout = holdout_df.drop('class', axis = 1)
y_holdout = holdout_df['class']
# + id="lEADbka_SQI3"
y_holdout_predict = decisionTree_gscv.predict(X_holdout)
# + colab={"base_uri": "https://localhost:8080/"} id="LjOXXF8shy42" outputId="ba18fee4-fed8-4c2e-f36d-097bb8c06dcb"
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
print('Evaluation on holdout dataset')
print('Accuracy:', round(accuracy_score(y_holdout, y_holdout_predict), 3))
print('Precision:', round(precision_score(y_holdout, y_holdout_predict), 3))
print('Recall:', round(recall_score(y_holdout, y_holdout_predict), 3))
print('F1-score:', round(f1_score(y_holdout, y_holdout_predict), 3))
# + [markdown] id="lMCppKfM4F4a"
# ## ROC Curve
# + colab={"base_uri": "https://localhost:8080/", "height": 303} id="VJVcxW6C4HTK" outputId="7e17cd5e-acde-468f-fd5f-c3740e489870"
from sklearn.metrics import roc_curve, roc_auc_score
# calculate the fpr and tpr for all thresholds of the classification
y_holdout_probs = decisionTree_gscv.predict_proba(X_holdout)
y_holdout_probs = y_holdout_probs[:,1]
fpr, tpr, threshold = roc_curve(y_holdout, y_holdout_probs)
roc_auc = roc_auc_score(y_holdout, y_holdout_probs)
plt.title('Decision Tree test-set ROC-Curve')
plt.plot(fpr, tpr, 'b', label = 'AUC = %0.3f' % roc_auc)
plt.legend(loc = 'lower right')
plt.plot([0, 1], [0, 1],'r--')
plt.ylabel('True Positive Rate', fontsize=18)
plt.xlabel('False Positive Rate', fontsize=18)
plt.show()
# + [markdown] id="CjqQDD2TlgGF"
# ## Test set evaluation
# accuracy = 0.961, precision = 0.961, recall = 0.93, f1 = 0.945, auc = 0.985
# _Source notebook: jupyter_notebooks/3A. Apply Decision Tree.ipynb_
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Emulating $\xi_+$-$\xi_-$-GGL covariances
#
# The "key project" in DES is the combined-probes analysis. For DES Y1, this was the 3x2pt analysis, which consisted of three 2-point functions (hence the name) and used a corresponding joint covariance matrix between these probes. In this notebook, we will build an emulator for just the $\xi_+$-$\xi_-$-GGL ($\gamma$) covariance from a set of 25 covariances computed by <NAME> in a 10-dimensional parameter space (cosmology + 5 tomographic biases).
import numpy as np
from scipy import stats
import covariance_emulator
import matplotlib.pyplot as plt
# %matplotlib inline
plt.rc("font", size=14, family="serif")
#plt.rc("text", usetex=True)
#Read in the domain locations, or locations in parameter space
parameters = np.loadtxt("cosmo_parameters.txt")
print(parameters.shape)
#Load in the covariances
covs = np.load("gaussian_w_sub_covs_withcut.npy")
print(covs.shape)
# +
#View the correlation matrix of the first
def corr_from_cov(cov):
    D = np.diag(np.sqrt(cov.diagonal()))
    Di = np.linalg.inv(D)
    return np.dot(Di, np.dot(cov, Di))

def view_corr(cov, lncov=False):
    R = corr_from_cov(cov)
    fig, ax = plt.subplots()
    if lncov:
        R = np.log(np.fabs(cov))
    im = ax.imshow(R, interpolation="nearest", origin="lower")
    plt.colorbar(im)
    return
# -
#Split off the last covariance matrix
test_cov = covs[-1]
test_parameters = parameters[-1]
covs = covs[:-1]
parameters = parameters[:-1]
#Create an emulator
NPC_D = 10
NPC_L = 10
#Emu = covariance_emulator.CovEmu(parameters, covs, NPC_D=NPC_D, NPC_L=NPC_L)
#Cpredicted = Emu.predict(test_parameters)
# ## Finding an optimal emulator
#
# The covariance emulator built above used the default configuration with a few principal components, but it actually has a few knobs to turn. We can control not only the number of principal components for D and L (`NPC_D, NPC_L`), but we can also create and pass in `george` kernels for both `D` and `L`. In the next cell, we will loop over all reasonable kernel options and figure out which emulator setup is the best (keeping the number of principal components fixed for now).
#
# Our method is the following:
# 1. Take the test covariance matrix $C_{\rm true}$ and draw from a multivariate normal in order to obtain a realization of the noise $d$.
# 2. Compute $\chi^2 = d^TC_{\rm emu}^{-1}d$ using the inverse of the emulated covariance matrix.
# 3. Repeat steps 1-2 thousands of times, recording all $\chi^2$s.
# 4. Histogram the $\chi^2$ values and plot them against the expected distribution given the number of degrees of freedom.
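# The four steps above can be sketched as follows; the toy covariances here (a diagonal `C_true` and a slightly rescaled `C_emu`) are illustrative stand-ins, not the DES matrices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
dof = 50
C_true = np.diag(rng.uniform(0.5, 2.0, size=dof))  # toy "true" covariance
C_emu = C_true * 1.02                              # toy "emulated" covariance
iC_emu = np.linalg.inv(C_emu)

# Steps 1-3: draw noise realizations d from C_true and compute
# chi^2 = d^T C_emu^{-1} d for each, many times.
N = 5000
d = rng.multivariate_normal(np.zeros(dof), C_true, size=N)
chi2s = np.einsum('ni,ij,nj->n', d, iC_emu, d)

# Step 4: compare with the expected chi^2 distribution for `dof` degrees of freedom.
print(stats.chi2.mean(dof))  # = dof, the reference for an unbiased covariance
print(np.mean(chi2s) - dof)  # the "shift"; slightly negative here since C_emu > C_true
```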
#Given a covariance matrix, make realizations of the noise, and then find the optimal kernel set up
def best_kernel_for_C(C, N_samples=1000):
    dof = len(C)
    means = np.zeros(dof)
    chi2s = np.zeros(N_samples)
    noise_realizations = np.array([np.random.multivariate_normal(means, C) for i in range(N_samples)])
    import george.kernels as kernels
    kerns = [kernels.ExpSquaredKernel, kernels.Matern52Kernel, kernels.Matern32Kernel]
    names = ["Exp2", "Mat52", "Mat32"]
    Npars = len(parameters[0])
    metric_guess = np.std(parameters, 0)
    # Loop over kernel combinations and compute the chi2 shift
    best_shift = 1e99
    best_name = None
    best_kernels = None
    for nameD, kd in zip(names, kerns):
        kernel_D = 1. * kd(metric=metric_guess, ndim=Npars)
        for nameL, kl in zip(names, kerns):
            kernel_L = 1. * kl(metric=metric_guess, ndim=Npars)  # was `kd`: a bug that ignored the L-kernel choice
            Emu = covariance_emulator.CovEmu(parameters, covs, NPC_D=NPC_D, NPC_L=NPC_L,
                                             kernel_D=kernel_D, kernel_lp=kernel_L)
            shift = 1e99
            try:
                Cpredicted = Emu.predict(test_parameters)
                iCpredicted = np.linalg.inv(Cpredicted)
            except np.linalg.LinAlgError:
                shift = 1e99
            else:
                for i in range(N_samples):
                    chi2s[i] = np.dot(noise_realizations[i], np.dot(iCpredicted, noise_realizations[i]))
                shift = np.mean(chi2s) - dof
            if np.fabs(shift) < np.fabs(best_shift):  # and shift > 0:
                best_shift = shift
                best_name = "%s %s" % (nameD, nameL)
                best_kernels = [kernel_D, kernel_L]
            print("%s %s: %e / %d" % (nameD, nameL, shift, dof))
    print("Best combination: %s" % best_name)
    print("\tshift/dof = %e / %d" % (best_shift, dof))
    return best_kernels
best_kernels = best_kernel_for_C(test_cov)
#Let's visualize
kernel_D, kernel_L = best_kernels
#kernel_L = 1.*kernels.Matern32Kernel(metric=metric_guess, ndim=Npars)
Emu = covariance_emulator.CovEmu(parameters, covs, NPC_D=NPC_D, NPC_L=NPC_L,
kernel_D = kernel_D, kernel_lp = kernel_L)
Cpredicted = Emu.predict(test_parameters)
view_corr(Cpredicted)
plt.title(r"$\xi_+\xi_-\gamma$ cut")
#plt.savefig("predicted_cov.png", dpi=300, bbox_inches="tight")
view_corr(test_cov)
plt.title(r"$\xi_+\xi_-\gamma$ cut")
#plt.savefig("true_cov.png", dpi=300, bbox_inches="tight")
# +
true_var = test_cov.diagonal()
emu_var = Cpredicted.diagonal()
frac_diff = (true_var - emu_var) / true_var
fig, ax = plt.subplots(ncols=1, nrows=2, sharex=True)
ax[0].plot(true_var, c='k', label='True variance')
ax[0].plot(emu_var, c='r', label='Emulated variance')
ax[1].plot(frac_diff, c='k')
ax[0].set_yscale('log')
ax[1].set_ylabel(r"Fractional difference")
ax[1].set_xlabel(r"Bin number")
#fig.savefig("scale_issue.png", dpi=300, bbox_inches="tight")
#ax[1].set_ylim(-2.5, 2.5)
# -
# ## Assessing the emulator performance
#
# One of the best ways to assess the performance of the emulator is to directly compare the true covariance to the emulated covariance. In the next cell, I will draw realizations of the noise from the true covariance and compute $\chi^2$ values of these noise vectors against the emulated covariance. Checking the result against the expected distribution then shows how well the emulator performs.
# +
#Define a function where we input two covariances, and get back out a list of chi2s
def get_chi2s_between_Cs(C1, C2, N_samples=1000):
    means = np.zeros(len(C1))
    chi2s = np.zeros(N_samples)
    iC2 = np.linalg.inv(C2)
    for i in range(N_samples):
        x = np.random.multivariate_normal(means, C1)
        chi2s[i] = np.dot(x, np.dot(iC2, x))
    return chi2s
dof = len(test_cov)
# -
chi2s = get_chi2s_between_Cs(test_cov, test_cov)
plt.hist(chi2s, density=True, bins=100)
xmin = min(chi2s)*0.97
xmax = 1.03*max(chi2s)
x = np.linspace(xmin, xmax, 1000)
plt.plot(x, stats.chi2.pdf(x, dof))
plt.title(r"$C_{\rm true}$ vs $C_{\rm true}$")
chi2s = get_chi2s_between_Cs(test_cov, Cpredicted, 1000)
plt.hist(chi2s, density=True, bins=100)
x = np.linspace(xmin, xmax, 1000)
#x = np.linspace(300, 800, 1000)
plt.plot(x, stats.chi2.pdf(x, dof))
plt.title(r"$C_{\rm true}$ vs $C_{\rm emu}$")
plt.xlabel(r"$\chi^2$")
plt.axvline(dof, color="k", ls="--")
ax = plt.gca()
#ax.text(0.7, 0.5, r"$\chi2=d^TC^{-1}d$", transform=ax.transAxes)
print("Chi2/dof shift = %.2f / %d"%(np.mean(chi2s) - dof, dof))
plt.savefig("chi2_realizations.png", dpi=300, bbox_inches="tight")
# # Emulated covariance vs. any random covariance
#
# In fiducial analyses, and as has been suggested in the literature, we should be "fine" with neglecting parameter dependence in the covariance matrix. We can test this easily, by doing the chi2 distribution comparison between the test covariance matrix and the covariances we have on hand.
chi2s = get_chi2s_between_Cs(test_cov, covs[0])
plt.hist(chi2s, density=True, bins=100)
x = np.linspace(xmin, xmax, 1000)
plt.plot(x, stats.chi2.pdf(x, dof))
plt.title(r"$C_{\rm true}$ vs $C_{\rm 0}$")
# +
#Try looping over a few and comparing
x = np.linspace(xmin, xmax, 1000)
plt.plot(x, stats.chi2.pdf(x, dof))
for i in [0, 10, 20]:
    chi2s = get_chi2s_between_Cs(test_cov, covs[i], 1000)  # use the i-th covariance, not always covs[0]
    plt.hist(chi2s, density=True, bins=100, alpha=0.3, label=r"$C_{%d}$"%i)
    print("Chi2/dof shift = %.2f / %d"%(np.mean(chi2s) - dof, dof))
plt.legend()
# -
# We can see that for 200 degrees of freedom, using a covariance matrix from elsewhere in parameter space can shift $\chi^2$ by about 28/200, while the emulator is essentially unbiased. Thus, it is a clear improvement.
# _Source notebook: notebooks/Emulating DES Part3: clustering covariances.ipynb_
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# #!/usr/bin/env python3
import random
import gym
import gym.spaces
from collections import namedtuple
import numpy as np
from tensorboardX import SummaryWriter
import torch
import torch.nn as nn
import torch.optim as optim
HIDDEN_SIZE = 128
BATCH_SIZE = 100
PERCENTILE = 30
GAMMA = 0.9
class DiscreteOneHotWrapper(gym.ObservationWrapper):
    def __init__(self, env):
        super(DiscreteOneHotWrapper, self).__init__(env)
        assert isinstance(env.observation_space, gym.spaces.Discrete)
        self.observation_space = gym.spaces.Box(0.0, 1.0, (env.observation_space.n, ), dtype=np.float32)

    def observation(self, observation):
        res = np.copy(self.observation_space.low)
        res[observation] = 1.0
        return res
class Net(nn.Module):
    def __init__(self, obs_size, hidden_size, n_actions):
        super(Net, self).__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, n_actions)
        )

    def forward(self, x):
        return self.net(x)
Episode = namedtuple('Episode', field_names=['reward', 'steps'])
EpisodeStep = namedtuple('EpisodeStep', field_names=['observation', 'action'])
def iterate_batches(env, net, batch_size):
    batch = []
    episode_reward = 0.0
    episode_steps = []
    obs = env.reset()
    sm = nn.Softmax(dim=1)
    while True:
        obs_v = torch.FloatTensor([obs])
        act_probs_v = sm(net(obs_v))
        act_probs = act_probs_v.data.numpy()[0]
        action = np.random.choice(len(act_probs), p=act_probs)
        next_obs, reward, is_done, _ = env.step(action)
        episode_reward += reward
        episode_steps.append(EpisodeStep(observation=obs, action=action))
        if is_done:
            batch.append(Episode(reward=episode_reward, steps=episode_steps))
            episode_reward = 0.0
            episode_steps = []
            next_obs = env.reset()
            if len(batch) == batch_size:
                yield batch
                batch = []
        obs = next_obs
def filter_batch(batch, percentile):
disc_rewards = list(map(lambda s: s.reward * (GAMMA ** len(s.steps)), batch))
reward_bound = np.percentile(disc_rewards, percentile)
train_obs = []
train_act = []
elite_batch = []
for example, discounted_reward in zip(batch, disc_rewards):
if discounted_reward > reward_bound:
train_obs.extend(map(lambda step: step.observation, example.steps))
train_act.extend(map(lambda step: step.action, example.steps))
elite_batch.append(example)
return elite_batch, train_obs, train_act, reward_bound
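# The discounting in `filter_batch` (reward times `GAMMA` raised to the episode
# length) favours shorter successful episodes. A toy sketch with hypothetical
# (reward, steps) pairs, using the median as a stand-in for the percentile bound:

```python
GAMMA = 0.9

episodes = [(1.0, 5), (1.0, 20), (0.0, 10)]     # hypothetical (reward, steps)
scores = [r * GAMMA ** n for r, n in episodes]  # discounted scores
bound = sorted(scores)[len(scores) // 2]        # keep strictly above the median
elite = [ep for ep, s in zip(episodes, scores) if s > bound]
print(elite)  # [(1.0, 5)] -- the short win beats the long one
```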
if __name__ == "__main__":
random.seed(12345)
env = DiscreteOneHotWrapper(gym.make("FrozenLake-v0"))
# env = gym.wrappers.Monitor(env, directory="mon", force=True)
obs_size = env.observation_space.shape[0]
n_actions = env.action_space.n
net = Net(obs_size, HIDDEN_SIZE, n_actions)
objective = nn.CrossEntropyLoss()
optimizer = optim.Adam(params=net.parameters(), lr=0.001)
writer = SummaryWriter(comment="-frozenlake-tweaked")
full_batch = []
for iter_no, batch in enumerate(iterate_batches(env, net, BATCH_SIZE)):
reward_mean = float(np.mean(list(map(lambda s: s.reward, batch))))
full_batch, obs, acts, reward_bound = filter_batch(full_batch + batch, PERCENTILE)
if not full_batch:
continue
obs_v = torch.FloatTensor(obs)
acts_v = torch.LongTensor(acts)
full_batch = full_batch[-500:]
optimizer.zero_grad()
action_scores_v = net(obs_v)
loss_v = objective(action_scores_v, acts_v)
loss_v.backward()
optimizer.step()
print("%d: loss=%.3f, reward_mean=%.3f, reward_bound=%.3f, batch=%d" % (
iter_no, loss_v.item(), reward_mean, reward_bound, len(full_batch)))
writer.add_scalar("loss", loss_v.item(), iter_no)
writer.add_scalar("reward_mean", reward_mean, iter_no)
writer.add_scalar("reward_bound", reward_bound, iter_no)
if reward_mean > 0.8:
print("Solved!")
break
writer.close()
# Source notebook: Chapter04/.ipynb_checkpoints/03_frozenlake_tweaked-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/YoniSchirris/SimCLR-1/blob/master/SimCLR_Notebook.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="jF8ZoVrwt0n0" colab_type="text"
# # SimCLR
# PyTorch implementation of SimCLR: *A Simple Framework for Contrastive Learning of Visual Representations* by T. Chen et al., with support for the LARS (Layer-wise Adaptive Rate Scaling) optimizer and global batch norm.
#
# [Link to paper](https://arxiv.org/pdf/2002.05709.pdf)
#
# + [markdown] id="Lt6WMxjCvN3o" colab_type="text"
# ## Setup the repository
# + id="53JMIYtat8tT" colab_type="code" outputId="b05d0daa-49ab-4aa7-9013-59bd1474dafa" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# !git clone https://github.com/spijkervet/SimCLR.git
# %cd SimCLR
# !mkdir -p logs && cd logs && wget https://github.com/Spijkervet/SimCLR/releases/download/1.2/checkpoint_100.tar && cd ../
# !sh setup.sh || python3 -m pip install -r requirements.txt || exit 1
# !pip install pyyaml --upgrade
# + [markdown] id="fQ3jq3cWynLf" colab_type="text"
# # Part 1:
# ## SimCLR pre-training
# + id="0jhAv3hv8IHn" colab_type="code" colab={}
# whether to use a TPU or not (set in Runtime -> Change Runtime Type)
use_tpu = False
# + [markdown] id="bwW10d2O7pn8" colab_type="text"
# #### Install PyTorch/XLA
# + id="Vj84aiC27oxS" colab_type="code" colab={}
if use_tpu:
VERSION = "20200220" #@param ["20200220","nightly", "xrt==1.15.0"]
# !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
# !python pytorch-xla-env-setup.py --version $VERSION
# + id="oNDRcPbbymlX" colab_type="code" outputId="0fc30778-78a9-495a-b805-15563c602767" colab={"base_uri": "https://localhost:8080/", "height": 34}
import os
import torch
if use_tpu:
# imports the torch_xla package for TPU support
import torch_xla
import torch_xla.core.xla_model as xm
dev = xm.xla_device()
print(dev)
import torchvision
import argparse
from torch.utils.tensorboard import SummaryWriter
apex = False
try:
from apex import amp
apex = True
except ImportError:
print(
"Install the apex package from https://www.github.com/nvidia/apex to use fp16 for training"
)
from model import load_model, save_model
from modules import NT_Xent
from modules.transformations import TransformsSimCLR
from utils import post_config_hook
# + id="Abk6aFZxyedW" colab_type="code" colab={}
def train(args, train_loader, model, criterion, optimizer, writer):
loss_epoch = 0
for step, ((x_i, x_j), _) in enumerate(train_loader):
optimizer.zero_grad()
x_i = x_i.to(args.device)
x_j = x_j.to(args.device)
# positive pair, with encoding
h_i, z_i = model(x_i)
h_j, z_j = model(x_j)
loss = criterion(z_i, z_j)
if apex and args.fp16:
with amp.scale_loss(loss, optimizer) as scaled_loss:
scaled_loss.backward()
else:
loss.backward()
optimizer.step()
if step % 50 == 0:
print(f"Step [{step}/{len(train_loader)}]\t Loss: {loss.item()}")
writer.add_scalar("Loss/train_epoch", loss.item(), args.global_step)
loss_epoch += loss.item()
args.global_step += 1
return loss_epoch
# + [markdown] id="eYbV0fa_y03Z" colab_type="text"
# ### Load arguments from `config/config.yaml`
# + id="1klUf-IuyxdL" colab_type="code" colab={}
from pprint import pprint
from utils.yaml_config_hook import yaml_config_hook
config = yaml_config_hook("./config/config.yaml")
args = argparse.Namespace(**config)
if use_tpu:
args.device = dev
else:
args.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
args.out_dir = "logs"
if not os.path.exists("logs"):
os.makedirs("logs")
# + id="O86__UhA0Lvr" colab_type="code" outputId="7fe30063-4fc4-48b7-b311-33b31d1c4304" colab={"base_uri": "https://localhost:8080/", "height": 374}
### override any configuration parameters here, e.g. to adjust for use on GPUs on the Colab platform:
args.batch_size = 64
args.resnet = "resnet18"
pprint(vars(args))
# + [markdown] id="xJfeOM9PzNoF" colab_type="text"
# ### Load dataset into train loader
# + id="YGcskdBsytbj" colab_type="code" outputId="ef2c900c-f0fd-4317-dc9f-aa190a34adef" colab={"base_uri": "https://localhost:8080/", "height": 83, "referenced_widgets": ["5e8f1ebd1fc64ed7a6d1d216df8ea872", "a6298aace59c450583a138e51a6c2f20", "db5d4e20d06d4c2b8903db4b536fb9c6", "aecfd907d4fc49fa83127fc5adca770d", "6aa3f40ba65e4a9ca96c008c77bd2734", "ef3095486dec4011b2aa316d9c5cfe60", "31233aab47244beab600c4dc051a405d", "c7c070f514bd407f97fa25a78cbb7985"]}
root = "./datasets"
train_sampler = None
if args.dataset == "STL10":
train_dataset = torchvision.datasets.STL10(
root, split="unlabeled", download=True, transform=TransformsSimCLR(size=96) # 224 in the original paper
)
elif args.dataset == "CIFAR10":
train_dataset = torchvision.datasets.CIFAR10(
root, download=True, transform=TransformsSimCLR(size=32) # 224 in the original paper
)
else:
raise NotImplementedError
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=args.batch_size,
shuffle=(train_sampler is None),
drop_last=True,
num_workers=args.workers,
sampler=train_sampler,
)
# + [markdown] id="RBlXZwvjzPmp" colab_type="text"
# ### Load the SimCLR model, optimizer and learning rate scheduler
# + id="xERq_yHSzJRX" colab_type="code" colab={}
model, optimizer, scheduler = load_model(args, train_loader)
# + [markdown] id="RyJ3ulWqzViL" colab_type="text"
# ### Setup TensorBoard for logging experiments
# + id="zZNieMqfzU7H" colab_type="code" colab={}
tb_dir = os.path.join(args.out_dir, "colab")
if not os.path.exists(tb_dir):
os.makedirs(tb_dir)
writer = SummaryWriter(log_dir=tb_dir)
# + [markdown] id="Xpl6uQiIzbvK" colab_type="text"
# ### Create the mask that will remove correlated samples from the negative examples
# + [markdown] id="dtNCVEynzjtV" colab_type="text"
# ### Initialize the criterion (NT-Xent loss)
# + id="u067AY93zh-k" colab_type="code" colab={}
criterion = NT_Xent(args.batch_size, args.temperature, args.device)
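# `NT_Xent` above comes from the repository's `modules` package. For intuition
# only, here is a self-contained numpy sketch of an NT-Xent-style loss (not the
# repo implementation): normalise the embeddings, compute pairwise cosine
# similarities scaled by the temperature, mask self-similarity, and take a
# cross-entropy against each sample's positive partner half a batch away.

```python
import numpy as np

def nt_xent_sketch(z_i, z_j, temperature=0.5):
    # Stack both augmented views into one batch of 2B rows
    z = np.concatenate([z_i, z_j], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # unit vectors -> cosine sim
    sim = z @ z.T / temperature
    n = sim.shape[0]
    np.fill_diagonal(sim, -1e9)                       # mask self-similarity
    # positives sit half a batch apart: (i, i+B) and (i+B, i)
    pos = np.concatenate([np.arange(n // 2, n), np.arange(n // 2)])
    log_norm = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_norm - sim[np.arange(n), pos]))

rng = np.random.default_rng(0)
z_i = rng.normal(size=(4, 8))
z_j = z_i + 0.01 * rng.normal(size=(4, 8))  # near-identical positive views
print(nt_xent_sketch(z_i, z_j) > 0)  # True
```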
# + [markdown] id="cN5KBK-yztGD" colab_type="text"
# ### Start training
# + id="TdCrD62hzjDQ" colab_type="code" outputId="501c27da-9d00-4877-e5fd-4231e2c86aa5" colab={"base_uri": "https://localhost:8080/", "height": 306}
args.global_step = 0
args.current_epoch = 0
for epoch in range(args.start_epoch, args.epochs):
lr = optimizer.param_groups[0]['lr']
loss_epoch = train(args, train_loader, model, criterion, optimizer, writer)
if scheduler:
scheduler.step()
if epoch % 5 == 0:
save_model(args, model, optimizer)
writer.add_scalar("Loss/train", loss_epoch / len(train_loader), epoch)
writer.add_scalar("Misc/learning_rate", lr, epoch)
print(
f"Epoch [{epoch}/{args.epochs}]\t Loss: {loss_epoch / len(train_loader)}\t lr: {round(lr, 5)}"
)
args.current_epoch += 1
## end training
save_model(args, model, optimizer)
# + [markdown] id="77BXUR9_4hNc" colab_type="text"
# ## OPTIONAL: Download last checkpoint to local drive (replace `100` with `args.epochs`)
# + id="d7eHATk04Sgu" colab_type="code" outputId="02f49280-10ff-4436-ecc7-a51cfcbe9951" colab={"base_uri": "https://localhost:8080/", "height": 324}
from google.colab import files
files.download('./logs/checkpoint_100.tar')
# + [markdown] id="tAQpjiuJy61N" colab_type="text"
# # Part 2:
# ## Linear evaluation using logistic regression, using weights from frozen, pre-trained SimCLR model
# + [markdown] id="24wrzMP2vYcV" colab_type="text"
#
# + id="kFyS9RvpuCuC" colab_type="code" colab={}
import torch
import torchvision
import numpy as np
import argparse
from experiment import ex
from model import load_model
from utils import post_config_hook
from modules import LogisticRegression
# + id="pZRtPBCLvgqz" colab_type="code" colab={}
def train(args, loader, simclr_model, model, criterion, optimizer):
loss_epoch = 0
accuracy_epoch = 0
for step, (x, y) in enumerate(loader):
optimizer.zero_grad()
x = x.to(args.device)
y = y.to(args.device)
output = model(x)
loss = criterion(output, y)
predicted = output.argmax(1)
acc = (predicted == y).sum().item() / y.size(0)
accuracy_epoch += acc
loss.backward()
optimizer.step()
loss_epoch += loss.item()
# if step % 100 == 0:
# print(
# f"Step [{step}/{len(loader)}]\t Loss: {loss.item()}\t Accuracy: {acc}"
# )
return loss_epoch, accuracy_epoch
# + id="skBYAPb2uKB5" colab_type="code" colab={}
def test(args, loader, simclr_model, model, criterion, optimizer):
loss_epoch = 0
accuracy_epoch = 0
model.eval()
for step, (x, y) in enumerate(loader):
model.zero_grad()
x = x.to(args.device)
y = y.to(args.device)
output = model(x)
loss = criterion(output, y)
predicted = output.argmax(1)
acc = (predicted == y).sum().item() / y.size(0)
accuracy_epoch += acc
loss_epoch += loss.item()
return loss_epoch, accuracy_epoch
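# The per-batch accuracy in `train`/`test` above is just an argmax comparison;
# a small numpy sketch with made-up logits:

```python
import numpy as np

logits = np.array([[2.0, 0.1, 0.3],   # predicts class 0
                   [0.2, 3.0, 0.1],   # predicts class 1
                   [0.2, 0.1, 0.4]])  # predicts class 2
labels = np.array([0, 1, 0])
predicted = logits.argmax(axis=1)
accuracy = float((predicted == labels).mean())
print(accuracy)  # 2 of 3 correct -> 0.666...
```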
# + id="OJk4-nc-vkF0" colab_type="code" outputId="cc4fbda5-ac56-41c6-a13c-ecba7d3ada49" colab={"base_uri": "https://localhost:8080/", "height": 340}
from pprint import pprint
from utils.yaml_config_hook import yaml_config_hook
config = yaml_config_hook("./config/config.yaml")
pprint(config)
args = argparse.Namespace(**config)
if use_tpu:
args.device = dev
else:
args.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# + id="_7cSwhu55KJc" colab_type="code" colab={}
args.batch_size = 64
args.dataset = "STL10" # make sure to check this with the (pre-)trained checkpoint
args.resnet = "resnet50" # make sure to check this with the (pre-)trained checkpoint
args.model_path = "logs"
args.epoch_num = 100
args.logistic_epochs = 400
# + [markdown] id="GWRuVrZZ5Vm1" colab_type="text"
# ### Load dataset into train/test dataloaders
# + id="iPGuFjLW5PF9" colab_type="code" outputId="4bb22167-a306-46b8-d8e5-810599a71ca1" colab={"base_uri": "https://localhost:8080/", "height": 100, "referenced_widgets": ["02bf281098c44c898bfa00fe9df62999", "78c8aa6d9e0d4bd3af9ea8f2af895837", "678f5a370bf841f38dfb0aa8547487b3", "837acb55fea444f7bc924bb60a77deb7", "4884744cf6664742a57232e9bc9047ed", "e37ef4ab182d4ca2a3539830473ae92b", "2de6465becb24a138b100b49a4e233e2", "fd01debcdee543ebb42aed8cc678f0d3"]}
root = "./datasets"
if args.dataset == "STL10":
train_dataset = torchvision.datasets.STL10(
root,
split="train",
download=True,
transform=TransformsSimCLR(size=96).test_transform, # 224 in original paper
)
test_dataset = torchvision.datasets.STL10(
root,
split="test",
download=True,
transform=TransformsSimCLR(size=96).test_transform, # 224 in original paper
)
elif args.dataset == "CIFAR10":
train_dataset = torchvision.datasets.CIFAR10(
root,
train=True,
download=True,
transform=TransformsSimCLR(size=32).test_transform, # 224 in original paper
)
test_dataset = torchvision.datasets.CIFAR10(
root,
train=False,
download=True,
transform=TransformsSimCLR(size=32).test_transform, # 224 in original paper
)
else:
raise NotImplementedError
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=args.logistic_batch_size,
shuffle=True,
drop_last=True,
num_workers=args.workers,
)
test_loader = torch.utils.data.DataLoader(
test_dataset,
batch_size=args.logistic_batch_size,
shuffle=False,
drop_last=True,
num_workers=args.workers,
)
# + [markdown] id="TmwXqVBH5ZX6" colab_type="text"
# ### Load SimCLR model and load model weights
# + id="RTVnvx2a5QnX" colab_type="code" outputId="14e527a8-c607-4758-a749-9f692d2dde40" colab={"base_uri": "https://localhost:8080/", "height": 1000}
simclr_model, _, _ = load_model(args, train_loader, reload_model=True)
simclr_model = simclr_model.to(args.device)
simclr_model.eval()
# + id="HZoABGRr5Q8_" colab_type="code" colab={}
## Logistic Regression
n_classes = 10 # stl-10 / cifar-10
model = LogisticRegression(simclr_model.n_features, n_classes)
model = model.to(args.device)
# + id="T694n_HQ5Tad" colab_type="code" colab={}
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
criterion = torch.nn.CrossEntropyLoss()
# + [markdown] id="PLgDCu1uTLQ5" colab_type="text"
# ### Helper functions to map all input data $X$ to their latent representations $h$ that are used in linear evaluation (they only have to be computed once)
# + id="6B6li5NVSWR3" colab_type="code" colab={}
def inference(loader, context_model, device):
feature_vector = []
labels_vector = []
for step, (x, y) in enumerate(loader):
x = x.to(device)
# get encoding
with torch.no_grad():
h, z = context_model(x)
h = h.detach()
feature_vector.extend(h.cpu().detach().numpy())
labels_vector.extend(y.numpy())
if step % 20 == 0:
print(f"Step [{step}/{len(loader)}]\t Computing features...")
feature_vector = np.array(feature_vector)
labels_vector = np.array(labels_vector)
print("Features shape {}".format(feature_vector.shape))
return feature_vector, labels_vector
def get_features(context_model, train_loader, test_loader, device):
train_X, train_y = inference(train_loader, context_model, device)
test_X, test_y = inference(test_loader, context_model, device)
return train_X, train_y, test_X, test_y
def create_data_loaders_from_arrays(X_train, y_train, X_test, y_test, batch_size):
train = torch.utils.data.TensorDataset(
torch.from_numpy(X_train), torch.from_numpy(y_train)
)
train_loader = torch.utils.data.DataLoader(
train, batch_size=batch_size, shuffle=False
)
test = torch.utils.data.TensorDataset(
torch.from_numpy(X_test), torch.from_numpy(y_test)
)
test_loader = torch.utils.data.DataLoader(
test, batch_size=batch_size, shuffle=False
)
return train_loader, test_loader
# + id="sPeoK6ZkS4MB" colab_type="code" outputId="8a6208de-783f-4158-b2a0-d6a7ff778cd5" colab={"base_uri": "https://localhost:8080/", "height": 119}
print("### Creating features from pre-trained context model ###")
(train_X, train_y, test_X, test_y) = get_features(
simclr_model, train_loader, test_loader, args.device
)
arr_train_loader, arr_test_loader = create_data_loaders_from_arrays(
train_X, train_y, test_X, test_y, args.logistic_batch_size
)
# + id="vLaebM9Qvztx" colab_type="code" outputId="d043bf33-d4bd-4a3f-f546-742d036291e7" colab={"base_uri": "https://localhost:8080/", "height": 714}
for epoch in range(args.logistic_epochs):
    loss_epoch, accuracy_epoch = train(args, arr_train_loader, simclr_model, model, criterion, optimizer)
    if epoch % 10 == 0:
        print(f"Epoch [{epoch}/{args.logistic_epochs}]\t Loss: {loss_epoch / len(arr_train_loader)}\t Accuracy: {accuracy_epoch / len(arr_train_loader)}")
# final testing
loss_epoch, accuracy_epoch = test(
    args, arr_test_loader, simclr_model, model, criterion, optimizer
)
print(
    f"[FINAL]\t Loss: {loss_epoch / len(arr_test_loader)}\t Accuracy: {accuracy_epoch / len(arr_test_loader)}"
)
# + id="dxK5MuRbR7tW" colab_type="code" colab={}
# Source notebook: SimCLR_Notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Custom Expectation Value Program for the Qiskit Runtime
#
#
# <p>
# <font size="4" color="#0f62fe"><NAME></font>
# </p>
# <p>
# <font size="3" color="#0f62fe">IBM Quantum Partners Technical Enablement Team</font>
# </p>
#
# Here we will show how to make a program that takes a circuit, or list of circuits, and computes the expectation values of one or more diagonal operators.
# ## Prerequisites
#
# - You must have Qiskit 0.32+ installed.
# - You must have an IBM Quantum Experience account with the ability to upload a Runtime program. You can upload a program if you have access to more than just the open hub/group/project (ibm-q/open/main).
# ## Background
#
# The primary method by which information is obtained from quantum computers is via expectation values. Indeed, the samples that come from executing a quantum circuit multiple times, once converted to probabilities, can be viewed as a finite-sample approximation to the expectation value of the projection operators corresponding to each bitstring. More practically, many quantum algorithms, e.g. Variational Quantum Eigensolvers, require computing expectation values over Pauli operators, and thus having a runtime program that computes these quantities is of fundamental importance. Here we look at one such example, where a user passes one or more circuits and expectation operators and gets back the computed expectation values, and possibly error bounds.
#
# ### Expectation value of a diagonal operator
#
# Consider a generic observable given by the tensor product of diagonal operators over $N$ qubits, $O = O_{N-1}\dots O_{0}$, where the subscript indicates the qubit on which the operator acts. Then for a set of $M$ observed bitstrings $\{b_{0}, \dots b_{M-1}\}$, where $M \leq 2^N$, with corresponding approximate probabilities $p_{m}$, the expectation value is given by
#
# $$
# \langle O\rangle \simeq \sum_{m=0}^{M-1} p_{m}\prod_{n=0}^{N-1}O_{n}[b_{m}[N-n-1], b_{m}[N-n-1]],
# $$
#
# where $O_{n}[b_{m}[N-n-1], b_{m}[N-n-1]]$ is the diagonal element of $O_{n}$ specified by the $(N-n-1)$th bit in bitstring $b_{m}$. The reason for the complicated indexing in $b_{m}$ is that Qiskit uses least-significant-bit indexing, where the zeroth element of a bitstring is given by the right-most bit.
#
# Here we will use built-in routines to compute these expectation values. However, it is not hard to do yourself, with plenty of examples to be found.
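# For intuition, the sum above can be written directly against a counts
# dictionary. A hedged pure-Python sketch for diagonal Pauli strings (the
# program below uses mthree's built-in routines instead):

```python
def expval_from_counts(counts, op):
    # Diagonal Pauli expectation from counts: each bitstring contributes its
    # probability times (-1)^(number of 1s seen by a Z). Both the operator
    # string and Qiskit bitstrings put qubit 0 rightmost, so a plain
    # left-to-right zip keeps equal-length strings aligned.
    shots = sum(counts.values())
    total = 0.0
    for bits, n in counts.items():
        eig = 1.0
        for pauli, bit in zip(op, bits):
            if pauli == 'Z' and bit == '1':
                eig = -eig
        total += eig * n / shots
    return total

# GHZ-like counts: ZZZZ has eigenvalue +1 on both 0000 and 1111
print(expval_from_counts({'0000': 512, '1111': 512}, 'ZZZZ'))  # 1.0
```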
# ## Main program
#
# Here we define our main function for the expectation value runtime program. As always, our program must start with the `backend` and `user_messenger` arguments, followed by the actual inputs we pass to the program. Here our options are quite simple:
#
# - `circuits`: A single QuantumCircuit or list of QuantumCircuits to be executed on the target backend.
#
#
# - `expectation_operators`: The operators we want to evaluate. These can be strings of diagonal Paulis, e.g. `ZIZZ`, or custom operators defined by dictionaries. For example, the projection operator onto the all-ones state of 4 qubits is `{'1111': 1}`.
#
#
# - `shots`: How many times to sample each circuit.
#
#
# - `transpiler_config`: A dictionary that passes additional arguments on to the transpile function, e.g. `optimization_level`.
#
#
# - `run_config`: A dictionary that passes additional arguments on to `backend.run()`.
#
#
# - `skip_transpilation`: A flag to skip transpilation altogether and just run the circuits. This is useful for situations where you need to transpile parameterized circuits once, but must bind parameters multiple times and evaluate.
#
#
# - `return_stddev`: Flag to return bound on standard deviation. If using measurement mitigation this adds some overhead to the computation.
#
#
# - `use_measurement_mitigation`: Use M3 measurement mitigation and compute the expectation value and standard deviation bound from the quasi-probabilities.
#
# At the top of the cell below you will see a commented out `%%writefile sample_expval.py`. We will use this to convert the cell to a Python module named `sample_expval.py` to upload.
# +
# #%%writefile sample_expval.py
import mthree
from qiskit import transpile
# The entrypoint for our Runtime Program
def main(backend, user_messenger,
circuits,
expectation_operators='',
shots = 8192,
transpiler_config={},
run_config={},
skip_transpilation=False,
return_stddev=False,
use_measurement_mitigation=False,
):
    """Compute expectation values for a list of operators after
    executing a list of circuits on the target backend.
Parameters:
backend (ProgramBackend): Qiskit backend instance.
user_messenger (UserMessenger): Used to communicate with the program user.
        circuits (QuantumCircuit or list): A single QuantumCircuit or a list of QuantumCircuits.
expectation_operators (str or dict or list): Expectation values to evaluate.
shots (int): Number of shots to take per circuit.
transpiler_config (dict): A collection of kwargs passed to transpile().
run_config (dict): A collection of kwargs passed to backend.run().
skip_transpilation (bool): Skip transpiling of circuits, default=False.
        return_stddev (bool): Return upper bound on standard deviation,
default=False.
        use_measurement_mitigation (bool): Improve results using measurement
error mitigation, default=False.
Returns:
array_like: Returns array of expectation values or a list of (expval, stddev)
tuples if return_stddev=True.
"""
# transpiling the circuits using given transpile options
if not skip_transpilation:
trans_circuits = transpile(circuits, backend=backend,
**transpiler_config)
# Make sure everything is a list
if not isinstance(trans_circuits, list):
trans_circuits = [trans_circuits]
# If skipping set circuits -> trans_circuits
else:
if not isinstance(circuits, list):
trans_circuits = [circuits]
else:
trans_circuits = circuits
# If we are given a single circuit but requesting multiple expectation
# values, then set flag to make multiple pointers to same result.
duplicate_results = False
if isinstance(expectation_operators, list):
        if len(expectation_operators) > 1 and len(trans_circuits) == 1:
duplicate_results = True
# If doing measurement mitigation we must build and calibrate a
# mitigator object. Will also determine which qubits need to be
# calibrated.
if use_measurement_mitigation:
        # Get the measurement mappings at the end of each circuit
meas_maps = mthree.utils.final_measurement_mapping(trans_circuits)
# Get an M3 mitigator
mit = mthree.M3Mitigation(backend)
# Calibrate over the set of qubits measured in the transpiled circuits.
mit.cals_from_system(meas_maps)
# Compute raw results
result = backend.run(trans_circuits, shots=shots, **run_config).result()
raw_counts = result.get_counts()
# When using measurement mitigation we need to apply the correction and then
# compute the expectation values from the computed quasi-probabilities.
if use_measurement_mitigation:
quasi_dists = mit.apply_correction(raw_counts, meas_maps,
return_mitigation_overhead=return_stddev)
if duplicate_results:
quasi_dists = mthree.classes.QuasiCollection(
[quasi_dists]*len(expectation_operators))
# There are two different calls depending on what we want returned.
if return_stddev:
return quasi_dists.expval_and_stddev(expectation_operators)
return quasi_dists.expval(expectation_operators)
# If the program didn't return in the mitigation loop above it means
# we are processing the raw_counts data. We do so here using the
# mthree utilities
if duplicate_results:
raw_counts = [raw_counts]*len(expectation_operators)
if return_stddev:
return mthree.utils.expval_and_stddev(raw_counts, expectation_operators)
return mthree.utils.expval(raw_counts, expectation_operators)
# -
# ## Local testing
#
# Here we test with a local "Fake" backend that mimics the noise properties of a real system and a 4-qubit GHZ state.
from qiskit import QuantumCircuit
from qiskit.test.mock import FakeSantiago
from qiskit.providers.ibmq.runtime import UserMessenger
msg = UserMessenger()
backend = FakeSantiago()
qc = QuantumCircuit(4)
qc.h(2)
qc.cx(2, 1)
qc.cx(1, 0)
qc.cx(2, 3)
qc.measure_all()
main(backend, msg,
qc,
expectation_operators=['ZZZZ', 'IIII', 'IZZZ'],
transpiler_config={'optimization_level':3, 'layout_method': 'sabre',
'routing_method': 'sabre'},
run_config={},
skip_transpilation=False,
return_stddev=False,
use_measurement_mitigation=True
)
# If we have done our job correctly, the above should print out two expectation values close to one and a final expectation value close to zero.
# ## Program metadata
#
# Next we add the needed program data to a dictionary for uploading with our program.
# +
meta = {
"name": "sample-expval",
"description": "A sample expectation value program.",
"max_execution_time": 1000,
"spec": {}
}
meta["spec"]["parameters"] = {
"$schema": "https://json-schema.org/draft/2019-09/schema",
"properties": {
"circuits": {
"description": "A single or list of QuantumCircuits.",
"type": [
"array",
"object"
]
},
"expectation_operators": {
"description": "One or more expectation values to evaluate.",
"type": [
"string",
"object",
"array"
]
},
"shots": {
"description": "Number of shots per circuit.",
"type": "integer"
},
"transpiler_config": {
"description": "A collection of kwargs passed to transpile.",
"type": "object"
},
"run_config": {
"description": "A collection of kwargs passed to backend.run. Default is False.",
"type": "object",
"default": False
},
"return_stddev": {
"description": "Return upper-bound on standard deviation. Default is False.",
"type": "boolean",
"default": False
},
"use_measurement_mitigation": {
"description": "Use measurement mitigation to improve results. Default is False.",
"type": "boolean",
"default": False
}
},
"required": [
"circuits"
]
}
meta["spec"]["return_values"] = {
"$schema": "https://json-schema.org/draft/2019-09/schema",
"description": "A list of expectation values and optionally standard deviations.",
"type": "array"
}
# -
# ## Upload the program
#
# We are now in a position to upload the program. To do so, we first uncomment and execute the line `%%writefile sample_expval.py` in the cell above, giving us the `sample_expval.py` file we need to upload.
from qiskit import IBMQ
IBMQ.load_account();
provider = IBMQ.get_provider(group='deployed')
program_id = provider.runtime.upload_program(data='sample_expval.py', metadata=meta)
program_id
# ### Delete program if needed
# +
#provider.runtime.delete_program(program_id)
# -
# ## Wrapping the runtime program
#
# As always, it is best to wrap the call to the runtime program with a function (or possibly a class) that makes input easier and does some validation.
def expectation_value_runner(backend,
circuits,
expectation_operators='',
shots = 8192,
transpiler_config={},
run_config={},
skip_transpilation=False,
return_stddev=False,
use_measurement_mitigation=False):
"""Compute expectation values for a list of operators after
executing a list of circuits on the target backend.
Parameters:
backend (Backend or str): Qiskit backend instance or name.
circuits: (QuantumCircuit or list): A single or list of QuantumCircuits.
expectation_operators (str or dict or list): Expectation values to evaluate.
shots (int): Number of shots to take per circuit.
transpiler_config (dict): A collection of kwargs passed to transpile().
run_config (dict): A collection of kwargs passed to backend.run().
        return_stddev (bool): Return upper bound on standard deviation,
default=False.
skip_transpilation (bool): Skip transpiling of circuits, default=False.
        use_measurement_mitigation (bool): Improve results using measurement
error mitigation, default=False.
Returns:
array_like: Returns array of expectation values or a list of (expval, stddev)
pairs if return_stddev=True.
"""
if not isinstance(backend, str):
backend = backend.name()
options = {'backend_name': backend}
if isinstance(circuits, list) and len(circuits) != 1:
if isinstance(expectation_operators, list):
if len(circuits) != 1 and len(expectation_operators) == 1:
expectation_operators = expectation_operators*len(circuits)
elif len(circuits) != len(expectation_operators):
            raise ValueError('Number of circuits must match the number of '
                             'expectation operators if there is more than one of each')
inputs = {}
inputs['circuits'] = circuits
inputs['expectation_operators'] = expectation_operators
inputs['shots'] = shots
inputs['transpiler_config'] = transpiler_config
inputs['run_config'] = run_config
inputs['return_stddev'] = return_stddev
inputs['skip_transpilation'] = skip_transpilation
inputs['use_measurement_mitigation'] = use_measurement_mitigation
return provider.runtime.run('sample-expval', options=options, inputs=inputs)
# ### Trying it out
#
# Because we made our program public, anyone can try it out. Let's do so here with our previously made GHZ state, running on the simulator.
# +
backend = provider.backend.ibmq_qasm_simulator
all_zeros_proj = {'0000': 1}
all_ones_proj = {'1111': 1}
job = expectation_value_runner(backend, qc, [all_zeros_proj, all_ones_proj, 'ZZZZ'])
# -
job.result()
# The first two projectors should each be nearly $0.50$, as they give the probability of being in the all-zeros and all-ones states, respectively, which should be 50/50 for our GHZ state. The final expectation value of `ZZZZ` should be one, since this is a GHZ state over an even number of qubits. It should be close to zero for an odd number.
qc2 = QuantumCircuit(3)
qc2.h(2)
qc2.cx(2, 1)
qc2.cx(1, 0)
qc2.measure_all()
all_zeros_proj = {'000': 1}
all_ones_proj = {'111': 1}
job2 = expectation_value_runner(backend, qc2, [all_zeros_proj, all_ones_proj, 'ZZZ'])
job2.result()
# ## Quantum Volume as an expectation value
#
# Here we formulate QV as an expectation value of a projector onto the heavy-output elements of a distribution. We can then use our expectation value routine to compute whether a given circuit has passed the QV metric.
#
# QV is defined in terms of the heavy-outputs of a distribution. Heavy-outputs are those bitstrings whose probabilities lie above the median value of the distribution. Below we define the projection operator onto the set of bitstrings that are heavy-outputs for a given distribution.
def heavy_projector(qv_probs):
"""Forms the projection operator onto the heavy-outputs of a given probability distribution.
Parameters:
qv_probs (dict): A dictionary of bitstrings and associated probabilities.
Returns:
dict: Projector onto the heavy-set.
"""
median_prob = np.median(list(qv_probs.values()))
heavy_strs = {}
for key, val in qv_probs.items():
if val > median_prob:
heavy_strs[key] = 1
return heavy_strs
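# For example, on a hypothetical 2-qubit distribution the projector keeps
# exactly the above-median strings:

```python
import numpy as np

# Hypothetical 2-qubit output distribution
probs = {'00': 0.05, '01': 0.15, '10': 0.45, '11': 0.35}
median = np.median(list(probs.values()))          # 0.25
heavy = {k: 1 for k, v in probs.items() if v > median}
print(heavy)  # {'10': 1, '11': 1}
```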
# Now we generate 10 QV circuits as our dataset.
import numpy as np
from qiskit.quantum_info import Statevector
from qiskit.circuit.library import QuantumVolume
# Generate QV circuits
N = 10
qv_circs = [QuantumVolume(5) for _ in range(N)]
# Next, we have to determine the heavy-set of each circuit from the ideal answer, and then pass this along to our heavy-set projector function that we defined above.
ideal_probs = [Statevector.from_instruction(circ).probabilities_dict() for circ in qv_circs]
heavy_projectors = [heavy_projector(probs) for probs in ideal_probs]
# QV circuits have no measurements on them, so we need to add them:
circs = [circ.measure_all(inplace=False) for circ in qv_circs]
# With a list of circuits and projection operators, we now need only pass both sets to our expectation value runner, targeting the desired backend. We will also set the best transpiler arguments to give us a sporting chance of getting some passing scores.
backend = provider.backend.ibmq_manila
job3 = expectation_value_runner(backend, circs, heavy_projectors,
transpiler_config={'optimization_level':3, 'layout_method': 'sabre',
'routing_method': 'sabre'})
qv_scores = job3.result()
qv_scores
# A passing QV score is one where the value of the heavy-set projector is above $2/3$. So let us see who passed:
qv_scores > 2/3
from qiskit.tools.jupyter import *
# %qiskit_copyright
| tutorials/sample_expval_program/qiskit_runtime_expval_program.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import numpy as np
import librosa
import soundfile as sf
from pydub import AudioSegment
from glob import glob
import random
def int_to_float(array, type = np.float32):
    """
    Convert an integer np.array (e.g. int16 PCM) into np.float32, scaled into [-1.0, 1.0]

    Parameters
    ----------
    array : np.array
    type : target dtype, default np.float32

    Returns
    -------
    result : np.array
    """
if array.dtype == type:
return array
if array.dtype not in [np.float16, np.float32, np.float64]:
array = array.astype(np.float32) / np.max(np.abs(array))
return array
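# A small standalone check of the conversion above (the parameter is renamed `dtype` here to avoid shadowing the builtin `type`; the PCM values are made up): integer samples are scaled by their maximum absolute value into [-1.0, 1.0].

```python
import numpy as np

def int_to_float(array, dtype=np.float32):
    # mirrors the helper above: integer PCM is normalized by its peak amplitude
    if array.dtype == dtype:
        return array
    if array.dtype not in [np.float16, np.float32, np.float64]:
        array = array.astype(np.float32) / np.max(np.abs(array))
    return array

pcm = np.array([0, 16384, -16384], dtype=np.int16)
out = int_to_float(pcm)
print(out.dtype, out.tolist())  # float32 [0.0, 1.0, -1.0]
```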
# +
from scipy import interpolate
def change_samplerate(data, old_samplerate, new_samplerate):
old_audio = data
duration = data.shape[0] / old_samplerate
time_old = np.linspace(0, duration, old_audio.shape[0])
time_new = np.linspace(
0, duration, int(old_audio.shape[0] * new_samplerate / old_samplerate)
)
interpolator = interpolate.interp1d(time_old, old_audio.T)
data = interpolator(time_new).T
return data
def read_flac(file, sample_rate = 16000):
data, old_samplerate = sf.read(file)
if len(data.shape) == 2:
data = data[:, 0]
if old_samplerate != sample_rate:
data = change_samplerate(data, old_samplerate, sample_rate)
return data, sample_rate
def read_wav(file, sample_rate = 16000):
y, sr = librosa.load(file, sr = sample_rate)
return y, sr
def read_mp3(file, sample_rate = 16000):
audio = AudioSegment.from_mp3(file)
a = np.array(audio.set_frame_rate(sample_rate).set_channels(1).get_array_of_samples())
return int_to_float(a), sample_rate
def read_file(file):
    if file.endswith('.flac'):
        y, sr = read_flac(file)
    elif file.endswith('.wav'):
        y, sr = read_wav(file)
    elif file.endswith('.mp3'):
        y, sr = read_mp3(file)
    else:
        raise ValueError(f'unsupported audio format: {file}')
    return y, sr
# -
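# The resampling helper above maps the samples onto a new time grid by linear interpolation. For 1-D audio, the same result can be sketched with numpy's `np.interp` alone (the name `change_samplerate_1d`, and the 1-second 8 kHz sine input, are illustrative assumptions): upsampling 8 kHz to 16 kHz doubles the sample count.

```python
import numpy as np

def change_samplerate_1d(data, old_sr, new_sr):
    # linear interpolation onto a denser (or sparser) time grid
    duration = len(data) / old_sr
    time_old = np.linspace(0, duration, len(data))
    time_new = np.linspace(0, duration, int(len(data) * new_sr / old_sr))
    return np.interp(time_new, time_old, data)

tone = np.sin(2 * np.pi * 440 * np.linspace(0, 1, 8000))  # 1 s at 8 kHz
up = change_samplerate_1d(tone, 8000, 16000)
print(len(up))  # 16000
```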
def sampling(combined, frame_duration_ms = 700, sample_rate = 16000):
n = int(sample_rate * (frame_duration_ms / 1000.0))
offset = 0
while offset + n <= len(combined):
yield combined[offset : offset + n]
offset += n
if offset < len(combined):
yield combined[offset:]
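# The generator above yields fixed-length frames plus a shorter trailing remainder. With 16 kHz audio and 700 ms frames, a hypothetical 30 000-sample clip splits into two full 11 200-sample frames and one 7 600-sample tail:

```python
def sampling(combined, frame_duration_ms=700, sample_rate=16000):
    # same logic as above: full frames first, then the remainder if any
    n = int(sample_rate * (frame_duration_ms / 1000.0))
    offset = 0
    while offset + n <= len(combined):
        yield combined[offset:offset + n]
        offset += n
    if offset < len(combined):
        yield combined[offset:]

frames = list(sampling(list(range(30000))))
print([len(f) for f in frames])  # [11200, 11200, 7600]
```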
labels = [
'english',
'indonesian',
'malay',
'mandarin',
'manglish',
'others',
'not a language',
]
len(glob('english/clean-wav/*.wav'))
english = random.sample(glob('LibriSpeech/*/*/*/*.flac'), 1000) + glob('english/clean-wav/*.wav')
english = [(m, 'english') for m in english]
len(english)
len(glob('indon/clean-wav/*.wav'))
indon = glob('indon/clean-wav/*.wav') + random.sample(glob('speech/cv-corpus-5.1-2020-06-22/id/clips/*.mp3'),
1000)
indon = [(m, 'indonesian') for m in indon]
len(indon)
len(glob('malay/clean-wav/*.wav'))
malay = glob('malay/clean-wav/*.wav')
malay = [(m, 'malay') for m in malay]
len(malay)
len(glob('mandarin/clean-wav/*.wav'))
mandarin = glob('mandarin/clean-wav/*.wav') + random.sample(glob('speech/cv-corpus-5.1-2020-06-22/zh-CN/clips/*.mp3'), 500) \
+ random.sample(glob('speech/cv-corpus-5.1-2020-06-22/zh-HK/clips/*.mp3'), 500) \
+ random.sample(glob('speech/cv-corpus-5.1-2020-06-22/zh-TW/clips/*.mp3'), 500)
mandarin = [(m, 'mandarin') for m in mandarin]
len(mandarin)
manglish = glob('manglish/clean-wav/*.wav')
manglish = [(m, 'manglish') for m in manglish]
len(manglish)
lang = {'en': 'English',
'de': 'German',
'fr': 'French',
'cy': 'Welsh',
'br': 'Breton',
'cv': 'Chuvash',
'tr': 'Turkish',
'tt': 'Tatar',
'ky': 'Kyrgyz',
'ga-IE': 'Irish',
'kab': 'Kabyle',
'ca': 'Catalan',
'zh-TW': 'Chinese (Taiwan)',
'sl': 'Slovenian',
'it': 'Italian',
'nl': 'Dutch',
'cnh': 'Hakha Chin',
'eo': 'Esperanto',
'et': 'Estonian',
'fa': 'Persian',
'eu': 'Basque',
'es': 'Spanish',
'zh-CN': 'Chinese (China)',
'mn': 'Mongolian',
'sah': 'Sakha',
'dv': 'Dhivehi',
'rw': 'Kinyarwanda',
'sv-SE': 'Swedish',
'ru': 'Russian',
'id': 'Indonesian',
'ar': 'Arabic',
'ta': 'Tamil',
'ia': 'Interlingua',
'pt': 'Portuguese',
'lv': 'Latvian',
'ja': 'Japanese',
'vot': 'Votic',
'ab': 'Abkhaz',
'zh-HK': 'Chinese (Hong Kong)',
'rm-sursilv': 'Romansh Sursilvan',
'hsb': 'Sorbian, Upper',
'ro': 'Romanian',
'fy-NL': 'Frisian',
'cs': 'Czech',
'el': 'Greek',
'rm-vallader': 'Romansh Vallader',
'pl': 'Polish',
'as': 'Assamese',
'uk': 'Ukrainian',
'mt': 'Maltese',
'ka': 'Georgian',
'pa-IN': 'Punjabi',
'or': 'Odia',
'vi': 'Vietnamese'}
not_in = ['en', 'zh-TW', 'zh-CN', 'zh-HK', 'id']
lang = list(set(lang.keys()) - set(not_in))
# +
from tqdm import tqdm
others = []
for l in tqdm(lang):
g = glob(f'speech/cv-corpus-5.1-2020-06-22/{l}/clips/*.mp3')
others.extend(random.sample(g, min(len(g), 1000)))
others = [(m, 'others') for m in others]
# -
len(others)
not_music = glob('not-music/clean-wav/*.wav') + glob('musan/music/**/*.wav', recursive = True) \
+ glob('musan/noise/**/*.wav', recursive = True)
not_music = [(m, 'not a language') for m in not_music]
not_music[:10]
combined_all = english + indon + malay + mandarin + manglish + others + not_music
random.shuffle(combined_all)
len(combined_all)
# +
import os
for f in combined_all:
s = os.path.getsize(f[0]) / 1e6
if s > 50:
print(f, s)
# -
labels.index(combined_all[-1][1])
# +
# y, sr = read_file(combined_all[0][0])
# +
# y, sr, combined_all[0][1]
# +
import os
import tensorflow as tf
os.system('rm language-detection/data/*')
DATA_DIR = os.path.expanduser('language-detection/data')
tf.gfile.MakeDirs(DATA_DIR)
# +
import malaya_speech
vad = malaya_speech.vad.webrtc()
# +
from tqdm import tqdm
from malaya_speech.train import prepare_data
from collections import defaultdict
import warnings
warnings.filterwarnings('error')
def loop(files, dupe_factor = 2):
files, no = files
fname = f'{DATA_DIR}/part-{no}.tfrecords'
writer = tf.python_io.TFRecordWriter(fname)
counts = defaultdict(int)
for file in tqdm(files):
try:
wav = read_file(file[0])[0]
for _ in range(dupe_factor):
fs = sampling(wav, random.randint(500, 2000))
for s in fs:
try:
if file[1] != 'not a language':
n = malaya_speech.utils.astype.float_to_int(s)
frames = malaya_speech.utils.generator.frames(n, 30, 16000, append_ending_trail=False)
frames = [f.array for f in frames if vad(f)]
n = malaya_speech.utils.astype.int_to_float(np.concatenate(frames))
else:
n = s
if len(n) > 50:
example = prepare_data.to_example({'inputs': n.tolist(),
'targets': [labels.index(file[1])]})
writer.write(example.SerializeToString())
counts[file[1]] += 1
except Exception as e:
pass
except Exception as e:
pass
writer.close()
return [counts]
# -
import mp
returned = mp.multiprocessing(combined_all, loop, cores = 10)
combined_d = defaultdict(int)
for d in returned:
for k, v in d.items():
combined_d[k] += v
combined_d
| session/language-detection/prepare/language-detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
import time
import hbp_knowledge
import pybel
import pybel_jupyter
from pybel.struct import remove_pathologies
# -
print(sys.version)
print(time.asctime())
print(f'PyBEL Version: {pybel.get_version()}')
print(f'PyBEL-Jupyter Version: {pybel_jupyter.get_version()}')
print(f'HBP Knowledge Version: {hbp_knowledge.VERSION}')
graphs = hbp_knowledge.repository.get_graphs()
proteostasis_graphs = {
path: graph
for path, graph in graphs.items()
if 'proteostasis/' in path
}
proteostasis_graph = pybel.union(proteostasis_graphs.values())
proteostasis_graph.name = 'Proteostasis Subgraph'
proteostasis_graph.summarize()
remove_pathologies(proteostasis_graph)
pybel_jupyter.to_jupyter(proteostasis_graph)
| notebooks/Explore Proteostasis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Webscraping 40k Hindi songs
# We'll be scraping http://giitaayan.com/
# ### Phase 1
# In Phase 1, we will only scrape the category pages to get the song page URLs for all the songs on the website.
from selenium import webdriver
import re
import pandas as pd
import csv
import time
Chrome = webdriver.Chrome
chromedriver = './chromedriver'
browser = Chrome(chromedriver)
# Table headers for the csv file
table_headers = ['Song', 'Film', 'Year', 'Music Director', 'Lyricist', 'Singers']
# Opening the file in write mode and hence creating a new file with just the headers
with open(r'hindi_lyrics_phase1.csv', 'w') as file:
writer = csv.writer(file)
writer.writerow(table_headers)
# +
search_url = 'http://giitaayan.com/search.asp'
category_page_url = 'http://giitaayan.com/search.asp?fi=y&browse=Song&s='
# The website has following 28 categories
listofcategories = [
'%23', '0-9', 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', \
'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z'
]
# -
scraped_data = []
# %%time
# Iterating over one category at a time
for category_item in listofcategories:
url = f'http://giitaayan.com/search.asp?fi=y&browse=Song&s={category_item}'
# Opening the category's first page
browser.get(url)
try:
        # Find the total number of songs in this category
total_items = int(browser.find_element_by_xpath('//table[1]/tbody/tr/td/b[2]').text)
except Exception as e:
total_items = -1
print(total_items)
# Each page for the category has 50 rows except the last page
for page_number in range(1, int(total_items // 50 + 1) + 1):
# To reduce the load on the server, we induce a 2 second delay for every page request
time.sleep(2)
url = f'{search_url}?browse=Song&s={category_item}&PageNo={page_number}'
browser.get(url)
# Initializing the page data list
page_data = []
# Each page has 51 rows of which 1st row is a header row
# We need to iterate from 2nd to 51st rows to get the information about each song
for row_item_index in range(2, 52):
try:
# Extracting various information about the song from the loaded page
lyrics_url = browser.find_element_by_xpath(f'//table[2]/tbody/tr[{row_item_index}]/td[1]/span/a').get_attribute('href')
movie_name = browser.find_element_by_xpath(f'//table[2]/tbody/tr[{row_item_index}]/td[2]/a').text
year = browser.find_element_by_xpath(f'//table[2]/tbody/tr[{row_item_index}]/td[2]').text
                year = int(re.findall(r'\d+', year)[0])
music_director = browser.find_element_by_xpath(f'//table[2]/tbody/tr[{row_item_index}]/td[3]/a').text
lyricist = browser.find_element_by_xpath(f'//table[2]/tbody/tr[{row_item_index}]/td[4]/a').text
singers = browser.find_element_by_xpath(f'//table[2]/tbody/tr[{row_item_index}]/td[5]/a').text
row_item = [lyrics_url, movie_name, year, music_director, lyricist, singers]
                # Adding the data for one song to the page's list of rows
page_data.append(row_item)
except Exception as e:
                # For the last page of each category, this exception will be encountered at least once
                # since that page contains fewer than 50 rows
pass
# Printing the progress of the scraping
print(f'Writing {len(page_data)} lines for Category {category_item}, Page Number: {page_number}')
# Writing the data for each page to the csv file
# Notice that this time the file was opened in append mode
with open(r'hindi_lyrics_phase1.csv', 'a') as file:
writer = csv.writer(file)
writer.writerows(page_data)
| Scraping_giitaayan/HindiWebscraping40k_Phase1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense
from keras.callbacks import EarlyStopping
path = 'data/wine'
prefix = 'wine_'
X_train = np.load(os.path.join(path, prefix+'train_vectors.npy'))
y_train = np.load(os.path.join(path, prefix+'train_labels.npy'))
X_test = np.load(os.path.join(path, prefix+'test_vectors.npy'))
y_test = np.load(os.path.join(path, prefix+'test_labels.npy'))
print(X_train.shape, y_train.shape)
tf.random.set_seed(42)
np.random.seed(42)
model = Sequential()
model.add(Dense(30, activation='relu', input_shape=(X_test.shape[1],)))
model.add(Dense(10, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=50, validation_split=0.1, batch_size=8, callbacks=[EarlyStopping(monitor='val_accuracy', patience=5)])
model.evaluate(X_test, y_test)
| .ipynb_checkpoints/Model 4.2 - Wine Quality Prediction-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: "Py 3.9 Histoire Num\xE9rique"
# language: python
# name: py3-9_hist-num
# ---
# + [markdown] tags=[]
# ## Import libraries
#
# The libraries are listed in the file _environment.yml_.
# + tags=[]
import json
import pprint
# import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
import copy
# -
# # Prepare data
# ## Open and inspect file
# In this section, the data are inspected for the first time. Here, we import the downloaded Geovistory export.
#
# Please verify that the file name corresponds to the downloaded one.
file = 'data/table-873986-export.json'
### Open the file and create a dictionary in the 'data' variable
with open(file) as json_file:
data = json.load(json_file)
print(type(data), '\n-----')
# names of the root objects of the JSON file
z = [print(a) for a in data]
### Number of rows
rows = data['rows']
len(rows)
### Get the columns names
columns = data['columns']
pprint.pprint(columns)
# +
### Explore
# [pprint.pprint(r) for r in rows[99:101]]
# -
for r in rows[:2]:
print(r['col_1']['values'][0]['value']['timePrimitive']['from']['calGregorian'][:4], r['col_2']['entities'][0]['entity_label'])
res_1 = [[r['col_1']['values'][0]['value']['timePrimitive']['from']['calGregorian'][:4], r['col_2']['entities'][0]['entity_label']]\
for r in rows]
len(res_1), res_1[:4]
# +
### Minimal birth year and maximal age in 1860
min([l[0] for l in res_1]), 1860-int(min([l[0] for l in res_1]))
# +
### Separate people from Basel and others
age_bas_1860 = []
age_ext_1860 = []
for l in res_1:
if 'Basel-Stadt' in l[1]:
age_bas_1860.append(1860 - int(l[0]))
else:
age_ext_1860.append(1860 - int(l[0]))
len(age_bas_1860), len(age_ext_1860)
# -
age_bas_1860[:4], age_ext_1860[:4]
# + [markdown] tags=[]
# ### Create age classes
# -
i = 1
age_classes = []
while i < 101:
age_classes.append([i, i+4])
i += 5
age_classes[-1:], len(age_classes)
# + tags=[]
cut_list = [l[0] for l in age_classes]
# -
age_bas_1860_cut = pd.Series(pd.cut(age_bas_1860,cut_list))
age_bas_1860_cut[:4]
l_bas = age_bas_1860_cut.groupby(age_bas_1860_cut).count()
print(len(l_bas), '\n'*2,l_bas)
age_ext_1860_cut = pd.Series(pd.cut(age_ext_1860,cut_list))
age_ext_1860_cut[:4]
l_ext = age_ext_1860_cut.groupby(age_ext_1860_cut).count()
print(len(l_ext), '\n'*2,l_ext)
age_classes_str = [m.replace('[','').replace(']','') for m in map(str,age_classes[:-1])]
age_classes_str[:4]
# ## Create the data frame
df = pd.DataFrame({'Age': age_classes_str,
'External': list(l_ext),
'Basel': [e * -1 for e in list(l_bas)]})
df
age_classes_str.reverse()
age_classes_str_rev = copy.deepcopy(age_classes_str)
age_classes_str.reverse()
age_classes_str_rev[:4], age_classes_str[:4]
# +
# https://towardsdatascience.com/different-bar-charts-in-python-6d984b9c6b17
# Prepare Data
plt.rcParams["figure.figsize"] = (16, 12)
#Class
AgeClass = age_classes_str_rev
#Chart
bar_plot = sns.barplot(x='External', y='Age', data=df, order=AgeClass, lw=0, palette="Spectral")
bar_plot = sns.barplot(x='Basel', y='Age', data=df, order=AgeClass, lw=0, palette="PRGn")
plt.title("Pyramid Origins — Distribution of Ages", fontsize=12)
plt.xlabel("Basel — External")
plt.savefig('graphics/pyramid_origins_age_1860.jpg')
plt.plot()
# -
year_origin = pd.DataFrame(res_1, columns = ['year','origin'])
year_origin['year'] = year_origin['year'].apply(lambda x : int(x))
year_origin
### https://seaborn.pydata.org/generated/seaborn.violinplot.html
sns.set(font_scale = 1.2)
ax = sns.catplot(kind="violin", x="year", y="origin", orient="h", height=8,aspect=2,
data=year_origin \
, palette=['lightblue','violet'], split=True)
plt.xlabel("Birth Year")
plt.ylabel("State of Origin")
plt.title("Birth Years per State of Origin", fontsize=14)
plt.savefig('graphics/birth_years_state_of_origin.jpg')
| naissance_origines.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Logistic Regression with Grid Search (scikit-learn)
# <a href="https://colab.research.google.com/github/VertaAI/modeldb/blob/master/client/workflows/demos/sklearn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
# !pip install verta
# This example features:
# - **scikit-learn**'s `LogisticRegression` model
# - **verta**'s Python client logging grid search results
# - **verta**'s Python client retrieving the best run from the grid search to calculate full training accuracy
# - predictions against a deployed model
# +
HOST = "app.verta.ai"
PROJECT_NAME = "Census Income Classification"
EXPERIMENT_NAME = "Logistic Regression"
# +
# import os
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
# -
# ## Imports
# +
from __future__ import print_function
import warnings
from sklearn.exceptions import ConvergenceWarning
warnings.filterwarnings("ignore", category=ConvergenceWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import itertools
import multiprocessing
import os
import time
import six
import numpy as np
import pandas as pd
import sklearn
from sklearn import model_selection
from sklearn import linear_model
from sklearn import metrics
# -
try:
import wget
except ImportError:
# !pip install wget # you may need pip3
import wget
# ---
# # Log Workflow
# This section demonstrates logging model metadata and training artifacts to ModelDB.
# ## Prepare Data
# +
train_data_url = "http://s3.amazonaws.com/verta-starter/census-train.csv"
train_data_filename = wget.detect_filename(train_data_url)
if not os.path.isfile(train_data_filename):
wget.download(train_data_url)
test_data_url = "http://s3.amazonaws.com/verta-starter/census-test.csv"
test_data_filename = wget.detect_filename(test_data_url)
if not os.path.isfile(test_data_filename):
wget.download(test_data_url)
# +
df_train = pd.read_csv(train_data_filename)
X_train = df_train.iloc[:,:-1]
y_train = df_train.iloc[:, -1]
df_train.head()
# -
# ## Prepare Hyperparameters
hyperparam_candidates = {
'C': [1e-6, 1e-4],
'solver': ['lbfgs'],
'max_iter': [15, 28],
}
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
for values
in itertools.product(*hyperparam_candidates.values())]
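# The dict/zip/product idiom above expands the candidate grid into one dict per run. As a standalone check (re-declaring the same grid), 2 x 1 x 2 candidate values yield 4 hyperparameter sets:

```python
import itertools

hyperparam_candidates = {
    'C': [1e-6, 1e-4],
    'solver': ['lbfgs'],
    'max_iter': [15, 28],
}
# Cartesian product over the value lists, re-keyed with the parameter names
hyperparam_sets = [dict(zip(hyperparam_candidates.keys(), values))
                   for values in itertools.product(*hyperparam_candidates.values())]
print(len(hyperparam_sets))  # 4
print(hyperparam_sets[0])    # {'C': 1e-06, 'solver': 'lbfgs', 'max_iter': 15}
```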
# ## Instantiate Client
# +
from verta import Client
from verta.utils import ModelAPI
client = Client(HOST)
proj = client.set_project(PROJECT_NAME)
expt = client.set_experiment(EXPERIMENT_NAME)
# -
# ## Train Models
# +
def run_experiment(hyperparams):
# create object to track experiment run
run = client.set_experiment_run()
# create validation split
(X_val_train, X_val_test,
y_val_train, y_val_test) = model_selection.train_test_split(X_train, y_train,
test_size=0.2,
shuffle=True)
# log hyperparameters
run.log_hyperparameters(hyperparams)
print(hyperparams, end=' ')
# create and train model
model = linear_model.LogisticRegression(**hyperparams)
model.fit(X_train, y_train)
# calculate and log validation accuracy
val_acc = model.score(X_val_test, y_val_test)
run.log_metric("val_acc", val_acc)
print("Validation accuracy: {:.4f}".format(val_acc))
# create deployment artifacts
model_api = ModelAPI(X_train, model.predict(X_train))
requirements = ["scikit-learn"]
# save and log model
run.log_model(model, model_api=model_api)
run.log_requirements(requirements)
# log Git information as code version
run.log_code()
pool = multiprocessing.Pool()
pool.map(run_experiment, hyperparam_sets)
pool.close()
# -
# ---
# # Revisit Workflow
# This section demonstrates querying and retrieving runs via the Client.
# ## Retrieve Best Run
# +
best_run = expt.expt_runs.sort("metrics.val_acc", descending=True)[0]
print("Validation Accuracy: {:.4f}".format(best_run.get_metric("val_acc")))
best_hyperparams = best_run.get_hyperparameters()
print("Hyperparameters: {}".format(best_hyperparams))
# -
# ## Train on Full Dataset
model = linear_model.LogisticRegression(multi_class='auto', **best_hyperparams)
model.fit(X_train, y_train)
# ## Calculate Accuracy on Full Training Set
train_acc = model.score(X_train, y_train)
print("Training accuracy: {:.4f}".format(train_acc))
# ---
# # Deployment and Live Predictions
# This section demonstrates model deployment and predictions, if supported by your version of ModelDB.
# +
model_id = 'YOUR_MODEL_ID'
run = client.set_experiment_run(id=model_id)
# -
# ## Log Training Data for Reference
run.log_training_data(X_train, y_train)
# ## Prepare "Live" Data
df_test = pd.read_csv(test_data_filename)
X_test = df_test.iloc[:,:-1]
# ## Deploy Model
# +
run.deploy(wait=True)
run
# -
# ## Query Deployed Model
deployed_model = run.get_deployed_model()
for x in itertools.cycle(X_test.values.tolist()):
print(deployed_model.predict([x]))
time.sleep(.5)
# ---
| client/workflows/demos/census-end-to-end.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + run_control={"frozen": false, "read_only": false}
# %%flake8
import pandas as pd
import numpy as np
import statsmodels as sm
import statsmodels.api as smapi
import math
from pyqstrat.pq_utils import monotonically_increasing, infer_frequency
from pyqstrat.plot import TimeSeries, DateLine, Subplot, HorizontalLine, BucketedValues, Plot
import matplotlib as mpl
import matplotlib.figure as mpl_fig
from typing import Tuple, Sequence, Mapping, MutableMapping, Optional, Any, Callable, Dict
def compute_periods_per_year(timestamps: np.ndarray) -> float:
"""
Computes trading periods per year for an array of numpy datetime64's.
e.g. if most of the timestamps are separated by 1 day, will return 252.
Args:
timestamps: a numpy array of datetime64's
>>> compute_periods_per_year(np.array(['2018-01-01', '2018-01-02', '2018-01-03', '2018-01-09'], dtype='M8[D]'))
252.0
>>> round(compute_periods_per_year(np.array(['2018-01-01 10:00', '2018-01-01 10:05', '2018-01-01 10:10'], dtype='M8[m]')), 2)
72576.05
"""
if not len(timestamps): return np.nan
freq = infer_frequency(timestamps)
return 252. / freq if freq != 0 else np.nan
def compute_amean(returns: np.ndarray, periods_per_year: int) -> float:
'''
Computes arithmetic mean of a return array, ignoring NaNs
Args:
returns: Represents returns at any frequency
periods_per_year: Frequency of the returns, e.g. 252 for daily returns
>>> compute_amean(np.array([0.003, 0.004, np.nan]), 252)
0.882
'''
if not len(returns): return np.nan
return np.nanmean(returns) * periods_per_year
def compute_num_periods(timestamps: np.ndarray, periods_per_year: float) -> float:
'''
Given an array of timestamps, we compute how many periods there are between the first and last element, where the length
of a period is defined by periods_per_year. For example, if there are 6 periods per year,
then each period would be approx. 2 months long.
Args:
timestamps (np.ndarray of np.datetime64): a numpy array of returns, can contain nans
        periods_per_year: the number of periods in a year, which defines the length of a period
>>> compute_num_periods(np.array(['2015-01-01', '2015-03-01', '2015-05-01'], dtype='M8[D]'), 6)
2.0
'''
if not len(timestamps): return np.nan
assert(monotonically_increasing(timestamps))
fraction_of_year = (timestamps[-1] - timestamps[0]) / (np.timedelta64(1, 's') * 365 * 24 * 60 * 60)
return round(fraction_of_year * periods_per_year)
def compute_gmean(timestamps: np.ndarray, returns: np.ndarray, periods_per_year: float) -> float:
"""
Compute geometric mean of an array of returns
Args:
returns: a numpy array of returns, can contain nans
periods_per_year: Used for annualizing returns
>>> round(compute_gmean(np.array(['2015-01-01', '2015-03-01', '2015-05-01'], dtype='M8[D]'), np.array([0.001, 0.002, 0.003]), 252.), 6)
0.018362
"""
if not len(returns): return np.nan
assert(len(returns) == len(timestamps))
assert(isinstance(timestamps, np.ndarray) and isinstance(returns, np.ndarray))
mask = np.isfinite(returns)
timestamps = timestamps[mask]
returns = returns[mask]
num_periods = compute_num_periods(timestamps, periods_per_year)
g_mean = ((1.0 + returns).prod())**(1.0 / num_periods)
g_mean = np.power(g_mean, periods_per_year) - 1.0
return g_mean
def compute_std(returns: np.ndarray) -> float:
""" Computes standard deviation of an array of returns, ignoring nans """
if not len(returns): return np.nan
return np.nanstd(returns)
def compute_sortino(returns: np.ndarray, amean: float, periods_per_year: float) -> float:
'''
Note that this assumes target return is 0.
Args:
returns: a numpy array of returns
amean: arithmetic mean of returns
periods_per_year: number of trading periods per year
>>> print(round(compute_sortino(np.array([0.001, -0.001, 0.002]), 0.001, 252), 6))
0.133631
'''
if not len(returns) or not np.isfinite(amean) or periods_per_year <= 0: return np.nan
returns = np.where((~np.isfinite(returns)), 0.0, returns)
normalized_rets = np.where(returns > 0.0, 0.0, returns)
sortino_denom = np.std(normalized_rets)
sortino = np.nan if sortino_denom == 0 else amean / (sortino_denom * np.sqrt(periods_per_year))
return sortino
def compute_sharpe(returns: np.ndarray, amean: float, periods_per_year: float) -> float:
'''
    Note that this does not take risk-free returns into account, so it's really a Sharpe0, i.e. it assumes risk-free returns are 0
Args:
returns: a numpy array of returns
amean: arithmetic mean of returns
periods_per_year: number of trading periods per year
>>> round(compute_sharpe(np.array([0.001, -0.001, 0.002]), 0.001, 252), 6)
0.050508
'''
if not len(returns) or not np.isfinite(amean) or periods_per_year <= 0: return np.nan
returns = np.where((~np.isfinite(returns)), 0.0, returns)
s = np.std(returns)
sharpe = np.nan if s == 0 else amean / (s * np.sqrt(periods_per_year))
return sharpe
def compute_k_ratio(equity: np.ndarray, periods_per_year: int, halflife_years: float = None) -> float:
'''
Compute k-ratio (2013 or original versions by <NAME>ner). See https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2230949
We also implement a modification that allows higher weighting for more recent returns.
Args:
equity: a numpy array of the equity in your account
periods_per_year: 252 for daily values
halflife_years: If set, we use weighted linear regression to give less weight to older returns.
In this case, we compute the original k-ratio which does not use periods per year or number of observations
If not set, we compute the 2013 version of the k-ratio which weights k-ratio by sqrt(periods_per_year) / nobs
Returns:
weighted or unweighted k-ratio
>>> np.random.seed(0)
>>> t = np.arange(1000)
>>> ret = np.random.normal(loc = 0.0025, scale = 0.01, size = len(t))
>>> equity = (1 + ret).cumprod()
>>> assert(math.isclose(compute_k_ratio(equity, 252, None), 3.888, abs_tol=0.001))
>>> assert(math.isclose(compute_k_ratio(equity, 252, 0.5), 602.140, abs_tol=0.001))
'''
equity = equity[np.isfinite(equity)]
equity = np.log(equity)
t = np.arange(len(equity))
if halflife_years:
halflife = halflife_years * periods_per_year
k = math.log(0.5) / halflife
        w = np.exp(k * t)
w = w ** 2 # Statsmodels requires square of weights
w = w[::-1]
fit = sm.regression.linear_model.WLS(endog=equity, exog=t, weights=w, hasconst=False).fit()
k_ratio = fit.params[0] / fit.bse[0]
else:
fit = smapi.OLS(endog=equity, exog=np.arange(len(equity)), hasconst=False).fit()
k_ratio = fit.params[0] * math.sqrt(periods_per_year) / (fit.bse[0] * len(equity))
return k_ratio
def compute_equity(timestamps: np.ndarray, starting_equity: float, returns: np.ndarray) -> np.ndarray:
''' Given starting equity, timestamps and returns, create a numpy array of equity at each date'''
return starting_equity * np.cumprod(1. + returns)
def compute_rolling_dd(timestamps: np.ndarray, equity: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
'''
Compute numpy array of rolling drawdown percentage
Args:
timestamps: numpy array of datetime64
equity: numpy array of equity
'''
assert(len(timestamps) == len(equity))
    if not len(timestamps): return np.array([], dtype='M8[ns]'), np.array([], dtype=float)
s = pd.Series(equity, index=timestamps)
rolling_max = s.expanding(min_periods=1).max()
dd = np.where(s >= rolling_max, 0.0, -(s - rolling_max) / rolling_max)
return timestamps, dd
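# The rolling-drawdown logic above can be checked by hand on a tiny equity curve (toy numbers, not real account data): the running peak is 100, 120, 120, 130, so only the dip to 90 produces a drawdown of (120 - 90) / 120 = 0.25.

```python
import numpy as np
import pandas as pd

equity = np.array([100.0, 120.0, 90.0, 130.0])
s = pd.Series(equity)
# running peak so far, then percentage drop from that peak
rolling_max = s.expanding(min_periods=1).max()
dd = np.where(s >= rolling_max, 0.0, -(s - rolling_max) / rolling_max)
print(dd.tolist())  # [0.0, 0.0, 0.25, 0.0]
```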
def compute_maxdd_pct(rolling_dd: np.ndarray) -> float:
'''Compute max drawdown percentage given a numpy array of rolling drawdowns, ignoring NaNs'''
if not len(rolling_dd): return np.nan
return np.nanmax(rolling_dd)
def compute_maxdd_date(rolling_dd_dates: np.ndarray, rolling_dd: np.ndarray) -> float:
''' Compute date of max drawdown given numpy array of timestamps, and corresponding rolling dd percentages'''
if not len(rolling_dd_dates): return pd.NaT
assert(len(rolling_dd_dates) == len(rolling_dd))
return rolling_dd_dates[np.argmax(rolling_dd)]
def compute_maxdd_start(rolling_dd_dates: np.ndarray, rolling_dd: np.ndarray, mdd_date: np.datetime64) -> np.datetime64:
    '''Compute the date when the max drawdown starts, given a numpy array of timestamps, the corresponding rolling dd
    percentages, and the date of the max drawdown'''
if not len(rolling_dd_dates) or pd.isnull(mdd_date): return pd.NaT
assert(len(rolling_dd_dates) == len(rolling_dd))
return rolling_dd_dates[(rolling_dd <= 0) & (rolling_dd_dates < mdd_date)][-1]
def compute_mar(returns: np.ndarray, periods_per_year: float, mdd_pct: float) -> float:
'''Compute MAR ratio, which is annualized return divided by biggest drawdown since inception.'''
if not len(returns) or np.isnan(mdd_pct) or mdd_pct == 0: return np.nan
return np.mean(returns) * periods_per_year / mdd_pct
def compute_dates_3yr(timestamps: np.ndarray) -> np.ndarray:
''' Given an array of numpy datetimes, return those that are within 3 years of the last date in the array'''
if not len(timestamps): return np.array([], dtype='M8[D]')
last_date = timestamps[-1]
d = pd.to_datetime(last_date)
start_3yr = np.datetime64(d.replace(year=d.year - 3))
return timestamps[timestamps > start_3yr]
def compute_returns_3yr(timestamps: np.ndarray, returns: np.ndarray) -> np.ndarray:
'''Given an array of numpy datetimes and an array of returns, return those that are within 3 years
of the last date in the datetime array '''
    if not len(timestamps): return np.array([], dtype=float)
assert(len(timestamps) == len(returns))
timestamps_3yr = compute_dates_3yr(timestamps)
return returns[timestamps >= timestamps_3yr[0]]
def compute_rolling_dd_3yr(timestamps: np.ndarray, equity: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
'''Compute rolling drawdowns over the last 3 years'''
    if not len(timestamps): return np.array([], dtype='M8[D]'), np.array([], dtype=float)
last_date = timestamps[-1]
d = pd.to_datetime(last_date)
start_3yr = np.datetime64(d.replace(year=d.year - 3))
equity = equity[timestamps >= start_3yr]
timestamps = timestamps[timestamps >= start_3yr]
return compute_rolling_dd(timestamps, equity)
def compute_maxdd_pct_3yr(rolling_dd_3yr: np.ndarray) -> float:
'''Compute max drawdown percentage over the last 3 years'''
return compute_maxdd_pct(rolling_dd_3yr)
def compute_maxdd_date_3yr(rolling_dd_3yr_timestamps: np.ndarray, rolling_dd_3yr: np.ndarray) -> np.datetime64:
'''Compute max drawdown date over the last 3 years'''
return compute_maxdd_date(rolling_dd_3yr_timestamps, rolling_dd_3yr)
def compute_maxdd_start_3yr(rolling_dd_3yr_timestamps: np.ndarray, rolling_dd_3yr: np.ndarray, mdd_date_3yr: np.datetime64) -> np.datetime64:
    '''Compute max drawdown start date over the last 3 years'''
return compute_maxdd_start(rolling_dd_3yr_timestamps, rolling_dd_3yr, mdd_date_3yr)
def compute_calmar(returns_3yr: np.ndarray, periods_per_year: float, mdd_pct_3yr: float) -> float:
'''Compute Calmar ratio, which is the annualized return divided by max drawdown over the last 3 years'''
return compute_mar(returns_3yr, periods_per_year, mdd_pct_3yr)
def compute_bucketed_returns(timestamps: np.ndarray, returns: np.ndarray) -> Tuple[Sequence[int], Sequence[np.ndarray]]:
'''
Bucket returns by year
Returns:
A tuple with the first element being a list of years and the second a list of
numpy arrays containing returns for each corresponding year
'''
assert(len(timestamps) == len(returns))
    if not len(timestamps): return np.array([], dtype=int), np.array([], dtype=float)
s = pd.Series(returns, index=timestamps)
years_list = []
rets_list = []
for year, rets in s.groupby(s.index.map(lambda x: x.year)):
years_list.append(year)
rets_list.append(rets.values)
return years_list, rets_list
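# The grouping above can be exercised on a tiny made-up series:

```python
import numpy as np
import pandas as pd

timestamps = np.array(['2018-01-02', '2018-07-02', '2019-01-02'], dtype='M8[D]')
returns = np.array([0.01, -0.02, 0.03])

# Group the returns by calendar year, exactly as compute_bucketed_returns does
s = pd.Series(returns, index=timestamps)
buckets = {year: rets.values for year, rets in s.groupby(s.index.map(lambda x: x.year))}
print(sorted(buckets))  # [2018, 2019]
```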
def compute_annual_returns(timestamps: np.ndarray, returns: np.ndarray, periods_per_year: float) -> Tuple[np.ndarray, np.ndarray]:
    '''Compute the geometric mean of returns for each year
Returns:
A tuple with the first element being an array of years (integer) and the second element
an array of annualized returns for those years
'''
assert(len(timestamps) == len(returns) and periods_per_year > 0)
    if not len(timestamps): return np.array([], dtype=int), np.array([], dtype=float)
df = pd.DataFrame({'ret': returns, 'timestamp': timestamps})
years = []
gmeans = []
for k, g in df.groupby(df.timestamp.map(lambda x: x.year)):
years.append(k)
gmeans.append(compute_gmean(g.timestamp.values, g.ret.values, periods_per_year))
return np.array(years), np.array(gmeans)
class Evaluator:
"""You add functions to the evaluator that are dependent on the outputs of other functions.
The evaluator will call these functions in the right order
so dependencies are computed first before the functions that need their output.
You can retrieve the output of a metric using the metric member function
>>> evaluator = Evaluator(initial_metrics={'x': np.array([1, 2, 3]), 'y': np.array([3, 4, 5])})
>>> evaluator.add_metric('z', lambda x, y: sum(x, y), dependencies=['x', 'y'])
>>> evaluator.compute()
>>> evaluator.metric('z')
array([ 9, 10, 11])
"""
def __init__(self, initial_metrics: Dict[str, Any]) -> None:
"""Inits Evaluator with a dictionary of initial metrics that are used to compute subsequent metrics
Args:
initial_metrics: a dictionary of string name -> metric. metric can be any object including a scalar,
an array or a tuple
"""
assert(type(initial_metrics) == dict)
self.metric_values: Dict[str, Any] = initial_metrics.copy()
self._metrics: MutableMapping[str, Tuple[Callable, Sequence[str]]] = {}
def add_metric(self, name: str, func: Callable, dependencies: Sequence[str]) -> None:
self._metrics[name] = (func, dependencies)
def compute(self, metric_names: Sequence[str] = None) -> None:
'''Compute metrics using the internal dependency graph
Args:
metric_names: an array of metric names. If not passed in, evaluator will compute and store all metrics
'''
if metric_names is None: metric_names = list(self._metrics.keys())
for metric_name in metric_names:
self.compute_metric(metric_name)
def compute_metric(self, metric_name: str) -> None:
'''
Compute and store a single metric:
Args:
metric_name: string representing the metric to compute
'''
func, dependencies = self._metrics[metric_name]
for dependency in dependencies:
if dependency not in self.metric_values:
self.compute_metric(dependency)
dependency_values = {k: self.metric_values[k] for k in dependencies}
values = func(**dependency_values)
self.metric_values[metric_name] = values
def metric(self, metric_name: str) -> Any:
'''Return the value of a single metric given its name'''
return self.metric_values[metric_name]
def metrics(self) -> Mapping[str, Any]:
'''Return a dictionary of metric name -> metric value'''
return self.metric_values
def handle_non_finite_returns(timestamps: np.ndarray,
rets: np.ndarray,
leading_non_finite_to_zeros: bool,
subsequent_non_finite_to_zeros: bool) -> Tuple[np.ndarray, np.ndarray]:
'''
>>> np.set_printoptions(formatter={'float': '{: .6g}'.format})
>>> timestamps = np.arange(np.datetime64('2019-01-01'), np.datetime64('2019-01-07'))
>>> rets = np.array([np.nan, np.nan, 3, 4, np.nan, 5])
>>> handle_non_finite_returns(timestamps, rets, leading_non_finite_to_zeros = False, subsequent_non_finite_to_zeros = True)
(array(['2019-01-03', '2019-01-04', '2019-01-05', '2019-01-06'], dtype='datetime64[D]'), array([ 3, 4, 0, 5]))
>>> handle_non_finite_returns(timestamps, rets, leading_non_finite_to_zeros = True, subsequent_non_finite_to_zeros = False)
(array(['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-06'], dtype='datetime64[D]'), array([ 0, 0, 3, 4, 5]))
>>> handle_non_finite_returns(timestamps, rets, leading_non_finite_to_zeros = False, subsequent_non_finite_to_zeros = False)
(array(['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-06'], dtype='datetime64[D]'), array([ 0, 0, 3, 4, 5]))
>>> rets = np.array([1, 2, 3, 4, 4.5, 5])
>>> handle_non_finite_returns(timestamps, rets, leading_non_finite_to_zeros = False, subsequent_non_finite_to_zeros = True)
(array(['2019-01-01', '2019-01-02', '2019-01-03', '2019-01-04', '2019-01-05', '2019-01-06'],
dtype='datetime64[D]'), array([ 1, 2, 3, 4, 4.5, 5]))
'''
first_non_nan_index = np.ravel(np.nonzero(~np.isnan(rets)))
if len(first_non_nan_index):
first_non_nan_index = first_non_nan_index[0]
else:
first_non_nan_index = -1
if first_non_nan_index > 0 and first_non_nan_index < len(rets):
if leading_non_finite_to_zeros:
rets[:first_non_nan_index] = np.nan_to_num(rets[:first_non_nan_index])
else:
timestamps = timestamps[first_non_nan_index:]
rets = rets[first_non_nan_index:]
if subsequent_non_finite_to_zeros:
rets = np.nan_to_num(rets)
else:
timestamps = timestamps[np.isfinite(rets)]
rets = rets[np.isfinite(rets)]
return timestamps, rets
def compute_return_metrics(timestamps: np.ndarray,
rets: np.ndarray,
starting_equity: float,
leading_non_finite_to_zeros: bool = False,
subsequent_non_finite_to_zeros: bool = True) -> Evaluator:
'''
Compute a set of common metrics using returns (for example, of an instrument or a portfolio)
Args:
timestamps (np.array of datetime64): Timestamps for the returns
rets (nd.array of float): The returns, use 0.01 for 1%
starting_equity (float): Starting equity value in your portfolio
leading_non_finite_to_zeros (bool, optional): If set, we replace leading nan, inf, -inf returns with zeros.
For example, you may need a warmup period for moving averages. Default False
subsequent_non_finite_to_zeros (bool, optional): If set, we replace any nans that follow the first non nan value with zeros.
There may be periods where you have no prices but removing these returns would result in incorrect annualization.
Default True
Returns:
An Evaluator object containing computed metrics off the returns passed in.
If needed, you can add your own metrics to this object based on the values of existing metrics and recompute the Evaluator.
Otherwise, you can just use the output of the evaluator using the metrics function.
>>> timestamps = np.array(['2015-01-01', '2015-03-01', '2015-05-01', '2015-09-01'], dtype='M8[D]')
>>> rets = np.array([0.01, 0.02, np.nan, -0.015])
>>> starting_equity = 1.e6
>>> ev = compute_return_metrics(timestamps, rets, starting_equity)
>>> metrics = ev.metrics()
>>> assert(round(metrics['gmean'], 6) == 0.021061)
>>> assert(round(metrics['sharpe'], 6) == 0.599382)
>>> assert(all(metrics['returns_3yr'] == np.array([0.01, 0.02, 0, -0.015])))
'''
assert(starting_equity > 0.)
assert(type(rets) == np.ndarray and rets.dtype == np.float64)
assert(type(timestamps) == np.ndarray and np.issubdtype(timestamps.dtype, np.datetime64) and monotonically_increasing(timestamps))
timestamps, rets = handle_non_finite_returns(timestamps, rets, leading_non_finite_to_zeros, subsequent_non_finite_to_zeros)
ev = Evaluator({'timestamps': timestamps, 'returns': rets, 'starting_equity': starting_equity})
ev.add_metric('periods_per_year', compute_periods_per_year, dependencies=['timestamps'])
ev.add_metric('amean', compute_amean, dependencies=['returns', 'periods_per_year'])
ev.add_metric('std', compute_std, dependencies=['returns'])
ev.add_metric('up_periods', lambda returns: len(returns[returns > 0]), dependencies=['returns'])
ev.add_metric('down_periods', lambda returns: len(returns[returns < 0]), dependencies=['returns'])
ev.add_metric('up_pct',
lambda up_periods, down_periods: up_periods * 1.0 / (up_periods + down_periods) if (up_periods + down_periods) != 0 else np.nan,
dependencies=['up_periods', 'down_periods'])
ev.add_metric('gmean', compute_gmean, dependencies=['timestamps', 'returns', 'periods_per_year'])
ev.add_metric('sharpe', compute_sharpe, dependencies=['returns', 'periods_per_year', 'amean'])
ev.add_metric('sortino', compute_sortino, dependencies=['returns', 'periods_per_year', 'amean'])
ev.add_metric('equity', compute_equity, dependencies=['timestamps', 'starting_equity', 'returns'])
ev.add_metric('k_ratio', compute_k_ratio, dependencies=['equity', 'periods_per_year'])
ev.add_metric('k_ratio_weighted', lambda equity, periods_per_year: compute_k_ratio(equity, periods_per_year, 3),
dependencies=['equity', 'periods_per_year'])
# Drawdowns
ev.add_metric('rolling_dd', compute_rolling_dd, dependencies=['timestamps', 'equity'])
ev.add_metric('mdd_pct', lambda rolling_dd: compute_maxdd_pct(rolling_dd[1]), dependencies=['rolling_dd'])
ev.add_metric('mdd_date', lambda rolling_dd: compute_maxdd_date(rolling_dd[0], rolling_dd[1]), dependencies=['rolling_dd'])
ev.add_metric('mdd_start', lambda rolling_dd, mdd_date: compute_maxdd_start(rolling_dd[0], rolling_dd[1], mdd_date),
dependencies=['rolling_dd', 'mdd_date'])
ev.add_metric('mar', compute_mar, dependencies=['returns', 'periods_per_year', 'mdd_pct'])
ev.add_metric('timestamps_3yr', compute_dates_3yr, dependencies=['timestamps'])
ev.add_metric('returns_3yr', compute_returns_3yr, dependencies=['timestamps', 'returns'])
ev.add_metric('rolling_dd_3yr', compute_rolling_dd_3yr, dependencies=['timestamps', 'equity'])
ev.add_metric('mdd_pct_3yr', lambda rolling_dd_3yr: compute_maxdd_pct_3yr(rolling_dd_3yr[1]), dependencies=['rolling_dd_3yr'])
ev.add_metric('mdd_date_3yr', lambda rolling_dd_3yr: compute_maxdd_date_3yr(rolling_dd_3yr[0], rolling_dd_3yr[1]),
dependencies=['rolling_dd_3yr'])
ev.add_metric('mdd_start_3yr', lambda rolling_dd_3yr, mdd_date_3yr:
compute_maxdd_start_3yr(rolling_dd_3yr[0], rolling_dd_3yr[1], mdd_date_3yr),
dependencies=['rolling_dd_3yr', 'mdd_date_3yr'])
ev.add_metric('calmar', compute_calmar, dependencies=['returns_3yr', 'periods_per_year', 'mdd_pct_3yr'])
ev.add_metric('annual_returns', compute_annual_returns, dependencies=['timestamps', 'returns', 'periods_per_year'])
ev.add_metric('bucketed_returns', compute_bucketed_returns, dependencies=['timestamps', 'returns'])
ev.compute()
return ev
def display_return_metrics(metrics: Mapping[str, Any], float_precision: int = 3) -> pd.DataFrame:
'''
Creates a dataframe making it convenient to view the output of the metrics obtained using the compute_return_metrics function.
Args:
float_precision: Change if you want to display floats with more or less significant figures than the default,
3 significant figures.
Returns:
A one row dataframe with formatted metrics.
'''
from IPython.core.display import display
_metrics = {}
cols = ['gmean', 'amean', 'std', 'shrp', 'srt', 'k', 'calmar', 'mar', 'mdd_pct', 'mdd_start', 'mdd_date', 'dd_3y_pct',
'up_periods', 'down_periods', 'up_pct', 'mdd_start_3yr', 'mdd_date_3yr']
translate = {'shrp': 'sharpe', 'srt': 'sortino', 'dd_3y_pct': 'mdd_pct_3yr', 'k': 'k_ratio'}
for col in cols:
key = col
if col in translate: key = translate[col]
_metrics[col] = metrics[key]
_metrics['mdd_dates'] = f'{str(metrics["mdd_start"])[:10]}/{str(metrics["mdd_date"])[:10]}'
_metrics['up_dwn'] = f'{metrics["up_periods"]}/{metrics["down_periods"]}/{metrics["up_pct"]:.3g}'
_metrics['dd_3y_timestamps'] = f'{str(metrics["mdd_start_3yr"])[:10]}/{str(metrics["mdd_date_3yr"])[:10]}'
years = metrics['annual_returns'][0]
ann_rets = metrics['annual_returns'][1]
for i, year in enumerate(years):
_metrics[str(year)] = ann_rets[i]
format_str = '{:.' + str(float_precision) + 'g}'
for k, v in _metrics.items():
        if isinstance(v, (float, np.floating)):
_metrics[k] = format_str.format(v)
cols = ['gmean', 'amean', 'std', 'shrp', 'srt', 'k', 'calmar', 'mar', 'mdd_pct', 'mdd_dates', 'dd_3y_pct', 'dd_3y_timestamps', 'up_dwn'] + [
str(year) for year in sorted(years)]
df = pd.DataFrame(index=[''])
for metric_name, metric_value in _metrics.items():
df.insert(0, metric_name, metric_value)
df = df[cols]
display(df)
return df
def plot_return_metrics(metrics: Mapping[str, Any], title: str = None) -> Optional[Tuple[mpl_fig.Figure, mpl.axes.Axes]]:
'''
    Plot equity, rolling drawdowns, and a boxplot of annual returns given the output of compute_return_metrics.
'''
timestamps = metrics['timestamps']
equity = metrics['equity']
equity = TimeSeries('equity', timestamps=timestamps, values=equity)
    mdd_start, mdd_date = metrics['mdd_start'], metrics['mdd_date']
    mdd_start_3yr, mdd_date_3yr = metrics['mdd_start_3yr'], metrics['mdd_date_3yr']
drawdown_lines = [DateLine(name='max dd', date=mdd_start, color='red'),
DateLine(date=mdd_date, color='red'),
DateLine(name='3y dd', date=mdd_start_3yr, color='orange'),
DateLine(date=mdd_date_3yr, color='orange')]
equity_subplot = Subplot(equity, ylabel='Equity', height_ratio=0.6, log_y=True, y_tick_format='${x:,.0f}',
date_lines=drawdown_lines, horizontal_lines=[HorizontalLine(metrics['starting_equity'], color='black')])
rolling_dd = TimeSeries('drawdowns', timestamps=metrics['rolling_dd'][0], values=metrics['rolling_dd'][1])
zero_line = HorizontalLine(y=0, color='black')
dd_subplot = Subplot(rolling_dd, ylabel='Drawdowns', height_ratio=0.2, date_lines=drawdown_lines, horizontal_lines=[zero_line])
years = metrics['bucketed_returns'][0]
ann_rets = metrics['bucketed_returns'][1]
ann_ret = BucketedValues('annual returns', bucket_names=years, bucket_values=ann_rets)
ann_ret_subplot = Subplot(ann_ret, ylabel='Annual Returns', height_ratio=0.2, horizontal_lines=[zero_line])
plt = Plot([equity_subplot, dd_subplot, ann_ret_subplot], title=title)
return plt.draw()
def test_evaluator() -> None:
from datetime import datetime, timedelta
np.random.seed(10)
timestamps = np.arange(datetime(2018, 1, 1), datetime(2018, 3, 1), timedelta(days=1))
rets = np.random.normal(size=len(timestamps)) / 1000
starting_equity = 1.e6
ev = compute_return_metrics(timestamps, rets, starting_equity)
display_return_metrics(ev.metrics())
plot_return_metrics(ev.metrics())
assert(round(ev.metric('sharpe'), 6) == 2.932954)
assert(round(ev.metric('sortino'), 6) == 5.690878)
assert(ev.metric('annual_returns')[0] == [2018])
    assert(round(ev.metric('annual_returns')[1][0], 6) == 0.063530)
assert(ev.metric('mdd_start') == np.datetime64('2018-01-19'))
assert(ev.metric('mdd_date') == np.datetime64('2018-01-22'))
if __name__ == "__main__":
test_evaluator()
import doctest
doctest.testmod(optionflags=doctest.NORMALIZE_WHITESPACE)
# -
| pyqstrat/src_nb/evaluator.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/fernandofsilva/Keras/blob/main/Layer_nodes.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="-a2KdExdRScB"
# # Layer nodes
# In this reading, we will be looking at the concept of layer nodes when creating a computational graph with shared layers.
# + id="vnRCfSLhRScC" outputId="b7ebf1f6-313b-488b-8110-723e3b90dffc" colab={"base_uri": "https://localhost:8080/"}
import tensorflow as tf
print(tf.__version__)
# + [markdown] id="xmLYT7HCRScI"
# ## Creating a simple computational graph
# + [markdown] id="T8R1qjkyRScI"
# You have previously seen how to construct multiple input or output models, and also how to access model layers. Let's start by creating two inputs:
# + id="Xu803TS8RScJ"
# Create the input layers
from tensorflow.keras.layers import Input
a = Input(shape=(128, 128, 3), name="input_a")
b = Input(shape=(64, 64, 3), name="input_b")
# + [markdown] id="uRPR8Qj4RScN"
# Now, we create a 2D convolutional layer, and call it on one of the inputs.
# + id="uxqJoXv7RScN" outputId="352b4693-ebb5-43b9-a4a8-fa9dcbb8c6b7" colab={"base_uri": "https://localhost:8080/"}
# Create and use the convolutional layer
from tensorflow.keras.layers import Conv2D
conv = Conv2D(32, (6, 6), padding='same')
conv_out_a = conv(a)
print(type(conv_out_a))
# + [markdown] id="L_HU4EX5RScP"
# The output of the layer is now a new Tensor, which captures the operation of calling the layer `conv` on the input `a`.
#
# By defining this new operation in our computational graph, we have added a _node_ to the `conv` layer. This node relates the input tensor to the output tensor.
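# Framework details aside, the bookkeeping is simple: each call of a shared layer appends one node recording that call's input and output tensors. A toy, Keras-free sketch of the idea (the class and attribute names here are ours, not the Keras API):

```python
class ToyLayer:
    """Records one 'node' per call, the way a shared Keras layer does."""
    def __init__(self):
        self.nodes = []  # one (input, output) record per call

    def __call__(self, x):
        out = ('conv', x)           # stand-in for the real convolution
        self.nodes.append((x, out))
        return out

layer = ToyLayer()
out_a = layer('a')
out_b = layer('b')
print(len(layer.nodes))   # 2 -- one node per call
print(layer.nodes[1][0])  # b -- the input recorded by the second node
```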
# + [markdown] id="lnzoplaFRScQ"
# ### Layer input and outputs
# + [markdown] id="qURkyZFZRScQ"
# We can retrieve the output of a layer using the `output` attribute, and we can also get the input by using the `input` attribute.
#
# Similarly, we can retrieve the input/output shape using `input_shape` and `output_shape`.
# + id="ed1DdwHkRScQ" outputId="019ceff3-a915-4921-dc68-a41d8d5a9b35" colab={"base_uri": "https://localhost:8080/"}
# Print the input and output tensors
print(conv.input)
print(conv.output)
# + id="x6qgN_8SRScT"
# Verify the input and output shapes
assert conv.input_shape == (None, 128, 128, 3)
assert conv.output_shape == (None, 128, 128, 32)
# + [markdown] id="UJUyuREtRScU"
# ## Creating a new layer node
# + [markdown] id="LOiA6TorRScU"
# Now, let's call this layer again on a different input:
# + id="njLuI_QLRScV"
# Call the layer a second time
conv_out_b = conv(b)
# + [markdown] id="4__il0emRScW"
# When we call the same layer multiple times, that layer owns multiple nodes indexed as 0, 1, 2...
#
# Now, what happens if we call `input` and `output` for this layer?
# + id="O20EIVX5RScW"
# Check the input and output attributes
assert conv.input.name == a.name
assert conv.output.name == conv_out_a.name
# + [markdown] id="vVTkLBY_RScX"
# As you can see, the layer's input is identified as being `a` and its output as being `conv_out_a`, so something is going wrong here. As long as a layer is only connected to one input, there is no ambiguity about what the input should be, and `.output` will return the one output of the layer, but when the layer is called on multiple inputs we end up in an ambiguous situation.
#
# Let's try to get the input/output shape:
# + id="qL2OtqFfRScY" outputId="165f5b9f-fbd4-4bce-df99-3c2c5700325a" colab={"base_uri": "https://localhost:8080/", "height": 334}
# Try accessing the input_shape
print(conv.input_shape)
# + id="v-fq6dmhRScZ"
# Try accessing the output_shape
print(conv.output_shape)
# + [markdown] id="53poautBRSca"
# `input_shape` and `output_shape` did not return the shapes of the two inputs and outputs; instead, they raised an error.
# + [markdown] id="hSy7XGCIRScb"
# ### Indexing layer nodes
# + [markdown] id="u2SoR6a8RScb"
# We have applied the same Conv2D layer to an input of shape (128, 128, 3), and then to an input of shape (64, 64, 3). The layer therefore has multiple input/output shapes, so we have to retrieve them by specifying the index of the node they belong to.
#
# To get the inputs/outputs shapes, we now have to use `get_input_shape_at` and `get_output_shape_at` with the correct index:
# + id="TfC--VaDRScb"
# Print the input and output shapes for each layer node
assert conv.get_input_shape_at(0) == (None, 128, 128, 3) # Tensor a
assert conv.get_input_shape_at(1) == (None, 64, 64, 3) # Tensor b
assert conv.get_output_shape_at(0) == (None, 128, 128, 32) # Tensor conv_out_a
assert conv.get_output_shape_at(1) == (None, 64, 64, 32) # Tensor conv_out_b
# + [markdown] id="wibOXLTsRScd"
# Likewise, we use `get_input_at` and `get_output_at` to fetch the inputs/outputs:
# + id="HaQyiTUCRScd"
assert conv.get_input_at(0).name == a.name
assert conv.get_input_at(1).name == b.name
assert conv.get_output_at(0).name == conv_out_a.name
assert conv.get_output_at(1).name == conv_out_b.name
# + [markdown] id="A2pi0WHBRSce"
# ## Further reading and resources
# * https://keras.io/getting-started/functional-api-guide/#the-concept-of-layer-node
| Functional API/Layer_nodes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_tensorflow_p36
# language: python
# name: conda_tensorflow_p36
# ---
# +
import os
import numpy as np
import pandas as pd
def file_url(category, event_id=None, train_or_test="train"):
"""Returns the path of a csv corresponding to a given event and data category.
Arguments:
category -- one of "cells", "hits", "particles", "truth", "blacklist", "detectors",
"sample_submission" or "hit_orders".
event_id -- the integer id of an event. Should be included unless category is "detectors" or
"sample submission". Ensure that event_id and train_or_test are consistent with each other.
train_or_test -- one of "train" (default) or "test".
TODO: Check for valid input.
"""
if category.startswith('blacklist'):
folder = 'dataset/blacklist'
elif category == 'hit_orders':
folder = 'particles-in-order'
elif category in ('sample_submission', 'detectors'):
return '/home/ec2-user/SageMaker/efs/dataset/{0}.csv'.format(category)
else:
folder = 'dataset/' + train_or_test
return '/home/ec2-user/SageMaker/efs/{0}/event{1:09d}-{2}.csv'.format(folder, event_id, category)
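# The zero-padding in the format string matters: event ids are padded to nine digits, so event 1000 becomes `event000001000`. A standalone check of just the formatting used above:

```python
# Mirror the path template from file_url for a hypothetical training event
folder = 'dataset/train'
path = '/home/ec2-user/SageMaker/efs/{0}/event{1:09d}-{2}.csv'.format(folder, 1000, 'truth')
print(path)  # /home/ec2-user/SageMaker/efs/dataset/train/event000001000-truth.csv
```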
def write_hit_orders_csv(event_id):
"""Writes hit_order csv for an event."""
generate_hit_orders(event_id).to_csv(file_url('hit_orders', event_id), index=False)
def generate_hit_orders(event_id):
"""Generates hit_order dataframe for an event.
When finished, prints the number of valid particles and hits, as well as the number and
proportion of particles which were successfully placed in order.
"""
# load truth, blacklist_particles and blacklist_hits files for event 1000.
truth = pd.read_csv(file_url('truth', event_id))
blacklist_particles = pd.read_csv(file_url('blacklist_particles', event_id))
blacklist_hits = pd.read_csv(file_url('blacklist_hits', event_id))
# filter out track 0 (garbage track), tracks with three or fewer hits,
# and rows with blacklisted hits and particles.
not_blacklist_particle = ~truth.particle_id.isin(blacklist_particles.particle_id)
not_blacklist_hit = ~truth.hit_id.isin(blacklist_hits.hit_id)
del blacklist_particles, blacklist_hits
particle_num_hits = truth.groupby('particle_id')['particle_id'].transform('count')
not_short_track = particle_num_hits > 3
del particle_num_hits
not_particle_zero = truth.particle_id != 0
truth = truth[not_particle_zero & not_blacklist_particle & not_blacklist_hit & not_short_track]
del not_particle_zero, not_blacklist_particle, not_blacklist_hit, not_short_track
particle_weight = truth.groupby('particle_id')['weight'].transform('sum')
truth.loc[:, 'weight_order'] = truth.weight/particle_weight
del particle_weight
truth = truth[['particle_id', 'hit_id', 'tz', 'tpz', 'weight_order']]
# create z_order_dim. This is tz if the z-dimension of the particle's average trajectory
# is positive and -tz otherwise.
z_direction = np.sign(truth.groupby('particle_id').tpz.transform('mean'))
truth.loc[:, 'z_order_dim'] = z_direction*truth.tz
truth.drop(['tz', 'tpz'], axis=1, inplace=True)
del z_direction
# create hit_order column.
truth.loc[:, 'hit_order'] = truth.groupby('particle_id')['z_order_dim'].rank(
method='first',
ascending=True
).astype(int)
truth.drop('z_order_dim', axis=1, inplace=True)
# sort by particle_id and hit_order.
truth.sort_values(['particle_id', 'hit_order'], inplace=True)
truth.loc[:, 'track_length'] = truth.groupby('particle_id').hit_id.transform('count')
true_weight_order = truth.groupby(['track_length', 'hit_order']).weight_order.median()
truth.drop('track_length', axis=1, inplace=True)
# identify and remove particles whose hit order is incorrect.
particles_in_order = truth.groupby('particle_id').apply(_correct_order(true_weight_order))
total_num_particles = len(particles_in_order)
mask = particles_in_order.loc[truth.particle_id].values
truth = truth[mask]
num_good_particles = len(truth.particle_id.unique())
truth.reset_index(drop=True, inplace=True)
truth.drop('weight_order', axis=1, inplace=True)
print('total number of scored particles in event:\t', total_num_particles)
print('number of successfully sorted particles:\t', num_good_particles)
    print('percentage of particles successfully sorted:\t',
          100*num_good_particles/total_num_particles)
return truth
def _correct_order(true_weight_order):
"""Helper function for generate_hit_order_csv"""
return lambda particle: np.all(
np.isclose(
particle.weight_order.values, true_weight_order.loc[len(particle)].values,
atol=1e-06
)
)
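# The ordering trick in generate_hit_orders can be checked on a toy track: hits are ranked along z_direction * tz, so a particle travelling in the negative z direction has its hits ranked from largest to smallest tz. A small hypothetical example:

```python
import numpy as np
import pandas as pd

# One particle moving in the -z direction (mean tpz < 0), hits listed out of order.
truth = pd.DataFrame({'particle_id': [1, 1, 1],
                      'tz': [-10.0, -30.0, -20.0],
                      'tpz': [-1.0, -1.0, -1.0]})
z_direction = np.sign(truth.groupby('particle_id').tpz.transform('mean'))
truth['z_order_dim'] = z_direction * truth.tz
truth['hit_order'] = truth.groupby('particle_id')['z_order_dim'].rank(
    method='first', ascending=True).astype(int)
print(truth.hit_order.tolist())  # the hit at tz=-10 comes first along the track
```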
# +
import os
for event_id in range(10**3, 10**4):
if os.path.isfile(file_url('truth', event_id)):
print('event_id:', event_id)
write_hit_orders_csv(event_id)
print(2*'\n')
# -
| scratch_eda/hit_order_methods_scratch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Before we begin, let's execute the cell below to display information about the CUDA driver and GPUs running on the server by running the `nvidia-smi` command. To do this, execute the cell block below by giving it focus (clicking on it with your mouse), and hitting Ctrl-Enter, or pressing the play button in the toolbar above. If all goes well, you should see some output returned below the grey cell.
# !nvidia-smi
# ## Learning objectives
# The **goal** of this lab is to:
#
# - Dig deeper into kernels by analyzing them with Nsight Compute
#
# In the previous section, we looked at some of the ways to optimize the parallel [RDF](../serial/rdf_overview.ipynb) application using OpenMP offloading. Moreover, we used NVIDIA Nsight Systems to get a system-wide performance analysis. Now, let's dig deeper and profile the kernel with the Nsight Compute profiler to get detailed performance metrics. Note: You will get a better understanding of the GPU architecture in the CUDA notebooks.
#
# To do this, let's use the [solution](../../source_code/openmp/SOLUTION/rdf_offload_split.f90) as a reference to get a similar report from Nsight Compute.
#
# First, let's compile the application and profile it with Nsight Systems.
#compile for Tesla GPU
# !cd ../../source_code/openmp && nvfortran -mp=gpu -Minfo=mp -o rdf nvtx.f90 SOLUTION/rdf_offload_split.f90 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/lib64 -lnvToolsExt
#profile and see output of nvptx
# !cd ../../source_code/openmp && nsys profile -t nvtx,cuda --stats=true --force-overwrite true -o rdf_offload_split ./rdf
# Let's checkout the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openmp/rdf_offload_split.qdrep) and open it via the GUI. Now, right click on the kernel and click on "Analyze the Selected Kernel with NVIDIA Nsight Compute" (see below screenshot).
#
# <img src="../images/compute_analyz.png">
#
# Then, make sure to tick the radio button next to "Display the command line to use NVIDIA Nsight Compute CLI".
#
# <img src="../images/compute_command_line.png" width="50%" height="50%">
#
# Then, you simply copy the command, run it and analyze the selected kernel.
#
# To profile the selected kernel, run the below cell (by adding `--set full` we make sure to capture all the sections in Nsight Compute profiler):
#profile with nsight compute
# !cd ../../source_code/openmp && ncu --set full --launch-skip 0 --launch-count 1 -o rdf_offload_split ./rdf
# Let's checkout Nsight Compute report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openmp/rdf_offload_split.ncu-rep) and open it via the GUI. Have a look at the example expected profiler report below:
#
# <img src="../images/f_openmp_offload_split_cmp.png">
#
# When compared to the base version using the baseline feature in Nsight Compute (the base version is the very first parallel version using OpenMP), we achieved a 124% improvement in SM utilization and a 709% improvement in memory utilization.
#
# Feel free to checkout the [solution](../../source_code/openmp/SOLUTION/rdf_offload_split.f90) to help you understand better.
# ### `num_teams(n)` and `thread_limit`
#
# When using the `distribute parallel do` construct, the *do loop* is distributed across all threads for all teams of the current teams region. For example, if there are 10 teams, and each team consists of 256 threads, the loop will be distributed across 2560 threads. You can explicitly specify the number of threads to be created in each team using the `thread_limit(m)` clause. Moreover, you can specify the maximum number of teams created by using `num_teams(n)` (the actual number of teams may be smaller than this number). You can add these clauses to the `teams` construct. Please note that you cannot add `num_teams` to the `parallel` construct. For more information, please read the guide at https://docs.nvidia.com/hpc-sdk/compilers/hpc-compilers-user-guide/#openmp-subset .
#
#
# ```fortran
# !$omp target teams distribute num_teams(65535)
# do i=1,N
#    !$omp parallel do thread_limit(128)
#    do j=1,N
#       < loop code >
# ```
# In the previous section, the grid size was too small to keep the hardware busy. Take a look at the *Launch Statistics* section: the grid size is only 80.
#
# <img src="../images/f_openmp_offload_split_grid.png">
#
# Now, let's start modifying the code again and add `thread_limit(n)` to the `teams` construct. From the top menu, click on *File*, and *Open* `rdf.f90` from the current directory at `Fortran/source_code/openmp`. Remember to **SAVE** your code after changes, before running the cells below.
#
# **NOTE:** Try increasing the grid size to 65535 (*hint:* use `num_teams(65535)`). Once done, compile and profile the code and compare it with the previous version. Are there any differences? What does the profiler show?
#compile for Tesla GPU
# !cd ../../source_code/openmp && nvfortran -mp=gpu -Minfo=mp -o rdf nvtx.f90 SOLUTION/rdf_offload.f90 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/lib64 -lnvToolsExt
# Now, validate the output by running the executable, and then **Profile** your code with Nsight Systems command line `nsys`.
#Run on Nvidia GPU and check the output
# !cd ../../source_code/openmp && ./rdf && cat Pair_entropy.dat
# The output should be the following:
#
# ```
# s2 : -2.452690945278331
# s2bond : -24.37502820694527
# ```
#profile and see output of nvptx
# !cd ../../source_code/openmp && nsys profile -t nvtx,cuda --stats=true --force-overwrite true -o rdf_offload_split_num ./rdf
# Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openmp/rdf_offload_split_num.qdrep) and open it via the GUI. Let's dig deeper and profile the application with Nsight Compute and compare it with the baseline.
#profile with nsight compute
# !cd ../../source_code/openmp && ncu --set full --launch-skip 0 --launch-count 1 -o rdf_offload_split_num ./rdf
# Let's check out the Nsight Compute report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openmp/rdf_offload_split_num.ncu-rep) and open it via the GUI.
#
# <img src="../images/f_openmp_offload_split_cmp2.png">
#
# Compared to the previous version, SM utilization is 72.39% lower. Now, let's compare the two optimization approaches with the base version (using target offload). As seen in the roofline chart below, the application is now in the *memory bound* region, which means it is limited either by memory bandwidth or by latency. Memory is more heavily utilized than compute, so it appears to be limited by memory bandwidth.
#
# <img src="../images/f_openmp_offload_roofline.png">
#
# The kernel exhibits low compute throughput and memory bandwidth utilization relative to the peak performance of the device. Let's have a look at the *Warp State Statistics* section for potential reasons.
#
# <img src="../images/f_openmp_warp_cmp.png">
#
# Comparing the two optimizations, warps are stalled waiting for the L1 instruction queue for local and global (LG) memory operations to stop being full — in other words, the instruction queue is full.
#
# It is clear that the previous approaches did not improve performance. In the following section, we look at the `collapse` clause to improve it.
#
# Feel free to check out the example [solution](../../source_code/openmp/SOLUTION/rdf_offload_offload_split_num.f90) to help you understand better.
# ## Collapse clause
#
# Specifying the `collapse(n)` clause takes the next `n` tightly-nested loops, folds them into one, and applies the OpenMP directives to the new loop. Collapsing loops means that two loops of trip counts N and M respectively will be automatically turned into a single loop with a trip count of N times M. By collapsing two or more parallel loops into a single loop the compiler has an increased amount of parallelism to use when mapping the code to the device. On highly parallel architectures, such as GPUs, this will give us more parallelism to distribute and better performance.
#
# Try using the collapse clause and observe any performance difference. How much this optimization speeds up the code varies by application and target accelerator, but it is not uncommon to see large speed-ups from using collapse on loop nests.
#
# In the example below, we collapse the two loops before applying both teams and thread parallelism to both.
#
# ```fortran
# # !$omp target teams distribute parallel do collapse(2)
# do i=1,N
# do j=1,N
# ```
#
# Now, let's start modifying the original code (before splitting the `teams distribute` from the `parallel do`) and add the collapse clause. From the top menu, click on *File*, then *Open*, and open `rdf.f90` from the `Fortran/source_code/openmp` directory. Remember to **SAVE** your code after making changes, before running the cells below.
#compile for Tesla GPU
# !cd ../../source_code/openmp && nvfortran -mp=gpu -Minfo=mp -o rdf nvtx.f90 rdf.f90 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/lib64 -lnvToolsExt
# Now, validate the output by running the executable, and then **Profile** your code with Nsight Systems command line `nsys`.
#Run on Nvidia GPU and check the output
# !cd ../../source_code/openmp && ./rdf && cat Pair_entropy.dat
# The output should be the following:
#
# ```
# s2 : -2.452690945278331
# s2bond : -24.37502820694527
# ```
#profile and see output of nvptx
# !cd ../../source_code/openmp && nsys profile -t nvtx,cuda --stats=true --force-overwrite true -o rdf_collapse ./rdf
# Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openmp/rdf_collapse.qdrep) and open it via the GUI. Have a look at the example expected profiler report below:
#
# <img src="../images/f_openmp_gpu_collapse.png">
#
# Compare the execution time for the `Pair Calculation` from the NVTX row (annotated in the red rectangle in the example screenshot) with the previous section. It is clear that using the `collapse` clause improved the performance by extracting more parallelism. Please note that when you compare the two methods explored here, one may give better results than the other.
#
# Let's dig deeper and profile the kernel with Nsight Compute and compare it with the base version.
#profile with nsight compute
# !cd ../../source_code/openmp && ncu --set full --launch-skip 0 --launch-count 1 -o rdf_collapse ./rdf
# Let's check out the Nsight Compute report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openmp/rdf_collapse.ncu-rep) and open it via the GUI. Have a look at the example expected profiler report below (yellow is the base version, blue is the current):
#
# <img src="../images/f_openmp_offload_collapse.png">
#
# When compared to the base version (using the baseline feature in Nsight Compute), we achieved a 262% improvement in SM utilization and an 801% improvement in memory utilization. We can also see the improvement and better utilization in the roofline analysis (see the example screenshot below). The ideal situation is to get closer to the rooflines (up).
#
# <img src="../images/f_openmp_collapse_baseline.png">
#
# Dots with the blue outline are the optimized version using the `collapse` clause, and the ones with the orange outline are the base version.
#
# Let's check out the *Occupancy* section. Occupancy is the ratio of the number of active warps per multiprocessor to the maximum number of possible active warps. As seen in the screenshot below, the theoretical occupancy is 43.7% and the achieved occupancy is 41.3%. The plot showing the impact of varying the register count per thread indicates that by reducing the number of registers per thread, we can increase warp occupancy.
#
# <img src="../images/f_openmp_offload_occupancy.png">
#
# We can limit the register count at compile time by adding the `-gpu=maxregcount:32` flag. Based on the plot, let's reduce the number of registers per thread to 32 to see if it improves the overall performance.
#compile for Tesla GPU
# !cd ../../source_code/openmp && nvfortran -mp=gpu -gpu=maxregcount:32 -Minfo=mp -o rdf nvtx.f90 SOLUTION/rdf_offload_collapse.f90 -L/opt/nvidia/hpc_sdk/Linux_x86_64/21.3/cuda/11.2/lib64 -lnvToolsExt
# Now, validate the output by running the executable, and then **Profile** your code with Nsight Systems command line `nsys`.
#Run on Nvidia GPU and check the output
# !cd ../../source_code/openmp && ./rdf && cat Pair_entropy.dat
# The output should be the following:
#
# ```
# s2 : -2.452690945278331
# s2bond : -24.37502820694527
# ```
#profile and see output of nvptx
# !cd ../../source_code/openmp && nsys profile -t nvtx,cuda --stats=true --force-overwrite true -o rdf_offload_collapse_regcount ./rdf
# Let's check out the profiler's report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openmp/rdf_collapse_regcount.qdrep) and open it via the GUI. Let's dig deeper and profile the application with Nsight Compute and compare it with the baseline.
#profile with nsight compute
# !cd ../../source_code/openmp && ncu --set full --launch-skip 0 --launch-count 1 -o rdf_collapse_regcount ./rdf
# Let's check out the Nsight Compute report. Download and save the report file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../../source_code/openmp/rdf_collapse_regcount.ncu-rep) and open it via the GUI. Have a look at the example expected profiler report below (pink is the previous version using the `collapse` clause, blue is the current version after limiting the register count):
#
# <img src="../images/f_openmp_collapse_reg.png">
#
# We achieved 600% higher memory utilization but only 5% higher SM utilization, and the current version takes 102% longer to execute. The roofline chart shows the application is now in the memory-bound region, whereas before it was in the compute-bound region (dots with the pink outline are the base version using the `collapse` clause, and the ones with the blue outline are the current version).
#
# <img src="../images/f_openmp_collapse_reg_roofline.png">
#
# Let's check out the *Occupancy* section. As seen in the screenshot below, the theoretical occupancy is 100% and the achieved occupancy is 94.6%. Higher occupancy does not always result in higher performance; however, low occupancy always reduces the ability to hide latencies, resulting in overall performance degradation.
#
#
# <img src="../images/f_openmp_collapse_reg_occupancy.png">
#
# Let's have a look at the memory chart in the *Memory Workload Analysis* section. Links between local memory and the L1/TEX cache show an increase in the number of requests generated by local memory load/store instructions. These are the result of registers spilling to local memory, which happened after we reduced the number of registers per thread to increase occupancy. One can also see the register spilling by using the `-Mcuda=ptxinfo` option at compile time. Below is an example output showing the number of bytes spilled to local memory. To read more about PTX, check out the guide at https://docs.nvidia.com/cuda/parallel-thread-execution/index.html#ptx-machine-model .
#
# ```
# ptxas info : 0 bytes gmem
# ptxas info : Compiling entry function 'nvkernel_MAIN__F1L98_1_' for 'sm_70'
# ptxas info : Function properties for nvkernel_MAIN__F1L98_1_
# 88 bytes stack frame, 128 bytes spill stores, 120 bytes spill loads
# ptxas info : Used 32 registers, 720 bytes cmem[0]
# ```
#
# It is clear that reducing the number of registers had an impact on memory and hurt the performance by increasing the instruction count as well as the memory traffic.
#
# <img src="../images/f_openmp_collapse_reg_memory.png">
#
# Feel free to check out the example [solution](../../source_code/openmp/SOLUTION/rdf_offload_collapse.f90) to help you understand better.
# ## Post-Lab Summary
#
# If you would like to download this lab for later viewing, it is recommended that you go to your browser's File menu (not the Jupyter notebook File menu) and save the complete web page. This ensures the images are saved as well. You can also execute the following cell block to create a zip file of the files you've been working on, and download it with the link below.
# + language="bash"
# cd ..
# rm -f nways_files.zip
# zip -r nways_files.zip *
# -
# **After** executing the above zip command, you should be able to download and save the zip file by holding down <mark>Shift</mark> and <mark>Right-Clicking</mark> [Here](../nways_files.zip).
# Let us now go back to parallelizing our code using other approaches.
#
# **IMPORTANT**: Please click on **HOME** to go back to the main notebook for *N ways of GPU programming for MD* code.
#
# -----
#
# # <p style="text-align:center;border:3px; border-style:solid; border-color:#FF0000 ; padding: 1em"> <a href=../../../nways_MD_start.ipynb>HOME</a></p>
#
# -----
#
#
# # Links and Resources
# [OpenMP Programming Model](https://computing.llnl.gov/tutorials/openMP/)
#
# [OpenMP Target Directive](https://www.openmp.org/wp-content/uploads/openmp-examples-4.5.0.pdf)
#
# [NVIDIA Nsight System](https://docs.nvidia.com/nsight-systems/)
#
# [NVIDIA Nsight Compute](https://developer.nvidia.com/nsight-compute)
#
# **NOTE**: To be able to see the Nsight Systems profiler output, please download the latest version of Nsight Systems from [here](https://developer.nvidia.com/nsight-systems).
#
# Don't forget to check out additional [OpenACC Resources](https://www.openacc.org/resources) and join our [OpenACC Slack Channel](https://www.openacc.org/community#slack) to share your experience and get more help from the community.
#
# ---
#
# ## Licensing
#
# This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International (CC BY 4.0).
| hpc/nways/nways_labs/nways_MD/English/Fortran/jupyter_notebook/openmp/nways_openmp_opt_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (geodeep)
# language: python
# name: geodeep
# ---
# +
import re
import pickle
import numpy as np
from collections import defaultdict
import networkx as nx  # needed below for nx.get_node_attributes
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.cm as cm
import seaborn as sns
import pandas as pd
import torch
import torch.nn as nn
from sklearn.metrics import confusion_matrix
from torch_geometric.data import Data, DataLoader, DenseDataLoader as DenseLoader
from torch_geometric.data import InMemoryDataset
import torch_geometric.transforms as T
from diff_pool6_max import DiffPool as DiffPool
# +
# !!! load data
with open(r'./data/patient_gumbel_train.pickle', 'rb') as handle:
patient_dict_train = pickle.load(handle)
with open(r'./data/patient_gumbel_val.pickle', 'rb') as handle:
patient_dict_val = pickle.load(handle)
patient_dict = defaultdict(list)
for dic in (patient_dict_train, patient_dict_val):
for key, value in dic.items():
patient_dict[key] += value
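# The loop above concatenates the per-split lists key by key into one `defaultdict`.
# A minimal sketch of the same pattern (the toy dicts here are made up):

```python
from collections import defaultdict

# two hypothetical per-split dictionaries mapping patient id -> list of graphs
train = {'p1': ['g1', 'g2'], 'p2': ['g3']}
val = {'p1': ['g4'], 'p3': ['g5']}

merged = defaultdict(list)
for dic in (train, val):
    for key, value in dic.items():
        merged[key] += value  # list concatenation; keys are unioned

assert merged == {'p1': ['g1', 'g2', 'g4'], 'p2': ['g3'], 'p3': ['g5']}
```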
class PatientDataset(InMemoryDataset):
def __init__(self, root, transform=None, pre_transform=None):
super(PatientDataset, self).__init__(root, transform, pre_transform)
self.data, self.slices = torch.load(self.processed_paths[0])
@property
def raw_file_names(self):
return []
@property
def processed_file_names(self):
return ['patient.dataset']
def download(self):
pass
def process(self):
data_list = []
node_labels_dict = {'Tumor': 0, 'Stroma': 1, 'TIL1': 2, 'TIL2': 3, 'NK': 4, 'MP': 5}
# node_labels_dict = {'CD3p': 0, 'CD3p_CD4p': 1, 'CD8p_CD3p': 2, 'Tumorp': 3, 'Stromap': 4}
class_num = len(node_labels_dict)
for idx, v in enumerate(patient_dict.values()):
for G in v:
node_features = torch.LongTensor([node_labels_dict[i] for i in
list(nx.get_node_attributes(G, 'cell_types').values())]).unsqueeze(1)
x = torch.zeros(len(G.nodes), class_num).scatter_(1, node_features, 1)
y = torch.LongTensor([idx])
edges = sorted([e for e in G.edges] + [e[::-1] for e in G.edges])
edge_index = torch.tensor([[e[0] for e in edges],
[e[1] for e in edges]], dtype=torch.long)
data = Data(x=x, edge_index=edge_index, y=y)
data_list.append(data)
data, slices = self.collate(data_list)
torch.save((data, slices), self.processed_paths[0])
def get_dataset(path, sparse=False):
dataset = PatientDataset(path)
if not sparse:
max_num_nodes = 0
for data in dataset:
max_num_nodes = max(data.num_nodes, max_num_nodes)
if dataset.transform is None:
dataset.transform = T.ToDense(max_num_nodes)
else:
dataset.transform = T.Compose(
[dataset.transform, T.ToDense(max_num_nodes)])
return dataset
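# The `torch.zeros(...).scatter_(1, node_features, 1)` call in `process` builds a
# one-hot matrix from integer cell-type labels. The same encoding sketched with
# NumPy (to keep the sketch dependency-light; the labels below are made up):

```python
import numpy as np

# hypothetical integer class labels for 4 nodes, 6 cell types
labels = np.array([0, 2, 5, 2])
num_classes = 6

# indexing the identity matrix by the labels yields one row of one-hot per node
one_hot = np.eye(num_classes, dtype=np.int64)[labels]
assert one_hot.shape == (4, 6)
assert one_hot[1].tolist() == [0, 0, 1, 0, 0, 0]
assert one_hot.sum() == 4  # exactly one 1 per row
```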
# !!! change data
path = './data/patient_gumbel_val'
dataset = get_dataset(path, sparse=False)
# +
# save on gpu, load on cpu
# load model parameters
def load_model(dir_path, params_name, m, num_patches=5, ratio=0.05, plot=True, ge=False):
device = torch.device('cpu')
model = m(dataset, 5, 64, num_patches=num_patches, ratio=ratio, plot=plot, ge=ge)
params = torch.load(dir_path+params_name, map_location=device)
model.load_state_dict(params)
return model
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
def cal_cfm_LogSoftmax(model, loader, num_patches=5):
model.eval()
matrix = []
y_true = []
y_pred = []
for data in loader:
data = data.to(device)
with torch.no_grad():
out, _ = model(data)
matrix.append(out.numpy())
pred = out.max(1)[1]
y_pred += pred.tolist()
len_ = len(data.y)
indices = [i for i in range(0, len_, num_patches)]
y_true += data.y[indices].view(-1).tolist()
matrix = np.concatenate(matrix)
return confusion_matrix(y_true, y_pred), matrix
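# `cal_cfm_LogSoftmax` keeps one ground-truth label per block of `num_patches`
# consecutive patches (each patient contributes `num_patches` graphs in a row).
# The stride-indexing trick in isolation, with toy labels:

```python
num_patches = 5
labels = [0] * 5 + [1] * 5 + [2] * 5  # 3 patients, 5 patches each

# take every num_patches-th index: one representative label per patient
indices = list(range(0, len(labels), num_patches))
assert indices == [0, 5, 10]
assert [labels[i] for i in indices] == [0, 1, 2]
```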
def plot_cfm_LogSoftmax(cfms, matrices, title):
sns.set(font_scale=1.3)
fig, axes = plt.subplots(1, 4, figsize = (20, 4))
plt.suptitle(title, fontsize=14)
for idx, string in enumerate(['training', 'validation']):
ax1, ax2 = axes[2*idx], axes[2*idx+1]
cfm = cfms[idx]
matrix = matrices[idx]
df_cm = pd.DataFrame(cfm, range(1, cfm.shape[0]+1), range(1, cfm.shape[0]+1))
sns.heatmap(df_cm, cmap=plt.cm.Blues, annot=True, annot_kws={"size": 16}, ax=ax1)
ax1.set_xlabel('Predicted label')
ax1.set_ylabel('True label')
ax1.set_title('confusion matrix for {} dataset'.format(string))
ax2.imshow(matrix, cmap=cm.Blues, extent=[0.5, 10.5, 10.5, 0.5])
ax2.set_xticks([i for i in range(1, 11)])
ax2.set_yticks([i for i in range(1, 11)])
ax2.set_xlabel('Predicted label')
ax2.set_ylabel('True label')
ax2.set_title('softmax values for {} dataset'.format(string))
def plot_loss_acc(filename, xvalues):
with open(filename) as f:
contents = f.readlines()
# you may also want to remove whitespace characters like `\n` at the end of each line
dic = {}
for line in contents:
if "Num" in line:
param = line.strip()
dic[param] = [[], [], [], []]
if "Train Loss" in line:
loss = float(re.findall("Train Loss: ([0-9.]+)", line)[0])
acc = float(re.findall("Train Accuracy: ([0-9.]+)", line)[0])
dic[param][0].append(loss)
dic[param][1].append(acc)
if 'Test Loss' in line:
loss = float(re.findall("Test Loss: ([0-9.]+)", line)[0])
acc = float(re.findall("Test Accuracy: ([0-9.]+)", line)[0])
dic[param][2].append(loss)
dic[param][3].append(acc)
for key, val in dic.items():
fig, (ax_loss, ax_acc) = plt.subplots(nrows=2, ncols=1, sharex=True, figsize=(12, 12))
# plot loss
ax_loss.set_title("Loss")
length = len(val[0])
ax_loss.plot(range(1, length+1), val[0], label='training')
ax_loss.plot(xvalues, val[2], c='r', label='validation')
ax_loss.legend()
# plot accuracy
ax_acc.set_title("Accuracy")
length = len(val[1])
ax_acc.plot(range(1, length+1), val[1], label='training')
ax_acc.plot(xvalues, val[3], c='r', label='validation')
ax_acc.legend()
# -
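# `plot_loss_acc` extracts losses and accuracies from the log with two
# `re.findall` patterns. A quick sketch on a hypothetical log line (the exact
# line format here is made up, matching what the regexes expect):

```python
import re

line = "Epoch: 010, Train Loss: 1.2345, Train Accuracy: 0.8750"
# the character class [0-9.] stops at the comma, capturing just the number
loss = float(re.findall(r"Train Loss: ([0-9.]+)", line)[0])
acc = float(re.findall(r"Train Accuracy: ([0-9.]+)", line)[0])
assert loss == 1.2345
assert acc == 0.875
```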
dir_path = './data/DiffPool_diff_pool6_max_bs50/gumbel2_5/'
plot_loss_acc(dir_path+'log_2021-04-10_17-25.txt', [1]+list(range(5, 501, 5)))
# plt.savefig('./img/p1.png', dpi=300)
# +
batch_size = 50
num_patients = 10
num_patches = 5
train_indices = []
for i in range(num_patients):
tmp = [2*num_patches*i+ j for j in range(num_patches)]
train_indices += tmp
test_indices = sorted(list(set(range(num_patients*num_patches*2)) - set(train_indices)))
train_indices = torch.tensor(train_indices)
test_indices = torch.tensor(test_indices)
train_dataset = dataset[train_indices]
train_loader = DenseLoader(train_dataset, batch_size, shuffle=False)
test_dataset = dataset[test_indices]
test_loader = DenseLoader(test_dataset, batch_size, shuffle=False)
# -
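# The index construction above takes, for each patient, the first `num_patches`
# of its `2*num_patches` consecutive graphs for training and the rest for
# testing. With smaller (hypothetical) numbers the split looks like this:

```python
num_patients, num_patches = 2, 3
train_indices = []
for i in range(num_patients):
    # first num_patches indices of each patient's 2*num_patches block
    train_indices += [2 * num_patches * i + j for j in range(num_patches)]
test_indices = sorted(set(range(num_patients * num_patches * 2)) - set(train_indices))
assert train_indices == [0, 1, 2, 6, 7, 8]
assert test_indices == [3, 4, 5, 9, 10, 11]
```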
dir_path = './data/DiffPool_diff_pool6_max_bs50/gumbel2_5/'
for epoch in [150, 175, 200, 225]:
params_name = 'params_epoch{}.pt'.format(epoch)
model = load_model(dir_path, params_name, DiffPool, \
num_patches=num_patches, ratio=0.05)
cfm_train, matrix_train = cal_cfm_LogSoftmax(model, train_loader, num_patches=num_patches)
cfm_test, matrix_test = cal_cfm_LogSoftmax(model, test_loader, num_patches=num_patches)
cfms = [cfm_train, cfm_test]
matrices = [matrix_train, matrix_test]
plot_cfm_LogSoftmax(cfms, matrices, 'epoch{}'.format(epoch))
| Plot-results-pretrain.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Imports" data-toc-modified-id="Imports-1"><span class="toc-item-num">1 </span>Imports</a></span></li><li><span><a href="#Wikibooks-SQL-Exercise:-The-Computer-Store" data-toc-modified-id="Wikibooks-SQL-Exercise:-The-Computer-Store-2"><span class="toc-item-num">2 </span>Wikibooks SQL Exercise: The Computer Store</a></span><ul class="toc-item"><li><span><a href="#manufacturer-data" data-toc-modified-id="manufacturer-data-2.1"><span class="toc-item-num">2.1 </span>manufacturer data</a></span></li><li><span><a href="#products-data" data-toc-modified-id="products-data-2.2"><span class="toc-item-num">2.2 </span>products data</a></span></li><li><span><a href="#combine-data" data-toc-modified-id="combine-data-2.3"><span class="toc-item-num">2.3 </span>combine data</a></span></li></ul></li><li><span><a href="#Create-SQL-table-from-spark-dataframes" data-toc-modified-id="Create-SQL-table-from-spark-dataframes-3"><span class="toc-item-num">3 </span>Create SQL table from spark dataframes</a></span></li><li><span><a href="#Exercise-Questions" data-toc-modified-id="Exercise-Questions-4"><span class="toc-item-num">4 </span>Exercise Questions</a></span><ul class="toc-item"><li><span><a href="#Select-the-names-of-all-the-products-in-the-store." data-toc-modified-id="Select-the-names-of-all-the-products-in-the-store.-4.1"><span class="toc-item-num">4.1 </span>Select the names of all the products in the store.</a></span></li><li><span><a href="#Select-the-names-and-the-prices-of-all-the-products-in-the-store." data-toc-modified-id="Select-the-names-and-the-prices-of-all-the-products-in-the-store.-4.2"><span class="toc-item-num">4.2 </span>Select the names and the prices of all the products in the store.</a></span></li><li><span><a href="#Select-the-name-of-the-products-with-a-price-less-than-or-equal-to-200." 
data-toc-modified-id="Select-the-name-of-the-products-with-a-price-less-than-or-equal-to-200.-4.3"><span class="toc-item-num">4.3 </span>Select the name of the products with a price less than or equal to 200.</a></span></li><li><span><a href="#Select-all-the-products-with-a-price-between-60-and-120." data-toc-modified-id="Select-all-the-products-with-a-price-between-60-and-120.-4.4"><span class="toc-item-num">4.4 </span>Select all the products with a price between 60 and 120.</a></span></li><li><span><a href="#Select-the-name-and-price-in-cents-(i.e.,-the-price-must-be-multiplied-by-100)" data-toc-modified-id="Select-the-name-and-price-in-cents-(i.e.,-the-price-must-be-multiplied-by-100)-4.5"><span class="toc-item-num">4.5 </span>Select the name and price in cents (i.e., the price must be multiplied by 100)</a></span></li><li><span><a href="#Compute-the-average-price-of-all-the-products." data-toc-modified-id="Compute-the-average-price-of-all-the-products.-4.6"><span class="toc-item-num">4.6 </span>Compute the average price of all the products.</a></span></li><li><span><a href="#Compute-the-average-price-of-all-products-with-manufacturer-code-equal-to-2." data-toc-modified-id="Compute-the-average-price-of-all-products-with-manufacturer-code-equal-to-2.-4.7"><span class="toc-item-num">4.7 </span>Compute the average price of all products with manufacturer code equal to 2.</a></span></li><li><span><a href="#Compute-the-number-of-products-with-a-price-larger-than-or-equal-to-180." data-toc-modified-id="Compute-the-number-of-products-with-a-price-larger-than-or-equal-to-180.-4.8"><span class="toc-item-num">4.8 </span>Compute the number of products with a price larger than or equal to 180.</a></span></li><li><span><a href="#Select-the-name-and-price-of-all-products-with-a-price-larger-than-or-equal-to-180,-and-sort-first-by-price-(in-descending-order),-and-then-by-name-(in-ascending-order)." 
data-toc-modified-id="Select-the-name-and-price-of-all-products-with-a-price-larger-than-or-equal-to-180,-and-sort-first-by-price-(in-descending-order),-and-then-by-name-(in-ascending-order).-4.9"><span class="toc-item-num">4.9 </span>Select the name and price of all products with a price larger than or equal to 180, and sort first by price (in descending order), and then by name (in ascending order).</a></span></li><li><span><a href="#Select-all-the-data-from-the-products,-including-all-the-data-for-each-product's-manufacturer." data-toc-modified-id="Select-all-the-data-from-the-products,-including-all-the-data-for-each-product's-manufacturer.-4.10"><span class="toc-item-num">4.10 </span>Select all the data from the products, including all the data for each product's manufacturer.</a></span></li><li><span><a href="#Select-the-product-name,-price,-and-manufacturer-name-of-all-the-products." data-toc-modified-id="Select-the-product-name,-price,-and-manufacturer-name-of-all-the-products.-4.11"><span class="toc-item-num">4.11 </span>Select the product name, price, and manufacturer name of all the products.</a></span></li><li><span><a href="#Select-the-average-price-of-each-manufacturer's-products,-showing-only-the-manufacturer's-code." data-toc-modified-id="Select-the-average-price-of-each-manufacturer's-products,-showing-only-the-manufacturer's-code.-4.12"><span class="toc-item-num">4.12 </span>Select the average price of each manufacturer's products, showing only the manufacturer's code.</a></span></li><li><span><a href="#Select-the-average-price-of-each-manufacturer's-products,-showing-the-manufacturer's-name." 
data-toc-modified-id="Select-the-average-price-of-each-manufacturer's-products,-showing-the-manufacturer's-name.-4.13"><span class="toc-item-num">4.13 </span>Select the average price of each manufacturer's products, showing the manufacturer's name.</a></span></li><li><span><a href="#Select-the-names-of-manufacturer-whose-products-have-an-average-price-larger-than-or-equal-to-150." data-toc-modified-id="Select-the-names-of-manufacturer-whose-products-have-an-average-price-larger-than-or-equal-to-150.-4.14"><span class="toc-item-num">4.14 </span>Select the names of manufacturer whose products have an average price larger than or equal to 150.</a></span></li><li><span><a href="#Select-the-name-and-price-of-the-cheapest-product." data-toc-modified-id="Select-the-name-and-price-of-the-cheapest-product.-4.15"><span class="toc-item-num">4.15 </span>Select the name and price of the cheapest product.</a></span></li><li><span><a href="#Select-the-name-of-each-manufacturer-along-with-the-name-and-price-of-its-most-expensive-product." data-toc-modified-id="Select-the-name-of-each-manufacturer-along-with-the-name-and-price-of-its-most-expensive-product.-4.16"><span class="toc-item-num">4.16 </span>Select the name of each manufacturer along with the name and price of its most expensive product.</a></span></li><li><span><a href="#Add-a-new-product:-Loudspeakers,-70,-manufacturer-2." data-toc-modified-id="Add-a-new-product:-Loudspeakers,-70,-manufacturer-2.-4.17"><span class="toc-item-num">4.17 </span>Add a new product: Loudspeakers, 70, manufacturer 2.</a></span></li><li><span><a href="#Update-the-name-of-product-8-to-"Laser-Printer"." data-toc-modified-id="Update-the-name-of-product-8-to-"Laser-Printer".-4.18"><span class="toc-item-num">4.18 </span>Update the name of product 8 to "Laser Printer".</a></span></li><li><span><a href="#Apply-a-10%-discount-to-all-products." 
data-toc-modified-id="Apply-a-10%-discount-to-all-products.-4.19"><span class="toc-item-num">4.19 </span>Apply a 10% discount to all products.</a></span></li><li><span><a href="#Apply-a-10%-discount-to-all-products-with-a-price-larger-than-or-equal-to-120." data-toc-modified-id="Apply-a-10%-discount-to-all-products-with-a-price-larger-than-or-equal-to-120.-4.20"><span class="toc-item-num">4.20 </span>Apply a 10% discount to all products with a price larger than or equal to 120.</a></span></li></ul></li></ul></div>
# -
# # Imports
#
# This is wikibooks SQL Exercise "The Computer Store"
# https://en.wikibooks.org/wiki/SQL_Exercises/The_computer_store
import time
time_start_notebook = time.time()
from bhishan import bp
import functools
from functools import reduce
# +
import numpy as np
import pandas as pd
import pyspark
print([(x.__name__,x.__version__) for x in [np, pd, pyspark]])
# +
# sql
from pyspark.sql.functions import col as _col
from pyspark.sql.functions import udf # @udf("integer") def myfunc(x,y): return x - y
from pyspark.sql import functions as F # stddev format_number date_format, dayofyear, when
from pyspark.sql.window import Window
from pyspark.sql.functions import (mean as _mean, min as _min,
max as _max, avg as _avg,
when as _when
)
from pyspark.sql.types import StructField, StringType, IntegerType, FloatType, StructType
# +
from pyspark import SparkConf, SparkContext, SQLContext
spark = pyspark.sql.SparkSession.builder.appName('bhishan').getOrCreate()
sc = spark.sparkContext
sqlContext = SQLContext(sc) # spark_df = sqlContext.createDataFrame(pandas_df)
sc.setLogLevel("INFO")
# -
bp.show_method_attributes(F,5)
# # Wikibooks SQL Exercise: The Computer Store
#
# 
#
# > Please note the datatypes given are `SQLite` datatypes.
# >
# > `PK` and `FK` stand for **primary key** and **foreign key** respectively.
#
# Note: These are Postgres tables, not SQLite as stated in wikibooks.
# ## manufacturer data
dfm = pd.DataFrame({'code': [1, 2, 3, 4, 5, 6],
'name': ['Sony', 'Creative Labs', 'Hewlett-Packard', 'Iomega', 'Fujitsu', 'Winchester']})
dfm
dfm.dtypes
# +
schema = StructType([
StructField('code',IntegerType(),True),
StructField('name',StringType(),True)
])
sdfm = sqlContext.createDataFrame(dfm, schema)
sdfm.show()
# -
# ## products data
dfp = pd.DataFrame({'code': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
'name': ['Hard drive', 'Memory', 'ZIP drive', 'Floppy disk', 'Monitor', 'DVD drive', 'CD drive', 'Printer', 'Toner cartridge', 'DVD burner'],
'price': [240, 120, 150, 5, 240, 180, 90, 270, 66, 180],
'manufacturer': [5, 6, 4, 6, 1, 2, 2, 3, 3, 2]})
dfp
dfp.dtypes
# +
schema = StructType([
StructField('code',IntegerType(),True),
StructField('name',StringType(),True),
StructField('price',IntegerType(),True),
StructField('manufacturer',IntegerType(),True),
])
sdfp = sqlContext.createDataFrame(dfp, schema)
sdfp.show()
# -
# ## combine data
df = dfp.merge(dfm,left_on='manufacturer', right_on='code')
df
# +
df1 = df.rename(columns={'code_x':'code_product',
'code_y': 'code_manufacturer',
'name_x': 'name_product',
'name_y': 'name_manufacturer'})
df1
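# The `code_x`/`code_y` columns renamed above come from pandas itself: when a
# merge produces clashing column names, pandas appends the default suffixes
# `_x` (left) and `_y` (right). A minimal sketch with made-up data:

```python
import pandas as pd

left = pd.DataFrame({'code': [1, 2], 'name': ['a', 'b'], 'manufacturer': [10, 10]})
right = pd.DataFrame({'code': [10], 'name': ['M']})

merged = left.merge(right, left_on='manufacturer', right_on='code')
# clashing columns get _x (left) and _y (right) suffixes; unique ones keep their name
assert sorted(merged.columns) == ['code_x', 'code_y', 'manufacturer', 'name_x', 'name_y']
```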
# +
# bp.show_method_attributes(sdfm,4)
# +
# help(sdfm.join)
# -
sdfp.show(2)
sdfm.show(2)
sdf1 = sdfp.join(sdfm,sdfp.manufacturer==sdfm.code)
sdf1.show()
sdf1.select('price').show(1) # this works
# sdf1.select('code').show(2) # this fails: both joined tables have a 'code' column, so the reference is ambiguous
sdfp.columns
sdfm.columns
# +
# we have repeated column names; unlike pandas, spark keeps duplicate column names rather than adding suffixes.
# -
sdfp.withColumnRenamed('code','code_product').show(2)
sdfp.selectExpr('code as code_product').show(2)
# +
# oldColumns = sdfp.schema.names
# +
oldColumns = ['code','name']
newColumns = [i + '_product' for i in oldColumns]
sdfp2 = functools.reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx],
newColumns[idx]),
range(len(oldColumns)),
sdfp
)
sdfp2.printSchema()
# -
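# The `functools.reduce` call above folds one `withColumnRenamed` per index over
# the dataframe, threading the intermediate result through each step. The same
# fold pattern, with a plain dict standing in for the (immutable) DataFrame:

```python
import functools

def rename_key(d, old, new):
    # immutable-style rename, mirroring withColumnRenamed returning a new DataFrame
    return {new if k == old else k: v for k, v in d.items()}

old_cols = ['code', 'name']
new_cols = [c + '_product' for c in old_cols]
row = {'code': 1, 'name': 'Hard drive', 'price': 240}

renamed = functools.reduce(
    lambda d, idx: rename_key(d, old_cols[idx], new_cols[idx]),
    range(len(old_cols)),
    row)
assert renamed == {'code_product': 1, 'name_product': 'Hard drive', 'price': 240}
```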
sdfp2.show(2)
sdfm.show(2)
# +
oldColumns = ['code','name']
newColumns = [i + '_manufacturer' for i in oldColumns]
sdfm2 = functools.reduce(lambda data, idx: data.withColumnRenamed(oldColumns[idx],
newColumns[idx]),
range(len(oldColumns)),
sdfm
)
sdfm2.show(2)
# -
sdf = sdfp2.join(sdfm2,sdfp2.manufacturer==sdfm2.code_manufacturer)
sdf.show()
sdfp3 = sdfp.selectExpr([c + ' as product_' + c for c in sdfp.columns])
sdfp3.show()
sdfm3 = sdfm.selectExpr([c + ' as manufacturer_' + c for c in sdfm.columns])
sdfm3.show()
sdf3 = sdfp3.join(sdfm3,sdfp3.product_manufacturer==sdfm3.manufacturer_code)
sdf3.show()
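# The `selectExpr` approach above is driven by generated `'col as prefix_col'`
# alias strings; the list comprehension alone, on the product column names:

```python
columns = ['code', 'name', 'price', 'manufacturer']
exprs = [c + ' as product_' + c for c in columns]
assert exprs == ['code as product_code', 'name as product_name',
                 'price as product_price', 'manufacturer as product_manufacturer']
```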
# +
# our sdf1 had duplicate column names and caused problems,
# so we use sdf, built from the renamed dataframes, instead
sdf.show(2)
# -
bp.show_method_attributes(sdf,4)
bp.show_method_attributes(sdf.rdd,4) # rdd attributes
# # Create SQL table from spark dataframes
#
# - https://spark.apache.org/docs/2.2.0/sql-programming-guide.html
#
# **Global Temporary View**
#
# Temporary views in Spark SQL are session-scoped and will disappear if the session that created them terminates. If you want a temporary view that is shared among all sessions and kept alive until the Spark application terminates, you can create a global temporary view. A global temporary view is tied to the system-preserved database `global_temp`, and we must use the qualified name to refer to it, e.g. `SELECT * FROM global_temp.view1`.
# Register the DataFrame as a SQL temporary view
sdfp.createOrReplaceTempView("Products")
# +
# # Register the DataFrame as a global temporary view
# sdfp.createGlobalTempView("people")
# -
# Register the DataFrame as a SQL temporary view
sdfm.createOrReplaceTempView("Manufacturers")
spark.sql("select * from Products").show()
sdf.show(2)
# # Exercise Questions
# ## Select the names of all the products in the store.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 1. Select the names of all the products in the store.
# </div>
# dfp['name'] # series
dfp[['name']] # dataframe
# sdfp[['name']].show()
sdfp.select('name').show()
spark.sql("select name from Products").show()
# ## Select the names and the prices of all the products in the store.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 2. Select the names and the prices of all the products in the store.
# </div>
dfp.columns
dfp[['name','price']]
sdfp[['name','price']].show(2)
sdfp.select('name','price').show(2)
spark.sql("select name, price from Products limit 2").show()
# ## Select the name of the products with a price less than or equal to 200.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 3. Select the name of the products with a price less than or equal to $200.
# </div>
dfp.head(1)
dfp.loc[dfp.price <=200, 'name']
# +
# dfp.query("price <= 200")['name']
# -
sdfp[sdfp.price<=200].select('name').show()
sdfp[['name']][sdfp.price<=200].show()
sdfp.filter(sdfp.price<=200).select('name').show()
sdfp.where((sdfp.price <=200)).select('name').show()
spark.sql("select name from Products where price <= 200").show()
# ## Select all the products with a price between 60 and 120.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 4. Select all the products with a price between \$60 and \$120.
# </div>
dfp.query("60<= price <= 120")
dfp.loc[ (dfp.price>=60) & (dfp.price<=120)]
# +
# dfp.loc[ 60<= dfp.price <=120] # raises ValueError: chained comparison is ambiguous for a Series.
# -
dfp.loc[dfp.price.between(60,120)]
sdfp[sdfp.price.between(60,120)].show()
spark.sql("select * from Products where price between 60 and 120").show()
# ## Select the name and price in cents (i.e., the price must be multiplied by 100)
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 5. Select the name and price in cents (i.e., the price must be multiplied by 100)
# </div>
# +
# dfp['price_cents'] = dfp['price'] * 100
# dfp[['name','price_cents']]
# -
dfp.head(2)
dfp.assign(
price_cents = dfp['price']*100)[['name','price_cents']]
sdfp.withColumn('price_cents',
sdfp['price']*100)[['name', 'price_cents']].show()
spark.sql("select name, price * 100 as price_cents from Products").show()
# ## Compute the average price of all the products.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 6. Compute the average price of all the products.
# </div>
dfp['price'].mean()
sdfp.select(_mean('price')).show()
sdfp.select(F.mean(sdfp['price'])).show()
sdfp.select(_mean('price')).collect()[0].asDict()['avg(price)']
sdf.select("price").rdd.max() # .max() works (Rows are comparable), but .mean() fails since Rows are not numeric.
# +
# [i for i in dir(sdf.select("price").rdd) if 'm' in i if i[0]!='_']
# dir() lists a mean attribute, but it does not work on an RDD of Rows
# -
sdfp.select(*[F.mean(c).alias(c) for c in ['code','price','manufacturer']]).show()
sdfp.describe().show()
sdfp.select([_mean('code'),
_min('price'),
_max('manufacturer')]
).show()
spark.sql("select avg(price) from Products").show()
# ## Compute the average price of all products with manufacturer code equal to 2.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 7. Compute the average price of all products with manufacturer code equal to 2.
# </div>
dfp[dfp.manufacturer==2]
dfp.price[dfp.manufacturer==2].mean()
# Never do:
dfp[dfp.manufacturer==2]['price'].mean()
# never select the whole dataframe if you just need one series.
# +
#********* spark***************
# -
sdfp[['price']][sdfp.manufacturer==2].show()
sdfp[['price']][sdfp.manufacturer==2].describe().show()
sdfp[['price']][sdfp.manufacturer==2].agg({"price": "avg"}).show()
sdfp[['price']][sdfp.manufacturer==2].agg(_avg(_col('price'))).show()
# bad
sdfp.filter(sdfp['manufacturer'] == 2).agg(_avg(_col("price"))).show()
# good
sdfp.select(_avg(_when(sdfp['manufacturer']==2,
sdfp['price']))
).show()
spark.sql("""
select avg(price)
from Products
where manufacturer =2
""").show()
# ## Compute the number of products with a price larger than or equal to 180.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 8. Compute the number of products with a price larger than or equal to $180.
# </div>
dfp[dfp.price >= 180].shape[0]
dfp.price[dfp.price >= 180].shape[0]
sdfp[['price']][sdfp.price>=180].show()
sdfp[['price']][sdfp.price>=180].agg({'price':'count'}).show()
sdfp[['price']][_col('price') >=180].agg({'price':'count'}).show()
spark.sql("""
select count(price)
from Products
where price >= 180
""").show()
# ## Select the name and price of all products with a price larger than or equal to 180, and sort first by price (in descending order), and then by name (in ascending order).
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 9. Select the name and price of all products with a price larger than or equal to $180, and sort first by price (in descending order), and then by name (in ascending order).
# </div>
dfp[['name','price']].query("price >= 180")\
.sort_values(['price','name'],ascending=[False,True])
sdfp.select('name','price').filter(sdfp.price >= 180)\
.orderBy(['price','name'],ascending=[False,True]).show()
sdfp[['name','price']][sdfp.price >= 180]\
.orderBy(['price','name'],ascending=[False,True]).show()
sdfp[['name','price']].filter(sdfp.price >= 180)\
.orderBy(F.desc('price'),F.asc('name')).show()
# +
# sql
# -
spark.sql("""
select name, price
from Products
where price >= 180
order by price desc, name asc
""").show()
# ## Select all the data from the products, including all the data for each product's manufacturer.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 10. Select all the data from the products, including all the data for each product's manufacturer.
# </div>
# +
# dfp.merge(dfm,left_on='manufacturer', right_on='code',how='left')
# -
sdfp.join(sdfm,sdfp.manufacturer==sdfm.code).show() # the join itself runs, but the result has two ambiguous 'name'/'code' columns.
# +
# sdfp.join(sdfm,sdfp.manufacturer==sdfm.code).select('name').show()
# Py4JJavaError
# AnalysisException: "Reference 'name' is ambiguous, could be: name, name.;"
# -
sdfp.take(1)
sdfm.take(1)
# +
# sdfp.show(2)
# -
sdfm.show(2)
# +
(sdfp.selectExpr('code as code_proj',
'name as name_proj',
'price','manufacturer')
.alias('A')
.join(
sdfm.selectExpr('code as code_manu','name as manu')
.alias('B'),
_col('A.manufacturer') == _col('B.code_manu')
)
.show()
)
# -
dfp.merge(dfm,left_on='manufacturer', right_on='code',how='left')
spark.sql("""
select P.code as code_proj, P.name as name_proj, price,
manufacturer, M.code as code_manu, M.name as name_manu
from Products P inner join Manufacturers M
on P.manufacturer = M.code
""").show()
# ## Select the product name, price, and manufacturer name of all the products.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 11. Select the product name, price, and manufacturer name of all the products.
# </div>
dfp.head(2)
dfm.head(2)
dfp.merge(dfm,left_on='manufacturer',
right_on='code',
suffixes=('_proj','_manu')
)[['name_proj','price','name_manu']]
# +
# dfm.set_index('code')
# dfm.set_index('code')['name']
# dfp['manufacturer'].map(dfm.set_index('code')['name'])
# -
dfp[['name','price']].assign(
name_manu = dfp['manufacturer'].map(dfm.set_index('code')['name']))
# +
# using spark
# -
(sdfp.selectExpr(
'name as name_proj',
'price','manufacturer')
.alias('A')
.join(
sdfm.selectExpr('code as code_manu','name as name_manu')
.alias('B'),
_col('A.manufacturer') == _col('B.code_manu')
)
.select('name_proj','price','name_manu')
.show()
)
# +
# using sql
# -
spark.sql("""
select P.name as name_proj, P.price as price,
M.name as name_manu
from Products P inner join Manufacturers M
on P.manufacturer = M.code
"""
).show()
# ## Select the average price of each manufacturer's products, showing only the manufacturer's code.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 12. Select the average price of each manufacturer's products, showing only the manufacturer's code.
# </div>
dfp.head(2)
dfm.head(2)
dfp.groupby('manufacturer')['price'].mean()
dfp['manufacturer'].unique()
dfp.groupby('manufacturer')['price'].mean()
# +
# spark
# -
sdfp.groupBy('manufacturer').mean().show()
gby = sdfp.groupBy('manufacturer')
gby.agg({'price':'avg'}).show()
# +
# sql
# -
spark.sql("""
select manufacturer, avg(price)
from Products
group by manufacturer
""").show()
# ## Select the average price of each manufacturer's products, showing the manufacturer's name.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 13. Select the average price of each manufacturer's products, showing the manufacturer's name.
# </div>
dfp.merge(dfm,left_on='manufacturer', right_on='code')\
.groupby('name_y')['price'].mean()
# +
# spark
# -
sdf.show(2)
sdf.groupBy('name_manufacturer').agg({'price':'avg'}).show()
# +
# sql
# -
spark.sql("""
select M.name, avg(price)
from Products P inner join Manufacturers M
on P.manufacturer = M.code
group by M.name
""").show()
# ## Select the names of manufacturers whose products have an average price larger than or equal to 150.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 14. Select the names of manufacturers whose products have an average price larger than or equal to $150.
# </div>
dfp.head(2)
dfp.merge(dfm,left_on='manufacturer', right_on='code')\
.groupby('name_y')['price'].mean()\
.rename('avg_price').rename_axis('manufacturer')\
.loc[lambda x: x>=150]\
.reset_index()
# +
# spark
# -
sdf.show(2)
sdf.groupBy('name_manufacturer').agg({'price':'avg'})\
.filter('avg(price) >= 150').show()
# +
# sql
# -
spark.sql("""
select M.name, avg(price)
from Products P inner join Manufacturers M
on P.manufacturer = M.code
group by M.name
having avg(price) >= 150
""").show()
# ## Select the name and price of the cheapest product.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 15. Select the name and price of the cheapest product.
# </div>
dfp.head(2)
dfp.loc[dfp.price==dfp.price.min(), ['name','price']]
m = dfp.price.min()
dfp.query("price == @m")[['name','price']]
dfp.query("price == @dfp.price.min()")[['name','price']]
dfp.nsmallest(1,'price')[['name','price']]
dfp.sort_values('price').head(1)[['name','price']]
dfp.sort_values('price').iloc[[0]][['name','price']]
# +
# spark
# +
# sdfp.select('name','price').filter(sdfp.price == _min(sdfp.price)).show()
# this gives Py4JJavaError
# -
sdfp.orderBy('price',ascending=True).limit(1).select('name','price').show()
# +
# sql
# -
spark.sql("""
select name, price
from Products
where price = (select min(price) from Products)
""").show()
# bad way
spark.sql("""
select *
from Products
order by price
limit 1
""").show()
# ## Select the name of each manufacturer along with the name and price of its most expensive product.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 16. Select the name of each manufacturer along with the name and price of its most expensive product.
# </div>
dfp.merge(dfm,left_on='manufacturer', right_on='code')\
.nlargest(1,'price')[['name_y','price']]
df = dfp.merge(dfm,left_on='manufacturer', right_on='code')
df
# +
# for each manufacturer, select the rows with the max price for that manufacturer.
# +
df['max_price_manufacturer'] = df.groupby('name_y')['price'].transform(max)
df
# -
df[df.price==df.max_price_manufacturer]
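# The window-function pattern — annotate each row with its partition's maximum,
# then filter — can be sketched as two passes over plain tuples (hypothetical data):

```python
from collections import defaultdict

# (manufacturer, product, price) rows, made-up values
rows = [(5, 'Hard drive', 240), (6, 'Memory', 120), (5, 'DVD drive', 180)]

# Pass 1: max price per manufacturer, like F.max('price').over(window).
max_price = defaultdict(int)
for manu, _, price in rows:
    max_price[manu] = max(max_price[manu], price)

# Pass 2: keep rows matching their partition's maximum.
top = [r for r in rows if r[2] == max_price[r[0]]]
print(top)  # [(5, 'Hard drive', 240), (6, 'Memory', 120)]
```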
# +
# spark
# -
sdf.show(2)
# +
from pyspark.sql.window import Window
window = Window().partitionBy('name_manufacturer')
sdf = sdf.withColumn('max_price_manufacturer',
F.max('price').over(window))
sdf.select('name_manufacturer','price','max_price_manufacturer').show()
# -
sdf[sdf.price==sdf.max_price_manufacturer].show()
# +
# sql
# -
spark.sql("""
select P.name, P.price, M.name
from Products P, Manufacturers M
where P.manufacturer = M.code
and P.price =
(
select max(P.price)
from Products P
where P.Manufacturer = M.Code
)
""").show()
spark.sql("""
select P.name, P.price, M.name
from Products P, Manufacturers M
where P.manufacturer = M.code
and P.price =
(
select max(P.price)
from Products P, Manufacturers M
where P.Manufacturer = M.Code
)
""").show()
# ## Add a new product: Loudspeakers, 70, manufacturer 2.
# <div class="alert alert-block alert-info">
# <b>Question: 17</b>
# Add a new product: Loudspeakers, $70, manufacturer 2.
# </div>
dfp.tail(2)
# +
dfp.loc[len(dfp)] = [dfp.code.iloc[-1]+1, 'Loudspeakers', 70, 2]
dfp
# +
# pyspark
# -
"""
Spark dataframes and RDDs are immutable; we cannot insert rows into them,
but we can create a new dataframe and union it with the old one to produce a new (immutable) dataframe.
""";
sdfp.show()
sdfp.count()
newRow = spark.createDataFrame([(sdfp.count()+1,'Loudspeakers', 70, 2)], sdfp.columns)
appended = sdfp.union(newRow)
appended.show()
"""
%%sql
insert into Products(code,name, price,manufacturer)
values (11, 'Loudspeakers',70, 2);
""";
# ## Update the name of product 8 to "Laser Printer".
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 18. Update the name of product 8 to "Laser Printer".
# </div>
dfp.loc[dfp.code==8,'name'] = 'Laser Printer'
dfp
# +
# spark
# -
sdfp.show()
sdfp.withColumn("new_name", F.when(F.col("code")==8, 'Laser Printer')\
.otherwise(F.col("name")))\
.drop('name')\
.withColumnRenamed('new_name','name')\
.show()
"""
%%sql
update Products
set name = 'Laser Printer'
where code = 8;
""";
# ## Apply a 10% discount to all products.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 19. Apply a 10% discount to all products.
# </div>
dfp['price'] *= 0.9
dfp
# +
# spark
# -
sdfp.withColumn('discounted_price', F.col('price')*0.9).show()
# +
# sql
# -
"""
update Products
set price = price * 0.9
set price = price - (price * 0.1)
""";
# ## Apply a 10% discount to all products with a price larger than or equal to 120.
# <div class="alert alert-block alert-info">
# <b>Question:</b>
# 20. Apply a 10% discount to all products with a price larger than or equal to $120.
# </div>
dfp
dfp.loc[dfp.price >= 120, 'price'] *= 0.9
dfp
# +
# spark
# -
sdfp.withColumn("discounted_price",
F.when(F.col("price")>=120, F.col('price')*0.9)\
.otherwise(F.col("price"))
).show()
# +
# sql
# -
"""
update Products
set price = price * 0.9
where price >= 120;
""";
time_taken = time.time() - time_start_notebook
h,m = divmod(time_taken,60*60)
print('Time taken: {:.0f} hr {:.0f} min {:.0f} secs'.format(h, *divmod(m,60)))
| a01_PySpark/f02_Pyspark_Solutions_to_Wikibooks_SQL/a01_computer_store/a01_comp_store.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Machine Learning for Engineers: [FacialRecognition](https://www.apmonitor.com/pds/index.php/Main/FacialRecognition)
# - [Facial Recognition](https://www.apmonitor.com/pds/index.php/Main/FacialRecognition)
# - Source Blocks: 8
# - Description: Use computer vision and deep learning to detect faces, recognize the class participant, record attendance, and send a customized message to students who missed class that day.
# - [Course Overview](https://apmonitor.com/pds)
# - [Course Schedule](https://apmonitor.com/pds/index.php/Main/CourseSchedule)
#
# +
import cv2
import mediapipe as mp
mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils
import urllib.request
# download image as class.jpg
url = 'http://apmonitor.com/pds/uploads/Main/students_walking.jpg'
urllib.request.urlretrieve(url, 'class.jpg')
IMAGE_FILES = ['class.jpg']
with mp_face_detection.FaceDetection(
model_selection=1, min_detection_confidence=0.5) as face_detection:
for idx, file in enumerate(IMAGE_FILES):
image = cv2.imread(file)
# Convert the BGR image to RGB and process it with MediaPipe Face Detection.
results = face_detection.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
# Draw face detections of each face.
if not results.detections:
continue
annotated_image = image.copy()
for detection in results.detections:
print('Nose tip:')
print(mp_face_detection.get_key_point(
detection, mp_face_detection.FaceKeyPoint.NOSE_TIP))
mp_drawing.draw_detection(annotated_image, detection)
cv2.imwrite('annotated_image' + str(idx) + '.png', annotated_image)
# +
import cv2
import mediapipe as mp
mp_face_detection = mp.solutions.face_detection
mp_drawing = mp.solutions.drawing_utils
# webcam input
cap = cv2.VideoCapture(0)
with mp_face_detection.FaceDetection(
model_selection=0, min_detection_confidence=0.5) as face_detection:
while cap.isOpened():
success, image = cap.read()
if not success:
print("Ignoring empty camera frame.")
# If loading a video, use 'break' instead of 'continue'.
continue
# To improve performance, optionally mark the image as not writeable to
# pass by reference.
image.flags.writeable = False
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
results = face_detection.process(image)
# Draw the face detection annotations on the image.
image.flags.writeable = True
image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
if results.detections:
for detection in results.detections:
mp_drawing.draw_detection(image, detection)
# Flip the image horizontally for a selfie-view display.
cv2.imshow('MediaPipe Face Detection', cv2.flip(image, 1))
if cv2.waitKey(5) & 0xFF == 27:
break
cap.release()
# +
import matplotlib.pyplot as plt
from mtcnn.mtcnn import MTCNN
import urllib.request
# download image as class.jpg
url = 'http://apmonitor.com/pds/uploads/Main/students_walking.jpg'
urllib.request.urlretrieve(url, 'class.jpg')
def draw_faces(data, result_list):
for i in range(len(result_list)):
x1, y1, width, height = result_list[i]['box']
x2, y2 = x1 + width, y1 + height
plt.subplot(1, len(result_list), i+1)
plt.axis('off')
plt.imshow(data[y1:y2, x1:x2])
plt.show()
pixels = plt.imread('class.jpg') # read image
detector = MTCNN() # create detector
faces = detector.detect_faces(pixels) # detect faces
draw_faces(pixels, faces) # display faces
# -
for x in faces:
print(x['confidence'])
# +
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import cv2
import urllib.request
# download image as class.jpg
url = 'http://apmonitor.com/pds/uploads/Main/students_walking.jpg'
urllib.request.urlretrieve(url, 'class.jpg')
# download cascade classifier configuration
url = 'http://apmonitor.com/pds/uploads/Main/cascade.xml'
urllib.request.urlretrieve(url, 'cascade.xml')
def draw_faces(data, result_list):
for i in range(len(result_list)):
x1, y1, width, height = result_list[i]
x2, y2 = x1 + width, y1 + height
plt.subplot(1, len(result_list), i+1)
plt.axis('off')
plt.imshow(data[y1:y2, x1:x2])
pixels = plt.imread('class.jpg')
faceCascade = cv2.CascadeClassifier('cascade.xml')
gray = cv2.cvtColor(pixels, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray,scaleFactor=1.1,
minNeighbors=2,\
minSize=(10, 10))
# display only the faces
draw_faces(pixels, faces)
# display identified faces on original image
fig, ax = plt.subplots(); ax.imshow(pixels)
for (x, y, w, h) in faces:
rect = patches.Rectangle((x, y), w, h, lw=2, \
alpha=0.5, edgecolor='r', \
facecolor='none')
ax.add_patch(rect)
plt.show()
# +
import cv2
import time
import urllib.request
# download cascade classifier configuration
url = 'http://apmonitor.com/pds/uploads/Main/cascade.xml'
urllib.request.urlretrieve(url, 'cascade.xml')
faceCascade = cv2.CascadeClassifier('cascade.xml')
video_capture = cv2.VideoCapture(0)
t = time.time()
while time.time()-t <=20: # run for max 20 sec
ret, frame = video_capture.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray,scaleFactor=1.1,
minNeighbors=5,minSize=(30, 30))
for (x, y, w, h) in faces:
cv2.rectangle(frame, (x, y), (x+w, y+h), (255, 0, 0), 2)
cv2.imshow('Video', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
video_capture.release()
cv2.destroyAllWindows()
# +
import pandas as pd
import smtplib
from email.mime.text import MIMEText
from getpass import getpass
ask = False
if ask:
From = input("Enter email address of the sender: ")
username = input("Enter email user name: ")
smtp_server = input("Enter SMTP server address: ")
password = getpass("Password for "+username+" at "+smtp_server+": ")
else:
From ='Instructor <<EMAIL>>'
username ='my_username'
smtp_server ='<EMAIL>'
password = '<PASSWORD>' # not good practice to put password in the code
url = 'http://apmonitor.com/pds/uploads/Main/students.txt'
students = pd.read_csv(url)
def sendEmail(Subject, bodyText, To, pw):
msg = MIMEText(bodyText)
msg['Subject'] = Subject
msg['From'] = From
msg['To'] = To
server = smtplib.SMTP(smtp_server)
server.starttls()
server.login(username, password)
server.send_message(msg)
server.quit()
return 'Sent to ' + To
Message = '''We missed you in class today. I hope you are doing well.
Today we worked on the project for facial recognition.
Best regards,
<NAME>
Brigham Young University'''
for i in range(len(students)):
bdTxt = students.First[i] + ',\n\n' + Message
To = students.Email[i]
print(To)
Subject = "Hi " + students.First[i] + ", we missed you today"
sendEmail(Subject,bdTxt,To,password)
# -
import pyttsx3
name = 'Peter'
engine = pyttsx3.init()
engine.say("Welcome to class, "+name)
engine.runAndWait()
| All_Source_Code/FacialRecognition/FacialRecognition.ipynb |
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .fs
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: F#
// language: fsharp
// name: ifsharp
// ---
// + [markdown] deletable=true editable=true
// 2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.
//
// What is the sum of the digits of the number 2^1000?
// + deletable=true editable=true
open System.Numerics
let rec pow x n =
match n with
| 0 -> 1
| _ -> x * (pow x (n-1))
let bigOne = BigInteger(1)
let rec pow' x n =
if n = 0 then bigOne
else x * (pow' x (n-1))
let twoToTheOneThousand = pow' (new BigInteger(2)) 1000
twoToTheOneThousand.ToString().ToCharArray()
|> Array.map string
|> Array.map int
|> Array.sum
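// As a cross-check outside the F# kernel: Python integers are arbitrary
// precision, so the same computation needs no BigInteger wrapper:

```python
# Sum the decimal digits of 2^1000 (Project Euler problem 16).
digit_sum = sum(int(d) for d in str(2 ** 1000))
print(digit_sum)  # 1366
```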
// + deletable=true editable=true
| Problem 016 - Power digit sum.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + code_folding=[47]
class BinaryTree:
def __init__(self, val):
self.value=val
self.right=None
self.left=None
def get_value(self):
return self.value
def set_value(self,val):
self.value=val
def get_right(self):
return self.right
def get_left(self):
return self.left
def insert_left(self,v):
t=BinaryTree(v)
if self.left:
t.left=self.left
self.left=t
return t
def insert_right(self,v):
t=BinaryTree(v)
if self.right:
t.right=self.right
self.right=t
return t
# ----------------------------------------
def build_tree():
a=BinaryTree('*')
b=a.insert_left('b')
b.insert_right('d')
b.insert_left('x')
c=a.insert_right('c')
c.insert_left('e')
c.insert_right('f')
return a
t= build_tree()
import textwrap
def to_str(t,indent):
if not t: return 'null'
s= ("""
{{
value: {:},
left: {:},
right: {:}
}}""".format(
t.get_value(),
to_str(t.get_left(),indent+4),
to_str(t.get_right(),indent+4)
))
return textwrap.indent(s,' '* indent)
# print(to_str(t,0))
from collections import OrderedDict
import json
def build_dict(t):
if not t: return
o=OrderedDict()
# o=dict()
o['v']=t.get_value()
left=t.get_left()
if left:
o['left']=build_dict(left)
right=t.get_right()
if right:
o['right']=build_dict(right)
return o
d=build_dict(t)
json.dumps(d)
# +
# VISUALIZATION ----------------------
import networkx as nx
from networkx.drawing.nx_agraph import write_dot, graphviz_layout
import matplotlib.pyplot as plt
def uid_gen():
n=0
while True:
n+=1
yield n
uid=uid_gen()
def draw_graph(G):
plt.rcParams["figure.figsize"] = [10.,5.]
pos =graphviz_layout(G, prog='dot')
node_labels=nx.get_node_attributes(G,'name')
nx.draw(G,pos, with_labels=True,labels=node_labels, width=2, node_size=1000, node_color="orange",alpha=1.0)
lbls = nx.get_edge_attributes(G,'label')
nx.draw_networkx_edge_labels(G, pos, edge_labels = lbls)
# nx.draw_networkx_nodes(G,pos,node_size=2000, nodelist=['x'])
# nx.draw_networkx_edges(G, pos, alpha=0.9, width=6, edge_color="orange", edgelist=[(1, 'Petya')])
# plt.figure(1)
plt.show()
import uuid
# import random
def build_graph(g,parent_g_node,t, edge_label=None):
# global count
if not t: return
node= next(uid) #str(uuid.uuid4()) #random.random()
g.add_node(node, name=t.get_value())
if parent_g_node:
g.add_edge(parent_g_node,node, label=edge_label)
left=t.get_left()
right=t.get_right()
if left:
build_graph(g,node, left, 'L')
if right:
build_graph(g,node, right, 'R')
return node
def show_bin_tree(t):
    G=nx.DiGraph()
    root=build_graph(G,None,t )
    draw_graph(G)
    return G, root
G, root = show_bin_tree(t)
# -
for n in G.nodes.items():
print(n)
G.edges()
root
# # Export JSON to use with D3
# +
from networkx.readwrite import json_graph
import json
# json.dumps(json_graph.tree_data(G,root='*', attrs={'children': 'children', 'id': 'name'}))
json.dumps(json_graph.tree_data(G,root=root, attrs={'children': 'children', 'id': 'id'}))
# -
# Using the information from above we can define four rules as follows:
#
# - If the current token is a '(', add a new node as the left child of the current node, and descend to the left child.
# - If the current token is in the list ['+','-','/','*'], set the root value of the current node to the operator represented by the current token. Add a new node as the right child of the current node and descend to the right child.
# - If the current token is a number, set the root value of the current node to the number and return to the parent.
# - If the current token is a ')', go to the parent of the current node.
#
# +
class BinaryTree:
def __init__(self, val):
self.value = val
self.right = None
self.left = None
def get_value(self):
return self.value
def set_value(self, val):
self.value = val
def get_right(self):
return self.right
def get_left(self):
return self.left
def insert_left(self, v):
t = BinaryTree(v)
if self.left:
t.left = self.left
self.left = t
return t
def insert_right(self, v):
t = BinaryTree(v)
if self.right:
t.right = self.right
self.right = t
return t
# from pythonds.basic.stack import Stack
# from pythonds.trees.binaryTree import BinaryTree
def buildParseTree(fpexp):
fplist = fpexp.split()
pStack = [] # Stack()
eTree = BinaryTree('')
pStack.append(eTree) # push(eTree)
currentTree = eTree
for i in fplist:
if i == '(':
currentTree.insert_left('') # insertLeft('')
pStack.append(currentTree) # push(currentTree)
currentTree = currentTree.get_left() # getLeftChild()
elif i in ['+', '-', '*', '/']:
currentTree.set_value(i) # setRootVal(i)
currentTree.insert_right('') # insertRight('')
pStack.append(currentTree) # push(currentTree)
currentTree = currentTree.get_right() # getRightChild()
elif i == ')':
currentTree = pStack.pop()
elif i not in ['+', '-', '*', '/', ')']:
try:
currentTree.set_value(int(i)) # setRootVal(int(i))
parent = pStack.pop()
currentTree = parent
except ValueError:
raise ValueError("token '{}' is not a valid integer".format(i))
return eTree
pt = buildParseTree("( ( 10 + 5 ) * ( 6 - 7 ) )")
G=nx.DiGraph()
root=build_graph(G,None,pt )
draw_graph(G)
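# A natural companion to buildParseTree is evaluating the tree. A self-contained
# sketch with a minimal node class (hypothetical, not the notebook's BinaryTree):

```python
import operator

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def evaluate(node):
    """Post-order evaluation: leaves are operands, internal nodes operators."""
    if node.left is None and node.right is None:
        return node.value
    return OPS[node.value](evaluate(node.left), evaluate(node.right))

# ( ( 10 + 5 ) * ( 6 - 7 ) )
tree = Node('*', Node('+', Node(10), Node(5)), Node('-', Node(6), Node(7)))
print(evaluate(tree))  # -15
```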
# +
class No:
def __init__(self, v):
self.data=v
self.left=None
self.right=None
no=No(5)
# +
import matplotlib.rcsetup as rcsetup
print(rcsetup.all_backends)
| coding/binary-tree.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Q#
# language: qsharp
# name: iqsharp
# ---
# # CHSH Game Workbook
#
# **What is this workbook?**
# A workbook is a collection of problems, accompanied by solutions to them.
# The explanations focus on the logical steps required to solve a problem; they illustrate the concepts that need to be applied to come up with a solution to the problem, explaining the mathematical steps required.
#
# Note that a workbook should not be the primary source of knowledge on the subject matter; it assumes that you've already read a tutorial or a textbook and that you are now seeking to improve your problem-solving skills. You should attempt solving the tasks of the respective kata first, and turn to the workbook only if stuck. While a textbook emphasizes knowledge acquisition, a workbook emphasizes skill acquisition.
#
# This workbook describes the solutions to the problems offered in the [CHSH Game kata](./CHSHGame.ipynb).
# Since the tasks are offered as programming problems, the explanations also cover some elements of Q# that might be non-obvious for a first-time user.
# ## Part I. Classical CHSH
#
# ### Task 1.1. Win Condition
# **Input:**
#
# 1. Alice and Bob's starting bits (X and Y),
#
# 2. Alice and Bob's output bits (A and B).
#
# **Output:**
# True if Alice and Bob won the CHSH game, that is, if X ∧ Y = A ⊕ B, and false otherwise.
# ### Solution
#
# There are four input pairs (X, Y) possible, (0,0), (0,1), (1,0), and (1,1), each with 25% probability.
# In order to win, Alice and Bob have to output different bits if the input is (1,1), and same bits otherwise.
#
# To check whether the win condition holds, you need to compute $x \wedge y$ and $a \oplus b$ and to compare these values: if they are equal, Alice and Bob won. [`Microsoft.Quantum.Logical`](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.logical) library offers you logical functions `And` and `Xor` which you can use for this computation. Alternatively, you can compute these values using built-in operators: $x \wedge y$ as `x and y` and $a \oplus b$ as `a != b`.
# +
%kata T11_WinCondition_Test
open Microsoft.Quantum.Logical;
function WinCondition (x : Bool, y : Bool, a : Bool, b : Bool) : Bool {
let p = And(x, y);
let u = Xor(a, b);
return (p == u);
}
# -
# [Return to task 1.1 of the CHSH Game kata.](./CHSHGame.ipynb#Task-1.1.-Win-Condition)
# ### Task 1.2. Alice and Bob's classical strategy
#
# In this task you have to implement two functions, one for Alice's classical strategy and one for Bob's.
# Note that they are covered by one test, so you have to implement both to pass the test. Once you implement one of the strategies, execute its cell - it will fail with the error message indicating that the other strategy is not implemented yet. Once you implement the second strategy, execute its cell to get the test result.
#
# **Input:** Alice's OR Bob's starting bit (X or Y, respectively).
#
# **Output:** The bit that Alice OR Bob should output (A or B, respectively) to maximize their chance of winning.
# ### Solution
#
# If Alice and Bob always return TRUE, they will have a 75% win rate,
# since TRUE ⊕ TRUE = FALSE, and the AND operation on their input bits will be false with 75% probability.
#
# Alternatively, Alice and Bob could agree to always return FALSE to achieve the same 75% win probability.
# A classical strategy cannot achieve a higher success probability.
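# The 75% figure is easy to verify by brute force in Python (a cross-check
# outside Q#, with a hypothetical helper name):

```python
def chsh_win(x, y, a, b):
    # Win condition: (x AND y) == (a XOR b)
    return (x and y) == (a != b)

# Constant strategy: both players always answer True.
wins = sum(chsh_win(x, y, True, True)
           for x in (False, True) for y in (False, True))
print(wins / 4)  # 0.75
```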
# +
%kata T12_ClassicalStrategy_Test
operation AliceClassical (x : Bool) : Bool {
return true;
}
# +
%kata T12_ClassicalStrategy_Test
operation BobClassical (y : Bool) : Bool {
// Alternatively, Alice and Bob could agree to always return FALSE to achieve the same 75% win chances.
return true;
}
# -
# [Return to task 1.2 of the CHSH Game kata.](./CHSHGame.ipynb#Task-1.2.-Alice-and-Bob's-classical-strategy)
# ## Part II. Quantum CHSH
#
# In the quantum version of the game, the players still can not
# communicate during the game, but they are allowed to share
# qubits from a Bell pair before the start of the game.
#
# ### Task 2.1. Entangled pair
#
# **Input:** An array of two qubits in the $|00\rangle$ state.
#
# **Goal:** Create a Bell state $|\Phi^{+}\rangle = \frac{1}{\sqrt{2}} \big( |00\rangle + |11\rangle \big)$ on these qubits.
# ### Solution
#
# You can find a detailed explanation of the solution to this task in the [Superposition kata workbook](../Superposition/Workbook_Superposition.ipynb#bell-state).
# +
%kata T21_CreateEntangledPair_Test
operation CreateEntangledPair (qs : Qubit[]) : Unit {
    H(qs[0]);
    // This performs a Hadamard transform on the first qubit,
    // which produces the intermediate state (|00> + |10>) / sqrt(2).
    CX(qs[0], qs[1]);
    // CX (Controlled X, Controlled NOT, CNOT) operates on two qubits, putting the second qubit through a NOT gate
    // if and only if the first qubit is '1'.
    // The 4x4 operator matrix for CX is:
    // [1 0 0 0]
    // [0 1 0 0]
    // [0 0 0 1]
    // [0 0 1 0]
    // The original state |00> corresponds to the two-qubit amplitude vector [1, 0, 0, 0].
    // The state after the Hadamard transform is given by the column vector [1/sqrt(2), 0, 1/sqrt(2), 0].
    // The CX operator changes this vector to [1/sqrt(2), 0, 0, 1/sqrt(2)], which is the desired Bell state.
}
# -
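# The matrix arithmetic from the comments above can be double-checked with a short NumPy sketch (an illustration only, not part of the kata):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
CX = np.array([[1, 0, 0, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 0, 1, 0]])                 # CNOT with the first qubit as control

state = np.array([1.0, 0, 0, 0])              # |00>
state = np.kron(H, np.eye(2)) @ state         # (|00> + |10>) / sqrt(2)
state = CX @ state                            # (|00> + |11>) / sqrt(2), the Bell state
print(np.round(state, 4))                     # [0.7071 0.     0.     0.7071]
```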
# [Return to task 2.1 of the CHSH Game kata.](./CHSHGame.ipynb#Task-2.1.-Entangled-pair)
# ### Task 2.2. Alice's quantum strategy
#
# **Inputs:**
#
# 1. Alice's starting bit (X),
#
# 2. Alice's half of Bell pair she shares with Bob.
#
# **Goal:** Measure Alice's qubit in the Z basis if her bit is 0 (false), or the X basis if her bit is 1 (true), and return the result.
# The state of the qubit after the operation does not matter.
# ### Solution
#
# In Q#, you can perform measurements in a specific basis using either the
# [Measure operation](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.intrinsic.measure)
# or the convenient shorthands for the measure-and-reset-to-$|0\rangle$ sequence of operations
# [MResetZ](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.measurement.mresetz) and
# [MResetX](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.measurement.mresetx).
#
# (See the [discussion](#discussion) below for details on why Alice should follow this strategy.)
# +
%kata T22_AliceQuantum_Test
open Microsoft.Quantum.Measurement;
operation AliceQuantum (bit : Bool, qubit : Qubit) : Bool {
    if (bit) {
        // Measure in the X basis (and reset the qubit to |0>)
        let q = MResetX(qubit);
        return (q == One);
    }
    else {
        // Measure in the Z basis (and reset the qubit to |0>)
        let q = MResetZ(qubit);
        return (q == One);
    }
}
# -
# [Return to task 2.2 of the CHSH Game kata.](./CHSHGame.ipynb#Task-2.2.-Alice's-quantum-strategy)
# ### Task 2.3. Rotate Bob's qubit
#
# **Inputs:**
#
# 1. The direction to rotate: true for clockwise, false for counterclockwise,
#
# 2. Bob's qubit.
#
# **Goal:** Rotate the qubit $\frac{\pi}{8}$ radians around the Y axis in the given direction.
# ### Solution
#
# In Q#, you can perform rotations around a specific axis using one of the rotation gates.
# In our case the axis is Y, and the corresponding rotation gate is [Ry](https://docs.microsoft.com/qsharp/api/qsharp/microsoft.quantum.intrinsic.ry).
#
# Note that in the $R_y(\theta)$ matrix entries, the trigonometric functions are actually performed on $\frac{\theta}{2}$, so the angle input into the function has to be doubled.
#
# (See the [discussion](#discussion) below for details on why Bob should rotate the qubit.)
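# The half-angle convention can be checked numerically (a NumPy illustration; the factor of 2 below mirrors the `2.0 * dir` in the Q# solution):

```python
import numpy as np

def ry(theta):
    # R_y(theta) applies the trigonometric functions to theta / 2
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# Passing 2 * (pi/8) yields cos(pi/8) ≈ 0.9239 and sin(pi/8) ≈ 0.3827 in the entries
print(np.round(ry(2 * np.pi / 8), 4))
# [[ 0.9239 -0.3827]
#  [ 0.3827  0.9239]]
```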
# +
%kata T23_RotateBobQubit_Test
open Microsoft.Quantum.Math;
operation RotateBobQubit (clockwise : Bool, qubit : Qubit) : Unit {
    // Ry applies trigonometric functions to theta / 2, so the angle is doubled below
    mutable dir = PI() / 8.0;
    if (clockwise) {
        set dir = dir * -1.0;
    }
    Ry(2.0 * dir, qubit);
}
# -
# [Return to task 2.3 of the CHSH Game kata.](./CHSHGame.ipynb#Task-2.3.-Rotate-Bob's-qubit)
# ### Task 2.4. Bob's quantum strategy
#
# **Inputs:**
#
# 1. Bob's starting bit (Y),
#
# 2. Bob's half of Bell pair he shares with Alice.
#
# **Goal:** Measure Bob's qubit in the $\frac{\pi}{8}$ basis if his bit is 0 (false), or the $-\frac{\pi}{8}$ basis if his bit is 1 (true), and return the result.
# The state of the qubit after the operation does not matter.
# ### Solution
#
# Measuring a qubit in the $\theta$ basis is the same as rotating the qubit by $\theta$, clockwise, and then making a standard measurement in the Z basis.
#
# To implement the described transformation in Q#, we need to rotate the qubit by $\frac{\pi}{8}$ clockwise if `bit = false` or counterclockwise if `bit = true` and then perform a measurement.
# We can do the rotation using the operation from the previous task (note that we need to negate the boolean parameter when calling it).
#
# (See the [discussion](#discussion) below for details on why Bob should follow this strategy.)
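# The equivalence "measure in a rotated basis" = "rotate clockwise, then measure in Z" can be illustrated with a NumPy sketch on an arbitrary single-qubit state (a numerical check only, not part of the kata):

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

theta = np.pi / 8
psi = np.array([0.6, 0.8])                     # an arbitrary normalized qubit state

# Measuring in the theta basis: project onto the basis vector Ry(theta)|0>
basis0 = ry(theta) @ np.array([1.0, 0.0])
p0_rotated_basis = abs(basis0 @ psi) ** 2

# Equivalent recipe: rotate the state clockwise by theta, then measure in Z
p0_rotate_then_z = abs((ry(-theta) @ psi)[0]) ** 2

print(np.isclose(p0_rotated_basis, p0_rotate_then_z))  # True
```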
# +
%kata T24_BobQuantum_Test
open Microsoft.Quantum.Measurement;
operation BobQuantum (bit : Bool, qubit : Qubit) : Bool {
    RotateBobQubit(not bit, qubit);
    return M(qubit) == One;
}
# -
# [Return to task 2.4 of the CHSH Game kata.](./CHSHGame.ipynb#Task-2.4.-Bob's-quantum-strategy)
# ### Task 2.5. Play the CHSH game using the quantum strategy
#
# **Input:**
# Operations that return Alice and Bob's output bits (A and B) based on their quantum
# strategies and given their respective qubits from the Bell pair.
# Alice and Bob have already been told what their starting bits X and Y are.
#
# **Goal:** Return Alice and Bob's output bits (A, B).
# Note that this task uses strategies `AliceQuantum` and `BobQuantum`, which you've implemented in tasks 2.2 and 2.4, respectively.
# ### Solution
#
# Putting together the building blocks we've implemented into a strategy is very simple:
#
# 1. Allocate two qubits and prepare a Bell state on them (using `CreateEntangledPair` from task 2.1).
# 2. Send one of the qubits to Alice and another to Bob (this step is "virtual", not directly reflected in Q# code, other than making sure that Alice and Bob each act on their qubit only).
# 3. Have Alice and Bob perform their measurements on their respective qubits using `askAlice` and `askBob` operations.
# 4. Return their measurement results.
# +
%kata T25_PlayQuantumCHSH_Test
operation PlayQuantumCHSH (askAlice : (Qubit => Bool), askBob : (Qubit => Bool)) : (Bool, Bool) {
    using (bell = Qubit[2]) {
        CreateEntangledPair(bell);
        let A = askAlice(bell[0]);
        let B = askBob(bell[1]);
        return (A, B);
    }
}
# -
# [Return to task 2.5 of the CHSH Game kata.](./CHSHGame.ipynb#Task-2.5.-Play-the-CHSH-game-using-the-quantum-strategy)
# ### <a name="discussion"></a>Discussion: probability of victory for quantum strategy
#
# The above quantum strategy adopted by Alice and Bob offers a win rate of $\frac{2 + \sqrt{2}}{4}$, or about 85.36%. Let's see why this is the case.
#
# First, consider the outcome if Alice and Bob simply measure their qubits in the Z basis without manipulating them at all. Because of the entanglement inherent to the Bell state they hold, their measurements will always agree (i.e., both true or both false).
# This will suffice for victory in the three scenarios (0,0), (0,1) and (1,0) and fail for (1,1), so their win probability is 75%, the same as that for the straightforward classical strategies of invariably returning both true or both false.
#
# Now let's analyze the optimal quantum strategy.
#
# > As a reminder, the probability "wavefunction" of a two-qubit state is given by the following length-4 vector of amplitudes:
# >
# > $$
# \begin{bmatrix}
# \psi_{00}\\
# \psi_{01}\\
# \psi_{10}\\
# \psi_{11}
# \end{bmatrix}
# $$
# >
# > $|\psi_{ij}|^2$ gives the probability of observing the corresponding basis state $|ij\rangle$ upon measuring the qubit pair.
#
# The initial state $|00\rangle$ has $\psi_{00} = 1$ and $\psi_{01} = \psi_{10} = \psi_{11} = 0$.
# The Bell state we prepare as the first step of the game has an amplitude vector as follows (we'll use decimal approximations for matrix elements):
#
# $$
# \begin{bmatrix}
# 1/\sqrt{2}\\
# 0\\
# 0\\
# 1/\sqrt{2}
# \end{bmatrix} =
# \begin{bmatrix}
# 0.7071\\
# 0\\
# 0\\
# 0.7071
# \end{bmatrix}
# $$
#
# Let's analyze the probabilities of outcomes in case of different bits received by players.
#
# #### Case 1: Alice holds bit 0
#
# In this case Alice simply measures in the Z basis as above.
#
# * When Bob's bit is 0, he rotates his qubit clockwise by $\pi/8$, which corresponds to the operator
#
# $$
# \begin{bmatrix}
# 0.9239 & 0.3827 & 0 & 0\\
# -0.3827 & 0.9239 & 0 & 0\\
# 0 & 0 & 0.9239 & 0.3827\\
# 0 & 0 & -0.3827 & 0.9239
# \end{bmatrix}
# $$
#
# This performs the $R_y$ rotation by $\pi/8$ radians clockwise on Bob's qubit while leaving Alice's qubit unchanged.
#
# * If Bob's bit were 1, he would rotate his qubit counterclockwise by $\pi/8$, applying a very similar operator
#
# $$
# \begin{bmatrix}
# 0.9239 & -0.3827 & 0 & 0\\
# 0.3827 & 0.9239 & 0 & 0\\
# 0 & 0 & 0.9239 & -0.3827\\
# 0 & 0 & 0.3827 & 0.9239
# \end{bmatrix}
# $$
#
# Therefore, when Alice has bit 0, the application of the rotation operator to the Bell state gives
#
# $$
# \begin{bmatrix}
# 0.6533 \\
# -0.2706 \\
# 0.2706 \\
# 0.6533
# \end{bmatrix} \text{ or }
# \begin{bmatrix}
# 0.6533\\
# 0.2706\\
# -0.2706\\
# 0.6533
# \end{bmatrix}
# $$
#
# depending on whether Bob holds 0 (left-hand case) or 1 (right-hand case).
#
# The result of AND on their input bits will always be 0; thus they win when their outputs agree. These two cases correspond to the top and bottom elements of the vectors above, with a combined probability of $(0.6533)^2 + (0.6533)^2 = 0.4268 + 0.4268 = 0.8536$, so they have an 85.36% win chance.
#
# #### Case 2: Alice holds bit 1
#
# When Alice holds bit 1, she measures in the X basis (or, equivalently, Hadamard-transforms her qubit, leaving Bob's untouched, before making her Z-basis measurement). This corresponds to applying the operator
#
# $$
# \begin{bmatrix}
# 0.7071 & 0 & 0.7071 & 0\\
# 0 & 0.7071 & 0 & 0.7071\\
# 0.7071 & 0 & -0.7071 & 0\\
# 0 & 0.7071 & 0 & -0.7071
# \end{bmatrix}
# $$
#
# to the Bell state, resulting in a vector of:
#
# $$
# \begin{bmatrix}
# 0.5\\
# 0.5\\
# 0.5\\
# -0.5
# \end{bmatrix}
# $$
#
# Now, one of the two rotation operators is applied depending on what bit Bob holds, transforming this vector into:
#
# $$
# \begin{bmatrix}
# 0.6533 \\
# 0.2706 \\
# 0.2706 \\
# -0.6533
# \end{bmatrix} \text{ or }
# \begin{bmatrix}
# 0.2706\\
# 0.6533\\
# 0.6533\\
# -0.2706
# \end{bmatrix}
# $$
#
# When Bob holds 0, they still want to return the same parity, which they again do with 85.36% probability (left-hand vector above).
# But when Bob holds 1, the AND condition is now true and the players want to answer in opposite parity. This corresponds to the second and third elements of the right-hand vector above.
# Thanks to the "magic" of the combination of the counterclockwise rotation and Hadamard transform, they now do this with probability $(0.6533)^2 + (0.6533)^2 = 0.8536$, so their win probability is once again 85.36%.
#
# #### Side notes
#
# * If Bob never rotated his qubit, their entangled state would remain the Bell state if Alice held bit 0 and the state corresponding to $\frac12 \big(|00\rangle + |01\rangle + |10\rangle - |11\rangle\big)$ if Alice held bit 1.
# While they would then win with certainty whenever Alice held bit 0, they would have only a 50% chance of success when she held bit 1, and thus their overall win chance would revert to the 75% of the classical strategy.
#
# * It can be proven that no quantum strategy lets Alice and Bob surpass an overall win probability of $\frac{2 + \sqrt{2}}{4} \approx 85.36\%$ in the CHSH game. The proof requires a higher-level discussion of quantum observables; see, for instance, [Tsirelson's bound](https://en.wikipedia.org/wiki/Tsirelson's_bound).
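# The 85.36% figure derived above can be reproduced end-to-end with a short NumPy simulation of the measurement statistics (an independent numerical check, not part of the kata):

```python
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I2 = np.eye(2)
bell = np.array([1.0, 0, 0, 1.0]) / np.sqrt(2)

total = 0.0
for x in (0, 1):
    for y in (0, 1):
        # Alice: X-basis measurement = Hadamard, then Z-basis measurement
        alice = H if x else I2
        # Bob: rotate by pi/8 clockwise (bit 0) or counterclockwise (bit 1);
        # Ry takes the doubled angle pi/4
        bob = ry(-np.pi / 4) if y == 0 else ry(np.pi / 4)
        probs = np.abs(np.kron(alice, bob) @ bell) ** 2  # outcomes |ab> = 00, 01, 10, 11
        if x and y:
            total += probs[1] + probs[2]  # AND is true: win when a != b
        else:
            total += probs[0] + probs[3]  # AND is false: win when a == b
print(round(total / 4, 4))  # → 0.8536
```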
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# # 1.0 Introduction
# We analyze a collection of 116,909 Colombia event mentions between 09/11/2016 and 06/17/2017 collected from GDELT's Mentions (<em>[gdelt-bq:gdeltv2.eventmentions]</em>) table. The Mentions table records each mention of an event in the Events table, making it possible to track the trajectory and network structure of a story as it flows through the global media system. Each mention of an event receives its own entry in the Mentions table, so an event that is mentioned in 100 articles will be listed 100 times. <em>If a news report mentions multiple events, each mention is recorded separately in this table.</em> As each event mention is recorded over time, along with the timestamp at which the article was published, users can track the progression of an event through the global media, identifying outlets that tend to break certain kinds of events the earliest or that break stories later but report on them more accurately. Combined with the 15-minute update resolution and GCAM, the Mentions table also allows the emotional reaction and resonance of an event to be assessed as it sweeps through the world's media.
#
# We begin by identifying the most prominent media type and languages for all event mentions; then we compute the event tone and confidence distribution for all event mentions in the data set. We then identify the dominant sources via the top 50 event mention frequencies, filter the sources relevant to the Peace Accords Matrix (PAM) implementation monitoring and verification framework, and compare their language composition as well as their tone and confidence distributions.
# Import useful libraries
import re
import operator
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.cm as cm
import matplotlib.pyplot as plt
from math import isnan
from collections import Counter
from collections import OrderedDict
from sklearn.neighbors import KernelDensity  # the sklearn.neighbors.kde module path was removed in scikit-learn 0.24
from NewspaperLanguages import translations
# Declare global options
# %matplotlib inline
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None removes the column width limit
plt.style.use('seaborn-whitegrid')
# Declare global variables
all_mentions = pd.read_csv('C:/Users/Administrator/Dropbox/GDELT/all_mentions.csv', encoding='latin-1').sort_values('EventTimeDate', ascending=1)
all_mentions.columns
# # 2.0 Media Analysis
#
# As an event is mentioned across multiple news reports, each of those mentions is recorded in the Mentions table, along with several key indicators about that mention, including the media type mentioning the event, the average "tone" of the news report mentioning the event, and the "confidence" of GDELT's algorithms in their identification of the event
# from that specific news report. The following section identifies the most prominent media type (WEB) and languages (English and Spanish) as well as the average tone and confidence distributions for the collection of source documents in the Mentions table.
# ## 2.1 Mention Types
#
# The MentionTypes field is a numerical identifier that refers to the source collection that the document originated from:
#
# 1: WEB - The document originates from the open web and the MentionIdentifier is a fully-qualified URL that can be used to access the document on the web
#
# 2: CITATIONONLY - The document originates from a broadcast, print, or other offline source in which only a textual citation is available for the document. In this case the MentionIdentifier contains the textual citation for the document
#
# 3: CORE - The document originates from the CORE archive and the MentionIdentifier contains its DOI, suitable for accessing the original document through the CORE website
#
# 4: DTIC - The document originates from the DTIC archive and the MentionIdentifier contains its DOI, suitable for accessing the original document through the DTIC website
#
# 5: JSTOR - The document originates from the JSTOR archive and the MentionIdentifier contains its DOI, suitable for accessing the original document through your JSTOR subscription if your institution subscribes to it
#
# 6: NONTEXTUALSOURCE - The document originates from a textual proxy (such as closed captioning) of a non-textual information source (such as a video) available via a URL, and the MentionIdentifier provides the URL of the non-textual original source. At present, this Collection Identifier is used for processing the closed captioning streams of the Internet Archive Television News Archive, in which each broadcast is available via a URL, but the URL offers access only to the video of the broadcast and does not provide any access to the textual closed captioning used to generate the metadata. This code is used to draw a distinction between URL-based textual material (Collection Identifier 1, WEB) and URL-based non-textual material like the Television News Archive.
# +
# Preprocessing: Mention Type Codes
MentionTypes = Counter(all_mentions.MentionType)
MentionTypeCodes = {1: "WEB",
2: "CITATIONONLY",
3: "CORE",
4: "DTIC",
5: "JSTOR",
6: "NONTEXTUALSOURCE"}
MentionTypes = OrderedDict(sorted(MentionTypes.items(), key=lambda x: x[1], reverse=True))
mention_type_labels = [MentionTypeCodes[key] for key in list(MentionTypes.keys())]
mention_type_sizes = list(MentionTypes.values())
print(MentionTypes)
# -
fig, ax = plt.subplots(figsize=(8, 8))
patches, texts = plt.pie(mention_type_sizes, colors=cm.Set3(np.linspace(0, 1, len(mention_type_labels))), startangle=90)
plt.legend(patches, mention_type_labels, loc="right", fontsize=15)
plt.axis('equal')
plt.tight_layout()
plt.show()
# ## 2.2 Media Languages
#
# GDELT provides realtime translation of the world's news in 65 languages. The Mentions Document Translation Information (MentionDocTranslationInfo) field records the provenance information for machine-translated documents, indicating the original source language and the citation of the translation system used to translate the document for processing. The field will be null for documents originally in English. The field will also be null for human-translated documents provided to GDELT in English, such as BBC Monitoring materials. In the future, this field may be expanded to include information on human-translated pipelines. To analyse the language composition of the mentions documents collection, we begin by preprocessing the MentionDocTranslationInfo values into normal language names, and append the clean language names to the DataFrame for further analysis.
# +
# Preprocessing: Mention Document Languages
languages = []
for lang in all_mentions.MentionDocTranslationInfo:
    try:
        languages.append(translations[lang][1])
    except KeyError:
        # The field is null (NaN) for documents originally published in English
        languages.append('English')
all_mentions['Language'] = languages
languages = Counter(languages)
print(languages)
languages = OrderedDict(sorted(languages.items(), key=lambda x: x[1], reverse=True))
language_labels = [key for key in list(languages.keys())]
language_sizes = list(languages.values())
# -
fig, ax = plt.subplots(figsize=(8, 8))
patches, texts = plt.pie(language_sizes, colors=cm.Set3(np.linspace(0, 1, len(language_sizes))), startangle=90)
plt.legend(patches, language_labels, loc="right", fontsize=15)
plt.axis('equal')
plt.tight_layout()
plt.show()
# ## 2.3 Tone Distribution
#
# In this section, we analyze the average tone distribution for all source documents in the Mentions table. Although the score ranges from -100 (extremely negative) to +100 (extremely positive), common values range between -10 and +10, with 0 indicating a neutral tone. The MentionDocTone field can be used to filter event "contexts" as an implicit measure of the importance of an event or as a proxy indicator for the impact of an event. For example, a riot event with a slightly negative average tone is more likely to be a minor occurrence compared to one with an extremely negative average tone. However, it is important to note that the MentionDocTone indicator only provides a basic tonal assessment of an article. Users interested in emotional measures are advised to use the Mentions and Global Knowledge Graph (GKG) tables to merge the complete 2,300 emotions and themes from the GKG Global Content Analysis Measures (GCAM) system into their analysis.
# Preprocessing: Mention Document Tone
tone = np.array(list(all_mentions.MentionDocTone))
print(all_mentions.MentionDocTone.describe())
fig, ax = plt.subplots(figsize=(18, 8))
ax.hist(tone, bins=500, range=(-20,30), histtype='bar', align='mid', orientation='vertical')
ax.set_title('', fontsize=15, fontweight='bold')
ax.set_ylabel('Frequency' , fontsize=15)
ax.set_xlabel('Tone', fontsize=15)
ax.tick_params(axis='x', labelsize=15)
ax.tick_params(axis='y', labelsize=15)
ax.set_xticks(np.arange(-20, 50, 10))
plt.xlim([-20,20])
plt.show()
# ## 2.4 Daily Tone Timeseries
#
# In addition to the global tone distribution, we analysed the daily average tone for all mentions. Due to daily fluctuations in mention documents tone, we also computed the 7-day simple moving average:
#
# $$SMA = \frac{a_{m} + a_{m-1} + ... + a_{m-(n-1)}}{n}$$
# +
all_mentions['MentionDates'] = [str(date)[:8] for date in all_mentions['MentionTimeDate']]
dates = sorted([key for key in Counter(all_mentions['MentionDates']).keys()])
daily_tone = [np.mean(all_mentions.loc[all_mentions['MentionDates'] == date, 'MentionDocTone']) for date in dates]
def movingAverage(values, window):
    weights = np.repeat(1.0, window) / window
    sma = np.convolve(values, weights, 'valid')
    return sma
daily_tone_ma = movingAverage(daily_tone, 7)
fig, ax = plt.subplots(figsize=(18, 8))
ax.set_title('', fontsize=15, fontweight='bold')
ax.tick_params(axis='x', labelsize=15)
ax.tick_params(axis='y', labelsize=10)
ax.set_xlabel('', fontsize=15)
ax.set_ylabel('AvgTone' , fontsize=15)
ax.set_xticks(np.arange(0, len(dates), 30))
ax.set_xticklabels(dates[::30], rotation=270)
plt.plot(np.arange(len(dates)), daily_tone, label='AvgTone')
plt.plot(np.arange(len(dates))[len(dates)-len(daily_tone_ma):], daily_tone_ma, label='SMA(7)', c='r')
plt.xlim([0,len(dates)])
plt.legend(loc='best', fontsize=15)
plt.show()
# -
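# As a quick sanity check of the `movingAverage` helper, here is a toy example with a window of 3 (unrelated to the GDELT data; the helper is repeated so the snippet is self-contained):

```python
import numpy as np

def movingAverage(values, window):
    # Same helper as above: uniform weights, 'valid' mode drops the warm-up edge
    weights = np.repeat(1.0, window) / window
    return np.convolve(values, weights, 'valid')

print(movingAverage([1, 2, 3, 4, 5], 3))  # [2. 3. 4.]
```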
# ## 2.5 Confidence Distribution
#
# The Confidence indicator measures the percent confidence in the extraction of an event from an article for each mention (see the discussion in the codebook at http://data.gdeltproject.org/documentation/GDELT-Event_Codebook-V2.0.pdf). The Confidence measure is a new feature in GDELT 2.0 that makes it possible to adjust the sensitivity of GDELT towards specific use cases. Those wishing to find the earliest glimmers of breaking events, or reports of very small-bore events that tend to appear only as part of periodic "round up" reports, can use the entire event stream, while those wishing to find only the largest events with strongly detailed descriptions can filter the event stream for the events with the highest Confidence measures. The Confidence measure also makes it possible to identify the "best" news report to return for a given event (filtering all mentions of an event for those with the highest Confidence scores, the most prominent positioning within the article, and/or a specific source language, such as Arabic versus English coverage of a protest).
fig, ax = plt.subplots(figsize=(18, 8))
values = list(all_mentions.Confidence)
plt.hist(values, bins=10, range=(0,100), histtype='bar', align='mid', orientation='vertical')
ax.set_title('', fontsize=15, fontweight='bold')
ax.set_ylabel('Mentions' , fontsize=15)
ax.set_xlabel('Confidence', fontsize=15)
ax.tick_params(axis='x', labelsize=15)
ax.tick_params(axis='y', labelsize=15)
ax.set_xticks(np.arange(0, 100, 10))
plt.xlim([0,100])
plt.show()
# ## 2.6 Dominant Sources
#
# We use the Mention Source Name (MentionSourceName) field to identify the most dominant sources, reporting the most event mentions for the duration of our study. The MentionSourceName field is a human-friendly identifier of the source of the document. For web documents with a URL (which comprise the majority of documents in our study), this field will contain the web page's top-level domain. BBC Monitoring materials will contain "BBC Monitoring" and JSTOR materials will contain "JSTOR". In the following section, we identify the most dominant sources - the top 50 sources ordered by event mentions frequency.
# +
# Pre-processing: Mention Source Names
clean_names = []
for sourcename in all_mentions.MentionSourceName:
    try:
        # Keep only the part of the domain before the first dot
        clean_names.append(re.search(r'(.*?)\.', sourcename).group(1))
    except (AttributeError, TypeError):
        # No dot found (search returned None) or the value is NaN
        clean_names.append('nan')
all_mentions['SourceName'] = clean_names
MentionSourceNames = Counter(clean_names).most_common(50)
mention_source_name_labels = [MentionSourceNames[i][0] for i in range(len(MentionSourceNames))]
mention_source_name_values = [MentionSourceNames[i][1] for i in range(len(MentionSourceNames))]
print(MentionSourceNames)
# -
fig, ax = plt.subplots(figsize=(18, 8))
ax.set_title('', fontsize=15, fontweight='bold')
ax.set_ylabel('Mentions' , fontsize=15)
ax.set_xlabel('', fontsize=15)
ax.tick_params(axis='x', labelsize=15)
ax.tick_params(axis='y', labelsize=15)
plt.bar(np.arange(len(mention_source_name_values)), mention_source_name_values, align='center')
ax.set_xticks(np.arange(0, len(mention_source_name_labels), 1))
ax.set_xticklabels(mention_source_name_labels, rotation=270)
plt.xlim([-1,50])
plt.show()
# # 3.0 Source Filtering
#
# From the most dominant sources identified above, we performed further source filtering to identify sources currently used in the Peace Accords Matrix's (PAM) implementation monitoring and verification framework. The top 10 filtered sources are listed below:
# +
# Pre-processing: Filtered Sources
filtered_names = ['ap', 'caracol', 'colombiareports', 'elcomercio', 'elespectador', 'elpais', 'eltiempo', 'eluniversal', 'noticias', 'vanguardia']
filtered_labels = ['Associated Press', 'Caracol', 'Colombia Reports', 'El Comercio', 'El Espectador', 'El Pais', 'El Tiempo', 'El Universal', 'Noticias', 'Vanguardia']
filtered_frequencies = []
for name in filtered_names:
    filtered_frequencies.append(len(all_mentions.loc[all_mentions['SourceName'] == name]))
print(OrderedDict(zip(filtered_labels, filtered_frequencies)))
# -
fig, ax = plt.subplots(figsize=(18, 8))
ax.set_title('', fontsize=15, fontweight='bold')
ax.set_ylabel('Events Mentioned' , fontsize=15)
ax.set_xlabel('', fontsize=15)
ax.tick_params(axis='x', labelsize=15)
ax.tick_params(axis='y', labelsize=15)
plt.bar(np.arange(len(filtered_labels)) , filtered_frequencies, align='center')
ax.set_xticks(np.arange(0, len(filtered_labels), 1))
ax.set_xticklabels(filtered_labels, rotation=270)
plt.xlim([-1,10])
plt.show()
# ## 3.1 Languages Comparison
#
# We observe that most documents from the dominant sources in our data set, with the exception of the Associated Press and Colombia Reports, are originally published in Spanish. Although GDELT Translingual claims to be the largest realtime streaming news machine translation deployment in the world, providing realtime media translation into English for processing through the entire GDELT Event and GKG/GCAM pipelines, in our study we retrieve all mention source documents in the language in which they were originally published, using the Python Newspaper library. Below is the language composition of the top 10 filtered sources identified above.
for key, value in OrderedDict(zip(filtered_labels, filtered_names)).items():
    print(key + ': ', Counter(all_mentions.loc[all_mentions['SourceName'] == value].Language))
# ## 3.2 Tone Comparison
#
# When trying to understand the particularities and commonalities of various news media, one finds that journalism is rife with varying reporting styles, ambiguities, assumed background knowledge, and complex linguistic structures. Using the MentionDocTone field from the Mentions table, we compare each news source's average tone distribution. We observe that Caracol <em>(mean=0.0, upper=2.8, lower=-2.4, std=3.9)</em>, El Tiempo <em>(-0.2, 1.7, -2.1, 2.9)</em>, and Vanguardia <em>(-0.4, 1.5, -2.6, 3.3)</em> offer balanced tones, with El Tiempo and Vanguardia providing more moderate tone ranges compared to Caracol covering both extremes. Associated Press <em>(-2.6, -1.3, -4.4, 3.0)</em> and Colombia Reports <em>(-4.0, -1.3, -6.4, 3.8)</em> tend to provide moderately negative reporting while the remaining sources mostly provide slightly positive or negative reporting tone.
# +
#all_mentions.loc[all_mentions['SourceName'] == 'colombiareports']['MentionDocTone'].describe()
# -
def compare_sources(compare_by):
    # Collect the requested column for each filtered source,
    # in the same order as filtered_names / filtered_labels
    return [list(all_mentions.loc[all_mentions['SourceName'] == name][compare_by])
            for name in filtered_names]
fig, ax = plt.subplots(figsize=(18, 8), sharex=True)
ax.set_title('', fontsize=15, fontweight='bold')
ax.xaxis.grid(True, linestyle='-', which='major', color='lightgrey', alpha=0.5)
ax.set_ylabel('Tone' , fontsize=15)
ax.tick_params(axis='x', labelsize=15)
ax.tick_params(axis='y', labelsize=15)
myList = compare_sources('MentionDocTone')
for i, l in enumerate(myList):
    ax.boxplot(l, vert=True, showfliers=True, showmeans=True, positions=[i])
ax.set_xticks(range(len(myList)))
ax.set_xticklabels(filtered_labels, rotation=270)
ax.set_xlim(-0.5, len(myList)-0.5)
plt.show()
# ## 3.3 Confidence Comparison
#
# GDELT uses Natural Language Processing (NLP) algorithms such as coreference resolution and deep parsing using whole-of-document context to understand and extract ambiguous and linguistically complex events. Such extractions come with a high potential for error. In this section, we compare the confidence with which GDELT extracts an event from an article for each mention. On average, we observe high median (>80%) and upper-quartile bounds (100%) for the El Comercio <em>(mean=71, std=32)</em>, El Pais <em>(67, 34)</em>, Noticias <em>(66, 32)</em>, and Vanguardia <em>(64, 36)</em> sources. Although Caracol <em>(58, 33)</em>, El Espectador <em>(55, 33)</em>, El Tiempo <em>(56, 32)</em> and El Universal <em>(56, 33)</em> have high upper-quartile bounds (100%), they have low median confidence bounds (40%). We observe low upper-quartile (50%) and median (20%) confidence scores for the Associated Press <em>(38, 28)</em>.
# +
#all_mentions.loc[all_mentions['SourceName'] == 'ap']['Confidence'].describe()
# -
fig, ax = plt.subplots(figsize=(18, 8), sharex=True)
ax.set_title('', fontsize=15, fontweight='bold')
ax.xaxis.grid(True, linestyle='-', which='major', color='lightgrey', alpha=0.5)
ax.set_ylabel('Confidence' , fontsize=15)
ax.tick_params(axis='x', labelsize=15)
ax.tick_params(axis='y', labelsize=15)
myList = compare_sources('Confidence')
for i, l in enumerate(myList):
    ax.boxplot(l, vert=True, showfliers=True, showmeans=True, positions=[i])
ax.set_xticks(range(len(myList)))
ax.set_xticklabels(filtered_labels, rotation=270)
ax.set_xlim(-0.5, len(myList)-0.5)
ax.set_ylim(0, 110)
plt.show()
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
#     language: python
#     name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ralsouza/python_fundamentos/blob/master/src/02_loops_condicionais_metodos_funcoes/12_calculadora.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ZgOD3Bh6k5BW" colab_type="text"
# # Building a Calculator
# + [markdown] id="lgTW6JUF1DUj" colab_type="text"
# ## Version 1
# + id="0y4HWEHnkLTV" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 276} outputId="f7d7d1f1-efc0-4381-8220-1fd5797f7da6"
print( 15 * '*', 'Calculadora em Python', 15 * '*' )
print( '' )
print( 'Selecione o número da operação desejada:' )
print( '' )
print( '1 - Soma' )
print( '2 - Subtração' )
print( '3 - Multiplicação' )
print( '4 - Divisão' )
print( '' )
print( 53 * '*' )
print( '' )
option = int( input( 'Digite sua opção (1/2/3/4): ' ) )
if ( option == 1 ):
numSoma1 = int( input( 'Digite o primeiro número para a soma: ' ) )
numSoma2 = int( input( 'Digite o segundo número para a soma: ' ) )
print( '{} + {} = {}'.format( numSoma1, numSoma2, numSoma1 + numSoma2 ) )
elif ( option == 2 ):
numSubt1 = int( input( 'Digite o primeiro número para a subtração: ' ) )
numSubt2 = int( input( 'Digite o segundo número para a subtração: ' ) )
print( '{} - {} = {}'.format( numSubt1, numSubt2, numSubt1 - numSubt2 ) )
elif ( option == 3 ):
numMult1 = int( input( 'Digite o primeiro número para a multiplicação: ' ) )
numMult2 = int( input( 'Digite o segundo número para a multiplicação: ' ) )
print( '{} × {} = {}'.format( numMult1, numMult2, numMult1 * numMult2 ) )
elif ( option == 4 ):
numDiv1 = int( input( 'Digite o primeiro número para a divisão: ' ) )
numDiv2 = int( input( 'Digite o segundo número para a divisão: ' ) )
print( '{} ÷ {} = {}'.format( numDiv1, numDiv2, numDiv1 / numDiv2 ) )
# + [markdown] id="ys3M5l7R1M-8" colab_type="text"
# ## Version 2 (with steroids)
# + id="Y51tHz831O7B" colab_type="code" colab={}
def menu():
''' Main program menu '''
print( '' )
print( 15 * '*', 'Calculadora em Python', 15 * '*' )
print( '' )
print( 'Selecione o número da operação desejada:' )
print( '' )
print( '1 - Soma' )
print( '2 - Subtração' )
print( '3 - Multiplicação' )
print( '4 - Divisão' )
print( '5 - Sair' )
print( '' )
print( 53 * '*' )
print( '' )
# Control flag used to exit the program
terminarPrograma = False
# Main operation loop of the program
while ( terminarPrograma != True ):
menu()
option = int( input( 'Digite sua opção (1/2/3/4/5): ' ) )
if ( option == 1 ):
numSoma1 = int( input( 'Digite o primeiro número para a soma: ' ) )
numSoma2 = int( input( 'Digite o segundo número para a soma: ' ) )
print( '{} + {} = {}'.format( numSoma1, numSoma2, numSoma1 + numSoma2 ) )
elif ( option == 2 ):
numSubt1 = int( input( 'Digite o primeiro número para a subtração: ' ) )
numSubt2 = int( input( 'Digite o segundo número para a subtração: ' ) )
print( '{} - {} = {}'.format( numSubt1, numSubt2, numSubt1 - numSubt2 ) )
elif ( option == 3 ):
numMult1 = int( input( 'Digite o primeiro número para a multiplicação: ' ) )
numMult2 = int( input( 'Digite o segundo número para a multiplicação: ' ) )
print( '{} × {} = {}'.format( numMult1, numMult2, numMult1 * numMult2 ) )
elif ( option == 4 ):
numDiv1 = int( input( 'Digite o primeiro número para a divisão: ' ) )
numDiv2 = int( input( 'Digite o segundo número para a divisão: ' ) )
print( '{} ÷ {} = {}'.format( numDiv1, numDiv2, numDiv1 / numDiv2 ) )
elif ( option == 5 ):
print( 'Programa finalizado!' )
terminarPrograma = True
else:
print ( 'Opção inválida!' )
| src/02_loops_condicionais_metodos_funcoes/12_calculadora.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
# # Preparation
data = []
with open ("../kaggle-competition/train.txt", "r") as ftxt, \
open ("../kaggle-competition/train_label.txt", "r") as flabel :
for t, l in zip(ftxt, flabel):
t = t.strip()
l = l.strip()
data.append({
"txt": t,
"label": l
})
df_train = pd.DataFrame(data)
def sampling_class(df, cls_name, n=20, seed=71):
df_class = df_train[df_train.label==cls_name]
total = len(df_class)
np.random.seed(seed)
indices = np.random.permutation(total)[:n]
return df_class.iloc[indices].txt.values
# +
seeds = dict(zip(['neg', 'neu', 'pos', 'q'], range(4)))
n = 40
sampling_data = []
for k, v in seeds.items():
txts = sampling_class(df_train, k, n=n, seed=v)
for t in txts:
t = t.replace("|", "\|")
sampling_data.append(dict(
label=k,
raw=t,
tokenised=t
))
pd.DataFrame(sampling_data).to_csv("./sampling-%d.csv" % n, header=True, index=False)
# note = """
# ################
# ### label: %s
# ### raw: %s
# ################
# """ % (k, t)
# f.write("\n%s\n" % note.strip())
# f.write("::> %s\n" % t)
# # ab = sampling_class(df_train, "neu")
# -
set(df_train.label.values)
# # Postprocessing
#
# Google Spreadsheet: https://docs.google.com/spreadsheets/d/1F_qT33T2iy0tKbflnVC8Ma-EoWEHimV3NmNRgLjN00o/edit#gid=1302375309
filepath = "https://docs.google.com/spreadsheets/d/e/2PACX-1vRm-f8qstNhxICHzEfhbCacJNQSAZptP-6ockKwsxyck5vtl7e1-A2726Qj2hgp4Oht7WfcbdivQNPT/pub?gid=1302375309&single=true&output=csv"
df = pd.read_csv(filepath)
print("we have %d samples" % len(df))
keep_mask = df.label.apply(lambda x: len(x.split("-")) == 1)
df_filtered = df[keep_mask]
print("we have %d samples after filtering" % len(df_filtered))
filename = "wisesight-%d-samples-tokenised.txt" % len(df_filtered)
with open(filename, "w") as ft, open(filename.replace(".txt", ".label"), "w") as fl:
for l in df_filtered.tokenised.values:
l = l.strip()
ft.write("%s\n" % l.replace("|", ""))
fl.write("%s\n" % l)
| word-tokenization/data-preparation-and-post-processing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import torch
a = torch.FloatTensor(5, 7)
a = torch.randn(5, 7)
a.size()
a.fill_(3.5)
b = a.add_(4.0)
print(a, b)
b = a[0, 3]
b
b = a[:, 3:5]
b
x = torch.ones(5, 5)
x
z = torch.Tensor(5, 2)
z[:, 0] = 10
z[:, 1] = 100
z
x.index_add_(1, torch.LongTensor([4, 0]), z)
x
test = x.view(x.size(0), -1)
test
test = x.view(x.size(1), -1)
test
# torch.cat joins tensors along a dimension (here dim 1, column-wise);
# the RNN further down uses it to join the input batch with the hidden state
torch.cat((x, x), 1)
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
# +
class RNN(nn.Module):
# you can also accept arguments in your model constructor
def __init__(self, data_size, hidden_size, output_size):
super(RNN, self).__init__()
self.hidden_size = hidden_size
input_size = data_size + hidden_size
self.i2h = nn.Linear(input_size, hidden_size)
self.h2o = nn.Linear(hidden_size, output_size)
def forward(self, data, last_hidden):
input = torch.cat((data, last_hidden), 1)
hidden = self.i2h(input)
output = self.h2o(hidden)
return hidden, output
rnn = RNN(50, 20, 10)
# +
loss_fn = nn.MSELoss()
batch_size = 10
TIMESTEPS = 5
# -
batch = Variable(torch.randn(batch_size, 50))
hidden = Variable(torch.zeros(batch_size, 20))
target = Variable(torch.zeros(batch_size, 10))
print(batch.size())
# +
loss_fn = nn.MSELoss()
batch_size = 10
TIMESTEPS = 5
# Create some fake data
batch = Variable(torch.randn(batch_size, 50))
hidden = Variable(torch.zeros(batch_size, 20))
target = Variable(torch.zeros(batch_size, 10))
loss = 0
for t in range(TIMESTEPS):
# yes! you can reuse the same network several times,
# sum up the losses, and call backward!
hidden, output = rnn(batch, hidden)
loss += loss_fn(output, target)
loss.backward()
# -
| notebooks/kaggle/fashion-mnist/pytorch-4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Note: this code was originally written by Prof. <NAME> in MATLAB
from scipy.stats import norm
import matplotlib.pyplot as plt
from matplotlib.path import Path
import matplotlib.patches as patches
import numpy as np
from gmpe_bjf97 import gmpe_bjf97
from gmpe_prob_bjf97 import gmpe_prob_bjf97
from scipy.interpolate import interp1d
# %matplotlib inline
# +
x = np.logspace(-3, np.log10(2), num=100) # Considered IM values
T = 1 # 0.001 is the PGA case in the gmpe function
IM_label = 'SA(1 s)'
# seismicity parameters
Fault_Type = 1 # 1 is strike slip
Vs30 = 500
# +
##############################
### Single Rupture Example ###
##############################
lambda_A = 1/100
M_A = 6.5
R_A = 10
# compute rates (and intermediate results) for specific IM levels
[medianIM, sigmaIM] = gmpe_bjf97(M_A, R_A, T, Fault_Type, Vs30)
imLevel = [0.2, 0.5]
imProbabilitiesA = 1 - norm.cdf(np.log(imLevel),np.log(medianIM),sigmaIM)
imRateA = lambda_A * imProbabilitiesA # get rates for two example cases
# compute rates for a range of IM levels
p_A = gmpe_prob_bjf97(x, M_A, R_A, T, Fault_Type, Vs30)
lambda_IM_A = lambda_A * p_A # IM rates from rup_1
# Plot Fig 6.4
plt.figure(1)
fig, ax = plt.subplots(figsize=(8, 6.5))
ax.loglog(x, lambda_IM_A, linestyle='-', linewidth=2, color=[0.4, 0.4, 0.4])
ax.scatter(imLevel, imRateA, facecolors='none', edgecolor=[0.4, 0.4, 0.4])
ax.set_xlabel('Spectral Acceleration, '+IM_label+' [g]', fontsize = 12)
ax.set_ylabel(r'Annual rate of exceedance, $\lambda$', fontsize = 12)
ax.set_ylim(10**(-5), 10**(-1))
ax.set_xlim(10**(-1.3), 10**(0.1))
text1 = r'$\lambda$(' + IM_label + ' > ' + str(imLevel[0]) + ' g) = ' + format(imRateA[0], ".5f")
text2 = r'$\lambda$(' + IM_label + ' > ' + str(imLevel[1]) + ' g) = \n' + format(imRateA[1], ".6f")
ax.text(imLevel[0]*1.05, imRateA[0]*1.2, text1, fontsize=10)
ax.text(imLevel[1]*1.05, imRateA[1]*1.2, text2, fontsize=10)
# +
###########################
### Two Rupture Example ###
###########################
# Define second fault
lambda_B = 1/500
M_B = 7.5
R_B = 10
# Compute rates (and intermediate results) for specific IM levels
medianIM, sigmaIM = gmpe_bjf97(M_B, R_B, T, Fault_Type, Vs30)
imProbabilitiesB = 1 - norm.cdf(np.log(imLevel),np.log(medianIM),sigmaIM)
imRateB = lambda_B * imProbabilitiesB # get rates for two example cases
imRateTot = imRateA + imRateB
# Compute rates for a range of IM levels
p_B = gmpe_prob_bjf97(x, M_B, R_B, T, Fault_Type, Vs30)
lambda_IM_B = lambda_B * p_B # IM rates from rup_2
lambda_IM_Tot = lambda_IM_A + lambda_IM_B
# Plot Fig 6.5
plt.figure(2)
fig, ax = plt.subplots(figsize=(8, 6.5))
ax.loglog(x, lambda_IM_Tot, 'k-', linewidth=2, label='Total hazard')
ax.loglog(x, lambda_IM_A, linestyle='-', linewidth=2, color=[0.4, 0.4, 0.4], label='rup_1')
ax.loglog(x, lambda_IM_B, linestyle='-', linewidth=2, color=[0.7, 0.7, 0.7], label='rup_2')
ax.scatter(imLevel, imRateTot, facecolors='none', edgecolor='k')
ax.scatter(imLevel, imRateA, facecolors='none', edgecolor=[0.4, 0.4, 0.4])
ax.scatter(imLevel, imRateB, facecolors='none', edgecolor=[0.7, 0.7, 0.7])
ax.set_xlabel('Spectral Acceleration, '+IM_label+' [g]', fontsize = 12)
ax.set_ylabel(r'Annual rate of exceedance, $\lambda$', fontsize = 12)
ax.set_ylim(10**(-5), 10**(-1))
ax.set_xlim(10**(-1.3), 10**(0.3))
text1 = r'$\lambda$(' + IM_label + ' > ' + str(imLevel[0]) + ' g) = ' + format(imRateTot[0], ".5f")
text2 = r'$\lambda$(' + IM_label + ' > ' + str(imLevel[1]) + ' g) = ' + format(imRateTot[1], ".5f")
ax.text(imLevel[0]*1.1, imRateTot[0]*1.1, text1, fontsize=10)
ax.text(imLevel[1]*1.05, imRateTot[1]*1.2, text2, fontsize=10)
ax.legend(loc='upper right', fontsize=12)
| Chapter 6-7/psha_example_calcs_one_rup_two_rup.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# # Red Hat Insights Core
# Insights Core is a framework for collecting and processing data about systems. It allows users to write components that collect and transform sets of raw data into typed python objects, which can then be used in rules that encapsulate knowledge about them.
#
# To accomplish this the framework uses an internal dependency engine. Components in the form of class or function definitions declare dependencies on other components with decorators, and the resulting graphs can be executed once all components you care about have been loaded.
#
# This is an introduction to the dependency system followed by a summary of the standard components Insights Core provides.
# + [markdown] deletable=true editable=true
# ## Components
# To make a component, we first have to create a component type, which is a decorator we'll use to declare it.
# + deletable=true editable=true
import sys
sys.path.insert(0, "../..")
from insights.core import dr
# + deletable=true editable=true
# Here's our component type with the clever name "component."
# Insights Core provides several types that we'll come to later.
class component(dr.ComponentType):
pass
# + [markdown] deletable=true editable=true
# ### How do I use it?
# + deletable=true editable=true
import random
# Make two components with no dependencies
@component()
def rand():
return random.random()
@component()
def three():
return 3
# Make a component that depends on the other two. Notice that we depend on two
# things, and there are two arguments to the function.
@component(rand, three)
def mul_things(x, y):
return x * y
# + deletable=true editable=true
# Now that we have a few components defined, let's run them.
from pprint import pprint
# If you call run with no arguments, all components of every type (with a few caveats
# I'll address later) are run, and their values or exceptions are collected in an
# object called a broker. The broker is like a fancy dictionary that keeps up with
# the state of an evaluation.
broker = dr.run()
pprint(broker.instances)
# + [markdown] deletable=true editable=true
# ## Component Types
# We can define components of different types by creating different decorators.
# + deletable=true editable=true
class stage(dr.ComponentType):
pass
# + deletable=true editable=true
@stage(mul_things)
def spam(m):
return int(m)
# + deletable=true editable=true
broker = dr.run()
print "All Instances"
pprint(broker.instances)
print
print "Components"
pprint(broker.get_by_type(component))
print
print "Stages"
pprint(broker.get_by_type(stage))
# + [markdown] deletable=true editable=true
# ## Component Invocation
# You can customize how components of a given type get called by overriding the `invoke` method of your `ComponentType` class. For example, if you want your components to receive the broker itself instead of individual arguments, you can do the following.
# + deletable=true editable=true
class thing(dr.ComponentType):
def invoke(self, broker):
return self.component(broker)
@thing(rand, three)
def stuff(broker):
r = broker[rand]
t = broker[three]
return r + t
# + deletable=true editable=true
broker = dr.run()
print broker[stuff]
# + [markdown] deletable=true editable=true
# Notice that broker can be used as a dictionary to get the value of components that have already executed without directly looking at the `broker.instances` attribute.
# + [markdown] deletable=true editable=true
# ## Exception Handling
# When a component raises an exception, the exception is recorded in a dictionary whose key is the component and whose value is a list of exceptions. The traceback related to each exception is recorded in a dictionary of exceptions to tracebacks. We record exceptions in a list because some components may generate more than one value. We'll come to that later.
# + deletable=true editable=true
@stage()
def boom():
raise Exception("Boom!")
broker = dr.run()
e = broker.exceptions[boom][0]
t = broker.tracebacks[e]
pprint(e)
print
print t
# + [markdown] deletable=true editable=true
# ## Missing Dependencies
# A component with any missing required dependencies will not be called. Missing dependencies are recorded in the broker in a dictionary whose keys are components and whose values are tuples with two values. The first is a list of all missing **required** dependencies. The second is a list of all dependencies of which at least one was required.
# + deletable=true editable=true
@stage("where's my stuff at?")
def missing_stuff(s):
return s
broker = dr.run()
print broker.missing_requirements[missing_stuff]
# + deletable=true editable=true
@stage("a", "b", [rand, "d"], ["e", "f"])
def missing_more_stuff(a, b, c, d, e, f):
return a + b + c + d + e + f
broker = dr.run()
print broker.missing_requirements[missing_more_stuff]
# + [markdown] deletable=true editable=true
# Notice that the first elements in the dependency list after `@stage` are simply "a" and "b", but the next two elements are themselves lists. This means that at least one element of each list must be present. The first "any" list has [rand, "d"], and rand is available, so it resolves. However, neither "e" nor "f" are available, so the resolution fails. Our missing dependencies list includes the first two standalone elements as well as the second "any" list.
# + [markdown] deletable=true editable=true
# ## SkipComponent
# Components that raise `dr.SkipComponent` won't have any values or exceptions recorded and will be treated as missing dependencies for components that depend on them.
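# + [markdown] deletable=true editable=true
# The skip pattern can be illustrated without the framework. This is a pure-Python stand-in (not the insights API; `run_components`, `always`, and `not_today` are hypothetical names): a component that raises the skip exception ends up with no value and no recorded exception, just as a missing dependency would.

```python
class SkipComponent(Exception):
    """Raised by a component to opt out of producing a value."""

def run_components(components):
    # instances: values of components that ran; skipped: ones that opted out
    instances, skipped = {}, set()
    for name, fn in components.items():
        try:
            instances[name] = fn()
        except SkipComponent:
            skipped.add(name)
    return instances, skipped

def always():
    return 42

def not_today():
    raise SkipComponent()

instances, skipped = run_components({"always": always, "not_today": not_today})
print(instances)  # {'always': 42}
print(skipped)    # {'not_today'}
```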
# + [markdown] deletable=true editable=true
# ## Optional Dependencies
#
# There's an "optional" keyword that takes a list of components that should be run before the current one. If they throw exceptions or don't run for some other reason, execute the current component anyway and just say they were `None`.
# + deletable=true editable=true
@stage(rand, optional=['test'])
def is_greater_than_ten(r, t):
return (int(r*10.0) < 5.0, t)
broker = dr.run()
print broker[is_greater_than_ten]
# + [markdown] deletable=true editable=true
# ## Automatic Dependencies
# The definition of a component type may include `requires` and `optional` attributes. Their specifications are the same as the `requires` and `optional` portions of the component decorators. Any component decorated with a component type that has `requires` or `optional` in the class definition will automatically depend on the specified components, and any additional dependencies on the component itself will just be appended.
#
# **This functionality should almost never be used because it makes it impossible to tell that the component has implied dependencies.**
# + deletable=true editable=true
class mything(dr.ComponentType):
requires = [rand]
@mything()
def dothings(r):
return 4 * r
broker = dr.run(broker=broker)
pprint(broker[dothings])
pprint(dr.get_dependencies(dothings))
# + [markdown] deletable=true editable=true
# ## Metadata
# Component types and components can define metadata in their definitions. If a component's type defines metadata, that metadata is inherited by the component, although the component may override it.
# + deletable=true editable=true
class anotherthing(dr.ComponentType):
metadata={"a": 3}
@anotherthing(metadata={"b": 4, "c": 5})
def four():
return 4
dr.get_metadata(four)
# + [markdown] deletable=true editable=true
# ## Component Groups
# So far we haven't said how we might group components together outside of defining different component types. But sometimes we might want to specify certain components, even of different component types, to belong together and to only be executed when explicitly asked to do so.
#
# All of our components so far have implicitly belonged to the default group. However, component types and even individual components can be assigned to specific groups, which will run only when specified.
# + deletable=true editable=true
class grouped(dr.ComponentType):
group = "grouped"
@grouped()
def five():
return 5
b = dr.Broker()
dr.run(dr.COMPONENTS["grouped"], broker=b)
pprint(b.instances)
# + [markdown] deletable=true editable=true
# If a group isn't specified in the type definition or in the component decorator, the default group is assumed. Likewise, the default group is assumed when calling `run` if one isn't provided.
#
# It's also possible to override the group of an individual component by using the `group` keyword in its decorator.
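# + [markdown] deletable=true editable=true
# As a rough pure-Python sketch of the grouping idea (illustrative only; `component` and `COMPONENTS` here are stand-ins, not the insights API), a decorator can register each component under the default group unless an explicit `group` keyword overrides it:

```python
COMPONENTS = {}

def component(group="default"):
    # register the decorated function's name under the chosen group
    def wrapper(fn):
        COMPONENTS.setdefault(group, []).append(fn.__name__)
        return fn
    return wrapper

@component()
def in_default():
    return 1

@component(group="grouped")
def in_grouped():
    return 2

print(sorted(COMPONENTS))     # ['default', 'grouped']
print(COMPONENTS["grouped"])  # ['in_grouped']
```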
# + [markdown] deletable=true editable=true
# ## run_incremental
# Since hundreds or even thousands of dependencies can be defined, it's sometimes useful to separate them into graphs that don't share any components and execute those graphs one at a time. In addition to the `run` function, the `dr` module provides a `run_incremental` function that does exactly that. You can give it a starting broker (or none at all), and it will yield a new broker for each distinct graph among all the dependencies.
# -
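# + [markdown] deletable=true editable=true
# The core idea — splitting the dependencies into graphs that share no components — amounts to finding connected components. A minimal sketch, assuming a hypothetical `{node: set_of_dependencies}` graph shape rather than the real insights internals:

```python
def independent_subgraphs(graph):
    # build an undirected adjacency map from the dependency edges
    adj = {n: set() for n in graph}
    for n, deps in graph.items():
        for d in deps:
            adj[n].add(d)
            adj[d].add(n)
    seen = set()
    for start in graph:
        if start in seen:
            continue
        # depth-first walk to collect one connected component
        stack, comp = [start], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        yield comp

graph = {"a": set(), "b": {"a"}, "c": set()}
print(sorted(sorted(c) for c in independent_subgraphs(graph)))
# [['a', 'b'], ['c']]
```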
# ## run_all
# The `run_all` function is similar to `run_incremental` since it breaks a graph up into independently executable subgraphs before running them. However, it returns a list of the brokers instead of yielding one at a time. It also has a `pool` keyword argument that accepts a `concurrent.futures.ThreadPoolExecutor`, which it will use to run the independent subgraphs in parallel. This can provide a significant performance boost in some situations.
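# + [markdown] deletable=true editable=true
# A minimal sketch of the parallel idea behind the `pool` keyword (pure Python; `run_subgraph` and the list-of-pairs subgraph shape are illustrative assumptions, not the insights API): disjoint subgraphs are evaluated concurrently and the per-subgraph results are collected into a list.

```python
from concurrent.futures import ThreadPoolExecutor

def run_subgraph(components):
    # evaluate every component in one independent subgraph; the resulting
    # dict plays the role of that subgraph's broker
    return {name: fn() for name, fn in components}

subgraphs = [
    [("a", lambda: 1), ("b", lambda: 2)],
    [("c", lambda: 3)],
]

with ThreadPoolExecutor(max_workers=2) as pool:
    brokers = list(pool.map(run_subgraph, subgraphs))

print(brokers)  # [{'a': 1, 'b': 2}, {'c': 3}]
```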
# + [markdown] deletable=true editable=true
# ## Inspecting Components
# The `dr` module provides several functions for inspecting components. You can get their aliases, dependencies, dependents, groups, type, even their entire dependency trees.
# + deletable=true editable=true
from insights.core import dr
@stage()
def six():
return 6
@stage(six)
def times_two(x):
return x * 2
# If the component's full name was foo.bar.baz.six, this would print "baz"
print "\nModule (times_two):", dr.get_base_module_name(times_two)
print "\nComponent Type (times_two):", dr.get_component_type(times_two)
print "\nDependencies (times_two): "
pprint(dr.get_dependencies(times_two))
print "\nDependency Graph (stuff): "
pprint(dr.get_dependency_graph(stuff))
print "\nDependents (rand): "
pprint(dr.get_dependents(rand))
print "\nGroup (six):", dr.get_group(six)
print "\nMetadata (four): ",
pprint(dr.get_metadata(four))
# prints the full module name of the component
print "\nModule Name (times_two):", dr.get_module_name(times_two)
# prints the module name joined to the component name by a "."
print "\nName (times_two):", dr.get_name(times_two)
print "\nSimple Name (times_two):", dr.get_simple_name(times_two)
# + [markdown] deletable=true editable=true
# ## Loading Components
# If you have components defined in a package and the root of that path is in `sys.path`, you can load the package and all its subpackages and modules by calling `dr.load_components`. This way you don't have to load every component module individually.
#
# ```python
# # recursively load all packages and modules in path.to.package
# dr.load_components("path.to.package")
#
# # or load a single module
# dr.load_components("path.to.package.module")
# ```
# + [markdown] deletable=true editable=true
# Now that you know the basics of Insights Core dependency resolution, let's move on to the rest of Core that builds on it.
# + [markdown] deletable=true editable=true
# ## Standard Component Types
# The standard component types provided by Insights Core are `datasource`, `parser`, `combiner`, `rule`, `condition`, and `incident`. They're defined in `insights.core.plugins`.
#
# Some have specialized interfaces and executors that adapt the dependency specification parts described above to what developers using previous versions of Insights Core have come to expect.
#
# For more information on parser, combiner, and rule development, please see our [component developer tutorials](http://insights-core.readthedocs.io/en/latest/rule_tutorial_index.html).
# + [markdown] deletable=true editable=true
# ### Datasource
# A datasource used to be called a spec. Components of this type collect data and make it available to other components. Since we have several hundred predefined datasources that fall into just a handful of categories, we've streamlined the process of creating them.
#
# Datasources are defined either with the `@datasource` decorator or with helper functions from `insights.core.spec_factory`.
#
# The `spec_factory` module has a handful of functions for defining common datasource types.
# - simple_file
# - glob_file
# - simple_command
# - listdir
# - foreach_execute
# - foreach_collect
# - first_file
# - first_of
#
# All datasources defined with these helper functions depend on an `ExecutionContext` of some kind. Contexts let you activate different datasources for different environments. Most of them provide a root path for file collection and may perform some environment-specific setup for commands, even modifying the command strings if needed.
#
# For now, we'll use a `HostContext`. This tells datasources to collect files starting at the root of the file system and to execute commands exactly as they are defined. Other contexts are in `insights.core.contexts`.
#
# All file collection datasources depend on any context that provides a path to use as root unless a particular context is specified. In other words, some datasources will activate for multiple contexts unless told otherwise.
# + [markdown] deletable=true editable=true
# #### simple_file
# `simple_file` reads a file from the file system and makes it available as a `TextFileProvider`. A `TextFileProvider` instance contains the path to the file and its content as a list of lines.
# + deletable=true editable=true
from insights.core import dr
from insights.core.context import HostContext
from insights.core.spec_factory import (simple_file,
glob_file,
simple_command,
listdir,
foreach_execute,
foreach_collect,
first_file,
first_of)
release = simple_file("/etc/redhat-release")
hostname = simple_file("/etc/hostname")
ctx = HostContext()
broker = dr.Broker()
broker[HostContext] = ctx
broker = dr.run(broker=broker)
print broker[release].path, broker[release].content
print broker[hostname].path, broker[hostname].content
# + [markdown] deletable=true editable=true
# #### glob_file
# `glob_file` accepts glob patterns and evaluates at runtime to a list of `TextFileProvider` instances, one for each match. You can pass `glob_file` a single pattern or a list (or set) of patterns. It also accepts an `ignore` keyword, which should be a regular expression string matching paths to ignore. The glob and ignore patterns can be used together to match lots of files and then throw out the ones you don't want.
# + deletable=true editable=true
host_stuff = glob_file("/etc/host*", ignore="(allow|deny)")
broker = dr.run(broker=broker)
print broker[host_stuff]
# + [markdown] deletable=true editable=true
# #### simple_command
# `simple_command` allows you to get the results of a command that takes no arguments or for which you know all of the arguments up front.
#
# It and other command datasources return a `CommandOutputProvider` instance, which has the command string, any arguments interpolated into it (more later), the return code if you requested it via the `keep_rc=True` keyword, and the command output as a list of lines.
#
# `simple_command` also accepts a `timeout` keyword, which is the maximum number of seconds the system should attempt to execute the command before a `CalledProcessError` is raised for the component.
#
# A default timeout for all commands can be set on the initial `ExecutionContext` instance with the `timeout` keyword argument.
#
# If a timeout isn't specified in the `ExecutionContext` or on the command itself, none is used.
# + deletable=true editable=true
uptime = simple_command("/usr/bin/uptime")
broker = dr.run(broker=broker)
print (broker[uptime].cmd, broker[uptime].args, broker[uptime].rc, broker[uptime].content)
# + [markdown] deletable=true editable=true
# #### listdir
# `listdir` lets you get the contents of a directory.
# + deletable=true editable=true
interfaces = listdir("/sys/class/net")
broker = dr.run(broker=broker)
pprint(broker[interfaces])
# + [markdown] deletable=true editable=true
# #### foreach_execute
# `foreach_execute` allows you to use output from one component as input to a datasource command string. For example, using the output of the interfaces datasource above, we can get ethtool information about all of the ethernet devices.
#
# The timeout description provided in the `simple_command` section applies here to each separate invocation.
# + deletable=true editable=true
ethtool = foreach_execute(interfaces, "ethtool %s")
broker = dr.run(broker=broker)
pprint(broker[ethtool])
# + [markdown] deletable=true editable=true
# Notice each element in the list returned by `interfaces` is a single string. The system interpolates each element into the `ethtool` command string and evaluates each result. This produces a list of objects, one for each input element, instead of a single object. If the list created by `interfaces` contained tuples with `n` elements, then our command string would have had `n` substitution parameters.
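# + [markdown] deletable=true editable=true
# The interpolation rule can be sketched in plain Python (`interpolate` is a hypothetical helper, not the insights implementation): a plain element fills one `%s` parameter, while an n-tuple fills n of them.

```python
def interpolate(command, elements):
    # one "%s" per value: a plain element fills one parameter,
    # an n-tuple fills n parameters
    return [command % e for e in elements]

print(interpolate("ethtool %s", ["eth0", "lo"]))
# ['ethtool eth0', 'ethtool lo']
print(interpolate("docker exec %s cat %s", [("c1", "/etc/hostname")]))
# ['docker exec c1 cat /etc/hostname']
```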
# + [markdown] deletable=true editable=true
# #### foreach_collect
# `foreach_collect` works similarly to `foreach_execute`, but instead of running commands with interpolated arguments, it collects files at paths with interpolated arguments. Also, because it is a file collection, it does not have execution-related keyword arguments.
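# + [markdown] deletable=true editable=true
# A pure-Python sketch of the path interpolation (illustrative only; `foreach_collect_paths` is not the insights API, and the real datasource also reads file content):

```python
import os
import tempfile

def foreach_collect_paths(elements, pattern):
    # substitute each element into the path pattern; keep the paths that exist
    return [pattern % e for e in elements if os.path.exists(pattern % e)]

# demonstrate with a throwaway directory containing one matching file
d = tempfile.mkdtemp()
open(os.path.join(d, "eth0.addr"), "w").close()
pattern = os.path.join(d, "%s.addr")
print(foreach_collect_paths(["eth0", "wlan0"], pattern))  # only eth0's path survives
```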
# + [markdown] deletable=true editable=true
# #### first_file
# `first_file` takes a list of paths and returns a `TextFileProvider` for the first one it finds. This is useful if you're looking for a single file that might be in different locations.
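# + [markdown] deletable=true editable=true
# The semantics can be sketched in a few lines of plain Python (`first_existing` is a hypothetical stand-in, not the insights implementation):

```python
import os
import tempfile

def first_existing(paths):
    # return the first candidate path that exists, else None
    for p in paths:
        if os.path.exists(p):
            return p
    return None

with tempfile.NamedTemporaryFile() as tf:
    # the temp file exists, so it wins over the missing candidate
    assert first_existing(["/no/such/file", tf.name]) == tf.name
print(first_existing(["/no/such/file"]))  # None
```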
# + [markdown] deletable=true editable=true
# #### first_of
# `first_of` is a way to express that you want to use any datasource from a list of datasources you've already defined. This is helpful if the way you collect data differs in different contexts, but the output is the same.
#
# For example, the way you collect installed rpms directly from a machine differs from how you would collect them from a docker image. Ultimately, downstream components don't care: they just want rpm data.
#
# You could do the following. Notice that `host_rpms` and `docker_installed_rpms` implement different ways of getting rpm data that depend on different contexts, but the final `installed_rpms` datasource just references whichever one ran.
# + deletable=true editable=true
from insights.specs.default import format_rpm
from insights.core.context import DockerImageContext
from insights.core.plugins import datasource
from insights.core.spec_factory import CommandOutputProvider
rpm_format = format_rpm()
cmd = "/usr/bin/rpm -qa --qf '%s'" % rpm_format
host_rpms = simple_command(cmd, context=HostContext)
@datasource(DockerImageContext)
def docker_installed_rpms(ctx):
root = ctx.root
cmd = "/usr/bin/rpm -qa --root %s --qf '%s'" % (root, rpm_format)
result = ctx.shell_out(cmd)
return CommandOutputProvider(cmd, ctx, content=result)
installed_rpms = first_of([host_rpms, docker_installed_rpms])
broker = dr.run(broker=broker)
pprint(broker[installed_rpms])
# + [markdown] deletable=true editable=true
# #### What datasources does Insights Core provide?
# To see a list of datasources we already collect, have a look in `insights.specs`.
# + [markdown] deletable=true editable=true
# ### Parsers
# Parsers are the next major component type Insights Core provides. A `Parser` depends on a single datasource and is responsible for converting its raw content into a structured object.
#
# Let's build a simple parser.
# + deletable=true editable=true
from insights.core import Parser
from insights.core.plugins import parser
@parser(hostname)
class HostnameParser(Parser):
def parse_content(self, content):
self.host, _, self.domain = content[0].partition(".")
broker = dr.run(broker=broker)
print "Host:", broker[HostnameParser].host
# + [markdown] deletable=true editable=true
# Notice that the `parser` decorator accepts only one argument, the datasource the component needs. Also notice that our parser has a sensible default constructor that accepts a datasource and passes its content into a parse_content function.
#
# Our hostname parser is pretty simple, but it's easy to see how parsing things like rpm data or configuration files could get complicated.
#
# Speaking of rpms, hopefully it's also easy to see that an rpm parser could depend on our installed_rpms definition in the previous section and parse the content regardless of where the content originated.
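# As a standalone sketch (not the actual Insights parser, and assuming a hypothetical `name|version|release` query format), the parsing logic such an rpm parser's `parse_content` might use could look like this:

```python
def parse_rpm_lines(content):
    """Parse lines of 'name|version|release' (hypothetical format) into a dict keyed by package name."""
    rpms = {}
    for line in content:
        name, version, release = line.split("|")
        rpms[name] = {"version": version, "release": release}
    return rpms

packages = parse_rpm_lines(["bash|4.2.46|34.el7", "openssl|1.0.2k|19.el7"])
print(packages["bash"]["version"])  # 4.2.46
```

# The same function works no matter which datasource produced the lines, which is exactly why downstream components can depend on `installed_rpms` rather than on a context-specific command.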
# + [markdown] deletable=true editable=true
# #### What about parser dependencies that produce lists of components?
# Not only do parsers have a special decorator, they also have a special executor. If the datasource is a list, the executor will attempt to construct a parser object with each element of the list, and the value of the parser in the broker will be the list of parser objects. It's important to keep this in mind when developing components that depend on parsers.
#
# This is also why exceptions raised by components are stored as lists by component instead of single values.
#
# Here's a simple parser that depends on the `ethtool` datasource.
# + deletable=true editable=true
@parser(ethtool)
class Ethtool(Parser):
def parse_content(self, content):
self.link_detected = None
self.device = None
for line in content:
if "Settings for" in line:
self.device = line.split(" ")[-1].strip(":")
if "Link detected" in line:
self.link_detected = line.split(":")[-1].strip()
broker = dr.run(broker=broker)
for eth in broker[Ethtool]:
    print("Device:", eth.device)
    print("Link? :", eth.link_detected, "\n")
# + [markdown] deletable=true editable=true
# We provide curated parsers for all of our datasources. They're in `insights.parsers`.
# + [markdown] deletable=true editable=true
# ### Combiners
# Combiners depend on two or more other components. They typically are used to standardize interfaces or to provide a higher-level view of some set of components.
#
# As an example of standardizing interfaces, `chkconfig` and `service` commands can be used to retrieve similar data about service status, but the command you run to check that status depends on your operating system version. A datasource would be defined for each command along with a parser to interpret its output. However, a downstream component may just care about a service's status, not about how a particular program exposes it. A combiner can depend on both `chkconfig` and `service` parsers (like this, so only one of them is required: `@combiner([[chkconfig, service]])`) and provide a unified interface to the data.
#
# As an example of a higher level view of several related components, imagine a combiner that depends on various ethtool and other network information gathering parsers. It can compile all of that information behind one view, exposing a range of information about devices, interfaces, iptables, etc. that might otherwise be scattered across a system.
# + [markdown] deletable=true editable=true
# We provide a few common combiners. They're in `insights.combiners`.
# + [markdown] deletable=true editable=true
# Here's an example `combiner` that tries a few different ways to determine the Red Hat release information. Notice that its dependency declarations and interface are just like we've discussed before. If this was a class, the `__init__` function would be declared like `def __init__(self, rh_release, un)`.
#
# ```python
# from collections import namedtuple
# from insights.core.plugins import combiner
# from insights.parsers.redhat_release import RedhatRelease as rht_release
# from insights.parsers.uname import Uname
#
# Release = namedtuple("Release", ["major", "minor"])
#
# @combiner([rht_release, Uname])
# def redhat_release(rh_release, un):
# if un and un.release_tuple[0] != -1:
# return Release(*un.release_tuple)
#
# if rh_release:
# return Release(rh_release.major, rh_release.minor)
#
#     raise Exception("Unable to determine release.")
# ```
# + [markdown] deletable=true editable=true
# ### Rules
# Rules depend on parsers and/or combiners and encapsulate particular policies about their state. For example, a rule might detect whether a defective rpm is installed. It might also inspect the `lsof` parser to determine if a process is using a file from that defective rpm. It could also check network information to see if the process is a server and whether it's bound to an internal or external IP address. Rules can check for anything you can surface in a `parser` or a `combiner`.
#
# Rules use the `make_fail` or `make_pass` helpers to create their return values. They take one required parameter, a key identifying the particular state the rule wants to highlight, and any number of optional keyword parameters that provide context for that state.
# + deletable=true editable=true
from insights.core.plugins import rule, make_fail, make_pass
ERROR_KEY = "IS_LOCALHOST"
@rule(HostnameParser)
def report(hn):
return make_pass(ERROR_KEY) if "localhost" in hn.host else make_fail(ERROR_KEY)
brok = dr.Broker()
brok[HostContext] = HostContext()
brok = dr.run(broker=brok)
pprint(brok.get(report))
# + [markdown] deletable=true editable=true
# ### Conditions and Incidents
# Conditions and incidents are optional components that can be used by rules to encapsulate particular pieces of logic.
#
# Conditions are questions with answers that can be interpreted as True or False. For example, a condition might be "Does the kdump configuration contain a 'net' target type?" or "Is the operating system Red Hat Enterprise Linux 7?"
#
# Incidents, on the other hand, typically are specific types of warning or error messages from log type files.
#
# Why would you use conditions or incidents instead of just writing the logic directly into the rule? Future versions of Insights may allow automated analysis of rules and their conditions and incidents. You will be able to tell which conditions, incidents, and rule firings across all rules correspond with each other and how strongly. This feature will become more powerful as conditions and incidents are written independently of explicit rules.
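# As a standalone sketch (a plain function standing in for a decorated component; the names are made up for illustration), a condition is just a reusable yes/no question:

```python
def has_net_target(kdump_config):
    """Condition: does the kdump configuration contain a 'net' target type?"""
    return "net" in kdump_config.get("targets", [])

print(has_net_target({"targets": ["net", "local"]}))  # True
print(has_net_target({"targets": ["local"]}))         # False
```

# A rule could then depend on this condition instead of re-implementing the check, and the condition's answer becomes independently analyzable.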
# + [markdown] deletable=true editable=true
# ## Observers
# Insights Core allows you to attach functions to component types, and they'll be called any time a component of that type is encountered. You can attach observer functions globally or to a particular broker.
#
# Observers are called whether a component succeeds or not. They take the component and the broker right after the component is evaluated and so are able to ask the broker about values, exceptions, missing requirements, etc.
# + deletable=true editable=true
def observer(c, broker):
if c not in broker:
return
value = broker[c]
pprint(value)
broker.add_observer(observer, component_type=parser)
broker = dr.run(broker=broker)
| docs/notebooks/Insights Core Tutorial.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### String, Int and Float
name = "I am <NAME>" # String
count = 7 # Int
price = 100.50 # Float
# ### Lists
# It is called an array in other languages.
# +
list1 = ['math', 'science']
list2 = [1, 2]
list3 = [2, 'math', 1, "science"]
list3[2] = 3
print(list3)       # [2, 'math', 3, 'science']
print(len(list3))  # 4
# +
x = list(range(5))  # [0, 1, 2, 3, 4]; in Python 3 range() is lazy, so convert it to a list
# slice
middle = x[2:4]        # [2, 3]
first_two = x[:2]      # [0, 1]
second_to_end = x[2:]  # [2, 3, 4]
last_two = x[-2:]      # [3, 4]
4 in x   # True
10 in x  # False
# -
# ### Tuples
# Tuples are immutable lists.
# +
tuple1 = (1, 2)
tuple2 = ('Math', 'Science')
# tuple1[0] = 3  # Error: tuples are immutable
print(tuple1 + tuple2)
# -
# ### Dictionary
# A dictionary is like a list, but it maps a set of keys to a set of values.
d = {'language': 'Python', 'developer': 100}  # avoid shadowing the built-in `dict`
print(list(d.keys()))    # ['language', 'developer']
print(list(d.values()))  # ['Python', 100]
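# Values are looked up by key; `.get` returns a default instead of raising a KeyError for a missing key:

```python
d = {'language': 'Python', 'developer': 100}
print(d['language'])        # Python
print(d.get('missing', 0))  # 0 (default, since the key is absent)
d['version'] = 3            # add a new key
print('version' in d)       # True
```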
# ### Loops
for x in range(5):
    print(x * x)
# ### Functions
def summation(a, b):
    return a + b
summation(1, 2)
# +
def summation(a, *b):
return a + sum(b)
summation(1,2,3,4)
# +
#Multiple returns
def multiplyBy2(a, b):
return a*2, b*2
c, d = multiplyBy2(1, 2)
# -
# ### List Comprehensions
# It is an elegant way to define and create lists. We can use it to transform one list into another.
[x * 2 for x in range(0,5)]
[(x, y) for x in range(0, 5) for y in range(5,10)]
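# A comprehension can also filter elements with an `if` clause:

```python
evens = [x for x in range(10) if x % 2 == 0]
print(evens)  # [0, 2, 4, 6, 8]
```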
# ### Random Module
# It's a separate module. We need to import it.
# +
import random
random.random()  # generate a random number in [0, 1)
[random.random() for _ in range(10)]  # a list of 10 random numbers; `_` is used because the loop variable isn't needed
# -
# This module actually generates pseudorandom numbers that are deterministic given the internal state.
random.seed(5)  # set the seed to 5
print(random.random())  # 0.6229016948897019
random.seed(5)
print(random.random())  # 0.6229016948897019
| .ipynb_checkpoints/Python Basic-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="Yu6V7oLKQ0Fi"
#Multilayer Perceptron
import pandas as pd
import numpy as np
import random
import matplotlib.pyplot as plt
dataset = ["titanic_train",
"iris"]
# + id="GmSHyVEGQ0Fj"
def LeerDatosC1(filename : str, separa : str, header = True):
if (header):
data = pd.read_csv(filename + ".csv", sep =separa,usecols=['Sex', 'Age', 'Fare', 'Embarked','Survived'],header = 0)
else:
data = pd.read_csv(filename+ ".csv", sep = separa,usecols=['Sex', 'Age', 'Fare', 'Embarked','Survived'],header = None)
data = data[['Sex', 'Age', 'Fare', 'Embarked','Survived']]
data = data.dropna()
data['Sex'] = data['Sex'].replace({'male':0,'female':1})
data['Embarked'] = data['Embarked'].replace({'Q':0,'S':1,'C':2})
data = data.sort_values(data.columns[-1])
return data.to_numpy()
# + id="txEUcaeiaLDF"
def LeerDatosC2(filename : str, separa : str, header = True):
if (header):
data = pd.read_csv(filename + ".csv", sep =separa, header = 0)
else:
data = pd.read_csv(filename+ ".csv", sep = separa, header = None)
#data = data.sample(frac = 1) #shuffle data
data = data.sort_values(data.columns[-1])
return data.to_numpy()
# + id="VJqL-lIBQ0Fj"
def Normalizar_Datos(data : np.array):
#normal = np.empty_like(data)
for i in range (0,np.size(data[0])):
media = np.mean(data[:,i])
desvi =np.std(data[:,i])
data[:,i] = (data[:,i] - media)/desvi
return data
# + id="s5RmMrshQ0Fj"
def Crear_k_folds(data : np.array , k:int, clases: []):
folds = []
tot_clase = []
    prop_clase = [] # cumulative index counts
    pre_fold = []
    m = np.size(data[:,-1]) # number of samples
#n = np.size(data[0])
for i in clases:
tot_clase.append(np.count_nonzero( data[:,-1] == i))
prop_clase.append(tot_clase[0])
for i in range (1, len(tot_clase)):
prop_clase.append( prop_clase[i-1] + tot_clase[i])
pos_ini = 0
for i in range(0, len(clases)):
pre_fold.append(np.array_split(data[pos_ini:prop_clase[i]], k))
pos_ini = prop_clase[i]
for i in range (0,k):
temp = np.empty( (0,np.size(data[0])) )
for j in range(0,len(clases)):
temp = np.vstack( (temp,pre_fold[j][i]))
folds.append(temp)
return folds
# + id="rU4bOjHBQ0Fj"
# + id="UzWOPP5IQ0Fj"
def Sigmoidal(X:np.array, theta:np.array):
pot = X.dot(theta)
return 1/(1+ np.exp(-pot))
# + id="mTKt2i3xQ0Fj"
def ds(D : np.array):
return D*(1-D)
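# `Sigmoidal` and `ds` together rely on the identity s'(z) = s(z) * (1 - s(z)) for the sigmoid s. A quick standalone numerical check (using `math` scalars rather than the notebook's NumPy arrays):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

z = 0.3
s = sigmoid(z)
analytic = s * (1 - s)                                    # what ds computes from the activation
numeric = (sigmoid(z + 1e-6) - sigmoid(z - 1e-6)) / 2e-6  # central finite difference
print(abs(analytic - numeric) < 1e-8)  # True
```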
# + id="55CV03RVQ0Fk"
def Calcular_Funcion_Costo(X: np.array, y:np.array):
#J(theta) = -1/m[ SUM( y* log(h(x)) + (1-y)*log(1-h(x)))
    m = np.size(X[:,0]) # number of samples
costo = 0
for i in range(0, len(X[0])):
costo += -1/m * ( np.sum( y[i].dot(np.log(X[i])) + (1-y[i]).dot( np.log(1-X[i]))) )
return costo
# + id="NUUi6ePUQ0Fk"
def GenerarW( num_capas : int, dim_capas = []):
W = {}
for i in range(0,num_capas+1):
if (i == 0):
temp = np.random.randn( dim_capas[i], dim_capas[i+1] )
W[i] = temp
if (i != 0):
temp = np.random.randn( dim_capas[i]+1, dim_capas[i+1] )
W[i] = temp
return W
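# For example, with one hidden layer and layer sizes [4, 5, 3], `GenerarW` allocates a 4x5 input-to-hidden matrix and a 6x3 hidden-to-output matrix: the extra row accounts for the bias column that `Forward` appends to the hidden activations. A shape-only sketch:

```python
import numpy as np

W0 = np.random.randn(4, 5)      # input -> hidden
W1 = np.random.randn(5 + 1, 3)  # hidden (+ bias row) -> output
print(W0.shape, W1.shape)  # (4, 5) (6, 3)
```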
# + [markdown] id="dCXNjVtltdw9"
# # Forward and Backward
# + id="tfQPbyqkQ0Fk"
def Forward (X: np.array, W : {}):
A = {}
h_l = X
A[0] = h_l
for i in range(0, len(W)):
if (i == len(W)-1):
h_l = Sigmoidal(h_l, W[i])
else:
h_l = Sigmoidal(h_l, W[i])
bias = np.ones( (np.size(h_l[:,0]),1) )
h_l = np.hstack( (bias,h_l) )
A[i+1] = h_l
return A
def Backward (X: np.array, y: np.array, W:{}, A:{}, tasa_apren:float):
    # Update of the network weights W via back-propagation
#deriv J(theta) = a^l* delta^(l+1)
#g'(z) = a * (1-a)
m = np.size(X[:,-1])
delta_t = (A[len(A)-1] - y) #* ds(A[len(A)-1])
for i in range(len(W)-1,-1,-1):
R = tasa_apren* ((A[i].T.dot(delta_t))/ m)
if (i == len(W)-1):
W[i]-= R #tasa_apren* (A[i].T.dot(delta_t))/ m
delta_t = ds(A[i])*(delta_t.dot(W[i].T))
else:
R = R[:,1:]
W[i]-= R # tasa_apren* (A[i].T.dot(delta_t))/ m
if (i != len(W)-1 and i != 0):
delta_t = ds(A[i])*(delta_t[:,1:].dot(W[i].T))
# + [markdown] id="Mo75zD2jtl-v"
# # Classical algorithms
# + id="y21Dxuk4Q0Fk"
def Gradiente_Descendiente(X: np.array, y:np.array, W:{},
num_itera:int, tasa_apren:float):
arr_costo = np.empty(num_itera, dtype =float)
A = {}
num_capas = len(W)
for it in range(0, num_itera):
A = Forward(X, W)
arr_costo[it] = Calcular_Funcion_Costo(A[num_capas], y)
Backward(X, y, W, A, tasa_apren)
return A[num_capas], arr_costo, W
# + id="JqYysZVzQ0Fk"
def TransformacionOneShot(y: np.array, clases:[]):
num_clases = len(clases)
vec_clases = np.empty((0,num_clases), dtype = int)
for i in y:
idx = clases.index(i)
vec = [0] * num_clases
vec[idx] = 1
vec_clases = np.vstack ((vec_clases, vec))
return vec_clases
def OneShot_Salida(y:np.array):
y_cat = np.zeros_like(y)
max = np.argmax(y, axis = 1)
for i in range(0, len(max)):
y_cat[i,max[i]] = 1
return y_cat
# + id="HYT3eVqEQ0Fk"
def Calcular_Accuracy(X:np.array, y:np.array, theta:np.array):
y_calc = Forward(X, theta)
y_calc = OneShot_Salida(y_calc[len(y_calc)-1])
aciertos = 0
for i in (y - y_calc):
if (np.count_nonzero(i) == 0):
aciertos += 1
return aciertos/np.size(y[:,0])
def PromedioAccuracy(test:np.array, theta, k, clases):
accu = np.zeros(k)
for i in range(0,k):
X_test = test[i][:,:-1]
X_test = X_test.astype('float64')
X_test = Normalizar_Datos(X_test)
y_test = TransformacionOneShot(test[i][:,-1], clases)
accu[i] = Calcular_Accuracy(X_test, y_test, theta)
return accu.mean()
# + id="LjP_fBiIQ0Fk"
def CalculoParametros(folds:[], k:int, iteraciones:int, alpha:float,
num_clases:int, num_capa_hidden:int, num_neurona: int, clases:[]):
arr_costo = []
arr_theta = []
arr_test = []
for test_i in range(0, k):
test = folds[test_i]
train = np.zeros( (0,np.size(folds[0][0])) )
for train_i in range (0, k):
if (train_i == test_i):
continue
else:
train = np.vstack( (train,folds[train_i]) )
costo = []
X_train = train[:,:-1]
X_train = X_train.astype('float64')
X_train = Normalizar_Datos(X_train)
        N = np.size(X_train[:,-1]) # batch size
        D_in = np.size(X_train[0]) # input dimension
        D_out = num_clases
        # build the list of layer sizes
array_capas = []
array_capas.append(D_in)
for i in range(0, num_capa_hidden):
array_capas.append(num_neurona)
array_capas.append(D_out)
W = GenerarW(num_capa_hidden, array_capas)
y_train = TransformacionOneShot( train[:,-1], clases)
theta, costo, W = Gradiente_Descendiente(X_train, y_train, W, iteraciones, alpha)
arr_theta.append(theta)
arr_costo.append(costo)
arr_test.append(test)
return theta, arr_costo, arr_test, W
# + [markdown] id="mKVMK6tjtqTH"
# # For plotting
# + id="NZ0d2HeeQ0Fk"
def CalculoCustom(folds:[], k:int, iteraciones:int, alpha:float,
num_clases:int, num_capa_hidden:int, num_neurona: [], clases:[]):
arr_costo = []
arr_theta = []
arr_test = []
for test_i in range(0, k):
test = folds[test_i]
train = np.zeros( (0,np.size(folds[0][0])) )
for train_i in range (0, k):
if (train_i == test_i):
continue
else:
train = np.vstack( (train,folds[train_i]) )
costo = []
X_train = train[:,:-1]
X_train = X_train.astype('float64')
X_train = Normalizar_Datos(X_train)
        N = np.size(X_train[:,-1]) # batch size
        D_in = np.size(X_train[0]) # input dimension
        D_out = num_clases
        # build the list of layer sizes
array_capas = []
array_capas.append(D_in)
for i in range(0, num_capa_hidden):
array_capas.append(num_neurona[i])
array_capas.append(D_out)
W = GenerarW(num_capa_hidden, array_capas)
y_train = TransformacionOneShot( train[:,-1], clases)
theta, costo, W = Gradiente_Descendiente(X_train, y_train, W, iteraciones, alpha)
arr_theta.append(theta)
arr_costo.append(costo)
arr_test.append(test)
return theta, arr_costo, arr_test, W
# + [markdown] id="uFaPxcSktuv3"
# # Tests
# + id="vo1_rZReQ0Fk"
def BusquedaParametros(folds:[], k, num_clases, clases:[]):
alpha = [0.1, 0.25, 0.5, 0.75, 1.0]
iteraciones = range(500,3501,500) #500 1000 1500 2000
num_capa = [1,2,3]
num_neurona = range(5,20,5) #5 10 15
arr_accu = np.empty( (len(alpha),len(iteraciones) ))
    for nc in num_capa:
        print("----------------------------")
        print("Number of hidden layers: ", nc)
        print("----------------------------")
        for nn in num_neurona:
            print("Number of neurons per layer: ", nn)
for tasa in range(0,len(alpha)):
for it in range(0, len(iteraciones)):
theta, dummy, test, W = CalculoParametros(folds, k, iteraciones[it], alpha[tasa], num_clases, nc, nn, clases)
arr_accu[tasa,it] = PromedioAccuracy(test, W, k, clases)
print(pd.DataFrame(arr_accu, index = alpha, columns = iteraciones))
# + id="KufibjbTQ0Fk"
titanic = LeerDatosC1(dataset[0], separa = ',')
print(titanic)
clases = [0, 1]
titanic_folds = Crear_k_folds(titanic, 3, clases)
print(titanic_folds)
#theta, arr_costo, arr_test = CalculoParametros(iris_folds, 3, 500, 0.1, 3, 3, 20, clases)
# + tags=[] colab={"base_uri": "https://localhost:8080/"} id="oYyOHxFwQ0Fk" outputId="a8d6e764-5198-4987-a42a-526af4479f68"
BusquedaParametros(titanic_folds, 3, num_clases = 2, clases = clases)
# + id="XoZoXGkQQ0Fn"
iris = LeerDatosC2(dataset[1], separa = ',')
clas_iris = ['setosa','versicolor','virginica']
iris_folds = Crear_k_folds(iris, 3, clas_iris)
# + tags=[] id="90167dX5Q0Fn" colab={"base_uri": "https://localhost:8080/"} outputId="8abe420a-02f4-4341-8f17-942aeb88598d"
BusquedaParametros(iris_folds, k=3, num_clases = 3, clases = clas_iris)
# + id="dCIRUTfWQ0Fn"
def PloteoCurvaCosto (arr: np.array, title):
#arr_theta.append(theta_grad[-1])
#print (pd.DataFrame(arr))
fig, ax = plt.subplots()
    # plot the curve
scale = 1.0
color = 'tab:blue'
iteraciones = len(arr)
plt.plot(range(0,iteraciones), arr, 'o', linewidth=1, markersize=2 )
plt.title(title, {'fontsize':10})
ax.set(xlim = [-10,iteraciones], ylim = (np.min(arr)-0.01, np.max(arr)+0.01))
ax.grid(True)
    plt.xlabel('Iteration')
    plt.ylabel('Cost')
plt.show()
# + id="FHfaDTM6Q0Fn"
# A run with a chosen number of neurons per layer
num_neurona = [15]
theta, arr_costo, arr_test, W = CalculoCustom(iris_folds, k=3, iteraciones=1500, alpha=0.5, num_clases=3, num_capa_hidden=1, num_neurona=num_neurona, clases=clas_iris)
acc = PromedioAccuracy(arr_test, W, k=3, clases=clas_iris)
# + id="cHhiXvV3Q0Fn" outputId="257aecda-f582-4530-bb64-87fa9bfed011"
acc
# + id="ZUVJB1ZeQ0Fn" outputId="46a13e93-26a7-4c62-a83c-ad3df1ca836e"
arr_costo
# + id="eGTdVLOUQ0Fo" outputId="c6f1b172-48ef-49c6-9c56-6e006bd06871"
PloteoCurvaCosto(arr_costo[1], "MLP cost on the \"Iris\" dataset")
# + id="eKTf7-TZQ0Fo" outputId="90a72dce-4648-4a05-b169-7907e08e7a08"
# A run with a chosen number of neurons per layer
num_neurona = [15,15]
theta, costo_card, arr_test, W = CalculoCustom(cardiaca_folds, k=3, iteraciones=2000,alpha=1.0, num_clases=2, num_capa_hidden=2, num_neurona=num_neurona, clases=clas_cardiaca)
acc = PromedioAccuracy(arr_test, W, k=3, clases=clas_cardiaca)
acc
# + id="BqiAsYCnQ0Fo" outputId="081c783f-93e0-4d1c-8d30-0bd2a171226f"
arr_costo
# + id="u_fRauUsQ0Fp" outputId="e2b5dad0-07a8-4297-f6d0-2a76aa1a575d"
PloteoCurvaCosto(costo_card[2], "MLP cost on the \"Heart Disease\" dataset")
# + id="IM8gVK8LQ0Fp"
| SVM MLP/MLP.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import xgboost as xgb
# +
df_data = pd.read_csv('dix.csv', names=['Date','SP500','DIX','GEX'], index_col='Date', parse_dates=True, header=0)
def process_data(df):
cols = df.columns
for col in cols:
        df[col + ' %dif'] = df[col].pct_change()  # use the parameter, not the global df_data
df['SP500 %mvmt'] = df['SP500 %dif'].shift(periods=-1)
return df
df_data = process_data(df_data)
df_data
# -
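# `pct_change` computes the fractional change from the previous row, and `shift(periods=-1)` pulls the next row's value up one position, so the `SP500 %mvmt` column above holds the following period's return. A small sketch on a toy series:

```python
import pandas as pd

s = pd.Series([100.0, 110.0, 99.0])
print(s.pct_change().tolist())       # [nan, 0.1, -0.1]
print(s.shift(periods=-1).tolist())  # [110.0, 99.0, nan]
```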
| .ipynb_checkpoints/dix-gex-ml-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lab assignment on Relevance Vector Regression
# In this lab you need to:
# - Implement Relevance Vector Regression
# - Apply it to a synthetic dataset (recovering a polynomial) and compare it with Lasso from sklearn and with ridge regression
# - Apply it to sinc data with RBF features, visualize the "relevance vectors", and compare with Support Vector Regression and Lasso
# - Draw conclusions
# + pycharm={"name": "#%%\n"}
from typing import Tuple
import numpy as np
from matplotlib import pyplot as plt
from sklearn.linear_model import RidgeCV, LassoCV
from tqdm.auto import trange
# %matplotlib inline
# + pycharm={"name": "#%%\n"}
np.random.seed(123)
# + pycharm={"name": "#%%\n"}
def l2_error(X, t, w):
return np.sum((X.dot(w.ravel()) - t) ** 2)
# -
# ## Relevance Vector Regression implementation
#
# Three functions need to be implemented here:
#
# 1. `get_w_sigma(X, t, alpha, beta)`, which takes the dataset (X, t) and the RVR hyperparameters (alpha, beta) and returns the parameters of the posterior distribution mu, sigma
# 2. `update_alpha_beta(X, t, alpha, beta)`, which takes the dataset (X, t) and the RVR hyperparameters (alpha, beta) and performs one step of the iterative procedure that updates the hyperparameters (covered in the lecture)
# 3. `fit_rvr(X, t, max_iters)`, which takes the dataset (X, t) and the maximum number of iterations and returns the trained hyperparameters and the parameters of the posterior distribution over the model weights
#
# Things to pay attention to:
#
# 1. The results of expensive operations, such as products of the same matrices, should be cached and reused
# 2. The alphas of irrelevant objects should take the value `np.inf`, and the corresponding weights and their variances should be 0
# 3. The infinities and zeros from the previous point must be handled correctly, without NaNs or warnings
# 4. A matrix with infinite diagonal elements can be inverted more efficiently (it suffices to invert the submatrix)
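# Point 4 above can be sketched as follows: restrict to the rows/columns whose alpha is finite, invert that submatrix, and scatter the result back, leaving zeros elsewhere (toy 3x3 example):

```python
import numpy as np

alpha = np.array([1.0, np.inf, 2.0])
A = np.diag([2.0, 1.0, 4.0])     # stand-in for beta * X^T X + diag(alpha)
mask = np.isfinite(alpha)        # keep only the finite-alpha entries
inv = np.zeros_like(A)
inv[np.ix_(mask, mask)] = np.linalg.inv(A[np.ix_(mask, mask)])
print(inv[0, 0], inv[1, 1], inv[2, 2])  # 0.5 0.0 0.25
```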
class RelevanceVectorRegression:
"""Relevance vector machine
Useful implementation links:
- http://www.machinelearning.ru/wiki/index.php?title=RVM
- https://disk.yandex.ru/i/RyxlGTJMqberDQ
- https://www.hds.utc.fr/~tdenoeux/dokuwiki/_media/en/ace_chapter9_2019.pdf
"""
def __init__(self, alpha_bound: float = 1e12, weight_bound: float = 1e-6):
self.__alpha_bound = alpha_bound
self.__weight_bound = weight_bound
self.__cache = {}
def __fit_cache(self, X: np.ndarray, t: np.ndarray):
self.__cache = {}
XT = X.T
self.__cache["XTX"] = XT @ X
self.__cache["XTt"] = XT @ t
def _get_mu_sigma(
self, X: np.ndarray, t: np.ndarray, alpha: np.ndarray, beta: float
) -> Tuple[np.ndarray, np.ndarray]:
"""Calculate the mean and the covariance matrix of the posterior distribution.
p(w | X, t, alpha, beta) = N(w | mu, sigma)
sigma = (beta X^T X + A)^{-1}
mu = beta sigma X^T t
"""
n, d = X.shape
XTX = self.__cache.get("XTX", X.T @ X)
XTt = self.__cache.get("XTt", X.T @ t)
mask = alpha < self.__alpha_bound
sigma_inverse = beta * XTX[mask][:, mask]
sigma_inverse[np.diag_indices_from(sigma_inverse)] += alpha[mask]
sigma = np.linalg.inv(sigma_inverse)
sigma_diag = np.zeros(d)
sigma_diag[mask] = np.diag(sigma)
mu = np.zeros(d)
mu[mask] = beta * sigma.dot(XTt[mask])
return mu, sigma_diag
def _update_alpha_beta(
self, X: np.ndarray, t: np.ndarray, alpha: np.ndarray, beta: float
) -> Tuple[np.ndarray, float]:
        """Update the hyperparameters to increase the evidence.
        w_j < weight bound or a_j > alpha bound => w_j = 0, a_j = inf
        else => a_j = (1 - a_j sigma_j) / w_j^2
        beta = (n - sum(1 - a_j sigma_j)) / ||X w - t||_2^2
"""
n, d = X.shape
mu, sigma = self._get_mu_sigma(X, t, alpha, beta)
mask = (alpha < self.__alpha_bound) & (abs(mu) > self.__weight_bound)
mu[~mask] = 0
y = 1 - alpha[mask] * sigma[mask]
alpha_new = np.full(d, np.inf)
alpha_new[mask] = y / (mu[mask] ** 2)
beta_new = (n - y.sum()) / ((X.dot(mu) - t) ** 2).sum()
return alpha_new, beta_new
def fit(
self, X: np.ndarray, t: np.ndarray, max_iter: int = 10000
) -> Tuple[np.ndarray, np.ndarray, np.ndarray, float]:
"""Train the Relevance Vector Regression model."""
self.__fit_cache(X, t)
n, d = X.shape
alpha = np.ones(d)
beta = 1.0
for _ in trange(max_iter, desc="Fit RVR"):
alpha, beta = self._update_alpha_beta(X, t, alpha, beta)
mu, sigma = self._get_mu_sigma(X, t, alpha, beta)
return mu, sigma, alpha, beta
# ## Recovering a polynomial
#
# A toy problem is solved here: regression data are generated from a noisy third-degree polynomial. On these data, a polynomial of degree at most 20 must be fit. Three models are compared: ridge regression, L1 regression (Lasso), and RVR, comparing the test-set error and the quality of the selected features.
# +
# Data generation
def gen_batch(n, w, beta):
d = len(w)
X = np.random.uniform(-1, 1, (n, 1))
X = np.sort(X, axis=0)
X = np.hstack([X ** i for i in range(d)])
t = X.dot(w) + np.random.normal(size=n) / beta ** 0.5
return X, t
n = 200
d = 21
w_true = np.zeros(d)
w_true[1] = 1
w_true[3] = -1
beta_true = 100
X_train, t_train = gen_batch(n, w_true, beta_true)
X_test, t_test = gen_batch(n, w_true, beta_true)
# Visualization
fig, ax = plt.subplots()
ax.scatter(X_train[:, 1], t_train, s=3, label="Train data", alpha=0.3)
ax.scatter(X_test[:, 1], t_test, s=3, label="Test data", alpha=0.3)
ax.plot(X_train[:, 1], X_train.dot(w_true), label="Ground truth")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend(ncol=3, loc=9, bbox_to_anchor=(0.5, 1.15))
plt.show()
# -
# %%time
# Relevance Vector Regression
rvr = RelevanceVectorRegression()
w_rvr, sigma_rvr, alpha_rvr, beta_rvr = rvr.fit(X_train, t_train)
# %%time
# Ridge Regression with Cross-Validation
ridge = RidgeCV(cv=20, alphas=10.0 ** np.linspace(-6, 3, 100), fit_intercept=False).fit(X_train, t_train)
w_ridge = ridge.coef_
# %%time
# Lasso Regression with Cross-Validation
lasso = LassoCV(cv=5, alphas=10.0 ** np.linspace(-6, 3, 100), fit_intercept=False, max_iter=2000000).fit(
X_train, t_train
)
w_lasso = lasso.coef_
# +
# Comparison
print("Relevance Vector Regression")
print("Features remaining:", np.sum(alpha_rvr < 1e8), "/", d)
print("Train error:", l2_error(X_train, t_train, w_rvr) / n)
print("Test error: ", l2_error(X_test, t_test, w_rvr) / n)
print("-" * 50)
print("Ridge Regression")
print("Features remaining: NA (no sparsity)")
print("Train error:", l2_error(X_train, t_train, w_ridge) / n)
print("Test error: ", l2_error(X_test, t_test, w_ridge) / n)
print("-" * 50)
print("Lasso Regression")
print("Features remaining:", np.sum(np.abs(w_lasso) > 1e-20), "/", d)
print("Train error:", l2_error(X_train, t_train, w_lasso) / n)
print("Test error: ", l2_error(X_test, t_test, w_lasso) / n)
fig, ax = plt.subplots()
ax.scatter(X_train[:, 1], t_train, s=3, label="Train data", alpha=0.3)
ax.scatter(X_test[:, 1], t_test, s=3, label="Test data", alpha=0.3)
ax.plot(X_train[:, 1], X_train.dot(w_true), label="Ground truth")
ax.plot(X_train[:, 1], X_train.dot(w_rvr), label="RVR")
ax.plot(X_train[:, 1], X_train.dot(w_ridge), label="Ridge")
ax.plot(X_train[:, 1], X_train.dot(w_lasso), label="Lasso")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend(ncol=3, loc=9, bbox_to_anchor=(0.5, 1.25))
plt.show()
# -
# ## Regression with RBF features
#
# A second toy problem is solved here: a noisy `sinc(x)` function must be recovered. Apply the kernel trick with an RBF kernel (the function `sklearn.metrics.pairwise.rbf_kernel` can be used), train three models (SVM regression (SVR), L1 regression (Lasso), and RVR), and compare the test-set error and the quality of the selected support / relevance objects.
# +
# Data generation
from sklearn.metrics.pairwise import rbf_kernel
def gen_batch(n, beta):
points = np.random.uniform(-5, 5, n)
points = np.sort(points)
t = np.sinc(points) + np.random.normal(size=n) / beta ** 0.5
return points, t
n = 200
n_test = 1000
d = n + 1
beta_true = 100
points_train, t_train = gen_batch(n, beta_true)
points_test, t_test = gen_batch(n_test, beta_true)
# RBF-transform
X_train = rbf_kernel(points_train.reshape(-1, 1))
X_test = rbf_kernel(points_test.reshape(-1, 1), points_train.reshape(-1, 1))
# Constant feature
X_train = np.hstack((np.ones((n, 1)), X_train))
X_test = np.hstack((np.ones((n_test, 1)), X_test))
# Visualization
fig, ax = plt.subplots()
ax.scatter(points_train, t_train, s=3, label="Train data", alpha=1)
ax.scatter(points_test, t_test, s=3, label="Test data", alpha=0.2)
ax.plot(points_train, np.sinc(points_train), label="Ground truth")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend(ncol=3, loc=9, bbox_to_anchor=(0.5, 1.15))
plt.show()
# -
# %%time
# Relevance Vector Regression
rvr = RelevanceVectorRegression()
w_rvr, sigma_rvr, alpha_rvr, beta_rvr = rvr.fit(X_train, t_train)
# %%time
# Lasso Regression with Cross-Validation
lasso = LassoCV(
cv=10, alphas=10.0 ** np.linspace(-5, 1, 20), fit_intercept=False, max_iter=100000, tol=1e-2, n_jobs=10
).fit(X_train, t_train)
w_lasso = lasso.coef_
# +
# %%time
# Support Vector Regression
from sklearn.svm import SVR
svr = SVR(gamma=1, tol=1e-6, C=1).fit(points_train.reshape(-1, 1), t_train)
# +
# Comparison
print("Relevance Vector Regression")
print("Objects remaining:", np.sum(alpha_rvr[1:] < 1e8), "/", n)
print("Train error:", l2_error(X_train, t_train, w_rvr) / n)
print("Test error: ", l2_error(X_test, t_test, w_rvr) / n)
print("-" * 50)
print("Lasso Regression")
print("Objects remaining:", np.sum(np.abs(w_lasso[1:]) > 1e-20), "/", n)
print("Train error:", l2_error(X_train, t_train, w_lasso) / n)
print("Test error: ", l2_error(X_test, t_test, w_lasso) / n)
print("-" * 50)
print("Support Vector Regression")
print("Objects remaining:", len(svr.support_), "/", n)
print("Train error:", np.sum((svr.predict(points_train.reshape(-1, 1)) - t_train) ** 2) / n)
print("Test error: ", np.sum((svr.predict(points_test.reshape(-1, 1)) - t_test) ** 2) / n)
fig, ax = plt.subplots()
ax.scatter(points_train, t_train, s=3, label="Train data", alpha=0.3)
ax.scatter(points_test, t_test, s=3, label="Test data", alpha=0.3)
ax.plot(points_test, np.sinc(points_test), label="Ground truth")
ax.plot(points_test, X_test.dot(w_rvr), label="RVR")
ax.plot(points_test, X_test.dot(w_lasso), label="Lasso")
ax.plot(points_test, svr.predict(points_test.reshape(-1, 1)), label="SVR")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend(ncol=3, loc=9, bbox_to_anchor=(0.5, 1.25))
plt.show()
# -
# ### Visualizing the relevant objects for RVR
# +
relevant = alpha_rvr[1:] < 1e8
fig, ax = plt.subplots()
ax.scatter(points_train, t_train, s=3, label="Train data", alpha=0.3)
ax.scatter(points_train[relevant], t_train[relevant], c="tab:blue", s=30, label="Relevant objects")
ax.scatter(points_test, t_test, s=3, label="Test data", alpha=0.3)
ax.plot(points_test, np.sinc(points_test), label="Ground truth")
ax.plot(points_test, X_test.dot(w_rvr), label="RVR")
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.legend(ncol=3, loc=9, bbox_to_anchor=(0.5, 1.25))
plt.show()
# -
# ## Conclusions
#
# In both experiments, the lowest test-set error was achieved with RVR.
# The difference is very small, though, and a different pseudorandom seed could well change the ordering.
#
# On the other hand, RVR achieved the greatest sparsity of the final parameters.
# However, RVR trains comparatively slowly.
#
| lab_1_relevance_vector_regression/Relevance Vector Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="IvVq-VIoPsDs"
# # Developing an AI application
#
# Going forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications.
#
# In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories; you can see a few examples below.
#
# <img src='https://i.imgur.com/6n64KAw.png' width=500px>
#
# The project is broken down into multiple steps:
#
# * Load and preprocess the image dataset
# * Train the image classifier on your dataset
# * Use the trained classifier to predict image content
#
# We'll lead you through each part which you'll implement in Python.
#
# When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.
#
# First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
# + [markdown] colab_type="text" id="rFKYE_43z9po"
# ## Install PyTorch (setup for Google Colab)
# + colab={} colab_type="code" id="H9j1Df2quRrX"
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
# cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
# !pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch
# + [markdown] colab_type="text" id="L1fGqdcBueWG"
# ### Mount your Google Drive
#
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="VhrmH92ztR5G" outputId="01e4c931-8a36-4054-f0a7-5b631899f173"
from google.colab import drive
drive.mount('/content/drive')
# + [markdown] colab_type="text" id="un0rKns_zyuy"
# ## Download the dataset (code setup for Google Colab)
# + colab={} colab_type="code" id="kBmdmSuAvA6X"
# #!wget "https://s3.amazonaws.com/content.udacity-data.com/courses/nd188/flower_data.zip" -P "drive/My Drive/PyTorch Challenge/"
# #!unzip "drive/My Drive/PyTorch Challenge/flower_data.zip" -d "drive/My Drive/PyTorch Challenge/Side Project/Flower_dataset/"
# #!wget "https://github.com/udacity/pytorch_challenge/blob/master/cat_to_name.json" -P "drive/My Drive/PyTorch Challenge/cat_to_name.json"
# + [markdown] colab_type="text" id="rpqNh3qfSybV"
# ## Import the test environment
# ### The functions below are used to calculate the accuracy with a helper function built by the community
# + colab={"base_uri": "https://localhost:8080/", "height": 237} colab_type="code" id="OuAclTBSSv_m" outputId="b801f76d-b379-45e0-d368-718123fdb536"
# !git clone https://github.com/GabrielePicco/deep-learning-flower-identifier
# !pip install requests
# !pip install airtable
import sys
sys.path.insert(0, 'deep-learning-flower-identifier')
from test_model_pytorch_facebook_challenge import publish_evaluated_model, calc_accuracy
# + [markdown] colab_type="text" id="rXpK2VM5k_0y"
# ## Import (for Google Colab)
# + colab={"base_uri": "https://localhost:8080/", "height": 184} colab_type="code" id="7EzSAxFSk0wK" outputId="e8d2e822-1d9e-42e8-9e84-6f264442f6d8"
# we need pillow version of 5.3.0
# we will uninstall the older version first
# !pip uninstall -y Pillow
# install the new one
# !pip install Pillow==5.3.0
# import the new one
import PIL
print(PIL.PILLOW_VERSION)
# this should print 5.3.0. If it doesn't, then restart your runtime:
# Menu > Runtime > Restart Runtime
# + colab={"base_uri": "https://localhost:8080/", "height": 35} colab_type="code" id="CXFANeSHPsDu" outputId="bacd75cb-b0d2-4232-c29c-3b922ff121da"
# #!pip install --no-cache-dir -I pillow
# Imports here
# %matplotlib inline
import time
import os
import json
import copy
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from PIL import Image
from collections import OrderedDict
import torch
from torch import nn, optim
from torch.optim import lr_scheduler
from torch.autograd import Variable
from torchvision import datasets, models, transforms
from google.colab import files
torch.manual_seed(42)
# + [markdown] colab_type="text" id="Vu0jssymPsDy"
# ## Load the data
#
# Here you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). You can [download the data here](https://s3.amazonaws.com/content.udacity-data.com/courses/nd188/flower_data.zip). The dataset is split into two parts, training and validation. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. If you use a pre-trained network, you'll also need to make sure the input data is resized to 224x224 pixels as required by the networks.
#
# The validation set is used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.
#
# The pre-trained networks available from `torchvision` were trained on the ImageNet dataset where each color channel was normalized separately. For both sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values shift each color channel to be centered at 0; after normalization, pixel values fall roughly in the range -2.1 to 2.6 rather than the original [0, 1].
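# As a quick sanity check on the normalization arithmetic (a minimal sketch, not part of the training pipeline; the per-channel formula is `(x - mean[c]) / std[c]`, applied after `ToTensor` has scaled pixels to [0, 1]):

```python
# ImageNet channel statistics used by the torchvision pre-trained models
mean = [0.485, 0.456, 0.406]
std = [0.229, 0.224, 0.225]

# extreme values of channel 0 after ToTensor (which maps pixels to [0, 1])
lo = (0.0 - mean[0]) / std[0]  # darkest possible normalized value
hi = (1.0 - mean[0]) / std[0]  # brightest possible normalized value
print(round(lo, 2), round(hi, 2))
```

# So the normalized tensor is centered near 0 but is not confined to [-1, 1].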
# + colab={} colab_type="code" id="99LuKlIPPsDz"
data_dir = 'drive/My Drive/PyTorch Challenge/Flower_dataset/flower_data/'
train_dir = os.path.join(data_dir, 'train')
valid_dir = os.path.join(data_dir, 'valid')
dirs = {'train': train_dir,
'valid': valid_dir}
# + colab={} colab_type="code" id="hO163BalPsD1"
size = 224
data_transforms = {
'train': transforms.Compose([
transforms.RandomRotation(25),
transforms.RandomResizedCrop(size),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
]),
'valid': transforms.Compose([
transforms.Resize(size + 32),
transforms.CenterCrop(size),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
]),
}
image_datasets = {x: datasets.ImageFolder(dirs[x], transform=data_transforms[x]) for x in ['train', 'valid']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=16, shuffle=True) for x in ['train', 'valid']}
dataset_sizes = {x: len(image_datasets[x])
for x in ['train', 'valid']}
class_names = image_datasets['train'].classes
# + [markdown] colab_type="text" id="B3BD6gY9PsD4"
# ### Label mapping
#
# You'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
# + colab={} colab_type="code" id="itP3TwrqPsD5"
with open('drive/My Drive/PyTorch Challenge/cat_to_name.json', 'r') as f:
cat_to_name = json.load(f)
# + [markdown] colab_type="text" id="6GtO1e5FPsD8"
# # Building and training the classifier
#
# Now that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.
#
# We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students! You can also ask questions on the forums or join the instructors in office hours.
#
# Refer to [the rubric](https://review.udacity.com/#!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:
#
# * Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)
# * Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout
# * Train the classifier layers using backpropagation using the pre-trained network to get the features
# * Track the loss and accuracy on the validation set to determine the best hyperparameters
#
# We've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!
#
# When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.
# + colab={"base_uri": "https://localhost:8080/", "height": 74} colab_type="code" id="qonDUhPZPsD9" outputId="816dd4c0-c469-408b-81b8-c6a3445765f1"
#Load pretrained model
model = models.densenet161(pretrained=True)
#freeze parameters so that we don't backprop through them
for param in model.parameters():
param.requires_grad = False
# + colab={} colab_type="code" id="ep1RdGifg3Mw"
#Uncomment if you want to print model architecture
#model
# + colab={} colab_type="code" id="V_GShFGfxqZW"
# Adjust number of classes
model.classifier = nn.Linear(2208, 102)
# + colab={} colab_type="code" id="5ePL_EUbPsEG"
def train_model(model, criteria, optimizer, scheduler, num_epochs=25, device='cuda'):
model.to(device)
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'valid']:
if phase == 'train':
scheduler.step()
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criteria(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'valid' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
# + colab={} colab_type="code" id="R6_OsfVQPsEJ"
# CrossEntropyLoss combines LogSoftmax and NLLLoss, so no Softmax layer is needed in the model
criteria = nn.CrossEntropyLoss()
# Only the (unfrozen) classifier parameters are being optimized
optimizer = optim.SGD(model.classifier.parameters(), lr=0.006, momentum=0.9, nesterov=True)
# Decay LR by a factor of 0.1 every 7 epochs
sched = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
# Number of epochs
eps=14
# + colab={"base_uri": "https://localhost:8080/", "height": 1339} colab_type="code" id="8q5ZhsXCPsEL" outputId="90a4ece8-ef49-4ba2-8883-e9bcae007635"
device = "cuda" if torch.cuda.is_available() else "cpu"
model_ft = train_model(model, criteria, optimizer, sched, eps, device)
# + [markdown] colab_type="text" id="b8VnBpJxPsEP"
# ## Save the checkpoint
#
# Now that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.
#
# ```model.class_to_idx = image_datasets['train'].class_to_idx```
#
# Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
# + colab={"base_uri": "https://localhost:8080/", "height": 54} colab_type="code" id="4Gh_LSsy07A_" outputId="b3c79c06-3830-4e8a-87c1-1a5a756abb9b"
# Find total parameters and trainable parameters
total_params = sum(p.numel() for p in model.parameters())
print(f'{total_params:,} total parameters.')
total_trainable_params = sum(
p.numel() for p in model.parameters() if p.requires_grad)
print(f'{total_trainable_params:,} training parameters.')
# + colab={} colab_type="code" id="ZNHeDKgGPsEQ"
model_file_name = 'classifier_densenet161.pth'
path = F"drive/My Drive/PyTorch Challenge/{model_file_name}"
model.class_to_idx = image_datasets['train'].class_to_idx
model.cpu()
torch.save({'arch': 'densenet161',
'state_dict': model.state_dict(),
'class_to_idx': model.class_to_idx,
'optimizer_state_dict': optimizer.state_dict(),
'criterion': criteria},
path)
# + [markdown] colab_type="text" id="jmUhvmegPsEW"
# ## Loading the checkpoint
#
# At this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
# + colab={} colab_type="code" id="W62JW_oRPsEW"
def load_model(checkpoint_path):
chpt = torch.load(checkpoint_path)
pretrained_model = getattr(models, chpt['arch'])
if callable(pretrained_model):
model = pretrained_model(pretrained=True)
for param in model.parameters():
param.requires_grad = False
else:
    raise ValueError("Sorry, base architecture not recognized")
model.class_to_idx = chpt['class_to_idx']
# Create the classifier
model.classifier = nn.Linear(2208, 102)
model.load_state_dict(chpt['state_dict'])
return model
# + colab={"base_uri": "https://localhost:8080/", "height": 588} colab_type="code" id="GcHUjCibPsEZ" outputId="f0563402-6d89-408a-c23a-44a4d8720374"
model = load_model('drive/My Drive/PyTorch Challenge/classifier_densenet161.pth')
calc_accuracy(model, input_image_size=224, testset_path=valid_dir)
# + [markdown] colab_type="text" id="eihTdAizaadw"
# ### Retrain only specific layers
# + colab={"base_uri": "https://localhost:8080/", "height": 237} colab_type="code" id="fCrAE-0t5iyX" outputId="391b86a3-c308-4f1f-9f97-a6ecbdf4fba6"
# This is for DenseNet architecture, you need to adjust the method for each architecture
for name in model.children():
for child, config in name.named_children():
if child in ['denseblock4', 'norm5']:
print(str(child) + ' is unfrozen')
for param in config.parameters():
param.requires_grad = True
else:
print(str(child) + ' is frozen')
for param in config.parameters():
param.requires_grad = False
# + colab={} colab_type="code" id="pBj4OPEFdcRz"
#settings to train using different optimizer
criteria = nn.CrossEntropyLoss()
#adjust optimizer
optimizer = torch.optim.SGD(filter(lambda p: p.requires_grad, model.parameters()), lr=0.00006, momentum=0.9, nesterov=True)
# Use scheduler
sched = lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
# Number of epochs
eps=12
# + colab={"base_uri": "https://localhost:8080/", "height": 1156} colab_type="code" id="2Bp-jNQ1dozX" outputId="6b6a14e5-6352-40df-80f5-92e80d675e28"
#train model
device = "cuda" if torch.cuda.is_available() else "cpu"
model_full = train_model(model, criteria, optimizer, sched, eps, device)
# + colab={} colab_type="code" id="AiR3K2HldzfG"
#save model
model_file_name = 'classifier_densenet161_2.pth'
path = F"drive/My Drive/PyTorch Challenge/{model_file_name}"
model.class_to_idx = image_datasets['train'].class_to_idx
model.cpu()
torch.save({'arch': 'densenet161',
'state_dict': model.state_dict(),
'class_to_idx': model.class_to_idx,
'optimizer_state_dict': optimizer.state_dict(),
'criterion': criteria},
path)
# + colab={"base_uri": "https://localhost:8080/", "height": 531} colab_type="code" id="mqK4uzH7duTB" outputId="09905866-fc37-4c1a-bfc5-73a47ba2774c"
#calculate accuracy
calc_accuracy(model, input_image_size=224, testset_path=valid_dir)
# + [markdown] colab_type="text" id="0I7oB1aMcPv9"
# ## Publish the result on the Airtable shared leaderboard
# + colab={} colab_type="code" id="NHdX0lizcUZd"
#publish_evaluated_model(model, input_image_size=224, username="@Slack.Username", model_name="VGG19", optim="Adam", criteria="NLLLoss", scheduler="StepLR", epoch=5)
| lab_challenge/Final_Lab_Challenge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # The heart of Austin
#
# For this tutorial, imagine you are a data scientist in a medical device company. We will learn how to simulate the Heart Rate (HR) of ten citizens of Austin, TX. Our virtual study participants will be ten 25-year-old individuals who sleep 8 hours a day, perform normal activities for most of the day (14 hours a day; we will refer to these as rest activities), and perform high-intensity exercise (2 hours a day). We will follow the participants over a period of one week, with measurements of their HR for every hour of the week.
#
# The dataset that we will simulate will provide opportunities to practice working with Numpy Arrays and randn, plotting time series and scatter plots using matplotlib.
# ##### Learning outcomes:
# - Advanced operations with Numpy Arrays
# - Advanced indexing into Numpy Arrays
# - Simulation of datasets assuming normally distributed data (randn)
# - computation of basic descriptive statistics (mean, median and standard deviation)
# - Data visualization using matplotlib
# - use of plot for simple data
# - use of plot for time series
# - use of scatter
# Before doing anything, we will import
# all the stuff we think we need
import numpy as np
import matplotlib.pyplot as plt
# ### Setting up the simulation and the variables needed
#
# Our simulation will use some basic concepts from statistics that you might be familiar with (but it does not really matter if you are not!). HR is measured in Beats Per Minute (BPM). The HR of a population is somewhat variable across individuals and generally normally distributed. This means that some folks have a higher HR at rest, others a lower one, etc.
#
# The following will be the assumptions used for our simulation.
#
# ###### Rest HR
# The mean rest HR of a population of 25-year-old individuals is about `75` BPM. The standard deviation across individuals (the between-subject variability) is about `10` BPM. This means that whereas the majority of individuals will have an HR of about `75` BPM, some will have one as low as `55` BPM and others as high as `95` BPM. We will assume that the Rest HR is associated with the Sleep and Exercise HR.
#
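# The 55-95 BPM range quoted above is just the mean plus or minus two standard deviations, which covers roughly 95% of a normal distribution (a small illustrative check, not part of the simulation itself):

```python
mean_hr = 75  # population mean Rest HR (BPM)
sd_hr = 10    # between-subject standard deviation (BPM)

# about 95% of individuals fall within mean +/- 2 SD
low, high = mean_hr - 2 * sd_hr, mean_hr + 2 * sd_hr
print(low, high)  # 55 95
```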
# ###### Sleep HR
# We will assume that the mean HR during sleep goes down by about 20 BPM from the population mean of `75` BPM. That means that an average individual will have an HR of about `55` BPM during sleep. More generally, sleep HR is about `70%` of rest HR (`55/75 ≈ 0.73`).
#
# ###### Exercise HR
# We will use a common formula and assume that the max HR during high-intensity exercise is predicted as follows: `max HR = 220 - Age`. That means that for our 25-year-old Austinites the expected HR under high-intensity exercise is 195 BPM, about `2.6` times the resting HR in the population (`195/75 = 2.6`). So we will use that number, `2.6`, for our calculations.
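# The arithmetic behind the `2.6` factor can be checked in a couple of lines (illustrative only; `220 - Age` is just a rule of thumb):

```python
age = 25
rest_hr = 75
max_hr = 220 - age               # rule-of-thumb max HR estimate
exercise_factor = max_hr / rest_hr
print(max_hr, exercise_factor)   # 195 2.6
```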
# Remember, we are assuming a correlation between Rest HR, Sleep HR and Exercise HR. To simulate that, we can start by simulating ten individuals drawn randomly from the population of Austin with a mean Rest HR of 75 BPM. After that, we can simulate the Sleep HR as a decrease of each individual's Rest HR and, similarly, the Exercise HR as an increase of it. In principle, Sleep and Exercise HR would also have their own variance, independent of the variance already existing in the group of ten Austinites. Because of time constraints we will only add a small amount of this extra variability.
# +
# Let's define the variables we need
n = 10 # We need the number of individuals
# Population Rest Heart Rate definitions
hr_r = 75 # The average Rest HR in Austin for 25-year-old individuals
hr_r_sd_factor = .1 # Standard Deviation factor for the Rest HR (10% of the mean)
# Population Sleep and Exercise definitions
hr_e_factor = 2.6 # We need the increase in HR from Rest during high-intensity exercise.
# This will allow us to compute the Exercise HR given each
# individuals' Rest HR, say from `75 * 2.6 = 195`
hr_s_factor = .7 # Decrease in HR during sleep. HR during sleep is only about
# 70% that during awake rest. Say `75 * 0.7 = 52.5`
add_var_factor =.05 # Factor determining the additional variance added to HR during
# Sleep and Exercise periods (variance beyond Rest HR variance).
# Time variables
duration_exercise = 2 # Number of hours of high-intensity exercise per day
duration_sleep = 8 # Number of hours of sleep per day
duration_rest = 14 # Number of hours of rest activities per day
duration_day = 24 # Number of hours in a day
duration_week = 7 # Number of days in a week
# -
# ### Simulating the Rest HR for 10 individuals
#
# Next, we will compute the distribution of HR across the 10 individuals
# by implementing the following assumptions:
# - The Rest HR of all individuals comes from the same population, that is the distribution of Austin
# - The Rest, Sleep and Exercise HR are correlated with each other for a given subject (i.e., if a subject has a high Rest HR, their Exercise and Sleep HR will also be higher).
# +
# First we define the variables needed for the *Rest HR*
Rest_mean = hr_r # This is the mean HR of all 25-years old Austinites
Rest_SD = Rest_mean * hr_r_sd_factor # we set the SD to be 10% of the mean HR
# We will create the distribution of rest HR using randn
#
# randn generates random numbers with mean 0 and SD = 1
# To match the distribution needed in our situation
# we multiply the numbers generated by `randn` by the
# SD of the HR and add the mean HR
hr_rest_individuals = Rest_mean + Rest_SD * np.random.randn(n,1)
# Next, we will sort the individuals from the lowest to the highest Resting HR
# This will represent our simulated distribution of 25-years old Austinites' HR
# It will be the base for many of the following operations
hr_rest_individuals = np.sort(hr_rest_individuals, axis=0)
# -
# Let's plot these individuals
plt.plot(hr_rest_individuals, "o");
plt.title('Rest HR');
plt.xlabel('Individuals');
plt.ylabel('Rest HR (BPM)');
# Questions:
# - What is the mean HR?
# - What was supposed to be?
# - Why are they different?
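# One way to answer these questions is to compute the sample statistics directly. With only `n = 10` draws, the sample mean will generally differ from the population mean of 75 BPM simply because of sampling variability. The snippet below redraws a sample with the same recipe as `hr_rest_individuals` (same mean 75 and SD 7.5), using a fixed seed so it is reproducible:

```python
import numpy as np

np.random.seed(0)  # fixed seed so the illustration is reproducible
sample = 75 + 7.5 * np.random.randn(10, 1)  # same recipe as hr_rest_individuals
print(sample.mean(), sample.std())  # close to, but not exactly, 75 and 7.5
```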
# ### Simulating the Exercise HR for 10 individuals
#
# Next, we will compute the distribution of HR across the same 10 individuals, but during high-intensity exercises. We will call this Exercise HR. The assumptions in the simulation below are the same as those in the previous section. Hereafter, we also assume that there is a correlation between the Rest HR and the Exercise HR.
# +
# Exercise HR
# The mean HR of all 25-year-old Austinites
# during high-intensity Exercise. Note how we start from the Rest HR.
# After that, we scale the values to the mean Exercise HR;
# this operation assures a correlation of 1 between the Rest and Exercise HR
Exercise_mean = hr_rest_individuals * hr_e_factor
# We reduce the correlation between Rest and Exercise HR by adding a small
# variation (Standard Deviation, SD) to the Exercise HR data
Exercise_SD = hr_rest_individuals * add_var_factor
# We will create the distribution of Exercise HR using randn
#
# randn generates random numbers with mean 0 and SD = 1
# To match the distribution needed in our situation
# we multiply the numbers generated by `randn` by the
# SD of the HR and add the mean HR
hr_exercise_individuals = Exercise_mean + (Exercise_SD * np.random.randn(n,1))
# Note that here we do not sort. This is because sorting would make us lose
# the pairing between hr_exercise_individuals and hr_rest_individuals.
# Instead, we want to keep the entries in the two arrays paired, so that
# subject 1 in the first array corresponds to subject 1 in the second.
# -
plt.plot(hr_exercise_individuals, "o");
plt.title('Exercise HR');
plt.xlabel('Individuals');
plt.ylabel('Heart Rate (BPM)');
# Questions:
# - What is the mean HR?
# - What was supposed to be?
# - Why are they different?
# ### Simulating the Sleep HR for 10 individuals
#
# Finally, we will compute the distribution of HR across the same 10 individuals, but during sleep. We will call this Sleep HR. The assumptions in the simulation below are the same as those in the previous section. Hereafter, we also assume that there is a correlation between the Rest HR and the Sleep HR.
# +
# Sleep HR
# The mean HR of all 25-years old Austinites
# during Sleep. Note how we start from the Rest HR.
Sleep_mean = hr_rest_individuals * hr_s_factor
# We reduce the correlation between Rest and Sleep HR by adding a small
# variation (Standard Deviation, SD) to the Sleep HR data
Sleep_SD = hr_rest_individuals * add_var_factor
# We will create the distribution of Sleep HR using randn
#
# randn generates random numbers with mean 0 and SD = 1
# To match the distribution needed in our situation
# we multiply the numbers generated by `randn` by the
# SD of the HR and add the mean HR
hr_sleep_individuals = Sleep_mean + (Sleep_SD * np.random.randn(n,1))
# Note that here we do not sort. This is because sorting would make us lose
# the pairing between hr_sleep_individuals and hr_rest_individuals.
# Instead, we want to keep the entries in the two arrays paired, so that
# subject 1 in the first array corresponds to subject 1 in the second.
# -
plt.plot(hr_sleep_individuals, "o");
plt.title('Sleep HR');
plt.xlabel('Individuals');
plt.ylabel('Heart Rate (BPM)');
# Questions:
# - What is the mean HR?
# - What was supposed to be?
# - Why are they different?
# ### Next, let's take a look at the three samples (Rest, Exercise and Sleep)
#
# Above we have generated three samples representing the Sleep, Rest and Exercise Heart Rate for `n` individuals from Austin. We would like to take a look at the results and compare the samples. To do so, we can try to plot them together in the same plot, say, to see their overall relationships.
plt.scatter(hr_rest_individuals, hr_sleep_individuals, color='k');
plt.scatter(hr_rest_individuals, hr_exercise_individuals);
plt.title('Rest vs. {Exercise, Sleep} HR');
plt.xlabel('Rest HR (BPM)');
plt.ylabel('Exercise (Blue) / Sleep (Black) HR (BPM)');
# As can be seen, the plot is not very helpful. This is primarily due to the major difference in means between the two samples (Exercise and Sleep), which pushes the Sleep sample all the way down. But we can always look at the two scatter plots independently. To do this we will use a slightly more advanced plot command that will allow us to have two plots in the same row.
# +
FontSize = 16 # Note: I am setting a slightly larger font size; it looks better
fig, ax = plt.subplots(1,2, figsize=(16,6)); # Here I open a few sub-plots and set the figure size
# I do the actual plot of the `Rest vs. Sleep HR` below
ax[0].scatter(hr_rest_individuals, hr_sleep_individuals, color='k');
# The three lines below set the labels and the titles
# with the font size I chose above.
ax[0].set_xlabel("Rest HR (BPM)", fontsize=FontSize);
ax[0].set_ylabel("Sleep HR (BPM)", fontsize=FontSize);
ax[0].set_title('Rest vs. Sleep HR', fontsize=FontSize);
# The lines below are identical to the ones above but for `exercise`
ax[1].scatter(hr_rest_individuals, hr_exercise_individuals);
ax[1].set_xlabel("Rest HR (BPM)", fontsize=FontSize);
ax[1].set_ylabel("Exercise HR (BPM)", fontsize=FontSize);
ax[1].set_title('Rest vs. Exercise HR', fontsize=FontSize);
# -
# ### Simulating a week full of Austinites' heart beats
#
# Next, we will want to simulate the 7-days of a week so to start simulating the data across hours and days of the 10 Austinites. To do so we will use a single numpy array.
#
# Let's do some calculations. We have 24 hours in a day and 7 days in a week. Given that our measurements come at one HR value per hour and we have `n` subjects (10), we will need to initialize an array that is 10 x 168 (subjects by hours).
#
# The array will need to hold the Rest HR between 9am and 9pm, the Exercise HR between 6 and 8 am and the Sleep HR between 10pm and 5am. Note that we will assume that all participants will exercise at the same time, between 6 and 8 am, say, before going to work. We will also assume that all the participants will go to bed at the same time 10 pm.
#
# To simulate the full dataset across all days, we will need to insert the proper HR in the proper slots of the numpy array. This is our opportunity to practice array indexing. So let's get started.
# We will first simulate the array that will contain the full set of time series. We will initialize the array with all `zeros` with one dimension containing the number of subjects (`n`) and the other the number of hours in a week (`24*7=168`).
#
# After initializing the array, we will then substitute the appropriate slots with the values simulated for Sleep, Rest or Exercise HR. If you do not remember, these values are the ones shown in the last two plots. One value per period (Sleep, Rest and Exercise) and per subject was simulated above.
# The first thing we will do is to create
# the array to hold 10 individuals and 7 x 24 hours;
# this means that we will need to create an array 10 x 168
HR_time_series = np.zeros([n, duration_day*duration_week])
# Our goal next is to set different locations in the array `HR_time_series` to the different values of HR for Sleep, Rest and Exercise. To do so, we need to identify the correct hours in each day.
#
# Below we created an approach to do that. First, we create a START and END hour for each activity in a day. For example, if exercises start at 6 am and end at 8 am we will set up a variable (a numpy array) with `6` in the first slot and `8` in the second, e.g., `a_numpy_array[0]=6` and `a_numpy_array[1]=8`.
#
# These variables will be convenient, but they are not necessary. Indexing could be done manually, entering indices one by one. But after thinking about the problem, the indices for one day can be *transformed* into indices for the second day by adding the number `24` to them. Similarly, the indices for the third day are the indices for the second day plus `24`. So, in this case we identified a convenient trick: an approach that allows us to populate the correct days from 1-7 by first creating arrays holding the correct indices only for day 1, and then simply transforming the indices for day one into day two, three, four, etc. by adding 24, multiple times, sequentially.
#
# Below is the code. First we define indices for each segment in our Day 1. After that we use the indices to populate day 1. Finally, we transform the indices for day 1 into those for day 2 by adding the number 24 (24 hours). We do this multiple times until we reach day 7, the last day of the week.
#
# Let's see how it works and let's visualize the array as we fill it out with HR numbers.
# +
# Preparing indices to address the 168 hours.
#
# After creating the array we will want to create helpful variables
# holding the indices into the array for the various periods
# sleep, exercise and rest.
# So, we will need to index the different time allocations, sleep, rest, exercise
# in the proper hour-slots, for each day of the week. To do so, we prepare
# variables that will hold the time slots in hours of the day,
# yet, coded in 0-based Python indices (the `-1` below will do this trick).
time_exercise = np.array([6,8])-1 # Hours of the day exercise start and end
# in python's 0-based indexing
time_rest = np.array([8,22])-1 # Hours of the day the normal 'rest' activities
# start and end in python's 0-based indexing
# 8 am to 10 pm
# We will treat Sleeptime as divided into night sleep (until midnight)
# and morning sleep 12-5am. This will help with some of the indexing we will
# need to do.
time_sleep_n = np.array([22,24])-1 # Hours of the *night* dedicated to sleep
# start and end in python's 0-based indexing
# 10 pm to 12 am
time_sleep_m = np.array([1,6])-1 # Hours of the *morning* dedicated to sleep
# start and end in python's 0-based indexing
# 12 am to 6 am
# -
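# As a quick sanity check of the 0-based conversion (not part of the original tutorial): `np.arange(start, end)` excludes the end point, so the exercise window covers array columns 5 and 6, the slots for 6-7 am and 7-8 am.

```python
import numpy as np

# hours of the day coded as 0-based column indices
time_exercise = np.array([6, 8]) - 1
cols = np.arange(time_exercise[0], time_exercise[1])
print(cols)  # → [5 6]
```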
# ##### Simulating the Exercise HR in the morning hours
# Ok, after setting up the variables to use for indexing, let's start populating our time series. To do so, we will use precisely the variables generated above. The variables will facilitate addressing the proper locations in `HR_time_series`.
# +
# Adding in the Exercise HR:
# we fill the relevant 2-hour slots of the
# time series with each individual's Exercise HR.
# Day 1:
# We build a range of indices between the start and end point
day1 = np.arange(time_exercise[0],time_exercise[1])
HR_time_series[:,day1] = hr_exercise_individuals
# To use the same variables created above for day 1 and get
# the correct indices for day 2, 3 ,4 etc, we will add 24
# (duration_day) cumulatively to each new day
# Day 2:
day2 = day1 + duration_day # adding 24 to the indices of day 1
HR_time_series[:,day2] = hr_exercise_individuals
# Day 3:
day3 = day2 + duration_day # adding 24 to the indices of day 2
HR_time_series[:,day3] = hr_exercise_individuals
# Day 4:
day4 = day3 + duration_day # adding 24 to the indices of day 3
HR_time_series[:,day4] = hr_exercise_individuals
# Day 5:
day5 = day4 + duration_day # adding 24 to the indices of day 4
HR_time_series[:,day5] = hr_exercise_individuals
# Day 6:
day6 = day5 + duration_day # adding 24 to the indices of day 5
HR_time_series[:,day6] = hr_exercise_individuals
# Day 7:
day7 = day6 + duration_day # adding 24 to the indices of day 6
HR_time_series[:,day7] = hr_exercise_individuals
# -
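# The seven unrolled assignments above can also be written as a short loop, shifting the day-1 indices by 24 for each subsequent day. A minimal self-contained sketch (the HR values here are made up; in the tutorial they come from `hr_exercise_individuals`, shaped as a column so it broadcasts across the selected hours):

```python
import numpy as np

n, duration_day, duration_week = 10, 24, 7
HR_time_series = np.zeros([n, duration_day * duration_week])

# hypothetical per-subject Exercise HR, as a column vector
hr_exercise = np.random.normal(120, 5, size=(n, 1))

time_exercise = np.array([6, 8]) - 1
day1 = np.arange(time_exercise[0], time_exercise[1])
for d in range(duration_week):
    # day d's columns are day 1's columns shifted by 24*d
    HR_time_series[:, day1 + d * duration_day] = hr_exercise
```

# This fills exactly the same slots as the day-by-day version above.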
# Let's take a look at the time series we just created. It should have the mean HR for each participant in the time slots (hours) allocated.
plt.figure(figsize=(20,6))
plt.plot(HR_time_series.T);
plt.title('HR Time Series (7 days, 24 hours, 10 participants)');
plt.xlabel('Hour of the week');
plt.ylabel('Exercise Heart Rate (BPM)');
# ##### Simulating the Rest HR in the day hours
# Next we will simulate the time series for the Rest HR during the daylight hours. Let's continue populating our time series. To do so, we will use the variable `HR_time_series` and `hr_rest_individuals` as the values to fill in.
#
# The code will be extremely similar to the one used for the Exercise HR!
# +
# Adding in the Rest HR
# Day 1:
day1 = np.arange(time_rest[0],time_rest[1])
HR_time_series[:,day1] = hr_rest_individuals
# To reuse the variables we created above and get the indices
# pointing to the proper locations for each subsequent day,
# we add 24 (duration_day) to the indices of each new day
# Day 2:
day2 = day1 + duration_day
HR_time_series[:,day2] = hr_rest_individuals
# Day 3:
day3 = day2 + duration_day
HR_time_series[:,day3] = hr_rest_individuals
# Day 4:
day4 = day3 + duration_day
HR_time_series[:,day4] = hr_rest_individuals
# Day 5:
day5 = day4 + duration_day
HR_time_series[:,day5] = hr_rest_individuals
# Day 6:
day6 = day5 + duration_day
HR_time_series[:,day6] = hr_rest_individuals
# Day 7:
day7 = day6 + duration_day
HR_time_series[:,day7] = hr_rest_individuals
# -
plt.figure(figsize=(20,6))
plt.plot(HR_time_series.T); # Note here we need to rotate (transpose the array for plotting)
plt.title('HR Time Series (7 days, 24 hours, 10 participants)');
plt.xlabel('Hour of the week');
plt.ylabel('Rest and Exercise Heart Rate (BPM)');
# ##### Simulating the Sleep HR in the day hours
# Finally, we will simulate the time series for the Sleep HR. This HR is a little bit more complicated. Indeed, we had to create one variable for the morning and one for the evening. This was the way we decided to keep a 24-hour cycle while also assigning some of the hours of sleep to the night (10 pm - 12 am) and others to the morning (1 - 5 am).
#
# So for sleep we will populate the time series using `hr_sleep_individuals` as the value. But in this case we will need to deal with a day divided into two fragments: nights and mornings. This is because our indices are not circular; they do not wrap around midnight like the hours of a day do.
#
# The code will be a little bit longer but conceptually similar.
# +
# Adding in the Sleep HR
# Below we repeat the same operations already described above.
# We have to do it twice, once for the evening segment and once
# for the morning segment of the sleep period.
# NIGHT fragment of the sleep hours
#
# Day 1:
day1 = np.arange(time_sleep_n[0],time_sleep_n[1])
HR_time_series[:,day1] = hr_sleep_individuals
# Day 2:
day2 = day1 + duration_day
HR_time_series[:,day2] = hr_sleep_individuals
# Day 3:
day3 = day2 + duration_day
HR_time_series[:,day3] = hr_sleep_individuals
# Day 4:
day4 = day3 + duration_day
HR_time_series[:,day4] = hr_sleep_individuals
# Day 5:
day5 = day4 + duration_day
HR_time_series[:,day5] = hr_sleep_individuals
# Day 6:
day6 = day5 + duration_day
HR_time_series[:,day6] = hr_sleep_individuals
# Day 7:
day7 = day6 + duration_day
HR_time_series[:,day7] = hr_sleep_individuals
# -
plt.figure(figsize=(20,6))
plt.plot(HR_time_series.T); # Note here we need to rotate (transpose the array for plotting)
plt.title('HR Time Series (7 days, 24 hours, 10 participants)');
plt.xlabel('Hour of the week');
plt.ylabel('Rest, Sleep and Exercise Heart Rate (BPM)');
# +
# MORNING fragment of the sleep hours
#
# Day 1:
day1 = np.arange(time_sleep_m[0],time_sleep_m[1])
HR_time_series[:,day1] = hr_sleep_individuals
# Day 2:
day2 = day1 + duration_day
HR_time_series[:,day2] = hr_sleep_individuals
# Day 3:
day3 = day2 + duration_day
HR_time_series[:,day3] = hr_sleep_individuals
# Day 4:
day4 = day3 + duration_day
HR_time_series[:,day4] = hr_sleep_individuals
# Day 5:
day5 = day4 + duration_day
HR_time_series[:,day5] = hr_sleep_individuals
# Day 6:
day6 = day5 + duration_day
HR_time_series[:,day6] = hr_sleep_individuals
# Day 7:
day7 = day6 + duration_day
HR_time_series[:,day7] = hr_sleep_individuals
# -
plt.figure(figsize=(20,6))
plt.plot(HR_time_series.T); # Note here we need to rotate (transpose the array for plotting)
plt.title('HR Time Series (7 days, 24 hours, 10 participants)');
plt.xlabel('Hour of the week');
plt.ylabel('Rest, Sleep and Exercise Heart Rate (BPM)');
# ##### Using imshow to visualize the time series array
# We can also try imshow. We have used that before to show the content of numpy arrays. It might be a helpful visualization, providing a different view of the data and of the variability across participants.
plt.figure(figsize=(20,6))
plt.imshow(HR_time_series);
# That seems to help a little. We can see colors for each band. Each color is a participant's individual HR during the corresponding activity period. There is a color gradient because the subjects are sorted by HR.
# #### Done with this tutorial!
| tutorial011.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Sampling of tropical marine cloud-radiative effects and cloud cover as a function of pressure velocity at 500hPa
#
# This notebook reproduces Figure 1b. The sampling time-scale is monthly.
#
# Data: ERA-Interim for omega at 500hPa. CERES EBAF edition 4.1 for cloud-radiative effects. CALIPSO-GOCCP for cloud cover. ERA-Interim and CALIPSO-GOCCP have been interpolated to the CERES EBAF horizontal grid.
# ### Load libraries
import xarray as xr
import numpy as np
import matplotlib.pyplot as plt
# For reference, print package versions to screen
print('xarray: ', xr.__version__)
print('numpy: ', np.__version__)
import matplotlib; print('matplotlib:', matplotlib.__version__); del matplotlib
# ### Load data for common years 2007-2018
#
# Note that for CALIPSO-GOCCP, February 2016 is missing, so this month is also excluded for ERA-Interim and CERES EBAF.
# ERA-Interim:
omega = ( xr.load_dataset('../../data/obs/ERAI_omega500_monthly_2007-2018.remapcon_ceresgrid.nc')
[['initial_time0_hours', 'lat', 'lon', 'W_GDS4_ISBL_S123']].
rename({'initial_time0_hours':'time', 'W_GDS4_ISBL_S123':'omega'}) )
# convert omega from Pa/s to hPa/day
omega['omega'] = omega['omega']*86400/100
#exclude February 2016
omega = omega.sel(time=~((omega.time.dt.year == 2016) & (omega.time.dt.month == 2)))
# sea-land mask: ocean=0, land=1
slm = xr.load_dataset('../../data/obs/ERAI_land.remapcon_ceresgrid.nc').isel(time=0).drop('time')
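# As a quick unit check on the omega conversion above (not from the original notebook): 1 Pa/s corresponds to 86400/100 = 864 hPa/day, which is the factor applied to the `omega` field.

```python
seconds_per_day = 86400   # s/day
pa_per_hpa = 100          # Pa per hPa
factor = seconds_per_day / pa_per_hpa
print(factor)  # → 864.0
```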
# CERES EBAF:
cre = xr.load_dataset('../../data/obs/CERES_EBAF_Ed4.1_200701-201812.cre_toa_sfc.nc')
#exclude February 2016
cre = cre.sel(time=~((cre.time.dt.year == 2016) & (cre.time.dt.month == 2)))
# CALIPSO-GOCCP:
# note again that February 2016 is missing
clc = ( xr.load_dataset('../../data/obs/MapLowMidHigh330m_200606-201910_avg_CFMIP2_sat_3.1.2'+
'.2007-2018.remapcon_ceresgrid.nc')
[['clhcalipso','cltcalipso']].squeeze() )
# ### Set all land points to zero
# This is a poor man's way to achieve the masking, but sufficient for the purpose here.
for i in range(slm.lon.size):
for j in range(slm.lat.size):
if slm['land'][j,i]>0.0:
omega['omega' ][:,j,i] = np.nan
cre['toa_cre_sw_mon'][:,j,i] = np.nan
cre['toa_cre_lw_mon'][:,j,i] = np.nan
cre['sfc_cre_net_sw_mon'][:,j,i] = np.nan
cre['sfc_cre_net_lw_mon'][:,j,i] = np.nan
clc['clhcalipso' ][:,j,i] = np.nan
clc['cltcalipso' ][:,j,i] = np.nan
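# The double loop above can be vectorized: broadcast the 2-D land mask across the time axis and blank all land points at once. Here is a sketch of the idea with plain NumPy arrays and a made-up grid (on the xarray Datasets of this notebook, `ds.where(slm['land'] == 0)` achieves the same masking):

```python
import numpy as np

# hypothetical small grid: 3 time steps, 4 lats, 5 lons
land = np.random.rand(4, 5) > 0.5      # True over land points
field = np.random.rand(3, 4, 5)        # stand-in for omega or a CRE field

# broadcast the 2-D mask over the time axis; land points become NaN
masked = np.where(land[None, :, :], np.nan, field)
```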
# ### Sampling based on vertical velocity
# Define omega bins: 5hPa/day wide omega bins with centers ranging from -97.5 to 97.5.
bins_edges = 100*np.linspace(-1,1,41)
bins = bins_edges[0:40]+2.5
# Sampling function
def make_omega_sampling(omega, data, bins_edges):
# define surface area weights
weights = ( omega*0.0 +
np.expand_dims(np.cos(np.deg2rad(omega.lat)), axis=[0,2]) )
# make omega histogram
counts, _ = np.histogram( omega, bins=bins_edges, weights=weights, density=True )
# for each entry of omega, indices gives the bin index it belongs to
indices = np.digitize(omega, bins)
# resample data on omega bins
data_sampled = np.zeros(bins.size)
for ibin in range(bins.size):
data_sampled[ibin] = ( np.nansum( data.values[indices==ibin] *
weights.values[indices==ibin] ) /
np.nansum( weights.values[indices==ibin] ) )
return counts, data_sampled
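# The core of `make_omega_sampling` is a conditional mean of the data per omega bin. The same pattern with plain NumPy arrays (no xarray, uniform weights, made-up data) looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)
omega = rng.uniform(-100, 100, size=2000)    # hypothetical omega_500 values
data = 0.5 * omega + rng.normal(0, 5, 2000)  # a field that co-varies with omega

bins_edges = 100 * np.linspace(-1, 1, 41)    # 5 hPa/day wide bins
bins = bins_edges[0:40] + 2.5                # bin centers

indices = np.digitize(omega, bins)           # bin index for every sample
data_sampled = np.array([data[indices == b].mean() for b in range(bins.size)])
```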
# Definition of tropical sampling region: only points between 30 deg N/S are taken into account.
latn=30; lats=-30
omega_pdf, toaswcre_sampled = make_omega_sampling(omega['omega'].sel(lat=slice(lats,latn)),
cre['toa_cre_sw_mon'].sel(lat=slice(lats,latn)),
bins_edges)
_, atmlwcre_sampled = make_omega_sampling(omega['omega'].sel(lat=slice(lats,latn)),
cre['toa_cre_lw_mon'].sel(lat=slice(lats,latn))-
cre['sfc_cre_net_lw_mon'].sel(lat=slice(lats,latn)),
bins_edges)
_, atmswcre_sampled = make_omega_sampling(omega['omega'].sel(lat=slice(lats,latn)),
cre['toa_cre_sw_mon'].sel(lat=slice(lats,latn))-
cre['sfc_cre_net_sw_mon'].sel(lat=slice(lats,latn)),
bins_edges)
_, clch_sampled = make_omega_sampling(omega['omega'].sel(lat=slice(lats,latn)),
clc['clhcalipso'].sel(lat=slice(lats,latn)),
bins_edges)
_, clct_sampled = make_omega_sampling(omega['omega'].sel(lat=slice(lats,latn)),
clc['cltcalipso'].sel(lat=slice(lats,latn)),
bins_edges)
# ### Plotting
# +
plt.figure(figsize=(5.915,3))
ax = plt.subplot(1,1,1)
ax.spines['left'].set_bounds(10, 90)
ax.spines['bottom'].set_bounds(-80,80)
ax.spines['top'].set_color('none')
ax.spines['right'].set_color('none')
plt.plot(bins[4:36], 100*clch_sampled[4:36], 'dimgray', linestyle='--')
plt.plot(bins[4:36], 100*clct_sampled[4:36], 'k', linestyle='--')
plt.xlim(-90,90); plt.ylim(10,90)
ax.xaxis.set_ticks([-80,-60,-40,-20,0,20,40,60,80])
ax.xaxis.set_ticklabels([-80,-60,-40,-20,0,20,40,60,80], fontsize=8)
ax.yaxis.set_ticks([10,20,30,40,50,60,70,80,90])
ax.yaxis.set_ticklabels([10,'',30,'',50,'',70,'',90], fontsize=8)
plt.xlabel(r'$\omega_{500}$ / hPa day$^{-1}$',fontsize=10)
plt.ylabel('cloud cover / %',fontsize=10)
# twin object for two different y-axis on the sample plot
ax2=ax.twinx()
ax2.spines['right'].set_bounds(-90, 60)
ax2.spines['bottom'].set_color('none')
ax2.spines['top'].set_color('none')
ax2.spines['left'].set_color('none')
ax2.plot(bins[4:36],toaswcre_sampled[4:36],'royalblue')
ax2.plot(bins[4:36],atmlwcre_sampled[4:36]+atmswcre_sampled[4:36],'firebrick')
plt.xlim(-90,90);plt.ylim(-90,60)
ax2.yaxis.set_ticks([-90,-60,-30,0,30,60])
ax2.yaxis.set_ticklabels([-90,-60,-30,0,30,60], fontsize=8)
plt.ylabel(r'CRE / Wm$^{-2}$',fontsize=10)
plt.text(80,65,'total cloud cover', ha='right', color='k', size=12)
plt.text(80,53,'high-level cloud cover', ha='right', color='dimgray', size=12)
plt.text(80,41,'TOA SW CRE', ha='right', color='royalblue', size=12)
plt.text(80,29,'ATM NET CRE', ha='right', color='firebrick', size=12)
plt.savefig('figure-1b.pdf')
| figures/figure-1/make_figure-1b.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/hayasam/MachineLearning/blob/master/Anomalies/Credit_Card_Fraud.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="xohAxkytgvbx" colab_type="text"
# # Credit Card Fraud Detection
#
# The dataset contains transactions made by credit cards in September 2013 by European cardholders. This dataset represents transactions that occurred over two days, with 492 cases of fraud out of 284,807 transactions. The dataset is highly unbalanced: the positive class (known fraudulent transactions) accounts for only 0.172% of all transactions.
#
# An autoencoder is used as an unsupervised model to identify irregularities that might indicate fraud. It is imperfect, but works reasonably well.
#
#
# The data is available on [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud)
# + id="CyDO4jYDg_nO" colab_type="code" outputId="eccf2ea0-2840-455e-e749-27497b2918c2" colab={"base_uri": "https://localhost:8080/", "height": 35}
# %tensorflow_version 2.x
# + id="fPIVi6aGgvb0" colab_type="code" colab={}
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import Dense
from tensorflow.keras.callbacks import ModelCheckpoint
from sklearn.metrics import confusion_matrix
# + [markdown] id="CqbDAvEbgvb7" colab_type="text"
# ## Read Data
#
# The original data set is fairly large. To reduce experiment cycle time, a subset of the data is used here. The data is available on [Kaggle](https://www.kaggle.com/mlg-ulb/creditcardfraud), which requires login. If you are using Google Colab, you can download the zip file to your local machine and then upload it into a Google Colab notebook. Pandas can read zip files directly.
# + id="VSEi4bFBgvb8" colab_type="code" outputId="28c3edd8-72d2-4be6-da20-a2de897f38cd" colab={"base_uri": "https://localhost:8080/", "height": 35}
data = pd.read_csv("creditcardfraud.zip")
data = data.head(30000)
data.shape
# + id="zh-HZs3Gv5c2" colab_type="code" outputId="98e8ec0c-a7fc-4570-c406-6f5970811c01" colab={"base_uri": "https://localhost:8080/", "height": 223}
data.head()
# + id="CDDQ22kt81nt" colab_type="code" outputId="ccebb0d6-dc49-413b-b6b0-ec7540eb8145" colab={"base_uri": "https://localhost:8080/", "height": 161}
data.groupby(['Class']).count()
# + [markdown] id="cxjaflpVgvcD" colab_type="text"
# ## Data Engineering
#
# The Time column is ignored and the Amount is normalized. All other columns remain the same.
# + id="JB_FNz-4gvcE" colab_type="code" colab={}
data = data.drop(['Time'], axis=1)
data['Amount'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1, 1))
# + id="3VhZZRvwgvcG" colab_type="code" outputId="47d23c6f-1885-4071-a1fc-1f12627b0d40" colab={"base_uri": "https://localhost:8080/", "height": 35}
X_train, X_test = train_test_split(data, test_size=0.2, random_state=0)
X_train = X_train.drop(['Class'], axis=1)
y_test = X_test['Class']
X_test = X_test.drop(['Class'], axis=1)
X_train = X_train.values
X_test = X_test.values
X_train.shape
# + [markdown] id="idV-blvXgvcJ" colab_type="text"
# ## Autoencoder Model
#
# This is a standard dense autoencoder with four layers.
# + id="rDTwKbXqln-T" colab_type="code" outputId="86c0da83-79c5-4de2-f966-096418b1663a" colab={"base_uri": "https://localhost:8080/", "height": 299}
input_dim = X_train.shape[1]
encoding_dim = 14
model = Sequential()
model.add(Dense(encoding_dim, activation="tanh", input_shape=(input_dim,)))
model.add(Dense(int(encoding_dim / 2), activation="relu"))
model.add(Dense(int(encoding_dim / 2), activation='tanh'))
model.add(Dense(input_dim, activation='relu'))
model.summary()
# + [markdown] id="kvOSXfO2yRAF" colab_type="text"
# # Model Training
#
# Given such a simple model and relatively small data set, it might be faster to train without a GPU.
# + id="sw64M6v7gvcN" colab_type="code" outputId="5cf141de-c6c3-4499-bbda-c870bcea208c" colab={"base_uri": "https://localhost:8080/", "height": 1000}
nb_epoch = 40
batch_size = 32
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['acc'])
history = model.fit(X_train, X_train,
epochs=nb_epoch,
batch_size=batch_size,
validation_data=(X_test, X_test),
verbose=1)
autoencoder = model
# + [markdown] id="jOreexr_gvcP" colab_type="text"
# ## Model Performance
#
# A simple plot of model accuracy to confirm that it is learning something.
# + id="KU1Y1-12gvcQ" colab_type="code" outputId="d3643d27-25d5-4cee-a191-1ed5355e99a5" colab={"base_uri": "https://localhost:8080/", "height": 295}
# summarize history for accuracy
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# + [markdown] id="EK8OLAVegvcT" colab_type="text"
# ## Prediction
#
# Predictions are made on the test set. The mean-squared error (MSE) is calculated between the test samples and their reconstructions. If the MSE is high, the sample is a potential irregularity that might suggest fraud. It's not perfect: there will be false positives and false negatives.
# + id="pedvVXtngvcU" colab_type="code" colab={}
predictions = autoencoder.predict(X_test)
mse = np.mean(np.power(X_test - predictions, 2), axis=1)
error_df = pd.DataFrame({'reconstruction_error': mse, 'true_class': y_test})
# + id="lVXgySCoNb-O" colab_type="code" outputId="e319f5dd-8ff2-4b50-c6fa-19d3182b49cb" colab={"base_uri": "https://localhost:8080/", "height": 203}
error_df.head()
# + [markdown] id="IXPep-KbgvcX" colab_type="text"
# ## Plot Reconstruction Error
#
# The reconstruction error for each sample is plotted along with a color code indicating known fraud. Only 6000 samples are plotted, but the index is randomly sampled from the original set, so the X-axis shows almost the full range of indices.
# + id="O3TXPKJIgvcX" colab_type="code" outputId="ea302ab0-9d80-466e-f1aa-39df53b7ca69" colab={"base_uri": "https://localhost:8080/", "height": 513}
threshold = 6.0
groups = error_df.groupby('true_class')
fig, ax = plt.subplots(figsize=(12, 8))
for name, group in groups:
ax.plot(group.index, group.reconstruction_error, marker='o', ms=2.0, linestyle='',
label = "Fraud" if name == 1 else "Normal",
color = "red" if name == 1 else "blue")
ax.hlines(threshold, ax.get_xlim()[0], ax.get_xlim()[1], colors="green", zorder=100, label='Threshold')
ax.legend()
plt.title("Reconstruction error for different classes")
plt.ylabel("Reconstruction error")
plt.xlabel("Data point index")
plt.show();
# + [markdown] id="TsONcjx6_DBn" colab_type="text"
#
# + [markdown] id="e4JxXgvyxJku" colab_type="text"
# # Analysis
#
# Given the known fraud transactions, we can determine the number of true/false positives and negatives. Ideally, there should be no false positives and false negatives, but this is an imperfect model. Let's see how well it does.
# + id="YcYrqCvrgvce" colab_type="code" outputId="1f4f2c78-7c9b-46b9-cba2-0db5fc426b92" colab={"base_uri": "https://localhost:8080/", "height": 35}
normal = error_df[error_df.true_class == 0]
fraud = error_df[error_df.true_class == 1]
print('Normal transactions: %d, fraud transactions: %d' % (len(normal), len(fraud)))
# + id="1ySewU_gsZBZ" colab_type="code" outputId="f7685a9f-fe92-4394-fc27-a6b31f1b8206" colab={"base_uri": "https://localhost:8080/", "height": 52}
true_positives = len(fraud[fraud.reconstruction_error >= threshold])
false_positives = len(normal[normal.reconstruction_error >= threshold])
true_negatives = len(normal[normal.reconstruction_error < threshold])
false_negatives = len(fraud[fraud.reconstruction_error < threshold])
print('True positives: %d, true negatives: %d' % (true_positives, true_negatives))
print('False positives: %d, false negatives: %d' % (false_positives, false_negatives))
# + [markdown] id="5zr0KAh_-52b" colab_type="text"
#
# + [markdown] id="FbU9T7gMgvca" colab_type="text"
# ## Confusion Matrix
#
# The confusion matrix below shows the number of true/false positive/negatives. It's not perfect, but not too bad either.
# + id="zb_Azs0Xgvcb" colab_type="code" outputId="945d0aaf-93cc-48b6-e840-7c6d573f1d69" colab={"base_uri": "https://localhost:8080/", "height": 350}
labels = ["Normal", "Fraud"]
y_pred = [1 if e > threshold else 0 for e in error_df.reconstruction_error.values]
conf_matrix = confusion_matrix(error_df.true_class, y_pred)
plt.figure(figsize=(6, 5))
sns.heatmap(conf_matrix, xticklabels=labels, yticklabels=labels, annot=True, fmt="d");
plt.title("Confusion matrix")
plt.ylabel('True class')
plt.xlabel('Predicted class')
plt.show()
| Anomalies/Credit_Card_Fraud.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Beer Reviews
# This example analyzes beer reviews to find the most common words used in positive and negative reviews.
# Original example can be found [here](https://medium.com/rapids-ai/real-data-has-strings-now-so-do-gpus-994497d55f8e)
# ### Notes on running these queries:
#
# By default, runs use Bodo. Hence, data is distributed in chunks across processes.
#
# The current results are based on running on one **m5.8xlarge** instance (16 cores, 128GiB memory)
#
# reviews_sample.csv size is 23.1MB
#
# The full dataset is available at "s3://bodo-examples-data/beer/reviews.csv" and its size is 2.2GB
#
# To run the code:
# 1. Make sure you add your AWS account credentials to access the data.
# 2. If you want to run the example using pandas only (without Bodo):
#     1. Comment out the magic expression (`%%px`) and Bodo decorator (`@bodo.jit`) lines in all the code cells.
# 2. Then, re-run cells from the beginning.
#
#
# +
# %%px
import os
os.environ["AWS_ACCESS_KEY_ID"] = "your_aws_access_key_id"
os.environ["AWS_SECRET_ACCESS_KEY"] = "your_aws_secret_access_key"
os.environ["AWS_DEFAULT_REGION"] = "us-east-2"
# -
# %%px
import numpy as np
import pandas as pd
import itertools
import time
import bodo
# ## Preprocessing
# 1. Create lists of stopwords and punctuation that will be removed.
# 2. Define regex that will be used to remove these punctuation and stopwords from the reviews.
# 3. Use the lower and strip functions to convert all letters to lowercase and remove excess whitespace.
# 4. Remove stopwords and punctuation.
# +
# %%px
with open("nltk-stopwords.txt", "r") as fh:
STOPWORDS = list(map(str.strip, fh.readlines()))
PUNCT_LIST = [r"\.", r"\-", r"\?", r"\:", ":", "!", "&", "'", ","]
punc_regex = "|".join([f"({p})" for p in PUNCT_LIST])
stopword_regex = "|".join([f"\\b({s})\\b" for s in STOPWORDS])
# -
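# A quick check of what the two regexes do on a toy review, using a tiny stand-in stopword list instead of the nltk file:

```python
import re

STOPWORDS = ["the", "a", "is"]  # stand-in for nltk-stopwords.txt
PUNCT_LIST = [r"\.", r"\-", r"\?", r"\:", ":", "!", "&", "'", ","]
punc_regex = "|".join([f"({p})" for p in PUNCT_LIST])
stopword_regex = "|".join([f"\\b({s})\\b" for s in STOPWORDS])

text = "the beer is a classic!"
text = re.sub(punc_regex, "", text)      # drop punctuation
text = re.sub(stopword_regex, "", text)  # drop whole-word stopwords
print(text.split())  # → ['beer', 'classic']
```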
# %%px
@bodo.jit(distributed=["reviews"])
def preprocess(reviews):
# lowercase and strip
reviews = reviews.str.lower()
reviews = reviews.str.strip()
# remove punctuation and stopwords
reviews = reviews.str.replace(punc_regex, "", regex=True)
reviews = reviews.str.replace(stopword_regex, "", regex=True)
return reviews
# ## Find the Most Common Words
# +
# %%px
@bodo.jit
def find_top_words(review_filename):
# Load in the data
t_start = time.time()
df = pd.read_csv(review_filename, parse_dates=[2])
print("read time", time.time() - t_start)
score = df.score
reviews = df.text
t1 = time.time()
reviews = preprocess(reviews)
print("preprocess time", time.time() - t1)
t1 = time.time()
# create low and high score series
low_threshold = 1.5
high_threshold = 4.95
high_reviews = reviews[score > high_threshold]
low_reviews = reviews[score <= low_threshold]
high_reviews = high_reviews.dropna()
low_reviews = low_reviews.dropna()
high_colsplit = high_reviews.str.split()
low_colsplit = low_reviews.str.split()
print("high/low time", time.time() - t1)
t1 = time.time()
high_words = high_colsplit.explode()
low_words = low_colsplit.explode()
top_words = high_words.value_counts().head(25)
low_words = low_words.value_counts().head(25)
print("value_counts time", time.time() - t1)
print("total time", time.time() - t_start)
print(top_words)
print(low_words)
find_top_words("s3://bodo-examples-data/beer/reviews_sample.csv")
# -
| notebooks/beer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.3 64-bit (''base'': conda)'
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import math
import random
# +
def p(k, i, xi, A, a, h, k2coord, Gt):
return 1 / (1 + math.exp(-2 * I(k, i, xi, A, a, h, k2coord, Gt)))
def I(k, i, xi, A, a, h, k2coord, Gt):
total = 0
zeta = random.uniform(-1,1) # sampled for each unique (k,i)
for j in k2coord[k]: # for each coordinate in cluster k
eta = random.uniform(-1,1) # different for each cell
sigma = Gt[j]
total += ((A*xi[k] + a*eta) * sigma) + h*zeta
return (1 / len(k2coord[k])) * total
def cluster_info(arr):
""" number of clusters (nonzero fields separated by 0s) in array
and size of cluster
"""
data = []
k2coord = {}
k = 0
if arr[0] != 0: # left boundary
data.append(0) # we will increment later in loop
k2coord[k] = []
else:
k=-1
# print("arr", arr)
# print("data", data)
for i in range(0,len(arr)-1):
if arr[i] == 0 and arr[i+1] != 0:
data.append(0)
k += 1
k2coord[k] = []
if arr[i] != 0:
data[-1] += 1
k2coord[k].append(i)
if arr[-1] != 0:
if data: # if array is not empty
data[-1] += 1 # right boundary
k2coord[k].append(len(arr)-1)
else:
data.append(1)
k2coord[k] = [len(arr)-1]
Ncl = len(data) # number of clusters
Nk = data # Nk[k] = size of cluster k
coord2k = {e:k for k,v in k2coord.items() for e in v}
return Ncl, Nk, k2coord, coord2k
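# The run-length bookkeeping in `cluster_info` can be cross-checked with a compact NumPy formulation that counts runs of nonzero cells (like `cluster_info`, this treats the lattice as non-periodic):

```python
import numpy as np

arr = np.array([0, 1, -1, 0, 0, 1, 0, 1])
nz = (arr != 0).astype(int)
# a cluster starts where a nonzero cell follows a zero (or the left boundary)
starts = np.flatnonzero(np.diff(np.concatenate(([0], nz))) == 1)
ends = np.flatnonzero(np.diff(np.concatenate((nz, [0]))) == -1)
Ncl = len(starts)          # number of clusters
Nk = ends - starts + 1     # size of each cluster
print(Ncl, list(Nk))       # → 3 [2, 1, 1]
```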
# + tags=[]
# pd = 0.25
# pe = 0.02
# ph = 0.18 # vary
pd = 0.1     # diffusion probability (note: this shadows the pandas alias `pd` imported above)
pe = 0.0001  # probability a nontrader enters the market
ph = 0.1     # probability a trader influences an inactive neighbour; vary
pa = 0.5     # initial fraction of active traders
N0 = 2000
N1 = 200
A = 2
a = 0.1
h = 0.1
G = np.zeros(shape=(N0,N1)).astype(int)
G[0] = np.random.choice(a=[-1,0,1], p=[pa/2, 1-pa, pa/2], size=N1, replace=True)
x = np.empty(N0)
for t in range(N0):
Ncl, Nk, k2coord, coord2k = cluster_info(G[t])
xi = np.random.uniform(-1, 1, size=Ncl) # unique xi for each cluster k
# print(Ncl, Nk, k2coord, coord2k, xi)
xt = 0
for k, size in enumerate(Nk):
tmp = 0
for i in k2coord[k]:
tmp += G[t,i]
xt += size * tmp
x[t] = xt
if t == N0-1:
# last iteration, we stop
break
for i in range(N1):
# traders update their stance
if G[t,i] != 0:
k = coord2k[i]
# print(k)
pp = p(k, i, xi, A, a, h, k2coord, G[t])
if random.random() < pp:
G[t+1,i] = 1
else:
G[t+1,i] = -1
# trader influences non-active neighbour to join
if G[t,i] != 0:
stance = G[t,i]
if random.random() < ph:
if G[t,(i-1)%N1] == 0 and G[t,(i+1)%N1] == 0:
ni = random.choice([-1,1])
G[t+1,(i+ni)%N1] = stance#random.choice([-1,1])
elif G[t,(i-1)%N1] == 0:
G[t+1,(i-1)%N1] = stance#random.choice([-1,1])
elif G[t,(i+1)%N1] == 0:
G[t+1,(i+1)%N1] = stance#random.choice([-1,1])
else:
continue
# active trader diffuses if it has inactive neighbour
# only happens at edge of cluster
if G[t,i] != 0:
if random.random() < pd:
if (G[t,(i-1)%N1] == 0) or (G[t,(i+1)%N1] == 0):
G[t+1,i] = 0
else:
continue
# nontrader enters market
if G[t,i] == 0:
if random.random() < pe:
G[t+1,i] = random.choice([-1,1])
fig, (ax1, ax2) = plt.subplots(
ncols=1, nrows=2, figsize=(12,5), sharex=True, gridspec_kw = {'wspace':0, 'hspace':0}
)
ax1.imshow(G.T, cmap="binary", interpolation="None", aspect="auto")
# plt.colorbar()
r = (x - np.mean(x)) / np.std(x)
s = 100
S = np.zeros_like(x)
S[0] = s
for i in range(1,N0):
# S[i] = S[i-1] + (S[i-1] * r[i])
S[i] = S[i-1] + (S[i-1] * r[i]/100) + 0.01
ax2.plot(S)
ax2.grid(alpha=0.4)
ax2.set_xlabel("time")
# ax2.set_ylabel("standardised log returns")
ax2.set_ylabel("close price")
ax1.set_ylabel("agents")
plt.tight_layout()
plt.show()
# +
A_SPACE = np.linspace(0,10,30)
SIM = 30
RVAR_STANDARD = np.zeros((len(A_SPACE), SIM))
RVAR = np.zeros((len(A_SPACE), SIM))
for j, A_val in enumerate(A_SPACE):
for l in range(SIM):
A = A_val
pd = 0.1
pe = 0.0001
ph = 0.1 # vary
pa = 0.5
N0 = 200
N1 = 200
a = 0.1
h = 0.1
G = np.zeros(shape=(N0,N1)).astype(int)
G[0] = np.random.choice(a=[-1,0,1], p=[pa/2, 1-pa, pa/2], size=N1, replace=True)
x = np.empty(N0)
for t in range(N0):
Ncl, Nk, k2coord, coord2k = cluster_info(G[t])
xi = np.random.uniform(-1, 1, size=Ncl) # unique xi for each cluster k
# print(Ncl, Nk, k2coord, coord2k, xi)
xt = 0
for k, size in enumerate(Nk):
tmp = 0
for i in k2coord[k]:
tmp += G[t,i]
xt += size * tmp
x[t] = xt
if t == N0-1:
# last iteration, we stop
break
for i in range(N1):
# traders update their stance
if G[t,i] != 0:
k = coord2k[i]
# print(k)
pp = p(k, i, xi, A, a, h, k2coord, G[t])
if random.random() < pp:
G[t+1,i] = 1
else:
G[t+1,i] = -1
# trader influences non-active neighbour to join
if G[t,i] != 0:
stance = G[t,i]
if random.random() < ph:
if G[t,(i-1)%N1] == 0 and G[t,(i+1)%N1] == 0:
ni = random.choice([-1,1])
G[t+1,(i+ni)%N1] = stance
elif G[t,(i-1)%N1] == 0:
G[t+1,(i-1)%N1] = stance
elif G[t,(i+1)%N1] == 0:
G[t+1,(i+1)%N1] = stance
else:
continue
# active trader diffuses if it has inactive neighbour
# only happens at edge of cluster
if G[t,i] != 0:
if random.random() < pd:
if (G[t,(i-1)%N1] == 0) or (G[t,(i+1)%N1] == 0):
G[t+1,i] = 0
else:
continue
# nontrader enters market
if G[t,i] == 0:
if random.random() < pe:
G[t+1,i] = random.choice([-1,1])
r = (x - np.mean(x)) / np.std(x)
# Collecting Data
RVAR[j, l] = np.sum(x**2)
RVAR_STANDARD[j, l] = np.sum(r**2)
# +
RVAR_MEAN = np.mean(RVAR, axis=1)
RVAR_STANDARD_MEAN = np.mean(RVAR_STANDARD, axis=1)
plt.figure(figsize=(15,5))
plt.plot(A_SPACE, RVAR_MEAN/np.max(RVAR_MEAN))
plt.xlabel("A value")
plt.ylabel("Realized variance (normalized)")
plt.show()
# -
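Realized variance in the sweep above is the sum of squared (standardized) returns. A stdlib sketch of the two quantities being collected; note that summing squared population-standardized values always gives exactly len(x), which is why the unstandardized RVAR carries the A-dependence:

```python
def realized_variance(returns):
    """Sum of squared returns, as used for RVAR above."""
    return sum(r * r for r in returns)

def standardize(xs):
    """Population z-score, mirroring (x - mean) / std above."""
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return [(v - mean) / var ** 0.5 for v in xs]

x = [1.0, 2.0, 4.0, 3.0]
r = standardize(x)
rv = realized_variance(r)
print(rv)  # equals len(x) for standardized data
```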
| code/alex/OTHER/CellularAutomata.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 ('escritoras_latinas-kyucqLTu')
# language: python
# name: python3
# ---
# ### Imports
# %load_ext autoreload
# %autoreload 2
import re
import requests
import escritoras_latinas.data.load as load
import pandas as pd
from bs4 import BeautifulSoup
from pigeon import annotate
# ### Load data
data_processed = load.data_processed
# ### Web scraping
# Make a request to Wikipedia entry
response = requests.get('https://en.m.wikipedia.org/wiki/List_of_Latin_American_writers')
# Return content of the response
html = response.text
# Parse html
soup = BeautifulSoup(html, 'html.parser')
# Look for <a> tag inside <li> tag
anchors = [a for a in (li.find('a') for li in soup.find_all('li')) if a]
# ### Process data
# Convert 'bs4.element.Tag' to strings
anchors_str = list(map(lambda x: str(x), anchors))
# Extract text from anchors with regular expressions
anchors_text = list(map(lambda a: re.findall(r'<a.*>(.*)<\/a>', a), anchors_str))
# Flatten list of lists
anchors_text = [item for sublist in anchors_text for item in sublist]
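The regex extraction above can be exercised offline on a couple of hand-written anchor strings (stdlib only; the non-greedy `.*?` used here is safer than the greedy `.*` above if a string ever contains two anchors):

```python
import re

# hypothetical anchor strings of the kind produced by the scrape above
anchors_str = [
    '<a href="/wiki/Jorge_Luis_Borges">Jorge Luis Borges</a>',
    '<a href="/wiki/Gabriela_Mistral">Gabriela Mistral</a>',
]
# extract the anchor text, then flatten the list of lists
anchors_text = [re.findall(r'<a.*?>(.*?)</a>', a) for a in anchors_str]
flat = [item for sublist in anchors_text for item in sublist]
print(flat)  # -> ['Jorge Luis Borges', 'Gabriela Mistral']
```

In practice BeautifulSoup tags also expose `.get_text()`, which avoids the round-trip through strings and regexes entirely.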
# Delete list items by index
del anchors_text[0:30]
del anchors_text[562:]
# ### Create dataframe
# +
# Create the pandas dataframe
df = pd.DataFrame(anchors_text, columns=['Nombre'])
# Show sample from dataframe
df.sample(1)
# -
# ### Save data
# Save dataframe as 'csv' file
df.to_csv(f'{data_processed}/escritores_destacados.csv', index=False)
| notebooks/0.1-scrapping-text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial of network schemata - Bio Models
# Network schemata for biologically relevant Boolean network models
# %load_ext autoreload
# %autoreload 2
# %matplotlib inline
from __future__ import division
import os
import math
import numpy as np
import pandas as pd
pd.options.display.float_format = '{:.2g}'.format
import graphviz
import cana
from cana.drawing.canalizing_map import draw_canalizing_map_graphviz
import matplotlib as mpl
import matplotlib.style
mpl.style.use('classic')
import random
mpl.rc('font', **{'size':16})
import matplotlib.pyplot as plt
from cana.datasets.bio import THALIANA, DROSOPHILA, BUDDING_YEAST
from IPython.display import display, Image, Latex, SVG, HTML
import subprocess
N = THALIANA()
#N = DROSOPHILA()
#N = BUDDING_YEAST()
print(N)
# ## Effective Graph
Nsg = N.structural_graph()
# +
# Node Position for each one of the models
nodes = {d['label']:i for i,d in Nsg.nodes(data=True)}
print(nodes)
att = {}
#
if N.name == 'Arabidopsis Thaliana':
foldername = 'thaliana'
# Calculates Node position in a circle. Used to plot nodes always in the same position.
for deg,name in zip(range(0,360,30), ['AG', 'AP3', 'PI', 'AP2', 'TFL1', 'FUL', 'AP1', 'FT', 'EMF1', 'LFY', 'SEP', 'WUS']):
r = 150
x, y = r*math.cos(math.radians(deg)), r*math.sin(math.radians(deg))
att[name] = {'x':x,'y':y}
# Nodes not on the circle. Manually position them: UFO, LUG, CLF
for name,(x,y) in zip(['UFO','LUG','CLF'], [(200.,140.),(240.,50.),(240.,-50.)]):
att[name] = {'x':x,'y':y}
elif N.name == 'Drosophila Melanogaster':
foldername = 'drosophila'
x,y = np.linspace(0,500,8,dtype=int), np.linspace(500,0,8,dtype=int)
att['nWG'] = {'x':x[5],'y':y[0],'fillcolor':'#4f6fb0'}
att['SLP'] = {'x':x[7],'y':y[1],'fillcolor':'#4f6fb0'}
att['en'] = {'x':x[5],'y':y[1]}
att['EN'] = {'x':x[5],'y':y[2]}
att['nhhnHH'] = {'x':x[1],'y':y[4]}
att['ci'] = {'x':x[4],'y':y[3]}
att['PTC'] = {'x':x[2],'y':y[4]}
att['nhhnHH'] = {'x':x[2],'y':y[2],'fillcolor':'#4f6fb0'}
att['CI'] = {'x':x[4],'y':y[4]}
att['PH'] = {'x':x[0],'y':y[5]}
att['SMO'] = {'x':x[1],'y':y[5]}
att['CIA'] = {'x':x[3],'y':y[5]}
att['CIR'] = {'x':x[4],'y':y[5]}
att['ptc'] = {'x':x[3],'y':y[6]}
att['wg'] = {'x':x[4],'y':y[6]}
att['hh'] = {'x':x[6],'y':y[6]}
att['WG'] = {'x':x[4],'y':y[7]}
att['HH'] = {'x':x[6],'y':y[7]}
elif N.name == 'Budding Yeast Cell Cycle':
foldername = 'yeast'
# Calculates Node position in a circle.
for deg,name in zip( np.linspace(0,360,10), ['Cln3','MBF','Clb5,6','Mcm1/SFF','Swi5','Cdc20/14','Cdh1','Cln1,2','SBF']):
r = 190
deg += 90
x, y = r*math.cos(math.radians(deg)), r*math.sin(math.radians(deg))
att[name] = {'x':x,'y':y}
# Nodes not on the circle. Manually position them: CellSize, Sic1, Clb1,2
for name,(x,y) in zip(['CellSize','Sic1','Clb1,2'], [(0.,280.),(0.,100.),(0.,-50.)]):
att[name] = {'x':x,'y':y}
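The thaliana and yeast branches above lay nodes out on a circle by converting degrees to Cartesian coordinates. A small stdlib sketch of that layout step (the helper name and even-spacing default are mine; the notebook instead zips explicit degree lists with node names):

```python
import math

def circle_positions(names, radius=150.0, start_deg=0.0):
    """Evenly space labelled nodes on a circle: degrees -> (x, y)."""
    step = 360.0 / len(names)
    positions = {}
    for i, name in enumerate(names):
        deg = start_deg + i * step
        positions[name] = {
            'x': radius * math.cos(math.radians(deg)),
            'y': radius * math.sin(math.radians(deg)),
        }
    return positions

positions = circle_positions(['AG', 'AP3', 'PI', 'AP2'], radius=150.0)
print(positions['AG'])
```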
# +
# Draw the Structural Graph
S = graphviz.Digraph(name='Structural Graph', engine='neato')
S.attr('graph', concentrate='false', simplify='false', overlap='false',splines='false')
S.attr('node', pin='true', shape='circle', fixedsize='true', width='.55', color='gray', style='filled', fillcolor='#515660', penwidth='3', fontname='Helvetica', fontcolor='white',fontsize='12')
S.attr('edge', arrowhead='normal', arrowsize='.5', color='#545454')
for node,d in Nsg.nodes(data=True):
if d['label'] in att:
natt = att[d['label']]
if 'x' in natt or 'y' in natt:
x,y = natt['x'] , natt['y']
xy = '%.2f,%.2f!' % (x/72,y/72)
if 'fillcolor' in natt:
fillcolor = natt['fillcolor']
else:
fillcolor = '#515660'
else:
xy = ''
fillcolor = '#515660'
S.node(name=str(node), label=d['label'], pos=xy, fillcolor=fillcolor)
max_penwidth = 2.5
for s,t,d in Nsg.edges(data=True):
weight = '%d' % (d['weight']*100)
penwidth_scaled = '%.2f' % ( (d['weight']/1)*max_penwidth )
S.edge(str(s),str(t), weight=weight, penwidth=penwidth_scaled, )
print('Nodes: %d | Edges: %d' % (len(Nsg.nodes()) , len(Nsg.edges()) ))
# Display
display(SVG(S.pipe(format='svg')),metadata={'isolated':True})
# Export
#S._format = 'svg'
#efile = u"%s/../experiments/2017 - BioModels/%s/graphs/SG" % (os.getcwd(),foldername)
#S.render(efile, cleanup=True)
#subprocess.call("inkscape -z '%s.svg' -d 300 -e '%s.png'" % (efile,efile) , shell=True)
# -
# Calculate Effective Graph
threshold = 0.00
Neg = N.effective_graph(mode='input',bound='upper', threshold=threshold)
# +
# Draw the Effective Graph
E = graphviz.Digraph(name='Effective Graph', engine='neato')
E.attr('graph', concentrate='false', simplify='false')
E.attr('node', shape='circle', fixedsize='true', width='.55', color='grey', style='filled', fillcolor='#515660', penwidth='3', fontname='Helvetica', fontcolor='white',fontsize='12')
E.attr('edge', arrowhead='normal', arrowsize='.5', color='#545454')
for node,d in Neg.nodes(data=True):
if d['label'] in att:
natt = att[d['label']]
x,y = natt['x'],natt['y']
xy = '%.1f,%.1f!' % (x/72,y/72)
if 'fillcolor' in natt:
fillcolor = natt['fillcolor']
else:
fillcolor = '#515660'
else:
xy = ''
fillcolor = '#515660'
E.node(name=str(node), label=d['label'], pos=xy, fillcolor=fillcolor)
max_penwidth = 2.5
for s,t,d in Neg.edges(data=True):
weight = '%d' % (d['weight']*100)
penwidth_scaled = '%.2f' % ( (d['weight']/1)*max_penwidth )
E.edge(str(s),str(t), weight=weight, penwidth=penwidth_scaled)
print('Nodes: %d | Edges: %d' % (len(Neg.nodes()) , len(Neg.edges()) ))
## Display
display(SVG(E.pipe(format='svg')),metadata={'isolated':True})
## Export
E._format = 'svg'
efile = u'%s/../experiments/2017 - BioModels/%s/graphs/EG' % (os.getcwd(),foldername)
E.render(efile, cleanup=True)
subprocess.call("inkscape -z '%s.svg' -d 300 -e '%s.png'" % (efile,efile) , shell=True)
# -
bound = 'upper'
print(N.nodes[1].schemata_look_up_table(type="ts"))
df = pd.DataFrame({
'node':[n.name for n in N.nodes],
'k':[n.k for n in N.nodes],
'k_r':[n.input_redundancy(mode='node',bound=bound,norm=False) for n in N.nodes],
'k_e':[n.effective_connectivity(mode='node',bound=bound,norm=False) for n in N.nodes],
'k_s':[n.input_symmetry(mode='node',bound=bound,norm=False) for n in N.nodes],
'k_r*':[n.input_redundancy(mode='node',bound=bound,norm=True) for n in N.nodes],
'k_e*':[n.effective_connectivity(mode='node',bound=bound,norm=True) for n in N.nodes],
'k_s*':[n.input_symmetry(mode='node',bound=bound,norm=True) for n in N.nodes],
'k^{out}':[v for n,v in Neg.out_degree()],
'k_e^{out}':[v for n,v in Neg.out_degree(weight='weight')],
}).set_index('node')
df = df[['k','k_r','k_e','k_s','k_r*','k_e*','k_s*','k^{out}','k_e^{out}']]
print(df)
fig, ax = plt.subplots(1,1,figsize=(6,5), sharex=True, sharey=True)
dfp = df.loc[ (df['k']>1) , :]
ax.scatter(dfp['k_r*'],dfp['k_s*'], s=50, c='red', marker='o', zorder=2)
quadrants = [-0.035,0.035]
for name, dfp_ in dfp.iterrows():
x,y = dfp_['k_r*']+random.choice(quadrants) , dfp_['k_s*']+random.choice(quadrants)
ax.annotate(name, (x,y),fontsize=12, va='center', ha='center')
ax.plot((0,1),(0,1),'black', lw=2,alpha=0.25, zorder=1)
ax.grid(True)
ax.set_xlim(-0.05,1.05)
ax.set_ylim(-0.05,1.05)
ax.set_xlabel('$k_r^*$')
ax.set_ylabel('$k_s^*$')
## Display
## Export
#plt.savefig('../experiments/2017 - BioModels/%s/plots/k_sn_vs_k_rn.png' % (foldername), dpi=150)
plt.show()
bound = 'upper'
for i,n in enumerate(N.nodes):
display(HTML('<h2>'+n.name+'</h2>'))
# to make sure each SVG renders independently, pass metadata={'isolated':True} to display()
CM = n.canalizing_map()
gv = draw_canalizing_map_graphviz(CM)
## Display
display(SVG(gv.pipe(format='svg')), metadata={'isolated':True})
## Export to .SVG
filename = n.name
filename = filename.replace(',','_')
filename = filename.replace('/','_')
gv._format = 'svg'
efile = u'%s/../experiments/2017 - BioModels/%s/CM/%s-%s' % (os.getcwd(),foldername,i,filename)
#gv.render(efile, cleanup=True)
#subprocess.call("inkscape -z -d 150 '%s.svg' -e '%s.png'" % (efile,efile) , shell=True)
| tutorials/Tutorial - Canalization - BioModels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os # to set current working directory
import math # basic calculations like square root
from sklearn.neighbors import KNeighborsRegressor # for nearest k neighbours
from sklearn import metrics # measures to check our models
from sklearn.model_selection import cross_val_score # cross validation methods
import pandas as pd # DataFrames and plotting
import pandas.plotting as pd_plot
import numpy as np # arrays and matrix math
import matplotlib.pyplot as plt # plotting
from subprocess import check_call
from sklearn.model_selection import train_test_split # train and test split
import seaborn as sns
# +
# Define a couple of functions to streamline plotting correlation matrices and visualizing a two-feature regression model
def plot_corr(dataframe,size=10): # plots a graphical correlation matrix
corr = dataframe.corr()
fig, ax = plt.subplots(figsize=(size, size))
im = ax.matshow(corr,vmin = -1.0, vmax = 1.0)
plt.xticks(range(len(corr.columns)), corr.columns);
plt.yticks(range(len(corr.columns)), corr.columns);
plt.colorbar(im, orientation = 'vertical')
plt.title('Correlation Matrix')
def visualize_model(model,xfeature,x_min,x_max,yfeature,y_min,y_max,response,z_min,z_max,title,plot_step):# plots the data points and the decision tree prediction
n_classes = 10
cmap = plt.cm.RdYlBu
xx, yy = np.meshgrid(np.arange(x_min, x_max, plot_step),
np.arange(y_min, y_max, plot_step))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap,vmin=z_min, vmax=z_max, levels=np.linspace(z_min, z_max, 100))
im = plt.scatter(xfeature,yfeature,s=None, c=response, marker=None, cmap=cmap, norm=None, vmin=z_min, vmax=z_max, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title(title)
plt.xlabel(xfeature.name)
plt.ylabel(yfeature.name)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label(response.name, rotation=270, labelpad=20)
def check_model_train(model,xfeature,yfeature,response,title): # plots the estimated vs. the actual
predict_train = model.predict(np.c_[xfeature,yfeature])
plt.scatter(response,predict_train,s=None, c='red',marker=None, cmap=None, norm=None, vmin=None, vmax=None, alpha=0.2, linewidths=0.3, verts=None, edgecolors="black")
plt.title(title); plt.xlabel('Actual Production (MCFPD)'); plt.ylabel('Estimated Production (MCFPD)')
plt.xlim(0,7000); plt.ylim(0,7000)
plt.arrow(0,0,7000,7000,width=0.02,color='black',head_length=0.0,head_width=0.0)
MSE = metrics.mean_squared_error(response,predict_train)
Var_Explained = metrics.explained_variance_score(response,predict_train)
cor = math.sqrt(metrics.r2_score(response,predict_train))
print('Mean Squared Error on Training = ', round(MSE,2),', Variance Explained =', round(Var_Explained,2),'Cor =', round(cor,2))
def check_model_test(model,xfeature,yfeature,response,title): # plots the estimated vs. the actual
predict_train = model.predict(np.c_[xfeature,yfeature])
plt.scatter(response,predict_train,s=None, c='red',marker=None, cmap=None, norm=None, vmin=None, vmax=None, alpha=0.2, linewidths=0.3, verts=None, edgecolors="black")
plt.title(title); plt.xlabel('Actual Production (MCFPD)'); plt.ylabel('Estimated Production (MCFPD)')
plt.xlim(0,7000); plt.ylim(0,7000)
plt.arrow(0,0,7000,7000,width=0.02,color='black',head_length=0.0,head_width=0.0)
MSE = metrics.mean_squared_error(response,predict_train)
Var_Explained = metrics.explained_variance_score(response,predict_train)
cor = math.sqrt(metrics.r2_score(response,predict_train))
print('Mean Squared Error on Testing = ', round(MSE,2),', Variance Explained =', round(Var_Explained,2),'Cor =', round(cor,2))
# -
df=pd.read_csv("cretaceous_wells.csv")
df
df.describe().transpose()
df['Average Daily Production'] = df['Cumulative oil (bbl)']/df['Cumulative days']/df['Lateral length (ft)']*1000000
df
# +
# # Standardize data
# from sklearn.preprocessing import StandardScaler
# scaler = StandardScaler() # instantiate the scaler
# stdfeatures = scaler.fit_transform(df.iloc[:,[4,6]]) # standardize all the values except production
# df['Surface northing'] = stdfeatures[:,0]
# df['Surface easting'] = stdfeatures[:,1]
# df.describe().transpose()
# -
sns.pairplot(df.iloc[:,[3,5,-1,2]], hue = "Formation")
plt.subplot(111)
im = plt.scatter(df["Surface easting"],df["Surface northing"],s=None, c=df['Average Daily Production'], marker=None, cmap='plasma', norm=None, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Production vs. Surface Well Location'); plt.xlabel('Easting'); plt.ylabel('Northing')
# plt.xlim(1.045,1.06); plt.ylim(1.02,1.1)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Daily Production normalized by Lateral Length", rotation=270, labelpad=20)
plt.grid()
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.2, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
plt.subplot(111)
im = plt.scatter(df["Surface easting"],df["Surface northing"],s=None, c=df['Average Daily Production'], marker=None, cmap='plasma', norm=None, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Zoomed-in Production vs. Surface Well Location'); plt.xlabel('Easting'); plt.ylabel('Northing')
plt.xlim(1.05,1.065); plt.ylim(1.07,1.1)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Daily Production normalized by Lateral Length", rotation=270, labelpad=20)
plt.grid()
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.2, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# Filter to highest production wells
northmin = 1.08; northmax = 1.095;
eastmin = 1.054; eastmax = 1.062;
df_filter = df[(df['Surface northing']>northmin) & (df['Surface northing']<northmax)]
df_filter = df_filter[(df_filter['Surface easting']>eastmin) & (df_filter['Surface easting']<eastmax)]
df_filter.shape
prodmin = 0; prodmax = 300000  # colour scale for production; define before first use
plt.subplot(111)
im = plt.scatter(df["Surface easting"],df["Surface northing"],s=None, c=df['Average Daily Production'], marker=None, cmap='Reds', norm=None, vmin=prodmin, vmax=prodmax, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Zoomed-in Production vs. Surface Well Location'); plt.xlabel('Easting'); plt.ylabel('Northing')
plt.xlim(eastmin,eastmax); plt.ylim(northmin,northmax)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Daily Production normalized by Lateral Length", rotation=270, labelpad=20)
plt.grid()
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.2, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# Another round of Filtering
northmin = 1.088; northmax = 1.094;
eastmin = 1.055; eastmax = 1.059;
df_filter = df[(df['Surface northing']>northmin) & (df['Surface northing']<northmax)]
df_filter = df_filter[(df_filter['Surface easting']>eastmin) & (df_filter['Surface easting']<eastmax)]
df_filter.shape
plt.subplot(111)
im = plt.scatter(df["Surface easting"],df["Surface northing"],s=None, c=df['Average Daily Production'], marker=None, cmap='Reds', norm=None, vmin=prodmin, vmax=prodmax, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Zoomed-in Production vs. Surface Well Location'); plt.xlabel('Easting'); plt.ylabel('Northing')
plt.xlim(eastmin,eastmax); plt.ylim(northmin,northmax)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Daily Production normalized by Lateral Length", rotation=270, labelpad=20)
plt.grid()
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.2, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# +
df_subset = df_filter.loc[:,['Surface easting','Surface northing','Average Daily Production']]
X = df_subset.iloc[:,:2]
y = df_subset.iloc[:,-1]
neigh = KNeighborsRegressor(weights = 'distance', n_neighbors=1, p = 2) # instantiate the prediction model
neigh_fit = neigh.fit(X,y) # train the model with the training data
# -
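The model above, `KNeighborsRegressor(weights='distance', p=2)`, averages the k nearest targets with inverse-Euclidean-distance weights. A pure-Python sketch of that prediction rule (an illustration, not scikit-learn's actual implementation; tie handling and exact-hit behaviour are simplified):

```python
import math

def knn_predict(X, y, query, k=2):
    """Inverse-distance-weighted k-nearest-neighbour regression (p=2)."""
    nearest = sorted(
        (math.dist(row, query), target) for row, target in zip(X, y)
    )[:k]
    # a zero distance would blow up the weights; return that target directly
    for d, target in nearest:
        if d == 0.0:
            return target
    weights = [1.0 / d for d, _ in nearest]
    return sum(w * t for (_, t), w in zip(nearest, weights)) / sum(weights)

X = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
y = [10.0, 20.0, 40.0]
print(knn_predict(X, y, (0.5, 0.0), k=2))  # -> 15.0 (equidistant neighbours)
```

With `weights='distance'` the fit interpolates the training data exactly, which is why the training-set plots below look perfect regardless of k.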
# plt.subplot(122)
neigh_fit = neigh.fit(X,y)
visualize_model(neigh_fit,X["Surface easting"],eastmin,eastmax,X["Surface northing"],northmin,northmax,y,prodmin,prodmax,'Training Data and k Nearest Neighbours',0.00001)
#plt.xlim(eastmin,eastmax); plt.ylim(northmin,northmax)
# plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
# plt.show()
# +
cmap = plt.cm.RdYlBu
plot_step = 0.0001
x_min = X.iloc[:,0].min(); x_max = X.iloc[:,0].max()
y_min = X.iloc[:,1].min(); y_max = X.iloc[:,1].max()
z_min = 0; z_max = 300000
neigh = KNeighborsRegressor(weights = 'distance', n_neighbors=1, p = 2) # instantiate the prediction model
model = neigh.fit(X,y)
xx, yy = np.meshgrid(np.arange(x_min, x_max + plot_step, plot_step),
np.arange(y_min, y_max, plot_step))
Z = model.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
cs = plt.contourf(xx, yy, Z, cmap=cmap,vmin=z_min, vmax=z_max, levels=np.linspace(z_min, z_max, 100))
im = plt.scatter(X.iloc[:,0],X.iloc[:,1],s=None, c=y.values, marker=None, cmap=cmap, norm=None, vmin=z_min, vmax=z_max, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
# plt.title(title)
# plt.xlabel(xfeature.name)
# plt.ylabel(yfeature.name)
# cbar = plt.colorbar(im, orientation = 'vertical')
# cbar.set_label(response.name, rotation=270, labelpad=20)
# -
y.shape
df_subset.describe().transpose()
# +
plt.subplot(131)
plt.hist(X["Surface easting"], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Surface easting'); plt.xlim(eastmin,eastmax)
plt.subplot(132)
plt.hist(X["Surface northing"], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Surface northing'); plt.xlim(northmin,northmax)
plt.subplot(133)
plt.hist(y, alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Production'); plt.xlim(prodmin,prodmax)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# -
# Standardize data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler() # instantiate the scaler
stdfeatures = scaler.fit_transform(df_subset) # standardize all the values except production
variables = df_subset.columns.values
df_std = pd.DataFrame(stdfeatures, columns=variables) # instantiate a new DataFrame
df_std.describe().transpose()
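`StandardScaler` is just the column-wise z-score (x - mean) / std, using the population standard deviation. A stdlib sketch of what `fit_transform` does to one column:

```python
import statistics

def standardize_column(values):
    """Column-wise z-score, like sklearn's StandardScaler (population std)."""
    mean = sum(values) / len(values)
    std = statistics.pstdev(values)
    return [(v - mean) / std for v in values]

col = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
z = standardize_column(col)
print(z[0])  # -> -1.5 (mean 5, population std 2)
```

After standardizing, each column has mean 0 and unit variance, so Euclidean distances in the k-NN model no longer depend on the columns' raw units.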
# +
X = df_std.iloc[:,:2]
y = df_std.iloc[:,-1]
plt.subplot(131)
plt.hist(X["Surface easting"], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Surface easting')
plt.subplot(132)
plt.hist(X["Surface northing"], alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Surface northing')
plt.subplot(133)
plt.hist(y, alpha = 0.2, color = 'red', edgecolor = 'black', bins=20)
plt.title('Production')
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# -
# ### Ignore below
df_subset = df_filter.loc[:,['Surface northing','Surface easting','Average Daily Production']]
df_train, df_test = train_test_split(df_subset, test_size=0.2, random_state=100)
df_test.describe().transpose()
# +
# Create individual dataframes for training and testing data
X_train = df_train.iloc[:,:2]
y_train = df_train.iloc[:,-1]
X_test = df_test.iloc[:,:2]
y_test = df_test.iloc[:,-1]
prodmin = 0; prodmax = 300000
# +
plt.subplot(121)
im = plt.scatter(X_train["Surface easting"],X_train["Surface northing"],s=None, c=y_train, marker=None, cmap='inferno', norm=None, vmin=prodmin, vmax=prodmax, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Training Production vs. Well Location'); plt.xlabel('Easting'); plt.ylabel('Northing')
plt.xlim(eastmin,eastmax); plt.ylim(northmin,northmax)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Daily Production normalized by Lateral Length", rotation=270, labelpad=20)
plt.subplot(122)
im = plt.scatter(X_test["Surface easting"],X_test["Surface northing"],s=None, c=y_test, marker=None, cmap='inferno', norm=None, vmin=prodmin, vmax=prodmax, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Testing Production vs. Well Location'); plt.xlabel('Easting'); plt.ylabel('Northing')
plt.xlim(eastmin,eastmax); plt.ylim(northmin,northmax)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Daily Production normalized by Lateral Length", rotation=270, labelpad=20)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# +
# X = df.iloc[:,[3,5]]
# Y = df.iloc[:,[-1]]
# +
neigh = KNeighborsRegressor(weights = 'distance', n_neighbors=2, p = 2) # instantiate the prediction model
neigh_fit = neigh.fit(X_train,y_train) # train the model with the training data
plot_step = 0.0001
plt.subplot(121)
visualize_model(neigh_fit,X_train["Surface easting"],eastmin,eastmax,X_train["Surface northing"],northmin,northmax,y_train,prodmin,prodmax,'Training Data and k Nearest Neighbours',plot_step)
plt.xlim(eastmin,eastmax); plt.ylim(northmin,northmax)
plt.subplot(122)
visualize_model(neigh_fit,X_test["Surface easting"],eastmin,eastmax,X_test["Surface northing"],northmin,northmax,y_test,prodmin,prodmax,'Testing Data and k Nearest Neighbours',plot_step)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# +
neigh = KNeighborsRegressor(weights = 'distance', n_neighbors=2, p = 2) # instantiate the prediction model
neigh_fit = neigh.fit(X,Y) # train the model with the training data
plt.subplot(111)
visualize_model(neigh_fit,X['Surface easting'],1.035,1.08,X['Surface northing'],1.02,1.12,Y['Average Daily Production'],0,30000,'k Nearest Neighbours',0.02)
# visualize_model(neigh_fit,X_train["Por"],-3.5,3.5,X_train["Brittle"],-3.5,3.5,y_train,prodmin,prodmax,'Training Data and k Nearest Neighbours',0.02)
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.2, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# -
X = df3.iloc[:,[3,5]]
Y = df3.iloc[:,[-1]]
X.describe().transpose()
# +
plt.subplot(111)
im = plt.scatter(X["Surface easting"],X["Surface northing"],s=None, c=Y['Average Daily Production'], marker=None, cmap='plasma', norm=None, alpha=0.8, linewidths=0.3, verts=None, edgecolors="black")
plt.title('Training Production vs. Well Location'); plt.xlabel('Easting'); plt.ylabel('Northing')
# plt.xlim(-3,3); plt.ylim(-3,3)
cbar = plt.colorbar(im, orientation = 'vertical')
cbar.set_label("Production", rotation=270, labelpad=20)
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.2, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# +
# Standardize data
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler() # instantiate the scaler
stdfeatures = scaler.fit_transform(X) # standardize all the values except production
variables = X.columns.values
X_std = pd.DataFrame(stdfeatures, columns=variables) # instantiate a new DataFrame
X_std.describe().transpose()
# +
# neigh = KNeighborsRegressor(weights = 'distance', n_neighbors=2, p = 2) # instantiate the prediction model
# neigh_fit = neigh.fit(X,Y) # train the model with the training data
# plt.subplot(111)
# visualize_model(neigh_fit,X_std['Surface easting'],-3.5,3,X_std['Surface northing'],-3.5,3,Y['Average Daily Production'],Y.min(),Y.max(),'k Nearest Neighbours',0.005)
# # visualize_model(neigh_fit,X_train["Por"],-3.5,3.5,X_train["Brittle"],-3.5,3.5,y_train,prodmin,prodmax,'Training Data and k Nearest Neighbours')
# plt.subplots_adjust(left=0.0, bottom=0.0, right=1.2, top=1.2, wspace=0.2, hspace=0.2)
# plt.show()
# -
| Class Case Studies/Where_to_drill_debug.ipynb |
% ---
% jupyter:
% jupytext:
% text_representation:
% extension: .m
% format_name: light
% format_version: '1.5'
% jupytext_version: 1.14.4
% kernelspec:
% display_name: Octave
% language: octave
% name: octave
% ---
% + [markdown] id="c0j2HEwrwlfL"
% # AM QAM FDM
%
% ## Objective
%
% This experiment aims:
%
% 1. To study and practice the generation and demodulation of AM wave.
%
% 2. To study and practice the generation and demodulation of QAM wave.
%
% 3. To study and practice the FDM technique.
%
%
% ## Introduction
%
% In this experiment, you will practice the AM, QAM and FDM techniques.
% Please refer to the textbook and lecture notes for theory part.
%
% ## Procedure
%
% ### Amplitude Modulation: Generation and Detection
% + id="6WK7_LGFwlfc" outputId="07adafb5-5cd1-48ff-8dbb-9f3d9532cbad"
clear all;
fs = 1e2; % sampling frequency
t = 0 : 1/fs : 1; % time vector
f = -fs/2 : 1 : fs/2; % frequency vector
fc = 10; % carrier frequency
fm = 2; % message frequency
m = cos(2*pi*fm*t); % message signal
c = cos(2*pi*fc*t); % carrier signal
u = 0.8; % modulation index
dsb = u*m .* c; % DSB-SC signal
am = dsb + c; % AM signal
AM = sig_spec(am);
% Demodulation using envelope detection
% envelope detection can be implemented by squaring and lowpass filtering
amsq = am .* am;
pkg load signal;
[b, a] = butter(2,2*fm/(fs/2));
mr1 = filter(b, a, amsq);
% envelope detection can also be implemented using the Hilbert transform
mr2 = abs(hilbert(am));
figure();
subplot(311); plot(t,am); grid on; title('AM and Message in Time Domain')
hold on;
subplot(311); plot(t,u*m+1); grid on;hold on;
subplot(311); plot(t,-u*m-1); grid on;
subplot(312); plot(f,AM); grid on; title('AM in Frequency Domain')
subplot(313); plot(t,mr1); grid on; title('Received Signal')
hold on;
subplot(313); plot(t,mr2);
hold on;
subplot(313); plot(t,m); grid on;
legend('mr1','mr2','m')
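A Python sketch (stdlib only) of the squaring-plus-lowpass envelope detector from the Octave cell above. A one-carrier-period-scale moving average stands in for the Butterworth filter, and the parameter values are illustrative: squaring gives am^2 = env^2 * (1 + cos(2*2*pi*fc*t)) / 2, the average removes the 2*fc term, and the square root recovers the envelope 1 + u*m.

```python
import math

fs, fc, fm, u = 1000, 100, 5, 0.8  # sampling, carrier, message, mod. index
n = fs  # one second of samples
t = [i / fs for i in range(n)]
m = [math.cos(2 * math.pi * fm * ti) for ti in t]
am = [(1 + u * mi) * math.cos(2 * math.pi * fc * ti) for mi, ti in zip(m, t)]

# square, then lowpass with a moving average one carrier period wide
sq = [a * a for a in am]
w = fs // fc
filt = [sum(sq[i:i + w]) / w for i in range(n - w)]
env = [math.sqrt(2 * f) for f in filt]  # approximately 1 + u*m
print(env[0])
```

Because u < 1 the envelope never touches zero, which is the condition for envelope detection to work at all; over-modulation (u > 1) would fold the envelope and distort the recovered message.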
% + [markdown] id="LljSs-eZwlfh"
% ---
%
% ### Modulation Power Efficiency
% + id="twfVYRUKwlfi" outputId="ede89565-89fb-4006-cfd2-8b136928267c"
% Power efficiency of modulation
% eta = Psidebands / (Pcarrier + Psidebands)
Ps = sum(dsb .* dsb) / fs
Pt = sum(am .* am) / fs
eta = Ps / Pt
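The cell above estimates the efficiency numerically from signal powers; for a single-tone message the closed form is eta = u^2 / (2 + u^2), so eta <= 1/3, reached only at u = 1. A small Python check of that formula:

```python
def am_tone_efficiency(u):
    """Theoretical power efficiency of single-tone AM: u^2 / (2 + u^2)."""
    return u ** 2 / (2.0 + u ** 2)

# efficiency grows with modulation index but never exceeds one third
for u in (0.2, 0.5, 0.8, 1.0):
    print(u, am_tone_efficiency(u))
```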
% + [markdown] id="8UKbogSqwlfk"
% Discussion
%
% * Use different values of the modulation index and comment on their effect on power efficiency and detection capability
%
% ---
%
% ### QAM: Generation
% + id="eSCs8H9hwlfl" outputId="339e98f6-d202-4a9a-85aa-2d2de130d4e8"
fm1 = 2; % first message frequency
fm2 = 3; % second message frequency
c2 = cos(2*pi*fc*t-pi/2); % second carrier signal
m1 = cos(2*pi*fm1*t); % first message signal
m2 = cos(2*pi*fm2*t); % second message signal
qam = m1.*c + m2.*c2; % QAM signal
QAM = sig_spec(qam);
figure();
subplot(211); plot(t,qam); grid on; title('QAM and two Messages in Time Domain')
hold on;
subplot(211); plot(t,m1); grid on;hold on;
subplot(211); plot(t,m2); grid on;
legend('QAM','m1','m2')
subplot(212); plot(f,QAM); grid on; title('QAM in Frequency Domain')
% + [markdown] id="RdF1FOCowlfm"
% ---
%
% ### QAM: Receiver
% + id="1zQkS3BCwlfm" outputId="26bfc90b-6eca-41f9-c102-96f5f908ef1a"
lo1 = cos(2*pi*fc*t);
lo2 = cos(2*pi*fc*t-pi/2);
r1 = lo1.*qam;
r2 = lo2.*qam;
R1 = sig_spec(r1);
R2 = sig_spec(r2);
pkg load signal;
[b, a] = butter(2,fm2/(fs/2));
mr1 = filter(b,a,r1);
mr2 = filter(b,a,r2);
Mr1 = sig_spec(mr1);
Mr2 = sig_spec(mr2);
figure();
subplot(211); plot(t,mr1); grid on; title('First Demodulated Signal')
hold on;
subplot(211); plot(t,m1); grid on;
legend('mr1','m1')
subplot(212); plot(t,mr2); grid on; title('Second Demodulated Signal')
hold on;
subplot(212); plot(t,m2); grid on;
legend('mr2','m2')
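The phase-error discussion below can be made concrete: over an integer number of carrier periods, the average of cos(wt)*cos(wt - phi) is cos(phi)/2 and the average of cos(wt - pi/2)*cos(wt - phi) is sin(phi)/2, so a local-oscillator phase error phi scales the wanted channel by cos(phi) and leaks the quadrature channel in proportion to sin(phi). A stdlib numerical check (the phi value is illustrative):

```python
import math

fc, fs, n = 10, 1000, 1000  # a whole number of carrier periods
phi = math.pi / 6  # hypothetical phase error
t = [i / fs for i in range(n)]
lo = [math.cos(2 * math.pi * fc * ti - phi) for ti in t]
# correlation of the erroneous LO with the in-phase and quadrature carriers
i_arm = sum(math.cos(2 * math.pi * fc * ti) * l for ti, l in zip(t, lo)) / n
q_arm = sum(math.cos(2 * math.pi * fc * ti - math.pi / 2) * l
            for ti, l in zip(t, lo)) / n
print(i_arm, q_arm)  # approx cos(phi)/2 and sin(phi)/2
```

At phi = pi/2 the roles swap completely: the demodulator outputs the other channel's message.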
% + [markdown] id="C5zZ-9uujGWw"
% Discussion
%
% * Try to insert a phase error at the demodulator and comment on its effect on the detected signal.
%
% * What happens if the phase difference between the demodulator and the carrier is $\pi/2$?
%
% ---
%
% ### FDM
% + id="Pk0jYO5Owlfp" outputId="3f43754d-feea-4475-9f29-b9dc66a35dde"
clear all;
fs = 4e2; % sampling frequency
t = 0 : 1/fs : 1; % time vector
f = -fs/2 : 1 : fs/2; % frequency vector
fc1 = 80; % first carrier frequency
fm1 = 5; % first message frequency
fc2 = 120; % second carrier frequency
fm2 = 7; % second message frequency
m1 = cos(2*pi*fm1*t); % first message signal
m2 = cos(2*pi*fm2*t); % second message signal
c1 = cos(2*pi*fc1*t); % first carrier signal
c2 = cos(2*pi*fc2*t); % second carrier signal
fdm = m1.*c1 + m2.*c2; % FDM of two DSB-SC signals
FDM = sig_spec(fdm); % Spectrum of FDM of two DSB-SC signals
pkg load signal;
[b1, a1] = butter(2,[fc1-10 fc1+10]/(fs/2)); % first bandpass filter
[b2, a2] = butter(2,[fc2-10 fc2+10]/(fs/2)); % second bandpass filter
r1 = filter(b1, a1, fdm); % extracting the first signal
r2 = filter(b2, a2, fdm); % extracting the second signal
% Coherent detection of two DSB-SC signals
dr1 = r1.*c1;
dr2 = r2.*c2;
[b, a] = butter(2,2*fm1/(fs/2));
mr1 = filter(b, a, dr1);
mr2 = filter(b, a, dr2);
figure();
subplot(221); plot(t,fdm); grid on; title('FDM in Time Domain')
subplot(222); plot(f,FDM); grid on; title('FDM in Frequency Domain')
subplot(223); plot(t,mr1); grid on; title('First Received Signal')
hold on;
subplot(223); plot(t,m1);
legend('mr1','m1')
subplot(224); plot(t,mr2); grid on; title('Second Received Signal')
hold on;
subplot(224); plot(t,m2); grid on;
legend('mr2','m2')
% + [markdown] id="AFc2SEZKrXZR"
% Discussion
%
% * Try to bring the carriers as close together in frequency as possible without the modulated signals interfering with each other.
%
% * Comment on the guard band requirement versus the filter order.
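%
% The phase-error question from the first discussion can also be illustrated outside Octave. Below is a minimal standalone NumPy sketch (Python; the sampling and carrier frequencies mirror the lab's fs = 400 Hz and fc = 80 Hz): demodulating with a local oscillator offset by a phase error phi scales the recovered baseband by cos(phi), so at phi = pi/2 the detected signal vanishes.

```python
import numpy as np

fs, fc = 400, 80
t = np.arange(fs) / fs                      # one second of samples
carrier = np.cos(2 * np.pi * fc * t)

def coherent_detect(signal, phase_error):
    # multiply by a local oscillator that is off by `phase_error`,
    # then approximate the lowpass filter by averaging (DC component)
    lo = np.cos(2 * np.pi * fc * t + phase_error)
    return np.mean(signal * lo)

tx = 1.0 * carrier                          # DSB-SC with constant message m(t) = 1
for phi in [0, np.pi / 4, np.pi / 2]:
    print(phi, coherent_detect(tx, phi))    # recovered amplitude ~ 0.5 * cos(phi)
```

% At phi = pi/2 the product averages to zero: coherent detection requires the receiver oscillator to be phase-locked to the carrier.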
| Exp4_AM_QAM_FDM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: luna-passa
# language: python
# name: luna-passa
# ---
import sys; sys.path.insert(0, "..")
from src.models import NLM
from src.utils import generate_data
from autograd import numpy as np
import matplotlib.pyplot as plt
# ### Generate Synthetic Data
# +
x, y, x_test = generate_data(number_of_points=50, noise_variance=9)
plt.scatter(x, y)
plt.show()
# -
# ### Run NLM
# +
###relu activation
activation_fn_type = 'relu'
activation_fn = lambda x: np.maximum(np.zeros(x.shape), x)
width = [50,20] # using the architecture used in the paper
hidden_layers = len(width)
input_dim = 1
output_dim = 1
architecture = {'width': width,
'hidden_layers': hidden_layers,
'input_dim': input_dim,
'output_dim': output_dim,
'activation_fn_type': 'relu',
'activation_fn_params': 'rate=1',
'activation_fn': activation_fn}
#set random state to make the experiments replicable
rand_state = 0
random = np.random.RandomState(rand_state)
#instantiate a Feedforward neural network object
nn = NLM(architecture, random=random)
print('Number of parameters =', nn.D)
###define design choices in gradient descent
params = {
'step_size':1e-3,
'max_iteration':5000,
'random_restarts':1,
'reg_param':0,
}
nn.fit(x.reshape((1, -1)), y.reshape((1, -1)), params)
# -
plt.plot(nn.objective_trace)
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.show()
# ### Examine MAP Model
y_pred = nn.forward(nn.weights, x_test)
plt.scatter(x[0,:], y[0,:], color='red', alpha=0.5, label='Observed Data')
plt.plot(x_test[0,:], x_test[0,:]**3, color='black', label="Ground Truth")
plt.plot(x_test[0,:], y_pred[0,0,:], color='tab:blue', label="Model Prediction")
plt.legend()
plt.show()
# ### Examine Prior Samples
## WE SHOULD CONFIRM WHAT EXACT VALUES OF PRIOR VAR AND NOISE VAR THEY USED. THIS SEEMED TO AFFECT THE RESULTS A LOT
prior_mean = 0
prior_var = 5**2
noise_var = 3
y_prior = nn.get_prior_preds(x_test, w_prior_mean=prior_mean, w_prior_cov=prior_var, noise_var=noise_var)
plt.scatter(x[0,:], y[0,:], color='red', alpha=0.5, label='Observed Data')
plt.plot(x_test[0,:], x_test[0,:]**3, color='black', label="Ground Truth")
plt.plot(x_test[0,:], y_prior.T, color='tab:blue', alpha=0.1)
plt.ylim([-150, 150])
plt.legend()
plt.show()
# ### Examine Posterior Samples
y_posterior = nn.get_posterior_preds(x_test, x_obs=x, y_obs=y, w_prior_cov=prior_var, noise_var=noise_var)
plt.scatter(x[0,:], y[0,:], color='red', alpha=0.5, label='Observed Data')
plt.plot(x_test[0,:], x_test[0,:]**3, color='black', label="Ground Truth")
plt.plot(x_test[0,:], y_posterior.T, color='tab:blue', alpha=0.1)
plt.ylim([-150, 150])
plt.legend()
plt.show()
# Calculating percentiles
pp_upper = np.percentile(y_posterior, 97.5, axis=0)
pp_lower = np.percentile(y_posterior, 2.5, axis=0)
pp_mean = np.mean(y_posterior, axis=0)
# Visualizing 95% posterior predictive interval of Bayesian polynomial regression
plt.scatter(x[0,:], y[0,:], color='red', alpha=0.5, label='Observed Data')
plt.plot(x_test[0,:], x_test[0,:]**3, color='black', label="Ground Truth")
plt.plot(x_test[0,:], pp_mean, color='tab:orange', alpha=0.9, label='Posterior Predictive Mean')
plt.fill_between(x_test[0,:], pp_upper, pp_lower, color='tab:orange', alpha=0.2, label='95% Posterior Predictive Interval')
plt.legend()
plt.xlabel('x')
plt.ylabel('y')
plt.title("95% Posterior Predictive Interval of Bayesian Polynomial Regression")
plt.show()
# ### Questions for Professor Pan and Cooper
# * Is it okay to assume a prior mean of 0?
# * What was the prior variance assumed in the paper?
# * How is MAP as defined here any different from MLE? We are not putting any restrictions (any priors) on the weights when calling `.fit`.
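#
# On the last question: with a zero-mean Gaussian prior, the MAP objective is the MLE objective plus an L2 penalty, so `reg_param = 0` does make MAP coincide with MLE. A standalone NumPy sketch using ordinary linear regression (the data and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(20, 1))
y = 3 * x[:, 0] + rng.normal(scale=0.5, size=20)
X = np.hstack([x, np.ones((20, 1))])     # design matrix with intercept

def map_fit(X, y, reg):
    # ridge solution; reg = noise_var / prior_var for a zero-mean Gaussian prior
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + reg * np.eye(d), X.T @ y)

w_mle = map_fit(X, y, 0.0)    # reg = 0  -> plain maximum likelihood
w_map = map_fit(X, y, 10.0)   # reg > 0  -> estimate shrunk toward the prior mean 0
print(w_mle, w_map)
```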
| notebooks/.dev/demo_NLM.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## What is a Neural Network?
#
# It is a computational system inspired by the structure, processing method, and learning ability of the biological brain
#
# ### Characteristics of Artificial Neural Networks
#
# A large number of very simple, neuron-like processing elements
#
# A large number of weighted connections between the elements
#
# Distributed representation of knowledge over the connections
#
# Knowledge is acquired by the network through a learning process
#
# ### What is perceptron?
#
# A perceptron can be understood as anything that takes multiple inputs and produces one output
#
# <img src="perceptron.png" width="400" height="400">
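#
# As a minimal sketch (the weights and bias below are arbitrary), a perceptron is just a weighted sum of the inputs followed by a threshold:

```python
import numpy as np

def perceptron(x, w, b):
    # weighted sum of the inputs followed by a step activation
    return 1 if np.dot(w, x) + b > 0 else 0

print(perceptron(np.array([1, 0, 1]), np.array([0.5, -0.2, 0.3]), -0.6))  # -> 1
```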
# ### Multi-layer perceptron (MLP)
#
# MLP is the stack of perceptrons
#
# <img src="MLP.png" width="400" height="400">
#
# In this image, the yellow nodes are inputs, the blue nodes (at each vertical) are hidden layers and the orange ones are output of the MLP
# ### Forward and backward propagation
#
# A neural network takes several inputs, processes them through multiple neurons in multiple hidden layers, and returns the result using an output layer. This result-estimation process is technically known as “***Forward Propagation***“
#
# Next, we compare the result with the actual output. The task is to make the network's output as close as possible to the actual (desired) output. This defines our cost function.
#
# We try to obtain the weights of the neurons such that the network's total error (our cost function) is minimized. This process is known as “***Backward Propagation***“.
# ### Activity: Implementing NN using Numpy
#
# Assume we want to build and train (i.e., obtain the weights of) an MLP such that the given input:
#
# `X=np.array([[1,0,1,0],[1,0,1,1],[0,1,0,1]])`
#
# gives us this desired output:
#
# `y=np.array([[1],[1],[0]])`
#
# Also, assume we have only one hidden layer with three neurons, and the activation function for each perceptron is the sigmoid
#
#
# <img src="NN_block_diag.PNG" width="700" height="700">
# +
import numpy as np
# check this out:
# https://www.analyticsvidhya.com/blog/2017/05/neural-network-from-scratch-in-python-and-r/
# Input array
X=np.array([[1,0,1,0],[1,0,1,1],[0,1,0,1]])
#Output
y=np.array([[1],[1],[0]])
#Sigmoid Function
def sigmoid (x):
return 1/(1 + np.exp(-x))
#Derivative of Sigmoid Function
def derivatives_sigmoid(x):
return x * (1 - x)
#Variable initialization
epoch=5000 #Setting training iterations
lr=0.1 #Setting learning rate
inputlayer_neurons = X.shape[1] #number of features in data set
hiddenlayer_neurons = 3 #number of hidden layers neurons
output_neurons = 1 #number of neurons at output layer
#weight and bias initialization
wh=np.random.uniform(size=(inputlayer_neurons,hiddenlayer_neurons))
bh=np.random.uniform(size=(1,hiddenlayer_neurons))
wout=np.random.uniform(size=(hiddenlayer_neurons,output_neurons))
bout=np.random.uniform(size=(1,output_neurons))
for i in range(epoch):
    # Forward propagation
    hidden_layer_input = np.dot(X, wh) + bh
    hiddenlayer_activations = sigmoid(hidden_layer_input)
    output_layer_input = np.dot(hiddenlayer_activations, wout) + bout
    output = sigmoid(output_layer_input)
    # Backpropagation
    D = y - output                       # error at the output, y_t - y_p
    slope_output_layer = derivatives_sigmoid(output)
    slope_hidden_layer = derivatives_sigmoid(hiddenlayer_activations)
    d_output = D * slope_output_layer
    Error_at_hidden_layer = d_output.dot(wout.T)
    d_hiddenlayer = Error_at_hidden_layer * slope_hidden_layer
    # Gradient-descent updates for both layers
    wout += hiddenlayer_activations.T.dot(d_output) * lr
    bout += np.sum(d_output, axis=0, keepdims=True) * lr
    wh += X.T.dot(d_hiddenlayer) * lr
    bh += np.sum(d_hiddenlayer, axis=0, keepdims=True) * lr
print(output)
# -
# ### How do we update the weight to minimize the error?
#
# First we should define the cost function. For our example here the MSE is our cost function:
#
# $E= \frac{1}{2} ({\bf y}_t - {\bf y}_p)^T ({\bf y}_t - {\bf y}_p)$
#
# We update the weights (${\bf W}_i$ and ${\bf W}_h$) such that the error, $E$, is minimized. The most popular algorithm is Gradient Descent:
#
# ${\bf W}_h = {\bf W}_h - \eta {\partial E}/{\partial {\bf W}_h} $
#
# For our above example we can show that:
#
# ${\partial E}/{\partial {\bf W}_h} = -{\bf h}^T \left[({\bf y}_t - {\bf y}_p) \odot {\bf y}_p \odot (1 - {\bf y}_p)\right]$
#
# where ${\bf h} = \sigma({\bf W}_i {\bf x}_i + {\bf b}_i)$
#
# In above code:
#
# $D = {\bf y}_t - {\bf y}_p$
#
# ${\bf y}_p (1 - {\bf y}_p)$ = `slope_output_layer`
#
# $\bf {h}$ = `hiddenlayer_activations`
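#
# The hand-derived gradient can be verified numerically with a finite-difference check (a standalone NumPy sketch; the toy shapes and values are illustrative):

```python
import numpy as np

sigmoid = lambda z: 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))    # hidden activations (toy values)
W = rng.normal(size=(4, 1))    # hidden-to-output weights
y_t = rng.normal(size=(3, 1))

def E(W):
    y_p = sigmoid(h @ W)
    return 0.5 * np.sum((y_t - y_p) ** 2)

# analytic gradient of E w.r.t. W: -h^T [(y_t - y_p) * y_p * (1 - y_p)]
y_p = sigmoid(h @ W)
grad = -h.T @ ((y_t - y_p) * y_p * (1 - y_p))

# central finite differences
eps = 1e-6
num = np.zeros_like(W)
for i in range(W.shape[0]):
    Wp, Wm = W.copy(), W.copy()
    Wp[i] += eps
    Wm[i] -= eps
    num[i] = (E(Wp) - E(Wm)) / (2 * eps)

print(np.max(np.abs(grad - num)))  # should be tiny
```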
# ## Pseudocode of the above Neural Network
#
# <img src="pseudocode_NN.png" width="500" height="500">
# ## What is not good with this approach?
#
# - We have to compute the derivative of the cost function w.r.t. the weights and biases (of the hidden and output layers) by hand
| site/public/courses/DS-2.2/Notebooks/Simple_NN/NN_basic_slides.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# ## Geospatial data models
#
# Resources:
#
# * [
# GRASS GIS overview and manual](http://grass.osgeo.org/grass72/manuals/index.html)
# * [Recommendations](data_acquisition.html#commands)
# and [tutorial](http://www4.ncsu.edu/~akratoc/GRASS_intro/)
# how to use GUI from the first assignment
#
#
# ### Start GRASS GIS
# Start GRASS - click on GRASS icon or type
# using Python to initialize GRASS GIS
# This is a quick introduction to Jupyter Notebook.
# Python code can be executed like this:
a = 6
b = 7
c = a * b
print "Answer is", c
# Python code can be mixed with command line code (Bash).
# It is enough just to prefix the command line with an exclamation mark:
# !echo "Answer is $c"
# Use Shift+Enter to execute this cell. The result is below.
# +
# using Python to initialize GRASS GIS
import os
import sys
import subprocess
from IPython.display import Image
# create GRASS GIS runtime environment
gisbase = subprocess.check_output(["grass", "--config", "path"]).strip()
os.environ['GISBASE'] = gisbase
sys.path.append(os.path.join(gisbase, "etc", "python"))
# do GRASS GIS imports
import grass.script as gs
import grass.script.setup as gsetup
# set GRASS GIS session data
rcfile = gsetup.init(gisbase, "/home/jovyan/grassdata", "nc_spm_08_grass7", "user1")
# -
# using Python to initialize GRASS GIS
# default font displays
os.environ['GRASS_FONT'] = 'sans'
# overwrite existing maps
os.environ['GRASS_OVERWRITE'] = '1'
gs.set_raise_on_error(True)
gs.set_capture_stderr(True)
# using Python to initialize GRASS GIS
# set display modules to render into a file (named map.png by default)
os.environ['GRASS_RENDER_IMMEDIATE'] = 'cairo'
os.environ['GRASS_RENDER_FILE_READ'] = 'TRUE'
os.environ['GRASS_LEGEND_FILE'] = 'legend.txt'
# In the startup panel, set the GIS Data Directory to the path to your datasets,
# for example on MS Windows, `C:\Users\myname\grassdata`.
# For Project location select nc_spm_08_grass7 (North Carolina, State Plane, meters) and
# for Accessible mapset create a new mapset (called e.g. HW_data_models).
# Click Start GRASS.
#
# If you prefer to work in GUI, you should be able to find out yourself
# the GUI equivalents for the tasks below.
# Some hints for GUI are included, but
# from now on, most of the instructions will be provided as commands for command line.
# Hint for running most of the commands in GUI - type or paste the name of the module
# into the command console in the _Console_ tab and then hit Enter to open the GUI dialog.
# _Read_ the manual page for each command you are using for the first time to learn
# what it is doing and what the parameters mean.
#
# ### Resampling to higher resolution
#
#
# Resample the given raster map to higher and lower resolution
# (30m->10m, 30m->100m) and compare resampling by nearest neighbor
# with the bilinear and bicubic methods.
# First, set the computation region extent to our study area
# and set resolution to 30 meters.
# The computational region (region for short) is set using
# _g.region_ module.
# Here for convenience we use named region which defines both the extent and the resolution.
# This named region is included in the data (location) we are using
# but it is possible to create new named regions and use them to bookmark different study areas.
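#
# The qualitative difference between the resampling methods can be sketched with plain NumPy (this is only an illustration, not what GRASS does internally):

```python
import numpy as np

coarse = np.array([[0., 10.],
                   [20., 30.]])   # a tiny 2x2 "coarse" raster

def nearest(r, factor):
    # each fine cell copies the value of the closest coarse cell (blocky result)
    return np.repeat(np.repeat(r, factor, 0), factor, 1)

def bilinear(r, factor):
    # linear interpolation along each axis between cell values (smooth result)
    ny, nx = r.shape
    ys = np.linspace(0, ny - 1, ny * factor)
    xs = np.linspace(0, nx - 1, nx * factor)
    rows = np.array([np.interp(xs, np.arange(nx), row) for row in r])
    return np.array([np.interp(ys, np.arange(ny), col) for col in rows.T]).T

print(nearest(coarse, 3))
print(bilinear(coarse, 3))
```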
# !g.region region=swwake_30m -p
# The `-p` flag for _g.region_ is used to print the region
# we just set.
#
#
# Then we display the 30m resolution NED elevation raster.
# !d.rast elev_ned_30m
Image(filename="map.png")
# To resample it to 10m resolution, first set the computational region to resolution 10m,
# then resample the raster using the nearest neighbor method.
# Hint: To open the _r.resamp.interp_ in GUI, type or paste the module name
# into the _Console_ tab, then _Enter_ to open the GUI dialog,
# don't forget to set the method to nearest under _Optional_ tab.
# !g.region res=10 -p
# !r.resamp.interp elev_ned_30m out=elev_ned10m_nn method=nearest
# Display the resampled map by adding "elev_ned10m_nn" to _Layer Manager_
# in case you don't have it in the Layer Manager already.
# Alternatively, use in command line the following:
# !d.rast elev_ned10m_nn
Image(filename="map.png")
# The elevation map "elev_ned10m_nn" looks the same as the original one,
# so now check the resampled elevation surface using the aspect map:
# !r.slope.aspect elevation=elev_ned10m_nn aspect=aspect_ned10m_nn
# Display the resampled map by adding "aspect_ned10m_nn" to _Layer Manager_
# or in command line using:
# !d.rast aspect_ned10m_nn
Image(filename="map.png")
# Save the displayed map and explain in your report what is going on
# and how it differs from the aspect computed from the original elevation map.
# To save the map, click in _Map Display_ to on the button
# _Save display to graphic file"_ or alternatively,
# use the following command:
Image(filename="map.png")
# Now, reinterpolate DEMs using bilinear and bicubic interpolation.
# Check the interpolated elevation surfaces using aspect maps.
# !r.resamp.interp elev_ned_30m out=elev_ned10m_bil meth=bilinear
# !r.resamp.interp elev_ned_30m out=elev_ned10m_bic meth=bicubic
# !r.slope.aspect elevation=elev_ned10m_bil aspect=aspect_ned10m_bil
# !r.slope.aspect elevation=elev_ned10m_bic aspect=aspect_ned10m_bic
# !d.rast aspect_ned10m_bil
# !d.rast aspect_ned10m_bic
Image(filename="map.png")
# Save the displayed map and in your report, compare the result with
# the previously computed nearest neighbor result.
# In _Map Display_ click button _Save display to graphic file_,
# or use the following in the command line:
Image(filename="map.png")
# ### Resampling to lower resolution
#
#
# Resample to lower resolution (30m -> 100m).
#
# First, display the original elevation and land use maps:
# !d.rast elev_ned_30m
# !d.rast landuse96_28m
Image(filename="map.png")
# Then change the region resolution and resample
# elevation (which is a continuous field)
# and land use (which has discrete categories).
# Explain the selection of the aggregation method. Can we also use average for land use?
# What does mode mean?
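#
# A one-line illustration of why the mode (not the average) suits categorical data (the category codes below are made up):

```python
import numpy as np

block = np.array([1, 1, 4, 7])         # land-use category codes inside one coarse cell

print(block.mean())                    # 3.25 -- not a valid category at all
print(np.bincount(block).argmax())     # 1    -- the mode, an actual class
```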
# !g.region res=100 -p
# !r.resamp.stats elev_ned_30m out=elev_new100m_avg method=average
# !d.rast elev_new100m_avg
Image(filename="map.png")
# Before the next computation, remove all map layers from the _Layer Manager_
# because we don't need to see them anymore.
# !d.erase
# !r.resamp.stats landuse96_28m out=landuse96_100m method=mode
# !d.rast landuse96_100m
Image(filename="map.png")
# Remove or switch off the land use, elevation and aspect maps.
#
#
#
# ### Converting between vector data types
#
#
# Convert census blocks polygons to points using their centroids
# (useful for interpolating a population density trend surface):
# !v.to.points census_wake2000 type=centroid out=census_centr use=vertex
# Display census boundaries using GUI:
# _Add vector_ "census_wake2000"
# _Selection_ > _Feature type_ > _boundary_
# (switch off the other types).
# Save the displayed map in _Map Display_ click button
# _Save display to graphic file_.
# Alternatively, use the following commands to control display.
#
# Note that in both command line and GUI you must either enter the full path
# to the file you are saving the image in, or you must know the current working
# directory.
# !d.vect census_centr icon=basic/circle fill_color=green size=10
# !d.vect census_wake2000 color=red fill_color=none
# !d.legend.vect
Image(filename="map.png")
# Convert contour lines to points (useful for computing DEM from contours):
# !v.to.points input=elev_ned10m_cont10m output=elev_ned_contpts type=line use=vertex
# Display the "elev_ned_contpts" points vector and zoom-in to very small area
# to see the actual points.
# !d.vect elev_ned_contpts co=brown icon=basic/point size=3
Image(filename="map.png")
# ### Convert from vector to raster
#
#
# Convert vector data to raster for use in raster-based analysis.
# First, adjust the computational region to resolution 200m:
# !g.region swwake_30m res=200 -p
# Then remove all layers from the _Layer Manager_.
#
#
#
# Convert vector points "schools" to raster.
# As value for raster use attribute column "CORECAPACI" for core capacity.
# To add legend in GUI use
# _Add map elements_ > _Show/hide legend_
# and select "schools_cap_200m".
# !d.vect schools_wake
# !v.info -c schools_wake
# !v.to.rast schools_wake out=schools_cap_200m use=attr attrcol=CORECAPACI type=point
# !d.rast schools_cap_200m
# !d.vect streets_wake co=grey
# !d.legend schools_cap_200m at=70,30,2,6
Image(filename="map.png")
# Now convert lines in "streets" vector to raster.
# Set the resolution to 30m and use speed limit attribute.
# !g.region res=30 -p
# !v.to.rast streets_wake out=streets_speed_30m use=attr attrcol=SPEED type=line
# If you haven't done this already, remove all other map layers
# from the _Layer Manager_ and add the "streets_speed_30m" raster layer.
# Add legend for "streets_speed_30m" raster using GUI in _Map Display_:
# _Add legend_ > _Set Options_ > _Advanced_ > _List of discrete cat numbers_
# and type in speed limits 25,35,45,55,65; move legend with mouse as needed.
#
# Alternatively, use the following commands:
# !d.erase
# !d.rast streets_speed_30m
# !d.legend streets_speed_30m at=5,30,2,5 use=25,35,45,55,65
Image(filename="map.png")
# Save the displayed map.
# In _Map Display_ click button _Save display to graphic file_,
# or use the following.
Image(filename="map.png")
# ### Convert from raster to vector
#
# Convert raster lines to vector lines.
#
# First, set the region and remove map layers from _Layer Manager_.
# Then do the conversion.
#
# Explain why we are using _r.thin_ module.
# You may want to remove all previously used layers from the _Layer Manager_
# before you start these new computations.
# !d.erase
# !g.region raster=streams_derived -p
# !d.rast streams_derived
# !r.thin streams_derived output=streams_derived_t
# !r.to.vect streams_derived_t output=streams_derived_t type=line
Image(filename="map.png")
# Visually compare the result with streams digitized from airphotos.
# !d.vect streams_derived_t color=blue
# !d.vect streams color=red
Image(filename="map.png")
# Save the displayed map (in Map Display click button _Save display to graphic file_).
Image(filename="map.png")
# Convert raster areas representing basins to vector polygons.
#
# Use raster value as category number (flag -v) and
# display vector polygons filled with random colors.
# In GUI: Add vector > Colors > Switch on Random colors.
# You may want to remove all previously used layers from the _Layer Manager_
# before you start these new computations.
# !g.region raster=basin_50K -p
# !d.erase
# !d.rast basin_50K
# !r.to.vect -sv basin_50K output=basin_50Kval type=area
# !d.vect -c basin_50Kval
# !d.vect streams color=blue
Image(filename="map.png")
# Save the displayed map either using GUI or using the following in case
# you are working in the command line.
Image(filename="map.png")
# end the GRASS session
os.remove(rcfile)
| notebooks/data_models.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from sklearn.model_selection import train_test_split
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeRegressor
import pickle
#Features
data = pd.read_csv("car data.csv",usecols=[1,3,4])
print('Features:','\n\n\t',data.head(10))
print('\n\n Dimensions of Feature:','\n\n\t',data.shape)
#Targets
data2 = pd.read_csv("car data.csv",usecols=[2])
print('\n\n Target values:','\n\n\t',data2.head(10))
print('\n\n Dimensions of Target:','\n\t',data2.shape, '\n\n')
#Splitting the test data from the dataset first
X_train, X_test, y_train, y_test = train_test_split(data, data2, test_size=0.30)
#Dimensions of the split dataset
print('-----Shapes-----')
print('Training Data: ', X_train.shape, y_train.shape)
print('Testing Data: ', X_test.shape, y_test.shape, '\n\n')
#Creating the model
Model = DecisionTreeRegressor(random_state = 0)
#converting to numpy arrays
Xtr = np.array(X_train)
ytr = np.array(y_train)
Xte = np.array(X_test)
yte = np.array(y_test)
#Fitting the model to the data
Model.fit(Xtr, ytr)
#Making Predictions on the fitted model
predictions = Model.predict(Xte)
print('-------Predictions--------\n',predictions,'\n')
#Evaluating the model's performance
r_square = Model.score(Xtr, ytr)
print('\n\nModel Evaluation Score (On Training Data): ', r_square)
r_square_testD = Model.score(Xte, yte)
print('\nModel Evaluation Score (On Testing Data): ', r_square_testD)
#Plotting the predictions
plt.scatter(yte, predictions)
plt.show()
# -
#Saving the model for future use
filename = 'automobile-price-prediction-DTRegressor-model.pkl'
pickle.dump(Model, open(filename, 'wb'))
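#To use the saved model later, load it back with pickle. A minimal round-trip sketch (the tiny in-memory dataset here is illustrative, standing in for the file saved above):

```python
import pickle
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# fit a small model, serialize it to bytes, and restore it
model = DecisionTreeRegressor(random_state=0).fit(np.array([[1.], [2.], [3.]]), [1., 2., 3.])
restored = pickle.loads(pickle.dumps(model))

print(restored.predict(np.array([[2.]])))   # same prediction as the original model
```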
| Model-building and training/Car Price Predictor (Sklearn Decision Tree Regression Model).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from ConvGRU import ConvGRU, ConvGRUCell
from reformer.reformer_enc_dec import ReformerEncDec
from reformer.reformer_pytorch import Reformer, ReformerLM
from patchify import patchify, unpatchify
from axial_positional_embedding import AxialPositionalEmbedding
from transformers import ReformerModel, ReformerConfig, ReformerTokenizer
import deepspeed
import argparse
import os
import sys
import numpy as np
import math
import pickle
import cv2 as cv
import matplotlib
import matplotlib.pyplot as plt
import random
import time
import json
from cv2 import VideoWriter, VideoWriter_fourcc, imread
import torchvision.transforms as transforms
from torchvision.utils import save_image
from torch.utils.data import Dataset, DataLoader
from torchvision import datasets
from torch.autograd import Variable
from torch.cuda.amp import autocast, GradScaler
import torch.nn as nn
import torch.nn.functional as F
import torch
import torchvision
import warnings
torch.cuda.set_device(0)
dataset_dir = r"C:\Users/Leo's PC/Documents/SSTP Tests/SSTP/GruGan/test_frames"
# +
def img2embedding(imgs: np.array, patch_shape):
# N, C, H, W
out = []
if len(imgs.shape) == 3:
imgs = np.expand_dims(imgs, 1)
if imgs.shape[1] == 1: # if grayscale
for img in imgs:
img = img[0]
patches = patchify(img, patch_shape, step=patch_shape)
patches = np.reshape(patches, (int((img.shape[0]/patch_shape[0]) * (img.shape[1]/patch_shape[1])), int(patch_shape[0] * patch_shape[1])))
out.append(patches)
out = np.asarray(out)
return out
toVTensor = lambda x : Variable(torch.Tensor(x).cuda())
# -
class ReformerDatasetFast(Dataset):
def __init__(self, file_dir, transform=None, seq_len=1):
self.dir = file_dir
self.transform = transform
self.seq_len = seq_len
self.diction = [] # yes, yes, it is an array called diction
readImage = lambda filename: self.transform(np.array(cv.imread(os.path.join(self.dir, filename)) / 255)) if self.transform else np.array(cv.imread(os.path.join(self.dir, filename)) / 255)
idx = 0
for filename in os.listdir(self.dir):
if filename.endswith('jpg'):
self.diction.append(readImage(filename))
idx += 1
def __len__(self):
return len(self.diction) - 1
def __getitem__(self, idx):
start = time.time()
x, y = self.diction[idx*self.seq_len : (idx+1)*self.seq_len], self.diction[idx*self.seq_len+1 : (idx+1)*self.seq_len+1]
print(time.time() - start)
x, y = torch.Tensor(np.asarray(x)), torch.Tensor(np.asarray(y))
print(time.time() - start)
return [x, y]
# +
class ReformerDataset(Dataset):
def __init__(self, file_dir, transform=None, seq_len=1):
self.dir = file_dir
self.transform = transform
self.seq_len = seq_len
self.diction = [] # yes, yes, it is an array called diction
idx = 0
for filename in os.listdir(self.dir):
if filename.endswith('jpg'):
self.diction.append(filename)
idx += 1
def __len__(self):
return len(self.diction) - 1
def __getitem__(self, idx):
start = time.time()
readImage = lambda filename: self.transform(np.array(cv.imread(os.path.join(self.dir, filename)) / 255)) if self.transform else np.array(cv.imread(os.path.join(self.dir, filename)) / 255)
x, y = self.diction[idx*self.seq_len : (idx+1)*self.seq_len], self.diction[idx*self.seq_len+1 : (idx+1)*self.seq_len+1]
x, y = torch.Tensor(np.asarray(list(map(readImage, x)))), torch.Tensor(np.asarray(list(map(readImage, y))))
return [x, y]
def HWC2CHW(x):
return np.array(x).transpose(2, 0, 1)
dataset = ReformerDataset(file_dir=dataset_dir, transform=HWC2CHW, seq_len=256)
loader = DataLoader(dataset=dataset, batch_size=1, shuffle=False, drop_last=True, num_workers=0)
# -
for i, imgs in enumerate(loader):
for j in imgs:
print(j.shape)
break
# +
class Generator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.encoder = ReformerLM(
dim = 256,
depth = 6,
heads = 8,
max_seq_len = 256,
bucket_size = 64,
causal = False,
embed = False,
return_embeddings = True #return the output of the last attention layer, the keys.
).cuda()
self.decoder = ReformerLM(
dim = 256,
depth = 6,
heads = 8,
max_seq_len = 256,
bucket_size = 64,
causal = False,
embed = False,
return_embeddings = True #return the output of the last attention layer, the keys; otherwise would get a softmax activation of vocab dict distribution
).cuda()
def forward(self, x, y_prev):
self.encoded_keys = self.encoder(x)
self.output = self.decoder(y_prev, keys = self.encoded_keys)
return self.output
class Discriminator(nn.Module):
def __init__(self):
super(Generator, self).__init__()
self.encoder = ReformerLM(
dim = 256,
depth = 6,
heads = 8,
max_seq_len = 256,
bucket_size = 64,
causal = False,
embed = False,
return_embeddings = True #return the output of the last attention layer, the keys.
).cuda()
self.decoder = ReformerLM(
dim = 256,
depth = 6,
heads = 8,
max_seq_len = 256,
bucket_size = 64,
causal = False,
embed = False,
return_embeddings = True #return the output of the last attention layer, the keys; otherwise would get a softmax activation of vocab dict distribution
).cuda()
self.fc = nn.Linear(256, 1)
self.sigmoid = nn.Sigmoid()
def forward(x, y_prev):
self.encoded_keys = self.encoder(x)
self.embeddings = self.decoder(y_prev, keys = self.encoded_keys)
self.output = self.fc(self.embeddings)
self.output = self.sigmoid(self.output)
return self.output, self.embeddings
class Decoder_Generator(nn.Module):
def __init__(self):
super(Decoder_Generator, self).__init__()
self.decoder = ReformerLM(
dim = 256,
depth = 6,
heads = 8,
max_seq_len = 16384, # ~10 seconds
bucket_size = 64,
causal = True,
embed = False,
return_embeddings = True #return the output of the last attention layer, the keys; otherwise would get a softmax activation of vocab dict distribution
).cuda()
def forward(self, x):
self.output = self.decoder(x)
return self.output
class Input_Conv(nn.Module):
def __init__(self):
super(Input_Conv, self).__init__()
# Initialize the DenseBlock, input shape is (n, 3, 256, 256), output shape is (n, 64, 16, 16)
self.denseblock = torchvision.models.densenet121()
self.denseblock.features.transition1.conv = nn.Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
self.denseblock.features.transition1.pool = nn.AvgPool2d(kernel_size=4, stride=4, padding=0)
self.denseblock = nn.Sequential(*list(self.denseblock.features.children())[:6])
def forward(self, x):
return self.denseblock(x)
# -
# ## Embeddings:
# ### Generator:
# #### Encoder Input:
# Encoder input is the sample image, broken up into 16x16 patches (for a 256x256 image, that's 256 vectors of length 256). However, for a colored image (3x256x256,) the channels need to be mapped. To get the input, we first cat(R, G, B), which has shape (768, 256). But we need to concatenate RGB embeddings, as well as positional embeddings, to the end of each pixel array. If RGB encoding occupies 64 numbers and positional (in-image spatial) occupies 196 numbers, we'll get a vector of length 512 for each patch, making our input of shape (256, 512) for a 256x256 image.
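#
# The patch extraction described above amounts to a pair of reshapes; a plain-NumPy sketch of what the `patchify` call does for a 256x256 single-channel image with 16x16 patches:

```python
import numpy as np

img = np.arange(256 * 256).reshape(256, 256)

# split into non-overlapping 16x16 patches, then flatten each patch
patches = (img.reshape(16, 16, 16, 16)   # (row_block, row_in, col_block, col_in)
              .transpose(0, 2, 1, 3)     # (row_block, col_block, row_in, col_in)
              .reshape(256, 256))        # 256 patches, each a length-256 vector

print(patches.shape)   # (256, 256)
```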
# #### Decoder Input:
# Compared to the encoder input, the decoder needs to handle one more dimension: the time dimension of the video frame sequence.
# ### Discriminator:
# #### Encoder Input:
# Same as the Generator encoder input above.
# #### Decoder Input:
# +
featuremap_embedder = nn.Embedding(num_embeddings=64, embedding_dim=128).cuda()
sequence_position_embedder = nn.Embedding(num_embeddings=256, embedding_dim=128).cuda()
CNN = Input_Conv().cuda()
Decoder = Decoder_Generator().cuda()
# -
print(sequence_position_embedder(torch.Tensor([0]).long().cuda()))
pos_embedder = AxialPositionalEmbedding(256, (256, 64))
fmap_embedder = AxialPositionalEmbedding(256, (256, 64))
# +
inp = toVTensor(np.random.rand(2, 3, 256, 256))
out = CNN(inp)
out = out.view(1, 128, 256)
out = out + pos_embedder(out) + fmap_embedder(out)
print(out.shape)
out = Decoder(out)
print(out.shape)
# +
inp = toVTensor(np.random.rand(1, 3, 256, 256))
out = CNN(inp)
final = []
for n, image in enumerate(out):
result = []
for f, featuremap in enumerate(image):
featuremap = torch.cat((featuremap.reshape(256), featuremap_embedder(torch.Tensor([f]).long().squeeze(0).cuda()), sequence_position_embedder(torch.Tensor([n]).long().squeeze(0).cuda())), dim=0)
result.append(featuremap.detach().cpu().numpy()) ##TODO this cannot be detached here, need to figure out how to do without
final.append(result)
final = torch.Tensor(np.asarray(final))
print(final.shape)
print(final[0][0].shape)
out = Decoder(final)
# -
out = Decoder(toVTensor(np.random.rand(1, 128, 512)))
print(out.shape)
dsconfig={
"train_batch_size":4,
"optimizer": {
"type": "Adam",
"params": {
"lr": 0.001,
"betas": [0.8, 0.999],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"fp16": {
"enabled": True,
"loss_scale": 0,
"initial_scale_power": 32,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": True,
"allgather_bucket_size": 5e8,
"overlap_comm": False,
"reduce_scatter": True,
"reduce_bucket_size": 5e8,
"contiguous_gradients" : False,
"cpu_offload": False,
"cpu_offload_params" : False,
"cpu_offload_use_pin_memory" : False,
"stage3_max_live_parameters" : 1e9,
"stage3_max_reuse_distance" : 1e9,
"stage3_prefetch_bucket_size" : 5e8,
"stage3_param_persistence_threshold" : 1e6,
"sub_group_size" : 1e12
},
"logging":{
"steps_per_print":100,
"wall_clock_breakdown":True
}
}
# ## With the lucid Reformer (crashes the kernel)
# +
class Decoder(nn.Module):
def __init__(self, dim, depth=6, heads=8, max_seq_len=16384, bucket_size=64):
super(Decoder, self).__init__()
self.dim = dim
self.depth = depth
self.heads = heads
self.max_seq_len = max_seq_len
self.bucket_size = bucket_size
self.decoder = ReformerLM(
dim = self.dim,
depth = self.depth,
heads = self.heads,
max_seq_len = self.max_seq_len, # ~10 seconds
bucket_size = self.bucket_size,
causal = True,
embed = False,
return_embeddings = True # return the output of the last attention layer; otherwise we would get a softmax distribution over the vocabulary
).cuda()
self.pos_embedder = AxialPositionalEmbedding(256, (256, 64))
self.fmap_embedder = AxialPositionalEmbedding(256, (256, 64))
#@autocast()
def forward(self, x):
self.out = x + self.pos_embedder(x)
#Positional Embedding
for b in range(len(self.out)): #batch
for i in range(int(len(self.out[b])/64)): #vector embeddings in a batch
self.out[b][i*64:(i+1)*64] = self.fmap_embedder(self.out[b][i*64:(i+1)*64].unsqueeze(0)).squeeze(0)
self.out = self.decoder(self.out)
return self.out
class Input_Conv(nn.Module):
def __init__(self):
super(Input_Conv, self).__init__()
# Initialize the DenseBlock, input shape is (n, 3, 256, 256), output shape is (n, 64, 16, 16)
self.denseblock = torchvision.models.densenet121()
self.denseblock.features.transition1.conv = nn.Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
self.denseblock.features.transition1.pool = nn.AvgPool2d(kernel_size=4, stride=4, padding=0)
self.denseblock = nn.Sequential(*list(self.denseblock.features.children())[:6])
@autocast()
def forward(self, x):
return self.denseblock(x)
class Output_ConvTranspose(nn.Module):
def __init__(self):
super(Output_ConvTranspose, self).__init__()
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
self.upsample = nn.Upsample(scale_factor=2)
self.conv1 = nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=[5,5], stride=1, padding=1)
self.conv2 = nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=[5,5], stride=1, padding=1)
self.conv3 = nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=[5,5], stride=1, padding=1)
self.conv4 = nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=[5,5], stride=1, padding=1)
self.conv5 = nn.ConvTranspose2d(in_channels=64, out_channels=3, kernel_size=[1,1], stride=1, padding=0)
@autocast()
def forward(self, x):
# input size (1, 64, 16, 16)
self.out = self.conv1(x)
self.out = self.relu(self.out)
self.out = self.upsample(self.out)
self.out = self.conv2(self.out)
self.out = self.relu(self.out)
self.out = self.upsample(self.out)
self.out = self.conv3(self.out)
self.out = self.relu(self.out)
self.out = self.upsample(self.out)
self.out = self.conv4(self.out)
self.out = self.relu(self.out)
self.out = self.upsample(self.out)
self.out = self.conv5(self.out)
self.out = self.sigmoid(self.out)
return self.out
class Generator(nn.Module):
def __init__(self, dim, depth=6, heads=8, max_seq_len=16384, bucket_size=64):
super(Generator, self).__init__()
self.dim = dim
self.depth = depth
self.heads = heads
self.max_seq_len = max_seq_len
self.bucket_size = bucket_size
self.inputconv = Input_Conv()
self.reformer = Decoder(dim=self.dim, depth=self.depth, heads=self.heads, max_seq_len=self.max_seq_len, bucket_size=self.bucket_size)
self.outputconvtranspose = Output_ConvTranspose()
@autocast()
def forward(self, x):
#input shape is (b, n, c, h, w)
self.out = []
for b in x:
for n in b:
self.out.append(self.inputconv(n.unsqueeze(0)).cpu().detach().numpy())
self.out = torch.Tensor(self.out)
self.unflattened_shape = self.out.shape
self.out = self.out.view(x.shape[0], self.max_seq_len, self.dim) #TODO padding for variable sequence length input
self.out = self.reformer(self.out)
self.out = self.out.view(self.unflattened_shape)
self.outarray = [] # use a separate list: reassigning self.out = [] would discard the tensor we still need to iterate over
for b in self.out:
for n in b:
self.outarray.append(self.outputconvtranspose(n.unsqueeze(0)).squeeze(0))
self.out = torch.stack(self.outarray)
return self.out
# +
## Using Huggingface's Reformer implementation
# +
# Initializing a Reformer configuration
configuration = ReformerConfig(attention_head_size=64, attn_layers=['local', 'lsh', 'local', 'lsh', 'local', 'lsh'], axial_norm_std=1.0, axial_pos_embds=True, axial_pos_shape=[64, 64],
axial_pos_embds_dim=[64, 192], chunk_size_lm_head=0, eos_token_id=2, feed_forward_size=256, hash_seed=None, hidden_act='relu', hidden_dropout_prob=0.05,
hidden_size=256, initializer_range=0.02, is_decoder=False, layer_norm_eps=1e-12, local_num_chunks_before=1, local_num_chunks_after=0,
local_attention_probs_dropout_prob=0.05, local_attn_chunk_length=64, lsh_attn_chunk_length=64, lsh_attention_probs_dropout_prob=0.0, lsh_num_chunks_before=1,
lsh_num_chunks_after=0, max_position_embeddings=4096, num_attention_heads=12, num_buckets=None, num_hashes=1, pad_token_id=0, vocab_size=320, tie_word_embeddings=False,
use_cache=True)
# Initializing a Reformer model
model = ReformerModel.from_pretrained('google/reformer-crime-and-punishment')
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
# +
class Decoder(nn.Module):
def __init__(self, dim, depth=6, heads=8, max_seq_len=16384, bucket_size=64):
super(Decoder, self).__init__()
self.dim = dim
self.depth = depth
self.heads = heads
self.max_seq_len = max_seq_len
self.bucket_size = bucket_size
# Initializing a Reformer configuration
self.configuration = ReformerConfig(attention_head_size=64, attn_layers=['local', 'lsh', 'local', 'lsh', 'local', 'lsh'], axial_norm_std=1.0, axial_pos_embds=True, axial_pos_shape=[256, 64],
axial_pos_embds_dim=[64, 192], chunk_size_lm_head=0, eos_token_id=2, feed_forward_size=256, hash_seed=None, hidden_act='relu', hidden_dropout_prob=0.05,
hidden_size=256, initializer_range=0.02, is_decoder=True, layer_norm_eps=1e-12, local_num_chunks_before=1, local_num_chunks_after=0,
local_attention_probs_dropout_prob=0.05, local_attn_chunk_length=64, lsh_attn_chunk_length=64, lsh_attention_probs_dropout_prob=0.0, lsh_num_chunks_before=1,
lsh_num_chunks_after=0, max_position_embeddings=16384, num_attention_heads=self.heads, num_buckets=None, num_hashes=1, pad_token_id=0, vocab_size=320,
tie_word_embeddings=False, use_cache=False, target_mapping=None)
# Initializing a Reformer model
self.decoder = ReformerModel(self.configuration)
# self.pos_embedder = AxialPositionalEmbedding(256, (256, 64))
self.fmap_embedder = AxialPositionalEmbedding(256, (256, 64))
@autocast()
def forward(self, x):
# self.out = x + self.pos_embedder(x)
self.out = x
#Positional Embedding
for b in range(len(self.out)): #batch
for i in range(int(len(self.out[b])/64)): #vector embeddings in a batch
self.out[b][i*64:(i+1)*64] = self.fmap_embedder(self.out[b][i*64:(i+1)*64].unsqueeze(0)).squeeze(0)
print(self.out.shape)
self.out = self.decoder(inputs_embeds=self.out)
return self.out.last_hidden_state
class Input_Conv(nn.Module):
def __init__(self):
super(Input_Conv, self).__init__()
# Initialize the DenseBlock, input shape is (n, 3, 256, 256), output shape is (n, 64, 16, 16)
self.denseblock = torchvision.models.densenet121()
self.denseblock.features.transition1.conv = nn.Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
self.denseblock.features.transition1.pool = nn.AvgPool2d(kernel_size=4, stride=4, padding=0)
self.denseblock = nn.Sequential(*list(self.denseblock.features.children())[:6])
@autocast()
def forward(self, x):
return self.denseblock(x)
class Output_ConvTranspose(nn.Module):
def __init__(self):
super(Output_ConvTranspose, self).__init__()
self.relu = nn.ReLU()
self.sigmoid = nn.Sigmoid()
self.upsample = nn.Upsample(scale_factor=2)
self.conv1 = nn.ConvTranspose2d(in_channels=128, out_channels=128, kernel_size=[3,3], stride=1, padding=1)
self.conv2 = nn.ConvTranspose2d(in_channels=128, out_channels=64, kernel_size=[3,3], stride=1, padding=1)
self.conv3 = nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=[3,3], stride=1, padding=1)
self.conv4 = nn.ConvTranspose2d(in_channels=64, out_channels=64, kernel_size=[3,3], stride=1, padding=1)
self.conv5 = nn.ConvTranspose2d(in_channels=64, out_channels=3, kernel_size=[1,1], stride=1, padding=0)
@autocast()
def forward(self, x):
# input size (1, 64, 16, 16)
self.out = self.conv1(x)
self.out = self.relu(self.out)
self.out = self.upsample(self.out)
self.out = self.conv2(self.out)
self.out = self.relu(self.out)
self.out = self.upsample(self.out)
self.out = self.conv3(self.out)
self.out = self.relu(self.out)
self.out = self.upsample(self.out)
self.out = self.conv4(self.out)
self.out = self.relu(self.out)
self.out = self.upsample(self.out)
self.out = self.conv5(self.out)
self.out = self.sigmoid(self.out)
return self.out
class Generator(nn.Module):
def __init__(self, dim, depth=6, heads=8, max_seq_len=16384, bucket_size=64):
super(Generator, self).__init__()
self.dim = dim
self.depth = depth
self.heads = heads
self.max_seq_len = max_seq_len
self.bucket_size = bucket_size
self.inputconv = Input_Conv()
self.reformer = Decoder(dim=self.dim, depth=self.depth, heads=self.heads, max_seq_len=self.max_seq_len, bucket_size=self.bucket_size)
self.outputconvtranspose = Output_ConvTranspose()
@autocast()
def forward(self, x):
#input shape is (b, n, c, h, w)
self.out = []
for b in x:
for n in b:
self.out.append(self.inputconv(n.unsqueeze(0)).squeeze(0).cpu().detach().numpy())
self.out = torch.Tensor(self.out).cuda()
self.unflattened_shape = self.out.shape
self.out = self.out.view(x.shape[0], self.max_seq_len, self.dim) #TODO padding for variable sequence length input
self.out = self.reformer(self.out)
print(self.out.shape)
self.out = self.out.view(1, 256, 128, 16, 16) #TODO this needs to be changed for adaptive sizing
self.outarray = []
for b in self.out:
for n in b:
self.outarray.append(self.outputconvtranspose(n.unsqueeze(0)).squeeze(0))
#self.out = torch.Tensor(self.outarray)
return self.outarray
# -
G = Generator(dim=256).cuda()
for i, imgs in enumerate(loader):
inp = Variable(imgs[0]).cuda()
with autocast():
out = G(inp)
print(out[1].shape)
#dsconfig = json.loads(dsconfig)
model = torchvision.models.densenet121()
model_engine, optimizer, _, _ = deepspeed.initialize(config=dsconfig, model=model, model_parameters=model.parameters())
| GruGan/ReformerTest.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Querying Microsoft Defender Data
# MSTICPy versions >= 1.5.0
#
# ### Description
# This Notebook provides details and examples of how to connect to and query data from the Microsoft Defender Advanced Hunting API.
#
# <p style="border: solid; padding: 5pt"><b>Note: </b>
# This notebook reflects a partially-updated component and still
# uses the "MDATP" abbreviation to refer to the Microsoft 365 Defender
# and Microsoft Defender for Endpoint data services.
# </p>
#
# ### Installation
# !pip install --upgrade msticpy
# ### Authentication
#
# Authentication for the Microsoft Defender Advanced Hunting API is handled via an Azure AD application. Before you can authenticate you will need to register an application and provide it with the required permissions. MSTICpy supports Application Context authentication to the API.
# Detailed instructions on registering an application can be found here:
# - [Get access with an application context](https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/exposed-apis-create-app-webapp?view=o365-worldwide)
# - [Get access with a user context](https://docs.microsoft.com/en-us/microsoft-365/security/defender-endpoint/exposed-apis-create-app-nativeapp?view=o365-worldwide)
#
# Once created, you will need the following details:
# * Application (client) ID
# * Directory (tenant) ID
# * Client secret
#
# These details can be found in the Azure Portal under Azure Active Directory > App Registrations.
#
# Once collected, the easiest way to manage these details is via msticpyconfig.yaml: simply add them to the file in the following format:
#
# ```yaml
# DataProviders:
# MicrosoftDefender:
# Args:
# ClientId: "CLIENT ID"
# ClientSecret:
# KeyVault:
# TenantId: "TENANT ID"
# ```
# You can then initialize a data provider for Microsoft Defender and connect the provider.
#
# Note: you can also provide these values to the connect function.
# See [Microsoft Defender data provider](https://msticpy.readthedocs.io/en/latest/data_acquisition/DataProviders.html#microsoft-defender)
#
# <p style="border: solid; padding: 5pt"><b>Note: </b>
# If you want to access the Microsoft Defender for Endpoint
# APIs rather than the M365 Defender API (the latter is a subset
# of the former), please use "MDE" as the parameter to QueryProvider.
# </p>
# +
from msticpy.data.data_providers import QueryProvider
md_prov = QueryProvider('M365D')
md_prov.connect()
# -
# Once connected, the Microsoft Defender data connector functions in a similar manner to other data connectors. You can list queries:
md_prov.list_queries()
# Get details about available queries:
md_prov.MDATP.list_alerts('?')
# Execute queries with default parameters:
md_prov.MDATP.list_alerts()
# Execute queries with custom parameters:
md_prov.MDATP.list_alerts(start="-30", add_query_items="| summarize count() by Severity")
# Print a fully constructed query for debug purposes:
md_prov.MDATP.list_alerts("print", start="-30", add_query_items="| summarize count() by Severity")
# Execute a custom query:
query = "AlertEvents | sample 10"
md_prov.exec_query(query)
| docs/notebooks/MicrosoftDefender.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Lab 04 - Bayesian Fitting
# ## Tasks
# - Construct a Gaussian Process model and tune hyperparameters of GP model given noisy data
# - Investigate what kernels can be used to best represent the data
# # Set up environment
# !pip install git+https://github.com/uspas/2021_optimization_and_ml --quiet
# +
# %reset -f
import numpy as np
import matplotlib.pyplot as plt
#matplotlib graphs will be included in your notebook, next to the code:
# %matplotlib inline
import torch
import gpytorch
# -
# ## Import Data
# We are going to look at some data that was generated by sampling a 5 x 5 x 5 grid in the domain [0,1] on each axis. The function that generated this data is
#
# $$
# f(x_1,x_2,x_3) = \sin(2\pi x_1)\sin(\pi x_2) + x_3
# $$
#
# The columns of the array are $(x_1,x_2,x_3,f)$. We need to convert them to torch tensors to use with GPyTorch.
# +
x = np.linspace(0,1,5)
xx = np.meshgrid(x,x,x)
train_x = np.vstack([ele.ravel() for ele in xx]).T
train_f = np.sin(2*np.pi*train_x[:,0]) * np.sin(np.pi*train_x[:,1]) + train_x[:,2] + np.random.randn(train_x.shape[0]) * 0.01
train_x = torch.from_numpy(train_x)
train_f = torch.from_numpy(train_f)
# -
# ## Define a GP Model
# Here we define an Exact GP model using GPyTorch. The model is exact because we have analytic expressions for the integrals associated with the GP likelihood and output distribution. If we had a non-Gaussian likelihood or some other complication that prevented analytic integration we can also use Variational/Approximate/MCMC techniques to approximate the integrals necessary.
#
# Taking a close look at the model below we see two important modules:
# - ```self.mean_module``` which represents the mean function
# - ```self.covar_module``` which represents the kernel function (i.e., what is used to calculate the kernel matrix)
#
# Both of these objects are torch.nn.Module objects (see https://pytorch.org/docs/stable/generated/torch.nn.Module.html). PyTorch modules have trainable parameters which we can access when doing training. Grouping the modules inside another PyTorch module (gpytorch.models.ExactGP) lets us easily control which parameters are trained and which are not.
class ExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_f, likelihood):
super(ExactGPModel, self).__init__(train_x, train_f, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# Here we initialize our model with the training data and a defined likelihood (also a nn.Module) with a trainable noise parameter.
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_f, likelihood)
# NOTE: All PyTorch modules (including ExactGPModel) have ```.train()``` and ```.eval()``` modes. ```.train()``` mode is for optimizing model hyperparameters. ```.eval()``` mode is for computing predictions through the model posterior.
# ## Training the model
# Here we train the hyperparameters of the model (the parameters of the covar_module and the mean_module) to maximize the marginal log likelihood (minimize the negative marginal log likelihood). Note that since everything is defined in pyTorch we can use Autograd functionality to get the derivatives which will speed up optimization using the modified gradient descent algorithm ADAM.
#
# Also note that several of these hyperparameters (lengthscale and noise) must be strictly positive. Since ADAM is an unconstrained optimizer (which optimizes over the domain (-inf, inf)) gpytorch accounts for this constraint by optimizing the log of the lengthscale (raw_lengthscale). To get the actual lengthscale just use ```model.covar_module.base_kernel.lengthscale.item()```
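# As a standalone illustration of how an unconstrained "raw" parameter maps to a strictly positive one (gpytorch's default positive constraint uses a softplus transform; exact details may differ by version):

```python
import math

def softplus(raw):
    """Map an unconstrained value to a strictly positive one: log(1 + exp(raw))."""
    return math.log(1.0 + math.exp(raw))

print(softplus(0.0))       # ~0.693: even raw = 0 yields a positive lengthscale
print(softplus(-5.0) > 0)  # True: any raw value in (-inf, inf) maps to (0, inf)
```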
#
# <div class="alert alert-block alert-info">
#
# **Task:**
# Write the steps for minimizing the negative log likelihood using pytorch. Refer back to Lab 3 for a reminder of how to do this. Use `gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)` as the loss function (which we are trying to maximize!). Use your function to train the model and report the marginal log likelihood.
#
# </div>
def train_model(model, likelihood, training_iter=50):
# Find optimal model hyperparameters
model.train()
likelihood.train()
# define optimizer and the marginal log likelihood
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
for i in range(training_iter):
optimizer.zero_grad()
output = model(train_x)
loss = -mll(output, train_f) # minimize the negative marginal log likelihood
loss.backward()
optimizer.step()
#print the new trainable parameters
for param in model.named_parameters():
print(f'{param[0]} : {param[1]}')
return loss
# <div class="alert alert-block alert-info">
#
# **Task:**
# Define a new GP model that uses a different kernel (or combination of kernels) to maximize the marginal log likelihood.
#
# </div>
class MyExactGPModel(gpytorch.models.ExactGP):
def __init__(self, train_x, train_f, likelihood):
super(MyExactGPModel, self).__init__(train_x, train_f, likelihood)
self.mean_module = gpytorch.means.ConstantMean()
def forward(self, x):
mean_x = self.mean_module(x)
covar_x = self.covar_module(x)
return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)
# <div class="alert alert-block alert-info">
#
# **Task:**
# Plot the mean and uncertainty along the $x_1$ axis where $x_2=\pi/2, x_3=0$.
#
# </div>
#Hint: you can use the following code to get the predicted mean, lower + upper confidence bounds
x = torch.zeros(1,3).double()
my_likelihood.eval()
my_model.eval()
with torch.no_grad():
post = my_likelihood(my_model(x))
mean = post.mean
lower,upper = post.confidence_region()
| labs/lab_04/lab_04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# <center><h2><b>Imports</b></h2></center>
# + id="oQpnF4Gcbesh"
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# + [markdown] id="I9totdsduPyz"
# # KOMORAN
# + id="WJnJUd5-uTMU"
from konlpy.tag import Komoran
komoran = Komoran()
# + id="Lo8NN_0ucMHg"
# Load the movie review data
df = pd.read_csv("./data/processed/base_df.csv", encoding='utf-8')
# Load the stopword list
stopwords = pd.read_csv("./data/stopwords.txt")
stopwords = stopwords['word'].tolist()
# + colab={"base_uri": "https://localhost:8080/"} id="ycEdIiYbdkEZ" outputId="824ba907-be1c-4261-fdd3-ea596bfadf12"
# Remove stopwords + tokenize
tk = []
for sentence in tqdm(df['review']) :
tokenized_sentence = komoran.morphs(sentence) # tokenize
stopwords_removed_sentence = [word for word in tokenized_sentence if word not in stopwords] # remove stopwords
tk.append(stopwords_removed_sentence)
# + colab={"base_uri": "https://localhost:8080/"} id="-IzBJ_sqdosP" outputId="fa245bad-d554-4c36-ad6e-aad500324b1c"
tk[:3]
# + id="aqauOBrmdved"
# Integer encoding
tokenizer = Tokenizer()
tokenizer.fit_on_texts(tk)
# + colab={"base_uri": "https://localhost:8080/"} id="-sHyHZMad_B1" outputId="83238dd5-6ad6-417a-9f35-ce92978a5bf7"
threshold = 3
total_cnt = len(tokenizer.word_index) # number of words
rare_cnt = 0 # number of words whose frequency is below the threshold
total_freq = 0 # total frequency of all words in the training data
rare_freq = 0 # total frequency of words whose frequency is below the threshold
# Iterate over (word, frequency) pairs as key and value.
for key, value in tokenizer.word_counts.items():
total_freq = total_freq + value
# If the word's frequency is below the threshold
if(value < threshold):
rare_cnt = rare_cnt + 1
rare_freq = rare_freq + value
print('Vocabulary size:', total_cnt)
print('Number of rare words appearing %s times or fewer: %s' % (threshold - 1, rare_cnt))
print('Share of rare words in the vocabulary:', (rare_cnt / total_cnt)*100)
print('Share of rare-word occurrences among all occurrences:', (rare_freq / total_freq)*100)
# + colab={"base_uri": "https://localhost:8080/"} id="6Zv4D8oXeA_7" outputId="57712d60-74a3-49bc-f1bd-6d9de8eed613"
# Remove words with frequency 2 or lower from the vocabulary.
# Add 1 to account for the padding token at index 0.
vocab_size = total_cnt - rare_cnt + 1
print('Vocabulary size:', vocab_size)
# + colab={"base_uri": "https://localhost:8080/"} id="I2cdvV0NeGfj" outputId="f2850e18-6581-4649-9f85-4ec823683f1c"
# Text sequences >> integer sequences
tokenizer = Tokenizer(vocab_size)
tokenizer.fit_on_texts(tk)
tk = tokenizer.texts_to_sequences(tk)
print(tk[:3])
# + id="gvD8k_6FeIcR"
# Store the target data separately
yy = np.array(df['sentiment'])
# + colab={"base_uri": "https://localhost:8080/"} id="NMLF8b8PeMdA" outputId="f4df31e7-d8ed-4c11-bfc4-35260e5bc4be"
# Remove empty samples
drop_index = [index for index, sentence in enumerate(tk) if len(sentence) < 1]
len(drop_index)
# + colab={"base_uri": "https://localhost:8080/"} id="s4xo6oKlePIl" outputId="10bea640-69bd-4dc8-e950-16457c01f7a9"
tk = np.delete(tk, drop_index, axis=0)
yy = np.delete(yy, drop_index, axis=0)
print(len(tk))
print(len(yy))
# + colab={"base_uri": "https://localhost:8080/", "height": 318} id="AO8cglyDeQtb" outputId="5473fe29-eb8b-416e-b3a3-2ddb8277982c"
# Padding
print('Maximum review length:', max(len(review) for review in tk))
print('Average review length:', sum(map(len, tk))/len(tk))
plt.hist([len(review) for review in tk], bins=50)
plt.xlabel('length of samples')
plt.ylabel('number of samples')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="C4QPOTGaeSp3" outputId="c7dc04f9-e012-4133-c1d1-bfedfb89c2a1"
plt.hist([len(review) for review in tk], bins=50)
plt.xlabel('length of samples')
plt.ylabel('number of samples')
plt.xlim(0, 100)
plt.show()
# + [markdown] id="3UD8qSyYeZdQ"
# Setting max_len to 50
# + colab={"base_uri": "https://localhost:8080/"} id="hRhaBVSdeVHE" outputId="08f37935-a9ae-44c6-d78a-e434ea2d2dee"
def below_threshold_len(max_len, nested_list):
count = 0
for sentence in nested_list:
if(len(sentence) <= max_len):
count = count + 1
print('Share of samples with length <= %s: %s' % (max_len, (count / len(nested_list))*100))
max_len = 50
below_threshold_len(max_len, tk)
# + id="-bTWe5BMeXk7"
final = pad_sequences(tk, maxlen=max_len)
# -
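# A pure-Python sketch of what `pad_sequences` does with its defaults (pre-padding with zeros and pre-truncating, i.e. keeping the last `maxlen` tokens):

```python
def pad_left(seq, maxlen, value=0):
    """Mimic the Keras pad_sequences defaults for a single sequence."""
    seq = seq[-maxlen:]                          # pre-truncate: keep the last maxlen tokens
    return [value] * (maxlen - len(seq)) + seq   # pre-pad with the fill value

print(pad_left([1, 2], 4))            # [0, 0, 1, 2]
print(pad_left([3, 4, 5, 6, 7], 4))   # [4, 5, 6, 7]
```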
final_ = list(map(lambda x:[x],final.astype(object)))
# + id="6XCEA6hZed5k"
final_df = pd.DataFrame(final_,columns=['word'])
y_df = pd.DataFrame(yy)
# + colab={"base_uri": "https://localhost:8080/", "height": 235} id="ctJzxY1oefsW" outputId="b0c1d48a-5918-4cc2-9674-66ecb15f7e7f"
final_df = pd.concat([final_df, y_df], axis=1)
final_df.columns = ['word','label']
# + id="DgG1pKUOehiZ"
final_df.to_csv('./data/processed/komoran_df.csv')
| 3_kmj_preprocessing_komoran.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from filo import Series
folders = ['data/img1', 'data/img2']
basefolder = 'data'
series = Series(paths=folders, savepath=basefolder, extension='.png')
series
# # Access individual files in series
series.files[0] # first file in the first folder
num = 19
series.files[num].num # should always be equal to num
series.files[8].time # unix time of file creation
# # Get and save file info
series.files # list of all filo.File objects
series.info.head() # see top of info DataFrame
series.duration # time between first file and last file
series.save_info()
# # Update file info
# Completely replace file info (`series.files` and `series.info`) by that contained in the external file (overwrites `series.files`):
series.load_info('External_File_Info.txt')
series.info.head()
# Load info previously saved with `save_info()` (here, this effectively resets files and info to what they were before the previous call):
series.load_info()
series.info.head()
# Keep file info but only update times using an external file (file must contain `num` and `time (unix)` as columns):
series.load_time('External_Time_Info.txt')
series.info.head()
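# As a hypothetical illustration of such a time-info file, here is a minimal CSV with the two required columns, `num` and `time (unix)` (the delimiter filo actually expects is not shown here, so check the filo documentation before relying on this):

```python
import csv

# Hypothetical example file; only the column names follow the stated requirement.
rows = [{"num": 0, "time (unix)": 1600000000.0},
        {"num": 1, "time (unix)": 1600000001.5}]
with open("Example_Time_Info.txt", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["num", "time (unix)"])
    writer.writeheader()
    writer.writerows(rows)
```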
# Save this updated file info to csv file for future use:
series.save_info('Updated_File_Info.txt')
| ExampleSeries.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
sns.set()
# -
# ## Client transactions per month
df_transacciones = pd.read_csv("../data/DATA_TRANSAC_CANALES_F.csv")
df_transacciones.head().T
df_transacciones.describe().T
df_transacciones.info() # floats?
# ### There are several nulls in the monthly transaction counts. This could be a leak indicating the client has already churned.
# # Correlation between ranges and counts
not_pk_cols = list(set(df_transacciones.columns) - set(['MES', 'ID_CLIENTE']))
# Heatmap
plt.figure(figsize=(18,10))
sns.heatmap(df_transacciones[not_pk_cols].corr(),annot=True)
# ## Observations
# The columns CT_SUNAT_PYME and CT_PAGO_PROVEE
# (number of SUNAT transactions in the month and number of supplier-payment transactions in the month) have a correlation of 1.
#
# Their ranges are also quite wide. If the null values turn out to be a leak, these could be the same information.
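# A small helper sketch for flagging such (near-)perfectly correlated column pairs; `demo` below is synthetic data, not the bank dataset:

```python
import pandas as pd

def perfectly_correlated_pairs(df, threshold=0.999):
    """Return column pairs whose absolute Pearson correlation exceeds threshold."""
    corr = df.corr().abs()
    cols = list(corr.columns)
    return [(a, b)
            for i, a in enumerate(cols)
            for b in cols[i + 1:]
            if corr.loc[a, b] > threshold]

# Tiny synthetic check: two proportional columns are flagged, an unrelated one is not
demo = pd.DataFrame({"x": [1, 2, 3, 4], "y": [2, 4, 6, 8], "z": [4, 1, 3, 2]})
print(perfectly_correlated_pairs(demo))  # [('x', 'y')]
```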
# # Analysis of NULL values. We will validate against the target later.
# RGO_MTO_SUNAT_PYME
df_transacciones.RGO_MTO_SUNAT_PYME.value_counts(dropna=False).sort_index()
plt.figure(figsize=(20,5))
plot = sns.countplot(x = 'RGO_MTO_SUNAT_PYME', data = df_transacciones)
#RGO_MTO_PAGO_PROVEE
df_transacciones.RGO_MTO_PAGO_PROVEE.value_counts(dropna=False).sort_index()
plt.figure(figsize=(20,5))
plot = sns.countplot(x = 'RGO_MTO_PAGO_PROVEE', data = df_transacciones)
# ### Dropping the nulls for both groups in their transaction-count fields
#
# CT_SUNAT_PYME (not null) with RGO_MTO_SUNAT_PYME
df_transacciones[df_transacciones['CT_SUNAT_PYME'].notnull()].RGO_MTO_SUNAT_PYME.value_counts(dropna=False).sort_index()
# CT_PAGO_PROVEE (not null) with RGO_MTO_PAGO_PROVEE
df_transacciones[df_transacciones['CT_PAGO_PROVEE'].notnull()].RGO_MTO_PAGO_PROVEE.value_counts(dropna=False).sort_index()
# Heatmap
plt.figure(figsize=(18,10))
sns.heatmap(df_transacciones[df_transacciones['CT_SUNAT_PYME'].notnull() &
df_transacciones['CT_PAGO_PROVEE'].notnull()]
[['RGO_MTO_SUNAT_PYME','RGO_MTO_PAGO_PROVEE']].corr(),annot=True)
# ## The data is identical once nulls are excluded. COMPARE against the target. It could be a leak.
# # Final analysis: BY MONTH.
df_transacciones.PERIODO.value_counts(dropna=False).sort_index()
# +
# We will look at RECAUDO transfers and amounts as an example
# 201705
plt.figure(figsize=(20,5))
plot = sns.countplot(x = 'RGO_MTO_RECAUDO', data = df_transacciones[df_transacciones['PERIODO']==201705])
# -
df_transacciones[df_transacciones['PERIODO']==201705].CT_RECAUDO.value_counts(dropna=False).sort_index()
df_transacciones[df_transacciones['PERIODO']==201705].ID_CLIENTE.value_counts(dropna=False).sort_index()
# +
# 201706
plt.figure(figsize=(20,5))
plot = sns.countplot(x = 'RGO_MTO_RECAUDO', data = df_transacciones[df_transacciones['PERIODO']==201706])
# -
df_transacciones[df_transacciones['PERIODO']==201706].CT_RECAUDO.value_counts(dropna=False).sort_index()
df_transacciones[df_transacciones['PERIODO']==201706].ID_CLIENTE.value_counts(dropna=False).sort_index()
# +
# The distribution is different here.
# 201708
plt.figure(figsize=(20,5))
plot = sns.countplot(x = 'RGO_MTO_RECAUDO', data = df_transacciones[df_transacciones['PERIODO']==201708])
# -
df_transacciones[df_transacciones['PERIODO']==201708].CT_RECAUDO.value_counts(dropna=False).sort_index()
# +
# So what do these nulls mean? A first transfer? Or RECEIVED transactions? Hmm...
# Viendo solo los que tienen contador en nulo.
plt.figure(figsize=(20,5))
plot = sns.countplot(x = 'RGO_MTO_RECAUDO',
data = df_transacciones[(df_transacciones['PERIODO']==201708) & (df_transacciones['CT_RECAUDO'].notnull())])
# -
# Looking only at rows where the counter is null.
plt.figure(figsize=(20,5))
plot = sns.countplot(x = 'RGO_MTO_RECAUDO',
data = df_transacciones[(df_transacciones['PERIODO']==201708) & (df_transacciones['CT_RECAUDO'].isnull())])
# # Granularity. Here we may find the true meaning of the null counters.
df_transacciones.sort_values(["ID_CLIENTE","PERIODO"], inplace = True)
bool_series = df_transacciones[["ID_CLIENTE","PERIODO"]].duplicated()
df_transacciones[bool_series]
# ### There are no duplicates. We will review a client's history over recent periods to inspect the behavior of the nulls.
df_transacciones[(df_transacciones['PERIODO']==201901)]
# ### History of client 8
df_transacciones[(df_transacciones['ID_CLIENTE']==8)].sort_values(by=['PERIODO'])
# Has all count values for local transfers.
# ### History of client 31
df_transacciones[(df_transacciones['ID_CLIENTE']==31)].sort_values(by=['PERIODO'])
# Has a check count and one null in local transfers.
# ### History of client 99989
df_transacciones[(df_transacciones['ID_CLIENTE']==99989)].sort_values(by=['PERIODO'])
# ### History of client 206
df_transacciones[(df_transacciones['ID_CLIENTE']==206)].sort_values(by=['PERIODO'])
# #### The counts have quite a few null values. We would need to decide how to fill them or, failing that, use only the ranges.
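# One option, sketched on synthetic data: fill the null counts with 0 under the assumption that a null means "no transactions that month", while keeping a flag column so a model can still distinguish true zeros from filled nulls:

```python
import pandas as pd

demo = pd.DataFrame({"CT_RECAUDO": [1.0, None, 3.0]})  # synthetic stand-in for the real column
demo["CT_RECAUDO_WAS_NULL"] = demo["CT_RECAUDO"].isnull()  # preserve the missingness signal
demo["CT_RECAUDO"] = demo["CT_RECAUDO"].fillna(0)          # fill assuming null == no activity
print(demo["CT_RECAUDO"].tolist())           # [1.0, 0.0, 3.0]
print(demo["CT_RECAUDO_WAS_NULL"].tolist())  # [False, True, False]
```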
| eda/BasicEDA_DATA_TRANSAC_CANALES_F.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/moustafa-7/ChatBot-Project/blob/master/The_chat_bot_using_DL_seq2seq.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="bGik37-3W8SZ" colab_type="code" colab={}
import numpy as np
import pandas as pd
import string
import pickle
import operator
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="EmJ1ASv2hmOa" colab_type="code" colab={}
import codecs
# #!wget -c http://www.cs.cornell.edu/~cristian/data/cornell_movie_dialogs_corpus.zip
# #!unzip cornell_movie_dialogs_corpus.zip
with codecs.open("cornell movie-dialogs corpus/movie_lines.txt", "rb", encoding="utf-8", errors="ignore") as f:
lines = f.read().split("\n")
conversations = []
for line in lines:
data = line.split(" +++$+++ ")
conversations.append(data)
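# For reference, each line of `movie_lines.txt` separates its fields with the literal token `+++$+++`, which is why the split above works. A self-contained illustration (the sample line below is made up, but follows the corpus format):

```python
# Split a (made-up) corpus-style line on the Cornell separator token.
sample = "L1045 +++$+++ u0 +++$+++ m0 +++$+++ BIANCA +++$+++ They do not!"
fields = sample.split(" +++$+++ ")
print(fields[0])  # the line ID, e.g. "L1045"
print(fields[4])  # the utterance text itself
```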
# + id="6nAYtzh0h31a" colab_type="code" colab={}
chats = {}
for tokens in conversations:
if len(tokens)>4:
idx = tokens[0][1:]
chat = tokens[4]
        chats[int(idx)] = chat
# + id="_MhIx5gzi7PV" colab_type="code" colab={}
sorted_chats = sorted(chats.items(), key = lambda x: x[0])
#sorted_chats
# + id="TcmObjb7sYMk" colab_type="code" colab={}
conves_dict = {}
counter = 1
conves_ids = []
for i in range(1, len(sorted_chats)+1):
if i < len(sorted_chats):
if (sorted_chats[i][0] - sorted_chats[i-1][0]) == 1:
if sorted_chats[i-1][1] not in conves_ids:
conves_ids.append(sorted_chats[i-1][1])
conves_ids.append(sorted_chats[i][1])
elif (sorted_chats[i][0] - sorted_chats[i-1][0]) > 1:
conves_dict[counter] = conves_ids
conves_ids = []
counter += 1
else:
pass
# The loop above only stores a conversation when it hits a gap in the line ids,
# so the final run of consecutive ids is never flushed; store it explicitly.
if conves_ids:
    conves_dict[counter] = conves_ids
# + id="zU2vo04Us4Ia" colab_type="code" colab={}
#conves_dict
# + id="36Z9SVsGtQS9" colab_type="code" colab={}
context_and_target = []
for conves in conves_dict.values():
if (len(conves) % 2) != 0:
conves = conves[:-1]
for i in range(0, len(conves), 2):
context_and_target.append((conves[i], conves[i+1]))
# + id="fmJQ4Pifwkf4" colab_type="code" colab={}
#context_and_target
# + id="aXB_paQTwnXd" colab_type="code" colab={}
context, target = zip(*context_and_target)
# + id="3dJDAX0Fw3jB" colab_type="code" colab={}
context = list(context)
target = list(target)
# + id="IwSDDK5Ew8Lp" colab_type="code" colab={}
import re
def clean_text(text):
'''Clean text by removing unnecessary characters and altering the format of words.'''
text = text.lower()
text = re.sub(r"i'm", "i am", text)
text = re.sub(r"he's", "he is", text)
text = re.sub(r"she's", "she is", text)
text = re.sub(r"it's", "it is", text)
text = re.sub(r"that's", "that is", text)
    text = re.sub(r"what's", "what is", text)
text = re.sub(r"where's", "where is", text)
text = re.sub(r"how's", "how is", text)
text = re.sub(r"\'ll", " will", text)
text = re.sub(r"\'ve", " have", text)
text = re.sub(r"\'re", " are", text)
text = re.sub(r"\'d", " would", text)
text = re.sub(r"won't", "will not", text)
text = re.sub(r"can't", "cannot", text)
text = re.sub(r"n't", " not", text)
text = re.sub(r"n'", "ng", text)
text = re.sub(r"'bout", "about", text)
text = re.sub(r"'til", "until", text)
text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
return text
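# A quick sanity check of the cleaning idea on a toy sentence. This is a trimmed re-implementation with just a few of the rules above, so the cell is self-contained:

```python
import re

# Trimmed illustration of the contraction-expansion + punctuation-stripping
# approach used by clean_text (only a few of the rules, for brevity).
def clean_text_demo(text):
    text = text.lower()
    text = re.sub(r"i'm", "i am", text)
    text = re.sub(r"can't", "cannot", text)
    text = re.sub(r"[-()\"#/@;:<>{}`+=~|.!?,]", "", text)
    return text

print(clean_text_demo("I'm sure you can't!"))  # i am sure you cannot
```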
# + id="F81oSOANyAiC" colab_type="code" colab={}
clean_target = []
for targ in target:
clean_target.append(clean_text(targ))
# + id="hQn8TUs1ypl5" colab_type="code" colab={}
clean_context =[]
for cont in context:
clean_context.append(clean_text(cont))
# + id="5Y7C6KBNzFBH" colab_type="code" colab={}
#clean_context
# + id="oF0lMIx7zHOh" colab_type="code" colab={}
# Beginning of a sentence <BOS> and end of a sentence <EOS>
bos = '<BOS> '
eos = ' <EOS>'
final_target = [bos + targ + eos for targ in clean_target]
final_context = [bos + cont + eos for cont in clean_context]
# + id="OUUFkDH04AU9" colab_type="code" outputId="bb2dd975-af64-4806-e752-96de7084a9c7" colab={"base_uri": "https://localhost:8080/", "height": 289}
import codecs
# !wget https://github.com/samurainote/Automatic-Encoder-Decoder_Seq2Seq_Chatbot/raw/master/encoder_inputs.txt
with codecs.open("encoder_inputs.txt", 'rb', encoding = "utf-8", errors = "ignore") as f:
lines = f.read().split('\n')
encoder_text = []
for line in lines:
data = line.split('\n')[0]
encoder_text.append(data)
# + id="sfH4oy1x715L" colab_type="code" outputId="c819c3a6-5f10-4ab4-bafe-cc7d66f33dbd" colab={"base_uri": "https://localhost:8080/", "height": 34}
encoder_text = encoder_text[0:14499]
len(encoder_text)
# + id="uXQ1J0klFUKw" colab_type="code" colab={}
encoder_text
# + id="J7ONj0kz9Lsm" colab_type="code" outputId="724529c7-87a5-4f3c-8258-bd80e311f8cf" colab={"base_uri": "https://localhost:8080/", "height": 289}
# !wget https://github.com/samurainote/Automatic-Encoder-Decoder_Seq2Seq_Chatbot/raw/master/decoder_inputs.txt
with codecs.open("decoder_inputs.txt", 'rb', encoding = "utf-8", errors = "ignore") as f:
lines = f.read().split('\n')
decoder_text = []
for line in lines:
data = line.split('\n')[0]
decoder_text.append(data)
# + id="jmukqbaj-KjD" colab_type="code" outputId="51cf86ff-2acc-4e47-943d-5c8ad7b8bc20" colab={"base_uri": "https://localhost:8080/", "height": 34}
decoder_text = decoder_text[0:14499]
len(decoder_text)
# + id="jhQaN-8wHn_p" colab_type="code" colab={}
decoder_text
# + id="P5tpJb2d-i8I" colab_type="code" colab={}
full_text = encoder_text + decoder_text
# + id="y4FS6r8M_WPH" colab_type="code" colab={}
# dictionary = []
# print("Making dictionary of words.\n")
# for text in full_text:
# words = text.split()
# for i in range(0,len(words)):
# if words[i] not in dictionary:
# dictionary.append(words[i])
# + id="6NVEww9p_2iK" colab_type="code" outputId="75192bd5-b3ab-42bd-efe1-4a027016cd64" colab={"base_uri": "https://localhost:8080/", "height": 34}
from keras.preprocessing.text import Tokenizer
# + id="ZXA1tDiyCUqq" colab_type="code" colab={}
vocab_size = 14500
tokenizer = Tokenizer(num_words = vocab_size)
# + id="EMANi9XkComI" colab_type="code" outputId="0903be49-62c5-485d-e674-eb0faac717aa" colab={"base_uri": "https://localhost:8080/", "height": 34}
tokenizer.fit_on_texts(full_text)
word_index = tokenizer.word_index
len(word_index)
# + id="MEyxHTyQDBul" colab_type="code" colab={}
index2word = {}
for k, v in word_index.items():
    if v < 14500:
        index2word[v] = k
# + id="uSbHsQVgGvns" colab_type="code" colab={}
index2word
# + id="FIRuk9r3GwvT" colab_type="code" colab={}
word2index = {}
for k,v in index2word.items():
word2index[v] = k
# + id="BWUL46-OH-Bv" colab_type="code" outputId="d7a46730-808f-47e1-c585-3ae427f21a48" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(word2index)
# + id="PiGsMdMdH_pS" colab_type="code" colab={}
encoder_sequences = tokenizer.texts_to_sequences(encoder_text)
# + id="U7dr1-FaIP0W" colab_type="code" colab={}
decoder_sequences = tokenizer.texts_to_sequences(decoder_text)
# + id="chu64s2eIXEq" colab_type="code" outputId="7b822a4d-c861-498f-a8f0-ae0119697d8e" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(encoder_sequences)
# + id="M4kE8Bg3IeNM" colab_type="code" colab={}
for seqs in encoder_sequences:
for seq in seqs:
if seq>14499:
print(seq)
break
# + id="doYf5uABJLOE" colab_type="code" outputId="5454e496-37c9-45d3-fcb3-2b763a3f18bc" colab={"base_uri": "https://localhost:8080/", "height": 34}
vocab_size = len(word2index)+1
vocab_size
# + id="IiYdFe_W9LPg" colab_type="code" colab={}
decoder_output_data = None
# + id="hgR5lSfxJhFK" colab_type="code" colab={}
import numpy as np
max_len = 20
num_samples = len(encoder_sequences)
num_samples
decoder_output_data = np.zeros((num_samples, max_len, vocab_size), dtype="float16")
# + id="dtyVx2mTJcoa" colab_type="code" colab={}
decoder_output_data
# + id="nVuaYBikSjyJ" colab_type="code" colab={}
# NOTE: this cell uses decoder_input_data, which is defined by the pad_sequences cell below; run that cell first.
for i, seqs in enumerate(decoder_input_data):
for j, seq in enumerate(seqs):
if j > 0:
decoder_output_data[i][j][seq] = 1.
# + id="JezJHojfKHMv" colab_type="code" colab={}
from keras.preprocessing.sequence import pad_sequences
encoder_input_data = pad_sequences(encoder_sequences, maxlen = max_len, dtype = 'int32', padding='post', truncating='post')
decoder_input_data = pad_sequences(decoder_sequences, maxlen = max_len, dtype = 'int32', padding = 'post', truncating = 'post')
# + id="-ujlij4DSWEX" colab_type="code" outputId="62c855ff-07bf-4c79-d6a6-0a380dbb4ae2" colab={"base_uri": "https://localhost:8080/", "height": 51}
decoder_input_data[0]
# + id="rzxNtuj7kcT3" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 476} outputId="6ee6ec2a-8db2-46b1-82f2-e569bec286a4"
# !wget http://nlp.stanford.edu/data/glove.6B.zip
# !unzip glove.6B.zip
embeddings_index = {}
with open('glove.6B.50d.txt', encoding='utf-8') as f:
for line in f:
values = line.split()
word = values[0]
coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print("GloVe loaded!")
# + id="yL0f_3GVkcW5" colab_type="code" colab={}
embedding_dimention = 50
def embedding_matrix_creater(embedding_dimention, word_index):
embedding_matrix = np.zeros((len(word_index) + 1, embedding_dimention))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
# words not found in embedding index will be all-zeros.
embedding_matrix[i] = embedding_vector
return embedding_matrix
# + id="LAADQOAOkcZ8" colab_type="code" colab={}
embedding_matrix = embedding_matrix_creater(50, word_index=word2index)
# + id="Q48Ckh6QNSRG" colab_type="code" colab={}
from keras.layers import Embedding
from keras.layers import Input, Dense, LSTM, TimeDistributed
from keras.models import Model
# + id="yfJXyqHzkcdx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="0e4c3a0b-eac8-4f8f-fe3f-8821eaf34d0d"
embed_layer = Embedding(input_dim=vocab_size, output_dim=50, trainable=True,)
embed_layer.build((None,))
embed_layer.set_weights([embedding_matrix])
# + id="1jw1C5xukcgt" colab_type="code" colab={}
def seq2seq_model_builder(HIDDEN_DIM=300):
encoder_inputs = Input(shape=(max_len, ), dtype='int32',)
encoder_embedding = embed_layer(encoder_inputs)
encoder_LSTM = LSTM(HIDDEN_DIM, return_state=True)
encoder_outputs, state_h, state_c = encoder_LSTM(encoder_embedding)
decoder_inputs = Input(shape=(max_len, ), dtype='int32',)
decoder_embedding = embed_layer(decoder_inputs)
decoder_LSTM = LSTM(HIDDEN_DIM, return_state=True, return_sequences=True)
decoder_outputs, _, _ = decoder_LSTM(decoder_embedding, initial_state=[state_h, state_c])
# dense_layer = Dense(VOCAB_SIZE, activation='softmax')
outputs = TimeDistributed(Dense(vocab_size, activation='softmax'))(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], outputs)
return model
# + id="cx7jQ1Vrkcjd" colab_type="code" colab={}
model = seq2seq_model_builder(HIDDEN_DIM=300)
# + id="NfFT2GvsNsSB" colab_type="code" colab={}
from keras.utils import plot_model
plot_model(model)
# + id="0IjUl05NNsVW" colab_type="code" colab={}
model.compile(optimizer='adam', loss ='categorical_crossentropy', metrics = ['accuracy'])
# + id="TVuz0gN6NsY4" colab_type="code" colab={}
BATCH_SIZE = 32
EPOCHS = 5
# + id="WVWpcnjcNscA" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="28e266e9-d38c-4a82-cb01-db6a92c16199"
encoder_input_data.shape
# + id="gXMW9mRyNsis" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 258} outputId="ec0160c2-9bb3-49bb-bbf6-5d3ae3b4111a"
history = model.fit([encoder_input_data, decoder_input_data],
decoder_output_data,
epochs=EPOCHS,
batch_size=BATCH_SIZE)
# + id="xDuz3ONDOOdl" colab_type="code" colab={}
BATCH_SIZE = 64
EPOCHS = 8
# + id="NSiBVSN8OOg0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 289} outputId="5270187c-efe9-4076-bf93-2d27e91d209b"
history = model.fit([encoder_input_data, decoder_input_data],
decoder_output_data,
epochs=EPOCHS,
batch_size=BATCH_SIZE)
# + id="hCCOE1tnOOkJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 404} outputId="802d9235-4348-4f09-8465-87275c241295"
import matplotlib.pyplot as plt
# %matplotlib inline
plt.figure(figsize=(10, 6))
plt.plot(history.history['acc'])
#plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
# plt.legend(['train', 'test'], loc='upper left')
plt.show()
# + id="FAa2wOphOOqc" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 404} outputId="b5372e21-2436-4a12-f06d-1b007972192d"
# Visualize the loss function
plt.figure(figsize=(10, 6))
plt.plot(history.history['loss'])
# plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
# plt.legend(['train', 'test'], loc='upper left')
plt.show()
# + id="58x36LasOOth" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 129} outputId="14d9694b-446c-4ff2-f703-6b8d73f82710"
open('seq2seq.json', "w").write(model.to_json())
# Load the weights
model.load_weights('seq2seq.h5')
print("Saved Model!")
# + id="PS3-gu8DOOwy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 88} outputId="a88a180b-2e01-49cf-f870-37ebf467ac95"
model_json = model.to_json()
with open("model.json", "w") as json_file:
json_file.write(model_json)
model.save_weights("chatbot_model.h5")
print("Saved Model!")
# + id="0y_fQqPFOO0A" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 71} outputId="d6c5669b-cbea-4df5-e807-3ba2feb0269c"
json_string = model.to_json()
open('seq2seq.json', 'w').write(json_string)
model.save_weights('seq2seq_weights.h5')
# + id="g2CKnE6uSyN0" colab_type="code" colab={}
# !wget https://www.dropbox.com/sh/o0rze9dulwmon8b/AAAEKe0FpShNMsLAJsuTY8Pwa/my_model_weights_bot.h5
# + id="nDPzXXtmVwzd" colab_type="code" colab={}
# !wget https://www.dropbox.com/sh/o0rze9dulwmon8b/AADiuIuRIHbB2-i9l_BewYoFa/my_model_weights_discriminator.h5
# + id="kjchlUNkV-33" colab_type="code" colab={}
# !wget https://www.dropbox.com/sh/o0rze9dulwmon8b/AAC3o1KWTAi0uETciZcy4-ava/my_model_weights.h5
# + id="wzA9VAvtWeQo" colab_type="code" colab={}
# !wget https://www.dropbox.com/sh/o0rze9dulwmon8b/AAAaJ1dHJ4Xke_QHl3gUeuyBa/my_model_weights20.h5
| The_chat_bot_using_DL_seq2seq.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Approximate π with sampling
import numpy as np
from matplotlib import pyplot as plt
# The idea here is to empirically estimate the area of a circle by exploiting the fact that we can easily determine if a point is *inside* a circle of radius $r$ by checking whether $$ x^2 + y^2 \le r^2 $$
#
# We know that pi is equal to the area of a unit circle:
# $$ A = πr^2 $$ so $$π = A/r^2 $$
# so if $$ r = 1 $$
# then $$ π = A $$
#
# To estimate the area of the circle we sample from a bounding square (the area of which we can trivially calculate) and determine what proportion of our samples land inside the circle. The estimated area of the circle is then that proportion multiplied by the known area of the bounding square.
#
# The intuition is that if the sampling is totally uniform across the square, then the proportion of samples that will land in the circle will be governed by the proportion of the area of the circle to the square.
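# The core of the estimator, minus the plotting, can be sketched with only the standard library:

```python
import random

# Monte Carlo estimate of pi: sample uniformly from the [-1, 1] square and
# count the fraction of points that fall inside the unit circle.
def estimate_pi(n, seed=0):
    rng = random.Random(seed)
    inside = 0
    for _ in range(n):
        x, y = rng.uniform(-1, 1), rng.uniform(-1, 1)
        if x * x + y * y < 1:
            inside += 1
    return 4 * inside / n  # square area (4) times the in-circle fraction

print(estimate_pi(100_000))  # close to 3.14159...
```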
# +
square_min = -1
square_max = 1
def approximate_pi_with_sampling(n):
xy = np.random.uniform(square_min, square_max, size=(2, n))
in_circle = np.array([e[0]**2 + e[1]**2 < 1 for e in xy.T])
plt.figure(figsize=(5, 5))
plt.scatter(x=xy[0][~in_circle], y=xy[1][~in_circle], s=1, marker='.');
plt.scatter(x=xy[0][in_circle], y=xy[1][in_circle], s=1, color='green', marker='.');
in_ratio = sum(in_circle)/n
square_area = (square_max - square_min) ** 2
pi_approx = square_area * in_ratio
    err = np.pi - pi_approx
    plt.title(f'Pi ~ {pi_approx:0.8f} | Error: {err:0.8f} ({100*err/np.pi:0.2f}%)');
# -
# ## π is 3.141592653589793
np.random.seed(3)
for _n in [100, 1000, 10_000, 100_000]:
approximate_pi_with_sampling(_n)
# Interesting to note that the error in estimates does not necessarily decrease monotonically with increasing samples.
#
# This is also a very inefficient way to calculate π.
for _n in [10_000_000]:
approximate_pi_with_sampling(_n)
| notebooks/Approximating Pi.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### The MLDA sampler
# This notebook is a good starting point to understand the basic usage of the Multi-Level Delayed Acceptance MCMC algorithm (MLDA) proposed in [1], as implemented within PyMC3.
#
# It uses a simple linear regression model (and a toy coarse model counterpart) to show the basic workflow when using MLDA. The model is similar to the one used in https://docs.pymc.io/notebooks/GLM-linear.html.
#
# The MLDA sampler is designed to deal with computationally intensive problems where we have access not only to the desired (fine) posterior distribution but also to a set of approximate (coarse) posteriors of decreasing accuracy and decreasing computational cost (we need at least one of those). Its main idea is that coarser chains' samples are used as proposals for the finer chains. This has been shown to improve the effective sample size of the finest chain and this allows us to reduce the number of expensive fine-chain likelihood evaluations.
#
# The PyMC3 implementation supports any number of levels, tuning parameterization for the bottom-level sampler, separate subsampling rates for each level, choice between blocked and compound sampling for the bottom-level sampler. More features like support for two types of bottom-level samplers (Metropolis, DEMetropolisZ), adaptive error correction and variance reduction are currently under development.
#
# For more details about the MLDA sampler and the way it should be used and parameterised, the user can refer to the docstrings in the code and to the other example notebooks which deal with more complex problem settings and more advanced MLDA features.
#
# Please note that the MLDA sampler is new in PyMC3. The user should be extra critical about the results and report any problems as issues in PyMC3's GitHub repository.
#
# [1] Dodwell, Tim & Ketelsen, Chris & Scheichl, Robert & Teckentrup, Aretha. (2019). Multilevel Markov Chain Monte Carlo. SIAM Review. 61. 509-545. https://doi.org/10.1137/19M126966X
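# To build intuition, the two-level control flow can be written in a few lines of pure Python: a coarse chain runs for a subsampling window, and its last state is proposed to the fine chain with a delayed-acceptance correction. This is a toy sketch of the idea in [1], NOT PyMC3's MLDA implementation; the Gaussian targets below are assumptions chosen purely for illustration.

```python
import math
import random

# Toy two-level delayed-acceptance sampler: the fine target is N(0, 1) and the
# coarse approximation is a wider N(0, 1.5^2). Not PyMC3's implementation.
random.seed(0)

def log_fine(x):
    return -0.5 * x * x           # log-density of N(0, 1), up to a constant

def log_coarse(x):
    return -0.5 * x * x / 1.5**2  # log-density of N(0, 1.5^2), up to a constant

def mlda_like(n_steps, subsampling_rate=10, step_size=1.0):
    x, samples = 0.0, []
    for _ in range(n_steps):
        # Run `subsampling_rate` Metropolis steps on the coarse target...
        y = x
        for _ in range(subsampling_rate):
            prop = y + random.gauss(0, step_size)
            if math.log(random.random()) < log_coarse(prop) - log_coarse(y):
                y = prop
        # ...then propose the last coarse state to the fine chain, with an
        # acceptance ratio correcting for the coarse/fine mismatch.
        log_ratio = (log_fine(y) - log_fine(x)) - (log_coarse(y) - log_coarse(x))
        if math.log(random.random()) < log_ratio:
            x = y
        samples.append(x)
    return samples

draws = mlda_like(2000)
print(sum(draws) / len(draws))  # sample mean should be near 0
```

# Because the coarse subchain is reversible with respect to its own target, the proposal-density terms reduce to the coarse-density ratio, which is what the `log_ratio` line encodes.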
# ### Work flow
#
# MLDA is used in a similar way to most step methods in PyMC3. It has the special requirement that the user needs to provide at least one coarse model for it to work.
#
# The basic flow to use MLDA consists of four steps, which we demonstrate here using a simple linear regression model with a toy coarse model counterpart.
#
# ##### Step 1: Generate some data
#
# Here, we generate a vector `x` of 200 points equally spaced between 0.0 and 1.0. Then we project those onto a straight line with intercept 1.0 and slope 2.0, adding some random noise, resulting in a vector `y`. The goal is to infer the intercept and slope from `x` and `y`, i.e. a very simple linear regression problem.
# +
# Import libraries
import time as time
import arviz as az
import numpy as np
import pymc3 as pm
# -
az.style.use("arviz-darkgrid")
# +
# Generate data
RANDOM_SEED = 915623497
np.random.seed(RANDOM_SEED)
true_intercept = 1
true_slope = 2
sigma = 1
size = 200
x = np.linspace(0, 1, size)
y = true_intercept + true_slope * x + np.random.normal(0, sigma ** 2, size)
# -
# ##### Step 2: Define the fine model
#
# In this step we use the PyMC3 model definition language to define the priors and the likelihood. We choose non-informative Normal priors for both intercept and slope and a Normal likelihood, where we feed in `x` and `y`.
# Constructing the fine model
with pm.Model() as fine_model:
# Define priors
intercept = pm.Normal("intercept", 0, sigma=20)
slope = pm.Normal("slope", 0, sigma=20)
# Define likelihood
likelihood = pm.Normal("y", mu=intercept + slope * x, sigma=sigma, observed=y)
# ##### Step 3: Define a coarse model
#
# Here, we define a toy coarse model where coarseness is introduced by using fewer data in the likelihood compared to the fine model, i.e. we only use every 2nd data point from the original data set.
# Thinning the data set
x_coarse = x[::2]
y_coarse = y[::2]
# Constructing the coarse model
with pm.Model() as coarse_model:
# Define priors
intercept = pm.Normal("intercept", 0, sigma=20)
slope = pm.Normal("slope", 0, sigma=20)
# Define likelihood
likelihood = pm.Normal("y", mu=intercept + slope * x_coarse, sigma=sigma, observed=y_coarse)
# ##### Step 4: Draw MCMC samples from the posterior using MLDA
#
# We feed `coarse_model` to the MLDA instance and we also set `subsampling_rate` to 10. The subsampling rate is the number of samples drawn in the coarse chain to construct a proposal for the fine chain. In this case, MLDA draws 10 samples in the coarse chain and uses the last one as a proposal for the fine chain. This is accepted or rejected by the fine chain and then control goes back to the coarse chain which generates another 10 samples, etc. Note that `pm.MLDA` has many other tuning arguments which can be found in the documentation.
#
# Next, we use the universal `pm.sample` method, passing the MLDA instance to it. This runs MLDA and returns a `trace`, containing all MCMC samples and various by-products. Here, we also run a standard Metropolis sampler for comparison which returns a separate trace. We time the runs to compare later.
#
# Finally, PyMC3 provides various functions to visualise the trace and print summary statistics (two of them are shown below).
with fine_model:
# Initialise step methods
step = pm.MLDA(coarse_models=[coarse_model], subsampling_rates=[10])
step_2 = pm.Metropolis()
# Sample using MLDA
t_start = time.time()
trace = pm.sample(draws=6000, chains=4, tune=2000, step=step, random_seed=RANDOM_SEED)
runtime = time.time() - t_start
# Sample using Metropolis
t_start = time.time()
trace_2 = pm.sample(draws=6000, chains=4, tune=2000, step=step_2, random_seed=RANDOM_SEED)
runtime_2 = time.time() - t_start
# Trace plots
pm.plots.traceplot(trace)
pm.plots.traceplot(trace_2)
# Summary statistics for MLDA
pm.stats.summary(trace)
# Summary statistics for Metropolis
pm.stats.summary(trace_2)
# Make sure samplers have converged
assert all(az.rhat(trace) < 1.03)
assert all(az.rhat(trace_2) < 1.03)
# Display runtimes
print(f"Runtimes: MLDA: {runtime}, Metropolis: {runtime_2}")
# ##### Comments
#
# **Performance:**
#
# You can see from the summary statistics above that MLDA's ESS is ~8x higher than Metropolis', while its runtime is ~6x larger. Therefore in this toy example MLDA is almost overkill. For more complex problems, where the difference in computational cost between the coarse and fine models/likelihoods is orders of magnitude, MLDA is expected to outperform Metropolis, as long as the coarse model is reasonably close to the fine one. This case is often encountered in inverse problems in engineering, ecology, imaging, etc., where a forward model can be defined with varying coarseness in space and/or time (e.g. subsurface water flow, predator-prey models).
#
# **Subsampling rate:**
#
# The MLDA sampler is based on the assumption that the coarse proposal samples (i.e. the samples proposed from the coarse chain to the fine one) are independent from each other. In order to generate independent samples, it is necessary to run the coarse chain for at least an adequate number of iterations to get rid of autocorrelation. Therefore, the higher the autocorrelation in the coarse chain, the more iterations are needed and the larger the subsampling rate should be.
#
# Values larger than the minimum needed to beat autocorrelation can further improve the proposal (as the distribution is explored better and the proposals are improved), and thus the ESS. But at the same time more steps cost more computationally. Users are encouraged to do test runs with different subsampling rates to understand which gives the best ESS/sec.
#
# Note that in cases where you have more than one coarse model/level, MLDA allows you to choose a different subsampling rate for each coarse level (as a list of integers when you instantiate the stepper).
# Show packages' and Python's versions
# %load_ext watermark
# %watermark -n -u -v -iv -w
| docs/source/notebooks/MLDA_simple_linear_regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.10.1 64-bit
# language: python
# name: python3
# ---
# +
import sys
sys.path.append('../')
from Confluence.Confluence_Util import Confluence_Util
from Comala.Comala_Util import Comala_Util
from getpass import getpass
import time
from atlassian import Confluence
import requests
contentApiUrl = '/rest/api/content'
# Change these based on your instance
confluenceBaseUrl = "https://confluence.corp.bioagilytix.com/confluence"
username = 'yangyang.liu'
password = getpass("password")
confluenceUtil = Confluence_Util(confluenceBaseUrl, username, password)
comalaUtil = Comala_Util(confluenceBaseUrl, username, password)
confluence = Confluence(
url='https://confluence.corp.bioagilytix.com/confluence',
username=username,
password=password)
# -
def fixFootNoteField(pageIdList):
for pageId in pageIdList:
if comalaUtil.isFinal(pageId) < 1:
requestUrl = "{}/rest/scriptrunner/latest/custom/addFootNoteField?rootPageId={}".format(confluenceBaseUrl, pageId)
print(requestUrl)
requestResponse = requests.get(requestUrl, auth=(username, password))
time.sleep(5)
print(requestResponse.json())
print()
allSpace = confluence.get_all_spaces(start=500, limit=2000)
pageIdList = confluenceUtil.get_all_page_id_from_space('YSB')
for pageId in pageIdList:
print(comalaUtil.isFinal(pageId))
| Request/Footnote_Fix.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %pylab inline
# Here are some basic — and some more surprising — features of the iPython Notebook
# that has been used to build this collection of astronomy examples.
>>> n = 0
>>> for i in range(5):
... n += i
...
>>> print(n)
# +
# Exception tracebacks are attractive and detailed
1/0
# -
# !pwd
# !cal 1 2013
# files = !ls /usr/bin
# %load http://matplotlib.org/mpl_examples/api/radar_chart.py
# %timeit '-'.join(('abc', 'def', 'ghi'))
# %timeit '-'.join(['abc', 'def', 'ghi'])
from IPython.display import Image, HTML, Latex, YouTubeVideo
# +
# Inline media
f = '../Talks/tiangong-1-headline.png'
Image(filename=f)
# -
HTML('<iframe src="http://numpy.org/" height=240 width=480>'
'</iframe>')
YouTubeVideo('F4rFuIb1Ie4') # <NAME> at PyConCA
# +
from sympy.interactive import init_printing
init_printing()
from sympy import *
x, y = symbols('x y')
eq = ((x + y)**2 * (x + 1))
# -
eq
expand(eq)
Latex(r'The Taylor series for $e^x$ is:'
r'$$\sum_{x=0}^\infty {x^n / n!}$$')
# ## XKCD Style
#
# Recently, @jakevdp decided that his example plots looked too serious,
# and wanted them to look more like hand-drawn plots in xkcd.
#
# http://jakevdp.github.com/blog/2012/10/07/xkcd-style-plots-in-matplotlib/
| An-Introduction/Notebook-Features.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Introduction
# In this example we explore the use case in which the "high-level" API is insufficient: we already have masks generated by some other algorithm.
#
# In the [OcclusionSaliency](OcclusionSaliency.ipynb) example, we showed how to use the "high-level" image classifier black-box API for visual saliency map generation.
# This is coined "high-level" because the API does not reveal how the input image and black-box classifier are utilized in generating the saliency heatmap.
#
# In this example use case, however, we already know that we have an external process to generate masks.
# In this case, we can utilize a more specific, less-encompassing interface API to generate visual saliency heatmaps: ``GenerateClassifierConfidenceSaliency``.
# This instead takes as input computed confidences for the input image, the image masks, and the computed confidences for images over which masks have been applied.
# This also takes black-box classifier standardization out of scope since this API does not require invoking the black box: it assumes that the black box has already been invoked, leaving that up to the caller.
# Previously formulated black-box classifiers using the `ClassifyImage` interface could of course still be used in this case.
#
# <a id='fig1'></a>
# 
#
# The `xaitk-saliency` package additionally provides helper functions for application of masks to an image which can be found in the `xaitk_saliency.utils.masking` module (both batch and streaming versions are available).
#
# Below, we will show mask generation via the superpixel segmentation algorithm ``quickshift`` from the ``scikit-image`` package.
#
# Like in the previous example, this will necessarily require us to define some black-box classification model for us to introspect the saliency of.
# We will fill this role here with a PyTorch Imagenet-pretrained ResNet18 network.
# This will be wrapped up in an implementation of the `ClassifyImage` interface for input to our "application."
# This sub-classing standardizes classifier operation with our API to support the varying ways classification is performed across toolkits and applications.
#
# ### Table of Contents
# * [Set Up Environment](#Set-Up-Environment-superpixel)
# * [The Test Image](#The-Test-Image-superpixel)
# * [Black-box Classifier](#Black-box-Classifier-superpixel)
# * [Superpixel Mask Generation](#Superpixel-Mask-Generation-superpixel)
# * [Generating Saliency Maps](#Generating-Saliency-Maps-superpixel)
#
# ### Miscellaneous
# License for test image used may be found in 'COCO-LICENSE.txt'.
#
# #### References
# 1. Zeiler, <NAME>., and <NAME>. "Visualizing and understanding convolutional networks." European conference on computer vision. Springer, Cham, 2014.
#
# <br>
#
# To run this notebook in Colab, use the link below:
#
# [](https://colab.research.google.com/github/XAITK/xaitk-saliency/blob/master/examples/SuperPixelSaliency.ipynb)
# # Set Up Environment <a name="Set-Up-Environment-superpixel"></a>
# !pip install -qU pip
# !pip install -q xaitk-saliency
# !pip install -q "torch==1.9.0"
# !pip install -q "torchvision==0.10.0"
# # The Test Image <a name="The-Test-Image-superpixel"></a>
# We will test this application on the following image.
# We know that this image contains the ImageNet classes of "boxer" and "tiger cat".
# +
import PIL.Image
import matplotlib.pyplot as plt
import urllib.request
# Use JPEG format for inline visualizations here.
# %config InlineBackend.figure_format = "jpeg"
urllib.request.urlretrieve('https://farm1.staticflickr.com/74/202734059_fcce636dcd_z.jpg', "catdog.jpg")
test_image_filename = 'catdog.jpg'
plt.figure(figsize=(12, 8))
plt.axis('off')
_ = plt.imshow(PIL.Image.open(test_image_filename))
# -
# # Black-box Classifier <a name="Black-box-Classifier-superpixel"></a>
# In this example, we will use a basic PyTorch-based, pretrained ResNet18 model and use its softmax output as classification confidences.
# Since this model normally outputs 1000 classes, we will, for simplicity of example, constrain the output to the two classes that are relevant to our test image.
# +
# Set up our "black box" classifier using PyTorch and its ImageNet-pretrained ResNet18.
# We will constrain the output of our classifier here to the two classes that are relevant
# to our test image for the purposes of this example.
import os
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as transforms
CUDA_AVAILABLE = torch.cuda.is_available()
model = models.resnet18(pretrained=True)
model = model.eval()
if CUDA_AVAILABLE:
model = model.cuda()
# These are some simple helper functions to perform prediction with this model
model_input_size = (224, 224)
model_mean = [0.485, 0.456, 0.406]
model_loader = transforms.Compose([
transforms.ToPILImage(),
transforms.Resize(model_input_size),
transforms.ToTensor(),
transforms.Normalize(
mean=model_mean,
std=[0.229, 0.224, 0.225]
),
])
# Grabbing the class labels associated with this model.
if not os.path.isfile('imagenet_classes.txt'):
# !wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt -O imagenet_classes.txt
with open("imagenet_classes.txt", "r") as f:
    categories = [s.strip() for s in f.readlines()]
# For this test, we will use an image with both a cat and a dog in it.
# Let's only consider the saliency of two class predictions.
sal_class_labels = ['boxer', 'tiger cat']
sal_class_idxs = [categories.index(lbl) for lbl in sal_class_labels]
@torch.no_grad()
def blackbox_classifier(test_image: np.ndarray) -> np.ndarray:
image_tensor = model_loader(test_image).unsqueeze(0)
if CUDA_AVAILABLE:
image_tensor = image_tensor.cuda()
feature_vec = model(image_tensor)
# Converting feature extractor output to probabilities.
class_conf = torch.nn.functional.softmax(feature_vec, dim=1).cpu().detach().numpy().squeeze()
# Only return the confidences for the focus classes
return class_conf[sal_class_idxs]
blackbox_fill = np.uint8(np.asarray(model_mean) * 255)
# -
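# The classifier above turns the network's logits into probabilities with a softmax. As a minimal, framework-free sketch of that step (toy logits, not the model's actual output):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)  # guard against overflow
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

probs = softmax(np.array([2.0, 1.0, 0.1]))
```

# Subtracting the max before exponentiating leaves the result unchanged but avoids overflow for large logits.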
# # Superpixel Mask Generation <a name="Superpixel-Mask-Generation-superpixel"></a>
# We'll use the [Quick Shift](https://www.robots.ox.ac.uk/~vedaldi/assets/pubs/vedaldi08quick.pdf) segmentation algorithm to generate superpixels that can be used as masks for image perturbation. This technique is also commonly used in other black-box saliency algorithms such as [LIME](https://arxiv.org/abs/1602.04938) and [SHAP](https://arxiv.org/abs/1705.07874). Alternatively, we could create a new `PerturbImage` implementation, similar to the existing `SlidingWindow` and `RISEGrid` approaches.
# +
# Make use of superpixel based mask generation
from skimage.segmentation import quickshift, mark_boundaries
# Load the reference image
ref_image = PIL.Image.open(test_image_filename)
# Generate superpixel segments
segments = quickshift(ref_image, kernel_size=4, max_dist=200, ratio=0.2, random_seed=0)
# Print number of segments
num_segments = len(np.unique(segments))
print("Quickshift number of segments: {}".format(num_segments))
# -
# Visualize the superpixels on the image
plt.figure(figsize=(12, 8))
plt.axis('off')
_ = plt.imshow(mark_boundaries(ref_image, segments))
# Next, we'll convert these superpixel segments to binary perturbation masks in preparation for generating the corresponding perturbation images.
pert_masks = np.empty((num_segments, *ref_image.size[::-1]), dtype=bool)
for i in range(num_segments):
pert_masks[i] = (segments != i)
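# The loop above can equivalently be written as a single broadcast comparison. A small self-contained check, using a toy segment map standing in for the quickshift output:

```python
import numpy as np

# Toy segment map with ids 0..2, standing in for the real quickshift result.
segments = np.array([[0, 0, 1],
                     [2, 2, 1]])
num_segments = len(np.unique(segments))

# Loop version, as in the cell above.
pert_masks_loop = np.empty((num_segments, *segments.shape), dtype=bool)
for i in range(num_segments):
    pert_masks_loop[i] = (segments != i)

# Broadcast version: compare every segment id against every pixel at once.
pert_masks_vec = segments[None, :, :] != np.arange(num_segments)[:, None, None]

assert (pert_masks_loop == pert_masks_vec).all()
```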
# # Generating Saliency Maps <a name="Generating-Saliency-Maps-superpixel"></a>
# We will use the occlusion-based saliency map generation method.
# This implements the [above described](#fig1) `GenerateClassifierConfidenceSaliency` interface API.
# +
from xaitk_saliency.impls.gen_classifier_conf_sal.occlusion_scoring import OcclusionScoring
sal_map_generator = OcclusionScoring()
# -
# Next, we define a helper function for visualizing the generated results, with defined inputs for the following:
# * the input image
# * black-box classifier
# * perturbation masks
# * saliency map generation API implementation
#
# For the purposes of this tool, let's say that the input blackbox classifier must take in one image and output a 1D vector of per-class confidences (`Callable[[np.ndarray], np.ndarray]`) for simplicity.
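# Any callable with that signature can be plugged in. As an illustration of the interface only (a hypothetical stand-in, not a real model):

```python
import numpy as np

def dummy_classifier(image: np.ndarray) -> np.ndarray:
    """Stand-in 'black box': returns a 1D vector of two pseudo-confidences
    derived from mean brightness. Illustrates the interface only."""
    brightness = float(image.mean()) / 255.0
    conf = np.array([brightness, 1.0 - brightness])
    return conf / conf.sum()

scores = dummy_classifier(np.full((4, 4, 3), 128, dtype=np.uint8))
```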
# +
import matplotlib.pyplot as plt
import numpy as np
import PIL.Image
from typing import Callable, Iterable, Optional, Sequence, Union
from xaitk_saliency import GenerateClassifierConfidenceSaliency
from xaitk_saliency.utils.masking import occlude_image_batch
def app(
image_filepath: str,
blackbox_classify: Callable[[np.ndarray], np.ndarray],
pert_masks: Iterable[np.ndarray],
sal_map_generator: GenerateClassifierConfidenceSaliency,
fill: Optional[Union[int, Sequence[int]]] = None,
vis_mask_examples: bool = True,
):
# Load the image
ref_image = np.asarray(PIL.Image.open(image_filepath))
# Remember that we defined our own perturbation masks, and will
# now use a helper function to generate perturbation images
pert_imgs = occlude_image_batch(ref_image, pert_masks, fill)
    print(f"Perturbed images: {pert_imgs.shape[0]}")
# Visualize some example perturbed images before heading into blackbox classification
if vis_mask_examples:
n = 4
print(f"Visualizing {n} random perturbed images...")
rng = np.random.default_rng(seed=0)
rng_idx_lst = sorted(rng.integers(0, len(pert_imgs)-1, n))
plt.figure(figsize=(n*4, 4))
for i, rnd_i in enumerate(rng_idx_lst):
plt.subplot(1, n, i+1)
plt.title(f"pert_imgs[{rnd_i}]")
plt.axis('off')
plt.imshow(pert_imgs[rnd_i])
# For the saliency heatmap generation API we need reference image predictions as well as
# the predictions for each of the perturbed images.
ref_preds = blackbox_classify(ref_image)
print(f"Ref preds: {ref_preds.shape}")
pert_preds = np.asarray([
blackbox_classify(pi)
for pi in pert_imgs
])
print(f"Pert preds: {pert_preds.shape}")
sal_maps = sal_map_generator(ref_preds, pert_preds, pert_masks)
print(f"Saliency maps: {sal_maps.shape}")
# Visualize the saliency heat-maps
sub_plot_ind = len(sal_maps) + 1
plt.figure(figsize=(12, 6))
plt.subplot(2, sub_plot_ind, 1)
plt.imshow(ref_image)
plt.axis('off')
plt.title('Test Image')
# Some magic numbers here to get colorbar to be roughly the same height
# as the plotted image.
colorbar_kwargs = {
"fraction": 0.046*(ref_image.shape[0]/ref_image.shape[1]),
"pad": 0.04,
}
for i, class_sal_map in enumerate(sal_maps):
print(f"Class {i} saliency map range: [{class_sal_map.min()}, {class_sal_map.max()}]")
# Positive half saliency
plt.subplot(2, sub_plot_ind, 2+i)
plt.imshow(ref_image, alpha=0.7)
plt.imshow(
np.clip(class_sal_map, 0, 1),
cmap='jet', alpha=0.3
)
plt.clim(0, 1)
plt.colorbar(**colorbar_kwargs)
plt.title(f"Class #{i+1} Pos Saliency")
plt.axis('off')
# Negative half saliency
plt.subplot(2, sub_plot_ind, sub_plot_ind+2+i)
plt.imshow(ref_image, alpha=0.7)
plt.imshow(
np.clip(class_sal_map, -1, 0),
cmap='jet_r', alpha=0.3
)
plt.clim(-1, 0)
plt.colorbar(**colorbar_kwargs)
plt.title(f"Class #{i+1} Neg Saliency")
plt.axis('off')
# -
# Finally, we'll call our helper function and visualize the superpixel-based saliency maps.
app(
test_image_filename,
blackbox_classifier,
pert_masks,
sal_map_generator,
fill=blackbox_fill,
)
| examples/SuperPixelSaliency.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ElephantSQL instance details (salt-01):
# Server: salt.db.elephantsql.com
# Region: amazon-web-services::us-east-1
# Created at: 2020-10-17 01:51
# User & default database: desvravz
# Password: <PASSWORD>
# URL: postgres://desvravz:154BR2025mmrWBwKO3VorN6u4jkP4WpO@salt.db.elephantsql.com:5432/desvravz
# Current database size: 30 MB
# Max database size: 20 MB
# +
import os
import urllib.parse as up
import psycopg2
import pandas as pd
up.uses_netloc.append("postgres")
url = up.urlparse("postgres://desvravz:1S4BR2o2SmmrWBwK03VorN6u4jkP4WpO@salt.db.elephantsql.com:5432/desvravz")
conn = psycopg2.connect(database=url.path[1:],
user=url.username,
password=url.password,
host=url.hostname,
port=url.port
)
cursor = conn.cursor()
test1 = pd.read_sql_query("SELECT * FROM suggestion_bank", conn)
test2 = pd.read_sql_query("SELECT artists FROM suggestion_bank", conn)
cursor.close()
conn.close()
# -
# # Retrain prototype
features = ['acousticness',
'danceability',
'duration_ms',
'energy',
'instrumentalness',
'liveness',
'loudness',
'speechiness',
'valence',
'tempo']
test1
retrain = test1[features]
retrain
samples = retrain.values.tolist()
samples
from sklearn.neighbors import NearestNeighbors
neigh = NearestNeighbors(n_neighbors=10)
neigh.fit(samples)
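# Under the hood, a brute-force NearestNeighbors query just ranks Euclidean distances. A minimal numpy sketch of that idea (toy points, not the audio features; `knn_indices` is an illustrative helper):

```python
import numpy as np

def knn_indices(samples: np.ndarray, query: np.ndarray, k: int) -> np.ndarray:
    """Indices of the k samples closest to `query` in Euclidean distance."""
    dists = np.linalg.norm(samples - query, axis=1)
    return np.argsort(dists)[:k]

pts = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
nearest = knn_indices(pts, np.array([0.9, 0.1]), k=2)
```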
# ## Visualize
# +
def heigher_order_features(new_df, input_y):
    """Helper for compare_this: collects the value of each column
    at row input_y into a list."""
state = []
for i, x in enumerate(new_df.columns.tolist()):
a = new_df[str(x)][input_y]
state.append(a)
return state
import plotly.graph_objects as go
import plotly.offline as pyo
def compare_this(new_df,a,b):
categories = new_df.columns.tolist()
fig = go.Figure()
    # NOTE: placeholder trace with dummy data; rows a and b are not yet plotted here
    fig.add_trace(go.Scatterpolar(
theta=['testa', 'testb', 'testc'],
r=[.5,.5, .5],
fill='toself',
name='Product A'
))
fig.update_layout(
polar=dict(
radialaxis=dict(
visible=True,
range=[0, 1]
)),
showlegend=False
)
pyo.iplot(fig, filename = 'basic-line')
compare_this(retrain,100,200)
# -
retrain.columns.tolist()
# ## Postgres endpoint
# +
def get_stuff(input_value):
url = up.urlparse("postgres://desvravz:1S4BR2o2SmmrWBwK03VorN6u4jkP4WpO@salt.db.elephantsql.com:5432/desvravz")
conn = psycopg2.connect(database=url.path[1:],
user=url.username,
password=url.password,
host=url.hostname,
port=url.port
)
cursor = conn.cursor()
    # Bind the search pattern as a query parameter rather than
    # interpolating it into the SQL text (avoids SQL injection).
    query = "SELECT * FROM suggestion_bank WHERE artists LIKE %s"
    test1 = pd.read_sql_query(query, conn, params=(f"%{input_value}%",))
cursor.close()
conn.close()
return test1
get_stuff("Beatles")
# -
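# When user input has to reach a SQL query, the driver's parameter binding is the safe route. A self-contained sketch using stdlib sqlite3 (psycopg2 takes `%s` placeholders instead of `?`; table contents here are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE suggestion_bank (artists TEXT)")
conn.executemany("INSERT INTO suggestion_bank VALUES (?)",
                 [("The Beatles",), ("Radiohead",)])

def get_artists(conn, input_value):
    # The pattern is bound as a parameter, never spliced into the SQL text.
    pattern = f"%{input_value}%"
    rows = conn.execute(
        "SELECT artists FROM suggestion_bank WHERE artists LIKE ?", (pattern,)
    ).fetchall()
    return [r[0] for r in rows]

matches = get_artists(conn, "Beatles")
conn.close()
```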
test2
len(df["id"])
# +
df = pd.read_csv('spotify_kaggle/data.csv')
state = []
for i in range(len(df["id"])):
state.append((df["id"][i],df["name"][i]))
# -
state
# +
up.uses_netloc.append("postgres")
url = up.urlparse("postgres://desvravz:1S4BR2o2SmmrWBwK03VorN6u4jkP4WpO@salt.db.elephantsql.com:5432/desvravz")
conn = psycopg2.connect(database=url.path[1:],
user=url.username,
password=url.password,
host=url.hostname,
port=url.port
)
cursor = conn.cursor()
# -
| Notebooks/pyspark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="SmqeZtFdJdHg" outputId="d15f30cd-67fb-41c2-cdba-2c55103d4803" colab={"base_uri": "https://localhost:8080/", "height": 269}
import sys
import random
import numpy as np
from matplotlib import pyplot as plt
import keras
from keras.applications.vgg16 import VGG16
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.datasets import cifar10
_LABELS={0:"airplane", 1:"automobile", 2:"bird", 3:"cat", 4:"deer", 5:"dog", 6:"frog", 7:"horse", 8:"ship", 9:"truck"}
def load_cifar10(num_training=40000, num_validation=10000, num_test=10000):
# Fetch the CIFAR-10 dataset from the web
cifar10 = keras.datasets.cifar10.load_data()
(X_train, y_train), (X_test, y_test) = cifar10
X_train = np.asarray(X_train, dtype=np.float32)
y_train = np.asarray(y_train, dtype=np.int32).flatten()
X_test = np.asarray(X_test, dtype=np.float32)
y_test = np.asarray(y_test, dtype=np.int32).flatten()
# Subsample the data
#
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
# mask = range(num_training)
# X_train = X_train[mask]
# y_train = y_train[mask]
# mask = range(num_test)
# X_test = X_test[mask]
# y_test = y_test[mask]
    # # Normalize the data: subtract the mean pixel and divide by std
# mean_pixel = X_train.mean(axis=(0, 1, 2), keepdims=True)
# std_pixel = X_train.std(axis=(0, 1, 2), keepdims=True)
# X_train = (X_train - mean_pixel) / std_pixel
# X_val = (X_val - mean_pixel) / std_pixel
# X_test = (X_test - mean_pixel) / std_pixel
# one-hot the labels
y_train = keras.utils.to_categorical(y_train, 10)
y_val = keras.utils.to_categorical(y_val, 10)
y_test = keras.utils.to_categorical(y_test, 10)
return X_train, y_train, X_val, y_val, X_test, y_test
def plot_images_labels_prediction(images, labels):
#plot_images_labels
fig = plt.figure('5.1 Show Train Images',figsize=(10,5))
fig.subplots_adjust(hspace=0.0,wspace=0.4)
for i in range(0, 10):
ax = fig.add_subplot(2, 5, i+1)
temp = random.randint(0,9999)
ax.imshow(np.uint8(images[temp]))
ax.set_title(_LABELS[list(labels[temp]).index(1)],fontsize=10)
ax.set_xticks([])
ax.set_yticks([]);
plt.show()
if __name__ == "__main__":
X_train, y_train, X_val, y_val, X_test, y_test = load_cifar10()
plot_images_labels_prediction(X_train, y_train)
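# The `keras.utils.to_categorical` calls above one-hot encode the integer labels; the same transform as a tiny numpy sketch:

```python
import numpy as np

def to_one_hot(labels: np.ndarray, num_classes: int) -> np.ndarray:
    """One-hot encode integer class labels (what to_categorical does)."""
    return np.eye(num_classes, dtype=np.float32)[labels]

one_hot = to_one_hot(np.array([0, 2, 1]), num_classes=3)
```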
| Hw1/Hw1_05_P76081116_鄭皓中_V1/載入dataset測試.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sympy
p = sympy.Symbol("p")
q = sympy.Symbol("q")
def eqSolve(eq1,eq2,tax):
demandP = sympy.solve(eq1-q,p)[0]
supplyP = sympy.solve(eq2-q,p)[0]
print(demandP)
print(supplyP)
eqSolve(10-p,2*p,2)
# +
import sympy
p = sympy.Symbol("p")
q = sympy.Symbol("q")
cTax = sympy.Symbol("cTax")
pTax = sympy.Symbol("pTax")
def eqSolve(eq1,eq2,tax):
demandP = sympy.solve(eq1-q,p)[0]
supplyP = sympy.solve(eq2-q,p)[0]
demandP = demandP-cTax
supplyP = supplyP+pTax
print(demandP)
print(supplyP)
eqSolve(10-p,2*p,2)
# +
import sympy
p = sympy.Symbol("p")
q = sympy.Symbol("q")
cTax = sympy.Symbol("cTax")
pTax = sympy.Symbol("pTax")
def eqSolve(eq1,eq2,tax):
demandP = sympy.solve(eq1-q,p)[0]
supplyP = sympy.solve(eq2-q,p)[0]
demandP = demandP-cTax
supplyP = supplyP+pTax
demandQ = sympy.solve(demandP-p,q)[0]
supplyQ = sympy.solve(supplyP-p,q)[0]
print(demandQ)
print(supplyQ)
eqSolve(10-p,2*p,2)
# +
import sympy
p = sympy.Symbol("p")
q = sympy.Symbol("q")
cTax = sympy.Symbol("cTax")
pTax = sympy.Symbol("pTax")
def eqSolve(eq1,eq2,tax):
demandP = sympy.solve(eq1-q,p)[0]
supplyP = sympy.solve(eq2-q,p)[0]
demandP = demandP-cTax
supplyP = supplyP+pTax
demandQ = sympy.solve(demandP-p,q)[0]
supplyQ = sympy.solve(supplyP-p,q)[0]
return sympy.solve((demandP-supplyP, demandQ-supplyQ,tax-cTax-pTax), q,p,cTax,pTax)
eqSolve(10-p,2*p,2)
# -
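# For linear demand $q = a - bp$ and supply $q = cp$ with a per-unit tax $t$, the wedge condition (demand price minus supply price equals $t$) gives the closed form $q^* = c(a - bt)/(b + c)$. A quick exact check with stdlib fractions (the helper name is illustrative):

```python
from fractions import Fraction

def equilibrium_quantity(a, b, c, t):
    """Quantity at which the demand price (a - q)/b exceeds the supply price q/c by t."""
    a, b, c, t = map(Fraction, (a, b, c, t))
    return c * (a - b * t) / (b + c)

# Demand 10 - p, supply 2p, tax 2 -- the example used throughout this section.
q_star = equilibrium_quantity(a=10, b=1, c=2, t=2)
```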
music = {"Genre":"Rap","Artists":["<NAME>","<NAME>","<NAME>"],"Albums":10}
print(music)
print(music["Genre"])
music["Albums"]+=1
print(music["Albums"])
# +
import sympy
p = sympy.Symbol("p")
q = sympy.Symbol("q")
cTax = sympy.Symbol("cTax")
pTax = sympy.Symbol("pTax")
def eqSolve(eq1,eq2,tax):
demandP = sympy.solve(eq1-q,p)[0]
supplyP = sympy.solve(eq2-q,p)[0]
demandP = demandP-cTax
supplyP = supplyP+pTax
demandQ = sympy.solve(demandP-p,q)[0]
supplyQ = sympy.solve(supplyP-p,q)[0]
return sympy.solve((demandP-supplyP, demandQ-supplyQ,tax-cTax-pTax), q,p,cTax,pTax)[q]
eqSolve(10-p,2*p,2)
# -
import sympy
import matplotlib.pyplot as plt
p = sympy.Symbol("p")
q = sympy.Symbol("q")
cTax = sympy.Symbol("cTax")
pTax = sympy.Symbol("pTax")
def EquilibriumTax(demandEquation,supplyEquation,priceStart,priceEnd,tax):
prices = []
demand = []
supply = []
for price in range(priceStart,priceEnd+1):
prices += [price]
demand += [demandEquation.subs(p,price)]
supply += [supplyEquation.subs(p,price)]
equilibriumQ = eqSolve(demandEquation,supplyEquation,tax)
equilibriumP1 = sympy.solve(demandEquation-equilibriumQ)[0]
equilibriumP2 = sympy.solve(supplyEquation-equilibriumQ)[0]
plt.plot(demand,prices)
plt.plot(supply,prices)
plt.legend(["Demand","Supply"])
plt.plot(equilibriumQ,equilibriumP1, 'ro')
plt.plot(equilibriumQ,equilibriumP2, 'ro')
plt.xlabel("Supply and Demand Quantity")
plt.ylabel("Price")
plt.show()
print("The equilibrium prices are "+str(equilibriumP1)+" and "+str(equilibriumP2)+" and equilibrium quantity is "+str(equilibriumQ)+".")
EquilibriumTax(10-p,p,0,10,4)
# +
import sympy
import matplotlib.pyplot as plt
import matplotlib.patches as patches
p = sympy.Symbol("p")
q = sympy.Symbol("q")
cTax = sympy.Symbol("cTax")
pTax = sympy.Symbol("pTax")
def EquilibriumTax(demandEquation,supplyEquation,priceStart,priceEnd,tax):
prices = []
demand = []
supply = []
for price in range(priceStart,priceEnd+1):
prices += [price]
demand += [demandEquation.subs(p,price)]
supply += [supplyEquation.subs(p,price)]
nonTaxPrice = sympy.solve(demandEquation-supplyEquation)[0]
nonTaxQ = demandEquation.subs(p,nonTaxPrice)
equilibriumQ = eqSolve(demandEquation,supplyEquation,tax)
equilibriumP1 = sympy.solve(demandEquation-equilibriumQ)[0]
equilibriumP2 = sympy.solve(supplyEquation-equilibriumQ)[0]
triangle1 = patches.Polygon([[nonTaxQ,nonTaxPrice],[equilibriumQ,nonTaxPrice],[equilibriumQ,equilibriumP1]],True,color="green")
triangle2 = patches.Polygon([[nonTaxQ,nonTaxPrice],[equilibriumQ,nonTaxPrice],[equilibriumQ,equilibriumP2]],True)
currentAxis = plt.gca()
currentAxis.add_patch(triangle1)
currentAxis.add_patch(triangle2)
plt.plot(demand,prices)
plt.plot(supply,prices)
plt.legend(["Demand","Supply"])
plt.plot(equilibriumQ,equilibriumP1, 'ro')
plt.plot(equilibriumQ,equilibriumP2, 'ro')
plt.xlabel("Supply and Demand Quantity")
plt.ylabel("Price")
plt.show()
print("The equilibrium prices are "+str(equilibriumP1)+" and "+str(equilibriumP2)+" and equilibrium quantity is "+str(equilibriumQ)+".")
EquilibriumTax(10-p,p,0,10,4)
# +
import sympy
import matplotlib.pyplot as plt
import matplotlib.patches as patches
p = sympy.Symbol("p")
q = sympy.Symbol("q")
cTax = sympy.Symbol("cTax")
pTax = sympy.Symbol("pTax")
def EquilibriumTax(demandEquation,supplyEquation,priceStart,priceEnd,tax):
prices = []
demand = []
supply = []
for price in range(priceStart,priceEnd+1):
prices += [price]
demand += [demandEquation.subs(p,price)]
supply += [supplyEquation.subs(p,price)]
nonTaxPrice = sympy.solve(demandEquation-supplyEquation)[0]
nonTaxQ = demandEquation.subs(p,nonTaxPrice)
equilibriumQ = eqSolve(demandEquation,supplyEquation,tax)
equilibriumP1 = sympy.solve(demandEquation-equilibriumQ)[0]
equilibriumP2 = sympy.solve(supplyEquation-equilibriumQ)[0]
triangle1 = patches.Polygon([[nonTaxQ,nonTaxPrice],[equilibriumQ,nonTaxPrice],[equilibriumQ,equilibriumP1]],True,color="green")
triangle2 = patches.Polygon([[nonTaxQ,nonTaxPrice],[equilibriumQ,nonTaxPrice],[equilibriumQ,equilibriumP2]],True)
currentAxis = plt.gca()
currentAxis.add_patch(triangle1)
currentAxis.add_patch(triangle2)
rect1 = patches.Rectangle((0,nonTaxPrice),equilibriumQ,equilibriumP1-nonTaxPrice,linewidth=1,facecolor="red")
rect2 = patches.Rectangle((0,nonTaxPrice),equilibriumQ,equilibriumP2-nonTaxPrice,linewidth=1,facecolor="yellow")
currentAxis.add_patch(rect1)
currentAxis.add_patch(rect2)
plt.plot(demand,prices)
plt.plot(supply,prices)
plt.legend([rect1,rect2,triangle1,triangle2], ["Consumer Tax","Producer Tax","Consumer Deadweight Loss","Producer Deadweight Loss"])
plt.plot(equilibriumQ,equilibriumP1, 'ro')
plt.plot(equilibriumQ,equilibriumP2, 'ro')
plt.xlabel("Supply and Demand Quantity")
plt.ylabel("Price")
plt.show()
print("The equilibrium prices are "+str(equilibriumP1)+" and "+str(equilibriumP2)+" and equilibrium quantity is "+str(equilibriumQ)+".")
print("Taxes raised from consumers equals "+str(equilibriumQ*(equilibriumP1-nonTaxPrice)))
print("Taxes raised from producers equals "+str(equilibriumQ*(nonTaxPrice-equilibriumP2)))
print("Total taxes raised equals "+str(equilibriumQ*tax))
EquilibriumTax(10-p,p,0,10,4)
# -
EquilibriumTax(10-p,p*2,0,10,4)
EquilibriumTax(10-p*2,p,0,5,4)
def taxRevenue(demandEquation,supplyEquation,priceStart,priceEnd,tax):
equilibriumQ = eqSolve(demandEquation,supplyEquation,tax)
return tax*equilibriumQ
taxs = []
moneyRaised = []
for x in range(0,11):
taxs+=[x]
moneyRaised+=[taxRevenue(10-p,p,0,10,x)]
plt.plot(taxs,moneyRaised)
plt.xlabel("Tax Applied")
plt.ylabel("Money Raised")
plt.title("The Laffer Curve")
plt.show()
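# With the same linear forms, revenue $R(t) = t\,q^*(t) = t\,c(a - bt)/(b + c)$ is a downward parabola in $t$ that peaks at $t^* = a/(2b)$. A hedged sketch confirming the peak for the demand $10 - p$, supply $p$ case plotted above:

```python
from fractions import Fraction

def revenue(a, b, c, t):
    """Tax revenue t * q for linear demand q = a - b*p and supply q = c*p."""
    a, b, c, t = map(Fraction, (a, b, c, t))
    return t * c * (a - b * t) / (b + c)

# Demand 10 - p (a=10, b=1) and supply p (c=1), at integer taxes 0..10.
curve = [revenue(10, 1, 1, t) for t in range(11)]
peak_tax = max(range(11), key=lambda t: curve[t])
```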
def EquilibriumTax(demandEquation,supplyEquation,priceStart,tax):
priceEnd = sympy.solve(demandEquation)[0]
prices = []
demand = []
supply = []
for price in range(priceStart,priceEnd+1):
prices += [price]
demand += [demandEquation.subs(p,price)]
supply += [supplyEquation.subs(p,price)]
nonTaxPrice = sympy.solve(demandEquation-supplyEquation)[0]
nonTaxQ = demandEquation.subs(p,nonTaxPrice)
equilibriumQ = eqSolve(demandEquation,supplyEquation,tax)
equilibriumP1 = sympy.solve(demandEquation-equilibriumQ)[0]
equilibriumP2 = sympy.solve(supplyEquation-equilibriumQ)[0]
triangle1 = patches.Polygon([[nonTaxQ,nonTaxPrice],[equilibriumQ,nonTaxPrice],[equilibriumQ,equilibriumP1]],True,color="green")
triangle2 = patches.Polygon([[nonTaxQ,nonTaxPrice],[equilibriumQ,nonTaxPrice],[equilibriumQ,equilibriumP2]],True)
currentAxis = plt.gca()
currentAxis.add_patch(triangle1)
currentAxis.add_patch(triangle2)
rect1 = patches.Rectangle((0,nonTaxPrice),equilibriumQ,equilibriumP1-nonTaxPrice,linewidth=1,facecolor="red")
rect2 = patches.Rectangle((0,nonTaxPrice),equilibriumQ,equilibriumP2-nonTaxPrice,linewidth=1,facecolor="yellow")
currentAxis.add_patch(rect1)
currentAxis.add_patch(rect2)
plt.plot(demand,prices)
plt.plot(supply,prices)
plt.legend([rect1,rect2,triangle1,triangle2], ["Consumer Tax","Producer Tax","Consumer Deadweight Loss","Producer Deadweight Loss"])
plt.plot(equilibriumQ,equilibriumP1, 'ro')
plt.plot(equilibriumQ,equilibriumP2, 'ro')
plt.xlabel("Supply and Demand Quantity")
plt.ylabel("Price")
plt.show()
print("The equilibrium prices are "+str(equilibriumP1)+" and "+str(equilibriumP2)+" and equilibrium quantity is "+str(equilibriumQ)+".")
print("Taxes raised from consumers equals "+str(equilibriumQ*(equilibriumP1-nonTaxPrice)))
print("Taxes raised from producers equals "+str(equilibriumQ*(nonTaxPrice-equilibriumP2)))
print("Total taxes raised equals "+str(equilibriumQ*tax))
| FinanceAndPython.com-EconomicFoundations-master/9 Taxes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
import linearsolve as ls
import matplotlib.pyplot as plt
plt.style.use('classic')
# %matplotlib inline
# # Homework 8
#
# **Instructions:** Complete the notebook below. Download the completed notebook in HTML format. Upload assignment using Canvas.
#
# **Due:** Mar. 4 at **2pm.**
# ## Exercise: Changing $\beta$ in Prescott's Real Business Cycle Model
#
# Recall that the equilibrium conditions for Prescott's RBC model are:
#
# \begin{align}
# \frac{1}{C_t} & = \beta E_t \left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right]\\
# \frac{\varphi}{1-L_t} & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} \\
# Y_t & = A_t K_t^{\alpha}L_t^{1-\alpha}\\
# K_{t+1} & = I_t + (1-\delta) K_t\\
# Y_t & = C_t + I_t\\
# \log A_{t+1} & = \rho \log A_t + \epsilon_{t+1}
# \end{align}
#
# where $\epsilon_{t+1} \sim \mathcal{N}(0,\sigma^2)$.
#
# The objective is to use `linearsolve` to simulate impulse responses to a TFP shock for $\beta = 0.96,0.97,0.98,0.99$. Other parameter values are given in the table below:
#
# | $$\sigma$$ | $$\rho$$ | $$\varphi$$ | $$\alpha$$ | $$\delta $$ |
# |------------|-----------|-------------|------------|-------------|
# | 0.006 | 0.75 | 1.7317 | 0.35 | 0.025 |
#
#
# ## Model Preparation
#
# As usual, we recast the model in the form required for `linearsolve`. Write the model with all variables moved to the left-hand side of the equations and dropping the expectations operator $E_t$ and the exogenous shock $\epsilon_{t+1}$:
#
# \begin{align}
# 0 & = \beta\left[\frac{\alpha A_{t+1}K_{t+1}^{\alpha-1}L_{t+1}^{1-\alpha} +1-\delta }{C_{t+1}}\right] - \frac{1}{C_t}\\
# 0 & = \frac{(1-\alpha)A_tK_t^{\alpha}L_t^{-\alpha}}{C_t} - \frac{\varphi}{1-L_t}\\
# 0 & = A_t K_t^{\alpha}L_t^{1-\alpha} - Y_t\\
# 0 & = I_t + (1-\delta) K_t - K_{t+1}\\
# 0 & = C_t + I_t - Y_t\\
# 0 & = \rho \log A_t - \log A_{t+1}
# \end{align}
#
# Remember, capital and TFP are called *state variables* because their $t+1$ values are predetermined. Output, consumption, investment, and labor are called *costate* or *control* variables. Note that the model has 6 equations in 6 endogenous variables.
#
# ## Initialization, Approximation, and Solution
#
# The next several cells initialize the model in `linearsolve` and then approximate and solve it.
# +
# Create a variable called 'parameters' that stores the model parameter values in a Pandas Series.
parameters = pd.Series(dtype=float)
parameters['rho'] = 0.75
parameters['phi'] = 1.7317
parameters['alpha'] = 0.35
parameters['delta'] = 0.025
# Print the model's parameters
print(parameters)
# +
# Create variable called 'varNames' that stores the variable names in a list with state variables ordered first.
var_names = ['a','k','y','c','i','l']
# Create variable called 'shockNames' that stores an exogenous shock name for each state variable.
shock_names = ['e_a','e_k']
# -
# Define a function that evaluates the equilibrium conditions of the model solved for zero
def equilibrium_equations(variables_forward,variables_current,parameters):
# Parameters
p = parameters
# Current variables
cur = variables_current
# Forward variables
fwd = variables_forward
# Euler equation
mpk = p.alpha*fwd.a*fwd.k**(p.alpha-1)*fwd.l**(1-p.alpha)
euler_equation = p.beta*(mpk+1-p.delta)/fwd.c - 1/cur.c
    # Labor-leisure choice
mpl = (1-p.alpha)*cur.a*cur.k**p.alpha*cur.l**(-p.alpha)
labor_leisure = mpl/cur.c - p.phi/(1-cur.l)
# Production function
production_function = cur.a*cur.k**p.alpha*cur.l**(1-p.alpha) - cur.y
# Capital evolution
capital_evolution = cur.i + (1 - p.delta)*cur.k - fwd.k
# Market clearing
market_clearing = cur.c+cur.i - cur.y
# Exogenous tfp
tfp_process = p.rho*np.log(cur.a) - np.log(fwd.a)
# Stack equilibrium conditions into a numpy array
return np.array([
euler_equation,
labor_leisure,
production_function,
capital_evolution,
market_clearing,
tfp_process
])
# Next, initialize the model using `ls.model` which takes the following required arguments:
#
# * `equations`
# * `n_states`
# * `var_names`
# * `shock_names`
# * `parameters`
# Initialize the model into a variable named 'rbc_model'
rbc_model = ls.model(equations = equilibrium_equations,
n_states=2,
var_names=var_names,
shock_names=shock_names,
parameters=parameters)
# ## Simulation and Plotting
#
# Create a $2\times 2$ grid of plots containing the impulse responses of TFP, output, labor, and consumption to a one percent shock to TFP for each of the values for $\beta$: 0.96, 0.97, 0.98, 0.99. Your figure should have a legend that clearly indicates which curves were produced from which value of $\beta$.
#
#
# Here are the steps that you should take:
#
# 1. Initialize figure and axes for plotting.
# 2. Iterate over each desired value for $\beta$.
# 1. Set `rbc_model.parameters['beta']` equal to current value of $\beta$.
# 2. Use `rbc_model.compute_ss()` to compute the steady state with `guess` equal to `[1,4,1,1,1,0.5]`.
# 3. Use `rbc_model.approximate_and_solve()` to approximate and solve the model with the current value of $\beta$.
# 4. Use `rbc_model.impulse()` to compute the **31** period impulse response to a 0.01 unit shock to TFP in period 5.
# 5. Add the computed impulse responses to the axes.
#
# Be sure to add a legend somewhere in your figure so that it's clear which impulse response lines were determined by which value of $\beta$.
# +
# Create a 12x8 figure. CELL NOT PROVIDED
fig = plt.figure(figsize=(12,8))
# Create four axis variables: 'ax1', 'ax2', 'ax3', 'ax4'
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
ax4 = fig.add_subplot(2,2,4)
# Create an axis equal to the size of the figure
ax0 = fig.add_subplot(1,1,1)
# Turn off the axis so that the underlying axes are visible
ax0.set_frame_on(False)
# Hide the x-axis
ax0.get_xaxis().set_visible(False)
# Hide the y-axis
ax0.get_yaxis().set_visible(False)
# Create variable called 'beta_values' that stores the desired values of beta
beta_values = [0.96,0.97,0.98,0.99]
# Iterate over the elements of beta_values
for beta in beta_values:
    # Update the value of beta in rbc_model.parameters
rbc_model.parameters['beta'] = beta
# Compute the steady state with initial guess equal to [1,4,1,1,1,0.5]
rbc_model.compute_ss([1,4,1,1,1,0.5])
# Approximate the model and solve
rbc_model.approximate_and_solve()
# Compute the impulse responses to a 0.01 unit shock to TFP
rbc_model.impulse(T=31,t0=5,shocks=[0.01,0])
# Add plots of TFP, output, labor, and consumption to ax1, ax2, ax3, and ax4
ax1.plot(rbc_model.irs['e_a']['a']*100,lw=3,alpha=0.75)
ax2.plot(rbc_model.irs['e_a']['y']*100,lw=3,alpha=0.75)
ax3.plot(rbc_model.irs['e_a']['l']*100,lw=3,alpha=0.75)
ax4.plot(rbc_model.irs['e_a']['c']*100,lw=3,alpha=0.75)
# Plot the point 0,0 on ax0 with the same line properties used for the other plotted lines and provide a label
ax0.plot(0,0,lw=3,alpha=0.75,label='$\\beta='+str(beta)+'$')
# Set axis titles
ax1.set_title('TFP')
ax2.set_title('Output')
ax3.set_title('Labor')
ax4.set_title('Consumption')
# Add grids to the axes
ax1.grid()
ax2.grid()
ax3.grid()
ax4.grid()
# Set ax1 y-axis limits to [0,2]
ax1.set_ylim([0,2])
# Set ax2 y-axis limits to [0,2]
ax2.set_ylim([0,2])
# Set ax3 y-axis limits to [-0.5,1.5]
ax3.set_ylim([-0.5,1.5])
# Set ax4 y-axis limits to [-0.5,1.5]
ax4.set_ylim([-0.5,1.5])
# Add legend below the figure.
legend = ax0.legend(loc='upper center',bbox_to_anchor=(0.5,-0.075), ncol=4,fontsize=15)
# -
# **Questions**
#
# 1. Describe in your own words how increasing $\beta$ from 0.96 to 0.99 affects the impulse response of consumption to a TFP shock.
# 2. Explain in your own words the intuition for *why* your observation in the previous question makes sense.
# 3. Describe in your own words how increasing $\beta$ from 0.96 to 0.99 affects the impulse response of labor to a TFP shock.
# 4. Explain in your own words the intuition for *why* your observation in the previous question makes sense.
# **Answers**
#
# 1. Increasing beta causes the effect of the shock at impact (period 5) to be smaller and for the peak level of consumption reached to be smaller. But the effect of the shock on consumption dissipates less quickly.<!-- answer -->
# 2. With a higher level of $\beta$, the household is being assumed to place a higher value on future expected utility flows and so wants to save more, leading to less consumption when the shock hits, but more consumption in the distant future. <!-- answer -->
# 3. Increasing beta causes the effect of the shock at impact (period 5) to be larger and for the subsequent drop in labor to not be as pronounced. <!-- answer -->
# 4. With a higher level of $\beta$, the household is being assumed to place a higher value on future expected utility flows and is therefore more willing to work harder when the shock hits to generate income that can be saved for the future in the form of capital. <!-- answer -->
# ## Exercise: Monetary policy Regimes
#
# On August 6, 1979, <NAME> began his first of two terms as chair of the Board of Governors of the Federal Reserve System. In the two decades prior to Volcker's appointment, inflation in the US had been accelerating rapidly, as the graph produced by the code in the next cell shows:
# +
# Import GDP deflator data from FRED. CELL PROVIDED
deflator = pd.read_csv('https://fred.stlouisfed.org/data/GDPDEF.txt',sep='\s+',skiprows=15,index_col=0,parse_dates=True)
# Set deflator equal to 'VALUE' column of deflator
deflator = deflator['VALUE']
# Compute the inflation rate
inflation = (deflator - deflator.shift(4))/deflator.shift(4)*100
# Plot
inflation.plot(title='US GDP Deflator Inflation',ylabel='Percent',grid=True)
# -
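# The shift-based inflation formula above is equivalent to pandas' built-in four-period percentage change; a small check on a synthetic series (not the FRED data):

```python
import pandas as pd

deflator = pd.Series([100.0, 101.0, 102.0, 103.0, 104.0, 106.0])

# Manual year-over-year (4-quarter) percent change, as in the cell above.
manual = (deflator - deflator.shift(4)) / deflator.shift(4) * 100

# Built-in equivalent.
builtin = deflator.pct_change(periods=4) * 100
```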
# The decline and stabilization of inflation in the US starting in the early 1980s is widely attributed to Volcker's leadership. As chair, he pushed the FOMC to aggressively pursue tight monetary policy, which led to a sharp contraction in the rate of money growth and an increase in the federal funds rate.
#
# In light of this story, many economists argue that Volcker's appointment was the start of a new monetary policy *regime*. Before Volcker, the Fed pursued a looser monetary policy that allowed inflation to accelerate; starting with Volcker, the Fed has pursued a tighter monetary policy that is more aggressive in managing inflation.
#
# Here you will test this proposition by estimating a monetary policy rule for the US on data before Volcker's arrival at the Fed and after. Basically, you will replicate the estimation at the end of class 16 for pre- and post-Volcker data.
# ### Preliminaries
#
# The next block of code imports federal funds rate, GDP deflator inflation, and output gap data from FRED and returns the data as a DataFrame called `df`.
# +
# Initialize an empty DataFrame that will store data. CELL PROVIDED
df = pd.DataFrame()
# Import federal funds rate data from FRED. Use arguments: sep='\s+',skiprows=62,index_col=0,parse_dates=True
fed_funds = pd.read_csv('https://fred.stlouisfed.org/data/FEDFUNDS.txt',sep='\s+',skiprows=62,index_col=0,parse_dates=True)
# Set fed_funds equal to 'VALUE' column of fed_funds
fed_funds = fed_funds['VALUE']
# Use the .resample('QS').mean() method of 'fed_funds' to convert the fed funds data from monthly to quarterly
fed_funds = fed_funds.resample('QS').mean()
# Import GDP deflator data from FRED
deflator = pd.read_csv('https://fred.stlouisfed.org/data/GDPDEF.txt',sep='\s+',skiprows=15,index_col=0,parse_dates=True)
# Set deflator equal to 'VALUE' column of deflator
deflator = deflator['VALUE']
# Compute the inflation rate
inflation = (deflator - deflator.shift(4))/deflator.shift(4)*100
# Import real GDP data from FRED
gdp_actual = pd.read_csv('https://fred.stlouisfed.org/data/GDPC1.txt',sep='\s+',skiprows=17,index_col=0,parse_dates=True)
# Set gdp_actual equal to 'VALUE' column of gdp_actual
gdp_actual = gdp_actual['VALUE']
# Import potential real GDP data from FRED
gdp_potential = pd.read_csv('https://fred.stlouisfed.org/data/GDPPOT.txt',sep='\s+',skiprows=12,index_col=0,parse_dates=True)
# Set gdp_potential equal to 'VALUE' column of gdp_potential
gdp_potential = gdp_potential['VALUE']
# Create variable 'df' that is a DataFrame storing fed funds, inflation, actual and potential real GDP
df = pd.DataFrame({
'fed_funds':fed_funds,
'inflation':inflation,
'gdp_actual':gdp_actual,
'gdp_potential':gdp_potential
})
# Drop missing values from 'df'
df = df.dropna()
# -
# ### The Output Gap
#
# The output gap is measured as the percent difference of real GDP from the CBO's estimate of the potential real GDP:
#
# \begin{align}
# \text{Output gap} & = \left(\frac{\text{Real GDP}-\text{Real potential GDP}}{\text{Real potential GDP}}\right)\cdot 100
# \end{align}
#
# Real GDP has FRED series ID `GDPC1` and is available here: https://fred.stlouisfed.org/series/GDPC1. Notice that there are 17 lines of text *before* the line starting with "DATE".
# Construct an output gap column
df['output_gap'] = (df['gdp_actual'] - df['gdp_potential'])/df['gdp_potential']*100
# ### OLS Regressions
#
# The rule to be estimated is the same considered in Class 16:
#
# \begin{align}
# \hat{i}_t & = \bar{\imath} + \phi_{\pi}\pi_t + \phi_{y} y_t + \epsilon_t
# \end{align}
#
# where $\pi_t$ is the percent change in the GDP deflator over the previous year and $y_t$ is the output gap measured as the percent difference of real GDP from the CBO's estimate of the potential real GDP. $\phi_{\pi}$ is the weight that the FOMC places on inflation in the rule and $\phi_{y}$ is the weight that the central bank places on the output gap. $\epsilon_t$ is the residual of the regression.
# Import statsmodels.api as sm
import statsmodels.api as sm
# For a Pandas DataFrame or Series with a DateTime index, you can select a subinterval of the dates using date strings in `.loc[]`. For example,
#
# `df.loc[:'1960']`
#
# will return all data through the end of 1960. And:
#
# `df.loc['08-2000':]`
#
# will return all data from and after August 2000.
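# For instance, with a small hypothetical quarterly series (dates and values invented for illustration):

```python
import pandas as pd

# eight quarters starting in 1979
dates = pd.date_range('1979-01-01', periods=8, freq='QS')
s = pd.Series(range(8), index=dates)

before = s.loc[:'09-1979']  # observations through September 1979
after = s.loc['10-1979':]   # observations from October 1979 on
print(len(before), len(after))
```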
# #### Pre-Volcker Era
#
# Estimate the monetary policy rule for dates through September 1979.
# +
# Create variable 'X' with columns inflation, output gap, and a constant
X = sm.add_constant(df[['inflation','output_gap']].loc[:'09-1979'])
# Create variable 'Y' equal to the federal funds rate
Y = df['fed_funds'].loc[:'09-1979']
# +
# Initialize OLS model
model = sm.OLS(Y,X)
# Fit OLS model
results = model.fit()
# Print regression results
print(results.summary2())
# -
# #### Post-Volcker Era
#
# Estimate the monetary policy rule for dates after October 1979.
# +
# Create variable 'X' with columns inflation, output gap, and a constant
X = sm.add_constant(df[['inflation','output_gap']].loc['10-1979':])
# Create variable 'Y' equal to the federal funds rate
Y = df['fed_funds'].loc['10-1979':]
# +
# Initialize OLS model
model = sm.OLS(Y,X)
# Fit OLS model
results = model.fit()
# Print regression results
print(results.summary2())
# -
# **Questions**
#
# 1. Compare the results from the two regressions. For which monetary policy era is the estimated coefficient on inflation greater?
# 2. How do your results help to explain the high and variable inflation of the 1960s and 1970s and the lower and more stable inflation of the 1980s and after?
# **Answers**
#
# 1. The estimated coefficient on inflation is higher in the regression on the post-Volcker data.<!-- answer -->
# 2. Before Volcker, the Federal Reserve did not respond aggressively enough to increases in the inflation rate and so the inflation rate became out of control. <!-- answer -->
| Homework Notebooks/Econ126_Winter2021_Homework_08.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["header"]
# <table width="100%">
# <tr>
# <td style="text-align:left" width="10%" class="border_pre_gradient">
# <a href="contacts.ipynb?" download><img src="../../images/icons/download.png"></a>
# </td>
# <td style="text-align:left" width="10%" class="border_gradient">
# <a href="../MainFiles/opensignalsfactory.ipynb"><img src="../../images/icons/program.png"></a>
# </td>
# <td></td>
# <td style="text-align:left" width="5%">
# <a href="../MainFiles/opensignalsfactory.ipynb"><img src="../../images/icons/home.png"></a>
# </td>
# <td style="text-align:left" width="5%">
# <a href="../MainFiles/contacts.ipynb"><img src="../../images/icons/contacts.png"></a>
# </td>
# <td style="text-align:left" width="5%">
# <a href="https://pypi.org/project/opensignalstools/"><img src="../../images/icons/package.png"></a>
# </td>
# <td style="border-left:solid 3px #009EE3" width="20%">
# <img src="../../images/ost_logo.png">
# </td>
# </tr>
# </table>
# -
# <link rel="stylesheet" href="../../styles/theme_style.css">
# <!--link rel="stylesheet" href="../../styles/header_style.css"-->
# <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/4.7.0/css/font-awesome.min.css">
#
# <table width="100%">
# <tr>
# <td id="image_td" width="50%" class="header_image_color_2">
# <img id="image_img" src="../../images/ost_logo.png"></td>
# <td class="header_text header_gradient"> Contacts </td>
# </tr>
# </table>
# + [markdown] tags=["test"]
# <div style="background-color:black">
# <table width="100%">
# <tr>
# <td style="border-right:solid 3px #009EE3" width="50%">
# <img src="../../images/plux_logo.png" width="50%">
# </td>
# <td style="text-align:left">
# <strong>Lisbon Office</strong>
# <br>
# Phone <i>(+351) 211 956 542</i>
# <br>
# Fax <i>(+351) 211 956 546</i>
# <br>
# Av. 5 de Outubro, 70 - 8º
# <br>
# 1050-059 Lisboa
# <br><br>
# <strong>Support or Suggestions</strong>
# <br>
# E-mail <i>_<EMAIL></i>
# </td>
# </tr>
# </table>
# </div>
# + [markdown] tags=["footer"]
# <hr>
# <table width="100%">
# <tr>
# <td style="border-right:solid 3px #009EE3" width="30%">
# <img src="../../images/ost_logo.png">
# </td>
# <td width="35%" style="text-align:left">
# <a href="https://github.com/opensignalstools/opensignalstools">☌ GitHub Repository</a>
# <br>
# <a href="../MainFiles/opensignalstools.ipynb">☌ Notebook Categories</a>
# <br>
# <a href="https://pypi.org/project/opensignalstools/">☌ How to install opensignalstools Python package ?</a>
# <br>
# <a href="../MainFiles/signal_samples.ipynb">☌ Signal Library</a>
# </td>
# <td width="35%" style="text-align:left">
# <a href="../MainFiles/by_diff.ipynb">☌ Notebooks by Difficulty</a>
# <br>
# <a href="../MainFiles/by_signal_type.ipynb">☌ Notebooks by Signal Type</a>
# <br>
# <a href="../MainFiles/by_tag.ipynb">☌ Notebooks by Tag</a>
# <br>
# <br>
# </td>
# </tr>
# </table>
# + tags=["hide_both"]
from opensignalstools.__notebook_support__ import css_style_apply
css_style_apply()
| notebookToHtml/oldFiles/opensignalstools_html/Categories/MainFiles/contacts.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Optimizing a Tensor Network using Pytorch
#
#
# In this example we show how a general machine learning
# strategy can be used to optimize tensor networks with
# respect to some target loss function.
#
# We'll take the example of maximizing the overlap of some
# matrix product state with periodic boundary conditions
# with a densely represented state, since this does not
# have a simple, deterministic alternative.
#
# ``quimb`` makes use of ``opt_einsum`` which can contract
# tensors with a variety of backends. Here we'll use
# ``pytorch``. Note that pytorch does not yet support complex
# data (but that also means we don't need to conjugate using
# the ``.H`` attribute).
# +
import torch
import quimb as qu
import quimb.tensor as qtn
# perform all contractions with pytorch
qtn.set_contract_backend('torch')
# -
# First, find a (dense) PBC groundstate, $| gs \rangle$:
L = 16
H = qu.ham_heis(L, sparse=True, cyclic=True)
gs = qu.groundstate(H)
# Then we convert it to a (constant) torch array:
# +
# this converts the dense vector to an effective 1D tensor network
target = qtn.Dense1D(gs)
# this maps the torch.tensor function over all the data arrays, here only one
target.apply_to_arrays(torch.tensor)
# -
# Next we create an initial guess random MPS, $|\psi\rangle$, also converting each
# of the arrays to torch variables (but now requiring the
# gradient so that each can be optimized):
bond_dim = 32
mps = qtn.MPS_rand_state(L, bond_dim, cyclic=True)
mps.apply_to_arrays(lambda t: torch.tensor(t, requires_grad=True))
# Last, we set up a ``pytorch`` optimizer, taking as the loss
# the normalized target overlap $\dfrac{|\langle gs | \psi \rangle|^2} { \langle \psi | \psi \rangle }$:
# +
# we give the optimizer all the tensors it should optimize
optimizer = torch.optim.Adam([t.data for t in mps], lr=0.01)
# perform 100 steps of optimization
for t in range(1, 101):
    # negate the overlap since we are minimizing
    loss = - (mps @ target)**2 / (mps @ mps)

    # reset, compute the gradient, and take an optimization step
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if t % 10 == 0:
        print(f"round: {t}, loss: {loss.item()}")
# -
# We now have a pretty good fidelity between our PBC MPS ansatz and the target groundstate.
#
# Although the loss was computed with normalization, the MPS still needs to be normalized:
mps /= (mps @ mps)**0.5
mps @ mps
# And finally we can check that the overlap matches the loss found:
(mps @ target)**2
# Other things to think about might be:
#
# - playing with the optimizer type (here ADAM) and settings (e.g. learning rate)
# - using single precision data for GPU acceleration
#
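# For the single-precision point above, a minimal sketch on standalone tensors (with the MPS, the same conversion could presumably be applied via the `apply_to_arrays` method used earlier, e.g. `mps.apply_to_arrays(lambda t: t.float())`):

```python
import torch

# hypothetical double-precision arrays, as produced from numpy data
arrays = [torch.rand(4, 4, dtype=torch.float64) for _ in range(3)]

# convert to single precision, e.g. before moving to a GPU
arrays32 = [t.float() for t in arrays]
print(arrays32[0].dtype)  # torch.float32
```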
# We can also convert the ``pytorch`` arrays back to numpy with:
# the 'detach' unlinks the tensors from the gradient calculator
mps.apply_to_arrays(lambda t: t.detach().numpy())
type(mps[4].data)
| docs/examples/ex_torch_optimize_pbc_mps.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# ## Outline
#
# * Neural Net Training Workflow
#
# * PyTorch Data Type: Tensors
#
# * Graph Computation and Neural Net Models
#
# * Example: Iris Dataset Classification
#
# * Assignment: MNIST Classification
# ### Part 1: Neural Net Training Workflow
# 1. Prepare Data
# * Define batch size
# * Split train/val/test sets
# * Migrate to Tensors
# * Additional pre-processing (normalization, one hot encoding, etc.)
#
# One hot encoding: Transforms categorical data into one hot vectors. 1 in the index representing the class, 0 in all other indices.
#
# Use *torch.nn.functional.one_hot()*
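# A minimal illustration of *torch.nn.functional.one_hot()* (the labels here are invented):

```python
import torch
import torch.nn.functional as F

labels = torch.tensor([0, 2, 1])           # three samples, classes 0..2
onehot = F.one_hot(labels, num_classes=3)  # shape (3, 3)
print(onehot)
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]])
```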
#
# 2. Select Hyperparameter
#
# * Network size and type
# * Learning Rate
# * Regularizers and strength
# * Loss function and optimizer
# * Other hyperparameters
#
# 3. Define Model
# * Network type
# * Network parameters/layers
# * Output values(s) and dimensions
# * Forward() Function
#
# 4. Identify Tracked Values
#
# * Training Loss
# * Validation Loss
# * Other relevant values
#
# 5. Train and Validate Model
#
# * Train Model:
# * Calculate loss on training set
# * Backpropagation gradients
# * Update weights
# * Validate Model:
# * Calculate error on validation or test set
# * Do not update weights
# * Save losses in placeholders
#
# 6. Visualization and Evaluation
# * Visualize Training Progress: Convergence, over- or under-fitting
# * Evaluate model: Confusion matrix, generate samples, identify model weakness
#
#
#
# ### Part 2: Tensors
#
# * Main data structure for PyTorch
# * Like numpy arrays, but optimized for machine learning
# * Can be migrated to or stored on GPUs
# * Optimized for automatic differentiation
# * Three main attributes:
# * Shape - size of each dimension
# * Datatype - form of each entry (float, int, etc.)
# * Device - cpu or cuda (gpu)
#
# **Tensor Initialization**
#
# Can create tensor from existing data: *torch.Tensor([[1,2],[3,4]])*, *torch.Tensor(np_array)*
#
# Can generate tensor with random or fixed values: *torch.ones(shape)*, *torch.rand(shape)*
#
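# A short sketch of the initializers above, also showing the three main attributes (this uses the lower-case `torch.tensor` constructor, which infers the dtype from the data):

```python
import numpy as np
import torch

# from existing data
a = torch.tensor([[1, 2], [3, 4]])
b = torch.tensor(np.array([1.0, 2.0]))

# fixed or random values for a given shape
ones = torch.ones((2, 3))
r = torch.rand((2, 3))

# the three main attributes: shape, datatype, device
print(r.shape, r.dtype, r.device)  # torch.Size([2, 3]) torch.float32 cpu
```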
# **Tensor Operations**
#
# <img src="images/2_3.png" width ="500" height="300" alt="centered image" />
#
# ### Part 3: Graph Computation and Neural Net Models
#
# **Base Class**: nn.Module
#
# **Two primary features of base class**: Parameters, Forward function
#
# **Common PyTorch Layers**:
# * Linear
# * Activation Functions (ReLU, tanh, etc.)
# * Dropout
# * RNN
# * Convolution
#
# **Neural Network Models**
#
# <img src="images/2_4.png" width ="700" height="400" alt="centered image" />
#
# + [markdown] vscode={"languageId": "plaintext"}
# ### Computational graphs
# * PyTorch generates a computational graph every time a parameter or variable with requires_grad is operated on
# * The graph is used to back-propagate errors and update the parameters
#
# ### Loss functions and optimizers
# * **Loss function**: example nn.MSELoss() for regression, nn.NLLLoss() or nn.CrossEntropy() for classification
# * **Optimizer**: example optim.SGD, optim.Adam
#
# -
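# The computational-graph behavior described above can be seen in a tiny standalone example (values chosen for illustration):

```python
import torch

# a parameter with requires_grad joins the computational graph
w = torch.tensor(2.0, requires_grad=True)
loss = (w * 3.0 - 1.0) ** 2  # a simple scalar "loss"

loss.backward()  # back-propagate through the graph
print(w.grad)    # d/dw (3w - 1)^2 = 6*(3w - 1) = 30 at w = 2
```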
# ```
# optimizer = torch.optim.SGD(model.parameters(), lr=learning_rate)
# ```
# + [markdown] vscode={"languageId": "plaintext"}
# ## Optimization during training
#
# * At each training step, the optimization occurs in 3 steps
#     * optimizer.zero_grad() - Resets the accumulated gradients
#     * loss.backward() - Back-propagates the gradients to assign the contribution from each parameter
#     * optimizer.step() - Updates the parameters based on the gradients, according to the optimization scheme (optimizer)
# **Saving and Loading Models**
# * Useful quantities to track during training: training loss, validation loss, model state dictionary (parameters), optimizer state dictionary
# * Loading state dictionaries: model.load_state_dict() loads the saved parameter weights into the model, optimizer.load_state_dict() loads optimizer state, such as learning rate, momentum, etc.
#
# * Can use torch.save() and torch.load()
# * Create checkpoints to save all in one file
# * Can save every epoch, or define some condition for saving (best loss, every n epochs, etc.)
# -
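# A minimal checkpoint sketch using torch.save() and torch.load() (the model, file name, and stored values are illustrative):

```python
import os
import tempfile
import torch

model = torch.nn.Linear(2, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

path = os.path.join(tempfile.mkdtemp(), 'checkpoint.pt')

# save everything in one checkpoint file
torch.save({
    'epoch': 5,
    'model_state_dict': model.state_dict(),
    'optimizer_state_dict': optimizer.state_dict(),
    'train_loss': 0.123,
}, path)

# later: restore model and optimizer state
checkpoint = torch.load(path)
model.load_state_dict(checkpoint['model_state_dict'])
optimizer.load_state_dict(checkpoint['optimizer_state_dict'])
print(checkpoint['epoch'])  # 5
```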
# ## Example: Iris Dataset Classification
#
#
# ```Python
# from sklearn.datasets import load_iris
# from sklearn.model_selection import train_test_split
# from sklearn.preprocessing import StandardScaler
#
# iris = load_iris()
# X = iris['data']
# y = iris['target']
# names = iris['target_names']
# feature_names = iris['feature_names']
#
# scaler = StandardScaler()
# X_scaled = scaler.fit_transform(X)
#
# X_train, X_test, y_train, y_test = train_test_split(
# X_scaled, y, test_size=0.2, random_state=2
# )
#
# ```
#
#
#
#
#
#
#
#
#
# Dataset containing characteristics of 3 different flower types. Found in sklearn.datasets.
#
# * 3-layer fully connected neural net
# * 1st and 2nd layers have 50 neurons each
# * Final layer has 3 neurons for classification
#
# * Activation functions can also be called using torch.nn.functional
#
#
# **Model Definition**
#
# ```Python
# import torch.nn as nn
# import torch.nn.functional as F
#
# class Model(nn.Module):
#     def __init__(self, input_dim):
#         super(Model, self).__init__()
#         self.layer1 = nn.Linear(input_dim, 50)
#         self.layer2 = nn.Linear(50, 50)
#         self.layer3 = nn.Linear(50, 3)
#
#     def forward(self, x):
#         x = F.relu(self.layer1(x))
#         x = F.relu(self.layer2(x))
#         x = F.softmax(self.layer3(x), dim=1)
#         return x
#
# ```
# **Data, hyperparameters, and Saved Values**
# * Define loss and optimizer
#
# * Define training epochs and data
# * Tracking loss and accuracy at each epoch
#
# ```Python
# import tqdm
# import numpy as np
# import torch
# import torch.nn as nn
#
# EPOCHS = 100
# X_train = torch.from_numpy(X_train).float()
# y_train = torch.from_numpy(y_train).long()  # CrossEntropyLoss expects integer class labels
# X_test = torch.from_numpy(X_test).float()
# y_test = torch.from_numpy(y_test).long()
#
# loss_list = np.zeros((EPOCHS,))
# accuracy_list = np.zeros((EPOCHS,))
#
# model = Model(X_train.shape[1])
# optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
# loss_fn = nn.CrossEntropyLoss()
#
# ```
# **Model Training and Validation**
#
# * Output of model gives predictions
#
# * Track training loss as loss
#
# * Check accuracy on validation set
#
# ```Python
# for epoch in tqdm.trange(EPOCHS):
#     y_pred = model(X_train)
#     loss = loss_fn(y_pred, y_train)
#     loss_list[epoch] = loss.item()
#
#     # Zero gradients, back-propagate, and update weights
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()
#
#     with torch.no_grad():
#         y_pred = model(X_test)
#         correct = (torch.argmax(y_pred, dim=1) == y_test).type(torch.FloatTensor)
#         accuracy_list[epoch] = correct.mean()
# ```
# **Plot Validation Accuracy and Training Loss**
#
# ```Python
# import matplotlib.pyplot as plt
#
# fig, (ax1, ax2) = plt.subplots(2, figsize=(12, 6), sharex=True)
#
# ax1.plot(accuracy_list)
# ax1.set_ylabel('validation accuracy')
# ax2.plot(loss_list)
# ax2.set_ylabel('training loss')
# ax2.set_xlabel('epochs')
#
# ```
#
| PyTorch Basics and Model Traning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
from binomialRF import binomialRF
import seaborn as sb
import numpy as np
mcf7 =pd.read_csv('data/mcf7.csv')
# #!pip install --upgrade pip
# !pip install --upgrade binomialRF
# +
'''
The MCF7 data set is a breast-cancer cell-line dataset containing 14 paired samples of isogenic replicates.
The purpose of this study was to determine which is preferable -
more replicates vs. more sequencing -
for the sake of obtaining accurate estimates of differential gene expression. This sample data contains only the first
5,000 gene expression measurements as well as only the samples with a sequencing depth of 30M reads
(optimally determined by the authors).
The data can be accessed fully through this link: https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE51403
'''
# First 10 rows of the data
mcf7 =mcf7.rename(columns={'Unnamed: 0': 'gene_names'})
mcf7.head(10)
# -
# ## Example Data Visualization
#
## Example Visualization
mcf7.hist(column='X2012.562.subsamp.30M')
## Take log to clean up
## Add 1 to avoid log(0) issues
log_df = mcf7.apply(lambda x: np.log10(x+1) if np.issubdtype(x.dtype, np.number) else x)
log_df.head()
## log transform hist to illustrate dist'n
log_df.hist('X2012.562.subsamp.30M')
# +
## Transpose and change column names
transposed_df = mcf7.T
transposed_df.head(5)
transposed_df.columns = transposed_df.iloc[0]
transposed_df=transposed_df[1:]
transposed_df.head()
# -
# create training matrix
y = [1,0,1,0,1,0,1,0,1,0,1,0, 1,0]
X = transposed_df
feat = binomialRF.binomialRF(X,y, 100, .3)
rf_model = feat.fit_model()
# calculate naive pvalues
main_effects = feat.get_main_effects(rf_model)
naive_pvalues = feat.calculate_naive_pvalue(main_effects)
naive_pvalues
# calculate correlated adjusted p-values
cbinom = feat.calculate_cbinom()
# correl_pvalues = feat.calculate_correlated_pvalues(main_effects, cbinom)
# print(correl_pvalues)
| examples/genomics_example.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.6.1
# language: julia
# name: julia-0.6
# ---
# # Data structures
#
# Once we start working with lots of data at once, it will be convenient to store our data in structures such as arrays or dictionaries (beyond just variables).<br>
#
# Types of data structures we cover
# 1. Tuples
# 2. Dictionaries
# 3. Arrays
#
# <br>
# As a review: tuples and arrays are both ordered sequences of elements (so we can access them by index).
# Dictionaries and arrays are mutable.
#
# We'll explain more shortly!
# ## Tuples
#
# We can create a tuple by enclosing an ordered sequence of elements in `( )`.
#
# Syntax: <br>
# ```julia
# (item1, item2, ...)```
mis_animales_favoritos = ("pingüino", "gato", "petauro_del_azúcar")
# We can index into this tuple,
mis_animales_favoritos[1]
# But since tuples are immutable, we can't modify it
mis_animales_favoritos[1] = "nutria"
# ## Dictionaries
#
# If we have sets of data that are related to one another, we can store the data in a dictionary. A good example is a contact list, where we associate names with phone numbers.
#
# Syntax:
# ```julia
# Dict(key1 => value1, key2 => value2, ...)```
miagenda = Dict("Jenny" => "867-5309", "Cazafantasmas" => "555-2368")
# In this example, each name and number is a "key"-"value" pair. We can grab Jenny's number (a value) using the associated key.
miagenda["Jenny"]
# We can add another entry to the dictionary as follows
miagenda["Kramer"] = "555-FILK"
# Let's see what our dictionary looks like now...
miagenda
# To delete Kramer from our dictionary - and simultaneously grab his number - we use pop!
pop!(miagenda, "Kramer")
miagenda
# Unlike tuples and arrays, dictionaries are not ordered, and we cannot index into them
miagenda[1]
# In the example above, `julia` thinks you are trying to access a value associated with the key `1`.
# ## Arrays
#
# Unlike tuples, arrays are mutable. Unlike dictionaries, arrays contain ordered sequences of elements. <br>
# We can create an array by enclosing this sequence of elements in `[ ]`.
#
# Syntax: <br>
# ```julia
# [item1, item2, ...]```
#
#
# For example, we can use an array to remember my friends
myfriends = ["Ted", "Robyn", "Barney", "Lily", "Marshall"]
# Or store a sequence of numbers
fibonacci = [1, 1, 2, 3, 5, 8, 13]
mezcla = [1, 1, 2, 3, "Ted", "Robyn"]
# Once we have an array, we can grab individual pieces of data inside the array by indexing into it. For example, if we want the third friend in myfriends, we write
myfriends[3]
# We can use indexing to mutate an entry of the array
myfriends[3] = "Baby Bop"
# We can also edit arrays with the `push!` and `pop!` functions. `push!` adds an element to the end of the array and `pop!` removes the last element of the array.
#
# We can add another number to our fibonacci sequence
push!(fibonacci, 21)
# and remove it
pop!(fibonacci)
fibonacci
# So far we have given examples of one-dimensional arrays of scalars, but arrays can have an arbitrary number of dimensions and can also store other arrays.
# <br><br>
# For example, these are arrays of arrays
favoritos = [["koobideh", "chocolate", "eggs"],["penguins", "cats", "sugargliders"]]
numbers = [[1, 2, 3], [4, 5], [6, 7, 8, 9]]
# Below are 2D and 3D arrays populated with random values
rand(4, 3)
rand(4, 3, 2)
# Be careful when copying arrays!
fibonacci
somenumbers = fibonacci
somenumbers[1] = 404
fibonacci
# Editing `somenumbers` causes `fibonacci` to be edited as well!
#
# In the example above, we didn't actually make a copy of `fibonacci`. We just created a new way to access the entries of the array bound to `fibonacci`.
#
# If we want to make a copy of the array bound to `fibonacci`, we use the `copy` function.
# First restore fibonacci
fibonacci[1] = 1
fibonacci
somemorenumbers = copy(fibonacci)
somemorenumbers[1] = 404
fibonacci
# In this last example, fibonacci was not edited. So we see that the arrays bound to `somemorenumbers` and `fibonacci` are distinct.
# ### Exercises
#
# 3.1 Create an array, `arreglo`, that is a 2-element 1D array of 1-element 1D arrays, each storing the number 0.
# Index into `arreglo` to add a `1` to each of the arrays it contains.
# 3.2 Try to add "Emergencia" to `miagenda` with the value `911`. Try to add `911` as an integer rather than a string. Why doesn't this work?
# 3.3 Create a new dictionary called `agenda_flexible` that has Jenny's number stored as a string and the Cazafantasmas number stored as an integer.
# 3.4 Add the key "Emergencia" with the (integer) value `911` to `agenda_flexible`.
# 3.5 Why can we add an integer as a value to `agenda_flexible` but not to `miagenda`? How could we have initialized `miagenda` so that it would accept integers as values?
| es-es/intro-to-julia-ES/ 3. Estructuras de datos.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Installing PySpark on Mac and Linux
# ## 1. Install Spark
# http://spark.apache.org/downloads.html
# <img src="./imgs/01.spark1.png" width="800px">
# 1. Choose Spark 3.0.1 with Hadoop 3.2
# 2. Click spark-3.0.1-bin-hadoop3.2.tgz to download it
# 3. Move the archive to /usr/local: sudo mv spark-3.0.1-bin-hadoop3.2.tgz /usr/local
# 4. Extract it: cd /usr/local && tar -zvxf spark-3.0.1-bin-hadoop3.2.tgz
# 5. Edit the environment configuration file: vim ~/.bash_profile
# >```shell
# export SPARK_HOME=/usr/local/spark-3.0.1-bin-hadoop3.2
# export PATH=$PATH:$SPARK_HOME/bin
# export PYSPARK_PYTHON=python3.7
# export PYSPARK_DRIVER_PYTHON=ipython
# export PYSPARK_DRIVER_PYTHON_OPTS="notebook"
# >```
# > Note: set PYSPARK_PYTHON to your own Python environment
# > Set PYSPARK_DRIVER_PYTHON to the driver; use ipython for Jupyter
# > Set PYSPARK_DRIVER_PYTHON_OPTS to notebook if you use Jupyter
# 6. Apply the configuration: source ~/.bash_profile
# 7. Edit the Spark environment configuration file
# > * 7.1 Go to /usr/local/spark-3.0.1-bin-hadoop3.2/conf, find the spark-env.sh.template file, copy it into the same directory, and rename the copy spark-env.sh
# > * 7.2 Edit spark-env.sh and add:
# >> ```shell
# PYSPARK_PYTHON=python3.7
# PYSPARK_DRIVER_PYTHON=ipython
# PYSPARK_DRIVER_PYTHON_OPTS="notebook"
# >>```
# > * You may need to restart your computer for the changes to take effect
# 8. Run spark-shell to verify that the installation succeeded
# <img src="./imgs/01.spark-shell.png" width="800px">
# 9. Type :quit to exit
# ## 2. Install pyspark
# 1. Installation command:
# ```shell
# pip3.7 install pyspark -i https://pypi.douban.com/simple
# ```
| docs/01.install_pyspark.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Reading geoscience data
# ## License
#
# All content can be freely used and adapted under the terms of the
# [Creative Commons Attribution 4.0 International License](http://creativecommons.org/licenses/by/4.0/).
#
# 
# ## Imports
#
# Put **all** `import` statements in the cell below. Don't forget `%matplotlib inline` so that plots appear in the notebook.
import matplotlib.pyplot as plt
import numpy as np
import math
# %matplotlib inline
# ## IMPORTANT
#
# Now that you know defensive programming techniques, I expect all the code you write below to use those techniques. Create docstrings for your functions, check the inputs (when possible) and check the outputs. **Don't forget the comments**.
# ## Temperature in Rio de Janeiro
#
# The file `data/23.31S-42.82W-TAVG-Trend.txt` contains monthly average temperature data for the city of Rio de Janeiro. The file also contains annual, 5-year, 10-year, and 20-year moving averages. These data were downloaded from the Berkeley Earth website (http://berkeleyearth.lbl.gov/locations/23.31S-42.82W).
#
# ### Task 1
#
# Write two functions: one that reads the monthly temperature data, and another that reads the annual moving-average data.
# Both functions should:
#
# * Receive **only** the name of the data file as input.
# * Return two lists: one with the dates corresponding to the data and another with the temperature data.
# * The returned dates should be in decimal years. E.g.: January 1984 would be 1984.0833333333333 (1984 + 1/12).
# * Dates without temperature values (NaN) should be ignored (not included in the lists).
#
# Use your functions to load the data and plot the monthly average temperature and the annual moving average over time.
# +
# A function was defined for the monthly temperature
def tempmedia_mensal(arquivo):
    # docstring
    """Function that builds a list of monthly dates and a list of the corresponding monthly temperatures"""
    # Open the file
    arquivo = open(arquivo)
    # Create empty lists for the dates and the data
    datas = []
    dados = []
    # Loop over the lines of the file.
    for linhas in arquivo:
        # Skip header lines, which start with the % symbol
        if linhas[0] != '%':
            coluna = linhas.split()
            # Require that the line is not empty.
            if len(coluna) != 0:
                # Ignore the NaN entries in column 2.
                if coluna[2] != 'NaN':
                    # Convert columns 0, 1 and 2 to real numbers.
                    ano = float(coluna[0])
                    mes = float(coluna[1])
                    dadostemp = float(coluna[2])
                    # Add the baseline temperature (24.01) to the anomaly in column 2.
                    dadosmes = dadostemp + 24.01
                    # Add to the year the decimal fraction representing the month.
                    anomensal = ano + (mes/12)
                    # Append the dates and data to the lists.
                    datas.append(anomensal)
                    dados.append(dadosmes)
    # Close the file
    arquivo.close()
    # Return the lists
    return datas, dados

# Using the function tempmedia_mensal: the input is the file name and it returns two lists, x1 with the dates and y1 with the temperatures.
x1,y1 = tempmedia_mensal('data/23.31S-42.82W-TAVG-Trend.txt')
print(x1, y1)
# +
# Function that computes the annual moving-average temperature
def mmovelanual(arquivo):
    # Docstring
    """Function that builds a list of monthly dates and a list of the corresponding annual moving-average temperatures"""
    # Open the file
    arquivo = open(arquivo)
    # Create empty lists
    datas = []
    dados = []
    # Loop over the file.
    for linhas in arquivo:
        # Only take lines that do not start with the % symbol.
        if linhas[0] != '%':
            # Split the line.
            coluna = linhas.split()
            # Only take lines that are not empty.
            if len(coluna) != 0:
                # Ignore the NaN entries in column 4
                if coluna[4] != 'NaN':
                    # Convert columns 0, 1 and 4 to real numbers
                    ano = float(coluna[0])
                    mes = float(coluna[1])
                    dadostemp = float(coluna[4])
                    # Add the anomalies to the given mean
                    dadosmes = dadostemp + 24.01
                    # Compute the year with the month as a decimal fraction.
                    anomensal = ano + (mes/12)
                    # Append the values to the lists.
                    datas.append(anomensal)
                    dados.append(dadosmes)
    # Close the file
    arquivo.close()
    # Return the dates and data
    return datas, dados

# Pass the file name to the function and define two variables, dates and data, generated by the function.
x2,y2 = mmovelanual('data/23.31S-42.82W-TAVG-Trend.txt')
# -
# Build the plot
plt.figure()
plt.plot(x1, y1, '.k', label = 'Média mensal')
plt.plot(x2, y2, '-r', linewidth = 2, label = 'Média movel anual')
# Legend
legend = plt.legend(loc='upper left', shadow=True, fontsize='large')
# Label the x axis.
plt.xlabel("Ano")
plt.xlim(min(x1), max(x1))
# Label the y axis.
plt.ylabel("Temperatura Média (°C)")
plt.grid(True, which='major', axis='both')
# ### Expected result
#
# The final plot should look like the one below:
#
# 
# ### Task 2
#
# Write a function that computes the annual mean temperature from the monthly temperatures. Your function must:
#
# * Take as input the list of dates and the list of monthly temperatures.
# * Return two lists: one with the years and one with the corresponding mean temperatures.
# * Years that do not contain data for all 12 months must be ignored (not included in the returned lists).
#
# Use your function to compute the annual mean. Plot the annual mean temperature per year together with the annual running mean.
#
# **Hint**: the function `math.floor` returns the integer that precedes a real number. E.g. `math.floor(1984.23) == 1984`
# +
# Function that computes the annual mean temperature.
def tempmanual(datas, dados):
    # Create empty lists.
    lista_anos_media = []
    lista_temp_media = []
    # Number of months in a year.
    N = 12
    # Loop over every month in the record.
    for i in range(0, len(datas), 1):
        # Keep only windows of 12 months that all fall in the same year.
        # Note: December is encoded as ano + 12/12 = ano + 1, so a half-month
        # offset is subtracted before taking the floor of the date.
        if i + (N - 1) < len(datas) and math.floor(datas[i] - 1/24) == math.floor(datas[i + (N - 1)] - 1/24):
            # Mean of the 12 monthly temperatures.
            media_temp = sum(dados[i:i + N])/N
            # Append the computed values to the lists.
            lista_temp_media.append(media_temp)
            lista_anos_media.append(math.floor(datas[i] - 1/24))
    # Return the lists
    return lista_anos_media, lista_temp_media
# Use the function to compute the annual means.
t1, t2 = tempmanual(x1, y1)
print(t1, t2)
# -
# Build the plot
plt.figure()
plt.plot(t1, t2, 'ok', label = 'Média Anual')
plt.plot(x2, y2, '-r', linewidth = 1, label = 'Média móvel anual')
# Legend
legend = plt.legend(loc='upper left', shadow=True, fontsize='large')
# Label the x axis.
plt.xlabel("Ano")
plt.xlim(min(x1), max(x1))
# Label the y axis.
plt.ylabel("Temperatura Média (°C)")
plt.grid(True, which='major', axis='both')
# ### Expected result
#
# The final plot should look like the one below:
#
# 
# ## <NAME>
#
# Save the annual mean data to a CSV (comma separated values) file named `temp-media-anual.csv`. The values must be separated by `,`. The first column must contain the years and the second the temperatures. This file must be present in your repository (`git add` it).
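# A minimal sketch of the CSV-writing step, assuming `t1` and `t2` hold the years and annual mean temperatures returned by `tempmanual`; illustrative values are used below so the snippet is self-contained.

```python
import csv

# Illustrative values; in the notebook these come from tempmanual(x1, y1)
t1 = [1850, 1851, 1852]          # years
t2 = [23.9, 24.1, 24.0]          # annual mean temperatures (°C)

# Write one "year,temperature" row per year
with open('temp-media-anual.csv', 'w', newline='') as saida:
    escritor = csv.writer(saida)  # comma-separated by default
    for ano, temp in zip(t1, t2):
        escritor.writerow([ano, temp])
```
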
| capstone.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Lesson 3. Data preprocessing and feature selection
# ### <NAME>
# Choose any dataset from the UCI Machine Learning Repository (http://archive.ics.uci.edu/ml/index.php) or use your own data, and carry out data preprocessing and feature selection following the outline below. Comments on every section are mandatory.
# ## Data preprocessing
# +
import numpy as np
import pandas as pd
import os
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
plt.style.use('ggplot')
# -
filename = 'wine.csv'
data = pd.read_csv(filename)
data = data.interpolate()
data.head(6)
# ### Rescale data
# +
from sklearn.preprocessing import MinMaxScaler
array = data.values
# separate array into input and output components
X = array[:,1:13]
Y = array[:,0]
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX = scaler.fit_transform(X)
# summarize transformed data
np.set_printoptions(precision=3)
data0 = pd.DataFrame(rescaledX, columns=data.columns[1:])
data0.head(6)
# -
# Manually
# +
data0 = data.copy()
# Rescale the feature columns only (the class label is in column 0)
data0 = data0.iloc[:, 1:]
min1 = data0.min()
max1 = data0.max()
for i in range(data0.shape[1]):
    data0.iloc[:, i] = (data0.iloc[:, i] - min1.iloc[i])/(max1.iloc[i] - min1.iloc[i])
data0.head(6)
# -
# As the plot below shows, the data have been rescaled into the interval [0, 1]
fig, ax = plt.subplots(figsize=(12, 5))
data0.iloc[:, :].plot(ax=ax)
plt.show()
# ### Standardize data
# +
from sklearn.preprocessing import StandardScaler
std_model = StandardScaler()
new_data = std_model.fit_transform(X)
data1 = pd.DataFrame(new_data, columns=data.columns[1:])
data1.head(6)
# -
# Manually
# +
data1 = data.copy()
# Standardize the feature columns only (the class label is in column 0)
feats = data1.columns[1:]
means = data1[feats].mean()
std = data1[feats].std()
for col in feats:
    data1[col] = (data1[col] - means[col])/std[col]
data1.head(6)
# -
# As the histograms below show, every feature now has mean 0 and a standard deviation of approximately 1.
data1.hist(figsize=(8,8), density=True, layout=(4,3), bins=30, sharex=False, sharey=False)
plt.subplots_adjust(hspace=0.5)
plt.show()
# ### Normalize data
# +
from sklearn.preprocessing import Normalizer
norm = Normalizer()
new_data1 = norm.fit_transform(X)
data2 = pd.DataFrame(new_data1, columns=data.columns[1:])
data2.head(6)
# -
# ### Binarize data (Make Binary)
# +
from sklearn.preprocessing import Binarizer
norm = Binarizer(threshold=0.0)
new_data1 = norm.fit_transform(X)
data2 = pd.DataFrame(new_data1, columns=data.columns[1:])
data2.head(6)
# -
# ## Feature selection
# ### Univariate Selection
# +
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_classif
unv_sel = SelectKBest(score_func=f_classif, k=3)
new_data2 = unv_sel.fit(X,Y)
np.set_printoptions(precision=3)
print(new_data2.scores_)
features = new_data2.transform(X)
# summarize selected features
print(features[:,:])
len(new_data2.scores_)
# -
# ### Recursive Feature Elimination
# +
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='liblinear')
rfe = RFE(model, n_features_to_select=4)
fit = rfe.fit(X, Y)
print("Num Features: %d" % fit.n_features_)
print("Selected Features: %s" % fit.support_)
print("Feature Ranking: %s" % fit.ranking_)
# -
# ### Principal Component Analysis
# +
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
fit = pca.fit(X)
# summarize components
print("Explained Variance: %s" % fit.explained_variance_ratio_)
print(fit.components_)
# -
# ### Feature Importance
from sklearn.ensemble import ExtraTreesClassifier
# feature extraction
model = ExtraTreesClassifier(n_estimators=100)
model.fit(X, Y)
print(model.feature_importances_)
| Marketing Analytics/Rescale, selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os, glob
import numpy as np
import pandas as pd
import sys
sys.path.insert(0, "../src")
import auxilary_functions as f
from generation_algorithm import *
import subprocess
import csv
import matplotlib.pyplot as plt
import json
import networkx as nx
cfg_file = "../src/config-ecoli.json"
# -
def load_ffl_based_component():
    with open(cfg_file, 'r') as j:
        config_file = json.loads(j.read())
    interaction_matrix = f.get_interaction_matrix(config_file)
    motifs, counter = f.motif_search(config_file, interaction_matrix, batch_size=10000)
    motifs_orig = motifs["030T"]
    ffl_nodes = list(set(sum([list(map(int, x.split("_"))) for x in motifs_orig], [])))
    interaction_matrix_ffl = np.zeros((len(ffl_nodes), len(ffl_nodes)))
    for motif in motifs_orig:
        motif = f.split_motif(motif)
        motif_new = list(ffl_nodes.index(x) for x in motif)
        interaction_matrix_ffl[np.ix_(motif_new, motif_new)] = \
            interaction_matrix[np.ix_(motif, motif)]
    interaction_matrix_ffl.shape, interaction_matrix_ffl.sum()
    # Vertex-based motif network on FFL
    motifs_network = f.build_vmn(motifs_orig, verbose=True)
    V = nx.Graph(motifs_network)
    nx.is_connected(V)
    return interaction_matrix, motifs_orig, motifs_network, interaction_matrix_ffl
# +
out_degrees = []
list_of_matrices = []
for i in range(16):
    yeast_matrix, ffl_motif, ffl_component, ffl_matrix = load_ffl_based_component()
    substrate_matrix = get_network_nucleus(
        yeast_matrix, ffl_motif, ffl_component, min_size=20, random_seed=np.random.randint(1, 100)
    )
    list_of_matrices.append(substrate_matrix.transpose())
    out_degree = []
    # out-degree of each node, normalised by the network size
    for j in range(substrate_matrix.shape[0]):
        out_degree.append(substrate_matrix[j].sum()/substrate_matrix.shape[0])
    out_degrees.append(out_degree)
# +
fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(16,16))
for i in range(16):
    a = pd.DataFrame({'Out-degree': out_degrees[i]}).sort_values(['Out-degree'])
    a.plot(kind='bar', ax=axes.flat[i])
# +
fig, axes = plt.subplots(nrows=4, ncols=4, figsize=(16,16))
for num, i in enumerate(list_of_matrices):
    substrate_matrix_graph = nx.DiGraph(i)
    nx.draw(substrate_matrix_graph, node_size = 5, edge_color = 'gray', ax=axes.flat[num])
# -
| snippets/.nucleus_extraction_exploration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
from joblib import Parallel, delayed
import multiprocessing
import numpy as np
import pandas as pd
import sys
sys.path.append('../')
from src import *
random.seed(1234)
np.random.seed(1234)
# -
PATH_DATA = '/home/disij/projects/acdc/data/'
OUTPUT_DIR = "/extra/disij0/data/flow_cytometry/flowMP_output/"
PATH_SAMPLES = OUTPUT_DIR + "BMMC_accepted_samples"
FILENAME_PREDICTIONS = OUTPUT_DIR + "BMMC_predictions.csv.gz"
# Load BMMC dataset from [ACDC paper](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5447237/pdf/btx054.pdf)...
# +
### LOAD DATA ###
path = PATH_DATA + 'BMMC_benchmark/'
df = pd.read_csv( path + 'BMMC_benchmark.csv.gz', sep=',', header = 0, \
compression = 'gzip', engine='python')
table = pd.read_csv(path + 'BMMC_table.csv', sep=',', header=0, index_col=0)
channels = ['CD45','CD45RA', 'CD19', 'CD11b', 'CD4', 'CD8', 'CD34',
'CD20', 'CD33', 'CD123', 'CD38', 'CD90', 'CD3']
df.columns = channels + ['cell_type']
df = df[df.cell_type != 'NotGated']
### The five cell types below are the ones we have no prior information about.
### In the ACDC implementation they are all categorized as "unknown"; since we
### cannot handle unknown cell types, we remove all instances of these types.
### The proportion of "unknown" cells is 24.49% of the total.
df = df.loc[df['cell_type'] != 'Megakaryocyte']
df = df.loc[df['cell_type'] != 'CD11bmid Monocyte']
df = df.loc[df['cell_type'] != 'Platelet']
df = df.loc[df['cell_type'] != 'Myelocyte']
df = df.loc[df['cell_type'] != 'Erythroblast']
table = table.fillna(0)
X = df[channels].values
### transform data
data = np.arcsinh((X-1.)/5.)
N, d = data.shape
emp_bounds = np.array([[data[:, k].min(), data[:, k].max()] for k in range(d)])
ct2idx = {x:i for i,x in enumerate(table.index)}
idx2ct = [key for idx, key in enumerate(table.index)]
Y = np.array([ct2idx[_] for _ in df.cell_type])
# -
# Learn MP trees and write accepted samples to file...
# +
# %%time
###################### Parallel run #####################
# n_mcmc_chain = 50
# n_mcmc_samples = 3000
# chains = range(n_mcmc_chain)
# num_cores = multiprocessing.cpu_count()
# accepted_MP = Parallel(n_jobs=num_cores)(delayed(MP_mcmc)\
# (data, emp_bounds, table, i, n_mcmc_samples) for i in chains)
# write_chains_to_file(accepted_MP, PATH_SAMPLES)
### Here we run sequentially to monitor the effect of ensembling multiple chains
n_mcmc_chain = 50
n_mcmc_samples = 3000
accepted_MP = []
for i in range(n_mcmc_chain):
    print "Sampling Chain %d..." % i
    accepted_MP.append(MP_mcmc(data, emp_bounds, table, i, n_mcmc_samples))
    burnt_samples = [sample for chain in accepted_MP[-1:] for sample in chain[-20:]]
    Y_predict = classify_cells_majority(data, burnt_samples, table, ct2idx)
    accuracy = sum(Y == Y_predict)*1.0/N
    print "Accuracy of cell classification on all data: %.3f" % (accuracy)
write_chains_to_file(accepted_MP, PATH_SAMPLES)
# -
# Classify cells based on accepted MP trees, and write predictions to file...
# +
burnt_samples = [sample for chain in accepted_MP for sample in chain[-1:]]
Y_predict = classify_cells_majority(data, burnt_samples, table, ct2idx)
accuracy = sum(Y == Y_predict)*1.0/ N
print "Accuracy of cell classification: %.3f" % (accuracy)
df['MP_prediction'] = pd.Series([idx2ct[i] for i in Y_predict], index=df.index)
df.to_csv(FILENAME_PREDICTIONS, compression='gzip', index = False)
# -
print [len(i) for i in accepted_MP]
print table.shape
| run/flowMP-BMMC-run-v1.0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Importing libraries
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import sklearn
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import SGDClassifier
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn releases
# -
data = pd.read_csv("Data/reddit-top-flairs-cleaned.csv")
data.head()
# ### Dropping unwanted columns and shuffling the data.
# +
data = data.drop(['dirty_title','dirty_body','num_words_body','num_unique_words_title'
                  ,'num_unique_words_body','num_chars_title','num_words_title','num_chars_body','url','id'], axis=1)
# sample() returns a copy, so reassign to actually shuffle the rows
data = data.sample(frac=1).reset_index(drop=True)
# -
data["body"].fillna(".", inplace = True)
data.head()
top_flairs = ["Politics", "Non-Political", "Coronavirus", "AskIndia", "Policy/Economy",
"Photography", "Business/Finance", "Sports", "Science/Technology"]
# ### The "title" and "body" columns are combined into a single feature, "input_features"; "flair" holds the output label.
# +
input_features = data["title"] + " "+ data["body"]
data = data.assign(input_features = input_features)
y = data.flair
# -
data = data.drop(['title', 'body'],axis=1)
data.head()
# ### Rearranging the columns and writing the final dataset used for models.
# +
cols = data.columns.tolist()
print(cols)
cols = cols[-1:] + cols[:-1]
print(cols)
data = data[cols]
data.to_csv('Data/final-data.csv', index=False)
data.head()
# -
# ### Train test split is done.
# +
x_train, x_test, y_train, y_test = train_test_split(input_features,y, test_size=0.3)
print("x_train dim:",x_train.shape, "\ty_train dim:", y_train.shape)
print("x_test dim:",x_test.shape, "\ty_test dim:", y_test.shape)
# -
# ## Logistic Regression
#
# A standard baseline classifier, widely used and easy to interpret.
# ### Parameters:
# 1. C = inverse regularization strength; C = 1/λ. It controls the penalty term that guards against overfitting.
# 2. max_iter = the maximum number of iterations for the solver to converge.
# +
# Logistic Regression
logistic = Pipeline([('cv', CountVectorizer()),('tfidf', TfidfTransformer()),('lr', LogisticRegression(C=1000, max_iter=1000))])
logistic.fit(x_train, y_train)
y_pred = logistic.predict(x_test)
accuracy = (accuracy_score(y_pred, y_test)*100)
print("Accuracy: %.2f" % accuracy +'%')
print(classification_report(y_test, y_pred,target_names=top_flairs))
# -
# ### Inference from Logistic Regression:
# 1. The model has a total accuracy of **83%**, which is good.
# 2. The most performing flair category was **"Coronavirus"**.
# 3. The least performing flair category was **"Non-Political"**.
# ## NAIVE BAYES CLASSIFIER
# One of the most suitable variants for text is the multinomial variant.
# +
# Naive Bayes
naive = Pipeline([('cv', CountVectorizer()),('tfidf', TfidfTransformer()),('nb', MultinomialNB())])
naive.fit(x_train, y_train)
y_pred = naive.predict(x_test)
accuracy = (accuracy_score(y_pred, y_test)*100)
print("Accuracy: %.2f" % accuracy +'%')
print(classification_report(y_test, y_pred,target_names=top_flairs))
# -
# ### Inference from Naive Bayes Classifier:
# 1. The model has a total accuracy of **69%**, which is decent.
# 2. The most performing flair category was **"Coronavirus"**.
# 3. The least performing flair category was **"Non-Political"**.
# ## Random Forest Classifier
#
# Random forest classifiers suit most multi-class classification problems; they also offer good interpretability and train quickly.
# ### Parameters:
# 1. n_estimators = the number of trees in the forest.
# +
# Random Forest
random = Pipeline([('cv', CountVectorizer()),('tfidf', TfidfTransformer()),('rf', RandomForestClassifier(n_estimators = 600))])
random.fit(x_train, y_train)
y_pred = random.predict(x_test)
accuracy = (accuracy_score(y_pred, y_test)*100)
print("Accuracy: %.2f" % accuracy +'%')
print(classification_report(y_test, y_pred,target_names=top_flairs))
# -
# ### Inference from Random Forest Classifier:
# 1. The model has a total accuracy of **82%**, which is good.
# 2. The most performing flair category was **"AskIndia"**.
# 3. The least performing flair category was **"Business/Finance"**.
joblib.dump(random, 'Pickle_files/random-forest.pkl')
# ## k- Nearest Neighbours Classifier
# ### Parameters:
# 1. n_neighbours = number of neighbours to take into consideration.
# +
# k-Nearest Neighbours
neighbours = Pipeline([('vect', CountVectorizer()),('tfidf', TfidfTransformer()),('knn', KNeighborsClassifier(n_neighbors=10))])
neighbours.fit(x_train, y_train)
y_pred = neighbours.predict(x_test)
accuracy = (accuracy_score(y_pred, y_test)*100)
print("Accuracy: %.2f" % accuracy +'%')
print(classification_report(y_test, y_pred,target_names=top_flairs))
# -
# ### Inference from k- Nearest Neighbours Classifier:
# 1. The model has a total accuracy of **63%**, the lowest among the models tried.
# 2. The most performing flair category was **"Coronavirus"**.
# 3. The least performing flair category was **"Photography"**.
# ## Linear Support Vector Machine
# ### Parameters:
# 1. loss = 'hinge': the loss function to be used; 'hinge' gives a linear SVM.
# 2. penalty = the regularization term; 'l2' is the standard choice for a linear SVM.
# 3. alpha = constant that multiplies the regularization term.
# 4. max_iter = the maximum number of passes over the training data (epochs).
# +
svm = Pipeline([('cv', CountVectorizer()),('tfidf', TfidfTransformer()),('svm', SGDClassifier(loss='hinge',
penalty='l2',alpha=0.001, max_iter=20))])
svm.fit(x_train, y_train)
y_pred = svm.predict(x_test)
accuracy = (accuracy_score(y_pred, y_test)*100)
print("Accuracy: %.2f" % accuracy +'%')
print(classification_report(y_test, y_pred,target_names=top_flairs))
# -
# ### Inference from Linear SVM Classifier:
# 1. The model has a total accuracy of **84%**, the best of the models tried.
# 2. The most performing flair category was **"Policy/Economy"**.
# 3. The least performing flair category was **"Photography"**.
joblib.dump(svm, 'Pickle_files/linear-svm.pkl')
# # OUT OF THE 5 MODELS USED, LINEAR SVM PERFORMED THE BEST WITH AN ACCURACY OF 84%
| Models/Part III - Building a Flare Detector.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:fisi2028]
# language: python
# name: conda-env-fisi2028-py
# ---
# +
import numpy as np
import pandas as pd
import scipy as sp
import sklearn as sl
import seaborn as sns; sns.set()
import matplotlib as mpl
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
# %matplotlib inline
# -
# # Assignments 5 and 6: The diffusion equation in 3 dimensions
#
# You may wonder why we are going to solve the diffusion equation. The answer is not obvious, but it is interesting: diffusion processes share something very beautiful with stochastic processes. To see it, we will analyze a COVID problem: the estimated mean infection time of virions.
#
# COVID transmission happens predominantly through aerosol droplets sprayed into the air. These *semi*-spherical particles, typically $\sim1-10\,\mu m$ in radius, are composed mainly of water, organic lipids, and virions (which can be considered *quasi*-pointlike, since they are on the order of $\sim100\,nm$). When an aerosol particle enters and adheres to the respiratory tract, the virus takes some time to come into contact with the cells and infect the host, because of Brownian motion. Virions, unlike bacteria, have no cilia, flagella, or other means of propulsion, so their only hope is that thermal fluctuations bring them to the surface of the aerosol droplet, where they can replicate their DNA upon contact with susceptible tissue. This process is essentially stochastic and can be modeled with the diffusion equation. This equation has two parts; the idea is for you to solve the problem in the simplest possible way. The equation is
# $$
# \frac{\partial\Psi}{\partial t}=D\nabla^2\Psi,
# $$
# where $D$ is the diffusion constant of the medium and $\Psi$ is the particle concentration. The diffusion rate depends on the temperature and the viscosity of the medium and can be modeled with the Einstein-Stokes relation,
# $$
# D=\frac{k_BT}{6\pi\eta a},
# $$
# where $k_B$ is the Boltzmann constant, $T$ the temperature in kelvin, $\eta$ the viscosity of the medium, and $a$ the virion radius. In essence, the diffusion equation says that the mean virion concentration depends on position and time. However, the time a virion takes to reach the surface can be modeled, on average, with the following diffusion equation,
# $$
# -\nabla^2\tau=\frac{1}{D},
# $$
# where $\tau$ is the mean time, which depends on the initial position.
# ## 1. Write the diffusion equation for the time $\tau$ in spherical coordinates, assuming $\tau(r,\theta,\phi)\simeq\tau(r)$, since by spherical symmetry it only depends on the radial position relative to the centre of the droplet (use **LaTeX**)
# [**Write here**]
#
# Example of an equation:
# $$
# f(x)=\frac{1}{x^2}
# $$
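# As a worked reference: substituting the spherically symmetric Laplacian $\nabla^2\tau=\frac{1}{r^2}\frac{\mathrm{d}}{\mathrm{d}r}\left(r^2\frac{\mathrm{d}\tau}{\mathrm{d}r}\right)$ into $-\nabla^2\tau=1/D$ gives the radial equation:

```latex
-\frac{1}{r^{2}}\frac{\mathrm{d}}{\mathrm{d}r}\!\left(r^{2}\,\frac{\mathrm{d}\tau}{\mathrm{d}r}\right)=\frac{1}{D}
```

# Expanding the derivative, this reads $\tau''(r)+\frac{2}{r}\tau'(r)=-\frac{1}{D}$.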
# ## 2. Solve the differential equation for the time numerically and plot it
#
# Assume the following boundary conditions:
# 1. $\tau(R)=0$, since if the virion is already at the surface the time must naturally be zero.
# 1. $\tau^\prime(0)=0$, since by symmetry the radial derivative must vanish at the origin.
#
# Assume the following values:
# - $R=5\mu m$ for the radius of the sphere of *quasi* water (compute the volume $V$)
# - $\eta_{\text{H}_2\text{O}}\simeq1\times10^{-3}\,Pa\cdot s$ (pascal seconds)
# - $\frac{\eta}{\eta_{\text{H}_2\text{O}}}\approx10^3\to10^5$
# - $a\simeq100\,nm$
# - $V=\frac{4}{3}\pi R^3$
# - $k_BT\simeq4.05\times10^{-21}J$
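# The boundary-value problem above can be solved numerically, for instance with `scipy.integrate.solve_bvp`. A minimal sketch, assuming the ratio $\eta/\eta_{\text{H}_2\text{O}}=10^3$ and working with the dimensionless variables $s=r/R$, $u=\tau D/R^2$ so the solver never sees the huge $1/D$ scale:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Physical constants from the statement (eta/eta_H2O ~ 1e3 assumed)
kB_T = 4.05e-21            # J
eta = 1e-3 * 1e3           # Pa·s
a = 100e-9                 # virion radius, m
R = 5e-6                   # droplet radius, m
D = kB_T / (6 * np.pi * eta * a)   # Einstein-Stokes diffusion constant, m^2/s

# Dimensionless form: with s = r/R and u = tau*D/R^2 the equation
# -(1/s^2) d/ds (s^2 du/ds) = 1 becomes u'' + (2/s) u' = -1.
def rhs(s, y):
    u, du = y
    return np.vstack([du, -1.0 - 2.0 * du / s])

def bc(ya, yb):
    # u'(0) = 0 by symmetry, u(1) = 0 at the absorbing surface
    return np.array([ya[1], yb[0]])

s = np.linspace(1e-3, 1.0, 200)   # start slightly off 0 to avoid 2/s blowing up
sol = solve_bvp(rhs, bc, s, np.zeros((2, s.size)))
tau = sol.sol(s)[0] * R**2 / D    # back to seconds

# Sanity check: the analytic solution is tau(r) = (R^2 - r^2) / (6 D)
tau_exact = (1.0 - s**2) / 6.0 * R**2 / D
```

# The numerical profile should agree with the closed form $\tau(r)=(R^2-r^2)/(6D)$, a useful sanity check before plotting.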
# ## 3. If the virions are uniformly distributed, find the time a virion takes to leave the aerosol droplet.
#
# Keep in mind that you must average assuming the virion has a uniform distribution, i.e. $\rho\left(\vec{r}\right)=1/V$, using the relation
# $$
# \bar{\tau} = \int_{\mathcal{V}}\tau\left(\vec{r}\right)\rho\left(\vec{r}\right)\,\text{d}\vec{r} = \frac{4\pi}{V}\int_{0}^{R}\tau(r)\,r^2\text{d}r.
# $$
# Perform the integral numerically.
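# A minimal sketch of the numerical average with the trapezoid rule. The radial profile is taken here from the closed form $\tau(r)=(R^2-r^2)/(6D)$ so the snippet is self-contained; in practice you would plug in your numerical solution from item 2. The constants again assume $\eta/\eta_{\text{H}_2\text{O}}=10^3$:

```python
import numpy as np

kB_T = 4.05e-21                    # J
eta = 1e-3 * 1e3                   # Pa·s (water viscosity times eta/eta_H2O ~ 1e3)
a = 100e-9                         # virion radius, m
R = 5e-6                           # droplet radius, m
D = kB_T / (6 * np.pi * eta * a)   # Einstein-Stokes diffusion constant

r = np.linspace(0.0, R, 2001)
tau = (R**2 - r**2) / (6 * D)      # mean exit time from radius r
V = 4.0 / 3.0 * np.pi * R**3

# tau_bar = (4*pi/V) * integral_0^R tau(r) r^2 dr, via the trapezoid rule
f = tau * r**2
tau_bar = 4 * np.pi / V * np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
```

# For this profile the integral can also be done by hand, giving $\bar{\tau}=R^2/(15D)$, which the numerical value should reproduce to several digits.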
# ## 4. Markov chains
#
# We will now solve the previous problem using a Markov process. Suppose you **divide** the sphere into small cubes of width $\delta x=\delta y=\delta z=\Delta=R/N$, with $N$ a chosen number of partitions. For our experiment, we place a virion at an initial position $\vec{r}_0=(\Delta\,j, 0, 0)$, determined by an index $j\in\{0,1,2,\dots,N\}$. Update the position of the virion on the discrete lattice following the rules below:
# - Choose the number of divisions $N$ and compute $\Delta$.
# - Set the time scale $\delta t$ so that the transition probability $\alpha=D\frac{\delta t}{\Delta^2}<\frac{1}{6}$. (I recommend $\leq1/12$.)
# - Run a Markov-Monte Carlo simulation, updating the position with transition probability $\alpha$ towards each of the nearest neighbours, and count the number of time steps needed to reach the surface, i.e. $|\vec{r}(t_m)|>R-\Delta$.
# - Repeat this experiment many times for the same initial position to gather statistics (mean and standard deviation).
# - Repeat all the steps for every index $j\in\{0,1,2,\dots,N\}$ and plot. Compare with the previous results!
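# A minimal sketch of one such run, in lattice units: a step has length $\Delta$ and a tick lasts $\delta t$, so a walk of $m$ ticks corresponds to a physical time $m\,\delta t=m\,\alpha\Delta^2/D$. The values of `N` and the number of repetitions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def exit_steps(j, N, alpha, max_steps=200_000):
    """Ticks for a walker started at (j, 0, 0) lattice units to reach
    within one cell of the surface, i.e. |r| > N - 1."""
    pos = np.array([j, 0, 0], dtype=float)
    moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                      [0, -1, 0], [0, 0, 1], [0, 0, -1]])
    for step in range(1, max_steps + 1):
        k = int(rng.random() / alpha)   # k uniform in 0..int(1/alpha)-1
        if k < 6:                       # move to neighbour k with prob. alpha each
            pos += moves[k]
        if np.linalg.norm(pos) > N - 1:
            return step
    return max_steps

alpha = 1.0 / 12.0   # alpha = D * dt / Delta**2, kept below 1/6
N = 10               # Delta = R / N
steps = [exit_steps(0, N, alpha) for _ in range(50)]
mean_steps, std_steps = np.mean(steps), np.std(steps)
```

# Multiplying `mean_steps` by $\delta t$ recovers the physical mean exit time, which can be compared against the continuum result from the previous items.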
# ## 5. Design an experiment to compute item (3) using Markov-Monte Carlo
| ejercicios/tarea5y6/tarea5y6.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.12 ('ai4e')
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 2847, "status": "ok", "timestamp": 1644848673327, "user": {"displayName": "<NAME>", "photoUrl": "https://<KEY>", "userId": "03255547772837230796"}, "user_tz": -420} id="RFI1qplXc_CW" outputId="f21c6474-f11c-47eb-fcee-7d289060d803"
from google.colab import drive
drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 21, "status": "ok", "timestamp": 1644848673328, "user": {"displayName": "<NAME>", "photoUrl": "https://<KEY>", "userId": "03255547772837230796"}, "user_tz": -420} id="tJp3nIaNhHJ1" outputId="3bd8a82d-0510-405d-e2af-3647aa13cd92"
# %cd /content/drive/MyDrive/TKPTTT/DNA-Decoder-Simulator/
# !ls
# + id="LwEP1t5Zc_N_"
from Models.Decoder import Decoder
from Models.Encoder import Encoder
from utils.converter import str2ncs, ncs2str
from Models.Decoder.Decoder import DFS, BFS, Beam
import time
import numpy as np
import matplotlib.pyplot as plt
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1737, "status": "ok", "timestamp": 1644848678490, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="LoRccjCTc_OC" outputId="fbb8a36d-9dbc-4cd6-9212-f46c292ee7aa"
# 1k1050 10k450 1k0350 1k03100 1k01100 20010100 2000150 2000450 2001050 20004200
en = Encoder()
origins, reads = en.load_data(path='data/20004200')
origins.shape, reads.shape
# + [markdown] id="gWPpmj83ohQe"
# ## BFS
# + [markdown] id="tu7hfhOromEu"
# ### Choosing the k-mer size
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 3911247, "status": "ok", "timestamp": 1644858339727, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="MkBZpq21c_OG" outputId="d7e97cf9-41d0-46e1-f573-b5135d7d294f"
init_kmer_size = 24
prune = 4
bfs_kmer_sizes = []
bfs_accuracies = []
bfs_times = []
for kmer_size in range(init_kmer_size, 60, 2):
    # kmer_size = 50
    total_time = 0
    bfs_results = []
    print(f'kmer_size: {kmer_size} prune: {prune}')
    for i, original in enumerate(origins):
        decoder = Decoder(origin=original, prune=prune)
        decoder.build_graph(reads[i], len(original), kmer_size=kmer_size, visualization=False)
        start = time.time()
        rs = decoder.decode(BFS(min_weight=prune, db=100000))
        ttt = time.time() - start
        total_time += round(ttt, 3)
        bfs_results.append(rs[0])
        if i%20 == 0:
            print(f'{i}: {round(ttt, 3)}s {round(total_time, 3)}s {round(np.array(bfs_results).sum()*100/(i+1), 4)}%')
    bfs_kmer_sizes.append(kmer_size)
    bfs_accuracies.append(round(np.array(bfs_results).sum()*100/origins.shape[0], 4))
    bfs_times.append(total_time*1000)
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 1108, "status": "ok", "timestamp": 1644858349116, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="QMKTaVLjk3SP" outputId="b20897c9-c43f-4c56-e648-18f763c8a31f"
fig, ax = plt.subplots(figsize=(14, 6))
ax.plot(bfs_kmer_sizes, bfs_accuracies, color="b", label="accuracy")
ax.set_xlabel("k-mer size")
ax.set_ylabel("accuracy (%)")
ax2 = ax.twinx()
ax2.plot(bfs_kmer_sizes, bfs_times, color="r", label="time")
ax2.set_ylabel("time (ms)")
plt.title('BFS Accuracy and Run time by K-mer size (error rate 4%, 200 copies)')
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2)
plt.show()
# + [markdown] id="HB64B4yDoq5V"
# ### Choosing the prune weight
# + colab={"background_save": true, "base_uri": "https://localhost:8080/"} id="aNp976XKot-p"
init_prune = 1
kmer_size = 50
bfs_prunes = []
bfs_accuracies = []
bfs_times = []
for prune in reversed(range(init_prune, 9)):
    total_time = 0
    bfs_results = []
    print(f'kmer_size: {kmer_size} prune: {prune}')
    for i, original in enumerate(origins):
        decoder = Decoder(origin=original, prune=prune)
        decoder.build_graph(reads[i], len(original), kmer_size=kmer_size, visualization=False)
        start = time.time()
        rs = decoder.decode(BFS(min_weight=prune, db=0))
        ttt = time.time() - start
        total_time += round(ttt, 3)
        bfs_results.append(rs[0])
        if i%20 == 0:
            print(f'{i}: {round(ttt, 3)}s {round(total_time, 3)}s {round(np.array(bfs_results).sum()*100/(i+1), 4)}%')
    bfs_prunes.append(prune)
    bfs_accuracies.append(round(np.array(bfs_results).sum()*100/origins.shape[0], 4))
    bfs_times.append(total_time*1000)
bfs_prunes.reverse()
bfs_accuracies.reverse()
bfs_times.reverse()
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 795, "status": "ok", "timestamp": 1644767801753, "user": {"displayName": "S\u01a1<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="SqbOHXVu3hYH" outputId="b3977559-2d46-4ef0-c7d8-46352798b429"
fig, ax = plt.subplots(figsize=(14, 6))
ax.plot(bfs_prunes, bfs_accuracies, color="b", label="accuracy")
ax.set_xlabel("prune")
ax.set_ylabel("accuracy (%)")
ax2 = ax.twinx()
ax2.plot(bfs_prunes, bfs_times, color="r", label="time")
ax2.set_ylabel("time (ms)")
plt.title('BFS Accuracy and Run time by Prune weight (error rate 4%, 50 copies)')
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2)
plt.show()
# + [markdown] id="XIi1ST-5HPws"
# ## DFS
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 1430689, "status": "ok", "timestamp": 1644772473222, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="ab-bxH3dwUB6" outputId="52a3f321-cef4-4f7c-9ca6-52495f3dd07d"
init_kmer_size = 14
prune = 4
dfs_kmer_sizes = []
dfs_accuracies = []
dfs_times = []
for kmer_size in range(init_kmer_size, 60, 2):
    # kmer_size = 50
    total_time = 0
    dfs_results = []
    print(f'kmer_size: {kmer_size} prune: {prune}')
    for i, original in enumerate(origins):
        decoder = Decoder(origin=original, prune=prune)
        decoder.build_graph(reads[i], len(original), kmer_size=kmer_size, visualization=False)
        start = time.time()
        rs = decoder.decode(DFS(min_weight=prune))
        ttt = time.time() - start
        total_time += round(ttt, 3)
        dfs_results.append(rs[0])
        if i%20 == 0:
            print(f'{i}: {round(ttt, 3)}s {round(total_time, 3)}s {round(np.array(dfs_results).sum()*100/(i+1), 4)}%')
    dfs_kmer_sizes.append(kmer_size)
    dfs_accuracies.append(round(np.array(dfs_results).sum()*100/origins.shape[0], 4))
    dfs_times.append(total_time*1000)
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 713, "status": "ok", "timestamp": 1644772473936, "user": {"displayName": "S\u01a1<NAME>\u1ea1m", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="R5f5OD2Cwpf5" outputId="b4e3419d-60cd-4f83-bbfb-500ae10b8cce"
fig,ax = plt.subplots(figsize=(14, 6))
ax.plot(dfs_kmer_sizes, dfs_accuracies, color="b")
ax.set_xlabel("k-mer size")
ax.set_ylabel("accuracy(%)")
ax2=ax.twinx()
ax2.plot(dfs_kmer_sizes, dfs_times, color="r")
ax2.set_ylabel("time(ms)")
plt.title('DFS Accuracy and Run time by K-mer size (error rate 4%, 50 copies)')
plt.show()
# + [markdown] id="EMq-nG0Nolda"
# ### Choosing the prune weight
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 303210, "status": "ok", "timestamp": 1644768104920, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="Ozzw_s9uwu2_" outputId="91a87c77-aab2-4c4d-cd3e-a63def5f3d18"
init_prune = 1
kmer_size = 32
dfs_prunes = []
dfs_accuracies = []
dfs_times = []
for prune in reversed(range(init_prune, 9)):
    total_time = 0
    dfs_results = []
    print(f'kmer_size: {kmer_size} prune: {prune}')
    for i, original in enumerate(origins):
        decoder = Decoder(origin=original, prune=prune)
        decoder.build_graph(reads[i], len(original), kmer_size=kmer_size, visualization=False)
        start = time.time()
        rs = decoder.decode(DFS(min_weight=prune))
        ttt = time.time() - start
        total_time += round(ttt, 3)
        dfs_results.append(rs[0])
        if i % 20 == 0:
            print(f'{i}: {round(ttt, 3)}s {round(total_time, 3)}s {round(np.array(dfs_results).sum()*100/(i+1), 4)}%')
    dfs_prunes.append(prune)
    dfs_accuracies.append(round(np.array(dfs_results).sum()*100/origins.shape[0], 4))
    dfs_times.append(total_time*1000)
dfs_prunes.reverse()
dfs_accuracies.reverse()
dfs_times.reverse()
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 781, "status": "ok", "timestamp": 1644768105693, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="h9PQUB8Nw-mk" outputId="12ab7853-cce1-44b0-a365-2b891666b4eb"
fig,ax = plt.subplots(figsize=(14, 6))
ax.plot(dfs_prunes, dfs_accuracies, color="b")
ax.set_xlabel("prune")
ax.set_ylabel("accuracy(%)")
ax2=ax.twinx()
ax2.plot(dfs_prunes, dfs_times, color="r")
ax2.set_ylabel("time(ms)")
plt.title('DFS Accuracy and Run time by Prune weight')
plt.show()
# + [markdown] id="GfUHSEBtwUo5"
# ### Compare DFS & BFS
#
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 403108, "status": "ok", "timestamp": 1644573149989, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="_ZXh80uiHRB4" outputId="81d23b5f-f4b4-426b-ba83-a4ae030f0aef"
init_prune = 2
kmer_size = 17
dfs_prunes = []
dfs_accuracies = []
dfs_times = []
for prune in reversed(range(init_prune, 8)):
    total_time = 0
    dfs_results = []
    print(f'kmer_size: {kmer_size} prune: {prune}')
    for i, original in enumerate(origins):
        decoder = Decoder(origin=original, prune=prune)
        decoder.build_graph(reads[i], len(original), kmer_size=kmer_size, visualization=False)
        start = time.time()
        rs = decoder.decode(DFS(min_weight=prune))
        ttt = time.time() - start
        total_time += round(ttt, 3)
        dfs_results.append(rs[0])
        if i % 20 == 0:
            print(f'{i}: {round(ttt, 3)}s {round(total_time, 3)}s {round(np.array(dfs_results).sum()*100/(i+1), 4)}%')
    dfs_prunes.append(prune)
    dfs_accuracies.append(round(np.array(dfs_results).sum()*100/origins.shape[0], 4))
    dfs_times.append(total_time*1000)
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 432, "status": "ok", "timestamp": 1644573265939, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="sUzTfe_bHYct" outputId="6bae01f9-fca5-4224-88b0-2bd0a3d9d3e0"
fig,ax = plt.subplots(figsize=(14, 6))
ax.plot(dfs_prunes, dfs_accuracies, color="b")
ax.set_xlabel("prune")
ax.set_ylabel("accuracy(%)")
ax2=ax.twinx()
ax2.plot(dfs_prunes, dfs_times, color="r")
ax2.set_ylabel("time(ms)")
plt.title('DFS Accuracy and Run time by Prune weight')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 854, "status": "ok", "timestamp": 1644768106534, "user": {"displayName": "<NAME>", "photoUrl": "<KEY>", "userId": "03255547772837230796"}, "user_tz": -420} id="jO1qwSwwJSmR" outputId="e3e48a5b-e624-49f8-ec00-dd57383e8e7a"
fig,ax = plt.subplots(figsize=(12, 6))
ax.plot(bfs_prunes, bfs_accuracies, color="b", label='bfs acc')
ax2=ax.twinx()
ax2.plot(bfs_prunes, bfs_times, '--', color="b", label='bfs time')
ax.plot(dfs_prunes, dfs_accuracies, color="r", label='btk acc')
ax2.plot(dfs_prunes, dfs_times, '--', color="r", label='btk time')
ax.set_xlabel("prune size")
ax.set_ylabel("accuracy(%)")
ax2.set_ylabel("time(ms)")
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2)
plt.title('BFS and Backtracking accuracy and time by prune size')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 3, "status": "ok", "timestamp": 1644772473941, "user": {"displayName": "S\u01a1n Ph\u1ea1m", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="ae6nqmdYe6NM" outputId="07ade9c4-6921-4593-fc66-3cbe90370584"
fig,ax = plt.subplots(figsize=(12, 6))
ax.plot(bfs_kmer_sizes, bfs_accuracies, color="b", label='bfs acc')
ax2=ax.twinx()
ax2.plot(bfs_kmer_sizes, bfs_times, '--', color="b", label='bfs time')
ax.plot(dfs_kmer_sizes, dfs_accuracies, color="r", label='btk acc')
ax2.plot(dfs_kmer_sizes, dfs_times, '--', color="r", label='btk time')
ax.set_xlabel("k-mer size")
ax.set_ylabel("accuracy(%)")
ax2.set_ylabel("time(ms)")
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2)
plt.title('BFS and Backtracking accuracy and time by k-mer size (error rate 4%, 50 copies)')
plt.show()
# + [markdown] id="UmStAJp0oj0a"
# ## Beam
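# Before sweeping the queue size, the pruning rule behind the `Beam` strategy can be sketched in isolation. The toy graph and weights below are hypothetical — the notebook's real `Decoder`/`Beam` classes are defined in earlier cells — but the core idea is the same: at each expansion step only the `k` highest-weight partial paths survive.

```python
# Minimal beam-search sketch over a toy weighted DAG (hypothetical example,
# not the notebook's Decoder/Beam implementation). At every expansion step,
# only the k heaviest partial paths are kept; the rest are pruned.
def beam_search(graph, start, goal, k):
    beams = [([start], 0)]  # (path, accumulated weight)
    while beams:
        candidates = []
        for path, weight in beams:
            node = path[-1]
            if node == goal:
                # finished paths stay in the pool, competing on weight
                candidates.append((path, weight))
                continue
            for nxt, w in graph.get(node, []):
                candidates.append((path + [nxt], weight + w))
        # prune: keep only the k highest-weight partial paths
        candidates.sort(key=lambda pw: pw[1], reverse=True)
        beams = candidates[:k]
        if all(p[-1] == goal for p, _ in beams):
            return beams
    return []

graph = {'A': [('B', 3), ('C', 1)], 'B': [('D', 2)], 'C': [('D', 5)]}
best = beam_search(graph, 'A', 'D', k=2)
```

With `k=2` both paths survive here, but the heavier path `A-C-D` (weight 6) ranks first; a larger queue size `k` trades run time for a lower chance of pruning the true path.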
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 7533340, "status": "ok", "timestamp": 1644825749058, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="97nD9CNhc_OJ" outputId="a8826280-46d8-4d7b-b798-f9c2b743c49d"
init_k = 20
prune = 6
kmer_size = 12
beam_k_sizes = []
beam_accuracies = []
beam_times = []
for k in range(init_k, 300, 5):
    total_time = 0
    beam_results = []
    print(f'kmer_size: {kmer_size} k: {k}')
    for i, original in enumerate(origins):
        decoder = Decoder(origin=original, prune=prune)
        decoder.build_graph(reads[i], len(original), kmer_size=kmer_size, visualization=False)
        start = time.time()
        rs = decoder.decode(Beam(k=k, db=0))
        ttt = time.time() - start
        total_time += round(ttt, 3)
        beam_results.append(rs[0])
        if i % 10 == 0:
            print(f'{i}: {round(ttt, 3)}s {round(total_time, 3)}s {round(np.array(beam_results).sum()*100/(i+1), 4)}%')
    beam_k_sizes.append(k)
    beam_accuracies.append(round(np.array(beam_results).sum()*100/origins.shape[0], 4))
    beam_times.append(total_time*1000)
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 783, "status": "ok", "timestamp": 1644825750873, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="XKbAsv6BQ2LV" outputId="2ea9f4f1-bd29-47c3-dde1-a3bc0071bba7"
fig,ax = plt.subplots(figsize=(14, 6))
ax.plot(beam_k_sizes, beam_accuracies, color="b")
ax.set_xlabel("queue size")
ax.set_ylabel("accuracy(%)")
ax2=ax.twinx()
ax2.plot(beam_k_sizes, beam_times, color="r")
ax2.set_ylabel("time(ms)")
plt.title('Beam Accuracy and Run time by Queue size')
plt.show()
# + id="aXJqev4BUOOY"
# + id="Dva0pQRsic_7"
k = 300
prune = 1
init_kmer_size = 12
beam_kmer_sizes = []
beam_accuracies = []
beam_times = []
for kmer_size in range(init_kmer_size, 30, 2):
# kmer_size = 50
# total_time = 0
# beam_results = []
# print(f'kmer_size: {kmer_size} k: {k}')
# for i, original in enumerate(origins):
# decoder = Decoder(origin=original, prune=prune)
# decoder.build_graph(reads[i], len(original), kmer_size=kmer_size, visualization=False)
# start = time.time()
# rs = decoder.decode(Beam(k=k, db=0))
# ttt = time.time() - start
# total_time += round(ttt, 3)
# beam_results.append(rs[0])
# if i%10 == 0:
# print(f'{i}: {round(ttt, 3)}s {round(total_time, 3)}s {round(np.array(beam_results).sum()*100/(i+1), 4)}%')
    beam_kmer_sizes.append(kmer_size)
# beam_accuracies.append(round(np.array(beam_results).sum()*100/origins.shape[0], 4))
# beam_times.append(total_time*1000)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 2067545, "status": "ok", "timestamp": 1644830304805, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="vV-vrC9NzlBD" outputId="627cac9d-15e0-437d-b8ed-0ee678343cf4"
k = 300
prune = 1
init_kmer_size = 12
beam_kmer_sizes = []
beam_accuracies = []
beam_times = []
for kmer_size in range(init_kmer_size, 30, 2):
    total_time = 0
    beam_results = []
    print(f'kmer_size: {kmer_size} k: {k}')
    for i, original in enumerate(origins):
        decoder = Decoder(origin=original, prune=prune)
        decoder.build_graph(reads[i], len(original), kmer_size=kmer_size, visualization=False)
        start = time.time()
        rs = decoder.decode(Beam(k=k, db=0))
        ttt = time.time() - start
        total_time += round(ttt, 3)
        beam_results.append(rs[0])
        if i % 10 == 0:
            print(f'{i}: {round(ttt, 3)}s {round(total_time, 3)}s {round(np.array(beam_results).sum()*100/(i+1), 4)}%')
    beam_kmer_sizes.append(kmer_size)
    beam_accuracies.append(round(np.array(beam_results).sum()*100/origins.shape[0], 4))
    beam_times.append(total_time*1000)
# + colab={"base_uri": "https://localhost:8080/"} executionInfo={"elapsed": 445, "status": "ok", "timestamp": 1644830562715, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="oD_7oeK6iWui" outputId="88f8ba0d-d67b-404d-ee09-f65df0912d4e"
beam_kmer_sizes
# + id="gNVg2_tBiqbH"
beam_accuracies = [35.6021, 51.8325, 57.5916, 57.0681, 54.4503, 42.4084, 25.1309, 13.089, 4.1885]
beam_times = [199204, 189009, 178288, 173001, 169771, 163872, 161526, 154314, 143958]
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 555, "status": "ok", "timestamp": 1644830850117, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="44AtcCaBz9Yp" outputId="940b6e47-beb6-4e34-fb78-71b0d9c3c5e0"
fig,ax = plt.subplots(figsize=(14, 6))
ax.plot(beam_kmer_sizes, beam_accuracies, color="b", label="accuracy")
ax.set_xlabel("k-mer size")
ax.set_ylabel("accuracy(%)")
ax2=ax.twinx()
ax2.plot(beam_kmer_sizes, beam_times, color="r", label="time")
ax2.set_ylabel("time(ms)")
plt.title('Beam Accuracy and Run time by K-mer size (error rate 10%, 50 copies)')
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 404} executionInfo={"elapsed": 1236, "status": "ok", "timestamp": 1644830911475, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="A1sJJxbtj1-k" outputId="0458d365-ca58-4cf4-93fe-7565679a54dd"
fig,ax = plt.subplots(figsize=(12, 6))
ax.plot(bfs_kmer_sizes, bfs_accuracies, color="b", label='bfs acc')
ax2=ax.twinx()
ax2.plot(bfs_kmer_sizes, bfs_times, '--', color="b", label='bfs time')
ax.plot(beam_kmer_sizes, beam_accuracies, color="r", label='beam acc')
ax2.plot(beam_kmer_sizes, beam_times, '--', color="r", label='beam time')
ax.set_xlabel("k-mer size")
ax.set_ylabel("accuracy(%)")
ax2.set_ylabel("time(ms)")
h1, l1 = ax.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
ax.legend(h1+h2, l1+l2)
plt.title('BFS and Beam accuracy and time by k-mer size (error rate 10%, 50 copies)')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 531} executionInfo={"elapsed": 39, "status": "ok", "timestamp": 1644562463535, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gj4JqPaJxeo0ONOMc-xrl0nJXyXS9zEhSnxJer9Nw=s64", "userId": "03255547772837230796"}, "user_tz": -420} id="8lgE5gw5jhxN" outputId="316964b8-b71d-4b51-8d44-1be801c303b1"
plt.figure(figsize=(16, 8))
plt.plot(beam_k_sizes, beam_accuracies)
plt.xlabel('queue size')
plt.ylabel('accuracy(%)')
plt.title('Beam Accuracy by Queue size')
# visualize.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# [](https://colab.research.google.com/github/JoshWilde/JOSS_Reviewer_Matcher/blob/main/Idea%201/JOSS_Reviewer_Idea_1.ipynb)
import re
import glob
import string
import warnings

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from tqdm import tqdm
from termcolor import colored
from pdfminer.high_level import extract_text
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer, TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

import nltk
nltk.download('stopwords')
nltk.download('punkt')
nltk.download('wordnet')
from nltk import sent_tokenize, word_tokenize
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.probability import FreqDist

from JOSS_PDF_Cleaner import Clean_PDF
PAPER_OF_INTEREST_FNAME = glob.glob('/Volumes/Seagate Backup Plus Drive/JOSS Project/joss-papers-master/*/*/*.pdf')
def Get_Lemma_Words(POI_PDF):
    text = str(POI_PDF)
    text2 = text.split()
    words_no_punc = []
    for w in text2:
        if w.isalpha():
            words_no_punc.append(w.lower())
    from nltk.corpus import stopwords
    stopwords = stopwords.words('english')
    clean_words = []
    for w in words_no_punc:
        if w not in stopwords:
            clean_words.append(w)
    clean_words_arr = ''
    for i in range(len(clean_words)):
        clean_words_arr = clean_words_arr + ' ' + str(clean_words[i])
    string_for_lemmatizing = clean_words_arr
    lemmatizer = WordNetLemmatizer()
    words_2 = word_tokenize(string_for_lemmatizing)
    lemmatized_words = [lemmatizer.lemmatize(word) for word in words_2]
    lemmatized_words_arr = ''
    for i in range(len(lemmatized_words)):
        lemmatized_words_arr = lemmatized_words_arr + ' ' + str(lemmatized_words[i])
    words = word_tokenize(lemmatized_words_arr)
    return words
df_reviewers = pd.read_csv('../Data/JOSS Table Test.csv')
def Get_Top_Words_tf(Paper_interest, df_reviewers=df_reviewers, num_suggestions=5, num_top20=20):
    POI_PDF = [extract_text(Paper_interest)]
    words = Get_Lemma_Words(POI_PDF)
    fdist = FreqDist(words)
    X = np.array(fdist.most_common())
    top20_tf = X[:num_top20, 0]
    return top20_tf
def Compare_topics(top20, df_reviewers):
    length = df_reviewers.shape[0] - 1
    match_arr = np.zeros(length)
    for i in range(length):
        if pd.isna(df_reviewers['Domains/topic areas you are comfortable reviewing'].str.lower().values[1+i]) == False:
            t = df_reviewers['Domains/topic areas you are comfortable reviewing'].str.lower().values[1+i]
            uniarr = Split_columns(t)
            for j in range(len(uniarr)):
                for k in range(len(top20)):
                    if uniarr[j] == top20[k]:
                        match_arr[i] = match_arr[i] + 1
    return match_arr
def Split_columns(t):
    txt = " ".join("".join([" " if ch in string.punctuation else ch for ch in t]).split())
    sol1 = np.char.split(txt, ' ')
    txt_arr = array_of_lists_to_array(sol1)
    uniarr = np.unique(txt_arr)
    return uniarr
def array_of_lists_to_array(arr):
    return np.apply_along_axis(lambda a: np.array(a[0]), -1, arr[..., None])
def summatation_bot(all_usernames, all_domains, all_num_matched_words, all_matched_words):
    length = len(all_usernames)
    message = 'Hello. \nI have found ' + str(length) + ' possible reviewers for this paper.' + '\n\n'
    for i in range(length):
        ps = 'I believe ' + all_usernames[i] + ' will make a good reviewer for this paper because they have matched ' + str(int(all_num_matched_words[i])) + ' words from their comfortable domain topics with the top 20 most frequent words in the paper. These matched words are ' + str(all_matched_words[i]) + '.\nFrom their topics domain: ' + str(all_domains[i].replace('\n', ', ')) + '.\n'
        message = message + ps + '\n'
    print(message)
Q = 340
print(PAPER_OF_INTEREST_FNAME[Q])
top20_tf = Get_Top_Words_tf(PAPER_OF_INTEREST_FNAME[Q])
top20_tf
def GetReviewer_Vectors(df_reviewers=df_reviewers):
    reviewer_vectors = np.zeros(((df_reviewers.shape[0]-1), 300))
    for i in range(df_reviewers.shape[0]-1):
        if pd.isna(df_reviewers['Domains/topic areas you are comfortable reviewing'].iloc[1:].values[i]) == False:
            review_text = df_reviewers['Domains/topic areas you are comfortable reviewing'].iloc[1:].values[i].lower()
            review_text = review_text.replace('-\\n', '')
            review_text = review_text.replace('\\n', ' ')
            review_text = review_text.replace('\n', ' ')
            review_arr = []
            for token in model(review_text):
                if token.is_alpha == True:
                    if token.is_stop == False:
                        review_arr.append(str(token.lemma_).lower())
            review_arr = np.array(review_arr)
            review_str = ''
            for j in np.unique(review_arr):
                review_str = review_str + j + ' '
            reviewer_vectors[i] = model(review_str).vector
    return reviewer_vectors
def GetCosineSims(doc_vec, review_vec, df_reviewers=df_reviewers):
    all_usernames = []
    all_domains = []
    all_cosine_sims = []
    for j in range(len(review_vec)):
        if pd.isna(df_reviewers.iloc[j+1]['Domains/topic areas you are comfortable reviewing']) == False:
            all_cosine_sims.append(cosine_similarity(np.array([doc_vec]), np.array([review_vec[j]]))[0, 0])
            all_domains.append(df_reviewers.iloc[j+1]['Domains/topic areas you are comfortable reviewing'].lower())
            all_usernames.append(df_reviewers.iloc[j+1].username)
    all_usernames = np.array(all_usernames)
    all_domains = np.array(all_domains)
    all_cosine_sims = np.array(all_cosine_sims)
    return all_usernames, all_domains, all_cosine_sims
import spacy
model = spacy.load('en_core_web_lg')
reviewer_vectors = GetReviewer_Vectors()
doc_top20= ''
for i in top20_tf:
    doc_top20 = doc_top20 + i + ' '
all_usernames2, all_domains2, all_cosine_sims2 = GetCosineSims(model(doc_top20).vector, reviewer_vectors)
def TopReviewers(number=5, all_usernames=all_usernames2, all_domains=all_domains2, all_cosine_sims=all_cosine_sims2):
    message = 'Hello.\n I have found ' + str(number) + ' possible reviewers for this paper.' + '\n\n'
    for J in range(number):
        index = np.argsort(all_cosine_sims)[-1-J]
        ps = 'I believe ' + colored(str(all_usernames[index]), 'green') + ' will be a good reviewer for this paper. Their domain interests and this paper have a cosine similarity score of ' + colored(str(all_cosine_sims[index])[:6], 'blue') + ". This reviewer's domain interests are " + colored(str(all_domains[index].replace('\n', ',')), 'red')
        message = message + ps + '\n\n'
    print(message)
TopReviewers()
# Idea 3/JOSS_Reviewer_Idea_3_TF_spaCy.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import sys
import pickle
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cosine
import cv2
import mtcnn
from keras.models import load_model
sys.path.append('..')
from utils import get_face, l2_normalizer, normalize, save_pickle, plt_show, get_encode
# +
encoder_model = 'data/model/facenet_keras.h5'
people_dir = 'data/people'
encodings_path = 'data/encodings/encodings.pkl'
test_img_path = 'data/test/friends.jpg'
test_res_path = 'data/results/friends.jpg'
recognition_t = 0.3
required_size = (160, 160)
encoding_dict = dict()
# -
# ### Models
face_detector = mtcnn.MTCNN()
face_encoder = load_model(encoder_model)
face_encoder.summary()
# get encode
# ### Prepare
for person_name in os.listdir(people_dir):
    person_dir = os.path.join(people_dir, person_name)
    encodes = []
    for img_name in os.listdir(person_dir):
        img_path = os.path.join(person_dir, img_name)
        img = cv2.imread(img_path)
        img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        results = face_detector.detect_faces(img_rgb)
        if results:
            res = max(results, key=lambda b: b['box'][2] * b['box'][3])
            face, _, _ = get_face(img_rgb, res['box'])
            face = normalize(face)
            face = cv2.resize(face, required_size)
            encode = face_encoder.predict(np.expand_dims(face, axis=0))[0]
            encodes.append(encode)
    if encodes:
        encode = np.sum(encodes, axis=0)
        encode = l2_normalizer.transform(np.expand_dims(encode, axis=0))[0]
        encoding_dict[person_name] = encode
# print keys, values
for key, val in encoding_dict.items():
    print(key, val.shape)
# #### pickle
save_pickle(encodings_path, encoding_dict)
# ### Recognizer
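# The loop below assigns each detected face the database identity with the smallest cosine distance, provided it clears `recognition_t`. The matching rule can be sketched in pure Python (toy 2-D vectors here, not real FaceNet embeddings):

```python
import math

# Sketch of the matching rule used in the recognizer below: the closest
# database encoding wins, but only if its cosine distance is under the
# recognition threshold; otherwise the face stays 'unknown'.
# (Toy 2-D vectors for illustration, not 128-D FaceNet output.)
def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def match(encode, encoding_dict, threshold=0.3):
    name, best = 'unknown', float('inf')
    for db_name, db_encode in encoding_dict.items():
        d = cosine_distance(db_encode, encode)
        if d < threshold and d < best:
            name, best = db_name, d
    return name, best

db = {'alice': [1.0, 0.0], 'bob': [0.0, 1.0]}
name, dist = match([0.9, 0.1], db)  # nearly parallel to 'alice'
```

The threshold is what keeps strangers from being labeled as the nearest database entry; with 0.3 here, a vector dissimilar to everyone stays `'unknown'`.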
img = cv2.imread(test_img_path)
plt_show(img)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
results = face_detector.detect_faces(img_rgb)
for res in results:
    face, pt_1, pt_2 = get_face(img_rgb, res['box'])
    encode = get_encode(face_encoder, face, required_size)
    encode = l2_normalizer.transform(np.expand_dims(encode, axis=0))[0]
    name = 'unknown'
    distance = float("inf")
    for db_name, db_encode in encoding_dict.items():
        dist = cosine(db_encode, encode)
        if dist < recognition_t and dist < distance:
            name = db_name
            distance = dist
    if name == 'unknown':
        cv2.rectangle(img, pt_1, pt_2, (0, 0, 255), 1)
        cv2.putText(img, name, pt_1, cv2.FONT_HERSHEY_PLAIN, 1, (0, 0, 255), 1)
    else:
        cv2.rectangle(img, pt_1, pt_2, (0, 255, 0), 1)
        cv2.putText(img, name + f"__{distance:.2f}", pt_1, cv2.FONT_HERSHEY_PLAIN, 1, (0, 255, 0), 1)
plt_show(img)
cv2.imwrite(test_res_path,img)
# recognizer.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Kaggle---Don't-Get-Kicked" data-toc-modified-id="Kaggle---Don't-Get-Kicked-1"><span class="toc-item-num">1 </span>Kaggle - Don't Get Kicked</a></span><ul class="toc-item"><li><span><a href="#Preprocessing" data-toc-modified-id="Preprocessing-1.1"><span class="toc-item-num">1.1 </span>Preprocessing</a></span></li><li><span><a href="#Modeling" data-toc-modified-id="Modeling-1.2"><span class="toc-item-num">1.2 </span>Modeling</a></span></li><li><span><a href="#Scoring" data-toc-modified-id="Scoring-1.3"><span class="toc-item-num">1.3 </span>Scoring</a></span></li></ul></li><li><span><a href="#Future-Improvements" data-toc-modified-id="Future-Improvements-2"><span class="toc-item-num">2 </span>Future Improvements</a></span></li></ul></div>
# -
from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[3])
# +
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
# %matplotlib inline
# %load_ext watermark
# %load_ext autoreload
# %autoreload 2
# %config InlineBackend.figure_format = 'retina'
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from joblib import dump, load
from xgboost import XGBClassifier
from sortedcontainers import SortedSet
from scipy.stats import randint, uniform
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from mlutils.transformers import Preprocessor
from utils import clean, build_xgb, write_output
# %watermark -a 'Ethen' -d -t -v -p numpy,scipy,pandas,joblib,xgboost,sklearn,matplotlib,sortedcontainers
# -
# # Kaggle - Don't Get Kicked
# Problem description is available at https://www.kaggle.com/c/DontGetKicked
#
# Please download the training and testing dataset provided at the link above and store it under the `../data` directory (i.e. there should be a data directory one level above this notebook).
#
# The `utils.py` contains utility function to prevent cluttering the notebook.
#
# ## Preprocessing
# original raw data
data_dir = os.path.join('..', 'data')
path_train = os.path.join(data_dir, 'training.csv')
data = pd.read_csv(path_train)
data.head()
# The next section specifies the categorical, numerical, datetime columns, columns that are dropped and the rationale behind them.
#
# Columns that are dropped:
#
# For categorical variables, use `dataframe[colname].value_counts()` to check for the number of distinct categories, we'll choose to drop columns with too many distinct categories (number of categories is listed in the parenthesis)
#
# - Make (33), have potential for binning
# - Model (1063)
# - Trim (134)
# - SubModel (863), have potential for binning the first two keywords, e.g. 4D SEDAN LS, 4D SEDAN SE would get merged into 4D SEDAN
# - Color (16)
# - VNST (37), state where the car was purchased, so could potentially bin into regions
# - BYRNO, unique number assigned to the buyer that purchased the vehicle
# - RefId, id for vehicle (each observation) is dropped
# - VNZIP1 (153), zipcode where the car was purchased, most likely duplicated effect with column VNST
#
# Columns that are drop due to too many null values, (percentage of null is listed in the parenthesis):
#
# - PRIMEUNIT (0.95)
# - AUCGUART (0.95)
#
# Drop due to being a redundant column:
#
# - VehYear measures identical information as VehAge
# - WheelTypeID measures identical information as WheelType
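# The distinct-category counts and null percentages quoted above each come from a one-liner per column; a small sketch on invented values (the column names mirror the dataset, the data does not):

```python
import pandas as pd

# Sketch of the column screening described above, on toy values:
# drop categoricals with too many distinct levels, and columns that
# are mostly null.
df = pd.DataFrame({
    'Model': ['A', 'B', 'C', 'D'],                 # high-cardinality candidate
    'Auction': ['ADESA', 'ADESA', 'OTHER', 'OTHER'],
    'PRIMEUNIT': [None, None, None, 'YES'],        # mostly-null candidate
})
n_categories = df['Model'].nunique()              # number of distinct levels
null_fraction = df['PRIMEUNIT'].isnull().mean()   # fraction of missing values
```

Running `nunique()` and `isnull().mean()` over every column gives the numbers in the parentheses above for the real training data.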
# +
# note that the drop_cols variable indicating which columns are dropped is not
# actually used, this is used in the notebook for sanity checking purpose, i.e.
# ensuring the column number adds up to the original column
drop_cols = [
    'Make', 'Model', 'Trim', 'SubModel', 'Color',
    'WheelTypeID', 'VNST', 'BYRNO', 'VNZIP1',
    'PRIMEUNIT', 'AUCGUART', 'VehYear']
cat_cols = [
    'Auction', 'Transmission', 'WheelType', 'Nationality',
    'Size', 'TopThreeAmericanName', 'IsOnlineSale']
num_cols = [
    'VehicleAge', 'VehOdo', 'VehBCost', 'WarrantyCost',
    'MMRCurrentAuctionAveragePrice', 'MMRAcquisitionAuctionAveragePrice',
    'MMRCurrentAuctionCleanPrice', 'MMRAcquisitionAuctionCleanPrice',
    'MMRCurrentRetailAveragePrice', 'MMRAcquisitionRetailAveragePrice',
    'MMRCurrentRetailCleanPrice', 'MMRAcquisitonRetailCleanPrice']
date_cols = ['PurchDate']
label_col = 'IsBadBuy'
ids_col = 'RefId'
# current time for computing recency feature
now = '2011-01-01 00:00:00'
# -
# The next code block executes some preprocessing steps that are specific to this problem.
data = clean(path_train, now, cat_cols, num_cols, date_cols, ids_col, label_col)
print('dimension:', data.shape)
data.head()
# extract target variable, perform
# a quick check of the target variable's skewness
ids = data[ids_col].values
label = data[label_col].values
data = data.drop([ids_col, label_col], axis = 1)
print('labels distribution:', np.bincount(label) / label.size)
# +
# train/validation stratified split
val_size = 0.1
test_size = 0.1
split_random_state = 1234
df_train, df_test, y_train, y_test, ids_train, ids_test = train_test_split(
    data, label, ids, test_size = test_size,
    random_state = split_random_state, stratify = label)
df_train, df_val, y_train, y_val, ids_train, ids_val = train_test_split(
    df_train, y_train, ids_train, test_size = val_size,
    random_state = split_random_state, stratify = y_train)
# +
# due the fact that in the cleaning step, some numeric columns
# got transformed, thus we obtain the new numeric columns after
# the cleaning step;
# use sorted set to ensure the consistency of the column order
num_cols_cleaned = list(SortedSet(df_train.columns) - SortedSet(cat_cols))
# final sanity check to ensure numeric columns are
# all normally distributed-ish
df_train[num_cols_cleaned].hist(bins = 50, figsize = (20, 15))
plt.show()
# -
# Converts the DataFrame format data to numpy array format.
# +
# ideally this preprocessing step should be constructed
# into a pipeline along with the model, but this is infeasible
# as of now
# https://github.com/dmlc/xgboost/issues/2039
preprocess = Preprocessor(num_cols_cleaned, cat_cols)
X_train = preprocess.fit_transform(df_train)
X_val = preprocess.transform(df_val)
X_test = preprocess.transform(df_test)
print('colnames', preprocess.colnames_)
X_train
# -
# ## Modeling
#
# Xgboost (Extreme Gradient Boosting) is chosen for its performance. We also set up a validation set to perform early stopping, which prevents overfitting issues.
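# The early-stopping behavior that `build_xgb` relies on can be illustrated independently (simplified logic, not xgboost's internal implementation): training halts once the validation metric fails to improve for a fixed number of rounds, and the best round is kept.

```python
# Simplified early-stopping loop for illustration only; xgboost implements
# this internally through its eval_set machinery.
def early_stopping(val_scores, patience=3):
    """Return the index of the best score, stopping once `patience`
    consecutive rounds show no improvement (higher is better, e.g. AUC)."""
    best_idx, best = 0, float('-inf')
    for i, score in enumerate(val_scores):
        if score > best:
            best_idx, best = i, score
        elif i - best_idx >= patience:
            break  # validation metric has stagnated -> stop training
    return best_idx

# Validation AUC peaks at round 2 and then degrades, so training would
# stop early and the round-2 model would be kept.
rounds = [0.60, 0.68, 0.72, 0.71, 0.70, 0.69, 0.69]
best_round = early_stopping(rounds)
```

This is why a separate validation set is carved out above: the test set must stay untouched, and stopping on training loss alone would just reward overfitting.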
cv = 10
n_iter = 3
model_random_state = 4321
eval_set = [(X_train, y_train), (X_val, y_val)]
xgb_tuned = build_xgb(n_iter, cv, model_random_state, eval_set)
xgb_tuned.fit(X_train, y_train)
pd.DataFrame(xgb_tuned.cv_results_)
# +
# model checkpoint for future scoring
model_dir = os.path.join('..', 'model')
if not os.path.isdir(model_dir):
    os.mkdir(model_dir)
checkpoint_preprocess = os.path.join(model_dir, 'preprocess.pkl')
checkpoint_xgb = os.path.join(model_dir, 'xgb.pkl')
# -
dump(preprocess, checkpoint_preprocess)
dump(xgb_tuned, checkpoint_xgb)
# monitor the train, validation and test AUC score
y_pred = []
xgb_best = xgb_tuned.best_estimator_
zipped = zip(
    ('train', 'validation', 'test'),
    (X_train, X_val, X_test),
    (y_train, y_val, y_test))
for name, X, y in zipped:
    xgb_pred = xgb_best.predict_proba(
        X, ntree_limit = xgb_best.best_ntree_limit)[:, 1]
    score = round(roc_auc_score(y, xgb_pred), 2)
    print('{} AUC: {}'.format(name, score))
    y_pred.append(xgb_pred)
# +
# output the prediction
output_dir = os.path.join('..', 'output')
if not os.path.isdir(output_dir):
    os.mkdir(output_dir)
ids = np.hstack((ids_train, ids_val, ids_test))
y_pred = np.hstack(y_pred)
# this prediction table can be written to a .csv or upload back to database
output = pd.DataFrame({
    ids_col: ids,
    label_col: y_pred
}, columns = [ids_col, label_col])
output.head()
# -
# output to .csv file
output_path = os.path.join(output_dir, 'prediction.csv')
write_output(ids, ids_col, y_pred, label_col, output_path)
# ## Scoring
#
# Scoring a future dataset; here it is the test set provided by Kaggle.
# +
path_future = os.path.join(data_dir, 'test.csv')
data = clean(path_future, now, cat_cols, num_cols, date_cols, ids_col)
ids = data[ids_col].values
data = data.drop(ids_col, axis = 1)
preprocess = load(checkpoint_preprocess)
xgb_tuned = load(checkpoint_xgb)
X = preprocess.transform(data)
xgb_best = xgb_tuned.best_estimator_
xgb_pred = xgb_best.predict_proba(
X, ntree_limit = xgb_best.best_ntree_limit)[:, 1]
xgb_pred
# -
output_path = os.path.join(output_dir, 'prediction_future.csv')
write_output(ids, ids_col, xgb_pred, label_col, output_path)
# After understanding the overall workflow, you can use the `main.py` script and follow the steps below to replicate it:
#
# ```bash
# # assuming you're at the project's root directory
#
# # train the model on the training set and store it
# python src/main.py --train --inputfile training.csv --outputfile prediction.csv
#
# # predict on future dataset and output the prediction
# # to a .csv file in an output directory (will be created
# # one level above where the script is if it doesn't exist yet)
# python src/main.py --inputfile test.csv --outputfile prediction_future.csv
# ```
#
# As of now, most of the changeable parameters used throughout this notebook are coded as constants at the top of the script and not exposed as command-line arguments.
# # Future Improvements
#
# This script reaches around 0.70 ~ 0.72 AUC on the test set. Some potential ways of improving this score include:
#
# - Leverage more features, e.g. some categorical columns can be included via binning (using intuition or domain expertise) or embedding methods, and columns with missing values can be included by converting them to a binary indicator of whether the value is missing, since missingness itself can be a signal.
# - Explicitly add interaction terms between the top most important features, identified via the model's feature importance or LIME.
# - Run more iterations of hyperparameter search, or use smarter hyperparameter search methods.
# - Oversampling, undersampling or a mix of both could be utilized since the dataset is a bit unbalanced. An alternative way to resolve the unbalanced issue is to supply sample weights to each observation, where the observation that represents the minority class will get assigned a higher weight.
# - Try other algorithms to obtain a performance boost, e.g. deep learning or stacking.
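# The sample-weighting idea mentioned above can be sketched with plain numpy
# (a minimal illustration; `balanced_sample_weights` is a hypothetical helper,
# not part of this project):

```python
import numpy as np

def balanced_sample_weights(y):
    """Weight each observation inversely to its class frequency, so every
    class contributes the same total weight during training."""
    y = np.asarray(y)
    classes, counts = np.unique(y, return_counts=True)
    # standard "balanced" weighting: n_samples / (n_classes * n_c)
    per_class = {c: len(y) / (len(classes) * n) for c, n in zip(classes, counts)}
    return np.array([per_class[label] for label in y])

weights = balanced_sample_weights([0, 0, 0, 1])
print(weights)  # the lone positive example carries 3x the weight of each negative
```

# These weights can then be passed to most estimators' `fit` method through
# the `sample_weight` argument.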
| projects/kaggle_dont_get_kicked/src/main.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="iF_ArzdOqnCG" colab_type="text"
# # Anagram Check
# + id="WLX4lSBCqq6k" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598016570471, "user_tz": -60, "elapsed": 832, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
def anagram(s1,s2):
s1 = s1.replace(' ','').lower()
s2 = s2.replace(' ','').lower()
print(s1,s2)
count = {}
for i in s1:
if i in count:
count[i] += 1
else:
count[i] = 1
for i in s2:
if i in count:
count[i] -= 1
        else:
            # character in s2 that never appeared in s1: record a deficit so the check fails
            count[i] = -1
for k in count:
if count[k] != 0:
return False
return True
# + id="qyRTZG-pWavg" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 50} executionInfo={"status": "ok", "timestamp": 1598016602991, "user_tz": -60, "elapsed": 761, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}} outputId="ca9df778-4420-44db-f456-c796f8844276"
anagram("d og","god")
# + [markdown] id="0Gr22o590J4R" colab_type="text"
# # Array Pair sum
# + id="W4Cwqzdc0Oju" colab_type="code" colab={}
def array_pair_sum(arr,k):
if len(arr) < 2:
return 0
else:
seen = set()
output = set()
for num in arr:
target = int(k) - num
if target not in seen:
seen.add(num)
else:
output.add((min(num,target),max(num,target)))
return output
# + id="hJ6ShdMu1OPn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} executionInfo={"status": "ok", "timestamp": 1597943167718, "user_tz": -60, "elapsed": 797, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}} outputId="872f6c4f-3de5-4393-bedb-560f6c7b540c"
array_pair_sum([1,3,2,2],3)
# + [markdown] id="oKVCievdOoe2" colab_type="text"
# # Missing elements
# + id="iwgmj8lPO06C" colab_type="code" colab={} executionInfo={"status": "ok", "timestamp": 1598017868852, "user_tz": -60, "elapsed": 1567, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}}
def find_missingElemtnt(arr1,arr2):
result = 0
for i in arr1+arr2:
print(f"{result} with {i}")
result ^= i
print(result)
return result
# + id="uQq5UUa7PDSM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 269} executionInfo={"status": "ok", "timestamp": 1598017872211, "user_tz": -60, "elapsed": 1228, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "01062919807050385714"}} outputId="5306b962-3c56-4fb1-9c2a-f2b590fb28f0"
find_missingElemtnt([5,5,7,7],[5,7,7])
# + [markdown] id="udJ3I5eykiwA" colab_type="text"
# # Largest continuous sum
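# This section is otherwise empty; the classic solution to this problem is
# Kadane's algorithm, sketched below (an assumption about the intended
# problem, since no implementation was provided):

```python
def largest_cont_sum(arr):
    """Maximum sum over all contiguous subarrays (Kadane's algorithm)."""
    if len(arr) == 0:
        return 0
    max_sum = current = arr[0]
    for num in arr[1:]:
        # either extend the running subarray or restart it at num
        current = max(current + num, num)
        max_sum = max(max_sum, current)
    return max_sum

print(largest_cont_sum([1, 2, -1, 3, 4, 10, 10, -10, -1]))  # 29
```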
# + [markdown] id="QnvZjw6YkwDN" colab_type="text"
# # Sequential Reversal
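# Assuming this header refers to reversing the order of words in a sentence
# (the cell body is missing, so this is a sketch of the usual interview
# problem):

```python
def rev_word(s):
    """Reverse the order of words in a string, ignoring extra whitespace."""
    return " ".join(reversed(s.split()))

print(rev_word("  hello john    how are you  "))  # you are how john hello
```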
# + [markdown] id="co5MiB0lkzea" colab_type="text"
# # String Comparison
# + [markdown] id="2IGkiM7Wk5ak" colab_type="text"
# # Unique characters in String
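# Assuming this header asks whether all characters in a string are unique
# (no implementation was provided), a set-based sketch:

```python
def uni_char(s):
    """Return True if every character in the string occurs exactly once."""
    return len(set(s)) == len(s)

print(uni_char("abcde"))   # True
print(uni_char("aabcde"))  # False
```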
# + [markdown] id="3h1VRSknk-im" colab_type="text"
# # New Section
| Colab Notebooks/ArraySequence.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/BrianGakungi/IP-WEEK-2/blob/main/JOHN_NJAGI_IP_WEEK_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="DMlyylONof18"
# **a) UNIVARIATE ANALYSIS**
# + [markdown] id="R6oxrG5FQesp"
# **1) DEFINING THE QUESTION**:
# Which demographic factors most influence the possession of active bank accounts?
# + [markdown] id="uec8QZYsVxmY"
# **2) METRIC FOR SUCCESS:**
# Getting the demographic factors having strong correlation to bank account holders
# + [markdown] id="a6vsOJ-DV1f2"
# **3) UNDERSTANDING THE CONTEXT:**
# The data set summarizes surveys conducted in Kenya, Uganda, Rwanda and Tanzania. It contains several demographic factors that influence the ability of individuals to possess bank accounts. To understand the state of financial inclusion, we study the effects of these factors.
#
#
# + [markdown] id="lu1GUb6NYXeU"
# **4) EXPERIMENTAL DESIGN TAKEN:**
# We will use the factorial experimental design to determine the effects of the multiple variables we have on whether one has a bank account
# + [markdown] id="HaNSu6PUcWxf"
# **5) DATA RELEVANCE:**
# The data set we have for conducting our analysis is relevant, considering it was extracted from surveys conducted by a reliable source, i.e. Finscope.
# + id="D8Kf3GRnd6b1"
# Importing Numpy
import numpy as np
# Importing Pandas
import pandas as pd
# Importing Matplotlib
import matplotlib.pyplot as plt
# Importing Seaborn
import seaborn as sns
# import researchpy
# !pip install -q researchpy
import researchpy as rp
from scipy import stats
import sklearn
# + id="2gSKODryeD54"
# Loading the data set
df = pd.read_csv("/content/Financial Dataset - 1.csv")
# + colab={"base_uri": "https://localhost:8080/", "height": 365} id="VI57Un9leXlt" outputId="ab64d644-3595-4441-f166-9ab405037590"
# preview the data set
df.head()
# + [markdown] id="xC0QUVZrevCx"
# **6) CHECKING THE DATA**
# + colab={"base_uri": "https://localhost:8080/"} id="fSzwXNcze4QU" outputId="7a850625-c9b5-4aa2-bad4-928845020881"
# determining the number of records in our data
df.shape
# + colab={"base_uri": "https://localhost:8080/"} id="Q64VGPZre_8e" outputId="5106d792-f2b8-46b6-c071-95485a7a13e0"
# checking datatype of our data
df.info()
# + [markdown] id="asfrK0-ti2lg"
# **7) TIDYING THE DATA SET**
# + id="QKHOavCw1Zis"
# Replace value in has a bank account column with integers
df["Has a Bank account"].replace(to_replace ="Yes",
value ="1", inplace=True)
# + id="UbCBUF3R1pBZ"
# Replace value in has a bank account column with integers
df["Has a Bank account"].replace(to_replace ="No",
value ="0", inplace=True)
# + id="sNSp9xvU2d98"
df = df.astype({'Has a Bank account':'float64'})
# + colab={"base_uri": "https://localhost:8080/", "height": 365} id="QN0HfE6nCu63" outputId="9f60a558-d53b-4a31-9686-e22a336a024e"
# rename wrongly named columns
df.rename(columns={"Education Level":"Education_Level"}, inplace=True)
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 365} id="A973Y-jmGWes" outputId="3962fe9d-37d7-48bc-8393-82b41d6c354e"
df.rename(columns={"Respondent Age":"Respondent_Age"}, inplace=True)
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="SgXRLGuZi7nm" outputId="9bece8f1-82a3-43fb-fd74-19dad756757e"
# checking for outliers in household size
import matplotlib.pyplot as plt
import seaborn as sns
df.boxplot(column=["household_size"], grid = False)
# + colab={"base_uri": "https://localhost:8080/", "height": 669} id="5zSB1H4wIhdC" outputId="2e8b1493-8bd1-4cdd-ecd4-b4d5414b923a"
# position of outliers in household size
df[(df['household_size'] > 10)]
# + colab={"base_uri": "https://localhost:8080/", "height": 669} id="T8bC5IqBLvat" outputId="47729705-dd76-4cd9-ff92-abdccd00c937"
# previewing the data with outliers trimmed; note the result is not assigned
# back to df, so the capping step below is what actually handles the outliers
df[(df['household_size'] < 11)]
# + id="R9CCfgDTL9Q1"
# capping outliers
upper_limit = df['household_size'].mean() + 3*df['household_size'].std()
lower_limit = df['household_size'].mean() - 3*df['household_size'].std()
# + id="9jVRb_LrMNVW"
df['household_size'] = np.where(
df['household_size']>upper_limit,
upper_limit,
np.where(
df['household_size']<lower_limit,
lower_limit,
df['household_size']
)
)
# + colab={"base_uri": "https://localhost:8080/"} id="Oqh_KlB8MhDO" outputId="74f37b1b-e998-4fa2-fd6e-ff568acc0b06"
# checking our data set
df["household_size"].describe()
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="5W422xwrm5JB" outputId="b3078867-3f9f-4eba-c3c8-d2f63cacda75"
# check for outliers in respondent age
df.boxplot(column=["Respondent_Age"], grid = False)
# + colab={"base_uri": "https://localhost:8080/"} id="trK8giz4IyxU" outputId="9727eeb4-2877-4027-89b1-81b1cc3e7777"
# position of outliers in age
print(np.where(df['Respondent_Age']>100))
# + colab={"base_uri": "https://localhost:8080/"} id="OVPvjTchoErE" outputId="9c94936d-f700-4c22-f971-50f08d73a9bb"
# check for anomalies
q1_size = df["household_size"].quantile(.25)
q3_size = df["household_size"].quantile(.75)
iqr_size = q3_size - q1_size
q1_age = df["Respondent_Age"].quantile(.25)
q3_age = df["Respondent_Age"].quantile(.75)
iqr_age = q3_age - q1_age
print(iqr_size, iqr_age)
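# The IQR values computed above translate into outlier fences via Tukey's
# rule; a small numpy sketch (the sample data here is made up for
# illustration):

```python
import numpy as np

def iqr_fences(values, k=1.5):
    """Tukey's rule: flag points outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

lo, hi = iqr_fences([1, 2, 2, 3, 3, 3, 4, 4, 100])
print(lo, hi)  # 100 falls above the upper fence and would be flagged
```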
# + colab={"base_uri": "https://localhost:8080/"} id="JqsIynxvpuKk" outputId="bcc66507-98f9-46aa-ca24-52483ffd8acd"
# checking for missing values
df.isnull().sum()
# + id="gTdw8bd8qoR6"
# dropping records with more than 2 missing values
df.dropna(thresh = 11, inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 300} id="Jt-DewiArlnS" outputId="87b6ddca-cbce-4e20-c99e-05820a7c774c"
# describing our data set to display the mean for household size and respondent age
df.describe()
# + colab={"base_uri": "https://localhost:8080/"} id="Rb0jvhY8szNq" outputId="804aa720-43f2-4724-c51b-9abdbc1f031d"
# calculating median
m1 = df["household_size"].median()
m2 = df["Respondent_Age"].median()
print(m1, m2)
# + colab={"base_uri": "https://localhost:8080/"} id="E-hrK6JhtOlQ" outputId="c3533706-d166-47e2-eec4-c9c76f2a4d20"
# calculating mode
md1 = df["household_size"].mode()
md2 = df["Respondent_Age"].mode()
print(md1, md2)
# + colab={"base_uri": "https://localhost:8080/"} id="D7yE7vUlt1b0" outputId="8f523e33-c8e8-47bb-a280-977bd9ebbc84"
# calculating standard deviation
sd1 = df["household_size"].std()
sd2 = df["Respondent_Age"].std()
print(sd1, sd2)
# 2.2799 is the deviation of household sizes from the mean of 3.683
# 16.5216 is the deviation of respondent ages from the mean of 38.805
# + colab={"base_uri": "https://localhost:8080/"} id="dBT-MLfYvQMD" outputId="a0ff5bc9-54b8-4882-ab6b-b494ce219697"
# calculating variance
v1 = df["household_size"].var()
v2 = df["Respondent_Age"].var()
print(v1, v2)
# 5.198 is the square of the standard deviation of household size of 2.2799
# 272.9646 is the square of the standard deviation of respondent ages of 16.5216
# + colab={"base_uri": "https://localhost:8080/"} id="9f52YGXBvZeW" outputId="3c6e0f3c-a5dc-411f-db55-fd556d281ce4"
# Calculating range for respondent age
age_max = df["Respondent_Age"].max()
age_min = df["Respondent_Age"].min()
age_range = age_max - age_min
# calculating household size range
size_max = df["household_size"].max()
size_min = df["household_size"].min()
size_range = size_max - size_min
print(age_range, size_range)
# 84.0 represents the difference between the maximum and minimum respondent age in the dataset
# 21.0 represents the difference between the maximum and minimum household size in the dataset
# + colab={"base_uri": "https://localhost:8080/"} id="FXCLk7yGwL9B" outputId="64c88546-fbbc-45f8-80b7-ae4347aab0ca"
# Age quantiles
df["Respondent_Age"].quantile([0.25,0.5,0.75])
# The first quartile (0.25), the median of the lower half of the data, is 26.0.
# The second quartile (0.50), the median of the whole data set, is 35.0.
# The third quartile (0.75), the median of the upper half, is 49.0.
# + colab={"base_uri": "https://localhost:8080/"} id="z0LLTpZqwuOB" outputId="89c946c1-8f80-47de-873f-3c7427ed6ac6"
# household size quantiles
df["household_size"].quantile([0.25,0.5,0.75])
# The first quartile (0.25), the median of the lower half, is 2.0 and the
# second quartile (0.50), the median of the whole data set, is 3.0; the
# third quartile (0.75) is the median of the upper half.
# + colab={"base_uri": "https://localhost:8080/"} id="M6dzOlAbw4Jf" outputId="c08408ba-2565-481d-86ec-53ecb3967eea"
# age skewness
df["Respondent_Age"].skew()
# the age distribution is positively skewed since 0.84 is a positive figure.
#It also indicates that the mean of respondent ages is greater than the mode
# + colab={"base_uri": "https://localhost:8080/"} id="bny1DuFqxXfr" outputId="8298afb6-fe7a-4265-9d26-db5649c72caa"
# household size skewness
df["household_size"].skew()
# the household size distribution is positively skewed since 0.97 is a positive figure.
#It also indicates that the mean of household sizes is greater than the mode
# + colab={"base_uri": "https://localhost:8080/"} id="D1Qc-17fxbKN" outputId="5cdee8f5-0474-4cfa-ca5e-263b0ae372e8"
# age kurtosis
df["Respondent_Age"].kurt()
# kurtosis for respondent ages is greater than 0 hence is a leptokurtic distribution indicating the presence of outliers
# + colab={"base_uri": "https://localhost:8080/"} id="rPaUfl24xiMB" outputId="4174e2e8-d36f-42c5-bc0b-e48fcc4b4fe8"
#household size kurtosis
df["household_size"].kurt()
# kurtosis for household sizes is greater than 0 hence is a leptokurtic distribution indicating the presence of outliers
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="pz_qdI9LxqCL" outputId="8345b15c-790e-420f-e597-9176ee461f56"
# ages histogram
age = df['Respondent_Age']
plt.hist(age, bins=10, histtype='bar', rwidth=0.9)
plt.title('Ages')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 281} id="x78uocUT3RAR" outputId="a02905cb-7f9c-401d-fcf6-110dc16bb5b3"
# household histogram
age = df['household_size']
plt.hist(age, bins=10, histtype='bar', rwidth=0.9)
plt.title('Sizes')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="cdGiUZ4ICCIZ" outputId="76f40abe-25a4-46df-a151-50c73889c3df"
# age box plot
df.boxplot(["Respondent_Age"])
# + colab={"base_uri": "https://localhost:8080/", "height": 283} id="i8Lf7mE_DtMw" outputId="2d88c53c-5488-4613-9871-b3fa131535eb"
# household size box plot
df.boxplot(["household_size"])
# + colab={"base_uri": "https://localhost:8080/"} id="jTSKqPixEVPS" outputId="74edf83f-7479-42aa-8127-17e6746a1a27"
# frequency table for education level
df.Education_Level.value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="1Ps3PsRrE9p3" outputId="1f1a4699-47d9-4811-ddca-98d0bb98107c"
# frequency table for household size
df.household_size.value_counts()
# + colab={"base_uri": "https://localhost:8080/"} id="oKatX1K0FtSf" outputId="82d0834d-6eaf-4225-d0b8-78f3a5c23cb6"
# frequency table for ages
df.Respondent_Age.value_counts()
# + [markdown] id="jwetckJjNtXU"
# **b) BIVARIATE ANALYSIS**
# + colab={"base_uri": "https://localhost:8080/", "height": 365} id="CfRgrSFXQUTJ" outputId="2ecc3588-980b-4503-c320-2b6f150ffcd1"
# preview dataset
df.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 498} id="llGcJ36rNxGe" outputId="fddce6dd-acd9-4657-e0f3-dd8af79c320b"
# regression plot of respondent age against household size
plt.figure(figsize=(14,8))
_ = sns.regplot(data=df, x='household_size', y='Respondent_Age')
# + colab={"base_uri": "https://localhost:8080/"} id="HjZwaj3RUdmt" outputId="32b16e93-0925-4302-adc6-3e223cf3036f"
# pearson correlation coefficient
coeff = df["household_size"].corr(df["Respondent_Age"], method="pearson")
print(coeff)
# this correlation of -0.12 signifies a weak negative correlation between age of respondents and household size
# hence an increase in age has little effect on the movement of household size
# in the opposite direction
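# The coefficient above can also be computed from first principles, which
# makes the definition explicit (covariance scaled by the product of both
# standard deviations); a sketch with made-up data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient computed from its definition."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # perfectly linear -> 1.0
```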
# + [markdown] id="kLDgEhvEXy0q"
# **c) MULTIVARIATE ANALYSIS**
# + id="tVey1asTeKz5" colab={"base_uri": "https://localhost:8080/"} outputId="e28981c0-a9ba-4832-cbc2-72939b35b013"
# check the factorability or sampling adequacy using Bartlett’s Test
# !pip install factor_analyzer==0.2.3
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity
chi_square_value,p_value=calculate_bartlett_sphericity(df)
chi_square_value, p_value
# In Bartlett's test, the p-value indicates the test was statistically significant,
# indicating that the observed correlation matrix is not an identity matrix.
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="IvDKxs0GqG31" outputId="c973ffe0-b21e-4e9b-f3af-38a5be28989a"
# choosing the number of factors using the Kaiser criterion
from factor_analyzer.factor_analyzer import FactorAnalyzer
# Creating factor analysis object and perform factor analysis
fa = FactorAnalyzer()
fa.analyze(df, 4, rotation=None)
# Checking the Eigenvalues
ev, v = fa.get_eigenvalues()
ev
# we will choose only 3 factors since only 3 have an eigenvalue greater than 1
# + colab={"base_uri": "https://localhost:8080/", "height": 175} id="bOzpVTbmsaOC" outputId="c6920ebf-57b0-42b3-b0e5-d59ba73a8da5"
# Performing Factor Analysis
# Creating factor analysis object and perform factor analysis
fa = FactorAnalyzer()
fa.analyze(df, 3, rotation="varimax")
fa.loadings
# factor 1 has high factor loadings for respondent age and household size
# factor 2 has no high loadings
# factor 3 has no high loadings for any variable
# we'll take only 1 factor
# + colab={"base_uri": "https://localhost:8080/", "height": 229} id="tLHidXSytLbW" outputId="c0c17253-8c3a-4408-984b-c5a5f07ae4a0"
# Performing factor analysis for 1 factor
#
# Create factor analysis object and perform factor analysis using 1 factor
fa = FactorAnalyzer()
fa.analyze(df, 1, rotation="varimax")
fa.loadings
# + colab={"base_uri": "https://localhost:8080/", "height": 143} id="Lqz22rAbutd8" outputId="a417c34c-1085-489a-8831-3db84ca1ae29"
#Getting variance of the factors
#
fa.get_factor_variance()
# + colab={"base_uri": "https://localhost:8080/", "height": 726} id="VyOGjCnvvSJw" outputId="1c5a7705-b202-47e9-f94a-91439a789369"
# Plotting the bivariate summaries and recording our observations
sns.pairplot(df)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 371} id="Gw8q06X7xTI_" outputId="c3d2c585-a112-4be2-b1ba-2456c81a08f4"
sns.heatmap(df.corr(),annot=True)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 497} id="Vv27kYTDxf3n" outputId="031aa4c7-fb37-405f-8e53-144412f3f4d2"
# Implementing the Solution
#
plt.figure(figsize=(14,8)) # set the size of the graph
_ = sns.regplot(data=df, x='Has a Bank account', y='Respondent_Age')
# + [markdown] id="8sDSHIoDzYi8"
# **Follow up questions**
# + [markdown] id="l6ofcKFXzo7N"
# **a). Did we have the right data?**
# No, since most of our columns were categorical in nature and showed little to no correlation with whether one holds a bank account.
# + [markdown] id="_dLdQZYn0B5v"
# **b). Do we need other data to answer our question?**
# Yes, supplementary data is needed to answer our question.
# + [markdown] id="xaiOcSdD0PJ9"
# **c). Did we have the right question?**
# Yes, we had the right question to address the research problem.
| JOHN_NJAGI_IP_WEEK_2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:py37_tf113]
# language: python
# name: conda-env-py37_tf113-py
# ---
# # Make tensorflow pb graph
# +
import tensorflow as tf
from tensorflow.python.framework import graph_io
from tensorflow.keras.applications.inception_v3 import InceptionV3
def freeze_graph(graph, session, output):
with graph.as_default():
graphdef_inf = tf.graph_util.remove_training_nodes(graph.as_graph_def())
graphdef_frozen = tf.graph_util.convert_variables_to_constants(session, graphdef_inf, output)
graph_io.write_graph(graphdef_frozen, ".", "xception_hpv.pb", as_text=False)
# +
tf.keras.backend.set_learning_phase(0) # this line most important
# base_model = tf.keras.applications.InceptionV3(input_shape=(299, 299, 3),
# include_top=True,
# weights='imagenet')
# base_model.compile(loss='sparse_categorical_crossentropy',
# optimizer=tf.keras.optimizers.Adam())
keras_model_path = "/Users/justina/Documents/EPFL/thesis/projects/hnsc/trained_model.h5"
base_model = tf.keras.models.load_model(keras_model_path, compile=False)
base_model.compile(loss='sparse_categorical_crossentropy',
optimizer=tf.keras.optimizers.Adam())
session = tf.keras.backend.get_session()
INPUT_NODE = base_model.inputs[0].op.name
OUTPUT_NODE = base_model.outputs[0].op.name
freeze_graph(session.graph, session, [out.op.name for out in base_model.outputs])
# +
tf.keras.backend.set_learning_phase(0) # this line most important
# base_model = tf.keras.applications.InceptionV3(input_shape=(299, 299, 3),
# include_top=True,
# weights='imagenet')
# base_model.compile(loss='sparse_categorical_crossentropy',
# optimizer=tf.keras.optimizers.Adam())
# keras_model_path = "/Users/justina/Documents/EPFL/thesis/projects/hnsc/trained_model.h5"
# base_model = tf.keras.models.load_model(keras_model_path, compile=False)
# base_model.compile(loss='sparse_categorical_crossentropy',
# optimizer=tf.keras.optimizers.Adam())
# session = tf.keras.backend.get_session()
INPUT_NODE = base_model.inputs[0].op.name
OUTPUT_NODE = base_model.outputs[0].op.name
freeze_graph(session.graph, session, [out.op.name for out in base_model.outputs])
# +
sess.close()
from tensorflow.python.platform import gfile
import tcav.utils as utils
import tensorflow as tf
sess = utils.create_session()
with sess.graph.as_default():
input_graph_def = tf.GraphDef()
with tf.gfile.FastGFile("../imagenet_small_test/graphs/tensorflow_inception_graph.pb", 'rb') as f:
input_graph_def.ParseFromString(f.read())
tf.import_graph_def(input_graph_def)
LOGDIR='./logs/tests/googlenet/'
train_writer = tf.summary.FileWriter(LOGDIR)
train_writer.add_graph(sess.graph)
# -
| Untitled.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="5rS6LXXTDcH6" colab_type="code" outputId="b09926ac-27ab-442f-8478-c7a42b3b21e1" executionInfo={"status": "ok", "timestamp": 1588745750936, "user_tz": -330, "elapsed": 1397, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
from google.colab import drive
drive.mount('/content/drive')
# + id="oVjxLahnDeVU" colab_type="code" outputId="2169ba92-0826-4381-f300-af662a421076" executionInfo={"status": "ok", "timestamp": 1588745750938, "user_tz": -330, "elapsed": 1242, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}} colab={"base_uri": "https://localhost:8080/", "height": 34}
# cd /content/drive/My Drive/Colab Notebooks/eminem_lyrics_generator
# + id="R7RNVjXQ-0ZK" colab_type="code" colab={}
import pandas as pd
import numpy as np
import re
import os
# + id="Odmu2urJ-0Zj" colab_type="code" colab={}
# reading lyrics from files
lines = []
for filename in os.listdir("data/"):
file = open("data/" + filename)
lines.append(file.read())
# + id="HWgevD-2-0Z5" colab_type="code" colab={}
lines = pd.DataFrame(lines, columns=['lines'])
# + id="tNTXifyV-0aI" colab_type="code" colab={}
def clean_text(sentence):
    """Lower-case a lyric and expand common contractions.

    Rules are applied in order; each one is applied for both the ASCII (')
    and the typographic (’) apostrophe, preserving the original behavior.
    """
    rules = [
        (r"i'm", "i am"), (r"he's", "he is"), (r"she's", "she is"),
        (r"it's", "it is"), (r"that's", "that is"), (r"what's", "what is"),
        (r"where's", "where is"), (r"there's", "there is"),
        (r"who's", "who is"), (r"how's", "how is"),
        (r"'ll", " will"), (r"'ve", " have"), (r"'re", " are"), (r"'d", " would"),
        (r"won't", "will not"), (r"can't", "cannot"),
        (r"n't", " not"), (r"n'", "ng"),
        (r"'bout", "about"), (r"'til", "until"), (r"c'mon", "come on"),
    ]
    sentence = sentence.lower()
    for pattern, replacement in rules:
        sentence = re.sub(pattern, replacement, sentence)
        sentence = re.sub(pattern.replace("'", "’"), replacement, sentence)
    sentence = re.sub("\n", "", sentence)
    # strip the remaining punctuation
    sentence = re.sub("[-*/()\"’'#/@;:<>{}`+=~|.!?,]", "", sentence)
    return sentence
# + id="d50elWmV-0aV" colab_type="code" colab={}
lines.lines = lines.lines.apply(lambda line: clean_text(line))
# + id="1LO3UQhl-0al" colab_type="code" outputId="08ac8329-c448-4868-dbcb-b744a14825f7" executionInfo={"status": "ok", "timestamp": 1588745752194, "user_tz": -330, "elapsed": 1567, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}} colab={"base_uri": "https://localhost:8080/", "height": 206}
lines.head()
# + id="sYDJq60DiwpR" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="b6338fac-ae28-4407-a5c8-b5221eac5f33" executionInfo={"status": "ok", "timestamp": 1588745752195, "user_tz": -330, "elapsed": 1361, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}}
lines.shape
# + id="VKEfXByiFq_5" colab_type="code" colab={}
lines.lines = lines.lines.apply(lambda line: line.split())
# + id="EX8mMxFv-0a5" colab_type="code" colab={}
x_train = [line[:-1] for line in lines.lines]
y_train = [line[1:] for line in lines.lines]
# + id="t6cueFD--0bb" colab_type="code" colab={}
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
# + id="tT51cTsN-0bm" colab_type="code" colab={}
tokenizer = Tokenizer()
# + id="wsTwQX2a-0by" colab_type="code" colab={}
tokenizer.fit_on_texts(lines.lines)
# + id="xyZARYVe-0b-" colab_type="code" colab={}
x_train = tokenizer.texts_to_sequences(x_train)
y_train = tokenizer.texts_to_sequences(y_train)
# + id="JivHjswYQqsD" colab_type="code" colab={}
word2idx = tokenizer.word_index
idx2word = {value: key for key, value in word2idx.items()}
# + colab_type="code" id="gc1P4IYWlDzV" colab={}
word2idx["<pad>"] = 0
idx2word[0] = "<pad>"
# + id="hYmlGZ8E-0cQ" colab_type="code" outputId="f040cbac-6fd0-4188-9212-c47ba9b23b1e" executionInfo={"status": "ok", "timestamp": 1588745755088, "user_tz": -330, "elapsed": 2580, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}} colab={"base_uri": "https://localhost:8080/", "height": 173}
lengths = []
for sequence in x_train:
lengths.append(len(sequence))
lengths = pd.Series(lengths)
lengths.describe()
# + id="v3IdStVW-0ch" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="5d8d34fe-64ed-4eed-c2fd-27aa3177803e" executionInfo={"status": "ok", "timestamp": 1588745755088, "user_tz": -330, "elapsed": 1903, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}}
maxlen = 1024
vocab_size = len(tokenizer.word_index) + 1
embedding_dim = 128
vocab_size
# + id="AY_uinnF-0co" colab_type="code" colab={}
x_train = pad_sequences(x_train, maxlen=maxlen, padding='post', truncating='post')
y_train = pad_sequences(y_train, maxlen=maxlen, padding='post', truncating='post')
# + id="w8GkHpsH-0c4" colab_type="code" colab={}
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import GRU, Dense, Input, Embedding, Dropout
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.regularizers import l2
from tensorflow.keras.losses import SparseCategoricalCrossentropy
# + id="zaEAkQ1T-0dE" colab_type="code" colab={}
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=embedding_dim, mask_zero=True))
model.add(GRU(units=1024, return_sequences=True))
model.add(Dense(vocab_size))
# + id="a-afmZUd-0dL" colab_type="code" colab={}
model.compile(optimizer=Adam(), loss=SparseCategoricalCrossentropy(from_logits=True))
# + id="uVKkvmpb-0dQ" colab_type="code" outputId="47f4ee48-7ba6-4203-f687-c0c59b22cc53" colab={"base_uri": "https://localhost:8080/", "height": 1000} executionInfo={"status": "ok", "timestamp": 1588757945355, "user_tz": -330, "elapsed": 1185302, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}}
history = model.fit(x_train, y_train, epochs=50, verbose=1) #400
# + id="eF9Z5xlkQ3Nm" colab_type="code" colab={}
model.save("model.h5")
#model = load_model("model.h5")
# + id="SrchgcJv-0dZ" colab_type="code" colab={}
def generate(word):
    """Greedily generate 100 words, feeding each predicted token back in as the next input."""
    word = clean_text(word)
    inputs = np.zeros((1, 1))
    inputs[0, 0] = word2idx[word]
    count = 1
    while count <= 100:
        pred = model.predict(inputs)
        word = np.argmax(pred)  # greedy decoding: index of the most likely next token
        if word >= vocab_size:  # safety clamp for out-of-range indices
            word = vocab_size - 1
        inputs[0, 0] = word
        print(idx2word[word], end=" ")
        count += 1
# + id="tQoTjmtY-0dh" colab_type="code" outputId="ff8a5f5a-3029-4ad2-8812-420021f50f5d" colab={"base_uri": "https://localhost:8080/", "height": 54} executionInfo={"status": "ok", "timestamp": 1588758090979, "user_tz": -330, "elapsed": 4330, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}}
generate("slim")
# + id="nWoOF4cwkp3l" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 282} outputId="1e5a3d73-2778-4561-d22a-7240749a327f" executionInfo={"status": "ok", "timestamp": 1588758156804, "user_tz": -330, "elapsed": 1051, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GjFFnpJjw-7WaiTzz7xrkIJjBBwMs5i3OwVVYALIg=s64", "userId": "07173842849534370372"}}
import matplotlib.pyplot as plt
plt.plot(range(50), history.history['loss'])
# + id="4-PKQnIuXZF1" colab_type="code" colab={}
# file: generator.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Vizzuality/data_sci_tutorials/blob/master/work/lmipy_add_mapbiomass_version_3_1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="JJ509AFc6Pdy" colab_type="text"
# # Add MapBiomass version 3.1 to the GFW ResourceWatch API
#
# The data comes as a GEE Image asset in which each band is a year; this is converted to an ee.ImageCollection asset in which each image is a year.
#
# Using LMIPy, a suitable dataset is cloned, its fields are updated, and all years are added to it as individual layers.
#
# Remember to define your API_TOKEN = "your-token" in a scratch cell (Insert > Scratch code cell).
# + id="g9wj2ZDMMHLx" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 920} outputId="45058ed4-594c-4917-a194-9d4a57d1dad2"
# Pip install the LMIPy Module
# !pip install --upgrade LMIPy
# + id="lB15A3JIECbc" colab_type="code" colab={}
# Set some variables
# environment to add to
env = "production"
# dataset and layer to clone
ds_to_clone = "fee5fc38-7a62-49b8-8874-dfa31cbb1ef6"
ly_to_clone = "f13f86cb-08b5-4e6c-bb8d-b4782052f9e5"
# dataset parameters
dataset_params = {
'name': "Brazil Land Cover 1985-2017",
'tableName': "projects/wri-datalab/mapbiomas_collection31_integration_v1",
'description':'This data set shows annual land use and land cover for Brazil from 1985 to 2017.'
}
# vocab parameters
vocab_params = {
'application': 'gfw',
'name': 'categoryTab',
'tags': ['landCover', 'landCover']
}
# metadata parameters
meta_params = {
'application': 'gfw',
'language': 'en',
'info': {
'isSelectorLayer': True,
'citation': '1985-2017, MapBiomas',
'color': '#a0c746',
'description': 'This data set shows annual land use and land cover for Brazil from 1985 to 2017.',
'name': 'Brazil land cover'},
}
# layer parameters
def ly_params(year, selConf_position, appConf_default = False):
"""Create parameters for multiyear layers"""
year = str(year)
return {
'name':'Brazil land cover ' + year,
'iso': ["BRA"],
'env': "staging",
'legendConfig':{
'items':[
{
'color':'#006400',
'name':'Forest formations'
},
{
'color':'#8D9023',
'name':'Savannah formations'
},
{
'color':'#8AA81D',
'name':'Mangroves'
},
{
'color':'#E8A3E5',
'name':'Planted forest'
},
{
'color':'#2789D4',
'name':'Non-forest wetlands'
},
{
'color':'#CCDB98',
'name':'Grassland'
},
{
'color':'#8AB84A',
'name':'Other non-forest vegetation'
},
{
'color':'#FFB87E',
'name':'Pasture'
},
{
'color':'#D2A965',
'name':'Agriculture'
},
{
'color':'#E8B071',
'name':'Pasture or agriculture'
},
{
'color':'#DD7E6B',
'name':'Beaches and dunes'
},
{
'color':'#E9462B',
'name':'Urban infrastructure'
},
{
'color':'#F6F0EA',
'name':'Uncategorized'
},
{
'color':'#A3DCFE',
'name':'Water bodies'
},
{
'color':'#8A2BE2',
'name':'Mining'
}
],
'type':'basic'
},
'layerConfig':{
'type':'tileLayer',
'provider':'gee',
'assetId':'projects/wri-datalab/mapbiomas_collection31_integration_v1/classification_' + year,
"isImageCollection": False,
'body':{
"maxNativeZoom": 13,
"maxzoom": 19,
"minNativeZoom": 4,
"minzoom": 2,
"sldValue": '<RasterSymbolizer>' +' <ColorMap type="intervals" extended="false" >' +'<ColorMapEntry color="#006400" quantity="3" label="Forest formations"/>' +'<ColorMapEntry color="#8D9023" quantity="4" label="Savannah formations"/>' +'<ColorMapEntry color="#8AA81D" quantity="5" label="Mangroves"/>' +'<ColorMapEntry color="#E8A3E5" quantity="9" label="Planted forest"/>' +'<ColorMapEntry color="#2789D4" quantity="11" label="Non-forest wetlands"/>' +'<ColorMapEntry color="#CCDB98" quantity="12" label="Grassland"/>' +'<ColorMapEntry color="#8AB84A" quantity="13" label="Other non-forest vegetation"/>' +'<ColorMapEntry color="#FFB87E" quantity="15" label="Pasture"/>' +'<ColorMapEntry color="#D2A965" quantity="18" label="Agriculture"/>' +'<ColorMapEntry color="#E8B071" quantity="21" label="Pasture or agriculture"/>' +'<ColorMapEntry color="#DD7E6B" quantity="23" label="Beaches and dunes"/>' +'<ColorMapEntry color="#E9462B" quantity="24" label="Urban infrastructure"/>' +'<ColorMapEntry color="#F6F0EA" quantity="27" label="Uncategorized"/>' +'<ColorMapEntry color="#A3DCFE" quantity="26" label="Water bodies"/>' +'<ColorMapEntry color="#8A2BE2" quantity="30" label="Mining"/>' +'</ColorMap>' +'</RasterSymbolizer>',
"styleType": "sld",
}},
'applicationConfig' :{
'global': False,
'default': appConf_default,
'active': True,
'metadata': 'bra_mapbiomas_1985_2017',
'selectorConfig': {
'label': year,
'position': selConf_position,
'value': "classification_"+year
}}
}
# + id="yuOQv-UvRmqy" colab_type="code" outputId="adcea557-c964-4f8a-ed33-8dc74501bf9b" colab={"base_uri": "https://localhost:8080/", "height": 299}
import LMIPy as lmi
c = lmi.Collection(search="brazil", app=["gfw"], env="production", object_type="dataset")
c
# + id="dBJkOJucQKIz" colab_type="code" outputId="d9135f96-a0eb-406f-c1a3-e9ab7dd0a81d" colab={"base_uri": "https://localhost:8080/", "height": 17}
import LMIPy as lmi
c = lmi.Collection(search="brazil", app=["gfw"], env="staging", object_type="layer")
c
# + id="1ERDP8NoUmnz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1267} outputId="be9c5336-cbaf-4755-e21c-6aaf64b6440b"
# If needed remove incorrect versions
#for i in range(0, len(c)):
# ods = c[2]
# ods.delete(token=API_TOKEN, force=True)
# + id="j6FH2ZgoXwCC" colab_type="code" outputId="8d17a8eb-43f5-49a7-80ef-72ed5337b277" colab={"base_uri": "https://localhost:8080/", "height": 69}
# Clone the biodiversity dataset
import LMIPy as lmi
ds = lmi.Dataset(ds_to_clone).clone(token=API_TOKEN, env=env, dataset_params=dataset_params, clone_children=False)
ds_id = ds.id
ds_id
# + id="1eq7MGVXZ1iP" colab_type="code" outputId="2dd4c572-629a-4677-f7a4-7371e0fface8" colab={"base_uri": "https://localhost:8080/", "height": 124}
# Add vocab
import LMIPy as lmi
lmi.Dataset(ds_id).add_vocabulary(vocab_params = vocab_params, token = API_TOKEN)
# + id="pbi2KusEbCkM" colab_type="code" outputId="268cd172-cb90-4b80-af2a-b98d0cae673d" colab={"base_uri": "https://localhost:8080/", "height": 124}
# Add metadata
import LMIPy as lmi
lmi.Dataset(ds_id).add_metadata(meta_params = meta_params, token = API_TOKEN)
# + id="QKbtLlaV6pdc" colab_type="code" outputId="f2ebf1d0-d428-4848-8f06-7087e0f9f9ef" colab={"base_uri": "https://localhost:8080/", "height": 1163}
# Make list of years
year_list = range(2017, 1984, -1)
pos_list = range(0, len(year_list), 1)
# Add default layer
layer_params = ly_params(year = year_list[0], selConf_position = pos_list[0], appConf_default = True)
ly = lmi.Layer(ly_to_clone)
ly = ly.clone(layer_params = layer_params, env = env, token = API_TOKEN, target_dataset_id = ds_id)
# Add other layers
for i in pos_list[1:]:
layer_params = ly_params(year = year_list[i], selConf_position = i, appConf_default = False)
ly.clone(layer_params = layer_params, env = env, token = API_TOKEN, target_dataset_id = ds_id)
# + id="qTuuftwLqajf" colab_type="code" outputId="7650a527-b1c7-441f-e3ad-2bdab4967776" colab={"base_uri": "https://localhost:8080/", "height": 299}
import LMIPy as lmi
c = lmi.Collection(search="brazil", app=["gfw"], env="production", object_type="dataset")
c
# + id="_JDjukF-brKh" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 107} outputId="7b63470c-a1d5-419b-98ca-36d90840bc48"
# Disable the old dataset
import LMIPy as lmi
ds = c[1]
update_params = {
'published': False
}
ds.update(update_params=update_params, token=API_TOKEN)
# file: work/lmipy_add_mapbiomass_version_3_1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dsu-mlpp-env
# language: python
# name: dsu-mlpp-env
# ---
# # osu! Beatmap Validation
# **Contributors:** <NAME>, <NAME>, <NAME>
# ## Imports
# **Most important packages:** PyMongo, Pandas, NumPy.
import sys
sys.path.append('../..')
from config import client
import pymongo
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from itertools import chain
import requests
# ## Connection with Compass
# **Dataset:** osu_random_db, osu_top_db
db = client['osu_top_db']
scores = db.osu_scores_high
count = db.osu_user_beatmap_playcount
players = db.osu_user_stats
scores.create_index("user_id")
random = client['osu_random_db']
maps = random.osu_beatmaps
# ## Data Retrieval
# **Collection:** osu_scores_high
# get list of user_ids
l = list(players.find({}, {"user_id": True}))
user_id = [d['_id'] for d in l]
fifth = user_id[0:-1:4]
# created index for the collection to make the loop quicker
top_scores = []
for i in fifth:
beatmaps = list(scores.find({"user_id": i}, {"user_id": True, "pp": True, "beatmap_id": True}))
top_scores.append(beatmaps)
df = pd.DataFrame(list(chain.from_iterable(top_scores)))
df.head(20)
# ## Data frame with how many times a beatmap appears in a player's top 15 scores (frequency)
df_top = df.sort_values(['user_id','pp'], ascending = [True, False]).groupby("user_id").head(15)
df_top = df_top.reset_index(drop = True)
df_top = df_top.drop(["_id"], axis = 1)
maps_dict = df_top['beatmap_id'].value_counts().to_dict()
temp = pd.DataFrame.from_dict(maps_dict, orient = "index").reset_index()
temp = temp.rename(columns = {"index": "beatmap_id"})
df_top = df_top.merge(temp, how = "left")
df_top = df_top.rename(columns = {0: "frequency_top_fifteen"})
df_top.head(10)
plt.xlabel("frequency")
plt.ylabel("pp")
plt.scatter(df_top["frequency_top_fifteen"], df_top["pp"])
maps_dict_total = df['beatmap_id'].value_counts().to_dict()
temp_total = pd.DataFrame.from_dict(maps_dict_total, orient = "index").reset_index()
temp_total = temp_total.rename(columns = {"index": "beatmap_id"})
df_top = df_top.merge(temp_total, how = "left")
df_top.head(10)
#merge how many times the beatmap occurs in the dataframe
plt.hist(df_top["frequency_top_fifteen"])
# ### Create a ratio of how many times a beatmap appeared in a player's top 15 / how many times it occurred in the data set in total
df_top = df_top.rename(columns = {0: "frequency_total"})
df_top["ratio"] = np.divide(df_top["frequency_top_fifteen"], df_top["frequency_total"])
df_top.head(20)
# create a ratio that represents frequency_top_fifteen/frequency_total
df_top[df_top["ratio"] == max(df_top["ratio"])]
df_top[df_top["frequency_top_fifteen"] == max(df_top["frequency_top_fifteen"])]
plt.hist(df_top["ratio"])
some_maps = list(df["beatmap_id"])
date = list(maps.find({}, {"_id" : True, "last_update": True}))
df_date = pd.DataFrame(date)
df_date.head(10)
# ### Create a coefficient for each player's top 15 scores; add a totalWeight column that sums each beatmap's coefficients
lister = [1, 0.95, 0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.5, 0.45, 0.40, 0.35, 0.30]
lister = lister * 2500  # repeat the 15 decay weights once per player
df_top["weight"] = pd.Series(lister)
weighted_df = df_top.groupby(["beatmap_id"])["weight"].agg('sum')
weighted_df = pd.DataFrame(weighted_df)
weighted_df = weighted_df.rename(columns={'weight': 'totalWeight'})
weighted_df = weighted_df.reset_index()
weighted_df
# ### Create new column (weightedRatio) that represents the total weight / frequency total
df_top = df_top.merge(weighted_df, how = "left")
df_top["weightedRatio"] = np.divide(df_top['totalWeight'], df_top['frequency_total'])
df_top = df_top.sort_values(by=['weightedRatio'], ascending=False)
df_top
len(df_top['beatmap_id'].unique()) # of unique beatmaps
plt.hist(df_top["weightedRatio"])
# there are few beatmaps with high ratios; these beatmaps may be overweighted, as they occur in players' top 15 frequently
# ### Correlation Heatmap
# ### Webscraping for top overweighted maps off of osu-pps.com
osu = pd.read_csv("run_results-12.csv", sep = ";")
osu["selection1_url"] = osu["selection1_url"].str.slice(20)
osu = osu.rename(columns = {"selection1_url": "beatmap_id"})
osu = osu[4200:]
osu = osu.reset_index()
osu
a = [str(i) for i in df_top['beatmap_id']]
m = set(a)
n = set(osu["beatmap_id"])
inter = list(m.intersection(n))
b = [int(i) for i in inter]
# ### Find overlapping overweighted beatmaps in our data and osu-pps data
new_df = df_top[df_top["beatmap_id"].isin(b)]
rankings = new_df[["beatmap_id","weightedRatio"]]
rankings = rankings.drop_duplicates()
rankings = rankings.reset_index(drop = True)
rankings
rankings["beatmap_id"] = rankings["beatmap_id"].astype('int')
osu["beatmap_id"] = osu["beatmap_id"].astype('int')
rankings = rankings.merge(osu, how = "left")
rankings = rankings.drop(["index"], axis = 1)
rankings
# ## Correlation between our overweighted maps and osu-pps.com overweighted maps
rankings = rankings.rename(columns = {"selection1_selection2": "website_score"})
rankings['weightedRatio'].corr(rankings["website_score"])
plt.scatter(rankings["weightedRatio"], rankings["website_score"]/10000)
rankings["our-ranking"] = np.arange(1,371,1)
rankings
rankings = rankings.sort_values(by = "website_score", ascending = False)
rankings["osu-ranking"] = np.arange(1,371,1)
rankings
rankings = rankings.sort_values(by = "our-ranking", ascending = True)
rankings = rankings.rename(columns = {"selection1_name": "beatmap", "weightedRatio": "weighted_ratio",
"our-ranking": "our_ranking", "osu-ranking":"osu_ranking"})
rankings
rankings = rankings.rename(columns = {"osu_ranking": "website_ranking"})
rankings
# file: spring21/validation/beatmap_validation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import xgboost as xgb
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.model_selection import validation_curve
from sklearn.model_selection import learning_curve
from sklearn import metrics
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.grid_search import GridSearchCV  # legacy API (pre-0.20); provides the grid_scores_ attribute used below
# -
import matplotlib
import matplotlib.pyplot as plt
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
"""
Generate a simple plot of the test and training learning curve.
Parameters
----------
estimator : object type that implements the "fit" and "predict" methods
An object of that type which is cloned for each validation.
title : string
Title for the chart.
X : array-like, shape (n_samples, n_features)
Training vector, where n_samples is the number of samples and
n_features is the number of features.
y : array-like, shape (n_samples) or (n_samples, n_features), optional
Target relative to X for classification or regression;
None for unsupervised learning.
ylim : tuple, shape (ymin, ymax), optional
Defines minimum and maximum yvalues plotted.
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
- None, to use the default 3-fold cross-validation,
- integer, to specify the number of folds.
- An object to be used as a cross-validation generator.
- An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is not a classifier
or if ``y`` is neither binary nor multiclass, :class:`KFold` is used.
Refer :ref:`User Guide <cross_validation>` for the various
cross-validators that can be used here.
n_jobs : integer, optional
Number of jobs to run in parallel (default 1).
"""
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.grid()
plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.1,
color="r")
plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.1, color="g")
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="best")
return plt
def modelfit(alg, feature_names, X_train, y_train, X_test, y_test, height=16, useTrainCV=True, cv_folds=5, early_stopping_rounds=50):
if useTrainCV:
xgb_param = alg.get_xgb_params()
xgtrain = xgb.DMatrix(X_train, label=y_train)
cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,
metrics='rmse', early_stopping_rounds=early_stopping_rounds)
alg.set_params(n_estimators=cvresult.shape[0])
#Fit the algorithm on the data
alg.fit(X_train, y_train)
#Predict training set:
y_predictions = alg.predict(X_test)
#dtrain_predprob = alg.predict_proba(y_test)
#Print model report:
print("\nModel Report")
    print("RMSE : %.4g" % np.sqrt(metrics.mean_squared_error(y_test, y_predictions)))
#print("AUC Score (Train): %f" % metrics.roc_auc_score(dtrain['Disbursed'], dtrain_predprob))
feat_imp = pd.Series(alg.booster().get_fscore()).sort_values(ascending=False)
feat_imp = feat_imp.rename(lambda x: feature_names[int(x[1:len(x)])])
feat_imp.plot(kind='barh', title='Feature Importances',figsize=(7, height))
plt.ylabel('Feature Importance Score')
return feat_imp
def ModelParamSearch(xgb, params, X_train, y_train):
search = GridSearchCV(estimator=xgb, param_grid=params, n_jobs=4, iid=False, cv=5)
search.fit(X_train, y_train)
print('\ngrid_scores')
for score in search.grid_scores_:
print(score)
print('\nbest_params')
print(search.best_params_)
print('\nbest_score')
print(search.best_score_)
return search
# +
# X_train_df = pd.read_csv("../data/offline/X_train3.csv", index_col=0)
# y_train_df = pd.read_csv("../data/offline/y_train3.csv", index_col=0)
# X_test_df = pd.read_csv("../data/offline/X_test3.csv", index_col=0)
# combine_df = pd.concat([X_train_df, X_test_df])
# -
X_train_df = pd.read_csv("../data/offline/X_train.csv", index_col=0)
y_train_df = pd.read_csv("../data/offline/y_train.csv", index_col=0)
X_test_df = pd.read_csv("../data/offline/X_test.csv", index_col=0)
combine_df = pd.concat([X_train_df, X_test_df])
X_train, X_test, y_train, y_test = train_test_split(X_train_df.values, y_train_df['SalePrice'].values, test_size=0.5, random_state=1729)
feature_names = X_train_df.columns
# +
xgb_model = xgb.XGBRegressor(max_depth=4, learning_rate=0.1, n_estimators=100,
silent=False, objective='reg:linear', subsample=0.8,
colsample_bytree=0.8, gamma=0, min_child_weight = 1,
scale_pos_weight=1, seed=27)
feat_data = modelfit(xgb_model, feature_names, X_train, y_train, X_test, y_test, 10)
plt.show()
# -
feat_data.index
list(set(X_train_df.columns) - set(feat_data.index))
# +
xgb_model = xgb.XGBRegressor(max_depth=2, learning_rate=0.1, n_estimators=100,\
silent=False, objective='reg:linear', subsample=0.8,\
colsample_bytree=0.75, gamma=0, min_child_weight = 6,\
scale_pos_weight=1, seed=27)
params1 = {
'max_depth':np.array(range(3,10,2)),
'min_child_weight':np.array(range(1,6,2))
}
search = ModelParamSearch(xgb_model, params1, X_train, y_train)
# +
params = {
'max_depth':[2, 3, 4],
'min_child_weight':[2, 3, 4]
}
search2 = ModelParamSearch(xgb_model, params, X_train, y_train)
# 0.8959456941803028
# 0.8938496041827715
# -
# +
xgb_model = xgb.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,\
silent=False, objective='reg:linear', subsample=0.55,\
colsample_bytree=0.7, gamma=0, min_child_weight = 3,\
scale_pos_weight=1, seed=27)
param4 = {
'subsample':[i/10.0 for i in range(6,10)],
'colsample_bytree':[i/10.0 for i in range(6,10)]
}
search4 = ModelParamSearch(xgb_model, param4, X_train, y_train)
# best_params
# {'colsample_bytree': 0.7, 'subsample': 0.7}
# best_score
# 0.9001576485029655
# +
xgb_model = xgb.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,\
silent=False, objective='reg:linear', subsample=0.8,\
colsample_bytree=0.6, gamma=0, min_child_weight = 3,\
scale_pos_weight=1, seed=27)
param5 = {
'colsample_bytree':[i/100.0 for i in range(65,80,5)],
'subsample':[i/100.0 for i in range(55,70,5)]
}
search5 = ModelParamSearch(xgb_model, param5, X_train, y_train)
# best_params
# {'colsample_bytree': 0.65, 'subsample': 0.65}
# best_score
# 0.9007344923004951
# +
xgb_model = xgb.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,\
silent=False, objective='reg:linear', subsample=0.65,\
colsample_bytree=0.65, gamma=0, min_child_weight = 3,\
scale_pos_weight=1, seed=27)
param6 = {
'reg_alpha':[1e-5, 1e-2, 0.1, 1, 100]
}
search6 = ModelParamSearch(xgb_model, param6, X_train, y_train)
# +
param7 = {
#'reg_alpha':[1e-07, 1e-05, 1e-03]
#'reg_alpha':[0.005, 0.01, 0.02]
'reg_alpha':[0.05, 0.1, 0.2]
}
search7 = ModelParamSearch(xgb_model, param7, X_train, y_train)
# best_params
# {'reg_alpha': 1.1}
# best_score
# 0.9009523929423489
# -
# +
xgb_model = xgb.XGBRegressor(max_depth=3, learning_rate=0.1, n_estimators=100,\
silent=False, objective='reg:linear', subsample=0.65,\
colsample_bytree=0.65, gamma=0, min_child_weight = 3,\
scale_pos_weight=1, seed=27, reg_alpha=0.1)
modelfit(xgb_model, feature_names, X_train, y_train, X_test, y_test)
plt.show()
# -
# XGBRegressor defaults for reference: max_depth=3, learning_rate=0.1, n_estimators=100, silent=True, objective='reg:linear', booster='gbtree', nthread=-1, gamma=0, min_child_weight=1, max_delta_step=0, subsample=1, colsample_bytree=1, colsample_bylevel=1, reg_alpha=0, reg_lambda=1, scale_pos_weight=1, base_score=0.5, seed=0, missing=None
xgb_model = xgb.XGBRegressor(max_depth=3, learning_rate=0.05, n_estimators=200,\
silent=False, objective='reg:linear', subsample=0.65,\
colsample_bytree=0.65, gamma=0, min_child_weight = 3,\
scale_pos_weight=1, seed=27, reg_alpha=0.1)
plot_learning_curve(xgb_model, 'gbdt', X_train, y_train, cv=5)
plt.show()
xgb_model.fit(X_train, y_train)
xgb_model.score(X_test, y_test)
xgb_model.fit(X_train_df.values, y_train_df['SalePrice'].values)
# score history from previous runs:
# 0.88069580114898871
# 0.87256074767322089
# 0.86215552690396724
# 0.86209053019833237
# 0.84963578857989486
# 0.85519185040271417
# 0.87586527403706282
# 0.88027425879399812
# 0.87161084260404043
y_predict = xgb_model.predict(X_test_df.values)
y_predict_df = pd.DataFrame(y_predict, index=X_test_df.index)
y_predict_df = y_predict_df.rename(columns={0:'SalePrice'})
y_predict_df['SalePrice'] = np.expm1(y_predict_df['SalePrice'])
y_predict_df.to_csv('../data/online/predict.csv', header = True, index=True)
# file: offline/RandomForestModel.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Filter for specific tidal frequencies
# + code_folding=[0]
# import modules
import xarray as xr
import datetime as dt
import math
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as pldates
import matplotlib.colors as colors
import scipy.signal as sig
import numpy as np
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
# run the magic twice (workaround so the interactive notebook backend takes effect)
for i in range(2):
    # %matplotlib notebook
# + code_folding=[]
# import data
year = 2013
ds = xr.open_dataset(f'../../../Data/tide/tofino_tide_{year}.nc')
print(ds)
# -
# ## Spectral method
# + code_folding=[0]
# remove mean
tidem = ds.tide - np.nanmean(ds.tide)
# + code_folding=[0]
# spectrogram of tide
fs = 2.7777e-4
nps = 256
overlap = 0.9*nps
win = 'hann'
tide_f, tide_t, tide_Sxx = sig.spectrogram(tidem, fs=fs, window=win, \
nperseg = nps, noverlap = overlap, return_onesided=True)
# convert spectro_t to datetime for x-axis on plots for PSD
spectro_t4 = tide_t*fs
spectro_time_len = len(spectro_t4)
spectro_time_axis = np.zeros([spectro_time_len],dtype='datetime64[s]')
for k in range(spectro_time_len):
j = int(spectro_t4[k])
spectro_time_axis[k] = ds.time[j].values
time_axis = spectro_time_axis
# + code_folding=[0]
# plot spectrogram
start_date = ds.time[0].values
end_date = ds.time[-1].values
fig, ax0 = plt.subplots(1, 1, figsize=(12,4.3))
fig.text(0.5, 0.94, f'Tide spectrograms - Tofino - {year}', ha='center', fontsize=14)
fig.text(0.05, 0.5, 'Frequency [Hz]', va='center', rotation='vertical', fontsize=14)
fig.text(0.935, 0.5, 'S$_{xx}$ [(m)$^2$/Hz]', va='center', rotation='vertical', fontsize=14)
fig.text(0.5, 0.01, f'Time [months]', ha='center',fontsize=14)
vmin = 1e-3
vmax = 5e6
im0 = ax0.pcolormesh(time_axis, tide_f, tide_Sxx, rasterized=True, \
norm=colors.LogNorm(vmin=vmin, vmax=vmax), cmap='plasma',shading='auto')
cbar0 = fig.colorbar(im0, ax=ax0, fraction=0.05, pad=0.01, aspect=15, extend='both')
cbar0.ax.tick_params(labelsize=14)
ax0.patch.set_facecolor('grey')
ax0.set_yscale('log')
ax0.set_ylim(tide_f[1],tide_f[-1])
date_form = pldates.DateFormatter("%m")
ax0.xaxis.set_major_formatter(date_form)
#ax0.set_xlim(start_date,end_date)
ax0.tick_params(labelsize=14)
plt.savefig(fname=f'./tide_spectro_{year}.pdf', format='pdf')  # save before show(), which clears the figure
plt.show()
# + code_folding=[15]
# filter tides
band = 'Semidiurnal' # Diurnal or Semidiurnal
if band == 'Diurnal':
low_f = 10
high_f = 13
freqs = tide_f[low_f:high_f]
elif band == 'Semidiurnal':
low_f = 20
high_f = 23
freqs = tide_f[low_f:high_f]
t = len(time_axis)
tidepower = np.zeros(t)
for i in range(t):
bandrange = tide_Sxx[low_f:high_f,i]
tidepower[i] = np.trapz(y=bandrange,x=freqs)
# + code_folding=[0]
# plot tides
fig,ax = plt.subplots(1,1,figsize=(12,5))
ax.plot(time_axis,tidepower)
date_form = pldates.DateFormatter("%m")
ax.xaxis.set_major_formatter(date_form)
ax.set_xlabel('Time [months]')
ax.set_ylabel('Sea level above chart datum [m]')
ax.set_title(f'Tofino {band} tides - {year}')
plt.savefig(fname=f'./tide_{band}_{year}.pdf', format='pdf')  # save before show(), which clears the figure
plt.show()
# + code_folding=[0]
# save to .nc file
ds_out = xr.Dataset(
data_vars=dict(
tide=(['time'], tidepower), # tide height data [m]
),
coords=dict(
time=time_axis,
),
attrs=dict(
description=f'Tide data from Tofino CHS, filtered for {band} response.',
units=['metres amplitude, numpy.datetime64'],
),
)
ds_out.to_netcdf(f'../../../Data/tide/tide_{band}_{year}.nc')
# -
# file: archive/tide/.ipynb_checkpoints/tide_filter-checkpoint.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from keras.preprocessing import image
import numpy as np
import matplotlib.pyplot as plt
img_path = 'C:/Users/Lenovo/Desktop/AIudemy/tiger.jpg'
img = image.load_img(img_path, target_size=(480,640))
plt.imshow(img)
plt.show()
# +
import keras
from keras.datasets import mnist
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# -
y = x_train[10]
plt.gray()
plt.imshow(y)
plt.show()
# file: Section 10/Matplotlib display pictures.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda root]
# language: python
# name: conda-root-py
# ---
# testing reversed() on a range
for x in reversed(range(0, 5)):
    print(x)
# +
def reverse_number(n): #turn the number into a string, return the reverse of that string and then convert it back to an integer
return(int(str(n)[::-1]))
def isPalindrome(n): # take a number, reverse it and check if it's the same backwards and forwards
if reverse_number(n)==n:
return(True)
else:
return(False)
def candidates():
candidates = {}
for x in reversed(range(100,1000)):
for y in reversed(range(100,x)): #as y <= x
if isPalindrome(x*y)==True:
candidates[x*y]=[x,y] # add an entry to dictionary
else:
pass
return(candidates)
def findPal(n): #in the list of candidates, work backwards until you reach a number smaller than n
pal_list = sorted(list(candidates().keys()))
for x in reversed(pal_list):
if x<n:
return(x)
break
# -
findPal(101110)
# Final Code Submitted
# +
# #!/bin/python3
import sys
def reverse_number(n): #turn the number into a string, return the reverse of that string and then convert it back to an integer
return(int(str(n)[::-1]))
def isPalindrome(n): # take a number, reverse it and check if it's the same backwards and forwards
if reverse_number(n)==n:
return(True)
else:
return(False)
def candidates():
candidates = {}
for x in reversed(range(100,1000)):
for y in reversed(range(100,x)): #as y <= x
if isPalindrome(x*y)==True:
candidates[x*y]=[x,y] # add an entry to dictionary
else:
pass
return(candidates)
candidates = candidates() # run the candidate generator
def findPal(n,candidates): #in the list of candidates, work backwards until you reach a number smaller than n
pal_list = sorted(list(candidates.keys()))
for x in reversed(pal_list):
if x<n:
return(x)
break
t = int(input().strip())
for a0 in range(t):
n = int(input().strip())
print(findPal(n,candidates))
# file: Project Euler/Euler04 - Largest Palindrome.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Topic modeling with PyMC3
#
# This article is an introduction to [topic models](https://en.wikipedia.org/wiki/Topic_model) and their naive implementation with [PyMC3](https://docs.pymc.io/). It is mainly targeted at readers with little or no prior PyMC3 experience. The examples focus on simplicity rather than efficiency and I'll provide links to more efficient implementations at the end of the article. I'll also show how mathematical descriptions of topic models relate to PyMC3 code. For this article it is helpful to have a basic understanding of probability theory.
#
# PyMC3 is an open source probabilistic programming library. It allows the specification of Bayesian statistical models with an intuitive syntax on an abstraction level similar to that of their mathematical descriptions and [plate notations](https://en.wikipedia.org/wiki/Plate_notation). PyMC3 does automatic [Bayesian inference](https://en.wikipedia.org/wiki/Bayesian_inference) for unknown variables in probabilistic models via [Markov Chain Monte Carlo](https://en.wikipedia.org/wiki/Markov_chain_Monte_Carlo) (MCMC) sampling or via [variational inference](https://en.wikipedia.org/wiki/Variational_Bayesian_methods) (ADVI, SVGD, ...).
#
# In this article MCMC is used to obtain posterior estimates for unknown model variables. I won't introduce PyMC3 from scratch here and therefore recommend reading the initial sections of the PyMC3 [getting started guide](https://docs.pymc.io/notebooks/getting_started.html) first (up to and including the linear regression example). But even without reading it you should be able to follow this article and get an intuition for how PyMC3 can be used to implement topic models. The following implementation examples are based on PyMC3 version 3.5.
#
# ## Topic models
#
# Topic models are statistical models for discovering abstract topics in a collection of documents. For example, a document containing words like "dog", "cat" or "rat" likely has a different underlying topic than a document containing words like "CPU", "GPU" or "RAM". When learning a topic model, only the words in documents are observed; the topics to be discovered are not. The random variables that model topics are therefore called hidden or latent variables, and the corresponding model is called a latent variable model. When learning a latent variable model, the latent variables are inferred together with the other unknown model parameters.
#
# The topic models presented in this article are [generative models](https://en.wikipedia.org/wiki/Generative_model). These models learn a joint probability distribution $p(x, z) = p(x \lvert z)p(z)$ over observed words $x$ in a training dataset and hidden topics $z$. By sampling from this distribution new documents can be generated from their underlying topic(s), hence the term *generative model*. This is in contrast to a *discriminative model* that only learns a probability distribution $p(z \lvert x)$ over topics given a word $x$ or a set of words, often used in a supervised learning context where topics (classes) are observed.
#
# Throughout the following examples an over-simplified set of `documents` is used. The documents contain words that we can categorize into the topics *programming languages*, *machine learning* and *databases*. During inference, though, only abstract topics `0`, `1`, `2`, ... are assigned to documents and words; semantic interpretation is up to us. For all presented models, the number of topics $K$ is pre-defined based on our intuition and is not inferred. For processing with PyMC3, documents are categorically encoded based on the entries in a `vocabulary`.
# +
from sklearn.preprocessing import LabelEncoder
documents = [['Python', 'Scala', 'Python', 'Python', 'Java'],
['Scala', 'Python', 'Python', 'Java', 'Scala'],
['Python', 'Python', 'Scala', 'Python'],
['Java', 'Java', 'Java', 'Scala', 'Scala'],
['Scala', 'Scala', 'Scala', 'Python', 'Java', 'Scala', 'deep learning'],
['Python', 'Scala', 'Python', 'Python', 'Python', 'machine learning'],
['Java', 'Python', 'Python', 'Java', 'Scala'],
['deep learning', 'statistics', 'machine learning', 'Python'],
['machine learning', 'machine learning', 'deep learning', 'deep learning', 'machine learning'],
['statistics', 'Python', 'statistics', 'statistics', 'deep learning', 'Postgres'],
['deep learning', 'machine learning', 'machine learning', 'deep learning', 'deep learning', 'Postgres'],
['MySQL', 'Cassandra', 'Postgres', 'Postgres', 'Postgres', 'machine learning'],
['Cassandra', 'Cassandra', 'Postgres', 'Scala', 'MySQL', 'MySQL']]
# Number of topics
K = 3
# Number of documents
D = len(documents)
# (Ab)use label encoder for categorical encoding of words
encoder = LabelEncoder()
encoder.fit([word for document in documents for word in document])
# Vocabulary derived from documents
vocabulary = encoder.classes_
# Vocabulary size
V = len(vocabulary)
# Encoded documents
X = [encoder.transform(d) for d in documents]
# -
# ### Naive Bayes model
#
# We start with a simple topic model where each document is assigned a single topic based on the words contained in that document i.e. documents are classified into different topics. We further assume that the occurrence of one word in a document doesn't give any additional information about the occurrence of other words in that document. The distribution of words in a document only depends on the topic of that document. This means that the words in a document are conditionally independent i.e. independent given the document's topic. Models that make this naive assumption are called *Naive Bayes* models.
#
# The mathematical description of the model is
#
# $$
# \begin{align*}
# \boldsymbol{\theta} &\sim \mathrm{Dir(\boldsymbol\alpha)} \\
# \boldsymbol{\phi}_{k=1,\ldots,K} &\sim \mathrm{Dir(\boldsymbol\beta)} \\
# z_{d=1,\ldots,D} &\sim \mathrm{Cat}(\boldsymbol{\theta}) \\
# w_{d=1,\ldots,D,n=1,\ldots,N_d} &\sim \mathrm{Cat}(\boldsymbol{\phi}_{z_{d}})
# \end{align*}
# $$
#
# where
#
# - $\boldsymbol\theta$ is a random variable that models the global topic distribution. It is a $K$-dimensional random vector whose elements are the global topic probabilities. They must be non-negative and sum up to 1. A prior distribution that enforces these constraints is the [Dirichlet distribution](https://en.wikipedia.org/wiki/Dirichlet_distribution). It is parameterized with hyper-parameter $\boldsymbol\alpha$. We start with a uniform prior and therefore set $\boldsymbol\alpha = \boldsymbol{1}$.
#
# - $\boldsymbol\phi_k$ is a random variable that models the word distribution of topic $k$. It is a $V$-dimensional vector whose elements are the probabilities of vocabulary words where $V$ is the size of the vocabulary. The prior is also a Dirichlet distribution but with hyper-parameter $\boldsymbol\beta$. Our assumption is that only a small set of words from the vocabulary have higher probability per topic. We therefore define a sparse Dirichlet prior by setting the elements of $\boldsymbol\beta$ to values $<1$.
#
# - $z_d$ is the topic of document $d$. It follows a [categorical distribution](https://en.wikipedia.org/wiki/Categorical_distribution) that is parameterized by $\boldsymbol\theta$.
#
# - $w_{dn}$ is the $n$-th word of document $d$. It also follows a categorical distribution that is parameterized by $\boldsymbol\phi_{z_{d}}$ i.e. the word distribution of topic $z_d$.
#
# From a generative model perspective, a document's words can be generated by first sampling a topic from the global topic distribution and then sampling the words from that topic's word distribution. For drawing the words of a document we could also use a [multinomial distribution](https://en.wikipedia.org/wiki/Multinomial_distribution) but for easier comparison with the model in the next section we use a categorical distribution. The model can be summarized with the following plate notation:
#
# 
#
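Before turning to the model code, this generative story can be sketched with plain NumPy sampling (a toy sketch, not part of the model itself; the dimensions and the seed are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

K, V, n_words = 3, 10, 5           # topics, vocabulary size, words per document
alpha = np.ones(K)                 # uniform Dirichlet prior over topics
beta = np.ones(V) * 0.3            # sparse Dirichlet prior over words

theta = rng.dirichlet(alpha)       # global topic distribution
phi = rng.dirichlet(beta, size=K)  # one word distribution per topic

z = rng.choice(K, p=theta)                     # sample the document's topic
words = rng.choice(V, size=n_words, p=phi[z])  # sample its words from that topic
print(z, words)
```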
# Model specification with PyMC3 closely follows its mathematical description:
# +
import numpy as np
import pymc3 as pm
# Hyper-parameter for uniform Dirichlet prior
alpha = np.ones(K)
# Hyper-parameter for sparse Dirichlet prior
beta = np.ones(V) * 0.3
with pm.Model() as model:
# Global topic distribution
theta = pm.Dirichlet('theta', a=alpha)
# Word distributions for K topics
phi = pm.Dirichlet('phi', a=beta, shape=(K, V))
# Topic of documents
z = pm.Categorical('z', p=theta, shape=(D,))
for i in range(D):
# Words of document
w = pm.Categorical(f'w_{i}', p=phi[z[i]], observed=X[i])
# -
# - Variables $\boldsymbol\phi_k$ from the mathematical description are implemented as PyMC3 random variable `phi` with shape `(K, V)`. The variable name is the first argument to the distribution constructor.
#
# - Variables $z_d$ are implemented as PyMC3 random variable `z` with shape `(D,)` where $D$ is the number of documents (13 in our example).
#
# - Variables $w_{dn}$ are implemented as PyMC3 random variables `w_0`, `w_1`, ..., `w_12`, one for each document. These variables are observed and linked to training data via the `observed` parameter. These variables are not changed during model fitting. Their shape is derived from the number of words in a document.
#
# The next step is to infer the global topic distribution, the word distribution for each topic and the topic for each document. The corresponding estimates for `theta`, `phi` and `z` are computed from posterior samples obtained via MCMC.
with model:
trace = pm.sample(2000, chains=1)
# The `trace` object returned by `pm.sample` contains `2000` posterior samples for variables `theta`, `phi` and `z`. For visualizing `theta` posterior estimates we use the built-in `pm.plot_posterior` function. It plots the mean and the 95% highest posterior density (HPD) interval for each element of random vector `theta` i.e. `theta_0`, `theta_1` and `theta_2`.
pm.plot_posterior(trace, varnames=['theta']);
# The posterior means closely match the relative frequency or distribution of topics in our dataset of 13 documents: 7 documents are related to programming languages, 4 documents are related to machine learning and 2 documents are related to databases.
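The relative frequencies referred to above work out as follows:

```python
# topic counts in the 13 example documents
counts = {'programming languages': 7, 'machine learning': 4, 'databases': 2}
total = sum(counts.values())

# relative frequency of each topic
freqs = {k: round(v / total, 3) for k, v in counts.items()}
print(freqs)  # {'programming languages': 0.538, 'machine learning': 0.308, 'databases': 0.154}
```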
#
# For visualizing the prominent words of each topic we plot [kernel density estimates](https://en.wikipedia.org/wiki/Kernel_density_estimation) for each element of `phi`. We can see that for topic `0` the words with the highest probabilities are "Python", "Scala" and "Java", for topic `1` these are "deep learning", "machine learning" and "statistics" and for topic `2` "Postgres", "MySQL" and "Cassandra".
# +
import matplotlib.pyplot as plt
import seaborn as sns
def plot_phi_estimates(trace):
plt.figure(figsize=(20,5))
for i in range(K):
plt.subplot(1, 3, i+1)
for j in range(V):
sns.distplot(trace['phi'][:, i, j], kde=True, hist=False, label=vocabulary[j])
plt.ylim([0, 10])
plt.title(f'Phi density estimates for topic {i}')
plt.legend(vocabulary)
plot_phi_estimates(trace)
# -
# For obtaining the topic of each document we compute the posterior mode of `z` but omit computing uncertainty estimates here.
for i in range(D):
topics, counts = np.unique(trace['z'][:, i], return_counts=True)
print(f'Topic of document {i: >2} = {topics[np.argmax(counts)]}')
# ### Latent Dirichlet allocation
#
# Making the assumption that a document has only a single underlying topic is often too restrictive. A more realistic assumption is that a document is made up of a mixture of topics. Instead of having a global topic distribution we want to have a topic distribution per document. By additionally allowing the assignment of topics to individual words we end up with a topic model called [Latent Dirichlet allocation](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) (LDA). It is a popular topic model that can be summarized with the following plate notation:
#
# 
#
# The difference from the Naive Bayes model is that the $D$ and $N_d$ plates are extended by one node to the left. The LDA model can be mathematically described as
#
# $$
# \begin{align*}
# \boldsymbol{\theta}_{d=1,\ldots,D} &\sim \mathrm{Dir(\boldsymbol\alpha)} \\
# \boldsymbol{\phi}_{k=1,\ldots,K} &\sim \mathrm{Dir(\boldsymbol\beta)} \\
# z_{d=1,\ldots,D,n=1,\ldots,N_d} &\sim \mathrm{Cat}(\boldsymbol{\theta}_d) \\
# w_{d=1,\ldots,D,n=1,\ldots,N_d} &\sim \mathrm{Cat}(\boldsymbol{\phi}_{z_{dn}})
# \end{align*}
# $$
#
# where
#
# - $\boldsymbol\theta_d$ is the topic distribution of document $d$. We assume that only a small number of topics have higher probability per document and therefore use a sparse Dirichlet prior for $\boldsymbol\theta_d$.
#
# - $\boldsymbol\phi_k$ is the word distribution of topic $k$, as in the previous model.
#
# - $z_{dn}$ is the topic of word $w_{dn}$. It follows a categorical distribution that is parameterized by $\boldsymbol\theta_d$.
#
# - $w_{dn}$ is the $n$-th word of document $d$. It also follows a categorical distribution that is parameterized by $\boldsymbol\phi_{z_{dn}}$ i.e. the word distribution of topic $z_{dn}$.
#
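The structural change from the Naive Bayes model can again be sketched generatively with NumPy: a topic distribution is now drawn per document and a topic per word (a toy sketch; dimensions and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

K, V, n_words = 3, 10, 5           # topics, vocabulary size, words in the document
alpha = np.ones(K) * 0.3           # sparse prior: few topics per document
beta = np.ones(V) * 0.3            # sparse prior: few prominent words per topic

phi = rng.dirichlet(beta, size=K)  # word distribution per topic
theta_d = rng.dirichlet(alpha)     # topic distribution of one document
z_d = rng.choice(K, size=n_words, p=theta_d)              # one topic per word
words = np.array([rng.choice(V, p=phi[k]) for k in z_d])  # one word per topic draw
print(z_d, words)
```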
# Model specification with PyMC3 again closely follows the mathematical description:
# +
alpha = np.ones(K) * 0.3
beta = np.ones(V) * 0.3
with pm.Model() as model:
phi = pm.Dirichlet("phi", a=beta, shape=(K, V))
for i in range(D):
theta = pm.Dirichlet(f"theta_{i}", a=alpha)
z = pm.Categorical(f"z_{i}", p=theta, shape=X[i].shape)
w = pm.Categorical(f"w_{i}", p=phi[z], observed=X[i])
# -
# - Variables $\boldsymbol\theta_d$ from the mathematical description are now implemented inside the `for` loop as PyMC3 random variables `theta_0`, ..., `theta_12`, one for each document.
#
# - Variables $z_{dn}$ are implemented inside the `for` loop as PyMC3 variables `z_0`, ..., `z_12`. Their shape is derived from the number of words in the document.
#
# - The implementation of variables $\boldsymbol\phi_k$ and $w_{dn}$ is identical to the previous model.
#
# Again, we draw 2000 posterior samples via MCMC so that we can compute estimates.
with model:
trace = pm.sample(2000, chains=1, nuts_kwargs={'target_accept': 0.9})
# The density estimates for `phi` are similar to the previous example which is expected. Only the assignment of topic numbers may change between MCMC runs.
plot_phi_estimates(trace)
# We can additionally analyze the topic distributions per document. In the following output we see that in documents 0-6 topic 1 has the highest probability, in documents 7-10 it is topic 0 and in documents 11-12 it is topic 2. In addition to the $\boldsymbol\theta_d$ mean values the 95% HPD interval is computed for the topic with the highest probability.
# +
import pandas as pd
data = []
for i in range(D):
row = np.mean(trace[f'theta_{i}'], axis=0).tolist()
row.extend(pm.stats.hpd(trace[f'theta_{i}'])[np.argmax(row)])
data.append(row)
pd.options.display.float_format = '{:,.3f}'.format
df = pd.DataFrame(data, columns=['$\\boldsymbol\\theta_{d,0}$',
'$\\boldsymbol\\theta_{d,1}$',
'$\\boldsymbol\\theta_{d,2}$', 'HPD 2.5', 'HPD 97.5'])
df.index.name = 'Document'
df
# -
# The topic distribution for document with index 9 is less skewed compared to other documents which makes sense since it contains words from all three topics. To obtain the topic for each word in that document we compute the posterior mode of `z_9`, again skipping the computation of uncertainty estimates.
# +
doc_idx = 9
for i, w in enumerate(documents[doc_idx]):
topics, counts = np.unique(trace[f'z_{doc_idx}'][:, i], return_counts=True)
print(f'Topic of word "{w}" = {topics[np.argmax(counts)]}')
# -
# The results make perfect sense: topics have been correctly assigned to words. For more options to analyze MCMC traces you may want to take a look at the PyMC3 modules [stats](https://docs.pymc.io/api/stats.html) and [plots](https://docs.pymc.io/api/plots.html).
#
# It is quite easy to get started with topic modeling in PyMC3 but the implementation presented here is rather naive. It samples from the full posterior over all unknown variables which is quite inefficient. A more efficient implementation would sample from marginal posteriors as described in [this paper](https://www.semanticscholar.org/paper/Finding-scientific-topics.-Griffiths-Steyvers/e99f196cf21e0781ef1e119d14e6db45cd71bf3b), for example. Other approaches use variational inference instead of MCMC. A PyMC3 example based on ADVI is [here](https://docs.pymc.io/notebooks/lda-advi-aevb.html). An efficient implementation in [scikit-learn](https://scikit-learn.org/stable/index.html) is [LatentDirichletAllocation](https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.LatentDirichletAllocation.html) which is also based on a variational Bayes algorithm.
# ## Appendix
#
# ### Code for plate notations
#
# [DAFT](http://daft-pgm.org/) is used for generating plate notation figures.
import daft
# #### Naive Bayes model
pgm = daft.PGM([4, 3], origin=(-0.5, -1))
pgm.add_node(daft.Node("alpha", r"$\mathbf{\alpha}$", 0, 0))
pgm.add_node(daft.Node("beta", r"$\mathbf{\beta}$", 2, 1.5))
pgm.add_node(daft.Node("theta", r"$\mathbf{\theta}$", 1, 0))
pgm.add_node(daft.Node("phi", r"$\mathbf{\phi}_k$", 3, 1.5))
pgm.add_node(daft.Node("z", r"$z_{d}$", 2, 0))
pgm.add_node(daft.Node("w", r"$w_{dn}$", 3, 0, observed=True))
pgm.add_edge("alpha", "theta")
pgm.add_edge("beta", "phi")
pgm.add_edge("theta", "z")
pgm.add_edge("phi", "w")
pgm.add_edge("z", "w")
pgm.add_plate(daft.Plate([2.4, 1.0, 1.0, 1.0], label=r"$K$"))
pgm.add_plate(daft.Plate([1.4, -0.7, 2.0, 1.4], label=r"$D$"))
pgm.add_plate(daft.Plate([2.4, -0.55, 0.95, 1.1], label=r"$N_d$"))
ax = pgm.render()
ax.set_title('Naive Bayes model');
# #### Latent Dirichlet allocation
pgm = daft.PGM([4, 3], origin=(-0.5, -1))
pgm.add_node(daft.Node("alpha", r"$\mathbf{\alpha}$", 0, 0))
pgm.add_node(daft.Node("beta", r"$\mathbf{\beta}$", 2, 1.5))
pgm.add_node(daft.Node("theta", r"$\mathbf{\theta}_d$", 1, 0))
pgm.add_node(daft.Node("phi", r"$\mathbf{\phi}_k$", 3, 1.5))
pgm.add_node(daft.Node("z", r"$z_{dn}$", 2, 0))
pgm.add_node(daft.Node("w", r"$w_{dn}$", 3, 0, observed=True))
pgm.add_edge("alpha", "theta")
pgm.add_edge("beta", "phi")
pgm.add_edge("theta", "z")
pgm.add_edge("phi", "w")
pgm.add_edge("z", "w")
pgm.add_plate(daft.Plate([2.4, 1.0, 1.0, 1.0], label=r"$K$"))
pgm.add_plate(daft.Plate([0.4, -0.7, 3.0, 1.4], label=r"$D$"))
pgm.add_plate(daft.Plate([1.4, -0.55, 1.95, 1.1], label=r"$N_d$"))
ax = pgm.render()
ax.set_title('Latent Dirichlet allocation');
| topic_modeling_pymc3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Series
#
# They work like vectors, and they are the columns of dataframes.
#
# Each value has an associated index, similar to a list.
import pandas as pd
import numpy as np
serie = pd.Series([1,2,3,4,5])
print(serie)
# Arguments: (index = [], dtype = , name = "")
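For example (illustrative values and names), a Series with a custom index, dtype and name:

```python
import pandas as pd

grades = pd.Series([7.5, 8.0, 9.2],
                   index=['Ana', 'Bia', 'Caio'],
                   dtype=float,
                   name='grades')
print(grades['Bia'])   # access by label -> 8.0
print(grades.name)     # 'grades'
```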
# ### Extracting characteristics of Series:
#
# +
print(serie.axes)   # axis labels (the index)
print(serie.dtype)  # element dtype
print(serie.ndim)   # number of dimensions
print(serie.empty)  # whether the Series is empty
print(serie.size)   # number of elements
print(serie.values) # convert to a numpy array
# -
# Viewing Series:
serie.head(2)
serie.tail(2)
# Accessing Series elements: works similarly to lists
serie[0]
# ### Operations with Series:
#
# (basic operations are similar to lists)
serie.cumsum()
serie.cumprod()
serie.idxmin() # idxmin/idxmax return indices
serie.idxmax()
# ### Descriptive statistics
serie.mean()
serie.median()
serie.std()
serie.var()
serie.mad() # mean absolute deviation
serie.describe() # summary
| pandas/series.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Challenges
# +
# Which houses should the House Rocket CEO buy, and at what purchase price?
# Once a house is owned by the company, when is the best time to sell it, and at what sale price?
# Should House Rocket renovate houses to increase the sale price? What changes would be suggested? What is the price increase from each renovation option?
# -
# # 1 - Identifying the Motivation Behind the Questions
# ### Why these questions...
# #### 1. The company may be stagnant, with no growth or market presence, leading to a lack of customer trust.
# #### 2. The company may be losing money due to a lack of assertiveness in its purchases and sales.
# #### 3. The company may have a high volume of work but low profitability.
# # 2 - Collecting and Cleaning the Data for Analysis
import pandas as pd
dados = pd.read_csv('dados/kc_house_data.csv')
dados.head(10)
mapa = {
'date': 'data_venda',
'price': 'preco_venda',
'bedrooms': 'quartos',
'bathrooms': 'banheiros',
'sqft_living': 'tamanho_casa',
'sqft_lot': 'tamanho_lote',
'floors': 'andares',
'waterfront': 'beira_mar',
'grade': 'porte',
'yr_built': 'ano_contrucao',
'yr_renovated': 'ano_reforma',
'zipcode': 'cep',
'sqft_living15': 'tamanho_casa_vizinhos',
'sqft_lot15': 'tamanho_lote_vizinhos'
}
dados = dados.rename(columns = mapa)
dados
dataset = dados[['data_venda', 'preco_venda', 'quartos', 'banheiros', 'tamanho_casa',
                 'tamanho_lote', 'andares', 'beira_mar', 'porte', 'ano_contrucao',
                 'ano_reforma', 'cep', 'tamanho_casa_vizinhos', 'tamanho_lote_vizinhos']]
dataset
# +
dia_venda = []
mes_venda = []
ano_venda = []
for i in dataset['data_venda'].values:
data_br = i.replace('T000000', '')
data_br = data_br[6:8] + data_br[4:6] + data_br[0:4]
dia_venda.append(data_br[0:2])
mes_venda.append(data_br[2:4])
ano_venda.append(data_br[4:8])
dataset['dia_venda'] = dia_venda
dataset['mes_venda'] = mes_venda
dataset['ano_venda'] = ano_venda
dataset = dataset[['dia_venda', 'mes_venda', 'ano_venda', 'preco_venda', 'quartos', 'banheiros',
                   'tamanho_casa', 'tamanho_lote', 'andares', 'beira_mar', 'porte',
                   'ano_contrucao', 'ano_reforma', 'cep', 'tamanho_casa_vizinhos',
                   'tamanho_lote_vizinhos']]
# -
dataset
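A more idiomatic alternative to the manual string slicing above would use `pd.to_datetime` (a sketch; it assumes the same `YYYYMMDDT000000` string format as the dataset's `data_venda` column):

```python
import pandas as pd

# sample values in the dataset's raw date format
s = pd.Series(['20141013T000000', '20150225T000000'])
dates = pd.to_datetime(s, format='%Y%m%dT000000')

print(dates.dt.day.tolist())    # [13, 25]
print(dates.dt.month.tolist())  # [10, 2]
print(dates.dt.year.tolist())   # [2014, 2015]
```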
# +
# Convert square feet to square meters (1 sq ft = 0.092903 m²)
def to_m2(values):
    return [round(v * 0.092903, 2) for v in values]

tamanho_casa_m2 = to_m2(dataset['tamanho_casa'].values)
tamanho_lote_m2 = to_m2(dataset['tamanho_lote'].values)
tamanho_casa_vizinhos_m2 = to_m2(dataset['tamanho_casa_vizinhos'].values)
tamanho_lote_vizinhos_m2 = to_m2(dataset['tamanho_lote_vizinhos'].values)

dataset['tamanho_casa_m2'] = tamanho_casa_m2
dataset['tamanho_lote_m2'] = tamanho_lote_m2
dataset['tamanho_casa_vizinhos_m2'] = tamanho_casa_vizinhos_m2
dataset['tamanho_lote_vizinhos_m2'] = tamanho_lote_vizinhos_m2
dataset
# -
dataset = dataset[['dia_venda', 'mes_venda', 'ano_venda', 'preco_venda', 'quartos', 'banheiros',
                   'tamanho_casa_m2', 'tamanho_lote_m2', 'andares', 'beira_mar', 'porte',
                   'ano_contrucao', 'ano_reforma', 'cep', 'tamanho_casa_vizinhos_m2',
                   'tamanho_lote_vizinhos_m2']]
dataset.head()
# +
preco_casa_m2 = []
preco_lote_m2 = []
preco_casa_viz_m2 = []
preco_lot_viz_m2 = []
for i, v in enumerate(dataset['preco_venda']):
casa_m2 = round((v / tamanho_casa_m2[i]), 2)
preco_casa_m2.append(casa_m2)
lote_m2 = round((v / tamanho_lote_m2[i]), 2)
preco_lote_m2.append(lote_m2)
casa_viz_m2 = round((v / tamanho_casa_vizinhos_m2[i]), 2)
preco_casa_viz_m2.append(casa_viz_m2)
lote_viz_m2 = round((v / tamanho_lote_vizinhos_m2[i]), 2)
preco_lot_viz_m2.append(lote_viz_m2)
media_casa_m2 = round(sum(preco_casa_m2 + preco_casa_viz_m2) / len(preco_casa_m2 + preco_casa_viz_m2), 2)
media_lote_m2 = round(sum(preco_lote_m2 + preco_lot_viz_m2) / len(preco_lote_m2 + preco_lot_viz_m2), 2)
preco_m2 = round(sum((preco_casa_m2 + preco_lote_m2 + preco_casa_viz_m2 + preco_lot_viz_m2)) / len((preco_casa_m2 + preco_lote_m2 + preco_casa_viz_m2 + preco_lot_viz_m2)), 2)
print(f'Average price per m² of the houses: {media_casa_m2}, lots: {media_lote_m2}. Overall average: {preco_m2}')
# + jupyter={"outputs_hidden": true}
dataset['preco_casa_m2'] = preco_casa_m2
dataset['preco_lote_m2'] = preco_lote_m2
dataset['preco_casa_viz_m2'] = preco_casa_viz_m2
dataset['preco_lot_viz_m2'] = preco_lot_viz_m2
# -
dataset
# # 3 - Analyzing the Data and Formulating Hypotheses
| Analise_House Rocket(Data_Science)/Analise_House_Rocket.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Estimating the Fundamental Frequency via the Autocorrelation Method
# The estimate is computed only for the frame with maximum power, so the result is a single value, not a sequence.
import librosa
import matplotlib.pyplot as plt
import numpy as np
import scipy
from scipy.io import wavfile
# +
IN_WAVE_FILE = "in.wav"  # speech file to analyze
FRAME_LENGTH = 1024  # frame length (FFT size)
HOP_LENGTH = 80  # frame shift length
FFT_LENGTH = FRAME_LENGTH
MAX_Fo = 200  # maximum fundamental frequency considered in the analysis (Hz)
MIN_Fo = 60  # minimum fundamental frequency considered in the analysis (Hz)
# -
# Load the audio
fs, data = wavfile.read(IN_WAVE_FILE)
data = data.astype(np.float64)
# +
# Split the signal into frames
frames = librosa.util.frame(data, frame_length=FRAME_LENGTH, hop_length=HOP_LENGTH).T
# Position of the frame with maximum power
max_ind = np.argmax(np.sum(frames * frames, axis=1))
# Extract the frame with maximum power
pow_max_frame = frames[max_ind, :]
# -
# ## Fundamental frequency estimation based on the autocorrelation method
# +
# Apply a window function
window = scipy.signal.blackman(FFT_LENGTH)
windowed_frame = pow_max_frame * window
# Compute the autocorrelation function
autocorr = scipy.signal.correlate(windowed_frame, windowed_frame)
autocorr /= np.max(autocorr)  # normalize
# Keep the "right half" (non-negative lags)
autocorr = autocorr[int(len(autocorr) / 2) :]
# Indices of the local maxima of the autocorrelation function (peak positions)
relmax_index = scipy.signal.argrelmax(autocorr)[0]
# Among the autocorrelation values at the peak positions,
# find the peak with the maximum value
peak_index = np.argmax(autocorr[relmax_index])
# Convert the peak position (lag) to the fundamental period
period = relmax_index[peak_index] / fs
# Compute the fundamental frequency
fo = 1.0 / period
print(f"Fundamental Frequency = {fo:.2f} Hz")
# -
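To sanity-check the method without a WAV file, the same idea can be run on a synthetic 100 Hz sine (a toy sketch; the signal parameters are arbitrary, and NumPy is used instead of SciPy):

```python
import numpy as np

fs = 16000                           # sampling rate (Hz)
true_fo = 100.0                      # fundamental frequency of the test tone
t = np.arange(int(0.1 * fs)) / fs    # 100 ms of signal
x = np.sin(2 * np.pi * true_fo * t)

r = np.correlate(x, x, mode='full')
r = r[len(r) // 2:]                  # keep non-negative lags

# local maxima: strictly greater than both neighbors
peaks = np.where((r[1:-1] > r[:-2]) & (r[1:-1] > r[2:]))[0] + 1
lag = peaks[np.argmax(r[peaks])]     # strongest peak -> fundamental period in samples
fo = fs / lag
print(fo)
```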
# ## Displaying the results
# +
# Plot the speech waveform of the maximum-power frame
fig = plt.figure(figsize=(12, 6))
time = np.arange(len(windowed_frame)) / fs
axes = fig.add_subplot(2, 1, 1)
axes.plot(time, pow_max_frame, label="original")
axes.plot(time, windowed_frame, label="windowed")
axes.set_xlabel("Time (sec)")
axes.set_ylabel("Amplitude")
axes.set_title("Waveform")
axes.legend()
# Plot the autocorrelation function and its local maxima
axes = fig.add_subplot(2, 1, 2)
axes.plot(time, autocorr, label="autocorrelation")
axes.plot(
time[relmax_index],
autocorr[relmax_index],
marker="o",
linestyle="",
label="local maximum",
)
axes.plot([0], autocorr[0], marker="o", linestyle="", color="#ff7f00")
axes.plot(
time[relmax_index[peak_index]],
autocorr[relmax_index[peak_index]],
marker="o",
markersize="10",
linestyle="",
color="blue",
label="fundamental period",
)
axes.set_xlabel("Time (sec)")
axes.set_ylabel("Autocorrelation function")
axes.set_title(
"Fundamental frequency estimation " f"via autocorrelation method: fo = {fo:.2f} Hz"
)
plt.tight_layout()
plt.legend()
plt.show()
| SpeechAnalysis/feat_fo_autocorr.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="ur8xi4C7S06n"
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="eHLV0D7Y5jtU"
# # Vertex AI SDK for Python: Vertex AI Forecasting Model Training Example
#
# To use this Colaboratory notebook, you copy the notebook to your own Google Drive and open it with Colaboratory (or Colab). You can run each step, or cell, and see its results. To run a cell, use Shift+Enter. Colab automatically displays the return value of the last line in each cell. For more information about running notebooks in Colab, see the [Colab welcome page](https://colab.research.google.com/notebooks/welcome.ipynb).
#
# This notebook demonstrates how to create an AutoML model based on a time series dataset. It will require you provide a bucket where the dataset will be stored.
#
# Note: you may incur charges for training, prediction, storage or usage of other GCP products in connection with testing this SDK.
# + [markdown] id="lld3eeJUs5yM"
# # Install Vertex AI SDK, Authenticate, and upload a Dataset to your GCS bucket
#
# After the SDK installation the kernel will be automatically restarted. You may see this error message `Your session crashed for an unknown reason` which is normal.
# + id="cMZLb8Arr2AG"
# %%capture
# !pip3 uninstall -y google-cloud-aiplatform
# !pip3 install google-cloud-aiplatform
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
# + id="ApsLDJjdsGPN"
import sys
if "google.colab" in sys.modules:
from google.colab import auth
auth.authenticate_user()
# + [markdown] id="c0SNmTBeD2nV"
# ### Enter your project and GCS bucket
#
# Enter your Project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook.
# + [markdown] id="s19AzYSGLIb9"
# **If you don't know your project ID**, you may be able to get your project ID using gcloud.
# + id="nwlVqT6RKxG7"
import os
PROJECT_ID = ""
# Get your Google Cloud project ID from gcloud
if not os.getenv("IS_TESTING"):
# shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID: ", PROJECT_ID)
# + [markdown] id="H5E8VB3jLOFC"
# Otherwise, set your project ID here.
# + id="DrED76XTK9OB"
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
# + [markdown] id="zkJk7agzT6F9"
# If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append it onto the name of resources you create in this tutorial.
# + id="qcRkdZBaUAz4"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# + [markdown] id="TFfpJs3DQsfo"
# Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets.
#
# You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to [choose a region where Vertex AI services are available](https://cloud.google.com/vertex-ai/docs/general/locations#available_regions). You may not use a Multi-Regional Storage bucket for training with Vertex AI.
# + id="iqSQT6Z6bekX"
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
REGION = "[your-region]" # @param {type:"string"}
# + id="ukGsLjm-Ki14"
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
# + [markdown] id="-6AQjKlnx0mf"
# The datasets we are using are samples from the [Iowa Liquor Retail Sales](https://pantheon.corp.google.com/marketplace/product/iowa-department-of-commerce/iowa-liquor-sales) dataset. The training sample contains the sales from 2020 and the prediction sample (used in the batch prediction step) contains the January - April sales from 2021.
# + id="V_T10yTTqcS_"
TRAINING_DATASET_BQ_PATH = 'bq://bigquery-public-data:iowa_liquor_sales_forecasting.2020_sales_train'
# + [markdown] id="rk43VP_IqcTE"
# # Initialize Vertex AI SDK
#
# Initialize the *client* for Vertex AI.
# + id="VCiC9gBWqcTF"
from google.cloud import aiplatform
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
# + [markdown] id="35QVNhACqcTJ"
# # Create a Managed Time Series Dataset from BigQuery
#
# This section will create a dataset from a BigQuery table.
# + id="4OfCqaYRqcTJ"
ds = aiplatform.datasets.TimeSeriesDataset.create(
display_name='iowa_liquor_sales_train',
bq_source=[TRAINING_DATASET_BQ_PATH])
ds.resource_name
# + [markdown] id="6-bBqipfqcTS"
# # Launch a Training Job to Create a Model
#
# Once the training job is defined, we run it to create a model.
# + id="aA41rT_mb-rV"
time_column = "date"
time_series_identifier_column = "store_name"
target_column = "sale_dollars"
job = aiplatform.AutoMLForecastingTrainingJob(
display_name='train-iowa-liquor-sales-automl_1',
optimization_objective='minimize-rmse',
column_transformations=[
{"timestamp": {"column_name": time_column}},
{"numeric": {"column_name": target_column}},
{"categorical": {"column_name": "city"}},
{"categorical": {"column_name": "zip_code"}},
{"categorical": {"column_name": "county"}},
]
)
# This will take around an hour to run
model = job.run(
dataset=ds,
target_column=target_column,
time_column=time_column,
time_series_identifier_column=time_series_identifier_column,
available_at_forecast_columns=[time_column],
unavailable_at_forecast_columns=[target_column],
time_series_attribute_columns=["city", "zip_code", "county"],
forecast_horizon=30,
context_window=30,
data_granularity_unit="day",
data_granularity_count=1,
weight_column=None,
budget_milli_node_hours=1000,
model_display_name="iowa-liquor-sales-forecast-model",
predefined_split_column_name=None,
)
# + id="muSC-mvgHno7" cellView="form"
#@title # Fetch Model Evaluation Metrics
#@markdown Fetch the model evaluation metrics calculated during training on the test set.
import pandas as pd
list_evaluation_pager = model.api_client.list_model_evaluations(parent=model.resource_name)
for model_evaluation in list_evaluation_pager:
    metrics_dict = {m[0]: m[1] for m in model_evaluation.metrics.items()}
    df = pd.DataFrame(metrics_dict.items(), columns=["Metric", "Value"])
    print(df.to_string(index=False))
# + [markdown] id="nIw1ifPuqcTb"
# # Run Batch Prediction
# + id="nT-bZ1autijD"
#@markdown ## Create Output BigQuery Dataset
#@markdown First, create a new BigQuery dataset for the batch prediction output in the same region as the batch prediction input dataset.
import os
from google.cloud import bigquery
os.environ["GOOGLE_CLOUD_PROJECT"] = PROJECT_ID
batch_predict_bq_input_uri = "bq://bigquery-public-data.iowa_liquor_sales_forecasting.2021_sales_predict"
batch_predict_bq_output_dataset_name = "iowa_liquor_sales_predictions"
batch_predict_bq_output_dataset_path = "{}.{}".format(PROJECT_ID, batch_predict_bq_output_dataset_name)
batch_predict_bq_output_uri_prefix = "bq://{}.{}".format(PROJECT_ID, batch_predict_bq_output_dataset_name)
# Must be the same region as batch_predict_bq_input_uri
client = bigquery.Client()
dataset = bigquery.Dataset(batch_predict_bq_output_dataset_path)
dataset_region = "US" # @param {type : "string"}
dataset.location = dataset_region
dataset = client.create_dataset(dataset)
print("Created bigquery dataset {} in {}".format(batch_predict_bq_output_dataset_path, dataset_region))
# + [markdown] id="krKRn9W0xxI2"
# Run a batch prediction job to generate liquor sales forecasts for stores in Iowa from an input dataset containing historical sales.
# + id="8I8aRjRh6GGG"
model.batch_predict(
bigquery_source=batch_predict_bq_input_uri,
instances_format="bigquery",
bigquery_destination_prefix=batch_predict_bq_output_uri_prefix,
predictions_format="bigquery",
job_display_name="predict-iowa-liquor-sales-automl_1")
# + id="CTQl3fH6Ur2Z" cellView="form"
#@title # Visualize the Forecasts
#@markdown Follow the given link to visualize the generated forecasts in [Data Studio](https://support.google.com/datastudio/answer/6283323?hl=en).
import urllib
tables = client.list_tables(batch_predict_bq_output_dataset_path)
prediction_table_id = ""
for table in tables:
    if table.table_id.startswith(
            "predictions_") and table.table_id > prediction_table_id:
        prediction_table_id = table.table_id
batch_predict_bq_output_uri = "{}.{}".format(
batch_predict_bq_output_dataset_path, prediction_table_id)
def _sanitize_bq_uri(bq_uri):
    if bq_uri.startswith("bq://"):
        bq_uri = bq_uri[5:]
    return bq_uri.replace(":", ".")


def get_data_studio_link(batch_prediction_bq_input_uri,
                         batch_prediction_bq_output_uri, time_column,
                         time_series_identifier_column, target_column):
    batch_prediction_bq_input_uri = _sanitize_bq_uri(
        batch_prediction_bq_input_uri)
    batch_prediction_bq_output_uri = _sanitize_bq_uri(
        batch_prediction_bq_output_uri)
    base_url = "https://datastudio.google.com/c/u/0/reporting"
    query = "SELECT \n" \
            " CAST(input.{} as DATETIME) timestamp_col,\n" \
            " CAST(input.{} as STRING) time_series_identifier_col,\n" \
            " CAST(input.{} as NUMERIC) historical_values,\n" \
            " CAST(predicted_{}.value as NUMERIC) predicted_values,\n" \
            " * \n" \
            "FROM `{}` input\n" \
            "LEFT JOIN `{}` output\n" \
            "ON\n" \
            "CAST(input.{} as DATETIME) = CAST(output.{} as DATETIME)\n" \
            "AND CAST(input.{} as STRING) = CAST(output.{} as STRING)"
    query = query.format(time_column, time_series_identifier_column,
                         target_column, target_column,
                         batch_prediction_bq_input_uri,
                         batch_prediction_bq_output_uri, time_column,
                         time_column, time_series_identifier_column,
                         time_series_identifier_column)
    params = {
        "templateId": "067f70d2-8cd6-4a4c-a099-292acd1053e8",
        "ds0.connector": "BIG_QUERY",
        "ds0.projectId": PROJECT_ID,
        "ds0.billingProjectId": PROJECT_ID,
        "ds0.type": "CUSTOM_QUERY",
        "ds0.sql": query
    }
    params_str_parts = []
    for k, v in params.items():
        params_str_parts.append("\"{}\":\"{}\"".format(k, v))
    params_str = "".join(["{", ",".join(params_str_parts), "}"])
    return "{}?{}".format(base_url,
                          urllib.parse.urlencode({"params": params_str}))


print(
    get_data_studio_link(batch_predict_bq_input_uri,
                         batch_predict_bq_output_uri, time_column,
                         time_series_identifier_column, target_column))
# + [markdown] id="24NPJ7nCRchZ"
#
# # Cleaning up
#
# To clean up all Google Cloud resources used in this project, you can [delete the Google Cloud project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.
#
# Otherwise, you can delete the individual resources you created in this tutorial:
#
#
# + id="gq3ZSsAkRnXh"
# Delete model resource
model.delete(sync=True)
# Delete Cloud Storage objects that were created
# ! gsutil -m rm -r $BUCKET_NAME
# Source: notebooks/community/sdk/Vertex_AI_SDK_AutoML_Forecasting_Model_Training_Example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
import seaborn as sns
data = pd.read_csv("C:/Users/charl/Downloads/raw.csv")
data.head()
data.shape
data.describe()
data.columns
import os
os.listdir(r"C:\Users\charl\Downloads\2019_5yr_Summary_FileTemplates")
import glob
# +
path = r"C:\Users\charl\Downloads\2019_5yr_Summary_FileTemplates"
filenames = glob.glob(path + '/s*.xlsx')
dfs = []
for filename in filenames:
    dfs.append(pd.read_excel(filename))
acs_df = pd.concat(dfs, ignore_index=True)
acs_df.head()
# -
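# When concatenating many template files like this, it can help to record which file each row came from. A sketch with small in-memory frames standing in for the Excel templates (the column name `source_file` and the toy data are our choices):

```python
import pandas as pd

# Two small frames standing in for the per-template Excel files
df_a = pd.DataFrame({"GEOID": [1, 2], "NAME": ["x", "y"]})
df_b = pd.DataFrame({"GEOID": [3], "NAME": ["z"]})

frames = []
for name, df in [("seq1.xlsx", df_a), ("seq2.xlsx", df_b)]:
    df = df.copy()
    df["source_file"] = name  # remember which template each row came from
    frames.append(df)

combined = pd.concat(frames, ignore_index=True)
print(combined.shape)  # (3, 3)
```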
os.curdir
acs_data = pd.read_excel(r"C:\Users\charl\Downloads\2019_5yr_Summary_FileTemplates\seq1.xlsx")
acs_data.head()
acs_data.describe()
# Source: old_scripts/coiprojectEDA.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # EDA on Python
# ## Data Import
# For a basic data profile, please check out the [Profile Report](ProfileReport_mini.html)
import pandas as pd
import altair as alt
from altair_saver import save
alt.data_transformers.disable_max_rows();
alt.data_transformers.enable('data_server');
alt.renderers.enable('mimetype');
alt.renderers.enable('altair_saver', fmts=['vega-lite', 'svg']);
# + tags=[]
train_df = pd.read_csv('data/widsdatathon2022/train.csv')
train_df.head()
# -
# ## EDA
# ### Facility type
# - Commercial buildings have higher Energy Use Intensity (EUI) than residential buildings in general.
train_df.groupby(['facility_type', 'building_class'])['site_eui'].mean().sort_values(ascending=False).head(10)
sorted_mean = train_df.groupby('facility_type')['site_eui'].mean().sort_values(ascending=False).index
alt.Chart(train_df).mark_boxplot(extent="min-max").encode(
x=alt.X(
"facility_type",
sort=list(sorted_mean)
),
y="site_eui:Q",
color="building_class",
)
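# The sort order used above (facility types ranked by their mean site EUI) can be illustrated on a toy frame with made-up values:

```python
import pandas as pd

toy = pd.DataFrame({
    "facility_type": ["Office", "Office", "Warehouse", "Warehouse"],
    "site_eui":      [120.0,    100.0,    40.0,        60.0],
})
# Mean EUI per facility type, highest first - same pattern as sorted_mean above
ranked = toy.groupby("facility_type")["site_eui"].mean().sort_values(ascending=False)
print(list(ranked.index))  # ['Office', 'Warehouse']
```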
# ### Energy star rating
# - Energy star rating is highly correlated with EUI
alt.Chart(train_df).mark_boxplot(extent="min-max").encode(
x=alt.X(
"energy_star_rating",
),
y="site_eui:Q",
color="building_class",
).properties(width=1000)
# ### Elevation
# - Elevation does not correlate much with EUI.
# - How can we make use of this data?
## Bin the elevation values
alt.Chart(train_df).mark_boxplot(extent="min-max").encode(
x=alt.X(
"ELEVATION",
bin=alt.Bin(maxbins=50)
),
y="site_eui:Q",
color="building_class",
tooltip=["count()"],
facet=alt.Facet('building_class:N', columns=1),
).properties(height= 150, width=800)
# ### Built year
# - Residential buildings vary less than commercial buildings.
# - Residential buildings built after 2000 show a clear drop in EUI
# +
## Facet by building class
alt.Chart(train_df.query('year_built >= 1840')).mark_boxplot(extent="min-max").encode(
x=alt.X(
"year_built",
scale=alt.Scale(domain=[1840, 2020])
),
y="site_eui:Q",
color="building_class:N",
facet=alt.Facet('building_class:N', columns=1)
).properties(width=1000)
# -
# Source: site_energy_consumption_prediction/_build/html/_sources/3_EDA-Python.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
####################################################
#
# If you do not have WordCloud and want to use it, please run this cell to install it.
#
####################################################
# !pip install wordcloud
# +
####################################################
####################################################
# coding: utf-8
# Copyright 2020 IBM All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
####################################################
####################################################
#
# The data used by this notebook has been generated from various sources including content from the
# COVID-19 Open Research Dataset (CORD-19) (https://pages.semanticscholar.org/coronavirus-research)
#
####################################################
import sys
import os
from os import walk
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from wordcloud import WordCloud
import json
import urllib.request
use_local_data=False
# -
def word_cloud(words_for_cloud):
    ######################################
    # Given a list of space-delimited words, this function will build
    # and display a word cloud image
    ######################################
    wordcloud = WordCloud(width=800, height=800,
                          background_color='white',
                          # stopwords = stopwords,
                          min_font_size=10).generate(words_for_cloud)
    # plot the WordCloud image
    plt.figure(figsize=(8, 8), facecolor=None)
    plt.imshow(wordcloud)
    plt.axis("off")
    plt.tight_layout(pad=0)
    plt.show()
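# WordCloud tokenizes its input on whitespace, which is why the demo code later in this notebook joins covered-text strings with spaces and replaces internal spaces with underscores so multi-word terms stay intact. A stand-alone sketch of that transformation (`build_wordlist` is a hypothetical helper, not part of this notebook's pipeline):

```python
def build_wordlist(covered_texts):
    """Join covered-text strings into one space-delimited string,
    replacing internal spaces with underscores so multi-word terms
    survive WordCloud's whitespace tokenization."""
    return " ".join(t.replace(" ", "_") for t in covered_texts)

print(build_wordlist(["dry cough", "fever", "loss of taste"]))
# dry_cough fever loss_of_taste
```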
def initialize_demo():
    ######################################
    # This function will initialize global variables and load the master index. This index is
    # in CSV format and contains the main items of information that allow a user to drill
    # down into the actual raw ACD enrichment detail files in order to perform deeper analysis.
    ######################################
    print("Running initialize...")
    global use_local_data
    use_local_data = False
    global public_url_path
    public_url_path = "https://whcs-dev-covid19-data.s3.us-east.cloud-object-storage.appdomain.cloud/"
    csv_path = public_url_path + "data_index.csv"
    data_file_names_file = urllib.request.urlopen(public_url_path + "data_file_names.txt")
    global data_file_names
    data_file_names_lines = data_file_names_file.readlines()
    data_file_names = []
    for datafile in data_file_names_lines:
        data_file_names.append(str(datafile, "utf-8").strip())
    global master_index
    master_index = pd.read_csv(csv_path,
                               usecols=["docId", "name", "preferredName"],
                               dtype={"docId": "str"})  # ,nrows=15800000
    print("...initialize complete.")
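# The readlines-and-decode pattern above can be seen in isolation: urllib returns raw bytes terminated with newlines, which are decoded to UTF-8 strings and stripped. A sketch with simulated lines standing in for the network response:

```python
# Simulated raw lines as returned by urllib's readlines(): bytes with newlines
raw_lines = [b"doc1_body.json\n", b"doc2_body.json\n"]
names = [str(line, "utf-8").strip() for line in raw_lines]
print(names)  # ['doc1_body.json', 'doc2_body.json']
```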
def initialize_local_data():
    ######################################
    # This function will initialize global variables and load the master index. This index is
    # in CSV format and contains the main items of information that allow a user to drill
    # down into the actual raw ACD enrichment detail files in order to perform deeper analysis.
    ######################################
    print("Running initialize_local_data...")
    global use_local_data
    use_local_data = True
    global raw_files_path
    # !!!!!!!!!!!!!!!!!!!!!
    # TO DO:
    # !!!!!!!!!!!!!!!!!!!!!
    # Set path values for the location of the csv file and the raw data files
    # Example:
    # csv_path = "/Users/myname/folder1/folder2/xxxxxx.csv"
    # raw_files_path = "/Users/myname/raw_files_place"
    csv_path = "<.......your csv file path here.......>"
    raw_files_path = "<.......your raw files root directory here.......>"
    global master_index
    master_index = pd.read_csv(csv_path,
                               usecols=["docId", "name", "preferredName"],
                               dtype={"docId": "str"})  # ,nrows=15800000
    print("...initialize_local_data complete.")
def get_first_data_file(dirname):
    ######################################
    # This function will return the first raw data file it finds in the folder structure
    # of raw json files. A file is needed to support the methods that list the
    # names of the data elements that a user might want to use to perform analysis.
    ######################################
    if use_local_data:
        for (pth, dirs, fn) in walk(dirname):
            for n in fn:
                if n.endswith(".json"):
                    return os.path.join(pth, n)
        return "no_datafile_found"
    else:
        return public_url_path + data_file_names[0]
def list_data_types():
    ######################################
    # This function uses an arbitrary raw data file to obtain
    # and list out for the user a list of the data types that
    # are available for exploration.
    ######################################
    print("=============================")
    print(" ACD raw data - data types")
    print("=============================")
    if use_local_data:
        targetJsonFile = get_first_data_file(raw_files_path)
    else:
        targetJsonFile = get_first_data_file("")
    # read in json file as a dataframe
    jdata = pd.read_json(targetJsonFile)
    json_dataframe = pd.DataFrame(jdata)
    xresult = json_dataframe.get(key="result")
    xunstruc = xresult.get(key="unstructured")
    xzero = xunstruc[0]
    xdata = xzero["data"]
    for i in xdata:
        print(f'{i:30}', type(xdata[i]))
def list_data_type_fields(data_type):
    ######################################
    # Given a data type, this function uses an arbitrary raw data file to obtain
    # and list out for the user a list of the fields supporting that
    # data type. These fields can then be used to get at the lowest level
    # of ACD enrichment data.
    ######################################
    print("=============================")
    print(" ACD raw data - ", data_type, "field names")
    print("=============================")
    if use_local_data:
        targetJsonFile = get_first_data_file(raw_files_path)
    else:
        targetJsonFile = get_first_data_file("")
    # read in json file as a dataframe
    jdata = pd.read_json(targetJsonFile)
    json_dataframe = pd.DataFrame(jdata)
    xresult = json_dataframe.get(key="result")
    xunstruc = xresult.get(key="unstructured")
    xzero = xunstruc[0]
    xdata = xzero["data"]
    this_data_type = xdata[data_type]
    tdtzero = this_data_type[0]
    for i in tdtzero:
        print(f'{i:30}', type(tdtzero[i]))
def get_top_names(topnamedepth):
    ######################################
    # This function will list, in ranked order, the attribute names
    # and the associated preferred names of the concepts they relate to.
    # The ranking is done by instance counts of the relationships
    # across all documents processed by this enrichment run.
    ######################################
    print("\n\n=============================")
    print("Top attributeValue Names in Ranked Order of Occurrence")
    print("=============================")
    name_rank_index = master_index["name"].value_counts()
    nr_len = name_rank_index.size
    if nr_len < topnamedepth:
        topnamedepth = nr_len
    name_list = []
    for x in range(0, topnamedepth):
        name_list.append(name_rank_index.index[x])
    return name_list
def get_top_name_selection(top_name_list):
    ######################################
    # Function to prompt for and return the value chosen which corresponds to the
    # name value that the user wants to work with.
    ######################################
    list_size = len(top_name_list)
    ct = 0
    print("\n\n")
    print(ct, "Exit")
    for x in top_name_list:
        ct += 1
        print(ct, x)
    top_name_index_int = -1
    while top_name_index_int < 0 or top_name_index_int > list_size:
        top_name_index = input("\nEnter number of desired name: ")
        try:
            top_name_index_int = int(top_name_index)
        except ValueError:
            top_name_index_int = -1
    return top_name_index_int - 1  # allow for zero-based index
def get_top_preferred_names(topprefnamedepth, df_top_name):
    ######################################
    # This function will list, in ranked order, the preferred names of the
    # concepts associated with the selected attribute name. The ranking is
    # done by instance counts of the relationships across all documents
    # processed by this enrichment run.
    ######################################
    print("\n\n=============================")
    print("Top attributeValue Preferred Names in Ranked Order of Occurrence")
    print("=============================")
    name_rank_index = df_top_name["preferredName"].value_counts()
    nr_len = name_rank_index.size
    if nr_len < topprefnamedepth:
        topprefnamedepth = nr_len
    name_list = []
    for x in range(0, topprefnamedepth):
        name_list.append(name_rank_index.index[x])
    return name_list
def get_top_preferred_name_selection(top_preferred_name_list):
    ######################################
    # Function to prompt for and return the value chosen which corresponds to the
    # preferred name value that the user wants to work with.
    ######################################
    list_size = len(top_preferred_name_list)
    ct = 0
    print("\n\n")
    print(ct, "Exit")
    for x in top_preferred_name_list:
        ct += 1
        print(ct, x)
    top_preferred_name_index_int = -1
    while top_preferred_name_index_int < 0 or top_preferred_name_index_int > list_size:
        top_preferred_name_index = input("\nEnter number of desired name: ")
        try:
            top_preferred_name_index_int = int(top_preferred_name_index)
        except ValueError:
            top_preferred_name_index_int = -1
    return top_preferred_name_index_int - 1  # allow for zero-based index
def get_document_count():
    print(master_index["docId"].value_counts().size)
def run_local_data():
    list_depth = 20
    #############################
    # load top names from the ACD Enrichment Result CSV
    #############################
    top_name_list = get_top_names(list_depth)
    top_name_index = get_top_name_selection(top_name_list)
    while top_name_index > -1:
        df_top_name = master_index.loc[master_index['name'] == top_name_list[top_name_index]]
        #############################
        # load top preferred names from the ACD Enrichment Result CSV
        #############################
        top_preferred_name_list = get_top_preferred_names(list_depth, df_top_name)
        top_preferred_name_index = get_top_preferred_name_selection(top_preferred_name_list)
        while top_preferred_name_index > -1:
            df_top_pref_name = df_top_name.loc[df_top_name['preferredName'] == top_preferred_name_list[top_preferred_name_index]]
            #############################
            # get top documents for preferred names
            #############################
            docList = df_top_pref_name["docId"].value_counts()
            print("\n========================================================\n")
            print(docList.size, "documents were found matching your selection of",
                  top_name_list[top_name_index], "and",
                  top_preferred_name_list[top_preferred_name_index])
            print("\nHow many documents do you want to include in your analysis?")
            print("Note: Documents will be included in descending order of occurrences per document of your selection.")
            print("It is recommended that you choose 500 documents or less, unless you want to wait a long time.")
            doc_count = input()
            doc_count = int(doc_count)
            if doc_count > 5000:
                doc_count = 5000
            if doc_count > docList.size:
                doc_count = docList.size
            print("Will process", doc_count, "files.")
            if doc_count == 0:
                break
            current_doc_count = 0
            wordlist = ""
            found_atleast_one_doc = False
            flist = []
            for (pth, dirs, fn) in walk(raw_files_path):
                for fnn in fn:
                    flist.append(os.path.join(pth, fnn))
            for doc_id in docList.index:
                # sometimes doc_id can be all numbers, so let's make sure it's a string type
                doc_id = str(doc_id)
                doc_id_str = str(doc_id) + "_body"
                found_doc = False
                for fname in flist:
                    if doc_id_str in fname and fname.endswith(".json"):
                        targetJsonFile = fname
                        found_doc = True
                        found_atleast_one_doc = True
                        # read in json file as a dataframe
                        jdata = pd.read_json(targetJsonFile)
                        json_dataframe = pd.DataFrame(jdata)
                        xresult = json_dataframe.get(key="result")
                        xunstruc = xresult.get(key="unstructured")
                        if type(xunstruc) is not list:
                            continue
                        xzero = xunstruc[0]
                        if "data" in xzero:
                            xdata = xzero["data"]
                            if "attributeValues" in xdata:
                                xattrv = xdata["attributeValues"]
                                for oneattrv in xattrv:
                                    if "coveredText" in oneattrv:
                                        covt = oneattrv["coveredText"]
                                        covt = covt.replace(" ", "_")
                                        wordlist = wordlist + " " + covt
                    if found_doc:
                        break
                if found_doc:
                    current_doc_count += 1
                if current_doc_count == doc_count:
                    break
            if wordlist == "":
                if not found_atleast_one_doc:
                    wordlist = "no_matching_documents"
                else:
                    wordlist = "no_words"
            word_cloud(wordlist)
            top_preferred_name_index = get_top_preferred_name_selection(top_preferred_name_list)
        top_name_index = get_top_name_selection(top_name_list)
def run_demo():
    list_depth = 20
    #############################
    # load top names from the ACD Enrichment Result CSV
    #############################
    top_name_list = get_top_names(list_depth)
    top_name_index = get_top_name_selection(top_name_list)
    while top_name_index > -1:
        df_top_name = master_index.loc[master_index['name'] == top_name_list[top_name_index]]
        #############################
        # load top preferred names from the ACD Enrichment Result CSV
        #############################
        top_preferred_name_list = get_top_preferred_names(list_depth, df_top_name)
        top_preferred_name_index = get_top_preferred_name_selection(top_preferred_name_list)
        while top_preferred_name_index > -1:
            df_top_pref_name = df_top_name.loc[df_top_name['preferredName'] == top_preferred_name_list[top_preferred_name_index]]
            #############################
            # get top documents for preferred names
            #############################
            docList = df_top_pref_name["docId"].value_counts()
            print("\n========================================================\n")
            print(docList.size, "documents were found matching your selection of",
                  top_name_list[top_name_index], "and",
                  top_preferred_name_list[top_preferred_name_index])
            print("\nHow many documents do you want to include in your analysis?")
            print("Note: Documents will be included in descending order of occurrences per document of your selection.")
            print("It is recommended that you choose 500 documents or less, unless you want to wait a long time.")
            doc_count = input()
            doc_count = int(doc_count)
            if doc_count > 5000:
                doc_count = 5000
            if doc_count > docList.size:
                doc_count = docList.size
            print("Will process", doc_count, "files.")
            if doc_count == 0:
                break
            current_doc_count = 0
            wordlist = ""
            found_atleast_one_doc = False
            for doc_id in docList.index:
                # sometimes doc_id can be all numbers, so let's make sure it's a string type
                doc_id = str(doc_id)
                doc_id_str = str(doc_id) + "_body"
                found_doc = False
                for fname in data_file_names:
                    if doc_id_str in fname and fname.endswith(".json"):
                        targetJsonFile = public_url_path + fname
                        found_doc = True
                        found_atleast_one_doc = True
                        # read in json file as a dataframe
                        jdata = pd.read_json(targetJsonFile)
                        json_dataframe = pd.DataFrame(jdata)
                        xresult = json_dataframe.get(key="result")
                        xunstruc = xresult.get(key="unstructured")
                        if type(xunstruc) is not list:
                            continue
                        xzero = xunstruc[0]
                        if "data" in xzero:
                            xdata = xzero["data"]
                            if "attributeValues" in xdata:
                                xattrv = xdata["attributeValues"]
                                for oneattrv in xattrv:
                                    if "coveredText" in oneattrv:
                                        covt = oneattrv["coveredText"]
                                        covt = covt.replace(" ", "_")
                                        wordlist = wordlist + " " + covt
                    if found_doc:
                        break
                if found_doc:
                    current_doc_count += 1
                if current_doc_count == doc_count:
                    break
            if wordlist == "":
                if not found_atleast_one_doc:
                    wordlist = "no_matching_documents"
                else:
                    wordlist = "no_words"
            word_cloud(wordlist)
            top_preferred_name_index = get_top_preferred_name_selection(top_preferred_name_list)
        top_name_index = get_top_name_selection(top_name_list)
#################################
# run this method to perform all initialization
#################################
# !!!!!!!!!!!!!!!!!!!!!
# TO DO: If you want to run the demo, uncomment the first statement and comment out the second.
# If you want to run using local data, comment out the first statement and uncomment the second.
# !!!!!!!!!!!!!!!!!!!!!
initialize_demo()
#initialize_local_data()
#################################
# run this method to return the number of documents defined in the
# master index file
#################################
get_document_count()
#################################
# run this method to list out the data types that are available in the raw files
#################################
list_data_types()
#################################
# run this method to list the field names and types for a
# given data type (that would be listed by the preceding method)
#################################
list_data_type_fields("attributeValues")
#################################
# run this method to perform all initialization
#################################
if use_local_data:
    run_local_data()
else:
    run_demo()
# Source: covid19-processed-literature-notebook/COVID-19 Processed Literature Notebook.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from easyAI import TwoPlayersGame, AI_Player, Human_Player, Negamax
def to_tuple(s):
    return (3 - int(s[1]), 'abc'.index(s[0]))


def to_string(moves):
    pre, post = moves
    return 'abc'[pre[1]] + str(3 - pre[0]) + ' ' + \
        'abc'[post[1]] + str(3 - post[0])


class GameController(TwoPlayersGame):
    def __init__(self, players):
        self.players = players
        self.nplayer = 1
        players[0].direction = 1
        players[0].goal_line = 2
        players[0].pawns = [(0, 0), (0, 1), (0, 2)]
        players[1].direction = -1
        players[1].goal_line = 0
        players[1].pawns = [(2, 0), (2, 1), (2, 2)]

    def possible_moves(self):
        moves = []
        opponent_pawns = self.opponent.pawns
        d = self.player.direction
        for i, j in self.player.pawns:
            if (i + d, j) not in opponent_pawns:  # no enemy directly ahead
                moves.append(((i, j), (i + d, j)))
            if (i + d, j + 1) in opponent_pawns:  # enemy diagonally ahead
                moves.append(((i, j), (i + d, j + 1)))
            if (i + d, j - 1) in opponent_pawns:  # enemy diagonally ahead
                moves.append(((i, j), (i + d, j - 1)))
        return list(map(to_string, moves))

    def make_move(self, moves):
        pre, post = tuple(map(to_tuple, moves.split(' ')))
        ind = self.player.pawns.index(pre)
        self.player.pawns[ind] = post
        if post in self.opponent.pawns:
            self.opponent.pawns.remove(post)

    def loss_condition(self):
        return any([i == self.opponent.goal_line
                    for i, _ in self.opponent.pawns]) \
            or self.possible_moves() == []

    def is_over(self):
        return self.loss_condition()

    def grid(self, pos):
        if pos in self.players[0].pawns:
            return '1'
        elif pos in self.players[1].pawns:
            return '2'
        else:
            return '.'

    def show(self):
        print('  a b c')
        for i in range(3):
            print(3 - i,
                  ' '.join([self.grid((i, j)) for j in range(3)]))

    def scoring(self):
        return -100 if self.loss_condition() else 0
algorithm = Negamax(12)
game = GameController([AI_Player(algorithm), AI_Player(algorithm)])
game.play()
print('Player', game.nopponent, 'wins after', game.nmove, 'turns')
# -
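# The coordinate helpers map between algebraic squares such as 'a3' (columns a-c, ranks 1-3 with rank 3 at the top) and zero-based (row, column) tuples. A stand-alone round-trip check (the helpers are copied here so this cell runs on its own):

```python
# Copies of the notebook's conversion helpers for an isolated check.
def to_tuple(s):
    return (3 - int(s[1]), 'abc'.index(s[0]))

def to_string(moves):
    pre, post = moves
    return 'abc'[pre[1]] + str(3 - pre[0]) + ' ' + \
        'abc'[post[1]] + str(3 - post[0])

print(to_tuple('a3'))               # (0, 0): top-left corner
print(to_string(((0, 0), (1, 0))))  # a3 a2: one step down the a-file
```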
# Source: artificial-intelligence-with-python-ja-master/Chapter 9/hexapawn.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # <img style="float: left; padding-right: 10px; width: 45px" src="https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/iacs.png"> CS-109A Introduction to Data Science
#
#
# ## Lab 6: Logistic Regression
#
# **Harvard University**<br>
# **Fall 2019**<br>
# **Instructors:** <NAME>, <NAME>, <NAME><br>
# **Lab Instructors:** <NAME> and <NAME>. <br>
# **Contributors:** <NAME>, <NAME>, <NAME>
#
# ---
# + slideshow={"slide_type": "-"}
## RUN THIS CELL TO PROPERLY HIGHLIGHT THE EXERCISES
import requests
from IPython.core.display import HTML
styles = requests.get("https://raw.githubusercontent.com/Harvard-IACS/2018-CS109A/master/content/styles/cs109.css").text
HTML(styles)
# + [markdown] slideshow={"slide_type": "-"}
# ## Learning Goals
# In this lab, we'll explore different models used to predict which of several labels applies to a new datapoint based on labels observed in the training data.
#
# By the end of this lab, you should:
# - Be familiar with the `sklearn` implementations of
# - Linear Regression
# - Logistic Regression
# - Be able to make an informed choice of model based on the data at hand
# - (Bonus) Structure your sklearn code into Pipelines to make building, fitting, and tracking your models easier
# - (Bonus) Apply weights to each class in the model to achieve your desired tradeoffs between discovery and false alarm in various classes
# + slideshow={"slide_type": "-"}
# %matplotlib inline
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
from sklearn.model_selection import train_test_split
# + [markdown] slideshow={"slide_type": "slide"}
# ## Part 1: The Wine Dataset
# The dataset contains 11 chemical features of various wines, along with experts' rating of that wine's quality. The quality scale technically runs from 1-10, but only 3-9 are actually used in the data.
#
# Our goal will be to distinguish good wines from bad wines based on their chemical properties.
# -
# ### Read-in and checking
# We do the usual read-in and verification of the data:
# + slideshow={"slide_type": "slide"}
wines_df = pd.read_csv("../data/wines.csv", index_col=0)
wines_df.head()
# + slideshow={"slide_type": "slide"}
wines_df.describe()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Building the training/test data
# As usual, we split the data before we begin our analysis.
#
# Today, we take the 'quality' variable as our target. There's a debate to be had about the best way to handle this variable. It has 10 categories (1-10), though only 3-9 are used. While the variable is definitely ordinal (we can put the categories in an order everyone agrees on), it probably isn't a simple numeric feature; it's not clear whether the gap between a 5 and a 6 wine is the same as the gap between an 8 and a 9.
#
# [Ordinal regression](https://pythonhosted.org/mord/) is one possibility for our analysis (beyond the scope of this course), but we'll view the quality variable as categorical. Further, we'll simplify it down to 'good' and 'bad' wines (quality at or above 7, and quality at or below 6, respectively). This binary column already exists in the data, under the name 'good'.
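# The 'good' column already exists in the data, but a binary column of this form can be reproduced from the raw scores (illustrative only, on toy quality values):

```python
import pandas as pd

# Toy quality scores; wines rated 7 or above count as 'good'
quality = pd.Series([4, 6, 7, 9, 5])
good = (quality >= 7).astype(int)
print(list(good))  # [0, 0, 1, 1, 0]
```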
# + slideshow={"slide_type": "slide"}
wines_train, wines_test = train_test_split(wines_df, test_size=0.2, random_state=8, stratify=wines_df['good'])
x_train = wines_train.drop(['quality','good'], axis=1)
y_train = wines_train['good']
x_test = wines_test.drop(['quality','good'], axis=1)
y_test = wines_test['good']
x_train.head()
# + [markdown] slideshow={"slide_type": "slide"}
# Now that we've split, let's explore some patterns in the data
# + slideshow={"slide_type": "slide"}
from pandas.plotting import scatter_matrix
scatter_matrix(wines_train, figsize=(30,20));
# -
# It looks like there aren't any particularly strong correlations among the predictors (maybe sulfur dioxide and free sulfur dioxide) so we're safe to keep them all. It also looks like the different quality categories have roughly the same distribution of most variables, with volatile/fixed acidity and alcohol seeming like the most useful predictors.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Part 2 (Introduction): Binary Logistic Regression
# Linear regression is usually a good baseline model, but since the outcome we're trying to predict only takes values 0 and 1 we'll want to use logistic regression instead of basic linear regression.
#
# We'll begin with `statsmodels`, because `cs109` likes confidence intervals and checking that coefficients make sense.
# + slideshow={"slide_type": "slide"}
import statsmodels.api as sm
sm_fitted_logit = sm.Logit(y_train, sm.add_constant(x_train)).fit()
#sm_fitted_logit.summary() ### ORIGINAL VERSION. GAVE AttributeError: module 'scipy.stats' has no attribute 'chisqprob'
sm_fitted_logit.summary2() ### WORKS
# + [markdown] slideshow={"slide_type": "slide"}
# Let's talk about the output:
# First, "optimization terminated successfully". Recall that linear regression, with its simple closed-form formula for the optimal betas, is a rarity in machine learning and statistics: most models are fit to the data algorithmically, not via a formula. This message is letting us know that the algorithm seems to have worked.
#
# Second, the pseudo $R^2$ is rather low (.23). As with regular $R^2$, we might take this as a sign that the model is struggling.
#
# Finally, let's look at the coefficients.
# - Several of the coefficients are statistically significant, including
# - Fixed acidity - good
# - Volatile Acidity - bad
# - Residual Sugar - good (judges have a sweet tooth?)
# - Chlorides - bad
# - Sulphates - good
# - Alcohol - good (judges like getting drunk?)
# - The rest only reach a coefficient size we would often observe by chance alone, without any actual effect from the predictor
#
#
# More formal interpretations of coefficients are long-winded: "A one unit increase in alcohol (holding all else constant) results in a predicted 0.494 increase in the log odds of a wine being classified as good".
#
# We can't be more precise because the effect of one unit of alcohol depends on how much alcohol there already is. The one unit increase/decrease matters more if the wine is otherwise on the border between good and bad. If the wine is undrinkable (in the far left tail of the sigmoidal curve) one unit of alcohol barely moves the probability, while if the wine is in the middle of the curve that unit of alcohol has much more practical impact.
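# To make this concrete, here is a small standalone sketch (the 0.494 alcohol coefficient is taken from the summary above; the two baseline log-odds values are assumed purely for illustration):

```python
from scipy.special import expit  # the logistic sigmoid

beta_alcohol = 0.494  # coefficient from the fitted model above

# two hypothetical wines: one deep in the "bad" tail, one right on the border
for name, log_odds in [("far-left tail", -6.0), ("borderline", 0.0)]:
    p_before = expit(log_odds)
    p_after = expit(log_odds + beta_alcohol)  # one more unit of alcohol
    print(f"{name}: P(good) {p_before:.3f} -> {p_after:.3f} "
          f"(change {p_after - p_before:+.3f})")
```

# The same +0.494 shift in log odds barely moves the tail wine, but moves the borderline wine by about twelve percentage points.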
# + [markdown] slideshow={"slide_type": "slide"}
# <div class="discussion"><b>Discussion</b></div>
# 1. Are there any bones you'd like to pick with the model I've laid out? Can you think of a better logistic regression model?
# + [markdown] slideshow={"slide_type": "slide"}
# #### Prediction
# One of the really cool features of logistic regression is that it hands back _probabilities_ of a given case being 1 or 0, rather than just 1s and 0s. That lets us do neat things like set different cutoffs for what counts as a 1 and do ROC analysis and so on. Here, we'll just set the cutoff at 0.5: if a 1 is reported as more likely, predict a 1. (You can play with the cutoff yourself and see if you can make the model do better by trading false positives and false negatives).
#
# Because this is statsmodels, we'll need to import a tool or do the test set score calculation ourselves. Here, it's easy enough to implement:
# * do the predictions
# * compare with .5
# * find out what percentage of our binary predictions matched the truth
# + slideshow={"slide_type": "slide"}
sm_binary_prediction = sm_fitted_logit.predict(sm.add_constant(x_test)) >= .5
np.sum(y_test == sm_binary_prediction) / len(y_test)
# + [markdown] slideshow={"slide_type": "slide"}
# Wow! 80% is a pretty good performance! We can pretty much tell the bad wines from the good.
# + [markdown] slideshow={"slide_type": "slide"}
# Here's a sanity check:
# + slideshow={"slide_type": "slide"}
np.sum(y_test == 0) / len(y_test)
# + [markdown] slideshow={"slide_type": "slide"}
# Oh... no... wait. A model that says "all wines are bad" also scores 80% on the test set. Our fancy model isn't really doing that well.
#
# **Moral of the story**: Before you congratulate a model, think of a **truly** trivial model to compare it to.
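# sklearn packages exactly this kind of trivial comparison model as `DummyClassifier`. A hedged sketch on toy labels (an assumed 80/20 imbalance standing in for the wine split above):

```python
import numpy as np
from sklearn.dummy import DummyClassifier

rng = np.random.default_rng(0)
y = (rng.random(1000) < 0.2).astype(int)   # ~80% bad (0), ~20% good (1)
X = rng.normal(size=(1000, 3))             # features are ignored by the dummy

baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print("baseline accuracy:", baseline.score(X, y))  # ~0.8 with zero modeling effort
```

# Any real model has to beat this number before it deserves congratulations.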
# + [markdown] slideshow={"slide_type": "slide"}
# <div class="exercise"><b>Exercise 1</b></div>
#
# 1. Re-create the results above but this time work with `sklearn`. Use the `LogisticRegression` class. Follow the usual `.fit`, `.score` procedure. To match `statsmodel`'s coefficient values (roughly), you will need to adjust the input parameters:
# * `C`
# * `solver`
# * One other parameter
# * See [the sklearn documentation](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html)
#
# Hint: `statsmodels` uses a Newton-Raphson method to optimize the beta values.
# + slideshow={"slide_type": "slide"}
from sklearn.linear_model import LogisticRegression
print("target:\n{}".format(sm_fitted_logit.params))
#
#fitted_lr = LogisticRegression(C=___, solver=___, ___)
# + [markdown] slideshow={"slide_type": "slide"}
# **Answer**:
# +
# your code here
from sklearn.linear_model import LogisticRegression
fitted_lr = LogisticRegression(C=1000000, solver='newton-cg', max_iter=250).fit(x_train,y_train)
print(fitted_lr.coef_)
print("Test set score:", fitted_lr.score(x_test,y_test))
# +
# uncomment me and execute me - this will erase your cell ...
# #%load solutions/sklearn_logistic.py
# -
# Speaker note: When presenting solution, model reading the documentation from the webpage. How does one know where to look?
# Speaker note: Mention the wide variety of solvers and how (some) use different levels of derivatives to converge in fewer steps
# + [markdown] slideshow={"slide_type": "slide"}
# #### The Decision Boundary
# One powerful way to think about classification models is to consider where and how they draw the line between predicting "class A" and "class B". The code below lets you play with a 2d logistic regression. Points towards yellow will be predicted as 1s, points towards violet will be predicted as 0s.
# +
from scipy.special import expit
def plot_logistic_contour(beta0, betaX, betaY, betaXY=0, betaX2=0, betaY2=0):
delta=.1
x_values = np.arange(-3.0, 3.0, delta)
y_values = np.arange(-3.0, 3.0, delta)
x_grid, y_grid = np.meshgrid(x_values, y_values)
logistic_output = expit(beta0 + betaX*x_grid + betaY*y_grid
+ betaXY*x_grid*y_grid + betaX2*x_grid**2 + betaY2*y_grid**2)
contour_figure = plt.contour(x_grid, y_grid, logistic_output)
plt.clabel(contour_figure, inline=1, fontsize=10);
plt.xlim(-3,3)
plt.ylim(-3,3)
plt.show()
#plot_logistic_contour(beta0=1, betaX=2, betaY=3, betaXY=0, betaY2=.1)
# + slideshow={"slide_type": "slide"}
# Use this cell to experiment
plot_logistic_contour(beta0=1, betaX=2, betaY=3)
# + [markdown] slideshow={"slide_type": "slide"}
# <div class="exercise"><b>Exercise 2</b></div>
# 1. What happens to the decision boundary as the coefficient on X increases?
# 2. What happens if you increase the Y coefficient to match?
# 3. What does the constant term control?
# 4. What impact do higher-order and interaction terms have on the boundary?
# 5. What parameter settings should I show the class?
# + [markdown] slideshow={"slide_type": "slide"}
# **Answers**:
#
# *your answer here*
#
# 1. The boundary tips towards vertical
# 2. The boundary is in the same place as it was originally, but is squished together. The model is much more certain about how to predict points a given distance from the boundary
# 3. It shifts the boundary, perpendicular to its current orientation
# 4. Including squared terms allows quadratic decision boundaries, and the interaction term allows hyperbolic boundaries
#
# + slideshow={"slide_type": "slide"}
# # %load solutions/boundaries.txt
# + [markdown] slideshow={"slide_type": "slide"}
# ## Part 3 (The Real Challenge): Multiclass Classification
# Before we move on, let's consider a more common use case of logistic regression: predicting not just a binary variable, but what level a categorical variable will take. Instead of breaking the quality variable into 'good' and 'other', let's discretize into 'good', 'medium', and 'bad'.
# + slideshow={"slide_type": "slide"}
# # copy the original data so that we're free to make changes
wines_df_recode = wines_df.copy()
# use the 'cut' function to reduce a variable down to particular bins. Here the bins are (0,4], (4,7],
# and (7,10], so quality 0-4 becomes 'bad', 5-7 'medium', and 8-10 'good'
wines_df_recode['quality'] = pd.cut(wines_df_recode['quality'],[0,4,7,10], labels=[0,1,2])
# drop the un-needed columns
x_data = wines_df_recode.drop(['quality','good'], axis=1)
y_data = wines_df_recode['quality']
x_train,x_test, y_train,y_test = train_test_split(x_data, y_data, test_size=.2, random_state=8, stratify=y_data)
print(wines_df['quality'].head())
print(wines_df_recode['quality'].head())
# + [markdown] slideshow={"slide_type": "slide"}
# The `cut` function obviously stores a lot of extra information for us. It's a very useful tool for discretizing an existing variable.
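# A standalone toy example of `cut`'s behavior (by default bins are open on the left and closed on the right, so the edge value 4 lands in the first bin and 7 in the second):

```python
import pandas as pd

scores = pd.Series([3, 4, 5, 7, 8, 10])
binned = pd.cut(scores, [0, 4, 7, 10], labels=["bad", "medium", "good"])
print(binned.tolist())  # ['bad', 'bad', 'medium', 'medium', 'good', 'good']
```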
# + [markdown] slideshow={"slide_type": "slide"}
# <div class="exercise"><b>Exercise 3</b></div>
# 1. Adapt your earlier logistic regression code to fit to the new training data. What is stored in `.coef_` and `.intercept_`?
# 2. How well does this model predict the test data?
# 3. Put the model's performance in context. Think of a trivial model to compare to, and provide its accuracy score on the test set.
# + [markdown] slideshow={"slide_type": "slide"}
# **Answers**:
#
# 1.
# + slideshow={"slide_type": "slide"}
# your code here
from sklearn.linear_model import LogisticRegression
fitted_lr = LogisticRegression(C=1000000, solver='newton-cg', max_iter=250).fit(x_train,y_train)
print("Coefficients:")
print(fitted_lr.coef_)
print("Intercepts:")
print(fitted_lr.intercept_)
# + slideshow={"slide_type": "slide"}
# # %load solutions/multi_logistic.py
# + [markdown] slideshow={"slide_type": "slide"}
# *your answer here*
#
# 1\. We get three sets of coefficients, and three intercepts.
#
# We need three sets because (under the default 'one versus rest' strategy) we fit three models. When predicting, model 1 reports a probability of the new example coming from class A or from the cloud of remaining classes. Model 2 reports the probability of whether the example comes from class B or the cloud of remaining classes, and so on. We take this set of scores and pick the biggest one (we classify as whichever class has the biggest ratio of "this class" to "not this class").
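# A sketch of those mechanics on toy data (wrapping the model in `OneVsRestClassifier` to force the one-versus-rest strategy explicitly, since newer sklearn versions default to a true multinomial fit):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X, y = make_classification(n_samples=300, n_classes=3, n_informative=4,
                           random_state=0)
ovr = OneVsRestClassifier(LogisticRegression(max_iter=500)).fit(X, y)

scores = ovr.decision_function(X)          # one "this class vs rest" score per class
manual = ovr.classes_[np.argmax(scores, axis=1)]
print("predict == argmax of scores:", (manual == ovr.predict(X)).all())
```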
# + slideshow={"slide_type": "slide"}
# # %load solutions/multi_logistic.txt
# -
# 2.
# + slideshow={"slide_type": "slide"}
# your code here
fitted_lr.score(x_test,y_test)
# -
# # %load solutions/score1.py
# + [markdown] slideshow={"slide_type": "slide"}
# *your answer here*
#
# 2\. The model does pretty well at predicting the test data...
# -
# 3.
# + slideshow={"slide_type": "slide"}
# make a dumb prediction that always guesses 1, the most common class
# your code here
dumb_prediction = np.ones(len(y_test))
np.sum(y_test == dumb_prediction) / len(y_test)
# +
# # %load solutions/trivial_model.py
# -
# *your solution here*
#
# But, a trivial model that guesses the most likely class also does really well on the test set, too.
# + slideshow={"slide_type": "slide"}
# # %load solutions/3.3.txt
# + [markdown] slideshow={"slide_type": "slide"}
# #### Summary
# - Logistic regression extends OLS to work naturally with a dependent variable that's only ever 0 and 1.
# - In fact, Logistic regression is even more general and is used for predicting the probability of an example belonging to each of $N$ classes.
# - The code for the two cases is identical and just like `LinearRegression`: `.fit`, `.score`, and all the rest
# - Significant predictors do not imply that the model actually works well. Significance can be driven by data size alone.
# - The data aren't enough to do what we want
#
# **Warning**: Logistic regression _tries_ to hand back valid probabilities. As with all models, you can't trust the results until you validate them: if you're going to use raw probabilities instead of just predicted classes, take the time to verify that when you pool all cases where the model says "I'm 30% confident it's class A", about 30% of them actually are class A.
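# sklearn's `calibration_curve` automates that pooling check. A sketch on synthetic data (a real validation would use held-out wine data):

```python
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
probs = LogisticRegression(max_iter=500).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# bin the test cases by predicted probability; compare to observed frequency
frac_pos, mean_pred = calibration_curve(y_te, probs, n_bins=10)
for f, p in zip(frac_pos, mean_pred):
    print(f"model says {p:.2f} -> observed fraction of class 1: {f:.2f}")
```

# A well-calibrated model keeps the two columns close; large gaps mean the raw probabilities can't be trusted.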
# + [markdown] slideshow={"slide_type": "slide"}
# ## Part 4: Dimensionality Reduction
# Our models are clearly struggling, but it's hard to tell why. Let's use PCA to shrink the problem down to 2d (with as little loss as possible) and see if that gives us a clue about what makes this problem tough.
# + slideshow={"slide_type": "slide"}
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
# scale the datasets
scale_transformer = StandardScaler(copy=True).fit(x_train)
x_train_scaled = scale_transformer.transform(x_train)
x_test_scaled = scale_transformer.transform(x_test)
# reduce dimensions
pca_transformer = PCA(2).fit(x_train_scaled)
x_train_2d = pca_transformer.transform(x_train_scaled)
x_test_2d = pca_transformer.transform(x_test_scaled)
print(x_train_2d.shape)
x_train_2d[0:5,:]
# + [markdown] slideshow={"slide_type": "slide"}
# Some comments:
# 1. Both scaling and reducing dimension follow the same pattern: we fit the object to the training data, then use .transform to convert the training and test data. This ensures that, for instance, we scale the test data using the _training_ mean and variance, not its own mean and variance
# 2. We need to equalize the variance of each feature before applying PCA; otherwise the features with the largest spread dominate, and our PCA dimensions would simply point along the highest-variance features.
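# A tiny demonstration of why scaling matters (synthetic data; the second column carries the same kind of information, just in units 100x larger):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.normal(scale=1.0, size=500),    # ordinary feature
    rng.normal(scale=100.0, size=500),  # same idea, huge units
])

first_raw = PCA(2).fit(X).components_[0]
first_scaled = PCA(2).fit(StandardScaler().fit_transform(X)).components_[0]
print("unscaled first component:", np.round(first_raw, 3))  # ~[0, +/-1]: one feature dominates
print("scaled first component:  ", np.round(first_scaled, 3))
```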
# + slideshow={"slide_type": "slide"}
## plot each group
# notice that we set up lists to track each group's plotting color and label
colors = ['r','c','b']
label_text = ["Bad Wines", "Medium Wines", "Good Wines"]
# and we loop over the different groups
for cur_quality in [0,1,2]:
cur_df = x_train_2d[y_train==cur_quality]
plt.scatter(cur_df[:,0], cur_df[:,1], c = colors[cur_quality], label=label_text[cur_quality])
# all plots need labels
plt.xlabel("PCA Dimension 1")
plt.ylabel("PCA Dimension 2")
plt.legend();
# + [markdown] slideshow={"slide_type": "slide"}
# Well, that gives us some idea of why the problem is difficult: the good wines and bad wines are hiding right among the average wines. It does look like the wines separate into two groups, though, possibly one for reds and one for whites.
# + [markdown] slideshow={"slide_type": "slide"}
# <div class="exercise"><b>Exercise 4</b></div>
# 1. What critique can you make against the plot above? Why does this plot not prove that the different wines are hopelessly similar?
# 2. The wine data we've used so far consist entirely of continuous predictors. Would PCA work with categorical data?
# + [markdown] slideshow={"slide_type": "slide"}
# **Answer**:
#
# *your answer here*
# 1. The PCA dimensions are chosen without regard to the y variable. Thus it is possible that the very next PCA dimension will lift the red points up out of the page, push the blue points down into it, and leave the cyan points where they are; such a dimension would separate the different types of wine and make classification easy.
# 2. PCA would not work with categorical data. PCA requires there to be a meaningful notion of distance between points. Categorical or ordinal data is not enough.
# +
# # %load solutions/4.txt
# + [markdown] slideshow={"slide_type": "slide"}
# <div class="exercise"><b>Exercise 5</b></div>
# 1. Edit the code above to plot the locations of red wines and white wines
#
# + [markdown] slideshow={"slide_type": "slide"}
# **Answer**:
# + slideshow={"slide_type": "slide"}
# your code here
## plot each group
# notice that we set up lists to track each group's plotting color and label
colors = ['r','c','b']
label_text = ["Reds", "Whites"]
# and we loop over the different groups
for cur_color in [0,1]:
cur_df = x_train_2d[x_train['red']==cur_color]
plt.scatter(cur_df[:,0], cur_df[:,1], c = colors[cur_color], label=label_text[cur_color])
# all plots need labels
plt.xlabel("PCA Dimension 1")
plt.ylabel("PCA Dimension 2")
plt.legend();
# + [markdown] slideshow={"slide_type": "slide"}
# ## Evaluating PCA - Variance Explained
# One of the criticisms we made of the PCA plot was that it's lost something from the original data.
#
# Let's actually investigate how much of the original data's structure the 2d PCA captures. We'll look at the `explained_variance_ratio_` portion of the PCA fit. This lists, in order, the percentage of the x data's total variance that is captured by the nth PCA dimension.
# + slideshow={"slide_type": "slide"}
var_explained = pca_transformer.explained_variance_ratio_
print("Variance explained by each PCA component:", var_explained)
print("Total Variance Explained:", np.sum(var_explained))
# + [markdown] slideshow={"slide_type": "slide"}
# The first PCA dimension captures 33% of the variance in the data, and the second PCA dimension adds another 20%. Together, we've got about half of the total variation in the training data covered with just these two dimensions.
# + [markdown] slideshow={"slide_type": "slide"}
# <div class="exercise"><b>Exercise 6</b></div>
# 1. Fit a PCA that finds the first 10 PCA components of our training data
# 2. Use `np.cumsum` to print out the variance we'd be able to explain by using n PCA dimensions for n=1 through 10
# 3. Does the 10-dimension PCA agree with the 2d PCA on how much variance the first components explain? Do the 10d and 2d PCAs find the same first two dimensions? Why or why not?
# 4. Make a plot of number of PCA dimensions against total variance explained. What PCA dimension looks good to you?
#
# Hint: `np.cumsum` stands for 'cumulative sum', so `np.cumsum([1,3,2,-1,2])` is `[1,4,6,5,7]`
# + [markdown] slideshow={"slide_type": "slide"}
# **Answer**:
# + slideshow={"slide_type": "slide"}
#your code here
pca_10_transformer = PCA(10).fit(x_train_scaled)
pca_10_transformer
np.cumsum(pca_10_transformer.explained_variance_ratio_)
# + [markdown] slideshow={"slide_type": "slide"}
# 3\.
#
# *your answer here*
#
# The 10d PCA and the 2d PCA agree about how much variance the first two components explain. The 10d and 2d PCA give the same components in the same order. This means it's safe to simply fit a PCA with the largest number of components you expect you will need, and take a subset as appropriate.
# + slideshow={"slide_type": "subslide"}
# # %load solutions/6.3.txt
# + [markdown] slideshow={"slide_type": "slide"}
# 4.
# + slideshow={"slide_type": "slide"}
#your code here
plt.scatter(range(1,11),np.cumsum(pca_10_transformer.explained_variance_ratio_))
plt.xlabel("PCA Dimension")
plt.ylabel("Total Variance Captured")
plt.title("Variance Explained by PCA");
# + slideshow={"slide_type": "subslide"}
# # %load solutions/6.4.py
# + [markdown] slideshow={"slide_type": "slide"}
# A PCA dimension of 3, 4, or 5 looks good to me. These values are roughly where we hit diminishing returns on variance explained.
#
# Plots like the one above are called 'Scree' or 'Elbow' plots. They are often used to heuristically select a good number of PCA dimensions.
# + [markdown] slideshow={"slide_type": "slide"}
# #### Summary
# - PCA maps a high-dimensional space into a lower dimensional space.
# - The PCA dimensions are ordered by how much of the original data's variance they capture
# - There are other cool and useful properties of the PCA dimensions (orthogonal, etc.). See a [textbook](http://math.mit.edu/~gs/linearalgebra/).
# - PCA on a given dataset always gives the same dimensions in the same order.
# - You can select the number of dimensions by fitting a big PCA and examining a plot of the cumulative variance explained.
# -
# ## Part 5: Did we fail?
# None of the models worked, and we can't tell good wines from bad. Was it all a waste of time and money?
#
# Not really. All analyses are a roll of the dice. Some analyses fail, like this one did, because the data at hand just don't support the task we've set out.
#
# What can we do about it?
# 1. Be honest about the methods and the null result. Lots of analyses fail.
# 2. Collect a dataset you think has a better chance of success. Maybe we collected the wrong chemical signals...
# 3. Keep trying new approaches. Just beware of overfitting the data you're validating on. Always have a test set locked away for when the final model is built.
# 4. Change the question. Maybe something you noticed during analysis seems interesting or useful (classifying red versus white). But again, the more you try, the more you might overfit, so keep test data locked away.
# 5. Just move on. If the odds of success start to seem small, maybe you need a new project.
# #### The Moral of the Lab
# - Sometimes, the data just aren't enough to adequately predict outcomes.
# - In this lab we saw that no amount of modeling finesse would let us use a wine's chemical properties to tell good wines and bad wines from mediocre ones.
# - The chemical properties were very good at telling red wines from whites, however.
# - PCA helped us visualize the data and confirm that the highly rated wines just aren't chemically distinct from the other wines.
# - **NOT ALL ANALYSES YIELD USEFUL RESULTS** Sometimes (arguably most of the time), the data aren't suitable for a task or just don't have anything interesting to say.
# ## Part 6 (Sidebar): Pipelines
# Remember when we were trying to adapt our LDA model to run on the training data with 'red' dropped? We had to invent new variable names and define functions, and it was generally much harder than it needed to be. Pipelines are `sklearn`'s tool for packaging an entire analysis together into a single object. This enables convenient inspection, saving, deployment, and (yes) cross validation of the model.
#
# Let's look at an example (we'll switch the model to KNN to justify some later analysis).
# +
from sklearn.pipeline import Pipeline
from sklearn.neighbors import KNeighborsClassifier
knn_pipeline = Pipeline(
[
('scaling', StandardScaler()), # scale all columns
('dim_reduction', PCA()), # PCA to reduce dimension
('model', KNeighborsClassifier()) # KNN to predict
]
)
# run with default settings
knn_pipeline.fit(x_train, y_train)
print("Test set score (default parameters)", knn_pipeline.score(x_test, y_test))
# particular sub-component settings are accessed with the component name, two
# underscores, and the parameter name
knn_pipeline.set_params(dim_reduction__n_components = 2, model__n_neighbors = 5)
knn_pipeline.fit(x_train, y_train)
print("Test set score (updated parameters)", knn_pipeline.score(x_test, y_test))
# -
# There's also a convenience function `make_pipeline` that lets us skip naming the different steps. Notice the default names are all-lowercase versions of the class names (standardscaler, pca, kneighborsclassifier)
from sklearn.pipeline import make_pipeline
knn_pipeline = make_pipeline(StandardScaler(), PCA(), KNeighborsClassifier())
knn_pipeline
# It's easy to run the whole modelling process on new data:
red_model = knn_pipeline.fit(x_train.drop('red', axis=1), x_train['red'])
red_model.score(x_test.drop('red', axis=1), x_test['red'])
# As promised, cross validation tools work directly with the pipeline object.
from sklearn.model_selection import cross_val_score
cross_val_score(knn_pipeline, x_train, y_train, cv=3)
from sklearn.model_selection import GridSearchCV
search_dict = {
'pca__n_components': [3,5,10],
'kneighborsclassifier__n_neighbors': [1,2,3,4,5]
}
cv_results = GridSearchCV(knn_pipeline, search_dict, cv=3).fit(x_train, y_train)
cv_results.best_params_
# **Note**: In general, more PCA components will work better for prediction. However, KNN often performs worse as dimension increases, meaning there may be a meaningful balance point between capturing more variance and a space small enough for KNN to work well.
# ## Part 7 (Sidebar): Weighting the training points
# Some models can accept weights on the training points to give them greater priority in the model's fitting process. This can be useful if, for instance, certain classes are rare but we want to be sure the model classifies them correctly (e.g. we're trying to classify cancers and one form is rare but very aggressive). In general, weighting training points is like moving along the ROC curve; we change some model parameters to alter the mistakes the model makes to be more in line with our tastes.
#
# Let's see this in action with a logistic regression:
# +
unweighted_lr = LogisticRegression(C=1000000).fit(x_train,y_train)
weight_dict = {0:100, 1:1, 2:100}
weighted_lr = LogisticRegression(C=1000000, class_weight=weight_dict).fit(x_train,y_train)
# +
from sklearn.metrics import confusion_matrix
print("Rows: True Labels (Bad, Medium, Good), \nColumns: Predicted Labels (Bad, Medium, Good)")
print()
print("unweighted:")
print(confusion_matrix(y_test, unweighted_lr.predict(x_test)))
print("weighted:")
print(confusion_matrix(y_test, weighted_lr.predict(x_test)))
# -
# Without weighting, the model plays it safe and predicts that all of the test set wines are medium. With weighting, the model is told to care more about getting the bad and good wines right. The model does as we've asked and correctly IDs 3 good/bad test wines, at the price of 17 falsely bad wines and 16 falsely good wines. However, if identifying bad and good wines is, as implied, 100 times more important than identifying medium wines, we've made a really good trade.
#
# <div class="exercise"><b>Exercise 7</b></div>
# 1. What happens if you give a weight of 0 to the medium wines?
# 2. What weighting gives results that accord with your personal sense of what the model should be doing? How many actually-medium bottles is a single good bottle worth?
# **Answers**:
# 1. The model learns a classification rule that never predicts 'medium'. It's as if we dropped the medium wines from training.
# 2. 100, 1, 100 looks the best to me. We get a 1-in-8 success rate on the wines flagged as good. However, I found these values by looking at the test set confusion matrix; it's not clear they'd maintain the 1-in-8 ratio on new data.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basic Example
# This example shows the basic features of the pansys module. In this example, an ansys session is started and a basic FE model is created.
from pansys import Ansys
# To start a new session of ansys, just initialize the class ``Ansys()`` as shown below.
a = Ansys()
# Now we are ready to send commands to the ansys session.
a.send("/prep7")
# Let's add a new ``BEAM188`` element type and some section and material properties.
a.send("""et,1,188
sectype,1,beam,csolid
secdata,0.1
mp,ex,1,1e12
mp,prxy,1,0.3
""")
# Now let's create our model.
a.send("csys,1")
a.send("n")
for i in range(10):
a.send("n,,1,{}".format(360/10*i))
a.send("e,1,{}".format(i+2))
# + raw_mimetype="text/restructuredtext" active=""
# We can take out an ansys plot directly from python. To do that, use the :meth:`pansys.Ansys.plot` function.
# -
img = a.plot("eplot")
# Now the ``img`` variable contains the path to the image file. You can render that in a jupyter notebook using the IPython.display.Image method as shown below.
from IPython.display import Image
Image(img)