# ---
# jupyter:
#   jupytext:
#     text_representation:
#       extension: .py
#       format_name: light
#       format_version: '1.5'
#     jupytext_version: 1.14.4
#   kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# # SLU09: Model Selection and Overfitting
# ---
# <a id='top'></a>
#
# In this notebook we will cover the following:
#
# ### 1. [Generalization Error](#generror)
# ### 2. [Model Selection](#modelselection)
# ### 3. [Regularized Linear Regression](#regularization)
# ### 4. [Conclusion (More on Overfitting)](#overfit)
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import utils
# %matplotlib inline
import warnings
warnings.filterwarnings('ignore')
# -
# # Introduction: The need for validation
#
# Before we jump into definitions, let's try to get a couple of intuitions going.
#
# To do so, let's answer the question:
#
# > **"How does the number of hours of TV per day relate to a person's age?"**
df = utils.generate_time_on_tv()
# As usual, we'll start with a little plot:
fig, ax = plt.subplots()
df.plot(kind='scatter', x='age', y='minutes_per_day', ax=ax, alpha=.5);
# Great! We can notice there is a clear pattern here.
#
# Now we will try to model it.
# ### High bias
# We will use a very "inflexible" model, simple linear regression (linear regression with just one explanatory variable). Simple linear regression requires that the answer be of the form
#
# $$ y = \beta x + c $$
#
# <center>(a line 😛)</center>
#
# So let's try that...
utils.fit_lin_reg(df)
# This is no good! The reason is simple: our model had too many assumptions about the data ahead of time. It had too much **bias**.
#
# In machine learning terms, we say we've **underfit the data**.
# ### High Variance
# Now, in normal life the word "bias" has a negative connotation.
#
# So if bias is bad, maybe we should get rid of as much bias as we can, and just let the data speak for itself! Right? Ehm...
#
# We'll use a really high variance algo *(don't worry about which one just yet)*:
utils.fit_high_variance_algo(df)
# Well that's no good either! The model just followed the data like an idiot, and doesn't seem to have any sort of assumptions about the data, or bias. We could say it has too much **variance**. ... oh wait that's what we wanted. (Do we though?)
#
# The learning here is...
#
# ***Bias and Variance are a tradeoff***
#
# A bit of bias is necessary, and not enough will make your model **overfit** to the data.
# ### Why does this matter?
# The goal of building machine learning models is to be able to make accurate predictions on previously unseen data; in other words, we want the model to generalize.
#
# By having too much bias, the model does not "learn" enough to be accurate on seen (training) data or on unseen (test) data.
#
# By having too much variance, the model may have high accuracy on training data, but it loses its power to generalize to unseen data.
# <a id='generror'></a>
# [Return to top](#top)
# # 1. Generalization Error
# ### Subtopics:
# 1. **Decomposition**
# 1. Bias
# 2. Variance
# 3. Irreducible error
# 2. **Bias-variance trade-off**
# 3. **Sources of complexity**
# ## 1.1 Decomposing the generalization error
#
# Bias-variance decomposition is a way of analyzing an algorithm's expected generalization error, with respect to the sum of three terms:
#
# 1. Bias
#
# 2. Variance
#
# 3. Irreducible error.
#
# As we will see, dealing with bias and variance is really about under- (high bias) and over-fitting (high variance).
#
# 
#
# *Fig.: Graphical illustration of bias and variance using dart-throwing, from [Scott Fortmann-Roe's "Understanding the Bias-Variance Trade-off"](http://scott.fortmann-roe.com/docs/BiasVariance.html)*
#
# ### 1.1.A Bias and underfitting
#
# Bias results from simplistic assumptions and a lack of flexibility: in short, we are missing parameters that would be in a correct model.
#
# Bias is always learning the same wrong thing, skewing predictions consistently across different training samples (i.e., far-off from the real value):
#
# $$ Bias = E\big[\hat{y} - y\big] $$
#
# Fixing bias requires adding complexity to our models to allow them to adapt better to the data.
#
# ### 1.1.B Variance and overfitting
#
# On the other side, extremely flexible models overreact to the specifics of the training data (including the random noise).
#
# Variance creeps in when we have more parameters than justified by the data and learn random things from different training samples:
#
# $$ Variance = E\big[\big(\hat{y} - E[\hat{y}]\big)^2\big] $$
#
# Fixing variance requires decreasing complexity to prevent the model from adapting too much to the training data.
#
# ### 1.1.C Irreducible error
#
# Irreducible error is error that cannot be eliminated by building good models. It is essentially a measure of noise in the data.
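# To make the decomposition concrete, here is a small simulation sketch (illustrative only, not part of `utils`; the target function, sample size and noise level are assumptions chosen for this demo). We repeatedly fit polynomials of different degrees to fresh noisy samples of a known function and estimate the bias and variance of the prediction at a single point.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sin(x)

def fit_poly_and_predict(degree, x0=1.5, n=30, noise=0.5, n_trials=500):
    """Fit a degree-`degree` polynomial to many independent noisy samples
    and collect its predictions at a fixed point x0."""
    preds = []
    for _ in range(n_trials):
        x = rng.uniform(0, np.pi, n)
        y = true_f(x) + rng.normal(0, noise, n)
        coefs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coefs, x0))
    preds = np.array(preds)
    bias = preds.mean() - true_f(x0)   # E[y_hat - y]
    variance = preds.var()             # E[(y_hat - E[y_hat])^2]
    return bias, variance

# Low degree: large bias, small variance. High degree: the reverse.
for degree in (1, 3, 12):
    bias, var = fit_poly_and_predict(degree)
    print(f'degree={degree:2d} | bias={bias:+.3f} | variance={var:.3f}')
```

The rigid degree-1 model is consistently wrong in the same direction (bias), while the flexible degree-12 model scatters widely around its own mean (variance).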
# ## 1.2 The bias-variance tradeoff
# There is a trade-off between bias and variance: as model complexity increases, bias is reduced while variance increases.
#
# 
#
# *Fig.: The bias-variance tradeoff, bias is reduced and variance is increased in relation to model complexity*
#
# In theory, we reach the right level of complexity when the marginal reduction in bias equals the marginal increase in variance:
#
# $$ \frac{dBias}{dComplexity} = - \frac{dVariance}{dComplexity} $$
#
# In practice, *there is not an analytical way to find this location* and the more we (over)reach for signal, the greater the noise.
# ## 1.3 Sources of model complexity
# Model complexity arises from, among others:
# * adding new features
# * increasing the polynomial degree of the hypothesis
# * using highly flexible models
# <a id='modelselection'></a>
# [Return to top](#top)
# # 2. Model Selection
# ### Subtopics:
#
# 1. **Offline evaluation**
#     1. Hold-out method
#         1. [`sklearn.model_selection.train_test_split`](#traintestsplit)
#     2. In-sample or training error
#     3. Out-of-sample or testing error
#     4. [Validation dataset](#valset)
#     5. Evaluating overfitting and underfitting
#     6. [K-Fold cross-validation](#kfolds)
#         1. `sklearn.model_selection.cross_val_score`
#     7. [Data leakage](#dataleak)
# 2. [**Practical considerations**](#practical)
#     1. Training time
#     2. Prediction time
#     3. Memory
# ## 2.1 Offline evaluation
#
# We have seen how the generalization error of a model decomposes into bias and variance; now we need practical ways to estimate that error on data the model hasn't seen.
#
# We will be using data about craft beer to try to predict whether a particular beer is an [India Pale Ale (IPA)](https://en.wikipedia.org/wiki/India_pale_ale).
#
# The data was preprocessed in advance, as the original dataset was simplified and manipulated for teaching purposes. There are two features:
# * `IBU`, which stands for International Bitterness Units and is a measure of bitterness
# * `Color`.
# 
data = pd.read_csv('data/beer.csv')
data.head(n=3)
X = data[['Color', 'IBU']]
y = data['IsIPA'] # <--- to be an IPA or not to be an IPA, that is the question
# Let's get a quick idea of how the target (IsIPA) varies with the features:
data.plot(kind='scatter', x='Color', y='IBU', c='IsIPA', colormap='coolwarm', figsize=(10, 8), alpha=.5)
plt.xlabel('Color')
plt.ylabel('IBU')
plt.show()
# We are going to try to model the type of beer (`IsIPA`) by using `Color` and `IBU`.
#
# We will use 3 classifiers:
#
# > `SuperConservative` - will have very **low variance**, and **high bias**
# > `SuperFlexible` - Will have very **high variance**, and **low bias**
# > `WellBalanced` - Will be juuuust right
# We will then calculate the accuracy of each of the classifiers.
# ### `SuperConservative` (high bias)
#
# For our `SuperConservative` model we will use a `Logistic Regression`.
#
# Logistic regression provides an example of bias because it makes a lot of assumptions about the form of the target function.
#
# Visually, we can understand the model's inability to adjust to a non-linear decision boundary, structurally enforcing a linear one instead.
preds_super_conservative = utils.plot_super_conservative(X, y)
# Let's check the accuracy too.
from sklearn.metrics import accuracy_score
accuracy_score(y, preds_super_conservative)
# It actually doesn't do too badly, but when we look at the "right side" we can tell it is probably blue all the way to the top, right?
#
# Let's give it tons of flexibility!
# ### `SuperFlexible` (high variance)
#
# For our super flexible model, we will use a `k-Nearest Neighbors` with k=1 (don't worry about what this is yet!)
#
# The k-Nearest neighbors algorithm provides great flexibility and minimum underlying structure.
#
# The small orange *pockets* or *islands* show that our model is overadapting to the training data and, most probably, fitting to noise.
preds_super_flexible = utils.plot_super_flexible(X, y)
# Oh. Right. It definitely figured out that the top right is blue, but it also did some pretty crazy things. It's pretty clear that it's fitting noise.
#
# What about the accuracy?
accuracy_score(y, preds_super_flexible)
# It looks like the model has perfect accuracy, but when we look at the decision regions we know that this is not a "good" model. We will see how to reconcile these seemingly conflicting ideas in a bit.
# ### `WellBalanced` (sort of better)
#
# For the well balanced one, we will use a `K-Nearest Neighbors` with k=9
#
# A key part of the k-NN algorithm is the choice of *k*: the number of nearest neighbors on which to base the prediction.
#
# Increasing *k* results in considering more observations in each prediction and makes the model more rigid, for good effect.
preds_just_right = utils.plot_just_right(X, y)
# Accuracy:
accuracy_score(y, preds_just_right)
# ### So... how do we choose?
#
# The irony is that when we calculate the accuracy (or any other metric) of these models, our "best" one is `SuperFlexible`! The reason is simple: it has clearly overfit the data, and since we evaluate it (calculate metrics) on that same data, it appears to be right.
#
# Plotting helps us see that the SuperFlexible model is not actually best. Still, comparing different models by plotting decision boundaries is not very scientific, especially at higher dimensions.
#
# There must be a better way!
# ### The need for validation
#
# Given the above, we need to validate our models after training, to know if they are any good:
# 1. Our assumptions may not hold (that is, we trained a garbage model) or there may be better models
# 2. We may be learning parameters that don't generalize for the entire population (i.e., statistical noise).
#
# Remember, our goal is to approximate the true, universal target function *f* and we need our model to generalize to unseen data.
# <a id='traintestsplit'></a> [Top of section](#modelselection)
# ## 2.1.A Train-test split (holdout method)
# The most obvious way to solve this problem is to separate your dataset into two parts:
# - the training set, where we will find out which model to use
# - the test set, where we will make sure we didn't just overfit the training set
# <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/88/Machine_learning_nutshell_--_Split_into_train-test_set.svg/2000px-Machine_learning_nutshell_--_Split_into_train-test_set.svg.png" width="400">
#
# Although there are other, more sophisticated approaches to measuring generalization power of machine learning models, from a basic data science perspective, having a held-out test set that is only used at the end of the process is one of the most sacred concepts.
#
# Someone brilliant (and whose name I can't recall) once said:
# > _**"Every time you use your test set your data dies a little"**_
# That is because every time you use your test set you lose the ability to tell whether you are overfitting the data you happen to have at hand.
# ### In-sample-error (ISE) or training error
#
#
# The in-sample-error is how well our model performs on the training data.
#
# We will measure the error rate for each model in the simplest way, by computing the fraction of misclassified cases.
#
# Remember our 3 classifiers? Let's calculate the in-sample-error for each:
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
clfs = {'SuperConservative': LogisticRegression(),
        'WellBalanced': KNeighborsClassifier(n_neighbors=9),
        'SuperFlexible': KNeighborsClassifier(n_neighbors=1)}

def classification_error(clf, X, y):
    y_pred = clf.predict(X)
    error_rate = 1 - accuracy_score(y, y_pred)
    return round(error_rate * 100, 2)
# We will make the first 800 rows (80%) training data
X_train = X[:800]
y_train = y[:800]
# We will make the last 200 rows (20%) test data
X_test = X[800:]
y_test = y[800:]
# Testing our model's performance on the training data is a common mistake and underestimates the generalization error.
# +
training_error = {}
for key, clf in clfs.items():
    clf.fit(X_train, y_train)
    training_error[key] = classification_error(clf, X_train, y_train)
pd.Series(training_error).plot(figsize=(7, 5), kind='bar', rot=0)
plt.ylabel('Training Error')
plt.title('Training error per classifier')
plt.show()
# -
# I mean, clearly the `SuperFlexible` model is the best one! Right? (wrong, as we've seen before).
#
# Next, we'll measure the out of sample error of each of the models.
# ### Out-of-sample error (OSE) or testing error
#
# The out-of-sample error measures how well the model performs on previously unseen data and whether it's picking up patterns that generalize well.
#
# Ideally, both training and test errors are low and close to one another.
#
# * *Underfitted* models tend to perform poorly on both train and test data, having large (and similar) in-sample- and out-of-sample errors.
#
# * *Overfitting* is detected when a model performs well on training data but not quite so well on the test set: the bigger the gap, the greater the overfitting.
#
# 
#
# *Fig.: How training and test errors behave in regards to model complexity, bias and variance*
# But okay, let's see how our models perform on the test set.
# +
test_set_error = {}
for key, clf in clfs.items():
    test_set_error[key] = classification_error(clf, X_test, y_test)
pd.Series(test_set_error).plot(figsize=(7, 5), kind='bar', rot=0)
plt.ylabel('Test set Error')
plt.title('Test set error per classifier')
plt.show()
# -
# As mentioned before, there are different techniques to measure the testing error. We will focus on:
# 1. Train-test split
# 2. Validation set
# 3. Cross-validation
# 4. Bootstrapping.
# ### Train-test split (aka holdout method)
#
# The simplest solution is to leave a random subset of the data aside from the beginning to test our final model at the end.
#
# 
#
# *Fig.: Test set illustrated, you holdout a significant chunk of the data for testing your model in the end*
#
#
# 
#
# *Fig.: Workflow with test and training sets*
#
# After evaluation, you can relearn your final chosen model on the whole data.
# Scikit-learn has some amazing functionality to help you with model selection. Here we are importing a data scientist's best friend, `train_test_split`. This function randomly assigns data points to the train or test set.
# +
from sklearn.model_selection import train_test_split
# You can specify the percentage of the full dataset you want to reserve for testing, here we are using 40%
# Setting the random state fixes the randomness of train/test split so the sets are reproducible
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=42)
print(f"---\nNumber of observations:\nTrain: {X_train.shape[0]} | Test: {X_test.shape[0]}")
# -
# Now we will compute the classification error on both the train and test sets for each model.
# +
def compute_metrics(X_train, y_train, X_test, y_test, clf):
    training_error = classification_error(clf, X_train, y_train)
    test_error = classification_error(clf, X_test, y_test)
    return training_error, test_error

for key, clf in clfs.items():
    clf.fit(X_train, y_train)
    training_error, test_error = compute_metrics(X_train, y_train, X_test, y_test, clf)
    print(f'---\n{key} error:\nTrain: {training_error}% | Test: {test_error}%')
# -
# To quickly recap, SuperConservative has high bias and has both high train and test error. It is underfit.
#
# SuperFlexible has high variance and has low train but high test error. It is overfit.
#
# WellBalanced has a balance of bias and variance and has relatively low train and test error. It is well-fit.
# <a id='valset'></a> [Top of section](#modelselection)
# ### Validation set
#
# Given we have enough data, we can create a *validation dataset*.
#
# 
#
# *Fig.: Validation set as compared with the holdout approach*
#
# 
#
# *Fig.: Workflow with test, validation and training sets*
# To create a validation set and a test set, we use `train_test_split` twice!
# +
# First separate how much data you want to reserve for val + test, we are using 40% again
X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.4, random_state=1234)
# Then separate the temporary val + test set, typically they are the same size so we are using 50%
X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.50, random_state=1234)
del X_temp, y_temp
print(f"Number of observations:\nTrain: {X_train.shape[0]} | Test: {X_test.shape[0]} | Validation: {X_val.shape[0]}")
# -
# Now we will compute the classification error on the train, validation, and test sets for each model.
# +
def compute_validation_metrics(X_train, y_train, X_test, y_test, X_val, y_val, clf):
    training_error, test_error = compute_metrics(X_train, y_train, X_test, y_test, clf)
    validation_error = classification_error(clf, X_val, y_val)
    return training_error, test_error, validation_error

for key, clf in clfs.items():
    clf.fit(X_train, y_train)
    training_error, test_error, validation_error = compute_validation_metrics(
        X_train, y_train, X_test, y_test, X_val, y_val, clf)
    print(f'---\n{key} error:\nTrain: {training_error}% | Validation: {validation_error}% | Test: {test_error}%')
# -
# You might be wondering, how is this validation set different from the test set? Don't we basically just have two test sets?
#
# Typically, a validation set will be used to tune parameters of the model, and then final evaluation of OSE will be done on the test set.
#
# To demonstrate this, we will use the validation set we created above to find the optimal value of *k* for our KNN classifier.
#
# Again, don't worry about what this means exactly, just know that with increasing *k*, the model becomes less flexible.
# store the errors
error_dict = {}
# store the classifiers so we can retrieve the best one later
clf_dict = {}
for k in range(1, 20):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X_train, y_train)
    train_error, validation_error = compute_metrics(X_train, y_train, X_val, y_val, knn)
    error_dict[k] = {'train_error': train_error, 'validation_error': validation_error}
    clf_dict[k] = knn
plt.figure(figsize=(10,7))
plt.plot(list(error_dict.keys()), [d['train_error'] for d in error_dict.values()], label='train_error')
plt.plot(list(error_dict.keys()), [d['validation_error'] for d in error_dict.values()], label='val_error')
plt.xlabel('k')
plt.ylabel('error')
plt.legend();
# Judging by the graph, it looks like `k=5` is actually the best! Now we will use this value for *k* in evaluating the test set.
knn_5 = clf_dict[5]
test_error = classification_error(knn_5, X_test, y_test)
print(f'KNN, k=5 error:\nTrain: {error_dict[5]["train_error"]}% | Validation: {error_dict[5]["validation_error"]}% '\
f'| Test: {test_error}%')
# This is pretty cool! We were able to tune the parameter *k* on our validation set, and the OSE (test error) actually dropped from the `WellBalanced` (k=9) model we trained before!
# <a id='kfolds'></a> [Top of section](#modelselection)
# ## 2.1.B k-Fold Cross-validation
#
# Test error results can be subject to great variability, especially for smaller datasets, depending on how we split the data (i.e., which observations go to train and which go to val/test).
#
# Also, and quite obviously, holding out *more* data reduces the amount available for training, possibly leading us to *overestimate* the test error.
#
# One way to mitigate this is to use ***k*-fold cross validation.**
#
# In *k*-fold cross validation:
# 1. The original sample is randomly partitioned into *k* equal-sized parts, or folds
#
# 2. Each time, we leave out one fold, fit the model to the other *k*-1 folds combined into a single dataset, and then test the model against the left-out fold
#
# 3. This is done for each fold *i* = 1, 2, ..., *k*, and the results are then combined, using, for example, the mean error.
#
# 
#
# *Fig.: Creating multiple (K=5) train and test set pairs using cross-validation*
#
# This way, we use every observation to both train and test our model: each fold is used once as validation, while the *k*-1 remaining folds form the training set.
#
# The mean of the error of every fold can be seen as a proxy for OSE.
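# The three steps above can be sketched by hand with `sklearn.model_selection.KFold`, which exposes the fold indices directly (shown on a synthetic dataset so the snippet stands alone; in the notebook you would pass the beer `X` and `y` instead):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold

X_demo, y_demo = make_classification(n_samples=200, n_features=4, random_state=0)

# Step 1: randomly partition the data into k folds
kf = KFold(n_splits=5, shuffle=True, random_state=0)

fold_errors = []
for train_idx, test_idx in kf.split(X_demo):
    # Step 2: fit on the k-1 remaining folds, test on the left-out fold
    clf = LogisticRegression()
    clf.fit(X_demo[train_idx], y_demo[train_idx])
    y_pred = clf.predict(X_demo[test_idx])
    fold_errors.append(1 - accuracy_score(y_demo[test_idx], y_pred))

# Step 3: combine the per-fold results, e.g. with the mean
print(f'Per-fold errors: {[round(e, 3) for e in fold_errors]}')
print(f'Mean CV error: {np.mean(fold_errors):.3f}')
```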
# Again, scikit-learn already has cross validation implemented for us!
# +
from sklearn.model_selection import cross_val_score
for key, clf in clfs.items():
    # We can specify the number of folds we want with cv
    scores = cross_val_score(clf, X, y, cv=10, scoring=classification_error)
    mean_error = round(np.mean(scores), 2)
    var_error = round(np.var(scores), 2)
    print(f'---\n{key} validation error:\nMean: {mean_error}% | Variance: {var_error}')
# -
# Nonetheless, since each training set contains just part of the data, the estimated test error can, still, be biased upward.
# <a id='dataleak'></a> [Top of section](#modelselection)
# ## 2.1.C Data Leakage
# We must keep the test data aside for the entire modeling process (including data preprocessing, feature engineering, etc.), otherwise knowledge about the test set will *leak* into the model and ruin the experiment. Data leakage can be hard to detect, but if your results seem a little too good to be true, that's one sign. Ways to combat data leakage include:
#
# * Perform data preparation within your cross validation folds
# * Hold back a test dataset for final sanity check of your developed models
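# As a sketch of the first point: wrapping preprocessing in a `sklearn.pipeline.Pipeline` ensures that steps like scaling are re-fit inside each cross-validation fold, so the validation fold never influences the preprocessing (the synthetic data and the `StandardScaler` step here are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X_demo, y_demo = make_classification(n_samples=300, n_features=5, random_state=0)

# The scaler lives inside the pipeline, so cross_val_score re-fits it
# on each training fold only -- no statistics leak from the validation fold.
pipe = Pipeline([('scale', StandardScaler()),
                 ('clf', LogisticRegression())])

scores = cross_val_score(pipe, X_demo, y_demo, cv=5)
print(f'Mean CV accuracy (no leakage): {scores.mean():.3f}')
```

Fitting the scaler on the full dataset *before* splitting, by contrast, would let test-set statistics leak into training.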
# <a id='practical'></a> [Top of section](#modelselection)
# ## 2.2 Practical considerations
# In addition to choosing models based on their performance, there are practical considerations that data scientists need to factor into model selection. It is important to always keep your business case in mind, and consider if factors like speed and memory usage are more important than that extra 0.1% in accuracy.
# ### Training time
#
# * Sometimes the "best" models can take a long time to train (for certain deep learning models, it can be as long as a few days or weeks)
# * We need to consider if the business case warrants waiting this long for results
# * It is best practice to work towards a quick baseline, or **MVP** (Minimum Viable Product), with a simple model, and then iterate to improve it
# * Oftentimes a simple model will be quick to train and still yield decent performance
# * Furthermore, if your data is noisy, or if there is too much irreducible error, the most complex and advanced models will not be able to "learn" much more than a simple model anyway. So, it is better to try something quick before wasting time trying to train a complex model just to find out there is too much noise
# ### Prediction time
#
# * The time your model takes to return predictions is also very important, especially in production environments
# * Again, you need to consider your use case and decide what prediction time is reasonable (i.e., do you need real-time predictions??)
# * You can have a near-perfect model, but if it takes 30 seconds to return a prediction for one sample, a slightly worse model that takes 0.1 seconds for prediction might be better
# ### Memory (and $$$)
#
# * Some complex models (again, deep learning models are a good example) occupy a lot of disk space and/or require a large amount of memory (RAM)
# * These factors not only play into prediction time but can translate to actual costs for your business
# * Training and then serving heavy models in production may require more expensive machines that can impact margins!
# <a id='regularization'></a>
# [Return to top](#top)
# # 3. Regularized Linear Regression
# ### Subtopics
#
# 1. Intuition and use-cases
# 2. [Ridge, or L2](#ridge)
#     1. `sklearn.linear_model.Ridge`
# 3. [Lasso, or L1](#lasso)
#     1. `sklearn.linear_model.Lasso`
# 4. [Elastic Net](#elasticnet)
#     1. `sklearn.linear_model.ElasticNet`
# ## 3.1 Intuition and use-cases
# Throughout your journey into data science, it will be very common for you to deal with problems stemming from having a small dataset, noisy features and, also, a high sparsity level in that dataset (i.e. a lot of entries in your dataset will be "missing"). Many of the models we commonly use can suffer greatly under these circumstances, especially if they have a lot of parameters to be estimated (i.e. many degrees of freedom). Having many parameters is analogous to a model having high complexity, or high variance.
#
# In this SLU, we've already talked a lot about how high variance can lead to overfitting, and we've seen how to estimate a model's level of overfitting by comparing metrics on the train and test sets.
#
# Now we'll discuss a way to actually combat overfitting during training time, **Regularization**.
# ### Regularization
#
# > *(...) regularization is the process of introducing additional information in order to solve an ill-posed problem or to avoid overfitting.*
#
# Regularization rewards a model's *goodness of fit* while penalizing *model complexity*, automatically, while it is fitting the data!
#
# In this notebook, we will explore $L_1$ and $L_2$ regularizers.
# Previously, we described the loss function of linear regression as
#
# $$J = \frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2$$
#
# **This loss function has a (serious) problem: the optimization methods will adapt, as much as they can, the parameters to the training set.**
#
# To illustrate this, let's explore the following example:
# %load_ext autoreload
# %autoreload 2
from utils import create_dataset
data = create_dataset()
original_data = data.copy()
data.head(5)
plt.scatter(data['x'], data['y'], c='orange', s=5)
plt.xlabel('X')
plt.ylabel('y');
# As you can see, this dataset is noisy but has a clear relation between the input and the target. Let's fit a simple linear regression.
from sklearn.linear_model import LinearRegression
# +
X = data.drop('y', axis=1)
y = data['y']
lr = LinearRegression(normalize=True)
lr.fit(X, y)
plt.scatter(X['x'], data['y'], c='orange', s=5)
plt.plot(X['x'], lr.predict(X))
plt.title(f'Linear Regression (R²: {lr.score(X, y)})')
plt.xlabel('X')
plt.ylabel('y');
# -
# Clearly this model is underfit.
#
# In order to try to get a better result, let's add extra inputs: **powers of `data['x']`** (aka polynomial features).
from utils import expand_dataset, fit_and_plot_linear_regression
data = expand_dataset(original_data, 3)
data.head(5)
fit_and_plot_linear_regression(data)
# We improved our R²! Let's get crazy and see what happens with many more powers
data = expand_dataset(original_data, 10)
data.head(5)
fit_and_plot_linear_regression(data)
# Our $R^2$ is even better still!
#
# Let's keep going with more powers!
data = expand_dataset(original_data, 20)
fit_and_plot_linear_regression(data)
data = expand_dataset(original_data, 40)
fit_and_plot_linear_regression(data)
data = expand_dataset(original_data, 200)
fit_and_plot_linear_regression(data)
# You might have noticed that the model, when we add a large number of features of this type, starts to fit the noise as well! This means that a test set will, very likely, produce a really bad $R^2$, even though we keep increasing the $R^2$ on the training set (also, remember that issue with noise fitting and $R^2$?).
# <a id='ridge'></a> [Top of section](#regularization)
# ## 3.2 Ridge Regularization ($L_2$ norm)
# One thing that we can do to address this overfitting is apply $L_2$ regularization in our linear regression model. This means changing the loss function from the standard MSE
# $$J = \frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2$$
# to
# $$J = \frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2 + \lambda_2 \|\beta\|_2^2$$
# $$=$$
# $$J = \frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2 + \lambda_2 \sum_{k=1}^K \beta_k^2$$
#
# You'll notice that the left part of the loss function, $\frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2$, is still MSE. By keeping this part in the loss function, the model will continue to try to reduce this amount. If this part is low, that indicates a good fit. If it is high, that indicates a bad fit.
#
# However, the new part of the function, $\lambda_2 \sum_{k=1}^K \beta_k^2$, is the sum of the squares of the betas, or the coefficients. The model will also try to reduce this amount, meaning the model will favor smaller coefficients. If this part is low, that indicates a simple model. If it is high, that indicates a complex model.
#
# Ideally we will have low values for both parts, resulting in a well-fitted, simple model.
#
# In the $L_2$ loss function, $\lambda_2$ is the strength of the regularization part, which is a parameter that can be tuned.
# As you might have noticed, $\beta_0$ (i.e. the intercept) is excluded from the regularization expression (*k* starts at 1).
#
# This is due to certain theoretical aspects related to the intercept that are completely out of scope here. If you are interested in knowing more about them, check the discussion on [stats.stackexchange.com](https://stats.stackexchange.com/questions/86991/reason-for-not-shrinking-the-bias-intercept-term-in-regression) or check this **bible** called [*Elements of Statistical Learning*](https://web.stanford.edu/~hastie/ElemStatLearn/) (there is also a MOOC for this).
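# For intuition, ridge regression also has a closed-form solution, $\beta = (X^TX + \lambda_2 I)^{-1}X^Ty$. A minimal NumPy sketch (synthetic data; centering both `X` and `y` so the unpenalized intercept can be handled separately is an assumption of this implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(100, 3))
y_demo = X_demo @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, size=100)

def ridge_fit(X, y, lam):
    """Closed-form ridge: beta = (X^T X + lam * I)^(-1) X^T y.
    X and y are centered first, so the intercept is not penalized."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    n_features = Xc.shape[1]
    return np.linalg.solve(Xc.T @ Xc + lam * np.eye(n_features), Xc.T @ yc)

# A larger lambda shrinks the coefficient vector toward zero
for lam in (0.0, 10.0, 1000.0):
    beta = ridge_fit(X_demo, y_demo, lam)
    print(f'lambda={lam:7.1f} | ||beta||: {np.linalg.norm(beta):.3f}')
```

As $\lambda_2$ grows, the penalty dominates and all coefficients shrink toward 0 (but never exactly reach it).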
# We'll go through an example using a new class that implements this type of regularization: [Ridge](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html).
from sklearn.linear_model import Ridge
# +
data = expand_dataset(original_data, 40)
X = data.drop('y', axis=1)
y = data['y']
# alpha here is the same as lambda in the loss function from above
ridge = Ridge(normalize=True, alpha=0.0001, random_state=10)
ridge.fit(X, y)
plt.scatter(X['x'], data['y'], c='orange', s=5)
plt.plot(X['x'], ridge.predict(X))
plt.title(f'Ridge Regression (R²: {ridge.score(X, y)})')
plt.xlabel('X')
plt.ylabel('y');
# -
# Even though we are still allowing the model to have up to 40-degree polynomial features, the model doesn't look very overfit!
# We can visualize the coefficients of the non-regularized Linear Regression and the regularized version to see what's happening:
# No regularization
lr = LinearRegression(normalize=True)
lr.fit(X, y)
plt.figure(figsize = (8,5))
plt.plot(range(len(lr.coef_)), [abs(coef) for coef in lr.coef_], marker='o', markerfacecolor='r')
plt.xlabel('Polynomial degree')
plt.ylabel('Coef. magnitude');
# With regularization
plt.figure(figsize = (8,5))
plt.plot(range(len(ridge.coef_)), [abs(coef) for coef in ridge.coef_], marker='o', markerfacecolor='r')
plt.xlabel('Polynomial degree')
plt.ylabel('Coef. magnitude');
# The Ridge model is clearly less overfit than the normal Linear Regression model, and only has 3-4 significant parameters, while the LR has 10-15!
#
# From this, we see that another benefit of regularization can be increased **interpretability**. The regularized model automatically learns which features are important.
# Let's increase the number of power features and see what happens
# +
data = expand_dataset(original_data, 400)
X = data.drop('y', axis=1)
y = data['y']
ridge = Ridge(normalize=True, alpha=0.0001, random_state=10)
ridge.fit(X, y)
plt.scatter(X['x'], data['y'], c='orange', s=5)
plt.plot(X['x'], ridge.predict(X))
plt.title(f'Ridge Regression (R²: {ridge.score(X, y)})')
plt.xlabel('X')
plt.ylabel('y');
# -
# Interesting! Even after adding more features, out model didn't change (almost) anything!
#
# By the way, keep in mind that if we used the adjusted R², we would now get an awful score because of all the useless features! If you don't believe us, let's check the coefficients:
(ridge.coef_ == 0).sum() / ridge.coef_.shape[0]
# ~44% of feature coefficients are 0! This means that those features are completely useless and, as desired, the adjusted R² would greatly penalize us!
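# To see why, recall the adjusted R² formula, which penalizes each extra feature. A minimal sketch (the function name and the numbers are illustrative, not from this notebook's dataset):

```python
def adjusted_r2(r2, n_samples, n_features):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n_samples - 1) / (n_samples - n_features - 1)

# Even a high raw R^2 drops sharply when most of the features are useless:
print(adjusted_r2(0.95, n_samples=500, n_features=400))  # ~0.748
```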
# <a id='lasso'></a> [Top of section](#regularization)
# ## 3.3 Lasso Regularization ($L_1$ norm)
# Besides $L_2$, we also have $L_1$ regularization.
#
# The loss function for $L_1$ is
#
# $$J = \frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2 + \lambda_1 \|\beta\|_1 = \frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2 + \lambda_1 \sum_{k=1}^K \left|\beta_k\right|$$
#
# This is usually called [Lasso regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Lasso.html). The difference from ridge is that instead of squaring the coefficients, lasso penalizes their absolute values. This penalty is far more aggressive at constraining coefficient magnitude: it tends to drive coefficients exactly to zero, so in many real-world scenarios only a few features end up with non-zero coefficients.
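# To make the loss concrete, here is a minimal numpy sketch of the L1-penalized objective above (the function name and the toy numbers are illustrative, not part of scikit-learn):

```python
import numpy as np

def lasso_loss(y, y_hat, beta, lam):
    """Mean squared error plus an L1 penalty on the coefficients."""
    mse = np.mean((y - y_hat) ** 2)
    return mse + lam * np.sum(np.abs(beta))

y = np.array([1.0, 2.0, 3.0])
y_hat = np.array([1.1, 1.9, 3.2])
beta = np.array([0.5, -2.0, 0.0])
print(lasso_loss(y, y_hat, beta, lam=0.1))  # 0.02 (MSE) + 0.25 (penalty) = 0.27
```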
# Let's repeat the same examples, this time using Lasso.
from sklearn.linear_model import Lasso
# +
data = expand_dataset(original_data, 40)
X = data.drop('y', axis=1)
y = data['y']
# alpha here is the same as lambda in the loss function from above
lasso = Lasso(normalize=True, alpha=0.0002, random_state=10)
lasso.fit(X, y)
plt.scatter(X['x'], data['y'], c='orange', s=5)
plt.plot(X['x'], lasso.predict(X))
plt.title(f'Lasso Regression (R²: {lasso.score(X, y)})')
plt.xlabel('X')
plt.ylabel('y');
# -
# Let's visualize the coefficients of the fitted model:
plt.figure(figsize = (10,5))
plt.plot(range(len(lasso.coef_)), [abs(coef) for coef in lasso.coef_], marker='o', markerfacecolor='r')
plt.xlabel('Polynomial degree')
plt.ylabel('Coef. magnitude');
# Like we saw with Ridge, just a few features (the lowest-order polynomials) have significant coefficients; with Lasso, all the other coefficients are exactly zero!
# <a id='elasticnet'></a> [Top of section](#regularization)
# ## 3.4 Elastic Net Regularization ($L_1 + L_2$ norm)
# Finally, we can combine both $L_1$ and $L_2$ penalties in what is called [Elastic Net regression](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.ElasticNet.html)
#
# The loss function for elastic net just adds the extra parts we added in ridge and lasso:
#
# $$J = \frac{1}{N} \sum_{n=1}^N (y_n - \hat{y}_n)^2 + \lambda_1 \sum_{k=1}^K \left|\beta_k\right| + \lambda_2 \sum_{k=1}^K \beta_k^2$$
#
# We'll repeat the same example again using Elastic Net. This time `alpha` is the total weight of the penalty terms ($\lambda_1 + \lambda_2$), and `l1_ratio` is the fraction of that total assigned to the $L_1$ term, i.e. $\lambda_1 / (\lambda_1 + \lambda_2)$; so `l1_ratio=1` recovers lasso and `l1_ratio=0` recovers ridge.
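# As a quick sanity check of that parameterization (a sketch on synthetic data, assuming a recent scikit-learn), `ElasticNet` with `l1_ratio=1` should reproduce `Lasso` with the same `alpha`:

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet

rng = np.random.RandomState(0)
X_demo = rng.randn(100, 5)
y_demo = X_demo @ np.array([1.0, 0.0, -2.0, 0.0, 0.5]) + 0.1 * rng.randn(100)

lasso_m = Lasso(alpha=0.1).fit(X_demo, y_demo)
en_m = ElasticNet(alpha=0.1, l1_ratio=1.0).fit(X_demo, y_demo)
print(np.allclose(lasso_m.coef_, en_m.coef_))  # True: a pure-L1 elastic net is lasso
```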
from sklearn.linear_model import ElasticNet
# +
data = expand_dataset(original_data, 40)
X = data.drop('y', axis=1)
y = data['y']
# alpha here is the same as lambda in the loss function from above
en = ElasticNet(normalize=True, alpha=0.00001, l1_ratio=0.5, random_state=10)
en.fit(X, y)
plt.scatter(X['x'], data['y'], c='orange', s=5)
plt.plot(X['x'], en.predict(X))
plt.title(f'Elastic Net Regression (R²: {en.score(X, y)})')
plt.xlabel('X')
plt.ylabel('y');
# -
plt.figure(figsize = (10,5))
plt.plot(range(len(en.coef_)), [abs(coef) for coef in en.coef_], marker='o', markerfacecolor='r')
plt.xlabel('Polynomial degree')
plt.ylabel('Coef. magnitude');
# Again we see a similar behavior to the other regularizers!
# <a id='overfit'></a>
# [Return to top](#top)
#
# # 4. Conclusion (More on Overfitting)
#
# How to deal with overfitting is one of the fundamentals that every data scientist must know, so we're going to talk about it some more! Here are some already-mentioned and additional techniques to prevent overfitting:
#
# #### Removing features
# We can manually remove features to reduce model complexity. In the beer example, it's possible that the color feature is not very informative and the model can classify IPAs on bitterness alone. So, we might try removing the color feature. Regularized linear regression can be considered a subset of this strategy since it automatically encourages the number of relevant features to be small.
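# In fact, a regularized model can drive feature removal directly. A sketch using scikit-learn's `SelectFromModel` on synthetic data (the data and alpha are illustrative assumptions, not the beer example):

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
X_all = rng.randn(200, 10)
# Only columns 0 and 3 actually drive the target.
y_all = 3 * X_all[:, 0] - 2 * X_all[:, 3] + 0.1 * rng.randn(200)

# Keep only the features whose lasso coefficient is non-negligible.
selector = SelectFromModel(Lasso(alpha=0.1)).fit(X_all, y_all)
X_reduced = selector.transform(X_all)
print(X_reduced.shape)  # most of the 10 columns should be dropped
```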
#
# #### Adding more data
# If you can get your hands on it, adding additional data can make the signal in your data stronger, provided it is clean data and doesn't just add more noise. If we found some new beers to add to our training data it might help the model generalize better.
#
# #### Using a simpler model
# Using a model with fewer learnable parameters can reduce overfitting since very complex models can overreact to the specifics of the training data and learn random noise. To classify IPAs based on 2 features, choosing logistic regression over, say, a deep neural network makes sense because it is a fairly simple task. Of course, there are many cases where we need a complex model, image classification for example. Basically, choose a model that is fit to the task.
#
# #### Algorithm-specific techniques and regularization
# When learning new machine learning algorithms (like in the next units of the bootcamp!), always ask, "How can we prevent overfitting with this model?"
# Thank you for finishing this learning notebook! Hopefully you now have a better idea of
# 1. How to choose the best models, optimizing for both bias and variance and keeping practical considerations in mind
# 2. Different techniques to separate your data into train and validation/test sets, and the benefits and drawbacks of each
# 3. What is overfitting, why it's bad, and ways to combat it, including with regularization
| S01 - Bootcamp and Binary Classification/SLU09 - Model Selection and Overfitting/Learning notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Julianxillo/Datasciencice300/blob/main/ejercicio_03.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="rnqKeyV2XMzM"
# # Exercise 1
#
# Write a program that shows only the numbers from the given list that satisfy the following conditions:
#
# ```lista = [14, 75, 150, 180, 145, 525, 50, 10, 56,55]```
#
# 1. The number must be divisible by five
# 2. If the number is greater than 150, skip it and move on to the next number
# 3. If the number is greater than 500, stop the loop
# + colab={"base_uri": "https://localhost:8080/"} id="dSKuBmZXXRSY" outputId="9d0e6a9c-0e8a-4279-d398-33a87a2fe3cb"
lista = [14, 75, 150, 180, 145, 525, 50, 10, 56,55]
listaf = []
for x in lista:
if x>500:
break
elif x>150:
continue
elif x%5==0:
listaf.append(x)
print(listaf)
# + [markdown] id="KI28Moh5mzDj"
# # Exercise 2
#
# Write a program that counts the total number of digits in an integer using a while loop.
#
# For example, for the number 12345 the output should be 5.
#
# + colab={"base_uri": "https://localhost:8080/"} id="bVIpHT_-tZQk" outputId="c8763fb3-6e27-4f04-f1d2-cdd487cdead0"
a = int(input("Please enter a positive number: "))
i=0
while a>0:
a=a//10
i=i+1
print("The number you entered has", i, "digits")
# + [markdown] id="TCqQf_KqvVy8"
# # Exercise 3
#
# Write a program that uses a loop to print the following inverse number pattern. **Hint:** you can use a nested loop.
#
# ```
# 6 5 4 3 2 1
# 5 4 3 2 1
# 4 3 2 1
# 3 2 1
# 2 1
# 1
# ```
# + colab={"base_uri": "https://localhost:8080/"} id="-m9uspadxKUn" outputId="ff88fe97-edb4-4e98-d0e1-bf0995007d8f"
for n in range(6, 0, -1):
    print(*range(n, 0, -1))
# + [markdown] id="TMAvCzLJDdct"
# # Exercise 4
#
# Write a program that computes the sum of the following series up to the n-th term:
#
# $$3+33+333+3333+33333+.....$$
#
# For example, if n = 5 the series becomes 3 + 33 + 333 + 3333 + 33333 = 37035
# + colab={"base_uri": "https://localhost:8080/"} id="giWEUH2pTnBW" outputId="3e193fa5-3a8f-4937-a3e2-5c0f32e24c1a"
n = int(input('Please enter the value of n: '))
t = 3
p = 0
for i in range(n):
    print(t, end="+" if i < n - 1 else "\n")
    p += t
    t = t * 10 + 3
print("the sum of the series is", p)
# + [markdown] id="Mh6wCRtoHhtL"
# # Ejercicio 5
#
# Create a function called ```funcion_xy``` that takes a list of **integers** as its argument (given below) and additionally allows extra parameters via ```*args``` and ```**kargs```.
#
# ```lista=[2,1,3,5,4,7,9,8,6,10]```
#
# The function must have the following features:
#
# ```*args```
#
# 1. Sort the list from smallest to largest and return it.
# 2. Extract and show only the even numbers.
# 3. Compute the 10% trimmed mean of the list. For that you can import the following library:
# ```from scipy import stats```
# and use the function ```stats.trim_mean(lista, 0.1, axis=0)```
#
# ```**kargs```
#
# 1. ```reverse == True``` => the list should be reversed
# 2. ```odd == True``` => return a list with the labels 'even' and 'odd' for each position
# 3. ```filter == True``` => extract the values greater than 4 and show them
#
#
# **Note:** in every case, don't forget the importance of the return statement, and try to apply all the concepts we have seen to solve it as efficiently as possible.
# + id="i4w-qcHuHxgp"
import numpy as np
from scipy import stats
def funcion_xy(l, *args, **kargs):
    if args:
        if args[0] == 1:
            print('1. Sorting the list from smallest to largest:')
            return np.sort(l)
        elif args[0] == 2:
            print('2. Extracting the even numbers:')
            return [i for i in l if i % 2 == 0]
        elif args[0] == 3:
            print('3. The 10% trimmed mean of the list:')
            return stats.trim_mean(l, 0.1, axis=0)
    if kargs:
        if kargs.get('reverse'):
            print('4. Reversed list:')
            return l[::-1]
        if kargs.get('odd'):
            print('5. Label each position even or odd:')
            return ['even' if i % 2 == 0 else 'odd' for i in l]
        if kargs.get('filter'):
            print('6. Extracting the values greater than 4:')
            return [i for i in l if i > 4]
# + colab={"base_uri": "https://localhost:8080/"} id="BfDVZ-u1Is9N" outputId="a015252a-7856-4027-d867-8c83b7f0afcc"
lista=[2,1,3,5,4,7,9,8,6,10]
funcion_xy(lista, 1)
# + colab={"base_uri": "https://localhost:8080/"} id="SzJBWuWiLnt5" outputId="65335859-9a37-4493-df1c-c01e0ac7591f"
lista=[2,1,3,5,4,7,9,8,6,10]
funcion_xy(lista, 2)
# + colab={"base_uri": "https://localhost:8080/"} id="4JlYBD1iOaam" outputId="98a2fea6-8d02-430d-d7c9-a47fb11bb358"
lista=[2,1,3,5,4,7,9,8,6,10]
funcion_xy(lista, 3)
# + colab={"base_uri": "https://localhost:8080/"} id="M10VEIFNOtC9" outputId="fc7b53a4-dc1d-4e42-dd57-d564304fd7d8"
lista=[2,1,3,5,4,7,9,8,6,10]
funcion_xy(lista,reverse=True)
# + colab={"base_uri": "https://localhost:8080/"} id="lEfKw6XmQj1b" outputId="f4e62682-dcba-4cce-d2af-576854707ab1"
lista=[2,1,3,5,4,7,9,8,6,10]
funcion_xy(lista,odd=True)
# + colab={"base_uri": "https://localhost:8080/"} id="NWwRjdM0SbZM" outputId="ad9d9da4-cd64-42a2-b56b-874cde53fd72"
lista=[2,1,3,5,4,7,9,8,6,10]
funcion_xy(lista,filter=True)
| ejercicio_03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ### What is formal logic?
# A system of meaningful expressions formed by combining symbols according to rules.
# + [markdown] slideshow={"slide_type": "fragment"}
# $\Rightarrow$ **A LANGUAGE**
# + [markdown] slideshow={"slide_type": "fragment"}
# The order of the symbols is fixed by a kind of grammar, while the truth value of the resulting expressions is governed by the rules of logical inference.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Propositional logic
# - deals with logical variables and the operations that can be performed on them
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - **logical variable**: can only take the values TRUE and FALSE
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - for example, the truth value of a statement or of a mathematical expression
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - like variables of other types, logical variables can be combined through *logical operations*
# + [markdown] slideshow={"slide_type": "slide"}
# ### What can NOT be the value of a logical variable?
# - 3
# + [markdown] slideshow={"slide_type": "fragment"}
# - "pig"
# + [markdown] slideshow={"slide_type": "fragment"}
# - "I don't know"
# + [markdown] slideshow={"slide_type": "fragment"}
# - "it depends"
# + [markdown] slideshow={"slide_type": "slide"}
# ### Turn it into a logical variable!
# *Not a logical variable*: the probability of rain (its value can be anything from 0% to 100%)
#
# + [markdown] slideshow={"slide_type": "fragment"}
# A *logical variable* derived from the above, which is true if...
#
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 1. The probability of rain is greater than 50%:
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - Is rain more likely than no rain?
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 2. The probability of rain is 100%, otherwise false:
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - Is it certain to rain?
# + [markdown] slideshow={"slide_type": "slide"}
# ### Turn it into a logical variable!
# *Not a logical variable*: my exam grade (an integer between 1 and 5)
#
# + [markdown] slideshow={"slide_type": "fragment"}
# A *logical variable* derived from the above: ...
# + [markdown] slideshow={"slide_type": "slide"}
# ### Operations
# - $\neg$: negation
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\land$: and (conjunction)
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\lor$: or (disjunction)
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\oplus$: exclusive or
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\Rightarrow$: implication
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\Leftrightarrow$: equivalence
# + [markdown] slideshow={"slide_type": "slide"}
# ### Statements:
# A: Misi likes kevert, B: Misi takes part in the Ábel cup, C: Misi is a Rajk student, D: few people like kevert
# + [markdown] slideshow={"slide_type": "slide"}
# #### 1. Misi likes kevert and takes part in the Ábel cup
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - A $\land$ B
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 2. Misi does not like kevert, but takes part in the Ábel cup
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\neg$ A $\land$ B
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 3. If few people like kevert and Misi likes it, then Misi is a Rajk student
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - (D $\land$ A) $\Rightarrow$ C
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 4. Misi either is not a Rajk student, or likes kevert and takes part in the Ábel cup
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\neg$ C $\oplus$ (A $\land$ B) OR $\neg$ C $\lor$ (A $\land$ B)
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 5. Misi takes part in the Ábel cup if and only if he either likes kevert or is a Rajk student
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - B $\Leftrightarrow$ (A $\lor$ C)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Truth table
#
# | A | B | $\neg$A | A$\land$B | A$\lor$B | A$\oplus$B | A$\Rightarrow$B | A$\Leftrightarrow$B |
# | -- | -- | -- | -- | -- | -- | -- | -- |
# | T | T | | | | | | |
# | T | F | | | | | | |
# | F | T | | | | | | |
# | F | F | | | | | | |
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Truth table
#
#
# | A | B | $\neg$A | A$\land$B | A$\lor$B | A$\oplus$B | A$\Rightarrow$B | A$\Leftrightarrow$B |
# | -- | -- | -- | -- | -- | -- | -- | -- |
# | T | T | F | | | | | |
# | T | F | F | | | | | |
# | F | T | T | | | | | |
# | F | F | T | | | | | |
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Truth table
#
#
# | A | B | $\neg$A | A$\land$B | A$\lor$B | A$\oplus$B | A$\Rightarrow$B | A$\Leftrightarrow$B |
# | -- | -- | -- | -- | -- | -- | -- | -- |
# | T | T | F | T | | | | |
# | T | F | F | F | | | | |
# | F | T | T | F | | | | |
# | F | F | T | F | | | | |
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Truth table
#
#
# | A | B | $\neg$A | A$\land$B | A$\lor$B | A$\oplus$B | A$\Rightarrow$B | A$\Leftrightarrow$B |
# | -- | -- | -- | -- | -- | -- | -- | -- |
# | T | T | F | T | T | | | |
# | T | F | F | F | T | | | |
# | F | T | T | F | T | | | |
# | F | F | T | F | F | | | |
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Truth table
#
#
# | A | B | $\neg$A | A$\land$B | A$\lor$B | A$\oplus$B | A$\Rightarrow$B | A$\Leftrightarrow$B |
# | -- | -- | -- | -- | -- | -- | -- | -- |
# | T | T | F | T | T | F | | |
# | T | F | F | F | T | T | | |
# | F | T | T | F | T | T | | |
# | F | F | T | F | F | F | | |
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Truth table
#
#
# | A | B | $\neg$A | A$\land$B | A$\lor$B | A$\oplus$B | A$\Rightarrow$B | A$\Leftrightarrow$B |
# | -- | -- | -- | -- | -- | -- | -- | -- |
# | T | T | F | T | T | F | T | |
# | T | F | F | F | T | T | F | |
# | F | T | T | F | T | T | T | |
# | F | F | T | F | F | F | T | |
# + [markdown] slideshow={"slide_type": "slide"}
# ### Truth table
#
#
# | A | B | $\neg$A | A$\land$B | A$\lor$B | A$\oplus$B | A$\Rightarrow$B | A$\Leftrightarrow$B |
# | -- | -- | -- | -- | -- | -- | -- | -- |
# | T | T | F | T | T | F | T | T |
# | T | F | F | F | T | T | F | F |
# | F | T | T | F | T | T | T | F |
# | F | F | T | F | F | F | T | T |
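# The completed table can also be generated programmatically; a small Python sketch using booleans (the helper `implies` is illustrative, since Python has no implication operator):

```python
from itertools import product

def implies(a, b):
    # A => B is false only when A is true and B is false.
    return (not a) or b

print("A      B      notA   and    or     xor    =>     <=>")
for a, b in product([True, False], repeat=2):
    row = [a, b, not a, a and b, a or b, a != b, implies(a, b), a == b]
    print(" ".join(f"{str(v):6}" for v in row))
```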
# + [markdown] slideshow={"slide_type": "slide"}
# ### Half adder
# The ALU (*arithmetic logic unit*) is a circuit that performs simple computations; it is the cornerstone of a computer's processor. One of its components is the half adder, which adds binary digits.
#
# + [markdown] slideshow={"slide_type": "fragment"}
# <img src="halfadder_1.png">
#
# + [markdown] slideshow={"slide_type": "subslide"}
# <img src="halfadder_2.png">
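# In code, the half adder is just an XOR for the sum bit and an AND for the carry bit; a minimal sketch:

```python
def half_adder(a: bool, b: bool) -> tuple:
    """Return (sum_bit, carry_bit) for two input bits."""
    return a != b, a and b  # sum = a XOR b, carry = a AND b

print(half_adder(True, True))   # (False, True): 1 + 1 = 10 in binary
print(half_adder(True, False))  # (True, False): 1 + 0 = 1
```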
# + [markdown] slideshow={"slide_type": "slide"}
# ### <NAME>?
# - variables
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - a defined set of possible values the variables can take
#
# + [markdown] slideshow={"slide_type": "fragment"}
# - operations between the variables
#
# + [markdown] slideshow={"slide_type": "fragment"}
# ### an algebra! <!-- .element: class="fragment" data-fragment-index="1" -->
# + [markdown] slideshow={"slide_type": "slide"}
# ### What are these?
# - $ x \times 0 = 0 $
# + [markdown] slideshow={"slide_type": "fragment"}
# - $ y ^ 0 = 1 $
# + [markdown] slideshow={"slide_type": "fragment"}
# - $ (a + b)^2 = a^2 + 2ab + b^2 $
# + [markdown] slideshow={"slide_type": "fragment"}
# ### theorems
# + [markdown] slideshow={"slide_type": "fragment"}
# formulas that are true regardless of the values of the variables appearing in them
# + [markdown] slideshow={"slide_type": "slide"}
# ### Theorems
# #### 1. associativity:
# - $A \lor (B \lor C) \Leftrightarrow (A \lor B) \lor C$
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 2. distributivity:
# - $A \land (B \lor C) \Leftrightarrow (A \land B) \lor (A \land C)$
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 3. De Morgan's law:
# - $\neg (A \lor B) \Leftrightarrow \neg A \land \neg B$
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 4. syllogism:
# - $(A \Rightarrow B) \land (B \Rightarrow C) \Rightarrow (A \Rightarrow C)$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Is it a theorem?
# #### 1. $A \Rightarrow B \Leftrightarrow \neg A \lor B$
#
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 2. $(A \Rightarrow B) \lor (B \Rightarrow A)$
# + [markdown] slideshow={"slide_type": "slide"}
# ### First-order logic
# #### 1. Predicates
#
# - expressions that contain parameter(s)
# - their truth value depends on the parameter(s) appearing in them
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 2. Quantifiers
#
# - $\forall$: "for all"
# - $\exists$: "there exists"
# + [markdown] slideshow={"slide_type": "slide"}
# ### Predicates
# #### 1. single-variable
#
# - $P$: "even number"
# - $P(x)$ is true if (and only if) $x$ is even
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 2. multi-variable
#
# - $NE$: "greater than or equal"
# - $NE(x,y)$ is true if (and only if) $x \geq y$
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 3. in a formula
#
# - $NE(x,y) \land P(y)$
# - $NE(x,y) \Rightarrow NE(y,x)$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Quantifiers
#
# - $\forall$: "for all"
# - $\exists$: "there exists"
# + [markdown] slideshow={"slide_type": "fragment"}
# In a formula:
#
# #### 1. $\forall x (P(x))$: for every $x$, the truth value of $P(x)$ is TRUE
#
# #### 2. $\exists x (P(x))$: there is at least one $x$ for which the truth value of $P(x)$ is TRUE
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 3. $\forall x ( P(2x) )$
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 4. $\forall x ( \exists y (NE(x,y)))$
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 5. $\nexists x (\neg NE(x,x))$
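# Over a finite domain, the quantifiers correspond to Python's `all` and `any`; a sketch checking formulas 3-5 with the predicates above (the domain `range(10)` is an illustrative assumption):

```python
domain = range(10)

def P(x):        # "x is an even number"
    return x % 2 == 0

def NE(x, y):    # "x is greater than or equal to y"
    return x >= y

print(all(P(2 * x) for x in domain))                        # forall x P(2x): True
print(all(any(NE(x, y) for y in domain) for x in domain))   # forall x exists y NE(x,y): True
print(not any(not NE(x, x) for x in domain))                # no x with not NE(x,x): True
```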
# + [markdown] slideshow={"slide_type": "slide"}
# ### Bound parameter
# If every occurrence of a parameter in a statement falls within the scope of some quantifier, we say the parameter is **bound**; otherwise it is **free**.
#
# ### Closed formula
# A formula with no free parameters is a **closed formula**. Otherwise it is an **open formula**.
# + [markdown] slideshow={"slide_type": "slide"}
# ### Predicates:
# $F(x)$: $x$ is a man, $N(x)$: $x$ is a woman, $V(x,y)$: $x$ finds $y$ attractive
#
# ### Express with predicates
# #### 1. $d$ likes every woman
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 2. $k$ is a bisexual woman
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 3. there exist asexual people
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 4. everyone is bisexual
# + [markdown] slideshow={"slide_type": "fragment"}
# #### 5. only among the men are there gay people
| lecture_notebooks/prelog1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
from numpy import array
from keras.optimizers import Adam
from keras.models import Sequential
from keras.metrics import Accuracy
from keras.layers import Dense, LSTM, Dropout
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
import tensorflow as tf
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Restrict TensorFlow to only use the fourth GPU
tf.config.experimental.set_visible_devices(gpus[0], 'GPU')
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
# -
def create_dataset(dataset, look_back=1):
dataX, dataY = [], []
for i in range(len(dataset)-look_back):
dataX.append(dataset[i:(i+look_back), 0])
dataY.append(dataset[i + look_back, 0])
return np.array(dataX), np.array(dataY)
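# A quick sanity check of the windowing logic on toy data (the helper is repeated here so the cell runs on its own; `toy` is illustrative, not from the gold dataset):

```python
import numpy as np

def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset) - look_back):
        dataX.append(dataset[i:(i + look_back), 0])
        dataY.append(dataset[i + look_back, 0])
    return np.array(dataX), np.array(dataY)

toy = np.arange(5, dtype=float).reshape(-1, 1)   # [[0], [1], [2], [3], [4]]
Xw, yw = create_dataset(toy, look_back=2)
print(Xw.tolist())  # [[0.0, 1.0], [1.0, 2.0], [2.0, 3.0]]
print(yw.tolist())  # [2.0, 3.0, 4.0]
```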
# +
look_back = 8
data=pd.read_csv('gold_train.csv')
dataset = np.asarray(data[['price']])
scaler = MinMaxScaler(feature_range=(0, 1))
x_sample = [200, 2000]
scaler.fit(np.array(x_sample)[:, np.newaxis])
#dataset = scaler.fit_transform(dataset)
# split into train and test sets
train_size = int(len(dataset) * 0.80)
test_size = len(dataset) - train_size
trainx, testy = dataset[0:train_size,:], dataset[train_size:len(dataset),:]
train=scaler.transform(trainx)
test=scaler.transform(testy)
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)
trainX = np.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = np.reshape(testX, (testX.shape[0], testX.shape[1], 1))
print(len(testX),len(testY))
# -
batch_size = 16
model = Sequential()
model.add(LSTM(25, activation='relu', input_shape=(look_back, 1)))
#model.add(Dropout(0.2))
model.add(Dense(5,activation='relu'))
model.add(Dense(1))
opt = Adam(learning_rate=0.0012)
model.compile(loss='mean_squared_error', optimizer=opt)
history=model.fit(trainX, trainY, epochs=10, batch_size=batch_size,validation_data=(testX, testY), shuffle=False)
trainScore = model.evaluate(trainX, trainY, batch_size=batch_size, verbose=0)
print('Train Score: ', trainScore)
testScore = model.evaluate(testX, testY, batch_size=batch_size, verbose=0)
#print(testX[:252])
print('Test Score: ', testScore)
plt.plot(history.history['loss'], label='LSTM train', color='brown')
plt.plot(history.history['val_loss'], label='LSTM test', color='blue')
plt.legend()
plt.show()
# +
input=test[test_size-look_back:]
r_data=pd.read_csv('gold_july.csv').values
size=r_data.size
predicted_y=[]
for i in range(size):
#print(input)
input_lstm=np.reshape(input[i:look_back+i],newshape=1*look_back)
input_lstm=np.reshape(input_lstm, (1,look_back,1))
#print(input_lstm)
y=model.predict(input_lstm,verbose=0)
print(y)
input=np.append(input,y[0])
y=scaler.inverse_transform(y)
predicted_y.append(y[0])
#print(predicted_y)
#i=i+1
#print(predicted_y)
# -
plt.figure(figsize=(12,5))
plt.plot(r_data, color = 'red', label = 'Real Test Data')
plt.plot(predicted_y, color = 'blue', label = 'Predicted Test Data')
plt.title('one month ahead gold price prediction using vanilla LSTM')
plt.xlabel('Time')
plt.ylabel('gold price')
plt.legend()
plt.show()
# +
input=test[test_size-look_back:]
r_data=pd.read_csv('gold_july.csv').values
size=r_data.size
predicted_1y=[]
for i in range(size):
#print(input)
input_lstm=np.reshape(input[i:look_back+i],newshape=1*look_back)
input_lstm=np.reshape(input_lstm, (1,look_back,1))
#print(input_lstm)
y=model.predict(input_lstm,verbose=0)
#print(y)
x = scaler.transform([r_data[i]])
input=np.append(input,x)
y=scaler.inverse_transform(y)
predicted_1y.append(y[0])
#print(predicted_y)
#i=i+1
#print(scaler.transform(r_data))
# -
plt.figure(figsize=(12,5))
plt.plot(r_data, color = 'red', label = 'Real Test Data')
plt.plot(predicted_1y, color = 'blue', label = 'Predicted Test Data')
plt.title('one day ahead gold price prediction using vanilla LSTM')
plt.xlabel('Time')
plt.ylabel('gold price')
plt.legend()
plt.show()
import pickle
filename = 'vanilla_lstm_gold.sav'
pickle.dump(model, open(filename, 'wb'))
| gold and oil forecast/vanillalstm_gold.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# %matplotlib inline
import io
from collections import OrderedDict
import datetime
import pytz
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
# plt.style.use('ggplot')
plt.style.use('fivethirtyeight')
# +
from IPython.display import Markdown, display, HTML
def printhtml(string):
display(HTML(string))
def printmd(string):
display(Markdown(string))
# -
# # Parse the file into DataFrames
tables = ['ZPILLOWUSER', 'ZSLEEPNOTE', 'Z_2SLEEPSESSION', 'ZSLEEPSESSION', 'ZSLEEPSTAGEDATAPOINT', 'ZSNOOZELAB', 'ZSOUNDDATAPOINT', 'Z_PRIMARYKEY', 'Z_METADATA', 'Z_MODELCACHE', 'Y_UBMETA', 'Y_UBRANGE', 'Y_UBKVS']
# tables = ['ZSLEEPSESSION', 'ZSLEEPSTAGEDATAPOINT']
def getDictFromString(string):
tokens = string.split()
# tokens.reverse()
row = OrderedDict()
while tokens:
# Values may be multiple words. We build up a value until we reach a
# separator. The value is then the concatenation of the reverse of the list.
value_parts = [tokens.pop()]
value_part_or_sep = tokens.pop() # ' -> '
while value_part_or_sep != '->':
value_parts.append(value_part_or_sep)
value_part_or_sep = tokens.pop()
value_parts.reverse()
# Keys are one word.
key = tokens.pop()
row[key] = ' '.join(value_parts)
return row
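# A small standalone check of the right-to-left parsing strategy (the helper body is repeated here so the cell runs on its own; the sample string is illustrative). Note that because tokens are consumed from the end, the keys come out in reverse order, which is why the columns are reversed again after the dataframes are built:

```python
from collections import OrderedDict

def parse_pairs(string):
    tokens = string.split()
    row = OrderedDict()
    while tokens:
        # Values may be multiple words; collect until the '->' separator.
        value_parts = [tokens.pop()]
        part = tokens.pop()
        while part != '->':
            value_parts.append(part)
            part = tokens.pop()
        value_parts.reverse()
        row[tokens.pop()] = ' '.join(value_parts)
    return row

print(parse_pairs('Z_PK -> 5 ZNAME -> deep sleep'))
# OrderedDict([('ZNAME', 'deep sleep'), ('Z_PK', '5')])
```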
# +
filename = 'PillowData.txt'
reading_table = None
dataframes = {}
rows = []
count = 0
with open(filename) as file:
for line in file:
# count += 1
# if count >2:
# break
line = line.strip()
# if we are at a new table heading
if line in tables:
# If we are finished reading the last table
if reading_table:
dataframes[reading_table] = pd.DataFrame(rows)
# record which table we are reading.
reading_table = line
# Init the list of rows.
rows = []
continue
elif line == '':
# EOF has extra newline, conveniently signalling that we need to
# make the dataframe for the last table.
dataframes[reading_table] = pd.DataFrame(rows)
del rows
break
rows.append(getDictFromString(line))
# Reverse the order of the columns in each dataframe.
for k, v in dataframes.items():
dataframes[k] = v.iloc[:, ::-1]
# Convert each column to numeric datatype if possible.
for df in dataframes.values():
for col in df.columns:
try:
df[col] = pd.to_numeric(df[col])
except:
pass
# -
df_sessions = dataframes['ZSLEEPSESSION']
df_stages = dataframes['ZSLEEPSTAGEDATAPOINT']
df_audio = dataframes['ZSOUNDDATAPOINT']
df_session_notes = dataframes['Z_2SLEEPSESSION']
df_notes = dataframes['ZSLEEPNOTE']
# # Deal with timestamps
#
# Timestamps are dates and times, but with incorrect years.
# +
def makeDateTime(timestamp, year=2018):
return datetime.datetime.fromtimestamp(timestamp, pytz.timezone('US/Eastern')).replace(year=year)
df_stages.sort_values('ZTIMESTAMP', inplace=True)
df_stages['ZTIMESTAMP'] = df_stages['ZTIMESTAMP'].apply(makeDateTime)
df_sessions['ZSTARTTIME'] = df_sessions['ZSTARTTIME'].apply(makeDateTime)
df_sessions['ZENDTIME'] = df_sessions['ZENDTIME'].apply(makeDateTime)
df_audio.sort_values('ZTIMESTAMP', inplace=True)
df_audio['ZTIMESTAMP'] = df_audio['ZTIMESTAMP'].apply(makeDateTime)
# -
# # Plotting
# +
audio_color = 'purple'
colors=['dodgerblue', 'mediumaquamarine', 'deeppink', 'darkorange']
def plotSession(session):
df = df_stages[df_stages['ZSLEEPSESSION']==session]
dfa = df_audio[df_audio['ZSLEEPSESSION']==session]
x = df['ZTIMESTAMP'].values
y = df['ZSLEEPSTAGE'].values
# Plot initialization boilerplate
plt.close('all')
fig = plt.figure(figsize=(10, 4))
ax = fig.add_subplot(111)
# First plot the audio intervals.
for row in dfa.iterrows():
ax.bar(row[1]['ZTIMESTAMP'], 4, 5*np.timedelta64(row[1]['ZDURATION'], 's')/np.timedelta64(1, 'D') , 0, align='edge', color=audio_color)
# Now plot the intervals for the different stages.
left = x[0]
width = datetime.timedelta(0)
bottom = y[0]
height = 1
for i, val in enumerate(y):
if val != bottom:
box = ax.bar(left, height, width, bottom, align='edge', color=colors[int(bottom)])
left = x[i]
bottom = val
width = datetime.timedelta(0)
# The division below converts to a fraction of a day which is expected by matplotlib for
# some reason. Basically, matplotlib doesn't support timedeltas as widths.
width = (x[i] - left)/np.timedelta64(1, 'D')
# finish last box:
box = ax.bar(left, height, width, bottom, align='edge', color=colors[int(bottom)])
ax.set_yticks([0, 1, 2, 3, 4])
ax.set_yticklabels(['', 'Deep', 'Light', 'REM', 'Awake'])
    plt.tick_params(
        axis='y',        # changes apply to the y-axis
        which='minor',   # only minor ticks are affected
        left=False,      # ticks along the left edge are off
        right=False,     # ticks along the right edge are off
        labelleft=False)
# Format the xticklabels.
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%I:%M %p', tz=pytz.timezone('US/Eastern')))
fig.tight_layout()
output = io.StringIO()
plt.savefig(output, format="svg")
plt.close()
# The first 214 characters are unnecessary header material. The last character is \n.
return output.getvalue()[214:-1]
# -
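# A minimal check of the day-fraction conversion used for the bar widths above: matplotlib's date axis expects widths as plain floats measured in days, so a timedelta is divided by one day to obtain that float.

```python
import numpy as np

# Dividing a timedelta by one day yields the fraction of a day as a float,
# which is what matplotlib's bar() accepts as a width on a date axis.
half_day = np.timedelta64(12, 'h')
width = half_day / np.timedelta64(1, 'D')  # -> 0.5
```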
def plotStackedPercentages(*args):
# Plot initialization boilerplate
plt.close('all')
fig = plt.figure(figsize=(10, 1))
ax = fig.add_subplot(111)
height = 1
left = 0
bottom = 0
for i, c in enumerate(reversed(colors)):
width = args[i]
ax.bar(left, height, width, bottom, align='edge', color=c)
left += width
plt.axis('off')
ax.set_facecolor(None)
fig.tight_layout()
output = io.StringIO()
plt.savefig(output, format="svg", transparent=True)
plt.close()
# The first 214 characters are unnecessary header material. The last character is \n.
return output.getvalue()[214:-1]
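# The stacked bar above works by using each segment's accumulated width as the next segment's left edge; the bookkeeping alone, with made-up percentages, looks like this:

```python
# Accumulate each segment's width to get the left edge of the next segment.
widths = [10, 25, 40, 25]  # hypothetical awake/REM/light/deep percentages
lefts = []
left = 0
for w in widths:
    lefts.append(left)
    left += w
```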
# # Summary Data Table
moods = ['Bad', 'Not so good', 'OK', 'Good', 'Great']
mood_emojis = ['😧', '🙁', '😐', '🙂', '😀']
def roundTimedelta(t):
if hasattr(t, 'total_seconds'):
# Must be datetime.timedelta or np.timedelta64 or equivalent.
return datetime.timedelta(seconds=round(t.total_seconds()))
# Must be float or equivalent.
return datetime.timedelta(seconds=round(t))
def showSession(row):
session = row['Z_PK']
when = row['ZSTARTTIME'].strftime('%A, %B %-d, %Y at %-I:%M {%p}').format(AM='a.m.', PM='p.m.')
total_time = row['ZENDTIME'] - row['ZSTARTTIME']
duration = str(roundTimedelta(total_time))
time_to_sleep = roundTimedelta(row['ZTIMEAWAKE'])
asleep = roundTimedelta(total_time.total_seconds() - row['ZTIMEAWAKE'])
quality = row['ZSLEEPQUALITY']
mood = row['ZWAKEUPMOOD'] - 1
mood_string = '{emoji} {description}'.format(emoji=mood_emojis[mood], description=moods[mood])
rem = round(roundTimedelta(row['ZTIMEINREMSLEEP'])/total_time*100)
light = round(roundTimedelta(row['ZTIMEINLIGHTSLEEP'])/total_time*100)
deep = round(roundTimedelta(row['ZTIMEINDEEPSLEEP'])/total_time*100)
awake = 100-(rem+light+deep)
# Get notes.
notes = []
for note in df_session_notes[df_session_notes['Z_3SLEEPSESSION']==session].iterrows():
note_pk = note[1]['Z_2SLEEPNOTE']
# In classic parsimonious Pandas style, this looks up the text for a note code.
note_text = df_notes[df_notes['Z_PK']==note_pk].iloc[0].at['ZCONTENTTEXT']
notes.append(note_text)
# Display.
# Date
# printmd('<div style="text-align: center">**{when}**</div>'.format(when=when))
printmd('# {when}'.format(when=when))
# Sleep stages graph
stages_graph = plotSession(session)
printhtml(stages_graph)
# Summary Information Table
table = """| | | | |
|------------:|:-----|----:|:----|
| Time in bed | {duration} | Sleep quality |{quality:.0%} |
| Time asleep | {asleep} | Mood | {mood} |
| Time to sleep | {time_to_sleep} | Notes | {notes} |"""
printmd(table.format(when=when,
duration=duration,
asleep=asleep,
time_to_sleep=time_to_sleep,
quality=quality,
mood=mood_string,
notes='\n'.join(notes)
))
# Stacked percentages plot.
printhtml(plotStackedPercentages(awake, rem, light, deep))
# Table of percentages.
colored_dots = ['<font style="color:{color}">\u2B24</font>'.format(color=c) for c in colors]
table_percents = """| | | | |
|-----------:|-----:|----:|----:|
| Awake | {awake}%{dota} | REM | {rem}%{dotr} |
| Light Sleep | {light}%{dotl} | Deep Sleep | {deep}%{dotd} |"""
printmd(table_percents.format(awake=awake,
rem=rem,
light=light,
deep=deep,
dotd=colored_dots[0],
dotl=colored_dots[1],
dotr=colored_dots[2],
dota=colored_dots[3]
))
for row in df_sessions.iterrows():
showSession(row[1])
# End of Parse Pillow Database.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import ampligraph
from ampligraph.datasets import load_from_ntriples
from ampligraph.evaluation import train_test_split_no_unseen
from ampligraph.latent_features import ComplEx, save_model, restore_model
from rdflib import Graph, URIRef
from rdflib.namespace import RDF, OWL
import numpy as np
np.set_printoptions(threshold=100)
import random
np.random.seed(1) #0 before
random.seed(1)
# +
# Create a new graph with the explicit triplets
def create_explicit_graph(graph, explicit_owlThings=False, is_correctness=True):
new_g = Graph()
statements = set()
triplets = set()
all_statements = []
if is_correctness:
true_entities = set()
false_entities = set()
truth_dico = {}
hasTruthValue = URIRef("http://swc2017.aksw.org/hasTruthValue")
OWLthing = URIRef("http://www.w3.org/2002/07/owl#Thing")
chevr = lambda s : '<' + s + '>'
# Iterate through statements
for i,s in enumerate(graph.subjects(RDF.type, RDF.Statement)):
statements.add(s)
# Get <s> <p> <o>
suj = graph.value(s, RDF.subject)
pred = graph.value(s, RDF.predicate)
obj = graph.value(s, RDF.object)
# If the graph file has information on the correctness of statements (for the train set)
if is_correctness:
is_true = graph.value(s, hasTruthValue)
# Save entities in false and true statements
if float(is_true) == 1.0:
true_entities.add(suj)
true_entities.add(obj)
elif float(is_true) == 0.0:
false_entities.add(suj)
false_entities.add(obj)
# Save the truth value of the statement in a dictionary
if (suj, pred, obj) not in triplets:
truth_dico[(chevr(str(suj)), chevr(str(pred)), chevr(str(obj)))] = float(is_true)
#is_true_v.append(float(is_true))
# Adding triplet to explicit graph and set of triplets
new_g.add((suj, pred, obj))
triplets.add((suj, pred, obj))
# Add all statements, even if duplicate (for test set)
all_statements.append([chevr(str(suj)), chevr(str(pred)), chevr(str(obj)), chevr(str(s))])
# State that subjects and objects are owl:Thing (needed for RDF2Vec)
if explicit_owlThings:
new_g.add((suj, RDF.type, OWLthing))
new_g.add((obj, RDF.type, OWLthing))
if is_correctness:
return new_g, statements, triplets, true_entities, false_entities, truth_dico
else:
return new_g, triplets, all_statements
# -
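# The truth dictionary built above keys each triple by its angle-bracketed N-Triples form; a standalone sketch of that keying scheme (the URIs are made up):

```python
# Wrap each URI in angle brackets to match the N-Triples serialization,
# then use the wrapped triple as a dictionary key for its truth value.
chevr = lambda s: '<' + s + '>'
truth_dico = {}
triple = ('http://example.org/s', 'http://example.org/p', 'http://example.org/o')
truth_dico[tuple(chevr(t) for t in triple)] = 1.0
```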
# Import raw training and test graph
g_train_raw = Graph()
g_train_raw.parse("SWC_2019_Train.nt", format="nt")
g_test_raw = Graph()
g_test_raw.parse("SWC_2019_Test.nt", format="nt")
g_train, statements_train, triplets_train, true_entities_train, false_entities_train, truth_dict_train = create_explicit_graph(g_train_raw, explicit_owlThings=False)
g_test, triplets_test, test_all_statements = create_explicit_graph(g_test_raw, explicit_owlThings=False, is_correctness=False)
# It seems that some entities appear only in true statements and others only in false ones
print(len(true_entities_train), len(false_entities_train))
print(len(true_entities_train.union(false_entities_train)))
# Some statements seem to be equivalent
print(len(statements_train))
print(len(triplets_train))
print(len(g_train))
# Save rdflib graphs into nt files
g_train.serialize(destination='trainGraphExplicit.nt', format="nt")
# Load Graph with Ampligraph
triplets_data_train = load_from_ntriples("/home/mondeca/Documents/FactTriplesChecker", "trainGraphExplicit.nt")
print(triplets_data_train.shape)
print(triplets_data_train[:4,:])
# Load test data
test_data = np.array(test_all_statements)
# Get the truth value of each statement from the dictionary and put it in a vector
truth_vect_train = []
for i,x in enumerate(triplets_data_train):
is_true = truth_dict_train[tuple(x)]
truth_vect_train.append(is_true)
# Combine array of statements and array of truth values
print(np.array(truth_vect_train).shape)
train_data = np.hstack((triplets_data_train, np.reshape(np.array(truth_vect_train), (len(triplets_train),1))))
print(train_data[:4,:])
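# The hstack-plus-reshape pattern above, shown on a toy array: the label vector is reshaped into a column and glued onto an (n, 3) array of triples (NumPy promotes everything to strings in the process).

```python
import numpy as np

# Reshape the labels to a column, then stack horizontally with the triples.
triples = np.array([['s1', 'p1', 'o1'], ['s2', 'p2', 'o2']])
labels = np.array([1.0, 0.0])
combined = np.hstack((triples, labels.reshape(-1, 1)))
```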
# +
# Splitting training graph into train/valid set
train, valid = train_test_split_no_unseen(train_data, test_size=int(0.25 * train_data.shape[0]), seed=0)
print(train.shape, valid.shape)
# Save explicit statements with their binary labels, for train and development data
np.save("dev/trainDev", train)
np.save("dev/valid", valid)
# Save explicit statements for train+dev data and test data
np.save("test/train", train_data)
np.save("test/test", test_data)
# +
from ampligraph.latent_features import ComplEx, DistMult, TransE, HolE
from ampligraph.evaluation import evaluate_performance, mrr_score, hits_at_n_score
# Train an embedding model and save it
def train_save_model(train_data, model_name, nbatches, epochs, k, eta):
model = model_name(batches_count=nbatches, seed=0, epochs=epochs, k=k, eta=eta,
# Use adam optimizer with learning rate 1e-3
optimizer='adam', optimizer_params={'lr':1e-3},
# Use pairwise loss with margin 0.5
loss='pairwise', loss_params={'margin':0.5},
# Use L2 regularizer with regularizer weight 1e-5
regularizer='LP', regularizer_params={'p':2, 'lambda':1e-5},
# Enable stdout messages (set to false if you don't want to display)
verbose=False)
#Training
model.fit(train_data[:,:3])
fn = "./ampligraphModels/" + model_name.__name__ + "_" + str(k) +"_"+ str(epochs) +"_"+ str(eta)
save_model(model, fn)
# -
# Some good parameters : epochs = 200, k = 150, eta = 1
train_save_model(train_data, TransE, nbatches=10, epochs=200, k=150, eta=1)
train_save_model(train_data, DistMult, nbatches=10, epochs=200, k=150, eta=1)
train_save_model(train_data, HolE, nbatches=10, epochs=200, k=150, eta=1)
train_save_model(train_data, ComplEx, nbatches=10, epochs=200, k=150, eta=1)
# TRANSR algorithm
import pykeen
import os
from pykeen.kge_models import TransR
TransR.hyper_params
# Get output directory
output_directory = os.getcwd() + "/transRmodel"
print(output_directory)
config = dict(
training_set_path = './trainGraphExplicit.nt',
execution_mode = 'Training_mode',
random_seed = 0,
kg_embedding_model_name = 'TransR',
embedding_dim = 150,
relation_embedding_dim = 150,
scoring_function = 2, # corresponds to L2
margin_loss = 0.5,
learning_rate = 1e-3,
num_epochs = 200,
batch_size = 10,
preferred_device = 'cpu'
)
# Training
results = pykeen.run(
config=config,
output_directory=output_directory,
)
results.results.keys()
# +
import matplotlib.pyplot as plt
# %matplotlib inline
losses = results.results['losses']
epochs = np.arange(len(losses))
plt.title(r'Loss Per Epoch')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(epochs, losses)
plt.show()
# -
# End of processing.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Image Classification using Logistic Regression in PyTorch
#
#
# Imports
import torch
import torchvision
from torchvision.datasets import MNIST
# Download training dataset
dataset = MNIST(root='data/', download=True)
len(dataset)
test_dataset = MNIST(root='data/', train=False)
len(test_dataset)
dataset[0]
import matplotlib.pyplot as plt
# %matplotlib inline
image, label = dataset[0]
plt.imshow(image, cmap='gray')
print('Label:', label)
image, label = dataset[10]
plt.imshow(image, cmap='gray')
print('Label:', label)
import torchvision.transforms as transforms
# PyTorch datasets allow us to specify one or more transformation functions which are applied to the images as they are loaded. `torchvision.transforms` contains many such predefined functions, and we'll use the `ToTensor` transform to convert images into PyTorch tensors.
# MNIST dataset (images and labels)
dataset = MNIST(root='data/',
train=True,
transform=transforms.ToTensor())
img_tensor, label = dataset[0]
print(img_tensor.shape, label)
# The image is now converted to a 1x28x28 tensor. The first dimension is used to keep track of the color channels. Since images in the MNIST dataset are grayscale, there's just one channel. Other datasets have images with color, in which case there are 3 channels: red, green and blue (RGB). Let's look at some sample values inside the tensor:
# ## Training and Validation Datasets
#
# We will split the dataset into 3 parts:
#
# 1. **Training set** - used to train the model i.e. compute the loss and adjust the weights of the model using gradient descent.
# 2. **Validation set** - used to evaluate the model while training, adjust hyperparameters (learning rate etc.) and pick the best version of the model.
# 3. **Test set** - used to compare different models, or different types of modeling approaches, and report the final accuracy of the model.
#
# +
from torch.utils.data import random_split
train_ds, val_ds = random_split(dataset, [50000, 10000])
len(train_ds), len(val_ds)
# +
from torch.utils.data import DataLoader
batch_size = 128
train_loader = DataLoader(train_ds, batch_size, shuffle=True)
val_loader = DataLoader(val_ds, batch_size)
# -
# ## Model
#
# Now that we have prepared our data loaders, we can define our model.
#
# * In a **logistic regression** model there are weight and bias matrices, and the output is obtained using a simple matrix operation (`pred = x @ w.t() + b`).
#
# * Just as we did with linear regression, we can use `nn.Linear` to create the model instead of defining and initializing the matrices manually.
#
# * Since `nn.Linear` expects each training example to be a vector, each `1x28x28` image tensor needs to be flattened out into a vector of size 784 (`28*28`) before being passed into the model.
#
# * The output for each image is a vector of size 10, with each element signifying the probability of a particular target label (i.e. 0 to 9). The predicted label for an image is simply the one with the highest probability.
# +
import torch.nn as nn
input_size = 28*28
num_classes = 10
# +
class MnistModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(input_size, num_classes)
def forward(self, xb):
xb = xb.reshape(-1, 784)
out = self.linear(xb)
return out
model = MnistModel()
# -
# Inside the `__init__` constructor method, we instantiate the weights and biases using `nn.Linear`. And inside the `forward` method, which is invoked when we pass a batch of inputs to the model, we flatten out the input tensor, and then pass it into `self.linear`.
#
# `xb.reshape(-1, 28*28)` indicates to PyTorch that we want a *view* of the `xb` tensor with two dimensions, where the length along the 2nd dimension is 28\*28 (i.e. 784). One argument to `.reshape` can be set to `-1` (in this case the first dimension), to let PyTorch figure it out automatically based on the shape of the original tensor.
#
# Note that the model no longer has `.weight` and `.bias` attributes (as they are now inside the `.linear` attribute), but it does have a `.parameters` method which returns a list containing the weights and bias, and can be used by a PyTorch optimizer.
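# The `-1` argument to `reshape`, illustrated here with NumPy (PyTorch follows the same convention): `-1` asks the library to infer that dimension from the total number of elements.

```python
import numpy as np

# A batch of 128 single-channel 28x28 images flattened into 128 vectors:
# the -1 dimension is inferred as 128 * 1 * 28 * 28 / 784 = 128.
batch = np.zeros((128, 1, 28, 28))
flat = batch.reshape(-1, 28 * 28)
```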
print(model.linear.weight.shape, model.linear.bias.shape)
list(model.parameters())
# +
for images, labels in train_loader:
outputs = model(images)
break
print('outputs.shape : ', outputs.shape)
print('Sample outputs :\n', outputs[:2].data)
# -
import torch.nn.functional as F
# +
# Apply softmax for each output row
probs = F.softmax(outputs, dim=1)
# Look at sample probabilities
print("Sample probabilities:\n", probs[:2].data)
# Add up the probabilities of an output row
print("Sum: ", torch.sum(probs[0]).item())
# -
max_probs, preds = torch.max(probs, dim=1)
print(preds)
print(max_probs)
labels
# Clearly, the predicted and the actual labels are completely different. Obviously, that's because we have started with randomly initialized weights and biases. We need to train the model i.e. adjust the weights using gradient descent to make better predictions.
# ## Evaluation Metric and Loss Function
# We need a way to evaluate how well our model is performing. A natural way to do this would be to find the percentage of labels that were predicted correctly i.e. the **accuracy** of the predictions.
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim=1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
accuracy(outputs, labels)
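# The same accuracy computation, sketched with NumPy: take the argmax over the class dimension and compare against the true labels.

```python
import numpy as np

# Predicted class is the index of the largest score in each row;
# accuracy is the fraction of rows where it matches the label.
outputs = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([1, 0, 0])
acc = np.mean(np.argmax(outputs, axis=1) == labels)
```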
loss_fn = F.cross_entropy
# Loss for current batch of data
loss = loss_fn(outputs, labels)
print(loss)
# ## Training the model
#
# Now that we have defined the data loaders, model, loss function and optimizer, we are ready to train the model. The training process is identical to linear regression, with the addition of a "validation phase" to evaluate the model in each epoch.
#
# ```
# for epoch in range(num_epochs):
# # Training phase
# for batch in train_loader:
# # Generate predictions
# # Calculate loss
# # Compute gradients
# # Update weights
# # Reset gradients
#
# # Validation phase
# for batch in val_loader:
# # Generate predictions
# # Calculate loss
# # Calculate metrics (accuracy etc.)
# # Calculate average validation loss & metrics
#
# # Log epoch, loss & metrics for inspection
# ```
#
#
# +
class MnistModel(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(input_size, num_classes)
def forward(self, xb):
xb = xb.reshape(-1, 784)
out = self.linear(xb)
return out
def training_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
return loss
def validation_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
acc = accuracy(out, labels) # Calculate accuracy
return {'val_loss': loss, 'val_acc': acc}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
batch_accs = [x['val_acc'] for x in outputs]
epoch_acc = torch.stack(batch_accs).mean() # Combine accuracies
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format(epoch, result['val_loss'], result['val_acc']))
model = MnistModel()
# -
# Now we'll define an `evaluate` function, which will perform the validation phase, and a `fit` function which will perform the entire training process.
# +
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result)
history.append(result)
return history
# -
result0 = evaluate(model, val_loader)
result0
history1 = fit(5, 0.001, model, train_loader, val_loader)
# That's a great result! With just 5 epochs of training, our model has reached an accuracy of over 80% on the validation set. Let's see if we can improve that by training for a few more epochs.
history2 = fit(5, 0.001, model, train_loader, val_loader)
history3 = fit(5, 0.001, model, train_loader, val_loader)
history4 = fit(5, 0.001, model, train_loader, val_loader)
# Replace these values with your results
history = [result0] + history1 + history2 + history3 + history4
accuracies = [result['val_acc'] for result in history]
plt.plot(accuracies, '-x')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.title('Accuracy vs. No. of epochs');
# ## Testing with individual images
# Define test dataset
test_dataset = MNIST(root='data/',
train=False,
transform=transforms.ToTensor())
img, label = test_dataset[0]
plt.imshow(img[0], cmap='gray')
print('Shape:', img.shape)
print('Label:', label)
img.unsqueeze(0).shape
def predict_image(img, model):
xb = img.unsqueeze(0)
yb = model(xb)
_, preds = torch.max(yb, dim=1)
return preds[0].item()
# `img.unsqueeze` simply adds another dimension at the beginning of the 1x28x28 tensor, making it a 1x1x28x28 tensor, which the model views as a batch containing a single image.
#
#
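# What `unsqueeze(0)` does, illustrated with NumPy: indexing with `None` adds a leading batch dimension of size 1.

```python
import numpy as np

# img[None] prepends a length-1 axis, turning one image into a batch of one.
img = np.zeros((1, 28, 28))
batched = img[None]  # equivalent to np.expand_dims(img, 0)
```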
img, label = test_dataset[0]
plt.imshow(img[0], cmap='gray')
print('Label:', label, ', Predicted:', predict_image(img, model))
img, label = test_dataset[10]
plt.imshow(img[0], cmap='gray')
print('Label:', label, ', Predicted:', predict_image(img, model))
test_loader = DataLoader(test_dataset, batch_size=256)
result = evaluate(model, test_loader)
result
# ## Saving and loading the model
# Since we've trained our model for a long time and achieved a reasonable accuracy, it would be a good idea to save the weights and bias matrices to disk, so that we can reuse the model later and avoid retraining from scratch.
torch.save(model.state_dict(), 'mnist-logistic.pth')
# The `.state_dict` method returns an `OrderedDict` containing all the weights and bias matrices mapped to the right attributes of the model.
model.state_dict()
# To load the model weights, we can instantiate a new object of the class `MnistModel` and use the `.load_state_dict` method.
model2 = MnistModel()
model2.load_state_dict(torch.load('mnist-logistic.pth'))
model2.state_dict()
test_loader = DataLoader(test_dataset, batch_size=256)
result = evaluate(model2, test_loader)
result
# End of Deep-Learning-using-Pytorch/Logistic Regression MNIST Handwritten Digits.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Hands-On Lab
# My Dear Student!
#
# **Let me tell you the best way to learn python is, practice!**
# Let's do this small hands-on exercise together with me to understand Python essentials
#
# All the best! The fate of the mission is in your hands now.
#
# ## What you will learn in this hands-on exercise
# You will experience Python's power first-hand. You will be applying the following concepts:
#
# - variable
# - rules for good coding style
# - Data types(number, string, boolean, float)
# - Most used complex types:
# - list
# - Access Element
# - Looping
# - List comprehension
# - Adding item
# - Removing item
# - dictionary
# - Access Element
# - Looping
# - dic comprehension
# - Adding item
# - Removing item
# - tuple
# - Conditional statements
# - if
# - elif
# - else
# - loop
# - for
# - while
# - Functions
# - inbuilt
# - user defined
#Declare a Variable
#Assign value to a variable
#Talk about dynamic types
#print
#talk about number, string, float
#string operations
#loop
#declare a list
#list operations
#declare a dictionary
#dictionay operation
#declare a tuple
#tuple operations
#moving a piece of code into a function and calling it
# ### Task 1
#
# **A college (IET-North Engineering College) maintains its operations through software. This software captures and maintains college_name, year_of_establishment, address, courses and degrees, and pincode.
# Depending on the area pincode, the college has different facilities to engage the students**
#
# #### Instructions:
#
# * write down the variables the software program will use
# * Initialize the variables with required set of data
# * print the college name and year of establishment
# +
#Sometimes there are multiple ways of viewing problem statements
#Sometimes it is important to understand future changes
colleges1 ={201301:['college 1', '1981', '<NAME>', {'courses':['CS', 'IT']}, {'degrees':['B.Tech', 'M.TECH']},{'facilities':['lib', 'sports', 'gym']}]}
colleges2 ={201302:['college 2', '1982', '<NAME>', {'courses':['CS', 'IT']}, {'degrees':['B.Tech', 'M.TECH']},{'facilities':['lib', 'gym']}]}
print(colleges1.get(201301)[0])
print(colleges1.get(201301)[3])
# -
# ### Task 2
#
# **College management has decided to create a new hoarding to show all courses offered by the college.
# So, for this, let's print all courses offered by the college**
#
# +
#Sometimes we need to club different items in one
#Sometimes we need to find Index Of a item
#sometimes we need to check the type of item
#Sometimes we need to put logical conditions to make sure we are dealing with right item
print(colleges1.get(201301)[3])
print(colleges2.get(201302)[3])
# -
# ### Task 3
#
# **Add a new recently added facility `digital library` to area with pin code 201302**
#
# +
#Can a List Item be accessed using index?
#Can a List Item be accessed using value?
fac = colleges2.get(201302)[5]
fac['facilities'].append('digital library')
print(fac)
# -
# ### Task 4
#
# **One of the degree courses was closed by the college in the year 2020-21 in all colleges. Remove that degree from the list of degree courses.**
# **Add one degree course which is newly introduced**
# +
#What probing you may do while solving this task
#Which Degree is added?
#Which Degree is removed?
#Will this change for all colleges or for a specific area?
for college, pin in ((colleges1, 201301), (colleges2, 201302)):
deg = college.get(pin)[4]
deg['degrees'].remove('M.TECH')
deg['degrees'].append('MCA')  # 'MCA' is a placeholder for the newly introduced degree
print(deg)
# -
# ### Task 5
#
# **One interesting thing about the college is, for security reasons, there are 3 gates in colleges with even pin codes, and the name of each gate is the square of its number (from 1 to 3)**
#
# #### Instructions:
# * explanation: the names of Gates (1, 2, 3) are Gate1, Gate4, Gate9 respectively
# * print all Gates name
#What probing you may do while solving this task?
#Can there be more than one colleges in future with even Pin Code?
#Can there be more than 3 gates in a college?
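# One possible sketch for Task 5, assuming exactly three gates: the name of gate i is 'Gate' followed by i squared, which a list comprehension expresses neatly.

```python
# Build the gate names by squaring each gate number from 1 to 3.
gate_names = ['Gate' + str(i ** 2) for i in range(1, 4)]
for name in gate_names:
    print(name)
```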
# ## Task 6
# **Check if the name of the college is less than or equal to 10 characters; if it is longer, take the first word as the college name for any print media**
#What probing you may do while solving this task
#Can the first word of the college name be more than 10 characters, and what should be done in that case?
#What does print media means?
#What if Name is B.G.Reddy Campus Of Engineering?
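# One possible sketch for Task 6, using hypothetical college names: keep the full name if it fits in 10 characters, otherwise fall back to the first word for print media.

```python
# A short name fits within the 10-character limit and is kept as-is.
college_name = 'college 1'
print_name = college_name if len(college_name) <= 10 else college_name.split()[0]

# A longer name falls back to its first word.
long_name = 'North Engineering College'
short_form = long_name if len(long_name) <= 10 else long_name.split()[0]
```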
# ## Home Exercises
#
# ### Task 7
#
# **Let's create a dictionary of marks scored by `<NAME>` in each subject to generate his progress report.**
#
#
#
# #### Instructions
#
# * Create a dictionary named `'courses'` using the keys and values given below
#
#
# |course|marks|
# |-----|-----|
# |Math|65|
# |English|70|
# |History|80|
# |French|70|
# |Science|60|
#
# * Print the marks obtained in each subject.
#
# * Sum all the marks scored out of 100 and store it in a variable named `'total_marks'`.
#
#
# * Print `'total_marks'`.
#
#
# * Calculate the `'percentage'` scored by `'<NAME>'`, where the total of all the subjects is `500`.
#
#
# * Print the `'percentage'`.
#
#
# +
courses = {'Math': 65, 'English': 70, 'History': 80, 'French': 70, 'Science': 60}
total_marks = 0
for course, marks in courses.items():
print(course, ':', marks)
total_marks += marks
print(total_marks)
percentage = (total_marks / 500) * 100
print(f'{percentage} %')
# -
# ### Task 8
#
# **From the previous task, we know who is the 'Maths' topper in the class. You now have to print the name of the student on the certificate, but you will have to print his last name and then his first name.**
#
#
# ## Instructions
#
# * String name `'topper'` with the Math topper's name is given.
#
#
# * Split the `'topper'` using the `"split()"` function and store the results in `'first_name'` and `'last_name'`.
#
# ***
# Hint: use the `split()` function.
#
#
# * Display the full name concatenating `+` the strings `'last_name'` and `'first_name'` strings. Don't forget to add a whitespace character between them to avoid displaying them together and store it in a `full_name` variable .
#
#
# * Convert all the elements in the `'full_name'`string to uppercase, and store it to the `'certificate_name'` variable.
#
#
# * Print `'certificate_name'`.
# +
topper = '<NAME>'
parts = topper.split()
first_name = parts[0]
last_name = parts[1]
full_name = last_name + ' ' + first_name  # last name first, as the instructions require
print(first_name)
print(last_name)
print(full_name)
certificate_name = full_name.upper()
print(certificate_name)
# End of Day1/Python-ABC.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from shapely.geometry import Point, Polygon
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold
import zipfile
import requests
import os
import shutil
from downloading_funcs import addr_shape, down_extract_zip
from supp_funcs import *
import lnks
import warnings #DANGER: I triggered a ton of warnings.
warnings.filterwarnings('ignore')
# +
import geopandas as gpd
# %matplotlib inline
# -
#Load the BBL list
BBL12_17CSV = ['https://hub.arcgis.com/datasets/82ab09c9541b4eb8ba4b537e131998ce_22.csv', 'https://hub.arcgis.com/datasets/4c4d6b4defdf4561b737a594b6f2b0dd_23.csv', 'https://hub.arcgis.com/datasets/d7aa6d3a3fdc42c4b354b9e90da443b7_1.csv', 'https://hub.arcgis.com/datasets/a8434614d90e416b80fbdfe2cb2901d8_2.csv', 'https://hub.arcgis.com/datasets/714d5f8b06914b8596b34b181439e702_36.csv', 'https://hub.arcgis.com/datasets/c4368a66ce65455595a211d530facc54_3.csv',]
def data_pipeline(shapetype, bbl_links, supplement=None,
dex=None, ts_lst_range=None):
#A pipeline for group_e dataframe operations
#Test inputs --------------------------------------------------------------
if supplement:
assert isinstance(supplement, list)
assert isinstance(bbl_links, list)
if ts_lst_range:
assert isinstance(ts_lst_range, list)
assert len(ts_lst_range) == 2 #Must be list of format [start-yr, end-yr]
#We'll need our addresspoints and our shapefile
if not dex:
dex = addr_shape(shapetype)
#We need a list of time_unit_of_analysis
if ts_lst_range:
ts_lst = [x+(i/100) for i in range(1,13,1) for x in range(1980, 2025)]
ts_lst = [x for x in ts_lst if
x >= ts_lst_range[0] and x <= ts_lst_range[1]]
ts_lst = sorted(ts_lst)
if not ts_lst_range:
ts_lst = [x+(i/100) for i in range(1,13,1) for x in range(2012, 2017)]
ts_lst = sorted(ts_lst)
#Now we need to stack our BBL data ----------------------------------------
#Begin by forming an empty DF
bbl_df = pd.DataFrame()
col_len = None
for i in list(range(2012, 2018)):
bblpth = './data/bbls/Basic_Business_License_in_'+str(i)+'.csv' #Messy hack
#TODO: generalize bblpth above
bbl = pd.read_csv(bblpth, low_memory=False)
#Compare against the previous file's column count, not its own
if col_len is not None and len(bbl.columns) != col_len:
print('Column Mismatch!')
col_len = len(bbl.columns)
bbl_df = bbl_df.append(bbl)
del bbl
bbl_df.LICENSE_START_DATE = pd.to_datetime(
bbl_df.LICENSE_START_DATE)
bbl_df.LICENSE_EXPIRATION_DATE = pd.to_datetime(
bbl_df.LICENSE_EXPIRATION_DATE)
bbl_df.LICENSE_ISSUE_DATE = pd.to_datetime(
bbl_df.LICENSE_ISSUE_DATE)
bbl_df = bbl_df.sort_values('LICENSE_START_DATE')
#Set up our time unit of analysis
bbl_df['month'] = 0
bbl_df['endMonth'] = 0
bbl_df['issueMonth'] = 0
bbl_df['month'] = bbl_df['LICENSE_START_DATE'].dt.year + (
bbl_df['LICENSE_START_DATE'].dt.month/100
)
bbl_df['endMonth'] = bbl_df['LICENSE_EXPIRATION_DATE'].dt.year + (
bbl_df['LICENSE_EXPIRATION_DATE'].dt.month/100
)
bbl_df['issueMonth'] = bbl_df['LICENSE_ISSUE_DATE'].dt.year + (
bbl_df['LICENSE_ISSUE_DATE'].dt.month/100
)
bbl_df['endMonth'] = bbl_df['endMonth'].fillna(max(ts_lst))
bbl_df.loc[bbl_df['endMonth'] > max(ts_lst), 'endMonth'] = max(ts_lst)
#Sort on month
bbl_df = bbl_df.dropna(subset=['month'])
bbl_df = bbl_df.set_index(['MARADDRESSREPOSITORYID','month'])
bbl_df = bbl_df.sort_index(ascending=True)
bbl_df.reset_index(inplace=True)
bbl_df = bbl_df[bbl_df['MARADDRESSREPOSITORYID'] >= 0]
bbl_df = bbl_df.dropna(subset=['LICENSESTATUS', 'issueMonth', 'endMonth',
'MARADDRESSREPOSITORYID','month',
'LONGITUDE', 'LATITUDE'
])
#Now that we have the BBL data, let's create our flag and points data -----
#This is the addresspoints, passed from the dex param
addr_df = dex[0]
#Zip the latlongs
addr_df['geometry'] = [
Point(xy) for xy in zip(
addr_df.LONGITUDE.apply(float), addr_df.LATITUDE.apply(float)
)
]
addr_df['Points'] = addr_df['geometry'] #Duplicate, so raw retains points
addr_df['dummy_counter'] = 1 #Always one, always dropped before export
crs='EPSG:4326' #Convenience assignment of crs
#Now we're stacking for each month ----------------------------------------
out_gdf = pd.DataFrame() #Empty storage df
for i in ts_lst: #iterate through the list of months
print('Month '+ str(i))
strmfile_pth = str(
'./data/strm_file/' + str(i) +'_' + shapetype + '.csv')
if os.path.exists(strmfile_pth):
print('Skipping, ' + str(i) + ' stream file path already exists:')
print(strmfile_pth)
continue
#dex[1] is the designated shapefile passed from the dex param,
#and should match the shapetype defined in that param
#Copy of the dex[1] shapefile
shp_gdf = dex[1]
#Active BBL in month i
bbl_df['inRange'] = 0
bbl_df.loc[(bbl_df.endMonth > i) & (bbl_df.month <= i), 'inRange'] = 1
#Issued BBL in month i
bbl_df['isuFlag'] = 0
bbl_df.loc[bbl_df.issueMonth == i, 'isuFlag'] = 1
#Merge BBL and MAR datasets -------------------------------------------
addr = pd.merge(addr_df, bbl_df, how='left',
left_on='ADDRESS_ID', right_on='MARADDRESSREPOSITORYID')
addr = gpd.GeoDataFrame(addr, crs=crs, geometry=addr.geometry)
shp_gdf.crs = addr.crs
raw = gpd.sjoin(shp_gdf, addr, how='left', op='intersects')
#A simple percent of buildings with active flags per shape,
#and call it a 'utilization index'
#Compute the groupby sum once and pull each flag column from it
grouped = raw.groupby('NAME').sum()
numer = grouped.inRange
denom = grouped.dummy_counter
issue = grouped.isuFlag
flags = []
utl_inx = pd.DataFrame(numer/denom)
utl_inx.columns = [
'Util_Indx_BBL'
]
flags.append(utl_inx)
#This is number of buildings with an active BBL in month i
bbl_count = pd.DataFrame(numer)
bbl_count.columns = [
'countBBL'
]
flags.append(bbl_count)
#This is number of buildings that were issued a BBL in month i
isu_count = pd.DataFrame(issue)
isu_count.columns = [
'countIssued'
]
flags.append(isu_count)
for flag in flags:
flag.crs = shp_gdf.crs
shp_gdf = shp_gdf.merge(flag,
how="left", left_on='NAME', right_index=True)
shp_gdf['month'] = i
#Head will be the list of retained columns
head = ['NAME', 'Util_Indx_BBL',
'countBBL', 'countIssued',
'month', 'geometry']
shp_gdf = shp_gdf[head]
print('Merging...')
if supplement: #this is where your code will be fed into the pipeline.
#To include time unit of analysis, pass 'i=i' as the last
#item in your args list over on lnks.py, and the for-loop
#will catch that. Else, it will pass your last item as an arg.
#Ping CDL if you need to pass a func with more args and we
#can extend this.
for supp_func in supplement:
if len(supp_func) == 2:
if supp_func[1] == 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, i=i)
if supp_func[1] != 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1])
if len(supp_func) == 3:
if supp_func[2] == 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1], i=i)
if supp_func[2] != 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1],
supp_func[2])
if len(supp_func) == 4:
if supp_func[3] == 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1],
supp_func[2], i=i)
if supp_func[3] != 'i=i':
shp_gdf = supp_func[0](shp_gdf, raw, supp_func[1],
supp_func[2], supp_func[3])
print(str(supp_func[0]) + ' is done.')
if not os.path.exists(strmfile_pth):
shp_gdf = shp_gdf.drop('geometry', axis=1)
#Save, also verify re-read works
shp_gdf.to_csv(strmfile_pth, encoding='utf-8', index=False)
shp_gdf = pd.read_csv(strmfile_pth, encoding='utf-8',
engine='python')
del shp_gdf, addr, utl_inx, numer, denom, issue, raw #Save me some memory please!
#if i != 2016.12:
# del raw
print('Merged month:', i)
print()
#Done iterating through months here....
pth = './data/strm_file/' #path of the streamfiles
for file in os.listdir(pth):
try:
filepth = str(os.path.join(pth, file))
print([os.path.getsize(filepth), filepth])
fl = pd.read_csv(filepth, encoding='utf-8', engine='python') #read the stream file
out_gdf = pd.concat([out_gdf, fl]) #This does the stacking (DataFrame.append is deprecated)
del fl
except IsADirectoryError:
continue
out_gdf.to_csv('./data/' + shapetype + '_out.csv') #Save
#shutil.rmtree('./data/strm_file/')
print('Done!')
return [bbl_df, addr_df, out_gdf] #Remove this later, for testing now
dex = addr_shape('anc')
sets = data_pipeline('anc', BBL12_17CSV, supplement=lnks.supplm, dex=dex, ts_lst_range=None)
sets[2].columns #Our number of rows equals our number of shapes * number of months
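# The `year + month/100` trick used in the pipeline packs a (year, month) pair
# into a single sortable float; a standalone sketch (with a hypothetical range
# of months, independent of the pipeline's actual `ts_lst`):
# +
def encode_month(year, month):
    # Mirrors the `dt.year + dt.month/100` encoding used in the pipeline.
    return year + month / 100

# Hypothetical ts_lst covering 2016-01 through 2017-03.
sketch_ts_lst = [encode_month(y, m)
                 for y in (2016, 2017)
                 for m in range(1, 13)
                 if not (y == 2017 and m > 3)]

# Ordering is preserved within and across years because the month always
# occupies exactly two decimal places.
assert sketch_ts_lst == sorted(sketch_ts_lst)
assert encode_month(2016, 9) < encode_month(2016, 10) < encode_month(2017, 1)
# -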
| Data_Pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Use deep learning to recognise LCD readings
# ## Train the text recognition model using <u>deep-text-recognition</u> ([github link](https://github.com/clovaai/deep-text-recognition-benchmark))
# ### Different settings and models were used to achieve the best accuracy. The arguments are listed as follows:<br>
# ---
# **Basic settings:**
#
# |Command|help|Input|
# |:---:|:---:|:---:|
# |--exp_name|Where to store logs and models|Directory to store trained model|
# |--train_data|required=True, path to training dataset|Directory of training dataset|
# |--valid_data|required=True, path to validation dataset|Directory of training dataset|
# |--manualSeed|type=int, default=1111|for random seed setting|
# |--workers|type=int, number of data loading workers, default=4|int|
# |--batch_size|type=int, default=192|input batch size|
# |--num_iter|type=int, default=300000|number of iterations to train for|
# |--valInterval|type=int, default=2000, Interval between each validation|int|
# |--saved_model|default='', path of model to continue training|Directory|
# |--FT|action='store_true', whether to do fine-tuning|No input, activated by including this argument|
# |--adam|action='store_true', Whether to use adam (default is Adadelta)|No input|
# |--lr|type=float, default=1, learning rate, default=1.0 for Adadelta|float|
# |--beta1|type=float, default=0.9, beta1 for adam. default=0.9|float|
# |--rho|type=float, default=0.95, decay rate rho for Adadelta. default=0.95|float|
# |--eps|type=float, default=1e-8, eps for Adadelta. default=1e-8|float|
# |--grad_clip| type=float, default=5, gradient clipping value. default=5|float|
# |--baiduCTC| action='store_true', for data_filtering_off mode|No input|
#
# ---
# **Data processing:**
#
# |Command|help|Input|
# |:---:|:---:|:---:|
# |--select_data| type=str, default='MJ-ST', select training data (default is MJ-ST, which means MJ and ST used as training data)|For use sample data|
# |--batch_ratio| type=str, default='0.5-0.5', assign ratio for each selected data in the batch|Use with MJ-ST|
# |--total_data_usage_ratio| type=str, default='1.0', total data usage ratio, this ratio is multiplied to total number of data.|For use part of data|
# |--batch_max_length| type=int, default=25, maximum-label-length| |
# |--imgH| type=int, default=32, the height of the input image|image size|
# |--imgW| type=int, default=100, the width of the input image|image size|
# |--rgb| action='store_true', use rgb input'|No input|
# |--character| type=str, default='0123456789abcdefghijklmnopqrstuvwxyz', character label|To add or filter symbols, characters|
# |--sensitive| action='store_true', for sensitive character mode|Use this to recognise Upper case|
# |--PAD| action='store_true', whether to keep ratio then pad for image resize| |
# |--data_filtering_off| action='store_true', for data_filtering_off mode|No input|
#
# ---
# **Model Architecture:**
#
# |Command|help|Input|
# |:---:|:---:|:---:|
# |--Transformation| type=str, required=True, Transformation stage. |None or TPS|
# |--FeatureExtraction| type=str, required=True, FeatureExtraction stage. |VGG, RCNN or ResNet|
# |--SequenceModeling| type=str, required=True, SequenceModeling stage. |None or BiLSTM|
# |--Prediction| type=str, required=True, Prediction stage. |CTC or Attn|
# |--num_fiducial| type=int, default=20, number of fiducial points of TPS-STN|int|
# |--input_channel| type=int, default=1, the number of input channel of Feature extractor|int|
# |--output_channel| type=int, default=512, the number of output channel of Feature extractor|int|
# |--hidden_size| type=int, default=256, the size of the LSTM hidden state|int|
# ### Train the models
# The variables used will be:
#
# |Model|Experiment Name|Command used|
# |:---:|:---:|:---:|
# |VGG | vgg-notran-nolstm-ctc | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name vgg-notran-nolstm-ctc \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction VGG --SequenceModeling None --Prediction CTC \ --num_iter 10000 --valInterval 1000 |
# |VGG | vgg-tps-nolstm-ctc| CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name vgg-tps-nolstm-ctc \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation TPS --FeatureExtraction VGG --SequenceModeling None --Prediction CTC \ --num_iter 10000 --valInterval 1000 |
# |VGG |vgg-notran-nolstm-attn|CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name vgg-notran-nolstm-attn \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction VGG --SequenceModeling None --Prediction Attn \ --num_iter 10000 --valInterval 1000|
# |RCNN | rcnn-notran-nolstm-ctc | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name rcnn-notran-nolstm-ctc \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction RCNN --SequenceModeling None --Prediction CTC \ --num_iter 10000 --valInterval 1000 |
# |RCNN | rcnn-notran-nolstm-atnn | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name rcnn-notran-nolstm-atnn \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction RCNN --SequenceModeling None --Prediction Attn \ --num_iter 10000 --valInterval 1000 |
# |ResNet | resnet-notran-nolstm-ctc | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name resnet-notran-nolstm-ctc \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction ResNet --SequenceModeling None --Prediction CTC \ --num_iter 10000 --valInterval 1000 |
# |ResNet | resnet-notran-nolstm-atnn | CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name resnet-notran-nolstm-atnn \ --train_data result/train --valid_data result/test --batch_size 200 \ --Transformation None --FeatureExtraction ResNet --SequenceModeling None --Prediction Attn \ --num_iter 10000 --valInterval 1000 |
# ### Experiment checklist
# +
from IPython.display import display
from ipywidgets import Checkbox
box1 = Checkbox(False, description='vgg-notran-nolstm-ctc')
box2 = Checkbox(False, description='vgg-notran-nolstm-attn')
box3 = Checkbox(False, description='rcnn-notran-nolstm-ctc')
box4 = Checkbox(False, description='rcnn-notran-nolstm-atnn')
box5 = Checkbox(False, description='resnet-notran-nolstm-ctc')
box6 = Checkbox(False, description='resnet-notran-nolstm-atnn')
display(box1,box2,box3,box4,box5,box6)
def changed(b):
print(b)
box1.observe(changed)
box2.observe(changed)
box3.observe(changed)
box4.observe(changed)
box5.observe(changed)
box6.observe(changed)
# -
# ### Experiment summary
# By using ResNet (no Transformation, no BiLSTM) with CTC prediction, a prediction accuracy of over 98% was achieved.
#
# |Model|Exp Name|Accuracy|
# |:---:|:---:|:---:|
# |VGG | vgg-notran-nolstm-ctc |90.837|
# |VGG | vgg-tps-nolstm-ctc|64.542|
# |VGG |vgg-notran-nolstm-attn|86.853|
# |RCNN | rcnn-notran-nolstm-ctc |80.080|
# |RCNN | rcnn-notran-nolstm-atnn | - |
# |ResNet | resnet-notran-nolstm-ctc |<mark>98.805</mark>|
# |ResNet | resnet-notran-nolstm-atnn |94.422|
# + [markdown] tags=[]
# Command to train ResNet with a batch size of 50:
#
# ```
# # # !CUDA_VISIBLE_DEVICES=0 python3 train.py --exp_name resnet-notran-nolstm-ctc-bs50 \
# # # --train_data result/train --valid_data result/test --batch_size 50 \
# # # --Transformation None --FeatureExtraction ResNet --SequenceModeling None --Prediction CTC \
# # # --num_iter 10000 --valInterval 1000 \
# # # --saved_model saved_models/resnet-notran-nolstm-ctc/best_accuracy.pth
# ```
# -
# ### Predict readings from trained model
# %cd /mnt/c/Users/stcik/scire/papers/muon/deep-text-recognition-benchmark
# Predict 90C data
# output = !python3 predict.py \
# --Transformation None --FeatureExtraction ResNet --SequenceModeling None --Prediction CTC \
# --image_folder 90C/ --batch_size 400 \
# --saved_model resnet-notran-nolstm-ctc-50bs.pth
output
from IPython.display import display, HTML
from PIL import Image
import base64
import io
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from cycler import cycler
plt.rcParams.update({
"text.usetex": True,
"font.family": "DejaVu Sans",
"font.serif": ["Computer Modern Roman"],
"font.size": 10,
"xtick.labelsize": 10,
"ytick.labelsize": 10,
"figure.subplot.left": 0.21,
"figure.subplot.right": 0.96,
"figure.subplot.bottom": 0.18,
"figure.subplot.top": 0.93,
"legend.frameon": False,
})
params = {'text.latex.preamble': '\n'.join([r'\usepackage{amsmath, amssymb, unicode-math}',
                                            r'\usepackage[dvips]{graphicx}',
                                            r'\usepackage{xfrac}', r'\usepackage{amsbsy}'])}
plt.rcParams.update(params) #newer matplotlib expects the preamble as a single string; apply it
# + tags=[]
data = pd.DataFrame()
for ind, row in enumerate(output[
output.index('image_path \tpredicted_labels \tconfidence score')+2:
]):
row = row.split('\t')
filename = row[0].strip()
label = row[1].strip()
conf = row[2].strip()
img = Image.open(filename)
img_buffer = io.BytesIO()
img.save(img_buffer, format="PNG")
imgStr = base64.b64encode(img_buffer.getvalue()).decode("utf-8")
data.loc[ind, 'Image'] = '<img src="data:image/png;base64,{0:s}">'.format(imgStr)
data.loc[ind, 'File name'] = filename
data.loc[ind, 'Reading'] = label
data.loc[ind, 'Confidence'] = conf
html_all = data.to_html(escape=False)
display(HTML(html_all))
# -
# ### Visualise the predicted data, correct wrong readings and calculate the average and error of the readings.
# Correct the readings
# Convert data from string to float
data['Reading']=data['Reading'].astype(float)
# selecting rows based on condition
rslt_df = data[(data['Reading'] < 85) | (data['Reading'] > 95)]
html_failed = rslt_df.to_html(escape=False)
display(HTML(html_failed))
data['Reading'].to_excel("90C_readings.xlsx")
# There are no wrong predictions, we can directly plot the data.
# +
import numpy as np
import matplotlib.pyplot as plt
def adjacent_values(vals, q1, q3):
upper_adjacent_value = q3 + (q3 - q1) * 1.5
upper_adjacent_value = np.clip(upper_adjacent_value, q3, vals[-1])
lower_adjacent_value = q1 - (q3 - q1) * 1.5
lower_adjacent_value = np.clip(lower_adjacent_value, vals[0], q1)
return lower_adjacent_value, upper_adjacent_value
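# Standalone sanity check of the whisker rule above: the raw 1.5*IQR whiskers
# are clipped to the observed data range (logic restated here so the cell runs
# on its own; the numbers are hypothetical).
# +
import numpy as np

def _adjacent_values(vals, q1, q3):  # same clipping rule as adjacent_values
    upper = np.clip(q3 + (q3 - q1) * 1.5, q3, vals[-1])
    lower = np.clip(q1 - (q3 - q1) * 1.5, vals[0], q1)
    return lower, upper

# With q1=2 and q3=4 the raw whiskers are [-1, 7], clipped to [1, 6].
lo, hi = _adjacent_values([1, 2, 3, 4, 5, 6], 2, 4)
assert (lo, hi) == (1, 6)
# -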
# +
fig, ax = plt.subplots(1,2,figsize=(6.4,3),tight_layout=True)
time = range(1,196)
num_bins = 20
# the histogram of the data
ax[0].plot(time,data['Reading'])
ax[0].set_xlabel('$t$ (s)')
ax[0].set_ylabel(r'Readings ($^\circ$C)')  # \degree is not defined in base LaTeX when usetex=True
violin_data = [sorted(data['Reading'])]
parts = ax[1].violinplot(violin_data, positions=[0],
showmeans=False, showmedians=False, showextrema=False)
for pc in parts['bodies']:
pc.set_facecolor('#D43F3A')
pc.set_edgecolor('black')
pc.set_alpha(1)
quartile1, medians, quartile3 = np.percentile(violin_data, [25, 50, 75], axis=1)
whiskers = np.array([
adjacent_values(sorted_array, q1, q3)
for sorted_array, q1, q3 in zip(violin_data, quartile1, quartile3)])
whiskers_min, whiskers_max = whiskers[:, 0], whiskers[:, 1]
mean=np.mean(violin_data)
inds = np.arange(0, len(medians))
ax[1].scatter(inds, medians, marker='o', edgecolors='tab:blue',
c='white', s=55, zorder=3, label='median: %.1f' % medians[0])
ax[1].scatter(inds, mean, marker='s', edgecolors='tab:blue',
c='white', s=45, zorder=4, label='mean: %.1f' % mean)
ax[1].vlines(inds, quartile1, quartile3, color='k', linestyle='-', lw=5)
ax[1].vlines(inds, whiskers_min, whiskers_max, color='k', linestyle='-', lw=1)
ax[1].set_xlabel('Probability density')
ax[1].legend(frameon=False, loc=0)
plt.savefig('90C_prediction.eps')
plt.show()
| _build/html/_sources/ML_LCD_readings/LCD_readings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
#hide
#skip
! [ -e /content ] && pip install -Uqq mrl-pypi # upgrade mrl on colab
# +
# default_exp policy_gradient
# -
# # Policy Gradient
#
# > Policy gradient modules
#hide
from nbdev.showdoc import *
# %load_ext autoreload
# %autoreload 2
# +
# export
from mrl.imports import *
from mrl.torch_imports import *
from mrl.torch_core import *
from mrl.layers import *
# -
# ## Policy Gradients
#
# The code in this module implements several policy gradient algorithms
#
# - `PolicyGradient` - implements [Policy Gradients](https://papers.nips.cc/paper/1999/hash/464d828b85b0bed98e80ade0a5c43b0f-Abstract.html)
#
# - `TRPO` - implements [Trust Region Policy Optimization](https://arxiv.org/pdf/1502.05477.pdf)
#
# - `PPO` - implements [Proximal Policy Optimization](https://arxiv.org/pdf/1707.06347.pdf)
#
# ### Current Limitations
#
# The implementations below are designed for the scenario where the output of the model is a series of actions over time. Importantly, rewards are discounted going backwards, meaning the discounted reward at every timestep contains some of the future rewards. If your model does not predict a series of rewards (i.e. predicts a single graph), you may need to revisit these assumptions.
# +
# export
class BasePolicy():
'''
BasePolicy - base policy class
Inputs:
- `gamma float`: discount factor
'''
def __init__(self, gamma=1.):
self.gamma = gamma
def discount_rewards(self, rewards, mask, traj_rewards=None):
'''
discount_rewards - discounts rewards
Inputs:
- `rewards torch.Tensor[bs]`: reward tensor (one reward per batch item)
- `mask torch.BoolTensor[bs, sl]`: mask (ie for padding). `True` indicates
values that will be kept, `False` indicates values that will be masked
- `traj_rewards Optional[torch.Tensor[bs, sl]]`: trajectory rewards.
Has a reward value for each time point
'''
rewards = scatter_rewards(rewards, mask)
if traj_rewards is not None:
rewards += traj_rewards
discounted = discount_rewards(rewards, self.gamma)
return discounted
# -
show_doc(BasePolicy.discount_rewards)
# ## Policy Gradients
#
# `PolicyGradient` implements standard [Policy Gradients](https://papers.nips.cc/paper/1999/hash/464d828b85b0bed98e80ade0a5c43b0f-Abstract.html) following:
#
# $$ \nabla_\theta J(\theta) = \mathbb{E}_\pi [R(s,a) \nabla_\theta \ln \pi_\theta(a \vert s)] $$
#
# When we generate a sample through autoregressive Monte Carlo sampling, we create a sequence of actions which we represent as a tensor of size `(bs, sl)`.
#
# For each step in this series, we have a probability distribution over all possible actions. This gives us a tensor of log probabilities of size `(bs, sl, n_actions)`. We can then gather the log probabilities for the actions we actually took, giving us a tensor of gathered log probabilities of size `(bs, sl)`.
#
# We also have a set of rewards associated with each sample. In the context of generating compounds, we most often have a single reward for each sampling trajectory that represents the final score of the whole molecule. This would be a tensor of size `(bs)`. If applicable, we can also have a tensor of trajectory rewards which has a reward for each sampling timestep. This trajectory reward tensor would be of size `(bs, sl)`.
#
# These rewards are discounted over all timesteps using `discount_rewards`, then scaled using `whiten`. This gives us our final tensor of rewards of size `(bs, sl)`.
#
# Now we can compute the empirical expectation $\mathbb{E}_\pi [R(s,a) \nabla_\theta \ln \pi_\theta(a \vert s)]$ by multiplying the gathered log probabilities by the discounted rewards and taking the mean over the batch.
#
# Then of course we want to maximize this expectation, so we use gradient descent to minimize $-\mathbb{E}_\pi [R(s,a) \nabla_\theta \ln \pi_\theta(a \vert s)]$
#
# This basically tells the model to increase the probability of sample paths that had above-average rewards within the batch, and decrease the probability of sample paths with below-average rewards.
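# The weighting step above can be sketched in a few lines of NumPy (a minimal
# standalone illustration, not the library implementation; all names below are
# made up for the example):
# +
import numpy as np

def discount_sketch(rewards, gamma):
    # Backward pass: each timestep accumulates the discounted future rewards.
    out = np.zeros_like(rewards, dtype=float)
    running = np.zeros(rewards.shape[0])
    for t in reversed(range(rewards.shape[1])):
        running = rewards[:, t] + gamma * running
        out[:, t] = running
    return out

# One sample (bs=1, sl=3) with a single terminal reward of 1.
traj = np.array([[0., 0., 1.]])
disc = discount_sketch(traj, gamma=0.9)
assert np.allclose(disc, [[0.81, 0.9, 1.0]])

# Gathered log probabilities for the taken actions; the loss is the negative
# mean of logprob * discounted reward over the sequence.
gathered_lps = np.log(np.full((1, 3), 0.5))
loss = -(gathered_lps * disc).sum(-1) / disc.shape[1]
assert loss[0] > 0  # log(0.5) < 0 and rewards > 0, so the loss is positive
# -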
# +
# export
class PolicyGradient(BasePolicy):
'''
PolicyGradient - Basic policy gradient implementation
papers.nips.cc/paper/1999/hash/464d828b85b0bed98e80ade0a5c43b0f-Abstract.html
Inputs:
- `discount bool`: if True, rewards are discounted over all timesteps
- `gamma float`: discount factor (ignored if `discount=False`)
- `ratio bool`: if True, model log probabilities are replaced with the
ratio between main model log probabilities and baseline model log probabilities,
a technique used in more sophisticated policy gradient algorithms. This can
improve stability
- `scale_rewards bool`: if True, rewards are mean-scaled before discounting.
This can lead to quicker convergence
- `ratio_clip Optional[float]`: if not None, value passed will be
used to clamp log probability ratios to `[-ratio_clip, ratio_clip]`
to prevent extreme values
'''
def __init__(self,
discount=True,
gamma=0.97,
ratio=False,
scale_rewards=True,
ratio_clip=None
):
super().__init__(gamma)
self.discount = discount
self.ratio = ratio
if ratio_clip is not None:
ratio_clip = abs(ratio_clip)
self.ratio_clip = ratio_clip
self.scale_rewards = scale_rewards
self.mean_reward = None
def __call__(self, lps, mask, rewards, base_lps=None, traj_rewards=None):
'''
Inputs:
- `lps torch.FloatTensor[bs, sl]`: gathered log probabilities
- `mask torch.BoolTensor[bs, sl]`: padding mask. `True` indicates
values that will be kept, `False` indicates values that will be masked
- `rewards torch.FloatTensor[bs]`: reward tensor (one reward per batch item)
- `base_lps Optional[torch.FloatTensor[bs, sl]]`: optional
base model gathered log probabilities
- `traj_rewards Optional[torch.FloatTensor[bs, sl]]`: optional tensor of
trajectory rewards with one reward value per timestep
'''
if self.ratio:
lps = (lps - base_lps.detach()).exp()
if self.ratio_clip is not None:
lps = lps.clamp(-self.ratio_clip, self.ratio_clip)
if not self.discount:
pg_loss = -((lps*mask).sum(-1)*rewards)/mask.sum(-1)
else:
rewards = self.discount_rewards(rewards, mask, traj_rewards)
rewards = whiten(rewards, mask=mask)
pg_loss = -(lps*rewards*mask).sum(-1)/mask.sum(-1)
pg_dict = {'loss':pg_loss.detach().cpu(),
'rewards':rewards.detach().cpu()}
self.last_outputs = pg_dict
return pg_loss, pg_dict
def from_batch_state(self, batch_state):
lps = batch_state.model_gathered_logprobs
base_lps = batch_state.base_gathered_logprobs
mask = batch_state.mask
rewards = batch_state.rewards_final
if self.scale_rewards:
if self.mean_reward is None:
self.mean_reward = rewards.mean()
else:
self.mean_reward = (1-self.gamma)*rewards.mean() + self.gamma*self.mean_reward
rewards = rewards - self.mean_reward
traj_rewards = batch_state.trajectory_rewards
loss, pg_dict = self(lps, mask, rewards, base_lps, traj_rewards)
return loss
# -
show_doc(PolicyGradient.__call__)
# ## Trust Region Policy Optimization
#
# [Trust Region Policy Optimization](https://arxiv.org/pdf/1502.05477.pdf) (TRPO) adapts the policy gradient algorithm by constraining the maximum update size based on how far the current agent has deviated from the baseline agent.
#
# $$ J(\theta) = \mathbb{E}_{s \sim \rho^{\pi_{\theta_\text{old}}}, a \sim \pi_{\theta_\text{old}}} \big[ \frac{\pi_\theta(a \vert s)}{\pi_{\theta_\text{old}}(a \vert s)} \hat{A}_{\theta_\text{old}}(s, a) \big] $$
#
# Subject to a KL constraint between the current policy and the baseline policy
#
# $$ \mathbb{E}_{s \sim \rho^{\pi_{\theta_\text{old}}}} [D_\text{KL}(\pi_{\theta_\text{old}}(.\vert s) \| \pi_\theta(.\vert s)] \leq \delta $$
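# The KL term above can be computed directly from logits; a small standalone
# NumPy sketch of the per-timestep categorical KL (names invented for the
# example, not the torch.distributions call used in the class below):
# +
import numpy as np

def categorical_kl(p_logits, q_logits):
    # KL(P || Q) for categorical distributions given unnormalized logits.
    p = np.exp(p_logits - p_logits.max(-1, keepdims=True))
    p /= p.sum(-1, keepdims=True)
    q = np.exp(q_logits - q_logits.max(-1, keepdims=True))
    q /= q.sum(-1, keepdims=True)
    return (p * (np.log(p) - np.log(q))).sum(-1)

old_logits = np.array([[1.0, 2.0, 0.5]])
# Identical policies have zero divergence...
assert np.allclose(categorical_kl(old_logits, old_logits), 0.0)
# ...and the penalty grows as the new policy drifts from the old one.
new_logits = old_logits + np.array([[2.0, 0.0, 0.0]])
assert categorical_kl(old_logits, new_logits) > 0
# -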
# +
# export
class TRPO(BasePolicy):
'''
TRPO - Trust Region Policy Optimization
arxiv.org/pdf/1502.05477.pdf
Inputs:
- `gamma float`: discount factor
- `kl_target float`: target maximum KL divergence from baseline policy
- `beta float`: coefficient for the KL loss
- `eta float`: coefficient for penalizing KL higher than `2*kl_target`
- `lam float`: lambda coefficient for advantage calculation
- `v_coef float`: value function loss coefficient
- `scale_rewards bool`: if True, rewards are mean-scaled before discounting.
This can lead to quicker convergence
- `ratio_clip Optional[float]`: if not None, value passed will be
used to clamp log probability ratios to `[-ratio_clip, ratio_clip]`
to prevent extreme values
'''
def __init__(self,
gamma,
kl_target,
beta=1.,
eta=50,
lam=0.95,
v_coef=0.5,
scale_rewards=True,
ratio_clip=None
):
self.gamma = gamma
self.beta = beta
self.eta = eta
self.lam = lam
self.kl_target = kl_target
self.v_coef = v_coef
self.scale_rewards = scale_rewards
self.mean_reward = None
self.ratio_clip = ratio_clip
def __call__(self, lps_g, base_lps_g, lps, base_lps, mask,
rewards, values, traj_rewards=None):
'''
Inputs:
- `lps_g torch.FloatTensor[bs, sl]`: model gathered log probabilities
- `base_lps_g torch.FloatTensor[bs, sl]`: baseline model
gathered log probabilities
- `lps torch.FloatTensor[bs, sl, n_actions]`: model full log probabilities
- `base_lps torch.FloatTensor[bs, sl, n_actions]`: baseline model
full log probabilities
- `mask torch.BoolTensor[bs, sl]`: padding mask. `True` indicates
values that will be kept, `False` indicates values that will be masked
- `rewards torch.FloatTensor[bs]`: reward tensor (one reward per batch item)
- `values torch.FloatTensor[bs, sl]`: state value predictions
- `traj_rewards Optional[torch.FloatTensor[bs, sl]]`: optional tensor of
trajectory rewards with one reward value per timestep
'''
discounted_rewards = self.discount_rewards(rewards, mask, traj_rewards)
advantages = self.compute_advantages(discounted_rewards, values)
advantages = whiten(advantages, mask=mask)
v_loss = self.value_loss(values, discounted_rewards)
ratios = (lps_g - base_lps_g.detach()).exp()
if self.ratio_clip is not None:
ratios = ratios.clamp(-self.ratio_clip, self.ratio_clip)
loss1 = -(ratios*advantages*mask).sum(-1)/mask.sum(-1)
kl = torch.distributions.kl.kl_divergence(
Categorical(logits=base_lps),
Categorical(logits=lps))
kl = (kl*mask).sum(-1)/mask.sum(-1)
loss2 = self.beta*kl
loss3 = self.eta * torch.maximum(to_device(torch.tensor(0.)),
kl - 2.0*self.kl_target)
pg_loss = loss1 + loss2 + loss3 + v_loss.mean(-1)
pg_dict = { 'loss' : pg_loss.detach().cpu(),
'pg_discounted' : discounted_rewards.detach().cpu(),
'pg_advantage' : advantages.detach().cpu(),
'ratios' : ratios.detach().cpu(),
'kl' : kl.detach().cpu(),
'loss1' : loss1.detach().cpu(),
'loss2' : loss2.detach().cpu(),
'loss3' : loss3.detach().cpu(),
'v_loss' : v_loss.detach().cpu()}
self.last_outputs = pg_dict
return pg_loss, pg_dict
def from_batch_state(self, batch_state):
lps_g = batch_state.model_gathered_logprobs
base_lps_g = batch_state.base_gathered_logprobs
lps = batch_state.model_logprobs
base_lps = batch_state.base_logprobs
mask = batch_state.mask
rewards = batch_state.rewards_final
if self.scale_rewards:
if self.mean_reward is None:
self.mean_reward = rewards.mean()
else:
self.mean_reward = (1-self.gamma)*rewards.mean() + self.gamma*self.mean_reward
rewards = rewards - self.mean_reward
traj_rewards = batch_state.trajectory_rewards
values = batch_state.state_values
loss, pg_dict = self(lps_g, base_lps_g, lps, base_lps, mask,
rewards, values, traj_rewards)
return loss
def compute_advantages(self, rewards, values):
if values is None:
advantages = rewards
else:
advantages = compute_advantages(rewards, values.detach(), self.gamma, self.lam)
return advantages
def value_loss(self, values, rewards):
if values is None:
v_loss = to_device(torch.tensor(0.))
else:
v_loss = self.v_coef*F.mse_loss(values, rewards, reduction='none')
return v_loss
# -
show_doc(TRPO.__call__)
# # Proximal Policy Optimization
#
# [Proximal Policy Optimization](https://arxiv.org/pdf/1707.06347.pdf) (PPO) applies clipping to the surrogate objective along with the KL constraints
#
#
# $$ r(\theta) = \frac{\pi_\theta(a \vert s)}{\pi_{\theta_\text{old}}(a \vert s)} $$
#
# $$ J(\theta) = \mathbb{E} [ r(\theta) \hat{A}_{\theta_\text{old}}(s, a) ] $$
#
# $$ J^\text{CLIP} (\theta) = \mathbb{E} [ \min( r(\theta) \hat{A}_{\theta_\text{old}}(s, a), \text{clip}(r(\theta), 1 - \epsilon, 1 + \epsilon) \hat{A}_{\theta_\text{old}}(s, a))] $$
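# The clipped surrogate can be reproduced in a couple of lines; a NumPy sketch
# (with a hypothetical epsilon of 0.2, matching the default `cliprange` below):
# +
import numpy as np

def clipped_surrogate(ratio, advantage, eps=0.2):
    # Pessimistic (elementwise minimum) of the unclipped and clipped terms.
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1 - eps, 1 + eps) * advantage
    return np.minimum(unclipped, clipped)

# A large ratio with a positive advantage has its gain capped at 1 + eps...
assert np.isclose(clipped_surrogate(5.0, 1.0), 1.2)
# ...while a shrinking ratio with a negative advantage is not rescued by clipping.
assert np.isclose(clipped_surrogate(0.1, -1.0), -0.8)
# -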
# +
# export
class PPO(BasePolicy):
'''
PPO - Proximal policy optimization
arxiv.org/pdf/1707.06347.pdf
Inputs:
- `gamma float`: discount factor
- `kl_coef float`: KL reward coefficient
- `lam float`: lambda coefficient for advantage calculation
- `v_coef float`: value function loss coefficient
- `cliprange float`: clip value for surrogate loss
- `v_cliprange float`: clip value for value function predictions
- `ent_coef float`: entropy regularization coefficient
- `kl_target Optional[float]`: target value for adaptive KL penalty
- `kl_horizon Optional[float]`: horizon for adaptive KL penalty
- `scale_rewards bool`: if True, rewards are mean-scaled before discounting.
This can lead to quicker convergence
'''
def __init__(self,
gamma,
kl_coef,
lam=0.95,
v_coef=0.5,
cliprange=0.2,
v_cliprange=0.2,
ent_coef=0.01,
kl_target=None,
kl_horizon=None,
scale_rewards=True,
):
self.gamma = gamma
self.lam = lam
self.ent_coef = ent_coef
self.kl_coef = kl_coef
self.kl_target = kl_target
self.kl_horizon = kl_horizon
self.v_coef = v_coef
self.cliprange = cliprange
self.v_cliprange = v_cliprange
self.scale_rewards = scale_rewards
self.mean_reward = None
def __call__(self, lps_g, base_lps_g, lps, mask,
rewards, values, ref_values, traj_rewards=None):
'''
Inputs:
- `lps_g torch.FloatTensor[bs, sl]`: model gathered log probabilities
- `base_lps_g torch.FloatTensor[bs, sl]`: baseline model
gathered log probabilities
- `lps torch.FloatTensor[bs, sl, n_actions]`: model full log probabilities
- `mask torch.BoolTensor[bs, sl]`: padding mask. `True` indicates
values that will be kept, `False` indicates values that will be masked
- `rewards torch.FloatTensor[bs]`: reward tensor (one reward per batch item)
- `values torch.FloatTensor[bs, sl]`: state value predictions
- `ref_values torch.FloatTensor[bs, sl]`: baseline state value predictions
- `traj_rewards Optional[torch.FloatTensor[bs, sl]]`: optional tensor of
trajectory rewards with one reward value per timestep
'''
discounted_rewards = self.discount_rewards(rewards, mask, traj_rewards)
kl_reward = self.compute_kl_reward(lps_g, base_lps_g)
discounted_rewards = discounted_rewards + kl_reward
advantages = self.compute_advantages(discounted_rewards, values)
advantages = whiten(advantages, mask=mask)
v_loss = self.value_loss(values, ref_values, discounted_rewards)
ratios = (lps_g - base_lps_g.detach()).exp()
ratios_clipped = torch.clamp(ratios, 1.0-self.cliprange, 1.0+self.cliprange)
loss1 = -(ratios*advantages)
loss2 = -(ratios_clipped*advantages)
loss = torch.maximum(loss1, loss2)
loss = (loss*mask).sum(-1)/mask.sum(-1)
entropy = Categorical(logits=lps).entropy().mean(-1)
pg_loss = loss + v_loss.mean(-1) - self.ent_coef*entropy
self.update_kl(lps_g, base_lps_g, mask)
pg_dict = { 'loss' : pg_loss.detach().cpu(),
'pg_discounted' : discounted_rewards,
'pg_advantage' : advantages,
'ratios' : ratios.detach().cpu(),
'ppo_loss' : loss.detach().cpu(),
'v_loss' : v_loss.detach().cpu(),
'entropy' : entropy.detach().cpu()}
self.last_outputs = pg_dict
return pg_loss, pg_dict
def from_batch_state(self, batch_state):
lps_g = batch_state.model_gathered_logprobs
base_lps_g = batch_state.base_gathered_logprobs
lps = batch_state.model_logprobs
mask = batch_state.mask
rewards = batch_state.rewards_final
if self.scale_rewards:
if self.mean_reward is None:
self.mean_reward = rewards.mean()
else:
self.mean_reward = (1-self.gamma)*rewards.mean() + self.gamma*self.mean_reward
rewards = rewards - self.mean_reward
traj_rewards = batch_state.trajectory_rewards
values = batch_state.state_values
ref_values = batch_state.ref_state_values
loss, pg_dict = self(lps_g, base_lps_g, lps, mask, rewards,
values, ref_values, traj_rewards)
return loss
def compute_kl_reward(self, lps_g, base_lps_g):
kl = lps_g - base_lps_g
kl_reward = -self.kl_coef * kl.detach()
return kl_reward
def value_loss(self, values, old_values, rewards):
if values is None:
v_loss = to_device(torch.tensor(0.))
else:
v_loss = F.mse_loss(values, rewards, reduction='none')
if old_values is not None:
min_v = old_values - self.v_cliprange
max_v = old_values + self.v_cliprange
values_clipped = torch.max(torch.min(values, max_v), min_v)
v_loss2 = F.mse_loss(values_clipped, rewards, reduction='none')
v_loss = torch.max(v_loss, v_loss2)
v_loss = self.v_coef*v_loss
return v_loss
def compute_advantages(self, rewards, values):
if values is None:
advantages = rewards
else:
advantages = compute_advantages(rewards, values.detach(), self.gamma, self.lam)
return advantages
def update_kl(self, lps_g, base_lps_g, mask):
if (self.kl_target is not None) and (self.kl_horizon is not None):
kl = (lps_g - base_lps_g).detach()
kl = (kl*mask).sum(-1)/mask.sum(-1)
kl = kl.cpu().mean()
error = torch.clip(kl/self.kl_target - 1, -0.2, 0.2)
factor = 1 + error * lps_g.shape[0]/self.kl_horizon
self.kl_coef *= factor
# -
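# The clipped loss above is PPO's surrogate objective: taking `torch.maximum` of the two negated terms is the same as negating the minimum of the unclipped and clipped terms. A minimal pure-Python sketch of the per-sample loss (the epsilon and the sample values below are illustrative, not taken from this class):

```python
def clipped_surrogate(ratio, advantage, eps=0.2):
    """PPO clipped surrogate loss for a single sample (to be minimized)."""
    unclipped = ratio * advantage
    # Clip the probability ratio to the trust region [1 - eps, 1 + eps]
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps) * advantage
    # loss = -min(unclipped, clipped), i.e. max(-unclipped, -clipped)
    return -min(unclipped, clipped)

# A large ratio with positive advantage is clipped at 1 + eps:
assert clipped_surrogate(1.5, 1.0) == -1.2
```

A ratio inside the trust region is left untouched, so small policy updates behave like vanilla policy gradient.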
show_doc(PPO.__call__)
# hide
from nbdev.export import notebook2script; notebook2script()
| nbs/17_policy_gradient.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Crosswalk between O*NET to ESCO occupations
#
# This notebook generates a pre-validated crosswalk from the occupations described in O\*NET to ESCO occupations.
#
# The automated mapping strategy primarily involved applying natural language processing (NLP) techniques to identify occupations with similar job titles and descriptions. Similarity in this context refers to semantic similarity and was calculated by comparing sentence embeddings generated by [Sentence-BERT](https://github.com/UKPLab/sentence-transformers) (Reimers and Gurevych 2019), a neural network model that outputs high-quality, semantically meaningful numerical representations of text.
#
# The resulting automated mapping was subsequently manually validated by the authors. See also the Appendix of the Mapping Career Causeways report for further details.
# # 1. Set up dependencies and helper functions
# +
import os
import pandas as pd
import numpy as np
import pickle
import collections
import seaborn as sns
from scipy.spatial.distance import pdist, squareform, cosine
import itertools
from time import time
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('bert-base-nli-mean-tokens')
# -
def get_title_sim(some_esco_code, some_onet_code):
'''
Measure pairwise cosine similarities of ESCO and O*NET job titles; identify common job titles
'''
esco_options = esco_alt_titles[some_esco_code]
onet_options = onet_alt_titles[some_onet_code]
title_combo = list(itertools.product(esco_options, onet_options))
cosines = [1-cosine(esco_titles_dict[elem[0]], onet_titles_dict[elem[1]]) for elem in title_combo]
common = [elem for elem in esco_options if elem in onet_options]
if len(common):
res2 = (common)
else:
res2 = ('na')
return (cosines, res2)
def get_desc_sim(some_esco_code, some_onet_code):
'''
Measure cosine similarity of occupation descriptions
'''
esco_desc_embed = esco_desc_dict[some_esco_code]
onet_desc_embed = onet_desc_dict[some_onet_code]
desc_sim = 1- cosine(esco_desc_embed, onet_desc_embed)
return desc_sim
def eval_onet_match(some_esco_code, some_onet_code):
'''
Calculate various measures of similarity; return for evaluating the quality of the matches
'''
onet_title = onet_off_titles[some_onet_code]
title_sims, shared_titles = get_title_sim(some_esco_code, some_onet_code)
desc_sim = get_desc_sim(some_esco_code, some_onet_code)
if len(title_sims):
res = (onet_title, max(title_sims), np.median(title_sims), desc_sim, shared_titles)
else:
res = (onet_title, 0, 0, desc_sim, shared_titles)
return res
# # 2. Read in various lookups and existing crosswalks
#
# First, we compared a given ESCO occupation with its most likely O\*NET matches. These are also referred to as ‘constrained’ matches and were derived by extrapolating from existing crosswalks between the US 2010 Standard Occupational Classification (SOC) and ISCO-08.
#
# The logic of this initial mapping was as follows: ESCO occupations with an immediate ISCO parent code (i.e. so-called level 5 ESCO occupations) → 4-digit ISCO code → US 2010 SOC → O\*NET occupations.
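# This chain of lookups can be sketched as two dictionary hops. The codes below are purely illustrative placeholders, not entries from the actual crosswalk files:

```python
# Hypothetical example codes, for illustration only
isco_to_soc = {'2512': ['15-1132']}          # 4-digit ISCO -> US 2010 SOC options
soc_to_onet = {'15-1132': ['15-1132.00']}    # US 2010 SOC -> O*NET-SOC options

def onet_options_for(isco_code):
    """Follow ISCO -> SOC -> O*NET and return a flat list of O*NET codes."""
    socs = isco_to_soc.get(isco_code, [])
    return [onet for soc in socs for onet in soc_to_onet.get(soc, [])]

assert onet_options_for('2512') == ['15-1132.00']
```

An ISCO code absent from the crosswalk simply yields an empty list of options.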
base_dir = ''
# ## 2.1 ONET to US 2010 SOC
# Crosswalk from O\*NET to US 2010 SOC obtained from the [O\*NET website](https://www.onetcenter.org/taxonomy/2010/soc.html/2010_to_SOC_Crosswalk.xls) in February 2020.
#
# Import O*NET to US 2010 SOC
onet_us2010soc = pd.read_excel(os.path.join(base_dir, 'lookups', 'ONET_to_US2010SOC.xlsx'))
onet_us2010soc.head(10)
#Create a mapping of US 2010 SOC options to ONET
onet_soc_lookup = collections.defaultdict(list)
for name, group in onet_us2010soc.groupby('2010 SOC Code'):
options = group['O*NET-SOC 2010 Code']
for option in options:
onet_soc_lookup[name].append(option)
# Map ONET codes to occupation titles
onet_titles = {}
for ix, row in onet_us2010soc.iterrows():
onet_titles[row['O*NET-SOC 2010 Code']] = row['O*NET-SOC 2010 Title']
# ## 2.2 ESCO (directly associated with an ISCO code) to ISCO
#
# The mapping of ESCO to ISCO was obtained using the ESCO API in February 2020.
# Note that the structure of ESCO is not straightforward, as at each level there is a combination of nested and leaf nodes.
# Import ESCO to ISCO-08
esco_occup_level5 = pd.read_csv(os.path.join(base_dir, 'lookups', 'esco_occup_level5.csv'), encoding = 'utf-8')
esco_occup_level5.head()
# ## 2.3 US 2010 SOC to ISCO-08
#
# The mapping between ISCO-08 and US 2010 SOC was obtained from the [BLS website](https://www.bls.gov/soc/soccrosswalks.htm) on February 28, 2020.
#US 2010 SOC to ISCO-08
isco_us2010soc = pd.read_excel(os.path.join(base_dir, 'lookups', 'ISCO_SOC_Crosswalk.xls'),
dtype = 'object',
skiprows = range(0,6))
isco_us2010soc.head()
#Create mapping of US 2010 SOC options to ISCO-08
isco_soc_lookup = collections.defaultdict(list)
for name, group in isco_us2010soc.groupby('ISCO-08 Code'):
options = group['2010 SOC Code']
for option in options:
isco_soc_lookup[name].append(option)
# # 3. First initial mapping
#
# ESCO level 5 (ESCO occupations that have an immediate ISCO parent code) → 4-digit ISCO code → US 2010 SOC → O\*NET occupation
# +
#Retrieve US 2010 SOC options for each ESCO occupation using its corresponding 4-digit ISCO-08 code
us_soc = esco_occup_level5['isco_group'].apply(lambda x: isco_soc_lookup[str(x)])
us_soc = us_soc.apply(lambda x: [elem.strip() for elem in x])
#Generate more granular O*NET options from US 2010 SOC
onet_options = us_soc.apply(lambda x: [onet_soc_lookup[elem] for elem in x])
#Create a flat list of O*NET codes
onet_options_flat = onet_options.apply(lambda x: [item for sublist in x for item in sublist])
#Generate a flat list of O*NET titles corresponding to the codes above
onet_names_flat = onet_options_flat.apply(lambda x: [onet_titles[elem] for elem in x])
# -
lens = onet_names_flat.apply(lambda x: len(x))
# We can now produce an initial mapping of ESCO occupations to possible O\*NET code options
mini_esco = esco_occup_level5[['id', 'preferred_label', 'alt_labels', 'description', 'isco_group']].copy()
mini_esco['onet_codes'] = onet_options_flat
mini_esco['onet_titles'] = onet_names_flat
mini_esco['lens'] = lens
mini_esco.head()
# ### Quick summary of the first intermediate mapping
#
# Out of 1701 ESCO level 5 occupations:
#
# - 21 with no matches (military occupations, ISCO codes need padding with 0s)
#
# - 341 1-1 matches
#
# - 312 1-2 matches
#
# ### Next steps
#
# - Calculate the semantic similarity of ESCO occupations with potential O\*NET options
# - Identify the most similar O\*NET occupation
# - Manually review the results
# # 4. Measure semantic similarity of mapping options
# ## 4.1 Collect all known job titles for ESCO and O\*NET occupations
#
# To analyse semantic similarity of ESCO occupations to O\*NET options, we collect the available occupation descriptions and known job titles. The similarity we use is a composite metric which reflects cosine similarity of Sentence-BERT embeddings and comprises:
# - Highest pairwise similarity among all known job titles (40%)
# - Median pairwise similarity between all known job titles (30%)
# - Similarity of occupation descriptions (30%)
#
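# As a sketch, the composite score combines the three components with the weights listed above (the similarity values below are illustrative):

```python
from statistics import median

def composite_similarity(title_sims, desc_sim):
    """Weighted average: 40% best title match, 30% median title match, 30% description."""
    if not title_sims:
        # No known title pairs: only the description similarity contributes
        return 0.3 * desc_sim
    return 0.4 * max(title_sims) + 0.3 * median(title_sims) + 0.3 * desc_sim

score = composite_similarity([0.9, 0.5, 0.7], 0.8)
```

This mirrors the weighted average computed later when condensing the match dictionaries.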
# Collect all titles for ESCO
mini_esco['isco_group'] = mini_esco['isco_group'].astype('str')
mini_esco['id'] = mini_esco['id'].astype('str')
mini_esco = mini_esco.fillna('na')
esco_alt_titles = collections.defaultdict(list)
for ix, row in mini_esco.iterrows():
esco_code = row['id']
esco_off_title = row['preferred_label']
esco_alt_titles[esco_code].append(esco_off_title)
esco_alt_title = row['alt_labels']
if esco_alt_title != 'na':
esco_alt_title = esco_alt_title.split('\n')
esco_alt_titles[esco_code].extend(esco_alt_title)
# Collect job titles for O*NET
onet_titles_df = pd.read_excel(os.path.join(base_dir, 'lookups', 'Alternate Titles.xlsx'))
onet_alt_titles = collections.defaultdict(list)
for code, group in onet_titles_df.groupby('O*NET-SOC Code'):
onet_off_title = group.iloc[0]['Title'].lower()
onet_alt_title = list(group['Alternate Title'].values)
onet_alt_title = [elem.lower() for elem in onet_alt_title]
onet_alt_titles[code].append(onet_off_title)
onet_alt_titles[code].extend(onet_alt_title)
# ## 4.2 Collect occupation descriptions for ESCO and O\*NET
# Collect occupation descriptions for ESCO
esco_desc = {}
for ix, row in mini_esco.iterrows():
esco_code = row['id']
esco_occ_desc = row['description'].lower()
esco_desc[esco_code] = esco_occ_desc
# Collect occupation descriptions for O*NET
onet_occ_info = pd.read_excel(os.path.join(base_dir, 'lookups', 'Occupation Data.xlsx'))
onet_desc = {}
for ix, row in onet_occ_info.iterrows():
onet_code = row['O*NET-SOC Code']
onet_occ_desc = row['Description'].lower()
onet_desc[onet_code] = onet_occ_desc
# Add official job titles
onet_off_titles = {}
for ix, row in onet_occ_info.iterrows():
onet_code = row['O*NET-SOC Code']
onet_occ_title = row['Title'].lower()
onet_off_titles[onet_code] = onet_occ_title
# +
#Save all description and title dicts
with open(os.path.join(base_dir, 'outputs', 'onet_desc.pkl'), 'wb') as f:
pickle.dump(onet_desc, f)
with open(os.path.join(base_dir, 'outputs', 'esco_desc.pkl'), 'wb') as f:
pickle.dump(esco_desc, f)
with open(os.path.join(base_dir, 'outputs', 'onet_alt_titles.pkl'), 'wb') as f:
pickle.dump(onet_alt_titles, f)
with open(os.path.join(base_dir, 'outputs', 'esco_alt_titles.pkl'), 'wb') as f:
pickle.dump(esco_alt_titles, f)
# -
# ## 4.3 Calculate sentence embeddings for job titles and occupation descriptors
# +
# WARNING: If you run this in a notebook (approx. 30 mins), the kernel might hang; we suggest running it as a script instead.
# Alternatively, you can skip this cell and read in the pre-computed embeddings, if available.
start_time = time()
#ONET description embeddings
onet_desc_sentences = list(onet_desc.values())
onet_desc_embeddings = model.encode(onet_desc_sentences)
onet_desc_dict = {occup: embed for occup, embed in zip(list(onet_desc.keys()), onet_desc_embeddings)}
#ESCO description embeddings
esco_desc_sentences = list(esco_desc.values())
esco_desc_embeddings = model.encode(esco_desc_sentences)
esco_desc_dict = {occup: embed for occup, embed in zip(list(esco_desc.keys()), esco_desc_embeddings)}
#ONET title embeddings
all_onet_titles = [item for sublist in list(onet_alt_titles.values()) for item in sublist]
flat_onet_titles = list(set(all_onet_titles))
onet_title_embeddings = model.encode(flat_onet_titles)
onet_titles_dict = {title: embed for title, embed in zip(flat_onet_titles, onet_title_embeddings)}
#ESCO title embeddings
all_esco_titles = [item for sublist in list(esco_alt_titles.values()) for item in sublist]
flat_esco_titles = list(set(all_esco_titles))
esco_title_embeddings = model.encode(flat_esco_titles)
esco_titles_dict = {title: embed for title, embed in zip(flat_esco_titles, esco_title_embeddings)}
print(f'Done in {(time() - start_time) / 60:.2f} minutes!')
#Save outputs
with open(os.path.join(base_dir, 'outputs', 'onet_desc_embed.pkl'), 'wb') as f:
pickle.dump(onet_desc_dict, f)
with open(os.path.join(base_dir, 'outputs', 'esco_desc_embed.pkl'), 'wb') as f:
pickle.dump(esco_desc_dict, f)
with open(os.path.join(base_dir, 'outputs', 'onet_title_embed.pkl'), 'wb') as f:
pickle.dump(onet_titles_dict, f)
with open(os.path.join(base_dir, 'outputs', 'esco_title_embed.pkl'), 'wb') as f:
pickle.dump(esco_titles_dict, f)
# -
# Read in the pre-computed embeddings, if available (see instructions for downloading the embeddings in the readme.md document).
# +
# Read in computed embeddings
with open(os.path.join(base_dir, 'outputs', 'onet_desc_embed.pkl'), 'rb') as f:
onet_desc_dict = pickle.load(f)
with open(os.path.join(base_dir, 'outputs', 'esco_desc_embed.pkl'), 'rb') as f:
esco_desc_dict = pickle.load(f)
with open(os.path.join(base_dir, 'outputs', 'onet_title_embed.pkl'), 'rb') as f:
onet_titles_dict = pickle.load(f)
with open(os.path.join(base_dir, 'outputs', 'esco_title_embed.pkl'), 'rb') as f:
esco_titles_dict = pickle.load(f)
# -
# ## 4.4 Measure similarity of ESCO occupations against most likely O\*NET occupations
# +
# Calculate similarities (approx. 5 mins);
# Alternatively, can skip two cells ahead if pre-computed results are available
start_time = time()
esco_onet_dict = collections.defaultdict(dict)
for ix, row in mini_esco.iterrows():
esco_code = row['id']
onet_codes = row['onet_codes']
isco_code = row['isco_group']
for code in onet_codes:
res = eval_onet_match(esco_code, code)
esco_onet_dict[esco_code][code] = res+(isco_code,)
print(f'Done in {(time() - start_time) / 60:.2f} minutes!')
# +
# Uncomment if saving the `esco_onet_dict` dictionary
# with open(os.path.join(base_dir, 'outputs', 'esco_onet_dict.pkl'), 'wb') as f:
# pickle.dump(esco_onet_dict, f)
# -
# If pre-computed results available, can skip to here
with open(os.path.join(base_dir, 'outputs', 'esco_onet_dict.pkl'), 'rb') as f:
esco_onet_dict = pickle.load(f)
# Condense the dict above and calculate single similarity value as a weighted average
compressed_esco_onet_dict = dict()
for k, v in esco_onet_dict.items():
new_values = []
for k2,v2 in v.items():
score = v2[1]*0.4 + v2[2]*0.3 + v2[3]*0.3
new_values.append((k2, v2[0], score, v2[4], v2[5]))
new_values = sorted(new_values, key = lambda x: x[2], reverse = True)
compressed_esco_onet_dict[k] = new_values
# Check
compressed_esco_onet_dict['956']
esco_onet_df = pd.DataFrame.from_dict(compressed_esco_onet_dict, orient = 'index')
esco_onet_df['id'] = esco_onet_df.index
esco_onet_df['esco_title'] = esco_onet_df['id'].apply(lambda x: esco_alt_titles[x][0])
esco_onet_df.head(3)
# This file was used for the first sweep of manual review
esco_onet_df.to_csv(os.path.join(base_dir, 'outputs', 'esco_onet_df.csv'))
# # 5. First sweep of manual review
#
# In the first sweep of the manual review, the 'constrained' matches were reviewed, and the most suitable match was recorded (if the reviewer was confident). The recommendations from the first sweep of reviews are saved in `reviews/esco_onet_recommended.csv`.
# # 6. Measure similarity of ESCO occupations against all O\*NET occupations
#
# In addition to evaluating the 'constrained' most likely matches, we also measured similarity of an ESCO occupation to all O\*NET occupations in case the best matching O\*NET occupation was not included in the set of 'constrained' O\*NET matches.
# Find the best ESCO match against all ONET codes (may take several hours)
# Alternatively, can skip two cells ahead if pre-computed results are available
start_time = time()
esco_onet_best_dict = collections.defaultdict(dict)
for ix, row in mini_esco.iterrows():
esco_code = row['id']
onet_codes = onet_off_titles.keys()
isco_code = row['isco_group']
for code in onet_codes:
res = eval_onet_match(esco_code, code)
esco_onet_best_dict[esco_code][code] = res+(isco_code,)
print(f'Done in {(time() - start_time) / 60:.2f} minutes!')
# +
# Uncomment if saving the `esco_onet_best_dict` dictionary
# with open(os.path.join(base_dir, 'outputs', 'esco_onet_best_dict.pkl'), 'wb') as f:
# pickle.dump(esco_onet_best_dict, f)
# -
# If pre-computed results available, can skip to here
with open(os.path.join(base_dir, 'outputs', 'esco_onet_best_dict.pkl'), 'rb') as f:
esco_onet_best_dict = pickle.load(f)
compressed_esco_onet_best_dict = dict()
for k, v in esco_onet_best_dict.items():
new_values = []
for k2,v2 in v.items():
score = v2[1]*0.4 + v2[2]*0.3 + v2[3]*0.3
new_values.append((k2, v2[0], score, v2[4], v2[5]))
new_values = sorted(new_values, key = lambda x: x[2], reverse = True)
compressed_esco_onet_best_dict[k] = new_values[0]
# # 7. Second sweep of manual review
#
# The most likely 'constrained' matches, the recommendations from the first sweep of review, and the best matching options across all O\*NET occupations were combined in `esco_onet_merged.xlsx` and again manually reviewed.
# Read in recommendations from the first manual review
recommendations = pd.read_csv(os.path.join(base_dir, 'review','esco_onet_recommended.csv'), encoding = 'utf-8')
recommendations['id'] = recommendations['id'].astype(str)
recommendations.head()
# +
# Combine the recommendation with the 'constrained' matches and the overall most similar option
merged = esco_onet_df.merge(recommendations[['id', 'esco_title', 'Recommended option']],
how = 'inner',
left_on = 'id',
right_on = 'id')
merged['most_similar_overall'] = merged['id'].apply(lambda x: compressed_esco_onet_best_dict[str(x)])
# This file was used to create 'esco_onet_merged.xlsx', which was then used
# for the second sweep of manual reviews and independent validation
merged.to_csv(os.path.join(base_dir, 'outputs', 'esco_onet_merged.csv'), index = False)
# -
# # 8. Final crosswalk
#
# **The final validated mapping between O\*NET and ESCO is saved in `esco_onet_crosswalk_Nov2020.csv`**
#
# For a number of occupations, additional research was required. This involved reading occupation descriptions and job requirements. We used the following considerations to decide between multiple potential matches:
#
# - ‘Constrained’ occupations (i.e. occupations that fit existing O*NET to ISCO mapping) were given preference.
# - A higher number of shared job titles was assumed to indicate a better match between occupations.
# - General O*NET occupational codes (e.g. 11-9039.00 ‘...all other’) were avoided if possible.
# - We attempted to take into account the ISCO-08 skill level (i.e. the first digit of the ISCO code, which reflects the ranking of occupations from managerial to elementary) when assigning the corresponding O*NET occupations.
#
# The crosswalk also contains information about our level of confidence in the assigned match. There are three levels of confidence:
#
# - A score of 2 indicates that the best ‘constrained’ O*NET occupation was also the most semantically similar across all occupations (31 per cent of matches).
# - A score of 1 indicates that the two automatically identified options disagree but the reviewers have agreed on the best O*NET match following two rounds of manual review (65 per cent).
# - A score of 0.5 indicates that reviewers disagreed with the initial reviewer’s assignment and there was no consensus on the most suitable O\*NET match (4 per cent of the cases). In this case, the ESCO occupation in question was assigned to an O\*NET occupation suggested by a second reviewer.
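# The confidence rules above can be summarized in a small helper. This is a simplified sketch of the review logic, with hypothetical O*NET codes and a boolean standing in for the manual review outcome:

```python
def match_confidence(constrained_best, overall_best, reviewers_agree=True):
    """Assign the crosswalk confidence score described above (simplified sketch)."""
    if constrained_best == overall_best:
        return 2      # both automated signals point to the same O*NET occupation
    if reviewers_agree:
        return 1      # resolved by two rounds of manual review
    return 0.5        # no reviewer consensus; second reviewer's choice used

assert match_confidence('15-1132.00', '15-1132.00') == 2
```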
| supplementary_online_data/ONET_ESCO_crosswalk/ONET_to_ESCO_crosswalk.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/cybertraining-dsc/sp20-523-326/blob/master/notebooks/Assignment2_Shahapurkar_Gangaprasad.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] colab_type="text" id="KScAHZqbQ_Sr"
# # MNIST Classification
# + [markdown] colab_type="text" id="eXNb7rQOQ2bW"
#
# In this lesson we discuss how to create a simple IPython notebook to solve
# an image classification problem. MNIST contains a set of pictures of handwritten digits.
#
# + [markdown] colab_type="text" id="ZLpLVkFLRK-1"
# ## Import Libraries
#
# Note: https://python-future.org/quickstart.html
# + colab_type="code" id="HqorYeyBkCyi" colab={}
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.utils import to_categorical, plot_model
from keras.datasets import mnist
# + [markdown] colab_type="text" id="kmlJQqK42Cs2"
# ## Warm Up Exercise
# + [markdown] colab_type="text" id="YajEPjlyRkrr"
# ## Pre-process data
# + [markdown] colab_type="text" id="4yOjQ9cjRrwQ"
# ### Load data
#
# First we load the data from the built-in MNIST dataset in Keras,
# which comes pre-split into training and testing sets.
# Both the training and the testing data have two components:
# features and labels; every sample in the dataset has a corresponding label.
# In MNIST, each sample is image data represented as an array,
# and the labels range from 0 to 9.
#
# We use x_train for the training features and y_train for the training labels. The same goes for the testing data.
# + colab_type="code" id="LN7h9FQ-kIzB" colab={}
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# + [markdown] colab_type="text" id="AJJxAbKxR7kZ"
# ### Identify Number of Classes
#
# As this is a digit classification problem, we need to know how many classes there are,
# so we count the number of unique labels.
# + colab_type="code" id="laqnfrEBSFxZ" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="10b86ed2-c8f8-4f02-9f82-9cbb3f11405d"
num_labels = len(np.unique(y_train))
num_labels
# + [markdown] colab_type="text" id="ugx6012YSXtA"
# ### Convert Labels To One-Hot Vector
#
# Read more on one-hot vector.
# + colab_type="code" id="dpSBHnBEScZN" colab={}
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
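# A one-hot vector has a 1 at the index of the class and 0 elsewhere. A minimal pure-Python sketch of what `to_categorical` produces for a single label (for illustration only):

```python
def one_hot(label, num_classes):
    """Return a one-hot list: 1.0 at position `label`, 0.0 elsewhere."""
    vec = [0.0] * num_classes
    vec[label] = 1.0
    return vec

# The digit 3 becomes a length-10 vector with a single 1.0 at index 3
assert one_hot(3, 10) == [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
```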
# + [markdown] colab_type="text" id="vyAfgPaJU753"
# ## Image Reshaping
#
# The training model treats each sample as a vector, so this
# reshaping is a model-dependent modification. Here we assume each image
# is square.
# + colab_type="code" id="8vxzUIK8Sedn" colab={}
image_size = x_train.shape[1]
input_size = image_size * image_size
# + [markdown] colab_type="text" id="gZnBo49lVWDM"
# ## Resize and Normalize
#
# The next step is to reshape the images so they fit into a vector,
# and to normalize the data. Pixel values range from 0 to 255, so an
# easy way to normalize is to divide by the maximum value.
#
# + colab_type="code" id="b9qUX7mwSf-u" colab={}
x_train = np.reshape(x_train, [-1, input_size])
x_train = x_train.astype('float32') / 255
x_test = np.reshape(x_test, [-1, input_size])
x_test = x_test.astype('float32') / 255
# + [markdown] colab_type="text" id="tn89L_-zVxUB"
# ## Create a Keras Model
#
# Keras is a neural network library. The summary function provides a tabular summary of the model you created, and the plot_model function draws a graph of the network you created.
# + colab_type="code" id="c3o_k-adkOy4" colab={"base_uri": "https://localhost:8080/", "height": 676} outputId="8b2e1a0e-386a-4b43-8cb2-afd82d2bfd1a"
# Create Model
# network parameters
batch_size = 4
hidden_units = 64
model = Sequential()
model.add(Dense(hidden_units, input_dim=input_size))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
plot_model(model, to_file='mlp-mnist.png', show_shapes=True)
# + [markdown] colab_type="text" id="nwl6dlU3aStZ"
# ## Compile and Train
#
# A Keras model needs to be compiled before it can be used to train.
# In the compile function, you provide the optimizer you want to use,
# the metrics you expect and the type of loss function you need.
#
# Here we use the Adam optimizer, a popular optimizer for neural networks.
#
# The loss function we use is categorical_crossentropy.
#
# Once the model is compiled, the fit function is called, passing the training data, the number of epochs and the batch size.
#
# The batch size determines the number of elements used per minibatch when optimizing the loss function.
#
# **Note: Change the number of epochs, batch size and see what happens.**
#
#
# + colab_type="code" id="WkUMyJyEmsM6" colab={"base_uri": "https://localhost:8080/", "height": 145} outputId="0645d95f-1b56-4cb7-fde1-8f9b244178be"
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=3, batch_size=batch_size)
# + [markdown] colab_type="text" id="fDAY7JYmbmOq"
# ## Testing
#
# Now we can test the trained model. Use the evaluate function, passing the
# test data and batch size; the accuracy and the loss value are returned.
#
# **MNIST_V1.0|Exercise: Try to observe the network behavior by changing the number of epochs, batch size and record the best accuracy that you can gain. Here you can record what happens when you change these values. Describe your observations in 50-100 words.**
#
# + colab_type="code" id="0sfTk_pcjXHD" colab={"base_uri": "https://localhost:8080/", "height": 72} outputId="88fffbf5-500b-4708-9dcc-829501830d35"
loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print("\nTest accuracy: %.1f%%" % (100.0 * acc))
# + [markdown] colab_type="text" id="b2fDpqnfcEmC"
# ## Final Note
#
# This program can be thought of as a "hello world" program in deep
# learning. The objective of this exercise is not to teach you the depths of
# deep learning, but to teach you the basic concepts needed to design a
# simple network to solve a problem. Before running the whole code, read
# all the instructions before each code section.
#
# ## Homework
#
# **Solve Exercise MNIST_V1.0.**
# + [markdown] colab_type="text" id="UufRkOsSRaUR"
#
# ### Reference:
#
# [Original source code](https://github.com/PacktPublishing/Advanced-Deep-Learning-with-Keras)
#
| docs/report/fa20-523-326/notebooks/Assignment2_Shahapurkar_Gangaprasad.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import multiprocessing as mp
import spacy
from spacy.lemmatizer import Lemmatizer
from spacy.lang.en import LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES
lemmatizer = Lemmatizer(LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES)
import time
import numpy as np
from multiprocessing import Pool
np.random.seed(seed=1991)
import pickle as pkl
def decader(x):
return(x -x%10)
decades=[2000, 1990, 1980, 1970, 1960, 1950, 1940, 1930, 1920, 1900, 1910,
1890, 1880, 1870, 1850, 1860, 1840, 1830, 1820, 1810, 1800]
# NOUN (nouns),
# VERB (verbs), ADJ (adjectives), ADV (adverbs),
# PRON (pronouns), DET (determiners and articles),
# ADP (prepositions and postpositions), NUM (numerals), CONJ (conjunctions), PRT (particles), ‘.’
# (punctuation marks) and X
unigrams="a b c d e f g h i j k l m n o other p pos punctuation q r s t u v w x y z"
unigram_list=unigrams.split()
br_to_us=pd.read_excel("Book.xlsx",header=1)
br_to_us_dict=dict(zip(br_to_us.UK.tolist(),br_to_us.US.tolist()))
np.random.shuffle(unigram_list)
spelling_replacement={'context':br_to_us_dict}
num_cores=mp.cpu_count() -1
def lemma_maker(x, y):
#print(lemmatizer(x,y)[0])
return lemmatizer(x,y)[0]
def reducer(dfs):
df=dfs.copy()
if df.empty==True:
return df
df['context'],df['pos']=df['context_pos'].str.split('_', 1).str
df.replace(spelling_replacement,inplace=True)
df['context']=np.vectorize(lemma_maker)(df['context'], df['pos'])
df['context']=df['context']+"_"+df['pos']
#df.drop(['pos'],axis=1,inplace=True)
df=df.groupby(['context','decade'])['count'].sum().to_frame()
df.reset_index(inplace=True)
return df
def chunked_dataset_extracter(dfs):
#df=dfs.copy()
dfs.columns=['context_pos','decade','count']
df=dfs.query('decade > 1799').copy()
df['decade']=np.vectorize(decader)(df['decade'])
df=df.groupby(['context_pos','decade'])['count'].sum().to_frame()
df.reset_index(inplace=True)
df.context_pos=df.context_pos.str.lower()
df=df.loc[df.context_pos.str.match("^['.a-z-]+_(noun|verb|adj|adv)$",na=False)]
df=df.loc[~(df.context_pos.str.match("^.*['.-]{2,}.*$",na=False))]
df=df.groupby(['context_pos','decade'])['count'].sum().to_frame()
df.reset_index(inplace=True)
#df=reducer(df)
return df
def dataset_extracter(letter):
CHUNKSIZE = 1_000_000
print(f"Started with letter(s) {letter}")
cur_time=time.time()
total_df_shape=0
df_list=[]
path_loc="http://storage.googleapis.com/books/ngrams/books/googlebooks-eng-all-1gram-20120701-"+letter+".gz"
dfs = pd.read_csv(path_loc, compression='gzip', header=None, sep="\t", quotechar='"',usecols=[0,1,2],chunksize=CHUNKSIZE)
for df in dfs:
total_df_shape+=df.shape[0]
df_list.append(chunked_dataset_extracter(df))
#print(total_df_shape)
complete_df=pd.concat(df_list,ignore_index=True)
#print(complete_df.shape[0])
total_contexts=reducer(complete_df)
after_shape=total_contexts.shape[0]
print(f"Finished with letter(s) {letter} ; Before : {total_df_shape}, After : {after_shape} Change in percentage : {(total_df_shape-after_shape)/total_df_shape*100:0.2f}%")
print(f"Letter(s) {letter} took time {(time.time()-cur_time):0.2f} seconds")
print("\n")
return total_contexts
dfs=dataset_extracter('j')
dfs.loc[dfs.context.str.contains('_')]
# +
context_list=[]
pool = Pool(num_cores)
#print("Started with letter "+str(unigram_list))
for temp_contexts in pool.imap_unordered(dataset_extracter,unigram_list):
context_list.append(temp_contexts)
pool.close()
pool.join()
contexts = pd.concat(context_list,ignore_index=True,sort=False)
contexts.decade=contexts.decade.astype("int32")
contexts_df=contexts.pivot_table(values='count',columns='decade',index='context',aggfunc=np.sum)
# -
contexts_df
# +
total_freq_all=contexts_df.sum(axis=1)
total_freq_no_2000=contexts_df.drop(2000,axis=1).sum(axis=1)
all_decade_freq=contexts_df.dropna().sum(axis=1)
all_decade_contents=contexts_df[contexts_df>10]
all_decade_free_min_10=all_decade_contents.dropna().sum(axis=1)
# -
top_total_freq_all=total_freq_all.sort_values(ascending=False).head(50_000).index.tolist()
top_total_freq_no_2000=total_freq_no_2000.sort_values(ascending=False).head(50_000).index.tolist()
top_all_decade_freq=all_decade_freq.sort_values(ascending=False).head(50_000).index.tolist()
top_all_decade_free_min_10=all_decade_free_min_10.sort_values(ascending=False).head(50_000).index.tolist()
len(set(top_total_freq_all).intersection(top_total_freq_no_2000))
len(set(top_all_decade_freq).intersection(top_all_decade_free_min_10))
len(set(top_total_freq_all).intersection(top_all_decade_free_min_10))
chosen_context=contexts.sort_values('count',ascending=False).head(50_000)
chosen_context.context["of"]
top_all_decade_free_min_10[:100]
# Four extra columns are added, which are:
#
# $cf$ : Collection frequency, which is the log of the sum of the term's counts across decades, i.e. log(sum(term)).
#
# $presence$ : Number of decades a term is present in.
#
# $pattern$ : A binary representation of a word. 1 if the $word$ exists in a decade, 0 otherwise.
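# A sketch of how these values could be derived for a single term's per-decade counts (using the natural log, which is an assumption; the counts below are illustrative):

```python
import math

def term_stats(decade_counts):
    """Compute cf, presence and binary pattern from a list of per-decade counts."""
    cf = math.log(sum(decade_counts))                              # collection frequency
    presence = sum(1 for c in decade_counts if c > 0)              # decades with the term
    pattern = ''.join('1' if c > 0 else '0' for c in decade_counts)  # binary presence pattern
    return cf, presence, pattern

cf, presence, pattern = term_stats([0, 5, 12, 0, 3])
```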
pkl.dump(top_all_decade_free_min_10,open( "contexts.pkl", "wb" ) )
| src/Notebooks/ContextWordFilter.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Modeling spectral bands with `climlab`
# Here is a brief introduction to the `climlab.BandRCModel` process.
#
# This is a model that divides the spectrum into 7 distinct bands: three shortwave and four longwave.
#
# As we will see, the process works much like the familiar `climlab.RadiativeConvectiveModel`.
#
# ## About the spectra
# The shortwave is divided into three channels:
#
# - Channel 0 is the Hartley and Huggins band (extreme UV, 200 - 340 nm, 1% of total flux, strong ozone absorption)
# - Channel 1 is the Chappuis band (450 - 800 nm, 27% of total flux, moderate ozone absorption)
# - Channel 2 is remaining radiation (72% of total flux, largely in the visible range, no ozone absorption)
#
# The longwave is divided into four bands:
#
# - Band 0 is the window region (between 8.5 and 11 $\mu$m), 17% of total flux.
# - Band 1 is the CO2 absorption channel (the band of strong absorption by CO2 around 15 $\mu$m), 15% of total flux
# - Band 2 is a weak water vapor absorption channel, 35% of total flux
# - Band 3 is a strong water vapor absorption channel, 33% of total flux
#
# The longwave decomposition is not as easily related to specific wavelengths, as in reality there is a lot of overlap between H$_2$O and CO$_2$ absorption features (as well as absorption by other greenhouse gases such as CH$_4$ and N$_2$O that we are not representing).
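# As a quick arithmetic check, the fractions quoted above should partition the
# total flux within the shortwave and within the longwave (band labels here are
# shorthand, not model identifiers):

```python
# the three shortwave channels and the four longwave bands, as fractions of total flux
shortwave = {"Hartley-Huggins UV": 0.01, "Chappuis": 0.27, "remaining": 0.72}
longwave = {"window": 0.17, "CO2 15um": 0.15, "weak H2O": 0.35, "strong H2O": 0.33}

sw_total = sum(shortwave.values())  # should be 1.0
lw_total = sum(longwave.values())   # should be 1.0
```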
# ### Example usage of the spectral model
# %matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import climlab
from climlab import constants as const
# First try a model with all default parameters. Usage is very similar to the familiar `RadiativeConvectiveModel`.
col1 = climlab.BandRCModel()
print col1
# Check out the list of subprocesses.
#
# We now have a process called `H2O`, in addition to things we've seen before.
#
# This model keeps track of water vapor. We see the specific humidity in the list of state variables:
col1.state
# The water vapor field is initialized to zero. The `H2O` process will set the specific humidity field at every timestep to a specified profile. More on that below. For now, let's compute a radiative equilibrium state.
col1.integrate_years(2)
# Check for energy balance
col1.ASR - col1.OLR
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot( col1.Tatm, col1.lev, 'c-', label='default' )
ax.plot( col1.Ts, climlab.constants.ps, 'co', markersize=16 )
ax.invert_yaxis()
ax.set_xlabel('Temperature (K)', fontsize=16)
ax.set_ylabel('Pressure (hPa)', fontsize=16 )
ax.set_title('Temperature profiles', fontsize = 18)
ax.grid()
# By default this model has convective adjustment. We can set the adjusted lapse rate by passing a parameter when we create the model.
#
# The model currently has no ozone (so there is no stratosphere). Not very realistic!
#
# The convective adjustment gives a more reasonable-looking troposphere, but there is still no stratosphere.
# ### About the radiatively active gases
# The Band model is aware of three different absorbing gases: O3 (ozone), CO2, and H2O (water vapor). The abundances of these gases are stored in a dictionary of arrays as follows:
col1.absorber_vmr
# Ozone and CO2 are both specified in the model. The default, as you see above, is zero ozone, and constant (well-mixed) CO2 at a volume mixing ratio of 3.8E-4 or 380 ppm.
#
# Water vapor is handled differently: it is determined by the model at each timestep. We make the following assumptions, following a classic paper on radiative-convective equilibrium by Manabe and Wetherald (J. Atmos. Sci. 1967):
#
# - the relative humidity just above the surface is fixed at 77% (this can of course be changed; see the parameter `col1.relative_humidity`)
# - relative humidity drops off linearly with pressure
# - there is a small specified amount of water vapor in the stratosphere.
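# A minimal sketch of the classic Manabe–Wetherald relative-humidity profile
# implied by these assumptions. The exact form used by climlab's `H2O` process
# may differ in detail; `rh_surface` here stands in for `col1.relative_humidity`:

```python
import numpy as np

def manabe_wetherald_rh(p, ps=1000.0, rh_surface=0.77):
    # relative humidity decreases linearly with pressure from rh_surface near
    # the ground, clipped at zero aloft (after Manabe & Wetherald, 1967)
    rh = rh_surface * (p / ps - 0.02) / (1.0 - 0.02)
    return np.maximum(rh, 0.0)

p = np.array([1000.0, 500.0, 100.0])  # pressure levels in hPa
rh = manabe_wetherald_rh(p)           # 0.77 at the surface, falling with height
```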
# ## Putting in some ozone
# We need to provide some ozone data to the model in order to simulate a stratosphere. As we did with the original column model, we will use the ozone climatology data provided with the CESM model.
#
# See here for more information, including some plots of the ozone data:
# <http://www.atmos.albany.edu/facstaff/brose/classes/ENV480_Spring2014/styled-5/code-3/index.html>
# +
import netCDF4 as nc
datapath = "http://ramadda.atmos.albany.edu:8080/repository/opendap/latest/Top/Users/Brian+Rose/CESM+runs/"
endstr = "/entry.das"
topo = nc.Dataset( datapath + 'som_input/USGS-gtopo30_1.9x2.5_remap_c050602.nc' + endstr )
ozone = nc.Dataset( datapath + 'som_input/ozone_1.9x2.5_L26_2000clim_c091112.nc' + endstr )
# -
# Dimensions of the ozone file
lat = ozone.variables['lat'][:]
lon = ozone.variables['lon'][:]
lev = ozone.variables['lev'][:]
# Taking annual, zonal, and global averages of the ozone data
O3_zon = np.mean( ozone.variables['O3'],axis=(0,3) )
O3_global = np.sum( O3_zon * np.cos(np.deg2rad(lat)), axis=1 ) / sum( np.cos(np.deg2rad(lat) ) )
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot( O3_global*1E6, lev)
ax.invert_yaxis()
# We are going to create another instance of the model, this time using the same vertical coordinates as the ozone data.
# Create the column with appropriate vertical coordinate, surface albedo and convective adjustment
col2 = climlab.BandRCModel(lev=lev)
print col2
# +
# Set the ozone mixing ratio
# IMPORTANT: we need to flip the ozone array around because the vertical coordinate runs the wrong way
# (first element is top of atmosphere, whereas our model expects the first element to be just above the surface)
#col2.absorber_vmr['O3'] = np.flipud(O3_global)
# Not anymore!
col2.absorber_vmr['O3'] = O3_global
# -
# Run the model out to equilibrium!
col2.integrate_years(2.)
fig = plt.figure()
ax = fig.add_subplot(111)
ax.plot( col1.Tatm, np.log(col1.lev/1000), 'c-', label='RCE' )
ax.plot( col1.Ts, 0, 'co', markersize=16 )
ax.plot(col2.Tatm, np.log(col2.lev/1000), 'r-', label='RCE O3' )
ax.plot(col2.Ts, 0, 'ro', markersize=16 )
ax.invert_yaxis()
ax.set_xlabel('Temperature (K)', fontsize=16)
ax.set_ylabel('log(Pressure)', fontsize=16 )
ax.set_title('Temperature profiles', fontsize = 18)
ax.grid()
ax.legend()
# Once we include ozone we get a well-defined stratosphere. We can also see a slight cooling effect in the troposphere.
#
# Things to consider / try:
#
# - Here we used the global annual mean Q = 341.3 W m$^{-2}$. We might want to consider latitudinal or seasonal variations in Q.
# - We also used the global annual mean ozone profile! Ozone varies tremendously in latitude and by season. That information is all contained in the ozone data file we opened above. We might explore the effects of those variations.
# - We can calculate climate sensitivity in this model by doubling the CO2 concentration and re-running out to the new equilibrium. Does the amount of ozone affect the climate sensitivity? (example below)
# - An important shortcoming of the model: there are no clouds! (that would be the next step in the hierarchy of column models)
# - Clouds would act both in the shortwave (increasing the albedo, cooling the climate) and in the longwave (greenhouse effect, warming the climate). Which effect is stronger depends on the vertical structure of the clouds (high or low clouds) and their optical properties (e.g. thin cirrus clouds are nearly transparent to solar radiation but are good longwave absorbers).
col3 = climlab.process_like(col2)
print col3
# Let's double CO2.
col3.absorber_vmr['CO2'] *= 2.
col3.compute_diagnostics()
print 'The radiative forcing for doubling CO2 is %f W/m2.' % (col2.OLR - col3.OLR)
col3.integrate_years(3)
col3.ASR - col3.OLR
print 'The Equilibrium Climate Sensitivity is %f K.' % (col3.Ts - col2.Ts)
col4 = climlab.process_like(col1)
print col4
col4.absorber_vmr
col4.absorber_vmr['CO2'] *= 2.
col4.compute_diagnostics()
print 'The radiative forcing for doubling CO2 is %f W/m2.' % (col1.OLR - col4.OLR)
col4.integrate_years(3.)
col4.ASR - col4.OLR
print 'The Equilibrium Climate Sensitivity is %f K.' % (col4.Ts - col1.Ts)
# Interesting that the model is MORE sensitive when ozone is set to zero.
| courseware/The spectral column model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
'''Modified from joblib documentation: https://joblib.readthedocs.io/en/latest/auto_examples/memory_basic_usage.html
'''
# %load_ext autoreload
# %autoreload 2
import time
from functools import partial
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from vflow import Vset, init_args
# +
np.random.seed(13)
X, y = make_classification(n_samples=50, n_features=5)
X, y = init_args([X, y], names=['X', 'y'])
def costly_compute(data, row_index=0):
"""Simulate an expensive computation"""
time.sleep(5)
return data[row_index,]
subsampling_funcs = [partial(costly_compute, row_index=np.arange(25))]
uncached_set = Vset(name='subsampling_uncached', modules=subsampling_funcs)
cached_set = Vset(name='subsampling_cached', modules=subsampling_funcs, cache_dir='./')
# +
# %%time
# this always takes about 5 seconds
uncached_set.fit(X)
uncached_set.out
# +
# %%time
# the first time this runs it takes 5 seconds, but the next time you run the notebook it's very fast
cached_set.fit(X)
cached_set.out
# -
| notebooks/experimental/caching.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # TensorFlow Visual Recognition Sample Application Part 2
#
# ## Provide a User Interface with a PixieApp
#
# ## Define the model metadata
# + pixiedust={"displayParams": {}}
import tensorflow as tf
import requests
models = {
"mobilenet": {
"base_url":"https://github.com/DTAIEB/Thoughtful-Data-Science/raw/master/chapter%206/Visual%20Recognition/mobilenet_v1_0.50_224",
"model_file_url": "frozen_graph.pb",
"label_file": "labels.txt",
"output_layer": "MobilenetV1/Predictions/Softmax"
}
}
# helper method for reading attributes from the model metadata
def get_model_attribute(model, key, default_value = None):
if key not in model:
if default_value is None:
            raise Exception("Required model attribute {} not found".format(key))
return default_value
return model[key]
# -
# ## Helper methods for loading the graph and labels for a given model
# +
# Helper method for resolving url relative to the selected model
def get_url(model, path):
return model["base_url"] + "/" + path
# Download the serialized model and create a TensorFlow graph
def load_graph(model):
graph = tf.Graph()
graph_def = tf.GraphDef()
graph_def.ParseFromString(
requests.get( get_url( model, model["model_file_url"] ) ).content
)
with graph.as_default():
tf.import_graph_def(graph_def)
return graph
# Load the labels
def load_labels(model, as_json = False):
labels = [line.rstrip() \
for line in requests.get( get_url( model, model["label_file"] ) ).text.split("\n") \
if line != ""]
if as_json:
return [{"index": item.split(":")[0], "label" : item.split(":")[1]} for item in labels]
return labels
# -
# ## Use BeautifulSoup to scrape the images from a given url
# +
from bs4 import BeautifulSoup as BS
import re
# return an array of all the images scraped from an html page
def get_image_urls(url):
# Instantiate a BeautifulSoup parser
soup = BS(requests.get(url).text, "html.parser")
# Local helper method for extracting url
def extract_url(val):
m = re.match(r"url\((.*)\)", val)
val = m.group(1) if m is not None else val
return "http:" + val if val.startswith("//") else val
    # List comprehension that looks for <img> elements and background-image styles
return [extract_url(imgtag['src']) for imgtag in soup.find_all('img')] + [ \
extract_url(val.strip()) for key,val in \
[tuple(selector.split(":")) for elt in soup.select("[style]") \
for selector in elt["style"].strip(" ;").split(";")] \
if key.strip().lower()=='background-image' \
]
# -
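# The CSS `url(...)` unwrapping done by the `extract_url` helper above can be
# exercised on its own (the sample URLs below are made up for illustration):

```python
import re

def extract_url(val):
    # mirrors the notebook's helper: unwrap a CSS url(...) value and
    # prepend a scheme to protocol-relative links
    m = re.match(r"url\((.*)\)", val)
    val = m.group(1) if m is not None else val
    return "http:" + val if val.startswith("//") else val

extract_url("url(//example.com/pic.png)")  # 'http://example.com/pic.png'
extract_url("/assets/logo.jpg")            # '/assets/logo.jpg' (unchanged)
```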
# ## Helper method for downloading an image into a temp file
import tempfile
def download_image(url):
response = requests.get(url, stream=True)
if response.status_code == 200:
with tempfile.NamedTemporaryFile(delete=False) as f:
for chunk in response.iter_content(2048):
f.write(chunk)
return f.name
else:
raise Exception("Unable to download image: {}".format(response.status_code))
# ## Decode an image into a tensor
# decode a given image into a tensor
def read_tensor_from_image_file(model, file_name):
file_reader = tf.read_file(file_name, "file_reader")
if file_name.endswith(".png"):
image_reader = tf.image.decode_png(file_reader, channels = 3,name='png_reader')
elif file_name.endswith(".gif"):
image_reader = tf.squeeze(tf.image.decode_gif(file_reader,name='gif_reader'))
elif file_name.endswith(".bmp"):
image_reader = tf.image.decode_bmp(file_reader, name='bmp_reader')
else:
image_reader = tf.image.decode_jpeg(file_reader, channels = 3, name='jpeg_reader')
float_caster = tf.cast(image_reader, tf.float32)
dims_expander = tf.expand_dims(float_caster, 0);
# Read some info from the model metadata, providing default values
input_height = get_model_attribute(model, "input_height", 224)
input_width = get_model_attribute(model, "input_width", 224)
input_mean = get_model_attribute(model, "input_mean", 0)
input_std = get_model_attribute(model, "input_std", 255)
resized = tf.image.resize_bilinear(dims_expander, [input_height, input_width])
normalized = tf.divide(tf.subtract(resized, [input_mean]), [input_std])
sess = tf.Session()
result = sess.run(normalized)
return result
# ## score_image method that runs the model and returns the top 5 candidate answers
# +
import numpy as np
# classify an image given its url
def score_image(graph, model, url):
# Get the input and output layer from the model
input_layer = get_model_attribute(model, "input_layer", "input")
output_layer = get_model_attribute(model, "output_layer")
# Download the image and build a tensor from its data
t = read_tensor_from_image_file(model, download_image(url))
# Retrieve the tensors corresponding to the input and output layers
input_tensor = graph.get_tensor_by_name("import/" + input_layer + ":0");
output_tensor = graph.get_tensor_by_name("import/" + output_layer + ":0");
with tf.Session(graph=graph) as sess:
# Execute the output, overriding the input tensor with the one corresponding
# to the image in the feed_dict argument
results = sess.run(output_tensor, {input_tensor: t})
results = np.squeeze(results)
    # select the top 5 candidates and match them to the labels
top_k = results.argsort()[-5:][::-1]
labels = load_labels(model)
return [(labels[i].split(":")[1], results[i]) for i in top_k]
# -
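# The top-5 selection in `score_image` relies on `argsort`: sort ascending,
# take the last five indices, then reverse so the best candidate comes first.
# A standalone illustration with made-up probabilities:

```python
import numpy as np

# fake softmax output over six labels
probs = np.array([0.05, 0.6, 0.1, 0.2, 0.02, 0.03])

# argsort gives ascending order; [-5:] keeps the five largest,
# [::-1] reverses so the highest-probability index is first
top_k = probs.argsort()[-5:][::-1]
```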
# ## PixieApp with the following screens:
# 1. Ask the user for a url to a web page
# 2. Display the images with top 5 candidate classifications
# + pixiedust={"displayParams": {}}
from pixiedust.display.app import *
@PixieApp
class ScoreImageApp():
def setup(self):
self.model = models["mobilenet"]
self.graph = load_graph( self.model )
@route()
def main_screen(self):
return """
<style>
div.outer-wrapper {
display: table;width:100%;height:300px;
}
div.inner-wrapper {
display: table-cell;vertical-align: middle;height: 100%;width: 100%;
}
</style>
<div class="outer-wrapper">
<div class="inner-wrapper">
<div class="col-sm-3"></div>
<div class="input-group col-sm-6">
<input id="url{{prefix}}" type="text" class="form-control"
value="https://www.flickr.com/search/?text=cats"
placeholder="Enter a url that contains images">
<span class="input-group-btn">
<button class="btn btn-default" type="button" pd_options="image_url=$val(url{{prefix}})">Go</button>
</span>
</div>
</div>
</div>
"""
@route(image_url="*")
@templateArgs
def do_process_url(self, image_url):
image_urls = get_image_urls(image_url)
return """
<div>
{%for url in image_urls%}
<div style="float: left; font-size: 9pt; text-align: center; width: 30%; margin-right: 1%; margin-bottom: 0.5em;">
<img src="{{url}}" style="width: 100%">
<div style="display:inline-block" pd_render_onload pd_options="score_url={{url}}"></div>
</div>
{%endfor%}
<p style="clear: both;">
</div>
"""
@route(score_url="*")
@templateArgs
def do_score_url(self, score_url):
results = score_image(self.graph, self.model, score_url)
return """
<ul style="text-align:left">
{%for label, confidence in results%}
<li><b>{{label}}</b>: {{confidence}}</li>
{%endfor%}
</ul>
"""
app = ScoreImageApp()
app.run()
# -
| chapter 6/Tensorflow VR Part 2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from qiskit import *
from numpy.random import randint, shuffle
from qiskit.visualization import plot_histogram, plot_bloch_multivector
import numpy as np
def bit_string(n):
return randint(0, 2, n)
def encode_bits(bits, bases):
l = len(bits)
base_circuit = QuantumCircuit(l, l)
for i in range(l):
if bases[i] == 0:
if bits[i] == 1:
base_circuit.x(i)
if bases[i] == 1:
if bits[i] == 0:
base_circuit.h(i)
if bits[i] == 1:
base_circuit.x(i)
base_circuit.h(i)
base_circuit.barrier()
return base_circuit
def measure_bits(circuit, bases):
backend = Aer.get_backend('qasm_simulator')
for j in range(len(bases)):
if bases[j] == 0:
circuit.measure(j,j)
if bases[j] == 1:
circuit.h(j)
circuit.measure(j,j)
r = execute(circuit, backend, shots=1, memory = True).result().get_counts()
return circuit, [int(ch) for ch in list(r.keys())[0]][::-1]
def agreed_bases(a, b):
return [j for j in range(len(a)) if a[j] == b[j]]
def select_bits(bits, selection, choice):
return [bits[i] for i in range(len(selection)) if selection[i] == choice]
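# A toy run of the sifting step built from `agreed_bases`. The basis and bit
# values below are illustrative, and `sift` is a simplified stand-in that
# combines `agreed_bases` with the selection done by `select_bits`:

```python
def agreed_bases(a, b):
    # indices where both parties happened to pick the same measurement basis
    return [j for j in range(len(a)) if a[j] == b[j]]

def sift(bits, a, b):
    # keep only the bits prepared and measured in matching bases
    return [bits[j] for j in agreed_bases(a, b)]

alice_bits  = [1, 0, 1, 1, 0]
alice_bases = [0, 1, 1, 0, 1]
bob_bases   = [0, 0, 1, 1, 1]
sifted = sift(alice_bits, alice_bases, bob_bases)  # [1, 1, 0]
```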
def error_rate(atest, btest):
W = len([j for j in range(len(atest)) if atest[j] != btest[j]])
return W / len(atest)
def information_reconciliation(a, b):
    # placeholder: a real implementation (e.g. the CASCADE protocol) would
    # correct the mismatched bits between the two sifted keys
    return a, b
def toeplitz(n, k, bits, seed):
matrix = np.zeros((k, n), dtype = int)
for i in range(k) :
for j in range(n) :
matrix[i,j] = seed[i - j + n - 1]
key = np.matmul(matrix, np.transpose((np.array(bits))))
return [bit%2 for bit in key]
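# Privacy amplification with the same Toeplitz construction as the `toeplitz`
# function above: a seed of n + k - 1 random bits defines a k x n binary matrix
# that compresses n reconciled bits into a shorter final key. All bit values
# below are illustrative:

```python
import numpy as np

def toeplitz_key(n, k, bits, seed):
    # mirrors the notebook's toeplitz(): entry (i, j) takes seed[i - j + n - 1],
    # so every descending diagonal is constant (a Toeplitz matrix)
    matrix = np.zeros((k, n), dtype=int)
    for i in range(k):
        for j in range(n):
            matrix[i, j] = seed[i - j + n - 1]
    key = np.matmul(matrix, np.array(bits))
    return [b % 2 for b in key]  # matrix product mod 2

raw = [1, 0, 1, 1, 0, 0, 1, 0]              # n = 8 reconciled bits
seed = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]    # n + k - 1 = 11 seed bits
final_key = toeplitz_key(8, 4, raw, seed)   # compressed to k = 4 bits
```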
| QCHackChallenger/Functions/BB84_functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
import numpy as np
from keras.models import Model
from keras.layers import Input
from keras.layers.convolutional import Convolution1D
from keras import backend as K
def format_decimal(arr, places=6):
return [round(x * 10**places) / 10**places for x in arr]
# ### Convolution1D
# ### _legacy weights_
# **[convolutional.Convolution1D.0.legacy] 4 length 3 filters on 5x2 input, activation='linear', border_mode='valid', subsample_length=1, bias=True**
# +
data_in_shape = (5, 2)
conv = Convolution1D(4, 3, activation='linear', border_mode='valid', subsample_length=1, bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(200)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
# -
# **[convolutional.Convolution1D.1.legacy] 4 length 3 filters on 6x3 input, activation='linear', border_mode='valid', subsample_length=1, bias=False**
# +
data_in_shape = (6, 3)
conv = Convolution1D(4, 3, activation='linear', border_mode='valid', subsample_length=1, bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(201)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
# -
# **[convolutional.Convolution1D.2.legacy] 2 length 3 filters on 4x6 input, activation='sigmoid', border_mode='same', subsample_length=2, bias=True**
# +
data_in_shape = (4, 6)
conv = Convolution1D(2, 3, activation='sigmoid', border_mode='same', subsample_length=2, bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(200)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
# -
# **[convolutional.Convolution1D.3.legacy] 2 length 7 filters on 8x3 input, activation='tanh', border_mode='same', subsample_length=1, bias=True**
# +
data_in_shape = (8, 3)
conv = Convolution1D(2, 7, activation='tanh', border_mode='same', subsample_length=1, bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(204)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
# -
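# The output lengths in these fixtures follow the standard 1-D convolution
# formulas for the two border modes; a sketch (the helper name is illustrative):

```python
import math

def conv1d_out_len(n, filter_len, stride, border_mode):
    # 'valid' keeps only positions where the filter fully overlaps the input;
    # 'same' zero-pads so the output covers every stride step of the input
    if border_mode == "valid":
        return (n - filter_len) // stride + 1
    return math.ceil(n / stride)

valid_5 = conv1d_out_len(5, 3, 1, "valid")  # 3, as in the 5x2 fixtures above
same_4 = conv1d_out_len(4, 3, 2, "same")    # 2, as in the strided 4x6 fixtures
same_8 = conv1d_out_len(8, 7, 1, "same")    # 8, as in the 8x3 fixtures
```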
# ### _normal weights_
# **[convolutional.Convolution1D.0] 4 length 3 filters on 5x2 input, activation='linear', border_mode='valid', subsample_length=1, bias=True**
# +
data_in_shape = (5, 2)
conv = Convolution1D(4, 3, activation='linear', border_mode='valid', subsample_length=1, bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(200)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
# -
# **[convolutional.Convolution1D.1] 4 length 3 filters on 6x3 input, activation='linear', border_mode='valid', subsample_length=1, bias=False**
# +
data_in_shape = (6, 3)
conv = Convolution1D(4, 3, activation='linear', border_mode='valid', subsample_length=1, bias=False)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(201)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
# -
# **[convolutional.Convolution1D.2] 2 length 3 filters on 4x6 input, activation='sigmoid', border_mode='same', subsample_length=2, bias=True**
# +
data_in_shape = (4, 6)
conv = Convolution1D(2, 3, activation='sigmoid', border_mode='same', subsample_length=2, bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(200)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
# -
# **[convolutional.Convolution1D.3] 2 length 7 filters on 8x3 input, activation='tanh', border_mode='same', subsample_length=1, bias=True**
# +
data_in_shape = (8, 3)
conv = Convolution1D(2, 7, activation='tanh', border_mode='same', subsample_length=1, bias=True)
layer_0 = Input(shape=data_in_shape)
layer_1 = conv(layer_0)
model = Model(input=layer_0, output=layer_1)
# set weights to random (use seed for reproducibility)
weights = []
for w in model.get_weights():
np.random.seed(204)
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
print('W shape:', weights[0].shape)
print('W:', format_decimal(weights[0].ravel().tolist()))
print('b shape:', weights[1].shape)
print('b:', format_decimal(weights[1].ravel().tolist()))
data_in = 2 * np.random.random(data_in_shape) - 1
print('')
print('in shape:', data_in_shape)
print('in:', format_decimal(data_in.ravel().tolist()))
result = model.predict(np.array([data_in]))
print('out shape:', result[0].shape)
print('out:', format_decimal(result[0].ravel().tolist()))
# -
| notebooks/layers/convolutional/Convolution1D.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Label_Stats.ipynb
#
# Compute summary statistics about hand-labeled data
# +
import os
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# -
# ## Input file locations
# +
# TODO: fix spelling error "auditted" -> "audited"
data_dir = os.path.join("..", "inter_annotator_agreement", "human_labels_auditted")
# Files with labels in the gold standard
_CONLL_2_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_2_in_gold.csv")
_CONLL_3_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_3_in_gold.csv")
_CONLL_3_TRAIN_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_3_train_in_gold.csv")
_CONLL_4_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_4_in_gold.csv")
_CONLL_4_TRAIN_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_4_train_in_gold.csv")
# Files with labels not in the gold standard
_CONLL_2_NOT_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_2_not_in_gold.csv")
_CONLL_3_NOT_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_3_not_in_gold.csv")
_CONLL_3_TRAIN_NOT_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_3_train_not_in_gold.csv")
_CONLL_4_NOT_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_4_not_in_gold.csv")
_CONLL_4_TRAIN_NOT_IN_GOLD_FILE = os.path.join(data_dir, "CoNLL_4_train_not_in_gold.csv")
# Optionally output figures generated
save_figures = True
figure_dir = os.path.join(data_dir, "label_stats_images")
if save_figures and not os.path.exists(figure_dir):
os.mkdir(figure_dir)
# -
# ## Read labels into dataframes
# +
# Constants that govern reading CSV files
Excel_encoding = "Windows-1252" # Excel's unchangeable default CSV encoding
# Column types in a file of examples not in the gold standard
_NOT_IN_GOLD_DTYPES = {
"num_teams": "Int64",
"num_models": "Int64",
"fold" : "string",
"doc_offset" : "Int64",
"model_span": "string",
"error_type": "string",
"corpus_span": "string",
"corpus_ent_type": "string",
"correct_span": "string",
}
_NOT_IN_GOLD_DROP_COLS = ["time_started", "time_stopped", "time_elapsed"]
_IN_GOLD_DTYPES = {
"num_teams": "Int64",
"num_models": "Int64",
"fold" : "string",
"doc_offset" : "Int64",
"corpus_span": "string",
"error_type": "string",
"corpus_ent_type": "string",
"correct_span": "string",
}
_IN_GOLD_DROP_COLS = ["time_started", "time_stopped", "time_elapsed"]
utf_8_encoding = "utf-8"
def read_star_gold_df(file_name, dtypes, drop_cols, encoding):
result = (
pd
.read_csv(file_name, dtype=dtypes, encoding=encoding)
.drop(columns=drop_cols)
)
    # homogenize "num_teams" with "num_models" to allow combining the two datasets
if "num_teams" in result.columns:
result = result.rename(columns = {"num_teams" : "num_models"})
if "verified" in result.columns:
result = result.drop(columns = "verified")
return result[~result["error_type"].isna()].copy()
def read_not_in_gold_df(file_name, encoding):
result = read_star_gold_df(file_name, _NOT_IN_GOLD_DTYPES, _NOT_IN_GOLD_DROP_COLS, encoding )
result["subset"] = "not_in_gold"
return result
def read_in_gold_df(file_name, encoding):
    result = read_star_gold_df(file_name, _IN_GOLD_DTYPES, _IN_GOLD_DROP_COLS, encoding)  # a "model_span" column could be derived here, but it is not used downstream
result["subset"] = "in_gold"
return result
# +
conll_2_not_in_gold_df = read_not_in_gold_df(_CONLL_2_NOT_IN_GOLD_FILE, encoding=utf_8_encoding)
conll_3_not_in_gold_df = pd.concat([read_not_in_gold_df(_CONLL_3_NOT_IN_GOLD_FILE, encoding=Excel_encoding),
read_not_in_gold_df(_CONLL_3_TRAIN_NOT_IN_GOLD_FILE, encoding=Excel_encoding)])
conll_4_not_in_gold_df = pd.concat([read_not_in_gold_df(_CONLL_4_NOT_IN_GOLD_FILE, encoding=Excel_encoding),
read_not_in_gold_df(_CONLL_4_TRAIN_NOT_IN_GOLD_FILE, encoding=Excel_encoding)])
conll_2_in_gold_df = read_in_gold_df(_CONLL_2_IN_GOLD_FILE, encoding=utf_8_encoding)
conll_3_in_gold_df = pd.concat([read_in_gold_df(_CONLL_3_IN_GOLD_FILE, encoding=Excel_encoding),
read_in_gold_df(_CONLL_3_TRAIN_IN_GOLD_FILE, encoding=Excel_encoding)])
conll_4_in_gold_df = pd.concat([read_in_gold_df(_CONLL_4_IN_GOLD_FILE, encoding=Excel_encoding),
read_in_gold_df(_CONLL_4_TRAIN_IN_GOLD_FILE, encoding=Excel_encoding)])
# tag with sources :
conll_2_not_in_gold_df['conll_2'] = True
conll_3_not_in_gold_df["conll_3"] = True
conll_4_not_in_gold_df["conll_4"] = True
conll_2_in_gold_df["conll_2"] = True
conll_3_in_gold_df["conll_3"] = True
conll_4_in_gold_df["conll_4"] = True
# -
conll_4_not_in_gold_df
# +
print("CoNLL 2 Not in gold error types", conll_2_not_in_gold_df["error_type"].unique())
print("\nCoNLL 3 Not in gold error types", conll_3_not_in_gold_df["error_type"].unique())
print("\nCoNLL 2 In gold error types", conll_2_in_gold_df["error_type"].unique())
print("\nCoNLL 3 In gold error types", conll_3_in_gold_df["error_type"].unique())
# -
# ## Fill in blank values
#
# Many fields are left blank during manual labeling because they
# can be inferred from the remaining fields.
# +
def infer_corpus_span_not_in_gold(row):
if not pd.isna(row["corpus_span"]):
# Don't override values already present
return row["corpus_span"]
elif row["error_type"] in ["None", "Sentence", "Token", "Wrong", "Missing", "Ambiguous"]:
# Don't attempt inference for these error types.
return row["corpus_span"]
elif pd.isna(row["num_models"]):
# Not from a team's results
return row["corpus_span"]
elif row["error_type"] == "Tag":
return row["model_span"]
else:
raise ValueError(f"Can't infer corpus_span for row \n{row}")
def infer_correct_span_not_in_gold(row):
if not pd.isna(row["correct_span"]):
# Don't override values already present
return row["correct_span"]
elif row["error_type"] in ["None", "Wrong", "Ambiguous"]:
# Don't attempt inference for these error types.
return row["correct_span"]
elif row["error_type"] == "Tag":
        # In this case the original span was correct; only the tag needs changing.
return row["corpus_span"]
elif pd.isna(row["num_models"]):
# Not from a team's results
return row["correct_span"]
elif row["error_type"] in ["Span", "Sentence", "Token", "Both", "Missing"]:
return row["model_span"]
else:
raise ValueError(f"Can't infer correct_span for row:\n{row}")
def infer_correct_ent_type_not_in_gold(row):
if not pd.isna(row["correct_ent_type"]):
# Don't override values already present
return row["correct_ent_type"]
elif row["error_type"] in ["None", "Sentence", "Token", "Wrong", "Span", "Ambiguous"]:
# Don't attempt inference for these error types.
return row["correct_ent_type"]
elif pd.isna(row["num_models"]):
# Not from a team's results
return row["correct_ent_type"]
elif row["error_type"] in ["Tag", "Both", "Missing"]:
return row["model_ent_type"]
else:
raise ValueError(f"Can't infer correct_ent_type for row:\n{row}")
def infer_blanks_not_in_gold(df):
ret = df.copy()
ret["corpus_span"] = [infer_corpus_span_not_in_gold(r) for _, r in ret.iterrows()]
ret["correct_span"] = [infer_correct_span_not_in_gold(r) for _, r in ret.iterrows()]
ret["correct_ent_type"] = [infer_correct_ent_type_not_in_gold(r) for _, r in ret.iterrows()]
return ret
# Now for the in_gold inferences. We can't infer quite as much here;
# revisit this once there is actual data to cross-reference against.
def infer_correct_span_in_gold(row):
if not pd.isna(row["correct_span"]):
        # Don't override values already present
return row["correct_span"]
elif row["error_type"] in ['Sentence', 'Token', 'Span', 'Wrong', 'Both']:
        # Don't attempt inference for these error types.
return row["correct_span"]
elif row["error_type"] in [ "Tag", "None" ]:
return row["corpus_span"]
elif pd.isna(row["num_models"]):
# Not from a team's results
return row["correct_span"]
else:
raise ValueError(f"Can't infer correct_span for row:\n{row}")
def infer_correct_ent_type_in_gold(row):
if not pd.isna(row["correct_ent_type"]):
# Don't override values already present
return row["correct_ent_type"]
elif pd.isna(row["num_models"]):
        # Not from a team's results; can't infer anything here.
return row["correct_ent_type"]
elif row["error_type"] in ['Tag', 'Sentence', 'Token', 'Wrong', 'Both']:
        # Don't attempt inference for these error types.
return row["correct_ent_type"]
elif row["error_type"] in ["None", "Span"]:
return row["corpus_ent_type"]
else:
        raise ValueError(f"Can't infer correct_ent_type for row:\n{row}")
def infer_blanks_in_gold(df):
ret = df.copy()
ret["correct_span"] = [infer_correct_span_in_gold(r) for _, r in ret.iterrows()]
ret["correct_ent_type"] = [infer_correct_ent_type_in_gold(r) for _, r in ret.iterrows()]
return ret
conll_2_not_in_gold_full = infer_blanks_not_in_gold(conll_2_not_in_gold_df)
conll_3_not_in_gold_full = infer_blanks_not_in_gold(conll_3_not_in_gold_df)
conll_4_not_in_gold_full = infer_blanks_not_in_gold(conll_4_not_in_gold_df)
conll_2_in_gold_full = infer_blanks_in_gold(conll_2_in_gold_df)
conll_3_in_gold_full = infer_blanks_in_gold(conll_3_in_gold_df)
# +
# Check for data that cannot be used due to mis-entered fields
def find_bad_data(df):
ret = df[df["fold"].isna() | df["doc_offset"].isna()]
return ret
bad_NG_conll_2 = find_bad_data(conll_2_not_in_gold_full)
bad_NG_conll_3 = find_bad_data(conll_3_not_in_gold_full)
bad_IG_conll_2 = find_bad_data(conll_2_in_gold_full)
bad_IG_conll_3 = find_bad_data(conll_3_in_gold_full)
bad_total = pd.concat([bad_NG_conll_2, bad_NG_conll_3, bad_IG_conll_2, bad_IG_conll_3])
display(bad_total)
# if len(bad_total.index) > 0:
#     raise ValueError("Found data with no fold or document numbers")
# -
# ## Eliminate duplicates
#
# Use the values we just imputed to identify instances of the same span being
# fixed twice.
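# The workhorse below is `DataFrame.duplicated`; with `keep='last'` it marks all but the last occurrence of each key combination, so after sorting, the preferred entry survives. A toy sketch:

```python
import pandas as pd

toy = pd.DataFrame({
    "doc_offset": [1, 1, 2],
    "corpus_span": ["New York", "New York", "Paris"],
    "correct_span": ["New York City", "New York City", "Paris"],
})
# keep='last' flags earlier repeats; the last copy of each key survives.
dupes = toy.duplicated(["doc_offset", "corpus_span", "correct_span"], keep="last")
deduped = toy[~dupes]
print(dupes.tolist())  # [True, False, False]
print(len(deduped))    # 2
```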
# +
def combine_eliminate_duplicate_entries(df_in_a, df_in_b, not_in_gold=False, cross_subset=False, remove_duped_outputs=True):
# Combines two datasets, labels and removes duplicates
# not_in_gold - should be true if both data sets are from the not_in_gold subset, false otherwise
# cross_subset - should be true if the data sets are from two different subsets (i.e. one is in gold and the other is not in gold)
# this is used for the correct label of duplicates.
# side effect: rearranges order of data
    df_in = pd.concat([df_in_a, df_in_b], ignore_index=True)
#sort so that when duplicates are removed, the entry with the most "hits" is preferred
df_in = df_in.sort_values(by=["num_models_missing"], na_position='last')
#label duplicates
    df_temp = df_in.fillna({"num_models": np.nan, "corpus_span": "NA", "corpus_ent_type": "NA",
                            "correct_span": "NA"})
df_duplicated = df_temp.duplicated(["fold", "doc_offset", "corpus_span",
"corpus_ent_type","correct_span"], keep = 'last')
if cross_subset:
df_in.loc[df_duplicated, 'subset'] = "both"
else:
df_in.loc[df_duplicated, 'dataset'] = "both"
df = df_in
    # Now use df_temp to select which elements to throw out
to_remove = df_temp.duplicated(["fold", "doc_offset", "error_type", "corpus_span",
"corpus_ent_type", "correct_span"], keep = 'first')
df_temp = df_temp[~to_remove]
df_double_counted = df_temp.duplicated(["fold", "doc_offset", "correct_ent_type","correct_span"]) & \
df_temp['error_type'].isin(['Sentence', 'Token', 'Span', 'Both', 'Missing'])
ret = (df[(~to_remove)]).copy()
if remove_duped_outputs:
ret = (ret[~df_double_counted]).copy()
#reorder to improve ease of reading
ret.sort_values(by=["fold", "doc_offset","correct_span"], ignore_index = True, inplace=True)
    if not cross_subset:
ret.loc[ret['dataset'] == 'both'] = fix_subset_entries(ret.loc[ret['dataset'] == 'both'])
ret.fillna({'conll_2': False, 'conll_3': False, 'conll_4': False}, inplace=True)
return ret
def in_df(row, df):
    fields = ['fold', 'doc_offset', 'correct_span', 'correct_ent_type', 'error_type', 'corpus_span', 'corpus_ent_type']
    df_temp = df
    for field in fields:
        df_temp = df_temp[(df_temp[field].isna() & row.isna()[field]) | (df_temp[field] == row[field])]
if df_temp.count().max() == 0: return False
return True
def fix_subset_entries(df_in):
df = df_in.copy()
for i, row in df.iterrows():
df.at[i, "conll_3"] = in_df(row,all_labels_conll_3_not_in_gold) | in_df(row,all_labels_conll_3_in_gold)
df.at[i, "conll_2"] = in_df(row,all_labels_conll_2_not_in_gold) | in_df(row,all_labels_conll_2_in_gold)
df.at[i, "conll_4"] = in_df(row,all_labels_conll_4_not_in_gold)
return df
def clean_data(df_in):
    # Removes data that cannot currently be used (location not available, etc.)
    df = df_in.copy()
    ret = df[~df["error_type"].isna()]
    ret = ret[~ret["fold"].isna()]
    ret = ret[~ret["doc_offset"].isna()]
    return ret
#clean data
all_labels_conll_2_not_in_gold = clean_data(conll_2_not_in_gold_full)
all_labels_conll_3_not_in_gold = clean_data(conll_3_not_in_gold_full)
all_labels_conll_4_not_in_gold = clean_data(conll_4_not_in_gold_full)
all_labels_conll_2_in_gold = clean_data(conll_2_in_gold_full)
all_labels_conll_3_in_gold = clean_data(conll_3_in_gold_full)
# Move data to "number of models missing". This allows for more apples-to-apples comparisons.
all_labels_conll_2_not_in_gold["num_models_missing"] = (all_labels_conll_2_not_in_gold.num_models - 16) * -1 # number of models used
all_labels_conll_3_not_in_gold["num_models_missing"] = (all_labels_conll_3_not_in_gold.num_models - 17) * -1 # number of models used
all_labels_conll_4_not_in_gold["num_models_missing"] = (all_labels_conll_4_not_in_gold.num_models - 17) * -1 # number of models used
all_labels_conll_2_in_gold ["num_models_missing"] = all_labels_conll_2_in_gold.num_models
all_labels_conll_3_in_gold ["num_models_missing"] = all_labels_conll_3_in_gold.num_models
all_labels_conll_2_not_in_gold["agreeing_models"] = all_labels_conll_2_not_in_gold.num_models
all_labels_conll_3_not_in_gold["agreeing_models"] = all_labels_conll_3_not_in_gold.num_models
all_labels_conll_4_not_in_gold["agreeing_models"] = all_labels_conll_4_not_in_gold.num_models
all_labels_conll_2_in_gold ["agreeing_models"] = 16 - all_labels_conll_2_in_gold.num_models
all_labels_conll_3_in_gold ["agreeing_models"] = 17 - all_labels_conll_3_in_gold.num_models
#combine dataframes, and remove duplicates
# Note: due to the semantics involved, duplicate entries defer to the first of the two sets entered.
# merge like subsets first
all_labels_not_in_gold = combine_eliminate_duplicate_entries(all_labels_conll_2_not_in_gold, all_labels_conll_3_not_in_gold, not_in_gold=True)
all_labels_not_in_gold = combine_eliminate_duplicate_entries(all_labels_not_in_gold, all_labels_conll_4_not_in_gold, not_in_gold=True)
all_labels_in_gold = combine_eliminate_duplicate_entries(all_labels_conll_2_in_gold, all_labels_conll_3_in_gold)
# then merge between subsets
all_labels = combine_eliminate_duplicate_entries(all_labels_not_in_gold, all_labels_in_gold, cross_subset=True)
#display all_labels
all_labels.head(14)
# -
all_labels.loc[:,"hand_labelled"] = all_labels["num_models"].isna()
all_labels.loc[all_labels["hand_labelled"], 'conll_2'] = False
all_labels.loc[all_labels["hand_labelled"], 'conll_3'] = False
all_labels.loc[all_labels["hand_labelled"], 'conll_4'] = False
all_labels.head(1)
labels_from_models = all_labels[~all_labels['hand_labelled']]
print(
f"""
Total number of labels flagged: {len(all_labels.index)}
Total number of labels flagged by models: {len(labels_from_models.index)}
Total number of labels flagged by humans: {all_labels['hand_labelled'].sum()}
Total number of labels flagged by models in the test fold: {(labels_from_models['fold'] == 'test').sum()}
Total number of labels flagged by models in the dev fold: {(labels_from_models['fold'] == 'dev').sum()}
Total number of labels flagged by models in the train fold: {(labels_from_models['fold'] == 'train').sum()}
"""
)
# +
# "Manual" labels -- those found by hand in the vicinity of labels that were
# suggested by our scripts -- shouldn't overlap with any of the automatic ones.
manual_labels = all_labels[all_labels["num_models"].isna()]
# dataframe.duplicated can be unreliable when NA values exist, so we fill them in temporarily
manual_labels_filled = manual_labels.fillna({"corpus_span": "NA",
"corpus_ent_type": "NA",
"correct_span": "NA"})
manual_labels_filled.head()
# +
# First check that they don't overlap with each other.
duplicate_manual_labels = manual_labels_filled.duplicated(["fold", "doc_offset",
"error_type", "corpus_span",
"corpus_ent_type", "correct_span"])
if duplicate_manual_labels.sum() > 0:
    display(manual_labels_filled[duplicate_manual_labels])
    raise ValueError(f"Found {duplicate_manual_labels.sum()} duplicate manual labels")
else:
print("No duplicate manual labels found, good!")
# +
# Now we can check for overlap between manual and automatic labels
auto_labels = all_labels[~all_labels["num_models"].isna()]
# dataframe.duplicated can be unreliable when NA values exist, so we fill them in temporarily
auto_labels_filled = auto_labels.fillna({"corpus_span": "NA",
"corpus_ent_type": "NA",
"correct_span": "NA"})
merge_cols = ["fold", "doc_offset", "error_type", "corpus_span"]
shared_labels = (
pd.merge(auto_labels_filled, manual_labels_filled, on=merge_cols, suffixes=["_auto", "_manual"])
.sort_values(["fold", "doc_offset", "error_type"])
)
# This dataframe in its current form can have some false positives. Check manually.
pd.options.display.max_columns = None
shared_labels
# -
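# For reference, the overlap check above relies on `pd.merge` defaulting to an inner join: only key combinations present in both frames come back. A toy sketch:

```python
import pandas as pd

auto = pd.DataFrame({"doc_offset": [1, 2], "corpus_span": ["a", "b"]})
manual = pd.DataFrame({"doc_offset": [2, 3], "corpus_span": ["b", "c"]})
# Inner join (the default) keeps only rows whose keys appear in both frames.
overlap = pd.merge(auto, manual, on=["doc_offset", "corpus_span"])
print(overlap.to_dict("records"))  # [{'doc_offset': 2, 'corpus_span': 'b'}]
```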
# ## Count up how many of each type of error we found
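# The counting below follows a groupby-count-rename pattern; a minimal sketch of the idea with toy data (using `size`, a close relative of the `aggregate`/`rename` chain used in `make_counts`):

```python
import pandas as pd

toy = pd.DataFrame({
    "fold": ["dev", "dev", "test"],
    "error_type": ["Tag", "Tag", "Span"],
})
# Group on the key columns and count rows per group.
counts = toy.groupby(["fold", "error_type"]).size().reset_index(name="total")
print(counts["total"].tolist())  # [2, 1]
```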
# +
def make_counts(labels_df):
ans = (
labels_df[["num_models_missing", "fold", "error_type"]]
.groupby(["num_models_missing", "fold", "error_type"])
.aggregate({"num_models_missing": "count"})
.rename(columns={"num_models_missing": "total"})
.reset_index()
)
return ans
counts_df = make_counts(all_labels)
counts_df_conll_3 = make_counts(all_labels[all_labels["conll_3"]])
counts_df_conll_2 = make_counts(all_labels[all_labels["conll_2"] ])
counts_df_conll_4 = make_counts(all_labels[all_labels["conll_4"] ])
#legacy name for now
counts_df_both_datasets = make_counts(all_labels[all_labels["conll_2"] & all_labels["conll_3"] ])
# Look at In Gold (IG) vs Not In Gold (NG) datasets
counts_df_IG = make_counts(all_labels_in_gold)
counts_df_NG = make_counts(all_labels_not_in_gold)
counts_df_both_subsets = make_counts(all_labels[(all_labels["subset"] == "both")])
#maybe try in gold vs Not gold differentiation for conll2 vs 3?
counts_df_conll_3_NG = make_counts(all_labels_conll_3_not_in_gold)
counts_df_conll_2_NG = make_counts(all_labels_conll_2_not_in_gold)
counts_df_both_NG = make_counts(all_labels_not_in_gold[all_labels_not_in_gold["dataset"] == "both"])
counts_df_conll_3_IG = make_counts(all_labels_conll_3_in_gold)
counts_df_conll_2_IG = make_counts(all_labels_conll_2_in_gold)
counts_df_both_IG = make_counts(all_labels_in_gold[all_labels_in_gold["dataset"] == "both"])
counts_df_conll_3_both = make_counts(all_labels[(all_labels["subset"]=="both")& (all_labels["conll_3"])])
counts_df_conll_2_both = make_counts(all_labels[(all_labels["subset"]=="both")& (all_labels["conll_2"])])
counts_df_conll_both_both = make_counts(all_labels[(all_labels["subset"]=="both")& (all_labels["conll_2"] & all_labels["conll_3"])])
#also run this on a file-by-file basis for later analysis
counts_df_conll_both_both
# -
# counts_df doesn't include errors that were found by inspection but weren't
# flagged by any model. Count those separately.
non_model_errors = all_labels[all_labels["num_models"].isna()]
non_model_errors.head()
# +
def count_model_errs(counts, fold):
ans = counts[(counts["error_type"] != "None") & (counts["fold"] == fold)]["total"].sum()
return ans
def error_report(fold_name, pretty_fold_name):
model_errors = count_model_errs(counts_df, fold_name)
model_errors_conll_2 = count_model_errs(counts_df_conll_2, fold_name)
model_errors_conll_3 = count_model_errs(counts_df_conll_3, fold_name)
model_errors_conll_4 = count_model_errs(counts_df_conll_4, fold_name)
model_errors_both = count_model_errs(counts_df_both_subsets, fold_name)
fold_non_model_errors = len(non_model_errors[non_model_errors["fold"] == fold_name].index)
total_errors = model_errors + fold_non_model_errors
print(f"{pretty_fold_name} set: \n Found {total_errors} errors in total "
f"({model_errors} from models and {fold_non_model_errors} "
f"from inspecting documents)")
print(f" CoNLL 2 model found {model_errors_conll_2} errors\n"
f" CoNLL 3 model found {model_errors_conll_3} errors\n"
f" CoNLL 4 model found {model_errors_conll_4} errors\n"
)
print(f" {model_errors_both} errors were found by both models" )
error_report("train", "Train")
error_report("dev", "Development")
error_report("test", "Test")
# -
# +
# Compare incidence of In Gold vs Not In Gold over all folds
total_model_errors = counts_df[(counts_df["error_type"] != "None")]["total"].sum()
IG_total_model_errors = counts_df_IG[counts_df_IG["error_type"] != "None"]["total"].sum()
NG_total_model_errors = counts_df_NG[counts_df_NG["error_type"] != "None"]["total"].sum()
both_total_model_errors =counts_df_both_subsets[counts_df_both_subsets["error_type"] != "None"]["total"].sum()
print(f"Total errors found:\n Models found {total_model_errors} errors. ({len(non_model_errors.index)} additional errors found by inspecting documents)")
print(f" {IG_total_model_errors} errors were found from the In Gold subset \n {NG_total_model_errors} errors were found from the Not In Gold subset")
print(f" {both_total_model_errors} errors were found from both subsets")
print("\n\nBreakdown by fold:")
def error_breakdown_report(fold_name, pretty_fold_name):
model_errors = count_model_errs(counts_df, fold_name)
IG_dev_errors = count_model_errs(counts_df_IG, fold_name)
NG_dev_errors = count_model_errs(counts_df_NG, fold_name)
both_dev_errors = count_model_errs(counts_df_both_subsets, fold_name)
overlap_percent_total = (both_dev_errors/model_errors)*100.0
print(f"{pretty_fold_name} Set:\n Models found {model_errors} errors")
print(f" {IG_dev_errors} errors were found from the In Gold subset\n"
f" {NG_dev_errors} errors were found from the Not In Gold subset\n"
f" {both_dev_errors} errors were found from both subsets ({overlap_percent_total:2.1f}% of errors were found in both subsets)")
error_breakdown_report("train", "Train")
error_breakdown_report("dev", "Development")
error_breakdown_report("test", "Test")
# +
def find_error_counts_by_num_teams(counts, combine_folds=False):
groupby_set = ["num_models_missing"] if combine_folds else ["num_models_missing", "fold"]
errors_by_model_hits = (
counts[counts["error_type"] != "None"]
.groupby(groupby_set)
.aggregate({"total": "sum"})
.rename(columns={"total": "errors"})
.reset_index()
)
if combine_folds:
not_errors_by_model_hits = (
counts[counts["error_type"] == "None"]
.groupby(groupby_set).aggregate({"total":"sum"})
.rename(columns={"total": "not_errors"}).reset_index()
)
else:
not_errors_by_model_hits = (
counts[counts["error_type"] == "None"]
[["num_models_missing", "fold", "total"]]
.rename(columns={"total": "not_errors"})
)
error_counts_by_num_teams = pd.merge(errors_by_model_hits, not_errors_by_model_hits)
error_counts_by_num_teams["fraction_errors"] = (
error_counts_by_num_teams["errors"] / (error_counts_by_num_teams["errors"] + error_counts_by_num_teams["not_errors"])
)
return error_counts_by_num_teams
#total
error_counts_by_num_teams_total = find_error_counts_by_num_teams(counts_df)
error_counts_by_num_teams_total_combined = find_error_counts_by_num_teams(counts_df, combine_folds = True)
# separate by dataset
error_counts_by_num_teams_conll_2 = find_error_counts_by_num_teams(counts_df_conll_2)
error_counts_by_num_teams_conll_2_combined = find_error_counts_by_num_teams(counts_df_conll_2, combine_folds = True)
error_counts_by_num_teams_conll_3 = find_error_counts_by_num_teams(counts_df_conll_3)
error_counts_by_num_teams_conll_3_combined = find_error_counts_by_num_teams(counts_df_conll_3, combine_folds = True)
# The in-gold data has a much lower "hit" rate, so also create a copy that excludes in_gold
error_counts_by_num_teams_conll_2_NG = find_error_counts_by_num_teams(counts_df_conll_2_NG)
error_counts_by_num_teams_conll_2_NG_combined = find_error_counts_by_num_teams(counts_df_conll_2_NG, combine_folds= True)
error_counts_by_num_teams_conll_3_NG = find_error_counts_by_num_teams(counts_df_conll_3_NG)
error_counts_by_num_teams_conll_3_NG_combined = find_error_counts_by_num_teams(counts_df_conll_3_NG, combine_folds= True)
# separate by subset
error_counts_by_num_teams_IG = find_error_counts_by_num_teams(counts_df_IG)
error_counts_by_num_teams_IG_combined = find_error_counts_by_num_teams(counts_df_IG, combine_folds = True)
error_counts_by_num_teams_NG = find_error_counts_by_num_teams(counts_df_NG)
error_counts_by_num_teams_NG_combined = find_error_counts_by_num_teams(counts_df_NG, combine_folds = True)
#display one example
error_counts_by_num_teams_IG_combined
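# The fraction_errors column above is just errors / (errors + not_errors) per group; a quick sanity sketch with toy counts:

```python
import pandas as pd

errs = pd.DataFrame({"num_models_missing": [1, 2], "errors": [3, 1]})
not_errs = pd.DataFrame({"num_models_missing": [1, 2], "not_errors": [1, 3]})
merged = pd.merge(errs, not_errs)  # joins on the shared key column
merged["fraction_errors"] = merged["errors"] / (merged["errors"] + merged["not_errors"])
print(merged["fraction_errors"].tolist())  # [0.75, 0.25]
```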
# +
# Display a second example
error_counts_by_num_teams_conll_2_combined
# -
# Function to save figures generated here
import os

def savefig(filename):
    plt.savefig(os.path.join(figure_dir, filename + ".png"), bbox_inches="tight")
    plt.savefig(os.path.join(figure_dir, filename + ".pdf"), bbox_inches="tight")
    plt.savefig(os.path.join(figure_dir, filename + ".eps"), bbox_inches="tight")
# +
#break down into chart from total errors
def gen_by_dset_counts(df, name):
counts_by_source = {}
counts_by_source["Fold"] = name
    counts_by_source["Original models"] = df[df["conll_2"]].shape[0]
counts_by_source["Custom models"] = df[df["conll_3"]].shape[0]
counts_by_source["Custom models with cross validation"] = df[df["conll_4"]].shape[0]
counts_by_source["Original models and custom models"] = df[df["conll_2"] & df["conll_3"]].shape[0]
counts_by_source["Custom models and cross validated custom models"] = df[df["conll_3"] & df["conll_4"]].shape[0]
counts_by_source["Original models and cross validated models"] = df[df["conll_2"] & df["conll_4"]].shape[0]
counts_by_source["All models"] = df[df["conll_2"]& df["conll_3"] & df["conll_4"]].shape[0]
return counts_by_source
full_model_counts = gen_by_dset_counts(all_labels[all_labels["error_type"] != "None"], name="total")
dev_counts = gen_by_dset_counts(all_labels[(all_labels["error_type"] != "None") & (all_labels["fold"]=='dev')], name="Dev Fold")
test_counts = gen_by_dset_counts(all_labels[(all_labels["error_type"] != "None") & (all_labels["fold"]=='test')], name="Test Fold")
train_counts = gen_by_dset_counts(all_labels[(all_labels["error_type"] != "None") & (all_labels["fold"]=='train')], name="Train Fold")
count_list = [dev_counts, test_counts, train_counts]
df_counts = pd.DataFrame(count_list)
df_counts
plt.figure('a')
ax = df_counts.plot("Fold", ["Original models", "Custom models", "Custom models with cross validation", "Original models and custom models", "Custom models and cross validated custom models","Original models and cross validated models", "All models"], kind="bar")
#title = "Errors found by each model, by Fold"
ax.tick_params(axis='both', labelsize=25, labelrotation=0)
ax.set_ylabel('Count', fontsize=30)
ax.set_xlabel('Fold', fontsize=30)
ax.legend(fontsize=15, loc='lower left', ncol=2, borderaxespad=0, mode='expand',
bbox_to_anchor=(0., 1.02, 1., .102))
if save_figures:
savefig("Err_distribution_by_document")
# Combine the dev and test sets to generate stats for the figure in the paper
dev_plus_test_counts = {
k: dev_counts[k] + test_counts[k] for k in dev_counts.keys()
}
# Compute the sections of the Venn diagram
for d in [dev_counts, test_counts, dev_plus_test_counts, train_counts]:
d["Only custom models"] = (
d["Custom models"]
- d["Original models and custom models"]
- d["Custom models and cross validated custom models"]
+ d["All models"])
d["Only original models"] = (
d["Original models"]
- d["Original models and custom models"]
- d["Original models and cross validated models"]
+ d["All models"])
d["Only cross validated models"] = (
d["Custom models with cross validation"]
- d["Custom models and cross validated custom models"]
- d["Original models and cross validated models"]
+ d["All models"])
d["Only original and custom"] = (
d["Original models and custom models"]
- d["All models"]
)
d["Only original and cross validated"] = (
d["Original models and cross validated models"]
- d["All models"]
)
d["Only custom and cross validated"] = (
d["Custom models and cross validated custom models"]
- d["All models"]
)
print(f"""
Counts for dev set:
{dev_counts}
Counts for test set:
{test_counts}
Counts for dev set + test set:
{dev_plus_test_counts}
Counts for train set:
{train_counts}
"""
)
# -
# ### Display frequencies for the not_in_gold subset
# Currently we don't have enough data from the in-gold subset to generate a meaningful comparison, so we'll stick with the not_in_gold subset for now.
# +
df_total = error_counts_by_num_teams_NG
dev_df_total = df_total[df_total["fold"] == "dev"]
test_df_total = df_total[df_total["fold"] == "test"]
df_conll_2 = error_counts_by_num_teams_conll_2_NG
dev_df_conll_2 = df_conll_2[df_conll_2["fold"] == "dev"]
test_df_conll_2 = df_conll_2[df_conll_2["fold"] == "test"]
df_conll_3 = error_counts_by_num_teams_conll_3_NG
dev_df_conll_3 = df_conll_3[df_conll_3["fold"] == "dev"]
test_df_conll_3 = df_conll_3[df_conll_3["fold"] == "test"]
plt.plot(error_counts_by_num_teams_total_combined ["num_models_missing"], error_counts_by_num_teams_total_combined["fraction_errors"], label="All_sets")
plt.plot(dev_df_total ["num_models_missing"], dev_df_total["fraction_errors"], label="dev total")
plt.plot(test_df_total ["num_models_missing"], test_df_total["fraction_errors"], label="test total")
plt.plot(dev_df_conll_2 ["num_models_missing"], dev_df_conll_2["fraction_errors"], label="dev conll 2")
plt.plot(test_df_conll_2 ["num_models_missing"], test_df_conll_2["fraction_errors"], label="test conll 2")
plt.plot(dev_df_conll_3 ["num_models_missing"], dev_df_conll_3["fraction_errors"], label="dev conll 3")
plt.plot(test_df_conll_3 ["num_models_missing"], test_df_conll_3["fraction_errors"], label="test conll 3")
plt.xlim((7,0))
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Fraction of entities flagged that are incorrect")
plt.title("Combined: all sub-sections")
plt.legend()
plt.rcParams["figure.figsize"] = [14,7]
plt.show()
plt.figure(figsize = [8, 4])
plt.xlim((7,0))
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Fraction of entities flagged that are incorrect")
plt.title("Totals")
plt.plot(dev_df_total ["num_models_missing"], dev_df_total["fraction_errors"], label="dev total")
plt.plot(test_df_total ["num_models_missing"], test_df_total["fraction_errors"], label="test total")
plt.legend()
plt.show()
plt.figure(figsize = [8, 4])
plt.xlim((7,0))
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Fraction of entities flagged that are incorrect")
plt.title("Conll 2")
plt.plot(dev_df_conll_2 ["num_models_missing"], dev_df_conll_2["fraction_errors"], label="dev")
plt.plot(test_df_conll_2 ["num_models_missing"], test_df_conll_2["fraction_errors"], label="test")
plt.legend()
plt.show()
plt.figure(figsize = [8, 4])
plt.xlim((7,0))
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Fraction of entities flagged that are incorrect")
plt.title("Conll 3")
plt.plot(dev_df_conll_3 ["num_models_missing"], dev_df_conll_3["fraction_errors"], label="dev")
plt.plot(test_df_conll_3 ["num_models_missing"], test_df_conll_3["fraction_errors"], label="test")
plt.legend()
plt.show()
# +
# now look at dev only and test only
plt.figure(figsize = [10, 6])
plt.xlim((7,0))
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Fraction of entities flagged that are incorrect")
plt.title("Number of models missing flagged entity vs fraction of labels incorrect")
plt.plot(error_counts_by_num_teams_total_combined ["num_models_missing"], error_counts_by_num_teams_total_combined["fraction_errors"], label="Combined")
plt.plot(dev_df_total ["num_models_missing"], dev_df_total ["fraction_errors"], label="Dev fold")
plt.plot(test_df_total ["num_models_missing"], test_df_total ["fraction_errors"], label="Test fold")
plt.legend()
plt.show()
plt.figure(figsize = [8, 4])
plt.xlim((7,0))
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Fraction of entities flagged that are incorrect")
plt.title("Dev Sets")
plt.plot(dev_df_total ["num_models_missing"], dev_df_total ["fraction_errors"], label="Total")
plt.plot(dev_df_conll_2 ["num_models_missing"], dev_df_conll_2["fraction_errors"], label="Conll 2")
plt.plot(dev_df_conll_3 ["num_models_missing"], dev_df_conll_3["fraction_errors"], label="Conll 3")
plt.legend()
plt.show()
plt.figure(figsize = [8, 4])
plt.xlim((7,0))
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Fraction of entities flagged that are incorrect")
plt.title("Test Sets")
plt.plot(test_df_total ["num_models_missing"], test_df_total ["fraction_errors"], label="Total")
plt.plot(test_df_conll_2 ["num_models_missing"], test_df_conll_2["fraction_errors"], label="Conll 2")
plt.plot(test_df_conll_3 ["num_models_missing"], test_df_conll_3["fraction_errors"], label="Conll 3")
plt.legend()
plt.show()
# -
# ## Look at data by error type
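# The split below builds a dictionary of sub-frames keyed by error type; `groupby` can produce the same split more compactly, sketched here with toy data:

```python
import pandas as pd

toy = pd.DataFrame({"error_type": ["Tag", "Span", "Tag"], "x": [1, 2, 3]})
# One sub-frame per distinct error type, keyed by the type itself.
by_type = {k: g for k, g in toy.groupby("error_type")}
print(sorted(by_type))      # ['Span', 'Tag']
print(len(by_type["Tag"]))  # 2
```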
# +
# Break the data into sections based on their error tags. Later we will do similar analysis to the above, so keep the other columns.
# Use a dictionary keyed by tag; for now we analyze each tag separately.
Error_types = ['Sentence', 'Wrong', 'Token', 'Tag', 'Span', 'Both', 'Missing']
ignore_types = [ 'None']
labels_by_error_type = {}
#all_labels_above_seven = all_labels[all_labels["num_models_missing"] <= 7]
for error_type in Error_types:
labels_by_error_type[error_type] = all_labels[all_labels["error_type"] == error_type].copy()
# Now separate out into counts for some preliminary analysis
row_list = []
for e_type in Error_types:
df = labels_by_error_type[e_type] #make a reference to the dataframe
temp_dict = {}
temp_dict["error_type"] = e_type
temp_dict["total"] = df.shape[0]
temp_dict["dev"] = df[df["fold"] == "dev"].shape[0]
temp_dict["test"] = df[df["fold"] == "test"].shape[0]
temp_dict["train"] = df[df["fold"] == "train"].shape[0]
temp_dict["count_conll_2"] = df[df["conll_2"]].shape[0]
temp_dict["count_conll_3"] = df[df["conll_3"]].shape[0]
temp_dict["count_conll_4"] = df[df["conll_4"]].shape[0]
temp_dict["count_hand"] = df[df["num_models"].isna()].shape[0]
temp_dict["count_not_in_gold"] = df[(df["subset"] != "in_gold")].shape[0]
temp_dict["count_in_gold"] = df[(df["subset"] != "not_in_gold")].shape[0]
row_list.append(temp_dict)
count_errs_by_type = pd.DataFrame(row_list)
print("Total number of errors by type")
display(count_errs_by_type[~count_errs_by_type["error_type"].isin(ignore_types)])
# Change to percent incidence
for i in count_errs_by_type.columns:
if i !="error_type":
count_errs_by_type[i] = count_errs_by_type[i].div(count_errs_by_type[i].sum())*100
count_errs_by_type = count_errs_by_type[~count_errs_by_type["error_type"].isin(ignore_types)]
print("\nError type incidence as a percent of total errors correctly flagged")
count_errs_by_type
# +
plt.figure()
plt.pie(x=count_errs_by_type["count_conll_2"].array, labels=count_errs_by_type["error_type"].array, autopct='%1.1f%%', pctdistance=.75, labeldistance=1.1, textprops={'fontsize': 22})
plt.tight_layout()
if save_figures:
savefig("err_distribution_conll_2")
plt.show()
plt.figure()
plt.pie(x=count_errs_by_type["count_conll_3"].array, labels=count_errs_by_type["error_type"].array, autopct='%1.1f%%', pctdistance=.75, labeldistance=1.1, textprops={'fontsize': 22})
plt.tight_layout()
if save_figures:
savefig("err_distribution_conll_3")
plt.show()
# +
plt.clf()
plt.figure()
plt.pie(x=count_errs_by_type["count_conll_4"].array, labels=count_errs_by_type["error_type"].array, autopct='%1.1f%%', pctdistance=.83, labeldistance=1.1,textprops={'fontsize': 22})
plt.tight_layout()
if save_figures:
savefig("err_distribution_conll_4")
plt.show()
plt.clf()
plt.figure()
plt.pie(x=count_errs_by_type["count_hand"].array, labels=count_errs_by_type["error_type"].array, autopct='%1.1f%%', pctdistance=.75, labeldistance=1.1,textprops={'fontsize': 22})
plt.tight_layout()
if save_figures:
savefig("err_distribution_hand")
plt.show()
# -
count_errs_by_type.plot("error_type",["total", "count_in_gold","count_not_in_gold"], kind="bar", title="comparison of error type distribution across subsets")
count_errs_by_type.plot("error_type", ['total', "count_conll_2", "count_conll_3", "count_conll_4"], kind="bar", title="Distribution of errors by error type, and by dataset")
#count_errs_by_type.plot("error_type", "total",kind = "bar", title= "Distribution of errors by error type")
count_errs_by_type.plot("error_type", ["total","dev", "test", "train"], kind="bar", title="Distribution of errors by error type and by fold")
# +
error_type_to_disp = "Tag"
# a little helper function to improve readability
def get_fold(df,fold):
return df[df["fold"] ==fold]
data_df = labels_by_error_type[error_type_to_disp]
data_df_conll_2 = data_df[data_df["conll_2"] &(~data_df["num_models_missing"].isna())]
data_df_conll_3 = data_df[data_df["conll_3"] &(~data_df["num_models_missing"].isna())]
counts_total = make_counts(data_df)
counts_conll_2 = make_counts(data_df_conll_2)
counts_conll_3 = make_counts(data_df_conll_3)
#plot test set
plt.figure(figsize = [8, 4])
plt.xlim((7,0))
plt.title("Labels found vs number of models flagged on test set Error Type = " +error_type_to_disp)
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Number of errors found")
plt.plot(get_fold(counts_total,"test")["num_models_missing"], get_fold(counts_total,"test")["total"], label="Combined")
plt.plot(get_fold(counts_conll_2,"test")["num_models_missing"], get_fold(counts_conll_2,"test")["total"], label="Conll_2")
plt.plot(get_fold(counts_conll_3,"test")["num_models_missing"], get_fold(counts_conll_3,"test")["total"], label="Conll_3")
plt.legend()
#plot dev set
plt.figure(figsize = [8, 4])
plt.xlim((7,0))
plt.title("Labels found vs number of models flagged on Dev set Error Type = " +error_type_to_disp)
plt.xlabel("Number of models that didn't find entity")
plt.ylabel("Number of errors found")
plt.plot(get_fold(counts_total,"dev")["num_models_missing"], get_fold(counts_total,"dev")["total"], label="Combined")
plt.plot(get_fold(counts_conll_2,"dev")["num_models_missing"], get_fold(counts_conll_2,"dev")["total"], label="Conll_2")
plt.plot(get_fold(counts_conll_3,"dev")["num_models_missing"], get_fold(counts_conll_3,"dev")["total"], label="Conll_3")
plt.legend()
# +
# show frequency distribution for given document
def count_error_distribution_for_doc(Doc_num, fold):
counts = []
for tag in Error_types:
df = labels_by_error_type[tag] #make a reference to the dataframe
df = df[(df["doc_offset"] == Doc_num) & (df["fold"] == fold)]
temp_dict = {}
temp_dict["error_type"] = tag
temp_dict["count"] = df.shape[0]
temp_dict["count_conll_2"] = df[df["conll_2"]].shape[0]
temp_dict["count_conll_3"] = df[df["conll_3"]].shape[0]
temp_dict["count_conll_4"] = df[df["conll_4"]].shape[0]
counts.append(temp_dict)
return pd.DataFrame(counts)
count_error_distribution_for_doc(35,"test")
# +
# plot a histogram of per-document hit counts for a given error type (can be applied repeatedly to different subsets of the data)
def graph_hit_frequency_for_err_type(error_type):
df = labels_by_error_type[error_type]
counts =(df[["doc_offset", "fold"]]
.groupby(["doc_offset","fold"])
.aggregate({"doc_offset": "count"})
.rename(columns={"doc_offset": "total"})
.reset_index() )
max_val = counts["total"].max()
counts.drop("doc_offset", axis=1).plot.hist(xticks = range(1,16),bins = max_val-1, figsize = (6,3),
title = "Frequency distribution for hits on error type = \"" + error_type + "\"")
return counts
frequency_by_err_types = {}
for e_type in Error_types:
frequency_by_err_types[e_type] = graph_hit_frequency_for_err_type(e_type)
# -
frequency_by_err_types["Token"]
# +
def make_doc_histogram(selection, selection_name):
df = selection[selection["error_type"] != "None"]
counts =(df[["doc_offset", "fold"]]
.groupby(["doc_offset","fold"])
.aggregate({"doc_offset": "count"})
.rename(columns={"doc_offset": "total"})
.reset_index() )
max_val = counts["total"].max()
counts.plot("doc_offset","total", kind="hist", xticks= range(1,max_val +1), figsize = (10,6), bins = max_val-1,
title = "Frequency distribution for " +selection_name )
make_doc_histogram(all_labels, "All errors")
make_doc_histogram(all_labels_in_gold, "In_gold errors")
make_doc_histogram(all_labels_not_in_gold, "not_in_gold errors")
# +
# write out the all_labels dataset as a CSV
ALL_LABELS_OUTPUT_FILE_NAME = os.path.join("..", "corrected_labels", "all_conll_corrections_combined.csv")
write_columns = ["fold", "doc_offset", "corpus_span", "corpus_ent_type", "error_type",
"correct_span", "correct_ent_type", "agreeing_models", "notes", "conll_2", "conll_3", "conll_4"]
write_file = all_labels[write_columns].copy()
write_file.loc[:, "hand_labelled"] = write_file["agreeing_models"].isna()
write_file = write_file[write_file.error_type != "None"]
write_file.rename(columns={"conll_2": "Original entrants ensemble", "conll_3": "custom models ensemble", "conll_4": "cross validation ensemble"}, inplace=True)
write_file.to_csv(ALL_LABELS_OUTPUT_FILE_NAME)
print("Done")
# -
write_file
pd.options.display.max_rows = 60
all_labels[(all_labels["error_type"] == "Token") & (all_labels["fold"] == 'test')]
# File: archive_old_files/scripts/Label_Stats.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # 14.4 Case Study: Time Series and Simple Linear Regression
# ### Loading the Average High Temperatures into a `DataFrame`
#
# **We added `%matplotlib inline` to enable Matplotlib in this notebook.**
# %matplotlib inline
import pandas as pd
nyc = pd.read_csv('ave_hi_nyc_jan_1895-2018.csv')
nyc.columns = ['Date', 'Temperature', 'Anomaly']
nyc.Date = nyc.Date.floordiv(100)
nyc.head(3)
# ### Splitting the Data for Training and Testing
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
nyc.Date.values.reshape(-1, 1), nyc.Temperature.values,
random_state=11)
X_train.shape
X_test.shape
# ### Training the Model
from sklearn.linear_model import LinearRegression
linear_regression = LinearRegression()
linear_regression.fit(X=X_train, y=y_train)
linear_regression.coef_
linear_regression.intercept_
# ### Testing the Model
predicted = linear_regression.predict(X_test)
expected = y_test
for p, e in zip(predicted[::5], expected[::5]):
print(f'predicted: {p:.2f}, expected: {e:.2f}')
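Beyond spot-checking individual predictions, a single goodness-of-fit number is often useful. This sketch (not one of the book's numbered snippets) computes the coefficient of determination by hand:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

# Perfect predictions give 1.0; predicting the mean everywhere gives 0.0.
r_squared([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # → 1.0
```

Calling `r_squared(expected, predicted)` on the arrays above would score the test-set fit the same way `LinearRegression.score` does.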
# ### Predicting Future Temperatures and Estimating Past Temperatures
predict = (lambda x: linear_regression.coef_ * x +
linear_regression.intercept_)
predict(2019)
predict(1890)
# ### Visualizing the Dataset with the Regression Line
import seaborn as sns
# +
axes = sns.scatterplot(data=nyc, x='Date', y='Temperature',
hue='Temperature', palette='winter', legend=False)
axes.set_ylim(10, 70)
import numpy as np
x = np.array([min(nyc.Date.values), max(nyc.Date.values)])
y = predict(x)
import matplotlib.pyplot as plt
line = plt.plot(x, y)
# +
# This placeholder cell was added because we had to combine
# the section's snippets 22-28 for the visualization to work in Jupyter
# and want the subsequent snippet numbers to match the book
# +
# Placeholder cell
# +
# Placeholder cell
# +
# Placeholder cell
# +
# Placeholder cell
# +
# Placeholder cell
# + active=""
# ### Overfitting/Underfitting
# -
# # More Info
# * See **video** Lesson 14 in [**Python Fundamentals LiveLessons** on Safari Online Learning](https://learning.oreilly.com/videos/python-fundamentals/9780135917411)
# * See **book** Chapter 14 in [**Python for Programmers** on Safari Online Learning](https://learning.oreilly.com/library/view/python-for-programmers/9780135231364/), or see **book** Chapter 15 in **Intro to Python for Computer Science and Data Science**
# * Interested in a print book? Check out:
#
# | Python for Programmers | Intro to Python for Computer<br>Science and Data Science
# | :------ | :------
# | <a href="https://amzn.to/2VvdnxE"><img alt="Python for Programmers cover" src="../images/PyFPCover.png" width="150" border="1"/></a> | <a href="https://amzn.to/2LiDCmt"><img alt="Intro to Python for Computer Science and Data Science: Learning to Program with AI, Big Data and the Cloud" src="../images/IntroToPythonCover.png" width="159" border="1"></a>
#
# >Please **do not** purchase both books—our professional book **_Python for Programmers_** is a subset of our college textbook **_Intro to Python for Computer Science and Data Science_**
##########################################################################
# (C) Copyright 2019 by Deitel & Associates, Inc. and #
# Pearson Education, Inc. All Rights Reserved. #
# #
# DISCLAIMER: The authors and publisher of this book have used their #
# best efforts in preparing the book. These efforts include the #
# development, research, and testing of the theories and programs #
# to determine their effectiveness. The authors and publisher make #
# no warranty of any kind, expressed or implied, with regard to these #
# programs or to the documentation contained in these books. The authors #
# and publisher shall not be liable in any event for incidental or #
# consequential damages in connection with, or arising out of, the #
# furnishing, performance, or use of these programs. #
##########################################################################
# File: examples/ch14/snippets_ipynb/14_04.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Components
# Components are the lowest level of building blocks in EvalML. Each component represents a fundamental operation to be applied to data.
#
# All components accept parameters as keyword arguments to their `__init__` methods. These parameters can be used to configure behavior.
#
# Each component class definition must include a human-readable `name` for the component. Additionally, each component class may expose parameters for AutoML search by defining a `hyperparameter_ranges` attribute containing the parameters in question.
#
# EvalML splits components into two categories: **transformers** and **estimators**.
# ## Transformers
#
# Transformers subclass the `Transformer` class, and define a `fit` method to learn information from training data and a `transform` method to apply a learned transformation to new data.
#
# For example, an [imputer](../generated/evalml.pipelines.components.SimpleImputer.ipynb) is configured with the desired impute strategy to follow, for instance the mean value. The imputer's `fit` method would learn the mean from the training data, and the `transform` method would fill in the learned mean for any missing values in new data.
#
# All transformers can execute `fit` and `transform` separately or in one step by calling `fit_transform`. Defining a custom `fit_transform` method can facilitate useful performance optimizations in some cases.
# +
import numpy as np
import pandas as pd
from evalml.pipelines.components import SimpleImputer
X = pd.DataFrame([[1, 2, 3], [1, np.nan, 3]])
display(X)
# +
import woodwork as ww
imp = SimpleImputer(impute_strategy="mean")
X = ww.DataTable(X)
X = imp.fit_transform(X)
display(X)
# -
# Below is a list of all transformers included with EvalML:
from evalml.pipelines.components.utils import all_components, Estimator, Transformer
for component in all_components():
if issubclass(component, Transformer):
print(f"Transformer: {component.name}")
# ## Estimators
#
# Each estimator wraps an ML algorithm. Estimators subclass the `Estimator` class, and define a `fit` method to learn information from training data and a `predict` method for generating predictions from new data. Classification estimators should also define a `predict_proba` method for generating predicted probabilities.
#
# Estimator classes each define a `model_family` attribute indicating what type of model is used.
#
# Here's an example of using the [LogisticRegressionClassifier](../generated/evalml.pipelines.components.LogisticRegressionClassifier.ipynb) estimator to fit and predict on a simple dataset:
# +
from evalml.pipelines.components import LogisticRegressionClassifier
clf = LogisticRegressionClassifier()
# reuse the transformed X from the SimpleImputer example above
y = [1, 0]
clf.fit(X, y)
clf.predict(X)
# -
# Below is a list of all estimators included with EvalML:
from evalml.pipelines.components.utils import all_components, Estimator, Transformer
for component in all_components():
if issubclass(component, Estimator):
print(f"Estimator: {component.name}")
# ## Defining Custom Components
#
# EvalML allows you to easily create your own custom components by following the steps below.
#
# ### Custom Transformers
#
# Your transformer must inherit from the correct subclass. In this case [Transformer](../generated/evalml.pipelines.components.Transformer.ipynb) for components that transform data. Next we will use EvalML's [DropNullColumns](../generated/evalml.pipelines.components.DropNullColumns.ipynb) as an example.
# +
from evalml.pipelines.components import Transformer
from evalml.utils import (
infer_feature_types,
_convert_woodwork_types_wrapper
)
class DropNullColumns(Transformer):
"""Transformer to drop features whose percentage of NaN values exceeds a specified threshold"""
name = "Drop Null Columns Transformer"
hyperparameter_ranges = {}
def __init__(self, pct_null_threshold=1.0, random_seed=0, **kwargs):
"""Initializes a transformer to drop features whose percentage of NaN values exceeds a specified threshold.
Arguments:
pct_null_threshold(float): The percentage of NaN values in an input feature to drop.
Must be a value between [0, 1] inclusive. If equal to 0.0, will drop columns with any null values.
If equal to 1.0, will drop columns with all null values. Defaults to 1.0.
"""
if pct_null_threshold < 0 or pct_null_threshold > 1:
raise ValueError("pct_null_threshold must be a float between 0 and 1, inclusive.")
parameters = {"pct_null_threshold": pct_null_threshold}
parameters.update(kwargs)
self._cols_to_drop = None
super().__init__(parameters=parameters,
component_obj=None,
random_seed=random_seed)
def fit(self, X, y=None):
"""Fits DropNullColumns component to data
Arguments:
X (list, ww.DataTable, pd.DataFrame): The input training data of shape [n_samples, n_features]
y (list, ww.DataColumn, pd.Series, np.ndarray, optional): The target training data of length [n_samples]
Returns:
self
"""
pct_null_threshold = self.parameters["pct_null_threshold"]
X_t = infer_feature_types(X)
X_t = _convert_woodwork_types_wrapper(X_t.to_dataframe())
percent_null = X_t.isnull().mean()
if pct_null_threshold == 0.0:
null_cols = percent_null[percent_null > 0]
else:
null_cols = percent_null[percent_null >= pct_null_threshold]
self._cols_to_drop = list(null_cols.index)
return self
def transform(self, X, y=None):
"""Transforms data X by dropping columns that exceed the threshold of null values.
Arguments:
X (ww.DataTable, pd.DataFrame): Data to transform
y (ww.DataColumn, pd.Series, optional): Ignored.
Returns:
ww.DataTable: Transformed X
"""
X_t = infer_feature_types(X)
return X_t.drop(self._cols_to_drop)
# -
# #### Required fields
#
# For a transformer you must provide a class attribute `name` indicating a human-readable name.
#
# #### Required methods
#
# Likewise, there are select methods you need to override as `Transformer` is an abstract base class:
#
# - `__init__()` - the `__init__()` method of your transformer will need to call `super().__init__()` and pass three parameters in: a `parameters` dictionary holding the parameters to the component, the `component_obj`, and the `random_seed` value. You can see that `component_obj` is set to `None` above and we will discuss `component_obj` in depth later on.
#
# - `fit()` - the `fit()` method is responsible for fitting your component on training data. It should return the component object.
#
# - `transform()` - after fitting a component, the `transform()` method will take in new data and transform accordingly. It should return a Woodwork DataTable. Note: a component must call `fit()` before `transform()`.
#
# You can also call or override `fit_transform()` that combines `fit()` and `transform()` into one method.
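The relationship between the three methods can be sketched generically (an illustrative toy class, not EvalML's actual base-class implementation):

```python
class ExampleTransformer:
    """Toy transformer showing how fit_transform composes fit and transform."""

    def fit(self, X, y=None):
        # learn a statistic from the training data
        self._mean = sum(X) / len(X)
        return self

    def transform(self, X, y=None):
        # apply the learned statistic to (possibly new) data
        return [x - self._mean for x in X]

    def fit_transform(self, X, y=None):
        # default behavior: fit on X, then transform the same X
        return self.fit(X, y).transform(X, y)

ExampleTransformer().fit_transform([1.0, 2.0, 3.0])  # → [-1.0, 0.0, 1.0]
```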
# ### Custom Estimators
#
# Your estimator must inherit from the correct subclass. In this case [Estimator](../generated/evalml.pipelines.components.Estimator.ipynb) for components that predict new target values. Next we will use EvalML's [BaselineRegressor](../generated/evalml.pipelines.components.BaselineRegressor.ipynb) as an example.
# +
import numpy as np
import pandas as pd
from evalml.model_family import ModelFamily
from evalml.pipelines.components.estimators import Estimator
from evalml.problem_types import ProblemTypes
class BaselineRegressor(Estimator):
"""Regressor that predicts using the specified strategy.
This is useful as a simple baseline regressor to compare with other regressors.
"""
name = "Baseline Regressor"
hyperparameter_ranges = {}
model_family = ModelFamily.BASELINE
supported_problem_types = [ProblemTypes.REGRESSION, ProblemTypes.TIME_SERIES_REGRESSION]
def __init__(self, strategy="mean", random_seed=0, **kwargs):
"""Baseline regressor that uses a simple strategy to make predictions.
Arguments:
strategy (str): Method used to predict. Valid options are "mean", "median". Defaults to "mean".
random_seed (int): Seed for the random number generator. Defaults to 0.
"""
if strategy not in ["mean", "median"]:
raise ValueError("'strategy' parameter must equal either 'mean' or 'median'")
parameters = {"strategy": strategy}
parameters.update(kwargs)
self._prediction_value = None
self._num_features = None
super().__init__(parameters=parameters,
component_obj=None,
random_seed=random_seed)
def fit(self, X, y=None):
if y is None:
raise ValueError("Cannot fit Baseline regressor if y is None")
X = infer_feature_types(X)
y = infer_feature_types(y)
y = _convert_woodwork_types_wrapper(y.to_series())
if self.parameters["strategy"] == "mean":
self._prediction_value = y.mean()
elif self.parameters["strategy"] == "median":
self._prediction_value = y.median()
self._num_features = X.shape[1]
return self
def predict(self, X):
X = infer_feature_types(X)
predictions = pd.Series([self._prediction_value] * len(X))
return infer_feature_types(predictions)
@property
def feature_importance(self):
"""Returns importance associated with each feature. Since baseline regressors do not use input features to calculate predictions, returns an array of zeroes.
Returns:
np.ndarray (float): An array of zeroes
"""
return np.zeros(self._num_features)
# -
# #### Required fields
#
# - `name` indicating a human-readable name.
#
# - `model_family` - EvalML [model_family](../generated/evalml.model_family.ModelFamily.ipynb) that this component belongs to
#
# - `supported_problem_types` - list of EvalML [problem_types](../generated/evalml.problem_types.ProblemTypes.ipynb) that this component supports
#
# Model families and problem types include:
# +
from evalml.model_family import ModelFamily
from evalml.problem_types import ProblemTypes
print("Model Families:\n", [m.value for m in ModelFamily])
print("Problem Types:\n", [p.value for p in ProblemTypes])
# -
# #### Required methods
#
# - `__init__()` - the `__init__()` method of your estimator will need to call `super().__init__()` and pass three parameters in: a `parameters` dictionary holding the parameters to the component, the `component_obj`, and the `random_seed` value.
#
# - `fit()` - the `fit()` method is responsible for fitting your component on training data.
#
# - `predict()` - after fitting a component, the `predict()` method will take in new data and predict new target values. Note: a component must call `fit()` before `predict()`.
#
# - `feature_importance` - `feature_importance` is a [Python property](https://docs.python.org/3/library/functions.html#property) that returns a list of importances associated with each feature.
#
# If your estimator handles classification problems it also requires an additional method:
#
# - `predict_proba()` - this method predicts probability estimates for classification labels
# ### Components Wrapping Third-Party Objects
#
# The `component_obj` parameter is used for wrapping third-party objects and using them in component implementation. If you're using a `component_obj` you will need to define `__init__()` and pass in the relevant object that has also implemented the required methods mentioned above. However, if the `component_obj` does not follow EvalML component conventions, you may need to override methods as needed. Below is an example of EvalML's [LinearRegressor](../generated/evalml.pipelines.components.LinearRegressor.ipynb).
# +
from sklearn.linear_model import LinearRegression as SKLinearRegression
from evalml.model_family import ModelFamily
from evalml.pipelines.components.estimators import Estimator
from evalml.problem_types import ProblemTypes
class LinearRegressor(Estimator):
"""Linear Regressor."""
name = "Linear Regressor"
model_family = ModelFamily.LINEAR_MODEL
supported_problem_types = [ProblemTypes.REGRESSION]
def __init__(self, fit_intercept=True, normalize=False, n_jobs=-1, random_seed=0, **kwargs):
parameters = {
'fit_intercept': fit_intercept,
'normalize': normalize,
'n_jobs': n_jobs
}
parameters.update(kwargs)
linear_regressor = SKLinearRegression(**parameters)
super().__init__(parameters=parameters,
component_obj=linear_regressor,
random_seed=random_seed)
@property
def feature_importance(self):
return self._component_obj.coef_
# -
# ### Hyperparameter Ranges for AutoML
# `hyperparameter_ranges` is a dictionary mapping the parameter name (str) to an allowed range ([SkOpt Space](https://scikit-optimize.github.io/stable/modules/classes.html#module-skopt.space.space)) for that parameter. Both lists and `skopt.space.Categorical` values are accepted for categorical spaces.
#
# AutoML will perform a search over the allowed ranges for each parameter to select models which produce optimal performance within those ranges. AutoML gets the allowed ranges for each component from the component's `hyperparameter_ranges` class attribute. Any component parameter you add an entry for in `hyperparameter_ranges` will be included in the AutoML search. If parameters are omitted, AutoML will use the default value in all pipelines.
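For illustration, a hypothetical component might declare its searchable parameters like this (the parameter names and values below are made up; numeric ranges would typically use skopt space objects such as `skopt.space.Real` or `skopt.space.Integer`):

```python
# Hypothetical hyperparameter_ranges for a custom component. Categorical
# spaces can be given as plain lists; any parameter omitted from this dict
# keeps its default value in every pipeline AutoML builds.
hyperparameter_ranges = {
    "impute_strategy": ["mean", "median", "most_frequent"],
    "pct_null_threshold": [0.5, 0.75, 1.0],
}
```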
# ## Generate Component Code
#
# Once you have a component defined in EvalML, you can generate string Python code to recreate this component, which can then be saved and run elsewhere with EvalML. `generate_component_code` requires a component instance as the input. This method works for custom components as well, although it won't return the code required to define the custom component.
# +
from evalml.pipelines.components import LogisticRegressionClassifier
from evalml.pipelines.components.utils import generate_component_code
lr = LogisticRegressionClassifier(C=5)
code = generate_component_code(lr)
print(code)
# -
# this string can then be copied and pasted into a separate window and executed as Python code
exec(code)
# +
# We can also do this for custom components
from evalml.pipelines.components.utils import generate_component_code
myDropNull = DropNullColumns()
print(generate_component_code(myDropNull))
# -
# ### Expectations for Custom Classification Components
# EvalML expects the following from custom classification component implementations:
#
# - Classification targets are integers ranging from 0 to n-1.
# - For classification estimators, the order of predict_proba's columns must match the order of the target, and the column names must be integers ranging from 0 to n-1.
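As an illustration of that column convention, a `predict_proba` result for a 3-class problem would be shaped like this (the probabilities are made up):

```python
import pandas as pd

# Each row sums to 1; column names are the integer class labels 0..n-1,
# in the same order as the targets.
proba = pd.DataFrame(
    [[0.7, 0.2, 0.1],
     [0.1, 0.6, 0.3]],
    columns=[0, 1, 2],
)
```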
# File: docs/source/user_guide/components.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/kalarea/detection-2016-nipsws/blob/master/joint1_slot_gate.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="jLXbBEdVZqBf" colab_type="text"
# The output at the [CLS] position is used for classification, and the other tokens go directly into a softmax layer
# + id="lJXw3N_lhdpX" colab_type="code" outputId="2f013c23-99dc-4b94-92df-0b7cc19f0a61" colab={"base_uri": "https://localhost:8080/", "height": 655}
# !pip install transformers
# + id="GfyoR-laZ4At" colab_type="code" colab={}
# tag and label management
BIO_TAG = {
"检查项目":'INS',
"科室":'DEP',
"疾病":'DIS',
"药品":'DRU',
"食物":'FOO',
"症状":'SYM',
"发病部位":'PAR',
"治疗方法":'MET',
"发病人群":'NUM'
}
BIO2ID = {
'B-INS': 0,
'I-INS': 1,
'B-DEP': 2,
'I-DEP': 3,
'B-DIS': 4,
'I-DIS': 5,
'B-DRU': 6,
'I-DRU': 7,
'B-FOO': 8,
'I-FOO': 9,
'B-SYM': 10,
'I-SYM': 11,
'B-PAR': 12,
'I-PAR': 13,
'B-MET': 14,
'I-MET': 15,
'B-NUM': 16,
'I-NUM': 17,
'O': 18
}
CLASSIFY_TAG = {
'疾病->并发症':1,
'疾病->所属科室':2,
'疾病->药品':3,
'疾病->食物(宜吃)':4,
'疾病->食物(忌吃)':5,
'疾病->症状':6,
'疾病->简介':7,
'疾病->发病部位':8,
'疾病->病因':9,
'疾病->预防措施':10,
'疾病->治疗周期':11,
'疾病->治疗方法':12,
'疾病->治愈概率':13,
'疾病->易感人群':14,
'疾病->检查项目':15,
'疾病->别名':16,
'疾病->是否医保':17,
'疾病->是否传染':18,
'疾病->费用':19,
'症状-(疾病)-科室':20,
'症状-(疾病)-药物':21,
'症状-(疾病)-食物(宜)':22,
'症状-(疾病)-食物(忌)':23,
'症状-(疾病)-检查项目':24,
'疾病->食物(综合)':25,
'症状-(疾病)-食物(综合)':26,
'症状-(疾病)-治疗方法':27,
'不支持':0
}
ID2BIO = {j:i for i,j in BIO2ID.items()}
ID2CLASS = {j:i for i,j in CLASSIFY_TAG.items()}
# + id="VOUw0488akbj" colab_type="code" colab={}
# load the data
'''
The data format is:
{"id": 2, "text": "一岁半宝宝的食谱",
"ner_annotations": [{"label": "发病人群", "start_offset": 0, "end_offset": 5}],
"classify_annotations": [{"label": "不支持"}]}
'''
data_dict_list = []
import json
with open("/content/drive/My Drive/medical_ner_classification/data/medicalData/data_set.jsonl","r",encoding="utf-8") as f:
while True:
line = f.readline()
if(line ==''): break
line_json = json.loads(line)
data_dict_list.append(line_json)
# + id="We09NafoavwQ" colab_type="code" outputId="e10803c7-d41c-4152-ed9c-93d1a8e805fb" colab={"base_uri": "https://localhost:8080/", "height": 34}
len(data_dict_list)
# + id="oCGCQd9YazPX" colab_type="code" outputId="72783ff2-a727-49b6-d7d9-88c491d99811" colab={"base_uri": "https://localhost:8080/", "height": 34}
# split data_dict_list into train, dev, and test sets
import random
train_rate = 0.8
dev_rate = 0.1
test_rate = 0.1
random.seed(1)
data_len = len(data_dict_list)
print("There are %d records in total" % data_len)
random_list = [i for i in range(len(data_dict_list))]
random.shuffle(random_list)
train_indexs = random_list[0:int(train_rate*data_len)]
dev_indexs = random_list[int(train_rate*data_len):int(train_rate*data_len + dev_rate*data_len)]
test_indexs = random_list[int(train_rate*data_len + dev_rate*data_len):]
assert len(train_indexs)+len(dev_indexs)+len(test_indexs)==data_len
# + id="bNi1sTVXa2yp" colab_type="code" colab={}
train_dict_list = [data_dict_list[i] for i in train_indexs]
dev_dict_list = [data_dict_list[i] for i in dev_indexs]
test_dict_list = [data_dict_list[i] for i in test_indexs]
# + id="MrCbzF0ta43a" colab_type="code" colab={}
# build the dataset
from torch.utils.data import Dataset
import random
class Dict_Dataset(Dataset):
def __init__(self, data_dict_list, transformer_function, seed=1):
self.data_dict_list = data_dict_list
self.len = len(self.data_dict_list)
self.transformer = transformer_function
self.point = 0 # points to the next item to be read
random.seed(seed)
def __getitem__(self,index):
return self.transformer(self.data_dict_list[index])
def __len__(self):
return self.len
def set_len(self,len):
self.len=len
def reset_len(self):
self.len=len(self.data_dict_list)
def shuffle(self):
random.shuffle(self.data_dict_list)
def reset(self):
self.point = 0
def has_next(self):
if(self.point>=self.len): return False
else: return True
def get_batch(self,batch_size):
num = 0
input_ids = []
attention_mask = []
token_type_ids = []
loss_mask = []
ner_tags = []
ner_labels = []
class_labels = []
sequence_lens = [] # used for packing before the LSTM input
#input_ids, attention_mask, token_type_ids, loss_mask, ner_tags, ner_labels, class_labels
while(num<batch_size and self.has_next()):
curr_input_ids, curr_attention_mask, curr_token_type_ids, curr_loss_mask, curr_ner_tags, curr_ner_labels, curr_class_label,\
curr_sequence_len = self.__getitem__(self.point)
self.point += 1
num += 1
input_ids.append(curr_input_ids)
attention_mask.append(curr_attention_mask)
token_type_ids.append(curr_token_type_ids)
loss_mask.append(curr_loss_mask)
ner_tags.append(curr_ner_tags)
ner_labels.append(curr_ner_labels)
class_labels.append(curr_class_label)
sequence_lens.append(curr_sequence_len)
return input_ids, attention_mask, token_type_ids, loss_mask, ner_tags, ner_labels, class_labels, sequence_lens
# + id="rLj-dpnJa9Xl" colab_type="code" colab={}
# converter: transforms dict-form data into the required tensor inputs
def transformer_dict_to_tensor(max_len,tokenizer,BIO_TAG,BIO2ID):
'''
json_dict looks like:
{"id": 2, "text": "一岁半宝宝的食谱",
"ner_annotations": [{"label": "发病人群", "start_offset": 0, "end_offset": 5}],
"classify_annotations": [{"label": "不支持"}]}
'''
max_len = max_len
tokenizer = tokenizer
BIO_TAG = BIO_TAG
BIO2ID = BIO2ID
def transformer(json_dict):
text = json_dict['text']
encode = tokenizer.encode_plus(list(text), max_length=max_len, return_token_type_ids=True, return_attention_mask=True, pad_to_max_length=True)
input_ids = encode['input_ids'] #
attention_mask = encode['attention_mask'] #
token_type_ids = encode['token_type_ids'] #
loss_mask = [0 for i in range(max_len)] #ner loss mask
loss_mask[1:1+len(text)] = [1 for i in range(len(text))]
ner_tags = ['O' for i in range(max_len)]
for i in json_dict['ner_annotations']:
label = i['label']
start_offset = i['start_offset']
end_offset = i['end_offset']
ner_tags[1+start_offset] = 'B-'+BIO_TAG[label]
for j in range(start_offset+1, end_offset):
ner_tags[1+j] = 'I-'+BIO_TAG[label] #
ner_labels = [BIO2ID[k] for k in ner_tags] #
ner_tags = ner_tags[1:1+len(text)]
class_label = CLASSIFY_TAG[json_dict['classify_annotations'][0]['label']] #
sequence_len = len(text)
return input_ids, attention_mask, token_type_ids, loss_mask, ner_tags, ner_labels, class_label, sequence_len
return transformer
# + id="Lm8oIbnwbBbA" colab_type="code" colab={}
test_data = {"id": 2, "text": "一岁半宝宝的食谱",
"ner_annotations": [{"label": "发病人群", "start_offset": 0, "end_offset": 5}],
"classify_annotations": [{"label": "不支持"}]}
# + id="YVGszjuubOv8" colab_type="code" outputId="73fdfe01-88f9-433b-ec57-1810c6a0dc70" colab={"base_uri": "https://localhost:8080/", "height": 66, "referenced_widgets": ["6595997e1cc74ae094a33747813a63f7", "db62411cb6334ab98d974538f8a3e677", "0f6cae380ffa47f8aaac19a224b7fd2f", "b1675a8afd304b5c81daf9080e2f51fe", "77eeaab048044d29a3359d0242c284e8", "e3d384e1bd084aa49a323899afbcba70", "c78a76e085b84859b918eca45b4a14b9", "c0e06826d7074c01884b0c4bae3fb727"]}
max_len=81
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
transformer = transformer_dict_to_tensor(max_len,tokenizer,BIO_TAG,BIO2ID)
# + id="8QeRgS19h4xB" colab_type="code" colab={}
# input_ids, attention_mask, token_type_ids, loss_mask, ner_tags, ner_labels, class_label, sequence_len = transformer(test_data)
# + id="iLtNVqN6iAP-" colab_type="code" colab={}
# list(zip(tokenizer.convert_ids_to_tokens(input_ids),attention_mask,token_type_ids,loss_mask,ner_tags, ner_labels))
# + id="WUThSGVplup2" colab_type="code" colab={}
train_data_set = Dict_Dataset(train_dict_list, transformer, seed = 1)
dev_data_set = Dict_Dataset(dev_dict_list, transformer, seed = 1)
test_data_set = Dict_Dataset(test_dict_list, transformer, seed = 1)
# + id="hEW6GVpkmNgy" colab_type="code" colab={}
data = train_data_set.get_batch(2)
# + id="tOa_F1wEmTdH" colab_type="code" colab={}
input_ids, attention_mask, token_type_ids, loss_mask, ner_tags, ner_labels, class_label, sequence_len = data
# + id="GYWAeJk9mzHp" colab_type="code" outputId="42910e6c-efc7-4286-c7b3-c828cd9094a6" colab={"base_uri": "https://localhost:8080/", "height": 252}
list(zip(tokenizer.convert_ids_to_tokens(input_ids[0]),attention_mask[0],token_type_ids[0],loss_mask[0],ner_tags[0],ner_labels[0]))
# + id="qAOKwCXenfLY" colab_type="code" outputId="d1d11ead-2423-4a22-e1f5-63bce9bfb2df" colab={"base_uri": "https://localhost:8080/", "height": 34}
class_label
# + id="ba-IJwCUnh33" colab_type="code" outputId="a9336254-d7ba-47d4-d4af-1883385ac3e6" colab={"base_uri": "https://localhost:8080/", "height": 34}
sequence_len
# + id="aoA85WkSnpOJ" colab_type="code" colab={}
from torch import nn
from transformers import BertModel
from torch.nn import CrossEntropyLoss
# + id="s9yMFLC0pMie" colab_type="code" colab={}
class Config(object):
def __init__(self):
self.pretrained_bert_name = None
self.class_label_num = None
self.ner_tag_num = None
def describe(self):
attribute_dict = self.__dict__
for key, value in attribute_dict.items():
print(key + " is "+str(value))
# + id="QaqWv1Zunjuo" colab_type="code" colab={}
class BertJointNerClassify(nn.Module):
def __init__(self, config):
super(BertJointNerClassify,self).__init__()
self.config = config
self.bert = BertModel.from_pretrained(config.pretrained_bert_name)
self.classify = nn.Linear(in_features=self.bert.config.hidden_size, out_features=config.class_label_num, bias=True)
self.emission = nn.Linear(in_features=self.bert.config.hidden_size, out_features=config.ner_tag_num, bias=True)
def forward(self,input_ids, attention_mask, token_type_ids, loss_mask, ner_labels=None, class_labels=None):
bert_out, _ = self.bert(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)#(batch_size,seq_len,bert.config.hidden_size)
classify_out = self.classify(bert_out[:,0,:]) #(batch_size,1,class_label_num)
classify_out = classify_out.squeeze(-2) #(batch_size,class_label_num)
emission_out = self.emission(bert_out) #(batch_size,seq_len,ner_label_num)
if ner_labels is not None:
loss_fct = CrossEntropyLoss(reduction='none')
ner_loss = loss_fct(emission_out.view(-1,self.config.ner_tag_num),ner_labels.view(-1))
ner_loss = ner_loss * loss_mask.view(-1)
ner_output = ner_loss.sum()/input_ids.shape[0]
else:
ner_output = emission_out.max(dim=-1).indices.tolist() #(batch_size,seq_len)
if class_labels is not None:
loss_fct = CrossEntropyLoss(reduction='sum')
class_loss = loss_fct(classify_out,class_labels)
class_output = class_loss/input_ids.shape[0]
else:
class_output = classify_out.max(dim=-1).indices.tolist() #(bach_size)
return ner_output, class_output
# + id="cGMzugRGx7gk" colab_type="code" colab={}
config1 = Config()
# + id="XW8D3BqX3ZoO" colab_type="code" colab={}
config1.pretrained_bert_name = "bert-base-chinese"
config1.class_label_num = len(CLASSIFY_TAG)
config1.ner_tag_num = len(BIO2ID)
# + id="oYKDez4JvszC" colab_type="code" outputId="37205cda-cf36-433c-b678-30e2b118423c" colab={"base_uri": "https://localhost:8080/", "height": 114, "referenced_widgets": ["7f24c69f995441878939db2743397e54", "5d3db55ab45546b2b67d570981877dfb", "2eac73ee59a442a2b770ed11a571dea2", "c14e5c5bbd2e450aaaa1a94cab9e30b8", "<KEY>", "a6413bbf1b6a4d969c642baab5960f2f", "ba78afd2482f4851a5e642d6a724906b", "956d6301e9944f19895d2141df645801", "<KEY>", "abca44ee21de44a9b76c0d042668112b", "d2ffebf628ee4e4caaeaf48a47d1a299", "<KEY>", "c271e5944a99428da182faf747ec12e0", "<KEY>", "6be017b4b9e142949d95f399cc351541", "<KEY>"]}
model = BertJointNerClassify(config1)
# + id="JyiVjNIL5Jg1" colab_type="code" colab={}
# data = train_data_set.get_batch(2)
# input_ids, attention_mask, token_type_ids, loss_mask, ner_tags, ner_labels, class_label, sequence_len = data
# + id="edqD0daF5Ohe" colab_type="code" colab={}
# input_ids = torch.tensor(input_ids)
# attention_mask = torch.tensor(attention_mask)
# token_type_ids = torch.tensor(token_type_ids)
# loss_mask = torch.tensor(loss_mask)
# ner_labels = torch.tensor(ner_labels)
# class_label = torch.tensor(class_label)
# + id="xt5r8Ng159SH" colab_type="code" colab={}
# input_ids.shape
# + id="jo3YS1BM5mxq" colab_type="code" colab={}
# a,b = model.forward(input_ids=input_ids,attention_mask=attention_mask,token_type_ids=token_type_ids,loss_mask=loss_mask, ner_labels=ner_labels, class_labels=class_label)
# + id="IEsj8jv76e1-" colab_type="code" colab={}
# a,b
# + id="KHQp9fozDaMj" colab_type="code" colab={}
# len(a[0]),len(a[1])
# + id="I2_id0llM2uc" colab_type="code" colab={}
import torch
# + id="X7TFSsBwH0-Q" colab_type="code" colab={}
# define the optimizer; bias/LayerNorm-style params are exempt from weight decay
from torch.optim import Adam
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'gamma', 'beta']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
'weight_decay_rate': 0.0}
]
optimizer1 = Adam(optimizer_grouped_parameters, lr=1e-5)
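# The grouping above exempts any parameter whose name contains one of the `no_decay` substrings from weight decay. A quick sketch of that name filter with hypothetical parameter names:

```python
no_decay = ['bias', 'gamma', 'beta']
names = ['encoder.layer.0.dense.weight',
         'encoder.layer.0.dense.bias',
         'embeddings.LayerNorm.gamma',
         'classify.weight']

# same substring test as in the optimizer_grouped_parameters above
decayed = [n for n in names if not any(nd in n for nd in no_decay)]
exempt = [n for n in names if any(nd in n for nd in no_decay)]
print(decayed)  # ['encoder.layer.0.dense.weight', 'classify.weight']
print(exempt)   # ['encoder.layer.0.dense.bias', 'embeddings.LayerNorm.gamma']
```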
# + id="xaiK03mxOpx2" colab_type="code" colab={}
from sklearn.metrics import f1_score
import os
import sys
os.getcwd()
sys.path.append("/content/drive/My Drive/functions")
from get_ner_fmeasure import get_ner_fmeasure
# + id="Q0rKjccNDd1D" colab_type="code" outputId="e9e1e422-44c0-46f8-85d2-20e40fd528f8" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# training loop with per-epoch evaluation on the dev set
epoch_num = 100
batch_size = 20
model.cuda()
from tqdm import trange
for epoch_i in trange(epoch_num):
total_loss_ner = 0
total_loss_class = 0
model.train()
train_data_set.shuffle()
train_data_set.reset()
num = 0
while train_data_set.has_next():
num+=1
input_ids, attention_mask, token_type_ids, loss_mask, ner_tags, ner_labels, class_label, sequence_len = train_data_set.get_batch(batch_size)
input_ids = torch.tensor(input_ids).cuda()
attention_mask = torch.tensor(attention_mask).cuda()
token_type_ids = torch.tensor(token_type_ids).cuda()
loss_mask = torch.tensor(loss_mask).cuda()
ner_labels = torch.tensor(ner_labels).cuda()
class_label = torch.tensor(class_label).cuda()
ner_output, class_output = model.forward(input_ids=input_ids,attention_mask=attention_mask,token_type_ids=token_type_ids,loss_mask=loss_mask,ner_labels=ner_labels,class_labels=class_label)
total_loss = ner_output+class_output
model.zero_grad()
total_loss.backward()
optimizer1.step()
total_loss_ner += ner_output.item()
total_loss_class += class_output.item()
total_loss_ner = total_loss_ner/num
total_loss_class = total_loss_class/num
    # evaluate on the dev set
ner_golds = []
ner_predicts = []
class_golds = []
class_predicts = []
with torch.no_grad():
model.eval()
dev_data_set.reset()
while dev_data_set.has_next():
input_ids, attention_mask, token_type_ids, loss_mask, ner_tags, ner_labels, class_label, sequence_len = dev_data_set.get_batch(batch_size=batch_size)
input_ids = torch.tensor(input_ids).cuda()
attention_mask = torch.tensor(attention_mask).cuda()
token_type_ids = torch.tensor(token_type_ids).cuda()
loss_mask = torch.tensor(loss_mask).cuda()
ner_labels = torch.tensor(ner_labels).cuda()
class_label = torch.tensor(class_label).cuda()
ner_output, class_output = model.forward(input_ids=input_ids,attention_mask=attention_mask,token_type_ids=token_type_ids,loss_mask=loss_mask)
            # accumulate gold and predicted labels for NER and classification
ner_gold = ner_tags
ner_predict = [[ID2BIO[i] for i in ner_output[ii][1:sequence_len[ii]+1]] for ii in range(len(ner_output))]
class_gold = class_label.tolist()
class_predict = class_output
ner_golds += ner_gold
ner_predicts += ner_predict
class_golds += class_gold
class_predicts += class_predict
    # compute ner_f1 (entity-level, from get_ner_fmeasure) and class_f1 (micro average)
ner_accuracy, ner_precision, ner_recall, ner_f_measure = get_ner_fmeasure(ner_golds, ner_predicts,label_type="BIO")
class_f1 = f1_score(class_golds,class_predicts,average="micro")
print('epoch %d , total_loss_ner is %.4f, total_loss_class is %.4f, ner_f_measure is %.4f, class_f1 is %.4f '\
%(epoch_i, total_loss_ner, total_loss_class, ner_f_measure, class_f1))
# if(ner_f_measure>ner_f1_base and class_f1>class_f1_base and (ner_f_measure>ner_f1_max or class_f1 >class_f1_max)):
# torch.save(model.state_dict(),model_save_path+"+ner_f1+"+str(ner_f_measure)+"+class_f1+"+str(class_f1)+"+ner_loss+"+str(total_loss_ner)+"+class_loss+"+str(total_loss_class))
# + id="fkjXq4mHRqE7" colab_type="code" colab={}
len(dev_data_set)
# + id="pOuB9l4NPNYR" colab_type="code" colab={}
# %debug
| joint1_slot_gate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
from keras.layers import Input, Dropout, Dense
from keras.layers import Conv2D, MaxPooling2D, Flatten
from keras.models import Model
from keras.layers.merge import concatenate
from keras.utils import to_categorical
from keras.utils import plot_model
from keras.datasets import mnist
import matplotlib.pyplot as plt
# -
# # Build a CNN Y-net with two independent inputs
# using Keras functional API
# * ## Read & convert the data
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train.shape
# Show a randomly selected pic from the dataset
plt.imshow(X_train[np.random.randint(60000)], cmap = 'gray')
plt.show()
# No. of labels
n_labels = len(np.unique(y_train))
n_labels
# Image size
img_size = X_train.shape[1]
# Convert labels to categorical
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# Reshape and normalize input images
X_train = X_train.reshape([-1, img_size, img_size, 1])
X_test = X_test.reshape([-1, img_size, img_size, 1])
# Normalize
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
# * ## Define net params
input_shape = (img_size, img_size, 1)
batch_size = 32
kernel_size = 3
dropout = .4
n_filters = 32
# * ## Build the model
# +
# Left branch
inputs_left = Input(shape = input_shape)
x1 = inputs_left
filters = n_filters
# Build 3 layers of Conv2D-Dropout-Pooling
for i in range(3):
x1 = Conv2D(filters = filters,
kernel_size = kernel_size,
padding = 'same',
activation = 'relu')(x1)
x1 = Dropout(dropout)(x1)
x1 = MaxPooling2D()(x1)
filters *= 2
# +
# Right branch
inputs_right = Input(shape = input_shape)
x2 = inputs_right
filters = n_filters
# Build 3 layers of Conv2D-Dropout-Pooling
for i in range(3):
x2 = Conv2D(filters = filters,
kernel_size = kernel_size,
padding = 'same',
activation = 'relu',
dilation_rate = 2)(x2)
x2 = Dropout(dropout)(x2)
x2 = MaxPooling2D()(x2)
filters *= 2
# +
# Merge inputs
y = concatenate([x1, x2])
# Flatten & dropout
y = Flatten()(y)
y = Dropout(dropout)(y)
output = Dense(n_labels, activation = 'softmax')(y)
# -
# Build the model
model = Model([inputs_left, inputs_right], output)
plot_model(model, to_file = 'model_01.png')
png = plt.imread('model_01.png')
plt.figure(figsize = (33, 16.5))
plt.imshow(png)
plt.axis('off')
plt.show()
# * ## Compile the model
model.compile(loss = 'categorical_crossentropy',
optimizer = 'adam',
metrics = ['accuracy'])
# * ## Fit the model
model.fit([X_train, X_train], y_train,
validation_data = ([X_test, X_test], y_test),
epochs = 3,
batch_size = batch_size)
| .ipynb_checkpoints/01_CNN-Y-net_(functional_API)-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/Chelsea-Magleo/OOP-58002/blob/main/Operations_and_Expressions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="5-ftUxSBC53S"
# Boolean Operators
# + colab={"base_uri": "https://localhost:8080/"} id="c-QcMxEMDz_6" outputId="d353922c-810d-4cdc-ddbe-ce65e9432d2d"
a = 7
b = 6
print(10>9)
print(10<9)
print(a>b)
# + [markdown] id="CouzyRFlEusL"
# bool() function
# + colab={"base_uri": "https://localhost:8080/"} id="hkLv7_CbEzrq" outputId="e10d319b-bea9-4319-c731-e4781aa92ddc"
print(bool("Chelsea"))
print(bool(1))
print(bool(0))
print(bool(None))
print(bool(False))
print(bool(True))
# + [markdown] id="yXE2R6RJFi9u"
# Functions can return a Boolean
# + colab={"base_uri": "https://localhost:8080/"} id="HZ_KKBVMFthM" outputId="20931830-5e38-4706-fd20-96dec835b287"
def my_Function():
return False
print(my_Function())
# + colab={"base_uri": "https://localhost:8080/"} id="qgRHfujnGkbp" outputId="b97a1d57-0ab4-46ef-8165-d93abe31af31"
if my_Function():
print("True")
else:
print("False")
# + [markdown] id="hDW6ErrFKKPo"
# Application 1
# + colab={"base_uri": "https://localhost:8080/"} id="uWGoxf_OKPvY" outputId="f5ee03f0-64e5-4024-9767-e7253921fd2f"
print(10>9)
g=6
f=7
print(g==f)
print(g!=f)
# + colab={"base_uri": "https://localhost:8080/"} id="i3qBAOMnLjt8" outputId="661da5bc-ad08-4e50-dad6-fd9fb059e4bb"
print(a==b)
print(a!=b)
# + [markdown] id="gW4gBoLyL-nI"
# Python Operators
# + colab={"base_uri": "https://localhost:8080/"} id="hL6TiuzNMAdq" outputId="8f513df5-d438-4483-c9b9-44edab970946"
print(10+5)
print(10-5)
print(int(10/5))
print(10*5)
print(10%5)
print(10//3) # floor division: 10/3 = 3.333..., truncated to 3
print(10**2) # exponentiation: 10 to the power 2
# + [markdown] id="bilbIwUPNjw-"
# Bitwise Operators
# + colab={"base_uri": "https://localhost:8080/"} id="ZEVf6G7rNlwt" outputId="35445dd6-64dc-4826-a8de-cf5f318f37d7"
c = 60 #binary 0011 1100
d = 13 #binary 0000 1101
print(c&d)
print(c|d)
print(c^d)
print(d<<2)
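# The bit-level breakdown of the results above (c = 60 is 0b00111100, d = 13 is 0b00001101):

```python
c = 60  # 0b00111100
d = 13  # 0b00001101
print(bin(c & d))   # 0b1100   -> 12, bits set in both
print(bin(c | d))   # 0b111101 -> 61, bits set in either
print(bin(c ^ d))   # 0b110001 -> 49, bits set in exactly one
print(bin(d << 2))  # 0b110100 -> 52, shifting left doubles per step
```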
# + [markdown] id="2ZahXz9uPp5-"
# Logical Operators
# + colab={"base_uri": "https://localhost:8080/"} id="pudyQoDGPrk8" outputId="54a4f586-d845-4eac-a4b1-85fbf622ae95"
h = True
l = False
h and l
h or l
not(h or l)
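# Note that a notebook cell only displays its last expression, so `h and l` and `h or l` above produce no visible output. Printing each result makes all three visible:

```python
h = True
l = False
print(h and l)       # False
print(h or l)        # True
print(not (h or l))  # False
```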
# + [markdown] id="S9RnSUv2QQ6f"
# Application 2
# + colab={"base_uri": "https://localhost:8080/"} id="lxo54OrwQTVf" outputId="6837f089-0063-4bbc-9c51-0d0d8d409676"
#Python Assignment Operators
x = 8
x += 3
print(x)
x -= 3
print(x)
x *= 3
print(x)
x /= 3
print(x)
x %= 3
print(x)
# + colab={"base_uri": "https://localhost:8080/"} id="4Terjh9dSSSF" outputId="e7e81ebd-123d-4039-f8df-64dd19dd0082"
#Python Assignment Operators
x = 100
x+=3 #Same as x = x +3, x = 100 + 3 = 103
print(x)
# + [markdown] id="qYuXEBiGSz8l"
# Identity Operator
# + colab={"base_uri": "https://localhost:8080/"} id="b09hDSH0S3L7" outputId="9db9cea8-34b4-4ed5-b8d3-0c1787af1c05"
h is l
h is not l
# + [markdown] id="r8pr20p3TLnJ"
# #Control Structure
# + [markdown] id="-50OY6LoTPu9"
# If statement
# + colab={"base_uri": "https://localhost:8080/"} id="kqf-wqziTO2Q" outputId="6ac927ab-87d4-432e-ea28-5af0c8914369"
if a>b:
print("a is greater than b")
# + [markdown] id="tBrKPLJRUUH8"
# Elif Statement
# + colab={"base_uri": "https://localhost:8080/"} id="XJmIp8vmUWS0" outputId="12e41d92-315f-4254-9eec-1d2e6c1ac7f9"
if a<b:
print("a is less than b")
elif a>b:
print("a is greater than b")
# + [markdown] id="lZclIiTHUutG"
# Else Statement
# + colab={"base_uri": "https://localhost:8080/"} id="lXcW9gFfUt0r" outputId="b7931e26-16dd-4787-dc92-f6dea24e372d"
a = 10
b = 10
if a<b:
print("a is less than b")
elif a>b:
print("a is greater than b")
else:
print("a is equal to b")
# + [markdown] id="zon4WjPGVBL8"
# Short Hand If Statement
# + colab={"base_uri": "https://localhost:8080/"} id="2Q1JPbUlVDer" outputId="9e35f2bf-31f4-4448-a7d7-46a67510655c"
if a==b: print("a is equal to b")
# + [markdown] id="Nj5LhXx3VvkW"
# Short Hand If... Else Statement
# + colab={"base_uri": "https://localhost:8080/"} id="1FHQmYlfVyzJ" outputId="d6f273a7-d78e-4bc0-820d-e7d15ec00765"
a = 10
b = 9
print("a is greater than b") if a>b else print("b is greater than a")
# + [markdown] id="O1FV9fVJWkAC"
# And - both conditions must be true
# + colab={"base_uri": "https://localhost:8080/"} id="b-O1Qo82WnTA" outputId="abe06cfb-9580-4ecc-e7b4-71f71313dfbc"
if a>b and b==b:
print("Both conditions are TRUE")
# + [markdown] id="pilxqq63XWgp"
# Or
# + colab={"base_uri": "https://localhost:8080/"} id="fWgiBD1IXVfK" outputId="69e26f47-e4bf-4d79-c8ad-bcb0044775ed"
if a>b or b==b:
print("the condition is TRUE")
# + [markdown] id="64Q2OSv9Xh0Y"
# Nested If
# + colab={"base_uri": "https://localhost:8080/"} id="izk9uMJjXjQ7" outputId="094818bf-6239-41d6-d99d-2adc4fcf614d"
x = int(input())
if x>10:
print("x is above 10")
if x>20:
print("and also above 20")
if x>30:
print("and also above 30")
if x>40:
print("and also above 40")
else:
print("but not above 40")
if x>50:
print("and also above 50")
else:
print("but not above 50")
# + [markdown] id="XbMoNdYbZ_zs"
# #LOOP Statement
# + [markdown] id="p_bZYBjqaosi"
# For Loop
# + colab={"base_uri": "https://localhost:8080/"} id="HAKITNcEaBzD" outputId="40147040-8e49-4685-cf35-94cf44bbe6c4"
week = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
for x in week:
print(x)
# + [markdown] id="o86eskGmbVy2"
# The break statement
# + colab={"base_uri": "https://localhost:8080/"} id="e8XoCWaLbFKZ" outputId="e8a35942-f4d2-4633-e448-8a0db6c9467d"
#to display Sunday to Wednesday
for x in week:
print(x)
if x=="Wed":
break
# + colab={"base_uri": "https://localhost:8080/"} id="w6xurOrscr9O" outputId="9a8236eb-25fe-4f8a-d0ee-55665686ae7a"
#to display only Wednesday using break statement
for x in week:
if x=="Wed":
break
print(x)
# + [markdown] id="FrZOPD2VczJD"
# While Statement
# + colab={"base_uri": "https://localhost:8080/"} id="KHJ18wjpc1Ex" outputId="bb20e710-a2a0-4ca8-b38c-fc77340c3d61"
i = 1
while i<6:
print(i)
i+=1 #same as i = i + 1
# + [markdown] id="eqoGD5rtdYy3"
# Application 3 - Create a python program that displays no. 3 using break statement
# + colab={"base_uri": "https://localhost:8080/"} id="E3zVTg-LdbEy" outputId="4e556b6c-b8b1-45da-f804-1100b186a6f2"
i = 1
while i<6:
print(i)
i+=1 #same as i = i + 1
if i==3:
break
print(i)
# + colab={"base_uri": "https://localhost:8080/"} id="g-UOqfDve9qB" outputId="4e0feea3-1a17-4bfb-94c4-f24340009e2c"
i = 1
while i<6:
if i==3:
break
i+=1
print(i)
| Operations_and_Expressions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .r
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Ruby 2.6.5
# language: ruby
# name: ruby
# ---
# # Statistical Rethinking: Exercise 2M1
# Recall the globe tossing model from the chapter. Compute and plot the grid approximate posterior distribution for each of the following sets of observations. In each case, assume a uniform prior for p.
#
# (1) W, W, W <br>
# (2) W, W, W, L <br>
# (3) L, W, W, L, W, W, W <br>
#
# <br>
# We want to estimate the probability of Water $p$ as parameter of the model. The parameter is at least 0 and at most 1.
require 'iruby/chartkick'
include IRuby::Chartkick
grid_size = 30
step_size = 1.0 / grid_size.to_f
grid = 0.step(by: step_size, to: 1).to_a
# +
# Let's first define and plot a uniform prior.
prior = grid.each_with_object({}) do |x, prior|
prior[x] = 1
end
IRuby.html(line_chart(prior))
# +
factorial = ->(n) do
return 1 if n < 1
n.to_i.downto(1).inject(:*)
end
likelihood = ->(w, l, p) do
(factorial[w+l].to_f / (factorial[w] * factorial[l])).to_f * (p**w) * ((1-p)**l)
end
# -
# Now, let's compute the grid approximation of the posterior for each of the cases. The difference is only the data input we give in terms of "count of Water" versus "count of Land" of our tossing result given in the exercise.
# +
# For case (1)
w = 3
l = 0
u_posterior = grid.each_with_object({}) do |x, u_posterior|
u_posterior[x] = prior[x] * likelihood[w, l, x]
end
posterior = u_posterior.each_with_object({}) do |(x,y), posterior|
posterior[x] = y.to_f / u_posterior.values.sum.to_f
end
line_chart(posterior)
# +
# For case (2)
w = 3
l = 1
u_posterior = grid.each_with_object({}) do |x, u_posterior|
u_posterior[x] = prior[x] * likelihood[w, l, x]
end
posterior = u_posterior.each_with_object({}) do |(x,y), posterior|
posterior[x] = y.to_f / u_posterior.values.sum.to_f
end
IRuby.html(line_chart(posterior))
# +
# For case (3)
w = 5
l = 2
u_posterior = grid.each_with_object({}) do |x, u_posterior|
u_posterior[x] = prior[x] * likelihood[w, l, x]
end
posterior = u_posterior.each_with_object({}) do |(x,y), posterior|
posterior[x] = y.to_f / u_posterior.values.sum.to_f
end
IRuby.html(line_chart(posterior))
# -
| .ipynb_checkpoints/2M1-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split, KFold
from statsmodels.graphics.tsaplots import plot_pacf
from darkgreybox.models import TiTe
from darkgreybox.fit import train_models
from docs.tutorials.util.plot import plot
# -
# ## Demo Notebook 03 - TiTe Model Wrapper Fit PASS
#
# This notebook demonstrates the usage of the `DarkGreyBox` models via fitting them with a wrapper function for a `TiTe` model.
#
# Our temporal resolution is 1 hour
# the duration of a record
rec_duration = 1 # hour
# Read some demo data.
# * Ph: Heating system power output
# * Ta: Ambient temperature
# * Ti: Internal temperature
#
# We are also setting a set of custom fields in our input data: `Ti0` and `Te0`. These fields define the initial conditions for `Ti` and `Te` for each record of the input data respectively. E.g. if a sub-model is trained on records 10-20 of the training data, the initial conditions for the above params will be set by the 10th record of the input data.
#
# Note: This demo data is intentionally far from ideal and particularly challenging to model with low errors. It is taken from a building where many factors influencing the heat dynamics are not accounted for in the modelling (solar gains, passive gains, hot water demand, gas use in the canteen etc.). The time period is relatively short to maintain a reasonable solution time for the demo; however, it still includes a challenging holiday period when the heating system is shut down.
# +
input_df = pd.read_csv('./data/demo_data.csv', index_col=0, parse_dates=True)
input_df['Ti0'] = input_df['Ti']
input_df['Te0'] = input_df['Ti'] - 2
input_X = input_df[['Ph', 'Ta', 'Ti0', 'Te0']]
input_y = input_df['Ti']
print(f'Input X shape: {input_X.shape}, input y shape: {input_y.shape}')
# -
# Use the `sklearn.model_selection.train_test_split` function to split the input data into train and test data. (Input data is 33 days long and 5 days of test data is specified)
# +
X_train, X_test, y_train, y_test = train_test_split(input_X, input_y, test_size=5 / 33, shuffle=False)
print(f'Train: X shape: {X_train.shape}, y shape: {y_train.shape}')
print(f'Test: X shape: {X_test.shape}, y shape: {y_test.shape}')
# -
# Set up the model training parameters.
#
# The `Ti0` param is the initial condition for the internal temperature at t=0 - this is set to the first record of `X_train['Ti0']` as described above and is fixed, hence `vary: False`.
#
# The `Te0` param is the initial condition for the building envelope temperature at t=0 - this is set to the first record of `X_train['Te0']` as described above and is NOT fixed, hence `vary: True`.
#
# `Ci`, `Ce`, `Rie` and `Rea` params are the initial guesses for these thermal parameters. As these will be fit by the model training, their default is `vary: True`. The values for these params are set arbitrarily to `1`, as it is assumed that no estimates have been calculated for them (e.g. based on building physical properties).
#
train_params = {
'Ti0': {'value': X_train.iloc[0]['Ti0'], 'vary': False},
'Te0': {'value': X_train.iloc[0]['Te0'], 'vary': True, 'min': 10, 'max': 25},
'Ci': {'value': 1},
'Ce': {'value': 1},
'Rie': {'value': 1},
'Rea': {'value': 1},
}
# We call the `train_models` wrapper function, passing it:
#
# * `models`: A `TiTe` model instantiated with the initial `train_params` as set above
# * `X_train`: The X of the train set
# * `y_train`: The y of the train set
# * `splits`: A list of training data indices specifying sub-sections of `X_train` and `y_train` for the models to be trained on. We use `sklearn.model_selection.KFold` to split the train set into day-long chunks.
# * `error_metric`: An error metric function that conforms to the `sklearn.metrics` interface. The model results will be evaluated based on this metric.
# * `method`: Name of the fitting method to use. Valid values are described in: `lmfit.minimize`. We are passing `nelder` here as it is generally quicker than `leastsq`.
# * `n_jobs`: The number of parallel jobs to be run as described by `joblib.Parallel`. We are passing `-1` here to use all available CPU cores.
# * `verbose`: The degree of verbosity as described by `joblib.Parallel`. We are passing `10` here for medium verbosity.
#
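# With `shuffle=False`, `KFold` splits the index range into contiguous folds, so each fold here covers one 24-record (hourly) day. A dependency-free sketch of that chunking, using a hypothetical 72-record (3-day) range and a hypothetical helper name:

```python
def day_folds(n_records, fold_len=24):
    """Contiguous index chunks, mimicking KFold(shuffle=False) folds."""
    return [list(range(s, min(s + fold_len, n_records)))
            for s in range(0, n_records, fold_len)]

for fold in day_folds(72):
    print(fold[0], fold[-1])  # 0 23 / 24 47 / 48 71
```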
prefit_df = train_models(models=[TiTe(train_params, rec_duration=1)],
X_train=X_train,
y_train=y_train,
splits=KFold(n_splits=int(len(X_train) / 24), shuffle=False).split(X_train),
error_metric=mean_squared_error,
method='nelder',
n_jobs=-1,
verbose=10)
# This returns a dataframe with a record for each split showing:
# * `start_date`: the start date of the train sub set for which the model was fit
# * `end_date`: the end date of the train sub set for which the model was fit
# * `model`: the fit model object
# * `model_result`: the model result that was generated by the model's `predict` method for the train sub set
# * `time`: the time it took for the fitting process to complete
# * `method`: the fitting method
# * `error`: the error of the model for the `y` variable (in this case the MSE)
prefit_df
# In this case, each record is a model fit for a single day. The params of these pre-fit models can be used to train models for the entire duration of the train set.
#
# We call the `train_models` wrapper function again, passing it (showing only the arguments that differ):
#
# * `models`: a list of the pre-fit models
# * `splits`: None - as we want to fit for the entire train set
#
train_df = train_models(models=prefit_df['model'],
X_train=X_train,
y_train=y_train,
splits=None,
error_metric=mean_squared_error,
method='nelder',
n_jobs=-1,
verbose=10)
# This returns a dataframe with a record for each pre-fit model containing the model objects and model results.
train_df
# We can select the model with the lowest error and display its params
# +
select_idx = train_df['error'].argmin()
model = train_df.loc[select_idx, 'model']
train_results = train_df.loc[select_idx, 'model_result']
model.result.params
# -
# Plot the modelled and measured data for the train set.
#
# This yields a much better fit than the `Ti` model in `Demo 01`. (Bear in mind the complexity of the input data e.g. the building overheating during the second week of January.) We can also see the building envelope temperature `Te` on the plots.
plot(y_train, train_results, 'TiTe (Train)')
# To get the test results, the model takes `X_test` as the input data. Also, as the initial conditions for `Ti0` and `Te0` params at t=0 are different at the start of the test data to what the model params currently holds (which is `Ti0` and `Te0` at t=0 of the *training data*), we need to update those variables. For `Ti0` this is the same as before - we set it at the first value of `Ti` in the test set. We do not have a measured value for `Te0` though, so we take the last predicted value of `Te` of the model on the training set.
test_results = model.predict(X=X_test, ic_params={'Ti0': y_test.iloc[0], 'Te0': train_results.var['Te'][-1]})
# Plot the modelled and measured data for the test set.
#
# As expected, the `TiTe` model generalises much better than a `Ti` model on data it has not seen before.
plot(y_test, test_results, 'TiTe (Test)')
| docs/tutorials/darkgrey_poc_demo_03.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: deepchain-app-pfam
# language: python
# name: deepchain-app-pfam
# ---
# !which deepchain
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
# +
from biodatasets import list_datasets, load_dataset
from deepchain.models import MLP
from deepchain.models.utils import (
confusion_matrix_plot,
dataloader_from_numpy,
model_evaluation_accuracy,
)
from sklearn.model_selection import train_test_split
import numpy as np
from sklearn import preprocessing
# -
# load pfam dataset
pfam_dataset = load_dataset("pfam-32.0")
X, y = pfam_dataset.to_npy_arrays(input_names=["split"], target_names=["family_id"])
# get embeddings and filter on available embeddings
embeddings = pfam_dataset.get_embeddings("sequence", "protbert", "mean")
available_embeddings_len = len(embeddings)
print(f"We take only the first {available_embeddings_len} sequences as we have only their embeddings available.")
y = y[0][:available_embeddings_len]
# ### Split data
import gc
gc.collect()
split = X[0]
emb_train = embeddings[split == 'train']
y_train = y[split == 'train']
emb_val = embeddings[split == 'dev']
y_val = y[split == 'dev']
len(emb_train)
len(emb_val)
unique_classes = np.intersect1d(y_train, y_val)
# process targets
#unique_classes = np.unique(y)
num_classes = len(unique_classes)
print(f"There are {num_classes} unique classes for family_id.")
subset_classes = set(unique_classes)
#Train
x_train_generator = (x for (x, y) in zip(emb_train, y_train) if y in subset_classes)
y_train_generator = (y for (x, y) in zip(emb_train, y_train) if y in subset_classes)
emb_train = list(x_train_generator)
y_train = list(y_train_generator)
len(emb_train), len(y_train)
#Eval
x_val_generator = (x for (x, y) in zip(emb_val, y_val) if y in subset_classes)
y_val_generator = (y for (x, y) in zip(emb_val, y_val) if y in subset_classes)
emb_val = list(x_val_generator)
y_val = list(y_val_generator)
len(y_val)
le = preprocessing.LabelEncoder()
labels = le.fit(unique_classes)
targets_train = le.transform(y_train)
targets_val = le.transform(y_val)
print(f"Targets: {targets_train.shape}, {targets_train}, {len(labels.classes_)} classes")
from torch.utils.data import Dataset, DataLoader
import torch
class MyDataset(Dataset):
def __init__(self, data, targets, transform=None):
self.data = data
self.targets = torch.LongTensor(targets)
self.transform = transform
def __getitem__(self, index):
x = self.data[index]
y = self.targets[index]
return x, y
def __len__(self):
return len(self.data)
train_dataset = MyDataset(data=emb_train, targets=targets_train)
val_dataset = MyDataset(data=emb_val, targets=targets_val)
train_dataloader = DataLoader(train_dataset, batch_size=2048)
test_dataloader = DataLoader(val_dataset, batch_size=2048)
next(iter(train_dataloader))[0].shape
next(iter(train_dataloader))[1].shape
# +
import torch
import torch.nn.functional as F
from torch import nn
from deepchain.models.torch_model import TorchModel
from pytorch_lightning.metrics.functional import accuracy
# -
class FamilyMLP(TorchModel):
"""Multi-layer perceptron model."""
def __init__(self, input_shape: int = 768, output_shape: int = 1, **kwargs):
super().__init__(**kwargs)
self.output = nn.Softmax if output_shape > 1 else nn.Sigmoid
self.loss = F.cross_entropy if output_shape > 1 else F.binary_cross_entropy
self._model = nn.Sequential(
nn.Linear(input_shape, 256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(256, 256),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(256, output_shape)
)
def forward(self, x):
"""Defines forward pass"""
if not isinstance(x, torch.Tensor):
x = torch.tensor(x).float()
return self._model(x)
def training_step(self, batch, batch_idx):
"""training_step defined the train loop. It is independent of forward"""
x, y = batch
y_hat = self._model(x)
y = y.long()
#y = torch.unsqueeze(y, 1)
loss = self.loss(y_hat, y)
self.log("train_loss", loss, prog_bar=True)
return loss
def validation_step(self, batch, batch_idx):
x, y = batch
y_hat = self._model(x)
y = y.long()
loss = self.loss(y_hat, y)
preds = torch.max(y_hat, dim=1)[1]
acc = accuracy(preds, y)
# Calling self.log will surface up scalars for you in TensorBoard
self.log('val_loss', loss, prog_bar=True)
self.log('val_acc', acc, prog_bar=True)
return loss
def save_model(self, path: str):
"""Save entire model with torch"""
torch.save(self._model, path)
mlp = FamilyMLP(input_shape=1024, output_shape=num_classes)
emb_train[0].shape  # sanity-check the embedding dimensionality against input_shape
mlp
mlp._model = torch.load("checkpoint/family_model.pt")
mlp.fit(train_dataloader, test_dataloader, epochs=10, auto_lr_find=True, auto_scale_batch_size=True, gpus=1)
mlp.save_model("family_model.pt")
# !pwd
torch.max(mlp(next(iter(train_dataloader))[0]), 1)[1].shape
x, y = next(iter(train_dataloader))
torch.max(mlp(x), 1)[1] == y
# # Evaluation
#
# +
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
from torch.utils.data import DataLoader, TensorDataset
from typing import Callable, List, Tuple, Union
# -
def model_evaluation_accuracy(
dataloader: DataLoader, model
) -> Tuple[np.ndarray, np.ndarray]:
"""
Make prediction for test data
Args:
dataloader: a torch dataloader containing dataset to be evaluated
model : a callable trained model with a predict method
"""
prediction, truth = [], []
for X, y in dataloader:
        y_hat = torch.max(model.predict(X), 1)[1]
        prediction += y_hat.detach().cpu().numpy().flatten().tolist()
        truth += y.detach().cpu().numpy().flatten().tolist()
prediction, truth = np.array(prediction), np.array(truth)
acc_score = accuracy_score(truth, prediction)
print(f" Test : accuracy score : {acc_score:0.2f}")
return prediction, truth
prediction, truth = model_evaluation_accuracy(train_dataloader, mlp)
prediction, truth = model_evaluation_accuracy(test_dataloader, mlp)
# # Inference
#
le
# +
import joblib
joblib.dump(le, 'label_encoder.joblib')
label_encoder = joblib.load('label_encoder.joblib')
label_encoder
# -
def compute_scores(sequences: List[str]):
"""Return a list of all proteins score"""
#x_embedding = self.transformer.compute_embeddings(sequences)["mean"]
x_embedding = embeddings[:len(sequences)]
y_hat = mlp(torch.tensor(x_embedding))
preds = torch.max(y_hat, dim=1)[1]
preds = preds.detach().cpu().numpy()
family_preds = label_encoder.inverse_transform(preds)
family_list = [{"family_id": family_pred} for family_pred in family_preds]
return family_list
sequences = [
"MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG",
"KALTARQQEVFDLIRDHISQTGMPPTRAEIAQRLGFRSPNAAEEHLKALARKGVIEIVSGASRGIRLLQEE",
]
compute_scores(sequences)
# Start tensorboard.
# %load_ext tensorboard
# %tensorboard --logdir lightning_logs/
| random.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "skip"}
# 
# + [markdown] slideshow={"slide_type": "skip"}
# # Being a Computational Social Scientist
# + [markdown] slideshow={"slide_type": "skip"}
# Welcome to the <a href="https://ukdataservice.ac.uk/" target=_blank>UK Data Service</a> training series on *New Forms of Data for Social Science Research*. This series guides you through some of the most common and valuable new sources of data available for social science research: data collected from websites and social media platforms, text data, and simulations (agent-based modelling), to name a few. To help you get to grips with these new forms of data, we provide webinars, interactive notebooks containing live programming code, reading lists and more.
#
# * To access training materials for the entire series: <a href="https://github.com/UKDataServiceOpen/new-forms-of-data" target=_blank>[Training Materials]</a>
#
# * To keep up to date with upcoming and past training events: <a href="https://ukdataservice.ac.uk/news-and-events/events" target=_blank>[Events]</a>
#
# * To get in contact with feedback, ideas or to seek assistance: <a href="https://ukdataservice.ac.uk/help.aspx" target=_blank>[Help]</a>
#
# <a href="https://www.research.manchester.ac.uk/portal/julia.kasmire.html" target=_blank>Dr <NAME></a> and <a href="https://www.research.manchester.ac.uk/portal/diarmuid.mcdonnell.html" target=_blank>Dr <NAME></a> <br />
# UK Data Service <br />
# University of Manchester <br />
# May 2020
# + [markdown] slideshow={"slide_type": "skip"} toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Guide-to-using-this-resource" data-toc-modified-id="Guide-to-using-this-resource-1"><span class="toc-item-num">1 </span>Guide to using this resource</a></span><ul class="toc-item"><li><span><a href="#Interaction" data-toc-modified-id="Interaction-1.1"><span class="toc-item-num">1.1 </span>Interaction</a></span></li><li><span><a href="#Learn-more" data-toc-modified-id="Learn-more-1.2"><span class="toc-item-num">1.2 </span>Learn more</a></span></li></ul></li><li><span><a href="#Acquiring,-understanding-and-manipulating-unstructured/unfamiliar-data" data-toc-modified-id="Acquiring,-understanding-and-manipulating-unstructured/unfamiliar-data-2"><span class="toc-item-num">2 </span>Acquiring, understanding and manipulating unstructured/unfamiliar data</a></span><ul class="toc-item"><li><span><a href="#Acquiring-data" data-toc-modified-id="Acquiring-data-2.1"><span class="toc-item-num">2.1 </span>Acquiring data</a></span></li><li><span><a href="#Understanding-and-manipulating-data" data-toc-modified-id="Understanding-and-manipulating-data-2.2"><span class="toc-item-num">2.2 </span>Understanding and manipulating data</a></span></li></ul></li></ul></div>
# + [markdown] slideshow={"slide_type": "skip"}
# -------------------------------------
#
# <div style="text-align: center"><i><b>This is notebook 5 of 6 in this lesson</i></b></div>
#
# -------------------------------------
# + [markdown] slideshow={"slide_type": "skip"}
# ## Guide to using this resource
#
# This learning resource was built using <a href="https://jupyter.org/" target=_blank>Jupyter Notebook</a>, an open-source software application that allows you to mix code, results and narrative in a single document. As <a href="https://jupyter4edu.github.io/jupyter-edu-book/" target=_blank>Barba et al. (2019)</a> espouse:
# > In a world where every subject matter can have a data-supported treatment, where computational devices are omnipresent and pervasive, the union of natural language and computation creates compelling communication and learning opportunities.
#
# If you are familiar with Jupyter notebooks then skip ahead to the main content (*Acquiring, understanding and manipulating unstructured/unfamiliar data*). Otherwise, the following is a quick guide to navigating and interacting with the notebook.
# + [markdown] slideshow={"slide_type": "skip"}
# ### Interaction
#
# **You only need to execute the code that is contained in sections which are marked by `In []`.**
#
# To execute a cell, click or double-click the cell and press the `Run` button on the top toolbar (you can also use the keyboard shortcut Shift + Enter).
#
# Try it for yourself:
# + slideshow={"slide_type": "skip"}
print("Enter your name and press enter:")
name = input()
print("\r")
print("Hello {}, enjoy learning more about Python and computational social science!".format(name))
# + [markdown] slideshow={"slide_type": "skip"}
# ### Learn more
#
# Jupyter notebooks provide rich, flexible features for conducting and documenting your data analysis workflow. To learn more about additional notebook features, we recommend working through some of the <a href="https://github.com/darribas/gds19/blob/master/content/labs/lab_00.ipynb" target=_blank>materials</a> provided by <NAME> at the University of Liverpool.
# + [markdown] slideshow={"slide_type": "slide"}
# ## Acquiring, understanding and manipulating unstructured/unfamiliar data
# + [markdown] slideshow={"slide_type": "skip"}
# ### Acquiring data
#
# There are LOADS of ways to get data, some that are more 'computational' than others. You are all surely familiar with surveys and interviews, as well as Official data sources and data requests. You may also be familiar with (at least the concepts of):
# * scraped data that comes from web-pages or APIs
# * “found” data that is captured alongside the originally intended data
# * meta-data, which is data about data
# * repurposed data, or data collected for some other purpose that is used in new and creative ways, or
# * other... because this list is definitely not exhaustive.
#
# To some extent, using these data sources requires that you keep your ear to the ground so that you know when relevant new sources come available. But once you know *about* them, you still need to know *what* they are and *how* to access and use them.
#
# So, we will set data acquisition aside for the moment and instead focus on data literacy, which is knowledge of the types of data that you might find.
# + [markdown] slideshow={"slide_type": "subslide"}
# ### Understanding and manipulating data
#
# Being data literate involves understanding two key properties of datasets:
# 1. How the contents of the dataset are stored (e.g., as numbers, text, etc.).
# 2. How the contents of the dataset are structured (e.g., as rows of observations, or networks of relations).
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Data types
#
# Data types provide a means of classifying the contents (values) of your dataset. For example, in [Understanding Society](https://www.understandingsociety.ac.uk/) there are questions where the answers are recorded as numbers e.g., [`prfitb`](https://www.understandingsociety.ac.uk/documentation/mainstage/dataset-documentation/variable/prfitb) which captures total personal income.
#
# Data types are important because they determine which values can be assigned to a variable and what operations can be performed on it (e.g., can you calculate the mean value of a piece of text?) (Tagliaferri, 2019). Let's cover some of the main data types in Python.
# + [markdown] slideshow={"slide_type": "subslide"}
# ##### Numbers
#
# These can be integers or floats (decimals), both of which behave a little differently.
# + slideshow={"slide_type": "fragment"}
# Integers
myint = 5
yourint = 10
# + [markdown] slideshow={"slide_type": "skip"}
# You double clicked, you hit run, but nothing happened, right? That is because naming and defining variables does not come with any commands that produce output. Basically, we ran a command that has no visible reaction. But maybe we want to check that it worked? To do that, we can call a print command.
#
# The cell below has a print command that includes some text (within the quotation marks) and the result of a numerical operation on the variables we defined. Go ahead, double click in the cell and hit Run/Shift+Enter.
# + slideshow={"slide_type": "subslide"}
print("Summing integers: ", myint + yourint)
# + [markdown] slideshow={"slide_type": "skip"}
# Great! The print command worked and we see that it correctly summed the numerical value of the two variables that we defined.
#
# Let's try it again with Floats. Click in the code block below and hit Run/Shift+Enter.
# + slideshow={"slide_type": "subslide"}
# Floats
myflo = 5.5
yourflo = 10.7
print("Summing floats: ", myflo + yourflo)
# + [markdown] slideshow={"slide_type": "skip"}
# It might not be surprising, but it worked again. This time, the resulting sum had a decimal point and a following digit, which is how we know it was a float rather than an integer.
#
# What happens when we sum an integer and a float? Find out with the next code block!
# + slideshow={"slide_type": "subslide"}
# Combining integers and floats
newnum = myint + myflo
print("Value of summing an integer and a float: ", newnum)
print("Data type when we sum an integer and a float: ", type(newnum))
# + [markdown] slideshow={"slide_type": "skip"}
# In this case, we create a new variable, called *newnum*, and assign it the value of the sum of one of our previous integers and one of our previous floats.
#
# Then, we have two print statements. One returns the value of *newnum* while the other returns the *type* of *newnum*.
#
# You can always ask for the type. Go ahead and double click in the cell above again. This time, instead of just running the code, copy and paste the final print statement onto a new line. Before you run the code again, change that new line by rewriting the text inside the quotation marks to anything you like and by changing *newnum* to *myflo* or *myint* or any of the other variables we defined.
#
# You can even define a whole new variable and then ask for the type of your new variable.
# + [markdown] slideshow={"slide_type": "subslide"}
# ##### Strings
#
# This data type stores text information. This should be a bit familiar, as we used text information in the previous code blocks within quotation marks.
#
# Strings are immutable in Python, i.e., you cannot permanently change their values after creating them. But you can check what type of variable a string is (just like with the numerical variables above).
# + slideshow={"slide_type": "subslide"}
# Strings
mystring = "Thsi is my feurst string."
print(mystring)
print("What type is mystring: ", type(mystring))
mystring = "This is my correct first string."
print(mystring)
yourstring = mystring.replace("my", "your") # replace the word "my" with "your"
print(yourstring)
splitstring = yourstring.split("your") # split into separate strings
print(splitstring)
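# + [markdown] slideshow={"slide_type": "skip"}
# A quick aside (an added sketch, not part of the original lesson): immutability means you cannot assign to a character position inside an existing string. Trying to do so raises a `TypeError`, while methods like `.replace()` instead return a brand new string.

```python
# Strings do not support item assignment
s = "immutable"
try:
    s[0] = "I"  # this is not allowed for strings
except TypeError as err:
    print("Cannot change a string in place:", err)

# .replace() returns a *new* string; the original is untouched
t = s.replace("im", "")
print(s)  # still "immutable"
print(t)  # "mutable"
```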
# + [markdown] slideshow={"slide_type": "skip"}
# Manipulating strings will be a common and crucial task during your computational social science work. We'll cover intermediate and advanced string manipulation throughout these training materials but for now we highly suggest you consult the resources listed below.
#
# *Further Resources*:
# * [Principles and Techniques of Data Science](https://www.textbook.ds100.org) - Chapter 8.
# * [Python 101](https://python101.pythonlibrary.org) - Chapter 2.
# + [markdown] slideshow={"slide_type": "subslide"}
# ##### Boolean
#
# This data type captures values that are true or false, 1 or 0, yes or no, etc. These will be like dummy or indicator variables, if you have used those in Stata, SPSS or other stats programmes.
#
# Boolean data allow us to evaluate expressions or calculations (e.g., is one variable equal to another? Is this word found in a paragraph?).
# + slideshow={"slide_type": "subslide"}
# Boolean
result = (10+5) == (14+1) # check if two sums are equal
print(result) # print the value of the "result" object
print(type(result)) # print the data type of the "result" object
# + [markdown] slideshow={"slide_type": "skip"}
# It is important to note that we did not define *result* as the value of 10+5 or the value of 14+1. We defined *result* as the value of whether 10+5 was exactly equal to 14+1.
#
# In this case, 10+5 is exactly equal to 14+1, so *result* was defined as True, which we can see in the output of the *print(result)* command.
#
# Booleans are very useful for controlling the flow of your code: in the below example, we assign somebody a grade and then use boolean logic to test whether the grade is above a threshold, which determines whether or not that grade receives a pass or fail notification.
#
# Double click in the code block below and hit Run/Shift+Enter.
#
# Then redefine grade as a different number by changing the number after the '=' and then hitting Run/Shift+Enter again.
# + slideshow={"slide_type": "subslide"}
grade = 71
if grade >= 40:
print("Congratulations, you have passed!")
else:
print("Uh oh, exam resits for you.")
# + [markdown] slideshow={"slide_type": "skip"}
# You can write a boolean statement more concisely, as demonstrated in the next code block. This time, you don't get the nicely worded pass/fail messages, but those will not always be important.
#
# Double click in the code block below and hit Run/Shift+Enter. Try it again, but change the number. This changes the threshold against which the command will return a true.
#
# Remember that you can redefine *grade* at any point, either by changing the definition in the code block above and re-running that code block or by copy/pasting/editing the grade = 71 line from above into this code block and re-running it here.
# + slideshow={"slide_type": "subslide"}
print(grade >= 40) # evaluate this expression
# + [markdown] slideshow={"slide_type": "skip"}
# *Further Resources*:
# * [How To Code in Python](https://assets.digitalocean.com/books/python/how-to-code-in-python.pdf) - Chapter 21.
# + [markdown] slideshow={"slide_type": "subslide"}
# ##### Lists
#
# The list data type stores a variable that is defined as an ordered, mutable (i.e., you can change its values) sequence of elements. Lists are defined by naming a variable and setting it equal to elements inside of square brackets.
#
# Double click in the code block below and hit Run/Shift+Enter.
#
# + slideshow={"slide_type": "subslide"}
# Creating a list
numbers = [1,2,3,4,5]
print("numbers is: ", numbers, type(numbers))
strings = ["Hello", "world"]
print("strings is: ", strings, type(strings))
mixed = [1,2,3,4,5,"Hello", "World"]
print("mixed is: ", mixed, type(mixed))
mixed_2 = [numbers, strings]
print("mixed_2 is: ", mixed_2, type(mixed_2)) # this is a list of lists
# + [markdown] slideshow={"slide_type": "skip"}
# Notice that most of these print commands print the value of the variable and also the type of variable.
#
# Also notice that you can define a list variable by listing all of the elements that you want in that list inside of square brackets (like the code that defines 'mixed'), or you can define a list by including *other* lists inside of the square brackets of a new list (like the code that defines 'mixed_2').
#
# As you can see, mixed has only one set of square brackets, but mixed_2 has square brackets nested inside of other square brackets to create a list of lists.
#
# Feel free to re-define these variables or add/define new variables too (but leave 'numbers' alone as we need it for the next several steps).
#
# When you are done testing out how to define work with lists, go on to run the next code block.
# + slideshow={"slide_type": "subslide"}
# List length
length_numbers = len(numbers)
print("The numbers list has {} items".format(length_numbers))
# the curly braces act as a placeholder for what we reference in .format()
# + [markdown] slideshow={"slide_type": "skip"}
# This one creates a new variable, called 'length_numbers' that is defined as the "len" of the "numbers" variable we defined above.
#
# The print statement underneath then tells us the value of the 'length_numbers' variable, but embeds that value inside of a sentence. The curly brackets act as a placeholder for where the value should be embedded, and '.format(length_numbers)' defines what is to be embedded.
#
# Try re-running the print command with other values embedded (by changing the variable that is to be embedded), or embedding the variable in different places (by repositioning the curly brackets).
# + slideshow={"slide_type": "skip"}
# Accessing items (elements) within a list
print("{} is the second item in the list".format(numbers[1]))
# note that the position of items in a list (known as its 'index position')
# begins at zero i.e., [0] represents the first item in a list
# We can also loop through the items in a list:
print("\r") # add a new line to the output to aid readability
for item in numbers:
print(item)
# note that the word 'item' in the for loop is not special and
# can instead be defined by the user - see below
print("\r")
for chicken in numbers:
print(chicken)
# of course, such a silly name does nothing to aid interpretability of the code
# + slideshow={"slide_type": "skip"}
# Adding or removing items in a list
numbers.append(6) # add the number six to the end of the list
print(numbers)
numbers.remove(3) # remove the number three from the list
print(numbers)
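# + [markdown] slideshow={"slide_type": "skip"}
# Index positions deserve a closer look. The sketch below (added for illustration) uses a fresh list so the results are predictable, and shows zero-based indexing, negative indexing and slicing.

```python
items = [10, 20, 30, 40, 50]

print(items[0])     # index positions start at zero, so this is the first item: 10
print(items[-1])    # negative indices count back from the end: 50
print(items[1:3])   # a slice from index 1 up to, but not including, index 3: [20, 30]
print(items[::-1])  # a reversed copy of the list: [50, 40, 30, 20, 10]
```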
# + [markdown] slideshow={"slide_type": "subslide"}
# ##### Dictionaries
#
# The dictionary data type maps keys (i.e., variables) to values; thus, data in a dictionary are stored in key-value pairs (known as items). Dictionaries are useful for storing data that are related e.g., variables and their values for an observation in a dataset.
# + slideshow={"slide_type": "subslide"}
# Creating a dictionary
dict = {"name": "Diarmuid", "age": 32, "occupation": "Researcher"}  # note: 'dict' shadows the built-in type here; a more descriptive name is better practice
print(dict)
# + slideshow={"slide_type": "subslide"}
# Accessing items in a dictionary
print(dict["name"]) # print the value of the "name" key
# + slideshow={"slide_type": "subslide"}
print(dict.keys()) # print the dictionary keys
# + slideshow={"slide_type": "subslide"}
print(dict.items()) # print the key-value pairs
# + slideshow={"slide_type": "subslide"}
# Combining with lists
obs = [] # create a blank list
ind_1 = dict # create dictionaries for three individuals
ind_2 = {"name": "Jeremy", "age": 50, "occupation": "Nurse"}
ind_3 = {"name": "Sandra", "age": 41, "occupation": "Chef"}
for ind in ind_1, ind_2, ind_3: # for each dictionary, add to the blank list
obs.append(ind)
print(obs) # print the list
print("\r")
print(type(obs)) # now we have a list of dictionaries
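# + [markdown] slideshow={"slide_type": "skip"}
# A list of dictionaries like `obs` is one small step away from the rectangular dataset we discuss next. The sketch below (an addition, assuming the pandas library is installed) rebuilds a small list of dictionaries and converts it into a data frame.

```python
import pandas as pd

# Each dictionary becomes one row; the keys become the column names
obs = [
    {"name": "Diarmuid", "age": 32, "occupation": "Researcher"},
    {"name": "Jeremy", "age": 50, "occupation": "Nurse"},
]
df = pd.DataFrame(obs)
print(df)
print(df["age"].mean())  # columns support the usual numeric operations
```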
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Data structures
#
# Indulge me: close your eyes and visualise a dataset. What do you picture? Heiberger and Riebling (2016, p. 4) are confident they can predict what you visualise:
#
# > Ask any social scientist to visualize data; chances are they will picture a rectangular table consisting of observations along the rows and variables as columns.
# + [markdown] slideshow={"slide_type": "subslide"}
# ##### Data frame
#
# A data frame is a rectangular data structure and is often stored in a Comma-Separated Value (CSV) file format. A CSV stores observations in rows, and separates (or "delimits") each value in an observation using a comma (','). Let's examine a CSV dataset in Python:
# + slideshow={"slide_type": "subslide"}
import pandas as pd # module for handling data frames
df = pd.read_csv("./data/oxfam-csv-2020-03-16.csv") # open the file and store its contents in the "df" object
df # view the data frame
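# + [markdown] slideshow={"slide_type": "skip"}
# If you do not have the Oxfam file to hand, the sketch below (an addition, with made-up data) shows the same idea without touching the disk: a CSV is just delimited text, and `io.StringIO` lets `read_csv` treat a string as if it were a file.

```python
import io

import pandas as pd

# Made-up CSV text: a header row plus two observations
csv_text = "name,age,occupation\nDiarmuid,32,Researcher\nJeremy,50,Nurse\n"
demo_df = pd.read_csv(io.StringIO(csv_text))
print(demo_df.shape)  # (rows, columns)
print(demo_df)
```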
# + [markdown] slideshow={"slide_type": "subslide"}
# ##### Dictionaries
#
# A dictionary is a hierarchical data structure based on key-value pairs. Dictionaries are often stored as Javascript Object Notation (JSON) files. Let's examine a JSON dataset in Python:
# + slideshow={"slide_type": "subslide"}
import json # import Python module for handling JSON files
with open('./data/oxfam-csv-2020-03-16.json', 'r') as f: # open file in 'read mode' and store in a Python JSON object called 'data'
data = json.load(f)
data # view the contents of the JSON file
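# + [markdown] slideshow={"slide_type": "skip"}
# Again, if the file is missing you can still see how JSON behaves. The sketch below (an addition, with made-up data) parses JSON text straight from a string using `json.loads`; the result is an ordinary Python dictionary, possibly with lists and further dictionaries nested inside.

```python
import json

json_text = '{"name": "Oxfam", "registered": true, "countries": ["UK", "IE"]}'
record = json.loads(json_text)

print(type(record))            # a plain Python dict
print(record["countries"][0])  # nested values are accessed with ordinary indexing: UK
```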
# + [markdown] slideshow={"slide_type": "skip"}
# <div style="text-align: right"><a href="./bcss-notebook-four-2020-02-12.ipynb" target=_blank><i>Previous section: Computational environments</i></a> | <a href="./bcss-notebook-six-2020-02-12.ipynb" target=_blank><i>Next section: Reproducibility</i></a></div>
| code/bcss-notebook-five-2020-02-12.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
loan_amount = float(input("Enter amount of loan: "))
interest_rate = float(input("Enter interest rate (%): "))
years = float(input("Enter number of years: "))
i = interest_rate/1200
m = 12 * years
c = (1 + i)**m
monthly_payment = ((c * i)/(c-1)) * loan_amount
print("Monthly Payment: ${0:.2f}".format(monthly_payment))
# +
face_value = float(input("Enter face value of bond: "))
coupon_rate = float(input("Enter coupon interest rate (as a decimal, e.g. 0.06): "))
market_price = float(input("Enter current market price: "))
maturity_years = float(input("Enter years until maturity: "))
intr = coupon_rate * face_value  # annual coupon payment (the original hard-coded a $1000 face value)
a = (face_value - market_price) / maturity_years
b = (face_value + market_price) / 2
YTM = (intr + a) / b
print("Approximate YTM: {0:.2%}".format(YTM))
# +
price_item = float(input("Enter price of item: "))
print("Enter weight of item in pounds and ounces separately.")
pounds = float(input("Enter pounds: "))
ounces = float(input("Enter ounces: "))
pounds_in_ounce = pounds * 16
total_ounce = pounds_in_ounce + ounces
price_per_ounces = price_item/total_ounce
print("Price per ounce: ${0:.2f}".format(price_per_ounces))
# +
miles = int(input("Enter number of miles: "))
yards = int(input("Enter number of yards: "))
feet = int(input("Enter number of feet: "))
inches = int(input("Enter number of inches: "))
total_inches = (63360*miles) + (36*yards) + (12*feet) + inches
total_meters = total_inches/39.37
kilometers = total_meters/1000
meters = total_meters % 1000
centimeters = (meters % 1)*100
print("Metric Lenght: ")
print("{:3d} kilometers".format(int(kilometers)))
print("{:4d} meters".format(int(meters)))
print("{:6.1f} centimeters".format(centimeters))
# -
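# The same conversion, written as a function (an added sketch) so it can be checked against a known length: one mile is about 1.609 kilometres.

```python
def to_metric(miles, yards, feet, inches):
    """Convert an imperial length to (kilometers, meters, centimeters)."""
    total_inches = 63360 * miles + 36 * yards + 12 * feet + inches
    total_meters = total_inches / 39.37  # 39.37 inches per meter
    kilometers = int(total_meters // 1000)
    meters = int(total_meters % 1000)
    centimeters = (total_meters % 1) * 100
    return kilometers, meters, centimeters

km, m, cm = to_metric(1, 0, 0, 0)  # one mile
print(km, m, round(cm, 1))
```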
# def isPrime(n):
#     if n < 2:
#         return False  # 0 and 1 are neither prime nor composite
#     for i in range(2, int(n**0.5) + 1):
#         if n % i == 0:
#             return False
#     return True
#
# total = 0  # avoid shadowing the built-in sum()
# for i in range(2, 1000000):
#     if isPrime(i):
#         total += i
#
# print(total)
# +
spy = float(input("Enter amount invested in SPY: "))
qqq = float(input("Enter amount invested in QQQ: "))
eem = float(input("Enter amount invested in EEM: "))
vxx = float(input("Enter amount invested in VXX: "))
stock_name_list = ['SPY','QQQ','EEM','VXX']
stock_value_list = [spy,qqq,eem,vxx]
total_stock = sum(stock_value_list)
stock_per_list = []
stock_by_name = {}
for stock in stock_value_list:
stock_per_list.append(stock/total_stock)
stock_by_name = dict(zip(stock_name_list,stock_per_list))
#Formatting
print()
print("{:7}{}".format('ETF','PERCENTAGE'))
print("-----------------")
for key, val in stock_by_name.items():
print("{:s}{:>12.2%}".format(key,val))
print()
print("TOTAL AMOUNT INVESTED: ${:,}".format(total_stock))
# +
change_amount = int(input("Enter amount of change: "))
quarter = 25
dime = 10
nickel = 5
cent = 1
list_of_deno_name = ['Quarters', 'Dimes', 'Nickels', 'Cents']
list_of_deno = [quarter, dime, nickel, cent]
list_of_change = []
rem_amount = change_amount
for deno in list_of_deno:
    # take as many of this denomination as fit, then carry the remainder forward
    list_of_change.append(rem_amount // deno)
    rem_amount %= deno
dic_deno = dict(zip(list_of_deno_name, list_of_change))
for key, val in dic_deno.items():
    print("{:s}: {:d}".format(key, val))
# -
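# The greedy loop above can also be packaged as a function (an added sketch), which makes the denomination-by-denomination logic easy to test.

```python
def make_change(amount):
    """Greedy change-making: spend the largest denominations first."""
    result = {}
    for name, value in [("Quarters", 25), ("Dimes", 10), ("Nickels", 5), ("Cents", 1)]:
        result[name], amount = divmod(amount, value)
    return result

print(make_change(93))  # 3 quarters, 1 dime, 1 nickel, 3 cents
```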
| Class Assignments/Assignment3_python_programming.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.7 64-bit
# name: python3
# ---
# # FastAI Notes
# > Learning Deep Learning in Public
#
# - toc: true
# - hide: true
# - badges: false
# - comments: false
# - categories: ['Learning Data Science']
# - image: https://i.imgur.com/sKDI04h.png
# <style>
# table {font-size:100%; white-space:inherit}
# table td {max-width:inherit}
# </style>
#
# This is my tracker for following this much-celebrated MooC.
#
# It marks the start of my data science education and follows the ['Learning in Public'](https://www.swyx.io/learn-in-public/) ethos.
# | Part | Chapter | Section |
# |----------------------------------|--------------------------------------------|-------------------------------------------------|
# | 1. Deep Learning in Practice | 1. [Your Deep Learning Journey](#Chapter-One:-Your-Deep-Learning-Journey) | Deep Learning is for Everyone [O'Reilly](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/ch01.html#idm46055308627688) |
# | | | Neural Networks: A Brief History |
# | | | Who We Are |
# | | | How to Learn Deep Learning |
# | | | Your Projects and Your Mindset |
# | | | The Software: PyTorch, fastai, and Jupyter (And Why It Doesn't Matter) |
# | | | Your First Model |
# | | | Getting a GPU Deep Learning Server |
# | | | Running Your First Notebook |
# | | | What Is Machine Learning? |
# | | | What Is a Neural Network? |
# | | | A Bit of Deep Learning Jargon |
# | | | Limitations Inherent to Machine Learning |
# | | | How Our Image Recognizer Works |
# | | | What Our Image Recognizer Learned |
# | | | Image Recognizers Can Tackle Non-Image Tasks |
# | | | Jargon Recap |
# | | | Deep Learning Is Not Just for Image Classification |
# | | | Validation Sets and Test Sets |
# | | | A Choose Your Own Adventure Moment |
# | | | Questionnaire [O'Reilly](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/ch01.html#idm46055317162952) • [Answers](http://www.google.com) |
# | | 2. From Model to Production | The Practice of Deep Learning |
# | | | Starting Your Project |
# | | | The State of Deep Learning |
# | | | The Drivetrain Approach |
# | | | Gathering Data |
# | | | From Data to DataLoaders |
# | | | Data Augmentation |
# | | | Training Your Model, and Using It to Clean Your Data |
# | | | Turning Your Model into an Online Application |
# | | | Using the Model for Inference |
# | | | Creating a Notebook App from the Model
# | | | Turning Your Notebook into a Real App |
# | | | Deploying Your App |
# | | | How to Avoid Disaster |
# | | | Unforeseen Consequences and Feedback Loops |
# | | | Get Writing! |
# | | | Questionnaire [O'Reilly](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/ch01.html#idm46055317162952) • [Answers](http://www.google.com) |
# | | 3. Data Ethics | Key Examples for Data Ethics |
# | | | Bugs and Recourse: Buggy Algorithm Used for Healthcare Benefits |
# | | | Feedback Loops: YouTube's Recommendation System |
# | | | Bias: Professor <NAME> "Arrested" |
# | | | Why Does This Matter? |
# | | | Integrating Machine Learning with Product Design |
# | | | Topics in Data Ethics |
# | | | Recourse and Accountability |
# | | | Feedback Loops |
# | | | Bias |
# | | | Disinformation |
# | | | Identifying and Addressing Ethical Issues |
# | | | Analyze a Project You Are Working On |
# | | | Processes to Implement |
# | | | The Power of Diversity |
# | | | Fairness, Accountability and Transparency |
# | | | Role of Policy |
# | | | The Effectiveness of Regulation |
# | | | Rights and Policy |
# | | | Cars: A Historical Precedent |
# | | | Conclusion |
# | | | Questionnaire [O'Reilly](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/ch01.html#idm46055317162952) • [Answers](http://www.google.com) |
# | 2. Understanding Fastai's Applications | 4. Under The Hood: Training A Digit Classifier | Pixels: The Foundations of Computer Vision |
# | | | First Try: Pixel Similarity |
# | | | NumPy Arrays and PyTorch Tensors |
# | | | Computing Metrics Using Broadcasting |
# | | | Stochastic Gradient Descent |
# | | | Calculating Gradients |
# | | | Stepping with a Learning Rate |
# | | | An End-to-End SGD Example |
# | | | Summarizing Gradient Descent |
# | | | The MNIST Loss Function |
# | | | Sigmoid |
# | | | SGD and Mini-Batches |
# | | | Putting It All Together |
# | | | Creating an Optimizer |
# | | | Adding a Nonlinearity |
# | | | Going Deeper |
# | | | Jargon Recap |
# | | | Questionnaire [O'Reilly](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/ch01.html#idm46055317162952) • [Answers](http://www.google.com) |
# | | 5. Image Classification | From Dogs and Cats to Pet Breeds |
# | | | Presizing |
# | | | Checking and Debugging a DataBlock |
# | | | Cross-Entropy Loss |
# | | | Viewing Activations and Labels |
# | | | Softmax |
# | | | Log Likelihood |
# | | | Taking the log |
# | | | Model Interpretation |
# | | | Improving Our Model |
# | | | The Learning Rate Finder |
# | | | Unfreezing and Transfer Learning |
# | | | Discriminative Learning Rates |
# | | | Selecting the Number of Epochs |
# | | | Deeper Architectures |
# | | | Conclusion |
# | | | Questionnaire [O'Reilly](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/ch01.html#idm46055317162952) • [Answers](http://www.google.com) |
# | | | Further Research |
# | | 6. Other Computer Vision Problems | Multi-Label Classification |
# ## End of Chapter Questions
#
#
# ### Chapter One: Your Deep Learning Journey [O'Reilly](https://learning.oreilly.com/library/view/deep-learning-for/9781492045519/ch01.html#idm46055317215864)
#
#
# #### 1.1 Do you need these for deep learning?
#
# - Lots of Math T/F
# - Lots of data T/F
# - Lots of expensive computers T/F
# - A PhD T/F
#
#
# +
#collapse
answers = '''
F Lots of Math T/F
F Lots of data T/F
F Lots of expensive computers T/F
F A PhD T/F
'''
# -
# #### 1.2 Name five areas where deep learning is now the best in the world.
# +
#collapse
answers = '''
- NLP. Natural language processing is used to:
- answer questions
- speech recognition
- summarize documents
- classify documents
- finding names/dates
- Computer Vision
- Satellite and drone imagery interpretation
- face recognition
- image captioning
- Reading traffic signs
- Locating pedestrians/vehicles for autonomous vehicles
- Medicine
- Finding anomalies in radiology images
- Counting features in pathology slides
- Measuring features in ultrasounds
- Diagnosing diabetic retinopathy
- Biology
- Folding proteins
- Classifying proteins
- Genomics tasks like tumour sequencing
- Classifying actionable genetic mutations
- Cell classifications
- Analyzing protein/protein interactions
- Image Generation
- Colourising images
- Increasing image resolution
- Removing noise from images
- Converting images to art in certain styles
- Recommendation systems
- Web search
- Product recommendations,
- Home page layout
- Playing Games
- Chess, Go, most Atari video games and real-time strategy
- Robotics
- Handling objects that are challenging to locate (eg transparent or shiny) or hard to pick up
'''
# -
# #### 1.3 What was the name of the first device that was based on the principle of the artificial neuron?
# +
#collapse
answer='''
The <NAME>
'''
# -
# #### 1.4 Based on the book of the same name, what are the requirements for parallel distributed processing? (PDP)
# +
#collapse
answer='''
- A set of processing units
- A state of activation
- An output function for each unit
- A pattern of connectivity among units
- A propagation rule for propagating patterns of activities through the network of connectivities
- An activation rule for combining the inputs impinging on a unit with the current state of that unit to produce an output for the unit
- A learning rule whereby patterns of connectivity are modified by experience
- An environment within which the system must operate
'''
# -
# #### 1.5 What were the two theoretical misunderstandings that held back the field of neural networks?
# +
#collapse
answer='''
- In the book "Perceptrons", <NAME> and <NAME> pointed out that a single layer of artificial neurons couldn't learn some simple mathematical functions, even though the latter part of the book addressed this limitation by highlighting the use of multiple layers.
- Even though researchers showed 30 years ago that more layers were needed to get practical 'deep' learning, this insight was only widely applied recently, and neural nets are now living up to their potential.
'''
# -
# #### 1.6 What is a GPU?
# +
#collapse
answer='''
A processor that can handle thousands of tasks at the same time. These tasks are very similar to what neural networks do, and GPUs can run neural networks hundreds of times faster than regular CPUs.
Most modern deep learning libraries primarily support NVIDIA GPUs.
'''
# -
# #### 1.7 Open a notebook and execute a cell containing: 1+1
1+1
# #### 1.8 Follow through each cell of the stripped down version of the notebook for this chapter. Before executing each cell, guess what will happen.
#
# - [course.fast.ai - Using Colab](https://course.fast.ai/start_colab)
# - [Stripped Copy of 01_intro.ipynb on my Google Drive](https://colab.research.google.com/drive/1MvngrPu-vvXtE-DNV7dje5VQTeVJ7cHG?authuser=1) [original](https://colab.research.google.com/github/fastai/fastbook/blob/master/clean/01_intro.ipynb)
# - Change Runtime Type to GPU
# - Then run cells
# #### 1.9 Complete the Jupyter Notebook online appendix
#
# - [copy of app_jupyter.ipynb on my drive](https://colab.research.google.com/drive/1FsYPafxZNUlOfm8VeRbh_W-KtuNPniwM?authuser=1)
# #### 1.10 Why is it hard to use a traditional computer program to recognize images in a photo?
# +
#collapse
answer='''
- For a traditional computer program, we'd have to translate the exact steps necessary to recognize images into code, but we don't really know what those steps are, since the process happens unconsciously in our brain.
- There would be the need to 'spell out every minute step of the process in the most exasperating detail'
'''
# -
# #### 1.11 What did Samuel mean by 'Weight Assignment' ?
# +
#collapse
answer='''
- Weights ("Parameters" in ML jargon) are variables and an assignment is a particular choice of values for those variables.
- They are the inputs for the program ('model') and altering them will define how the program operates.
- In <NAME>'s work, his checkers-playing program had different values of weights that would result in different checkers-playing strategies.
'''
# -
# #### 1.12 What term do we normally use in deep learning for what Samuel called "Weights"
#collapse
answer='''
Parameters
'''
# #### 1.13 Draw a picture that summarizes Samuel's view of a machine learning model.
#
# <details>
# <summary>Answer</summary>
# <p>
# <img src="https://i.imgur.com/Dw3CaOt.png">
# </p>
# </details>
# #### 1.14 Why is it hard to understand why a deep learning model makes a particular prediction?
#
# <details>
# <summary>Answer:</summary>
# <p>
#
# > It is easy to create a model that makes predictions on the exact data it's been trained on, but it's hard to make accurate predictions on data the model hasn't seen before.
# </p>
#
# </details>
#
#
| _notebooks/2021-08-18-fast-ai-notes.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Lab 6 Softmax Classifier
import tensorflow as tf
tf.set_random_seed(777) # for reproducibility
x_data = [[1, 2, 1, 1],
[2, 1, 3, 2],
[3, 1, 3, 4],
[4, 1, 5, 5],
[1, 7, 5, 5],
[1, 2, 5, 6],
[1, 6, 6, 6],
[1, 7, 7, 7]]
y_data = [[0, 0, 1],
[0, 0, 1],
[0, 0, 1],
[0, 1, 0],
[0, 1, 0],
[0, 1, 0],
[1, 0, 0],
[1, 0, 0]]
# -
X = tf.placeholder("float", [None, 4])
Y = tf.placeholder("float", [None, 3])
nb_classes = 3
W = tf.Variable(tf.random_normal([4, nb_classes]), name='weight')
b = tf.Variable(tf.random_normal([nb_classes]), name='bias')
# +
hypothesis = tf.nn.softmax(tf.matmul(X, W) + b)
# -
cost = tf.reduce_mean(-tf.reduce_sum(Y * tf.log(hypothesis), axis=1))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
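# As a sanity check on the model above, the same softmax hypothesis and cross-entropy cost can be sketched in plain NumPy (a minimal illustrative sketch, not the trained TensorFlow graph; the `W` and `b` values here are arbitrary zeros):

```python
import numpy as np

def softmax(z):
    # subtract the row-wise max for numerical stability before exponentiating
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(y_true, y_prob):
    # mean over samples of -sum(y * log(p)), matching the TF cost above
    return float(np.mean(-np.sum(y_true * np.log(y_prob), axis=1)))

X = np.array([[1., 2., 1., 1.], [1., 7., 7., 7.]])
Y = np.array([[0., 0., 1.], [1., 0., 0.]])
W = np.zeros((4, 3))  # zero weights give uniform class probabilities
b = np.zeros(3)

probs = softmax(X @ W + b)
print(probs)                    # each row sums to 1
print(cross_entropy(Y, probs))  # -log(1/3), about 1.0986, for uniform probabilities
```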
| ML_project/.ipynb_checkpoints/6-1 Softmax classifier-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Machine Learning
#
# In this file, instructions how to approach the challenge can be found.
# We are going to work on different types of Machine Learning problems:
#
# - **Regression Problem**: The goal is to predict delay of flights.
# - **(Stretch) Multiclass Classification**: If the plane was delayed, we will predict what type of delay it is (will be).
# - **(Stretch) Binary Classification**: The goal is to predict if the flight will be cancelled.
# +
# import pandas
import pandas as pd
import numpy as np
import copy
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split, GridSearchCV, cross_validate, cross_val_score
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.svm import SVR
from xgboost import XGBRegressor, XGBClassifier, plot_importance
from sklearn.metrics import r2_score, mean_squared_error
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
# -
# ### Read Preprocessed Data
# load data
df = pd.read_csv("data/flights_preprocessed_42k.csv", index_col=0)
df.head(3)
df.shape
# +
# reset dtypes
categorical_features = ['op_unique_carrier',
'tail_num',
'op_carrier_fl_num',
'origin_airport_id',
'dest_airport_id',
# 'share_code',
'origin_city',
'origin_state',
'dest_city',
'dest_state',
'fl_month',
'fl_weekday',
'season',
'inbound_fl']
df[categorical_features] = df[categorical_features].astype('str')
# df_train[categorical_features] = df_train[categorical_features].astype('str')
# df_test[categorical_features] =df_test[categorical_features].astype('str')
# + [markdown] tags=[]
# #### More Feature Engineering
# + [markdown] tags=[]
# ##### Transform some new features by using 'arr_delay'
# + [markdown] tags=[]
# ##### Target Encoding before splitting dataset
# -
def leave_one_out_pct(df, i, d='arr_delay'):
    """
    Calculate the leave-one-out delay-occurrence percentage for a categorical column, excluding each row's own contribution from its group counts
    PARAMS:
        df (pd.DataFrame):
        i (str): categorical independent variable
        d (str): dependent variable
    RETURNS (pd.Series):
        pandas series containing leave-one-out occurrence percentages
    """
    data = df.copy()[[i, d]]
    group_ct = data.groupby(i, as_index=False).count().rename(columns={d: 'ct'})
    # NOTE: relies on a global shift constant `diff` (defined earlier in the notebook)
    # used when log-transforming 'arr_delay'
    group_delay_ct = data[data[d] >= np.log(15 - diff)].groupby(i, as_index=False).count().rename(columns={d: 'delay_ct'})
    data = pd.merge(data, group_ct, how='left', on=i)
    data = pd.merge(data, group_delay_ct, how='left', on=i)
    data['leftout_pct'] = (data['delay_ct'] - 1) / (data['ct'] - 1)
    data = data.fillna(0)
    return data['leftout_pct']
def leave_one_out_mean(df, i, d='arr_delay'):
    """
    Calculate the leave-one-out group mean of the dependent variable for a categorical column, excluding each row's own contribution
    PARAMS:
        df (pd.DataFrame):
        i (str): categorical independent variable
        d (str): dependent variable
    RETURNS (pd.Series):
        pandas series containing leave-one-out mean values
    """
    data = df.copy()[[i, d]]
    group_sum_count = data.groupby(i)[d].agg(['sum', 'count']).reset_index()
    data = pd.merge(data, group_sum_count, how='left', on=i)
    data['leftout_sum'] = data['sum'] - data[d]
    data['leftout_mean'] = data['leftout_sum'] / (data['count'] - 1)
    data = data.fillna(0)
    return data['leftout_mean']
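# To make the target encoding concrete, here is a tiny self-contained worked example (an inline copy of the same leave-one-out logic as `leave_one_out_mean` above, on a toy frame rather than the flight data). Excluding each row's own target from its group statistic is the design choice that keeps the encoded feature from leaking the row's own label:

```python
import pandas as pd

# toy data: two carriers with known delays
df_toy = pd.DataFrame({"carrier": ["A", "A", "A", "B", "B"],
                       "arr_delay": [10.0, 20.0, 30.0, 5.0, 15.0]})

# same steps as leave_one_out_mean: group sum/count, then subtract the row itself
g = df_toy.groupby("carrier")["arr_delay"].agg(["sum", "count"]).reset_index()
data = df_toy.merge(g, how="left", on="carrier")
leftout = (data["sum"] - data["arr_delay"]) / (data["count"] - 1)

# first row of carrier A gets the mean of the other two A rows: (20 + 30) / 2 = 25
print(leftout.tolist())  # [25.0, 20.0, 15.0, 15.0, 5.0]
```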
df.shape
# +
# calculate the delayed-flight percentage ('arr_delay' > 15) for each carrier/flight_num/tail_num/origin_airport/dest_airport/origin_city/origin_state/dest_city/dest_state
# calculate average delay time of each ... (same as above)
# merge with df
tran_features = ['op_unique_carrier', 'tail_num', 'op_carrier_fl_num', 'origin_airport_id', 'dest_airport_id', 'origin_city', 'origin_state', 'dest_city', 'dest_state']
for col in tran_features:
    df[f'{col}_leftout_pct'] = leave_one_out_pct(df, col)
    df[f'{col}_leftout_mean'] = leave_one_out_mean(df, col)
# -
df.shape
df.iloc[:, -9:].isnull().sum()
# + [markdown] tags=[]
# ## Main Task: Regression Problem
# -
# #### XGBoost
avail_features = [
# 'fl_date',
# 'op_unique_carrier',
# 'tail_num',
# 'op_carrier_fl_num',
# 'origin_airport_id',
# 'dest_airport_id',
# 'crs_dep_time',
# 'crs_arr_time',
# 'crs_elapsed_time',
'distance',
'share_code',
# 'origin_city',
# 'origin_state',
# 'dest_city',
# 'dest_state',
# 'arr_date',
# 'dep_datetime',
# 'arr_datetime',
# 'fl_month',
# 'fl_weekday',
# 'season',
# 'day_num_of_flights',
'num_flights_6hrs',
'inbound_fl_num',
# 'inbound_fl',
# 'dep_min_of_day',
# 'arr_min_of_day',
# 'dep_hr',
# 'arr_hr',
'arr_min_sin',
'arr_min_cos',
# 'arr_hr_sin',
# 'arr_hr_cos',
'dep_min_sin',
'dep_min_cos',
# 'dep_hr_sin',
# 'dep_hr_cos',
'fl_mnth_sin',
'fl_mnth_cos',
'fl_wkday_sin',
'fl_wkday_cos',
'op_unique_carrier_leftout_pct',
'op_unique_carrier_leftout_mean',
'tail_num_leftout_pct',
'tail_num_leftout_mean',
'op_carrier_fl_num_leftout_pct',
'op_carrier_fl_num_leftout_mean',
'origin_airport_id_leftout_pct',
'origin_airport_id_leftout_mean',
'dest_airport_id_leftout_pct',
'dest_airport_id_leftout_mean',
# 'origin_city_leftout_pct',
'origin_city_leftout_mean',
# 'origin_state_leftout_pct',
'origin_state_leftout_mean',
# 'dest_city_leftout_pct',
'dest_city_leftout_mean',
'dest_state_leftout_pct',
# 'dest_state_leftout_mean'
]
# +
X_train, X_test, y_train, y_test = train_test_split(df[avail_features], df['arr_delay'], train_size=0.7, test_size=0.3, random_state=888)
xg_reg = XGBRegressor(objective ='reg:squarederror',
learning_rate = 0.05,
max_depth = 3,
reg_lambda = 15,
gamma = 10,
n_estimators = 150)
xg_reg.fit(X_train, y_train)
y_pred = xg_reg.predict(X_test)
# y_pred = np.exp(xg_reg.predict(X_test)) + diff
# -
r2_score(y_test, y_pred)
xg_reg.score(X_train, y_train)
# + jupyter={"source_hidden": true} tags=[]
# X_train = df_train[avail_features]
# # y_train = target_train_log
# y_train = target_train
# X_test = df_test[avail_features]
# y_test = target_test
# xg_reg = XGBRegressor(objective ='reg:squarederror',
# learning_rate = 0.1,
# max_depth = 6,
# # reg_lambda = 10,
# n_estimators = 300)
# xg_reg.fit(X_train, y_train)
# y_pred = xg_reg.predict(X_test)
# y_pred = np.exp(xg_reg.predict(X_test)) + diff
# +
# xg_reg.score(X_train, y_train)
# +
# xg_reg.score(X_test, y_test)
# +
## Best Score got so far
# r2_score(y_test, y_pred)
# + [markdown] jp-MarkdownHeadingCollapsed=true tags=[]
# ##### PCA
# +
# pca_features = [
# # 'op_unique_carrier',
# # 'tail_num'.
# # 'op_carrier_fl_num',
# # 'origin_airport_id',
# # 'dest_airport_id',
# 'crs_elapsed_time',
# 'distance',
# 'share_code',
# # 'origin_city',
# # 'origin_state',
# # 'dest_city',
# # 'dest_state',
# 'fl_month',
# 'fl_weekday',
# 'season',
# 'day_num_of_flights',
# 'num_flights_6hr',
# 'inbound_fl_num',
# 'inbound_fl',
# 'dep_min_of_day',
# 'arr_min_of_day',
# 'dep_hr',
# 'arr_hr',
# 'arr_hr_sin',
# 'arr_hr_cos',
# 'arr_min_sin',
# 'arr_min_cos',
# 'dep_min_sin',
# 'dep_min_cos',
# 'dep_hr_sin',
# 'dep_hr_cos',
# 'fl_mnth_sin',
# 'fl_mnth_cos',
# 'fl_wkday_sin',
# 'fl_wkday_cos',
# 'op_unique_carrier_delayct',
# 'op_unique_carrier_delaymedian',
# 'tail_num_delayct',
# 'tail_num_delaymedian',
# 'op_carrier_fl_num_delayct',
# 'op_carrier_fl_num_delaymedian',
# 'origin_airport_id_delayct',
# 'origin_airport_id_delaymedian',
# 'dest_airport_id_delayct',
# 'dest_airport_id_delaymedian',
# 'origin_city_delayct',
# 'origin_city_delaymedian',
# 'origin_state_delayct',
# 'origin_state_delaymedian',
# 'dest_city_delayct',
# 'dest_city_delaymedian',
# 'dest_state_delayct',
# 'dest_state_delaymedian'
# ]
# +
# df_X = pd.concat([df_train[pca_features], df_test[pca_features]])
# df_train.shape[0]
# +
# X_scaled = scaler.fit_transform(df_X)
# pca = PCA(n_components='mle')
# pca.fit(X_scaled)
# X_pca = pca.transform(X_scaled)
# +
# X_scaled_train = X_pca[:10609, :]
# X_scaled_test = X_pca[10609:, :]
# y_train = target_train_log
# y_test = target_test
# xg_reg = XGBRegressor(objective ='reg:squarederror',
# learning_rate = 0.1,
# max_depth = 6,
# # reg_lambda = 10,
# n_estimators = 300)
# xg_reg.fit(X_scaled_train, y_train)
# # y_pred = xg_reg.predict(X_test)
# y_pred = np.exp(xg_reg.predict(X_scaled_test)) + diff
# +
# r2_score(y_test, y_pred)
# +
# features = [
# # 'op_unique_carrier',
# # 'tail_num'.
# # 'op_carrier_fl_num',
# # 'origin_airport_id',
# # 'dest_airport_id',
# # 'crs_elapsed_time',
# 'distance',
# 'share_code',
# # 'origin_city',
# # 'origin_state',
# # 'dest_city',
# # 'dest_state',
# # 'fl_month',
# # 'fl_weekday',
# # 'season',
# # 'day_num_of_flights',
# # 'num_flights_6hr',
# # 'inbound_fl_num',
# # 'inbound_fl',
# # 'dep_min_of_day',
# # 'arr_min_of_day',
# # 'dep_hr',
# # 'arr_hr',
# # 'arr_hr_sin',
# # 'arr_hr_cos',
# # 'arr_min_sin',
# # 'arr_min_cos',
# 'dep_min_sin',
# # 'dep_min_cos',
# # 'dep_hr_sin',
# # 'dep_hr_cos',
# # 'fl_mnth_sin',
# # 'fl_mnth_cos',
# # 'fl_wkday_sin',
# # 'fl_wkday_cos',
# # 'op_unique_carrier_delayct',
# # 'op_unique_carrier_delaymedian',
# 'tail_num_delayct',
# # 'tail_num_delaymedian',
# 'op_carrier_fl_num_delayct',
# # 'op_carrier_fl_num_delaymedian',
# # 'origin_airport_id_delayct',
# # 'origin_airport_id_delaymedian',
# # 'dest_airport_id_delayct',
# # 'dest_airport_id_delaymedian',
# # 'origin_city_delayct',
# 'origin_city_delaymedian',
# # 'origin_state_delayct',
# 'origin_state_delaymedian',
# 'dest_city_delayct',
# # 'dest_city_delaymedian',
# # 'dest_state_delayct',
# 'dest_state_delaymedian'
# ]
# +
# scores = []
# for f in features:
# X_train = df_train[[f]]
# y_train = target_train_log
# X_test = df_test[[f]]
# y_test = target_test
# xg_reg = XGBRegressor(objective ='reg:squarederror',
# learning_rate = 0.1,
# max_depth = 6,
# # reg_lambda = 10,
# n_estimators = 300)
# xg_reg.fit(X_train, y_train)
# y_pred = np.exp(xg_reg.predict(X_test)) + diff
# # y_pred = xg_reg.predict(X_test)
# scores.append([f, xg_reg.score(X_train, y_train), r2_score(y_test, y_pred)])
# + jupyter={"outputs_hidden": true} tags=[]
# s = pd.DataFrame(scores)
# s[s[2]==s[2].max()]
# -
| src/notebooks/modeling_leaveoneout_target_encodeing_BEFORE_split.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc-hr-collapsed=false
# # Numerical Methods Applied to Heat Transfer
# + [markdown] colab_type="text" id="xWip5AS5OehT"
# ## Introduction
# + [markdown] colab_type="text" id="hoOtyRJEOehU"
# ### About this material
#
# * The goal of this lecture is to **introduce the main concepts used in programming and in Python**, more specifically, within the interactive context of the Jupyter Notebook platform;
# * It also demonstrates how to **solve heat transfer problems** through computational approaches;
# * To that end, the material includes a brief **review of fundamental concepts** and of the main scientific libraries available. For further details, **the available documentation can be consulted**, as well as the various recommended readings that appear throughout the text.
# + [markdown] colab_type="text" id="1Imc09KQOehV"
# ### Why Python?
# + [markdown] colab_type="text" id="CGAn5ZvWOehW"
# <img src="../Assets/notebook.png">
# + [markdown] colab_type="text" id="33N69O2MOehX"
# > Recommended reading:
# > * [10 motivos para você aprender Python](https://www.hostgator.com.br/blog/10-motivos-para-voce-aprender-python/)
# + [markdown] colab_type="text" id="AumqKUUhOehX" toc-hr-collapsed=false
# ### Why Jupyter Notebooks?
#
# 
#
# * Interactive, free, open-source web tool;
# * Data exploration: it lets you run the code, see what happens, modify it and repeat, in a *"conversation"* with the available data;
# * Useful for creating interactive tutorials;
# * It speaks our language: available for several programming languages, such as Python, Julia, R, Fortran and many others;
# * Code can be combined with `Markdown` cells to render equations and tables, and to insert figures and explanations about the code;
# * Easily exportable to several formats (PDF, HTML, $\LaTeX$, slides and others);
# * Available at [jupyter.org](https://jupyter.org), and also:
#     - Bundled with the [Anaconda](https://www.anaconda.com/) installation;
#     - As a collaborative cloud tool on [Google colab](https://colab.research.google.com) or [binder](https://mybinder.org/).
# -
# > Recommended reading:
# > - [Mastering Markdown](https://guides.github.com/features/mastering-markdown/)
# > - [LaTeX/Mathematics](https://en.wikibooks.org/wiki/LaTeX/Mathematics)
# > - [Why Jupyter is data scientists’ computational notebook of choice](https://www.nature.com/articles/d41586-018-07196-1)
# > - [Why I write with LaTeX (and why you should too)](https://medium.com/@marko_kovic/why-i-write-with-latex-and-why-you-should-too-ba6a764fadf9)
# > - [New Developer? You should’ve learned Git yesterday](https://codeburst.io/number-one-piece-of-advice-for-new-developers-ddd08abc8bfa)
# > - [12 passos para Navier-Stokes](https://www.fschuch.com/blog/2020/01/12/cfd-com-python-12-passos-para-navier-stokes/)
# > - [Jupyter Notebook como uma Poderosa Ferramenta Educacional](https://www.fschuch.com/blog/2021/01/22/jupyter-notebook-como-uma-poderosa-ferramenta-educacional/#formas-de-acessarcompartilhar)
#
# + [markdown] colab_type="text" id="KMHr-02qOeha"
# ## Programming in Python
# + [markdown] colab_type="text" id="ZRiAA9-4Oehb"
# The first interactive lines of code of this lecture (`Shift+Enter` runs the cell):
# + colab={} colab_type="code" id="yq7eFbnlOehb"
"""
This is a comment
"""
print("Hello world")
# This is also a comment
# + [markdown] colab_type="text" id="i4wlPN_kOehf"
# ### Variable assignment:
# + colab={} colab_type="code" id="s8dPgPbMOehg"
i = 5  # integer
f = 6.7  # floating point
g = 1e-2  # exponential notation
s = "abcdef"  # string
c = 5.0 + 6j  # complex
# + [markdown] colab_type="text" id="Kx2K9v7bOehk"
# ### Mathematical operations
#
# Operator | Description | Example | Result
# ---------|-------------|---------|----------
# `+` | Addition | `1 + 1` | `2`
# `-` | Subtraction | `2 - 1` | `1`
# `*` | Multiplication | `6 * 7` | `42`
# `/` | Division | `8 / 4` | `2.0`
# `//` | Integer division | `10 // 3` | `3`
# `%` | Remainder (modulo) | `10 % 3` | `1`
# `**` | Power | `2 ** 3` | `8`
#
# Try any of the operations in the cell below:
# + colab={} colab_type="code" id="aACxizWxOehl"
10 % 7.5
# +
a = 10.5
b = 5
c = a * b
c
# + [markdown] colab_type="text" id="YgTV2JraOehv"
# ### Loops
#
# Computers are great at performing repetitive tasks. For that we have loops at our disposal, which usually traverse a range defined by its initial value, its final value, and the size of the increment (`inicio`, `final`, `incremento` in the example below). See the example:
# + colab={} colab_type="code" id="rMMWLrXHOehw"
inicio = 0  # optional, defaults to zero if not given
final = 5
incremento = 1  # optional, defaults to one if not given
for i in range(inicio, final, incremento):
    print(i)
    """
    Here we would perform our application's operations
    """
# -
# **Note**: We don't need to mark the end of the loop in Python, because it is recognized through indentation.
#
# **Another note:** whenever you need help understanding any object in Jupyter, type its name followed by a question mark `?`, or use the `help()` function, like this:
# +
# range?
# -
# Observe that, in Python:
# * **Counting starts at zero**;
# * **The start argument is inclusive** (it will be in the traversed range);
# * While **the stop argument is exclusive** (it will not be in the traversed range).
#
# Understand these concepts better with some examples:
for i in range(10):
    print(i, end=" ")
for i in range(0, 10, 1):
    print(i, end=" ")
for i in range(15, 30, 5):
    print(i, end=" ")
for i in range(0, 10, 3):
    print(i, end=" ")
for i in range(0, -10, -1):
    print(i, end=" ")
for i in range(0):
    print(i, end=" ")
# **Note**: Notice that `range` is just one of the different possibilities we have for building a loop in Python.
# + [markdown] colab_type="text" id="EjPM3SmAOehy"
# ### Logical tests
#
# Operator | Description | Example | Result
# ---------|-------------|---------|----------
# `==` | Equality | `1 == 2` | `False`
# `!=` | Inequality | `1 != 2` | `True`
# `>` | Greater than | `1 > 3` | `False`
# `<` | Less than | `1 < 3` | `True`
# `>=` | Greater than or equal | `1 >= 3` | `False`
# `<=` | Less than or equal | `1 <= 3` | `True`
# `and` | Logical "and" | `True and False` | `False`
# `or` | Logical "or" | `True or False` | `True`
# `not` | Logical "not" | `not False` | `True`
# + colab={} colab_type="code" id="aZKkh1juOehy"
if 5 <= 3.0:
    """
    Here we would perform our application's operations
    """
    print("Inside the if block")
elif 4 != 0:
    """
    Here we would perform our application's operations
    """
    print("Inside the elif block")
else:
    """
    Here we would perform our application's operations
    """
    print("Inside the else block")
# -
# ### Functions
#
# Functions are a way to encapsulate pieces of code that you may want to run several times. Arguments are optional input parameters that can change or control the behavior inside the function. Functions may or may not return a value.
#
# In the cell below we define a didactic example: a function that tests whether a given input number is odd, returning `True`, or not, returning `False`. See the example:
def testa_se_impar(numero):
    return bool(numero % 2)
# Now we call and test our function:
testa_se_impar(4)
testa_se_impar(5)
# We can enhance our function with some extra features. For example, assigning default values to the arguments, in case they are not provided when calling the function. We also have *type hints*, annotations of the input and output argument types, to help whoever uses our function. Finally, the initial comment is known as a *docstring*, the ideal place for quick documentation, which will also be available to our users:
def testa_se_impar_v2(numero: int = 0) -> bool:
    """
    Given an integer as the input argument,
    returns True if it is odd and False if it is even
    """
    return bool(numero % 2)
# When called without arguments, the number takes the value defined as its default, zero in this case, so the function runs without errors:
testa_se_impar_v2()
# Argument names can be included in the call, offering extra readability to your code:
testa_se_impar_v2(numero=67)
# Note that the *docstring* is shown on screen when we ask for help:
# +
# testa_se_impar_v2?
# + [markdown] colab_type="text" id="eH4kyitYOeh0"
# Complementary material:
#
# * [More Control Flow Tools](https://docs.python.org/3/tutorial/controlflow.html)
# * [The Python Tutorial - Modules](https://docs.python.org/3/tutorial/modules.html)
# * [Data Structures](https://docs.python.org/3/tutorial/datastructures.html)
# * [Classes](https://docs.python.org/2/tutorial/classes.html)
# + [markdown] colab_type="text" id="q4dW6qDfOeiB"
# ### Main Packages
#
# One of Python's great strengths is the huge range of packages available, and in continuous development, in the most diverse areas of knowledge.
#
# Below we will see some that are particularly useful for heat transfer applications.
# + [markdown] colab_type="text" id="9apCaxQSOeiC"
# #### SciPy
#
# 
#
# Scientific computing tools for Python. SciPy refers to several related, but distinct, entities:
#
# * The SciPy ecosystem, a collection of open-source software for scientific computing in Python;
# * The community of people who use and develop this library;
# * Several conferences dedicated to scientific computing in Python - SciPy, EuroSciPy and SciPy.in;
# * The family also includes the following packages, described in more detail below:
#     * Numpy;
#     * Matplotlib;
#     * Sympy;
#     * IPython;
#     * Pandas.
# + [markdown] colab_type="text" id="qlis2j8OOeiE"
# * In addition, the SciPy library itself, a component of the SciPy stack, provides many numerical routines:
#     * Special functions;
#     * Numerical integration;
#     * Numerical differentiation;
#     * Optimization;
#     * Interpolation;
#     * Fourier transforms;
#     * Signal processing;
#     * Linear algebra and sparse linear algebra;
#     * Sparse eigenvalue problems with ARPACK;
#     * Spatial data structures and algorithms;
#     * Statistics;
#     * Multidimensional image processing;
#     * File I/O;
# + colab={} colab_type="code" id="VxTfWu9MOeiE"
import scipy as sp
import scipy.optimize
import scipy.integrate
# + [markdown] colab_type="text" id="5mZLE2EtOeiH"
# Complementary material:
# * [SciPy](https://www.scipy.org/)
# * [Getting Started](https://www.scipy.org/getting-started.html)
# * [Scipy Lecture Notes](http://scipy-lectures.org/index.html)
# + [markdown] colab_type="text" id="DbN37YL6OeiH"
# #### Numpy
#
# <img src="https://numpy.org/images/logos/numpy.svg" alt="Numpy logo" style="width: 100px;"/>
#
# Numpy is a fundamental package for **scientific computing in Python**. Among other things, it stands out for:
# * N-dimensional array objects
# * Sophisticated functions
# * Tools for integrating C/C++ and Fortran code
# * Convenient linear algebra, Fourier transform and random number capabilities
#
# Besides its obvious scientific uses, NumPy can also be used as an efficient multidimensional container of generic data. Arbitrary data types can be defined. This allows NumPy to integrate easily and quickly with a wide variety of databases.
# + colab={} colab_type="code" id="QhODk5alOeiI"
import numpy as np  # importing the numpy library and giving it the alias np
# +
matriz = np.arange(15).reshape(3, 5)
# display on screen
matriz
# -
matriz.shape
matriz.ndim
matriz.dtype.name
matriz.size
type(matriz)
# ##### Construction and Data Selection
# * Create entire matrices with initial values of zero or one:
np.zeros(shape=(3, 4), dtype=np.float64)
np.ones(shape=(2, 3, 4), dtype=np.int16)
# * Initial definition of an interval, similar to Python's `range` function:
np.arange(10, 30, 5)
np.arange(0, 2, 0.3)
# * Or even a linear space:
vetor = np.linspace(start=0.0, stop=2.0, num=9)
vetor
# Data selection works in a similar way to lists, with an integer representing the location, counting from zero. See the examples:
vetor[0]
vetor[2]
vetor[0:4:2]
vetor[-1]
# When we have more dimensions, as in the matrix we defined earlier, the same idea applies, and we separate each dimension with commas:
matriz
matriz[0, 0], matriz[0, 1], matriz[1, 0]
matriz[0, :]
matriz[:, -1]
# **Careful**: the equals sign does not create new copies of the tensors, and this can confuse beginners:
outro_vetor = vetor
outro_vetor
outro_vetor is vetor
# +
outro_vetor *= 0
print(vetor)
# -
# We now have two ways to access the same vector in memory, since both `vetor` and `outro_vetor` point to the same memory location.
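# When independent data is needed, the usual fix is an explicit copy: `numpy.ndarray.copy` allocates new memory instead of creating another name for the same array (a minimal sketch; the variable names are illustrative):

```python
import numpy as np

vetor = np.linspace(start=0.0, stop=2.0, num=9)

alias = vetor                # same object: changes through either name affect both
independente = vetor.copy()  # new memory: changes do not propagate

alias *= 0
print(vetor)         # all zeros: the alias modified the original array
print(independente)  # still [0., 0.25, ..., 2.]: the copy kept its values
```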
# ##### Tensor Operations
#
# Arithmetic and logical operations are available for Numpy objects, and they are broadcast to all elements of the tensor. See the examples:
a = np.array([20, 30, 40, 50])
b = np.array([0, 1, 2, 3])
a - b
b ** 2
10 * np.sin(a)
a < 35
# > Leitura recomendada:
# > * [NumPy Documentation](https://numpy.org/doc/)
# > * [NumPy quickstart](https://numpy.org/doc/1.20/user/quickstart.html)
# > * [NumPy: the absolute basics for beginners](https://numpy.org/doc/1.20/user/absolute_beginners.html)
# > * [Tutorial: Linear algebra on n-dimensional arrays](https://numpy.org/doc/1.20/user/tutorial-svd.html)
# >
# > Other Python packages for data manipulation:
# > * [Pandas](https://pandas.pydata.org/) is a Python package specialized in efficient processing of tabular data, able to handle CSV, Excel, and SQL files, NumPy arrays, and more;
# > * [Xarray](http://xarray.pydata.org/) introduces labels in the form of dimensions, coordinates, and attributes on top of raw NumPy-style arrays, enabling a more intuitive, consistent, and fail-safe development experience;
# > * [Dask](https://dask.org/) provides advanced parallelism for analytics, enabling performance at scale for the tools you love.
# + [markdown] slideshow={"slide_type": "subslide"}
# #### **Pandas**
#
# 
#
# pandas is a Python package providing **fast, flexible, and expressive data structures** designed to make working with “relational” or “labeled” data easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real-world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open-source data analysis/manipulation tool available in any language.
#
# pandas is well suited for many different kinds of data:
# * Tabular data with heterogeneously-typed columns, as in an **SQL table, `.csv` file, or Excel spreadsheet**;
# * Ordered and unordered (not necessarily fixed-frequency) **time series** data;
# * Arbitrary matrix data (homogeneously or heterogeneously typed) with row and column labels;
# * Any other form of observational/statistical data sets. The data actually need not be labeled at all to be placed into a pandas data structure.
# + slideshow={"slide_type": "subslide"} tags=[]
import pandas as pd
# + slideshow={"slide_type": "fragment"} tags=[]
df2 = pd.DataFrame({'A': 1.,
'B': pd.Timestamp('20130102'),
'C': pd.Series(1, index=list(range(4)), dtype='float32'),
'D': np.array([3] * 4, dtype='int32'),
'E': pd.Categorical(["test", "train", "test", "train"]),
'F': 'foo'})
# + slideshow={"slide_type": "fragment"}
df2
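# Columns of a `DataFrame` can be selected by label, and rows with `.loc` (labels) or boolean filtering; a brief sketch reusing `df2` from above:

```python
import numpy as np
import pandas as pd

df2 = pd.DataFrame({'A': 1.,
                    'B': pd.Timestamp('20130102'),
                    'C': pd.Series(1, index=list(range(4)), dtype='float32'),
                    'D': np.array([3] * 4, dtype='int32'),
                    'E': pd.Categorical(["test", "train", "test", "train"]),
                    'F': 'foo'})

print(df2['E'])                   # a single column is a Series
print(df2.loc[0, 'A'])            # label-based selection: row 0, column 'A' -> 1.0
print(df2[df2['E'] == 'train'])   # boolean filtering keeps only the 'train' rows
```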
# + [markdown] slideshow={"slide_type": "skip"}
# > Complementary material:
# > * [Pandas](https://pandas.pydata.org/)
# > * [10 minutes to pandas](https://pandas.pydata.org/pandas-docs/version/0.25.0/getting_started/10min.html)
# -
# #### Tqdm
#
# Produces a progress bar. A purely cosmetic feature, but very useful nonetheless:
from tqdm.notebook import tqdm
for i in tqdm(range(100)):
...
# + [markdown] slideshow={"slide_type": "subslide"}
# #### Sympy
#
# 
#
# SymPy is a Python library for **symbolic mathematics**. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.
# + slideshow={"slide_type": "subslide"} tags=[]
import sympy as sm
sm.init_printing(use_latex="mathjax")  # To render equations on screen
# + slideshow={"slide_type": "fragment"} tags=[]
x, t = sm.symbols("x t")  # Creating symbols
# + [markdown] slideshow={"slide_type": "subslide"}
# \begin{equation}
# \text{compute } \int (e^x \sin(x) + e^x \cos(x)) dx
# \end{equation}
# + slideshow={"slide_type": "fragment"}
sm.integrate(sm.exp(x) * sm.sin(x) + sm.exp(x) * sm.cos(x), x)
# + [markdown] slideshow={"slide_type": "subslide"}
# \begin{equation}
# \text{compute the derivative of } \sin(x)e^x
# \end{equation}
# + slideshow={"slide_type": "fragment"}
sm.diff(sm.sin(x) * sm.exp(x), x)
# + [markdown] slideshow={"slide_type": "subslide"}
# \begin{equation}
# \text{compute } \int_{-\infty}^{\infty} \sin(x^2)\, dx
# \end{equation}
# + slideshow={"slide_type": "fragment"}
sm.integrate(sm.sin(x ** 2), (x, -sm.oo, sm.oo))
# + [markdown] slideshow={"slide_type": "subslide"}
# \begin{equation}
# \text{compute } \lim_{x \to 0} \dfrac{\sin(x)}{x}
# \end{equation}
# + slideshow={"slide_type": "fragment"}
sm.limit(sm.sin(x) / x, x, 0)
# + [markdown] slideshow={"slide_type": "subslide"}
# \begin{equation}
# \text{solve } x^2 - 2 = 0
# \end{equation}
# + slideshow={"slide_type": "fragment"}
sm.solve(x ** 2 - 2, x)
# + [markdown] slideshow={"slide_type": "subslide"}
# \begin{equation}
# \text{solve the differential equation } y'' - y = e^t
# \end{equation}
# + slideshow={"slide_type": "fragment"}
y = sm.Function("y")
eq1 = sm.dsolve(sm.Eq(y(t).diff(t, t) - y(t), sm.exp(t)), y(t))
eq1
# + slideshow={"slide_type": "fragment"}
# Bonus
print(sm.latex(eq1))
# + [markdown] slideshow={"slide_type": "skip"}
# Complementary material:
# * [Sympy](https://www.sympy.org/en/index.html)
# * [Documentation](https://docs.sympy.org/latest/index.html)
# + [markdown] colab_type="text" id="FUrzoPIHOeiw"
# #### Matplotlib
#
# 
#
# Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, web application servers, and four graphical user interface toolkits.
#
# **Matplotlib tries to make easy things easy and hard things possible**. You can generate plots, histograms, power spectra, bar charts, error charts, scatter plots, etc., with just a few lines of code.
#
# As always, we start by importing the library:
# + colab={} colab_type="code" id="-DWBTFwmOeiw"
import matplotlib.pyplot as plt
# -
# Now we make our first figure:
# + tags=[]
x = np.linspace(start=0, stop=10, num=100)
plt.plot(x, np.sin(x));
# -
# Axis labels are indispensable if you want to show your figure to others, and a title can help too. This example also shows how to set the limits of each axis. See our new figure:
# +
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x))
plt.xlim([0, 2 * np.pi])
plt.ylim([-2, 2])
plt.xlabel(r"x axis $\sigma^2$")
plt.ylabel("y axis")
plt.title("My figure");
# -
# > Recommended reading:
# > * [Matplotlib](https://matplotlib.org/)
# > * [Style sheets reference](https://matplotlib.org/stable/gallery/style_sheets/style_sheets_reference.html)
# > * [Gallery](https://matplotlib.org/stable/gallery/index.html)
# > * [Gráficos com qualidade de publicação em Python com Matplotlib](https://www.fschuch.com/blog/2020/10/14/graficos-com-qualidade-de-publicacao-em-python-com-matplotlib/)
# #### Plotly
# Plotly's Python graphing library makes **interactive, publication-quality graphs**. The plotting possibilities are countless: line plots, scatter plots, area charts, bar charts, error bars, box plots, histograms, heatmaps, subplots, multiple axes, polar charts, and bubble charts.
import plotly.express as px
import plotly.graph_objects as go
px.defaults.template = "ggplot2"
px.defaults.height = 600
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species")
fig.show()
fig = go.Figure(data =
go.Contour(
z=[[10, 10.625, 12.5, 15.625, 20],
[5.625, 6.25, 8.125, 11.25, 15.625],
[2.5, 3.125, 5., 8.125, 12.5],
[0.625, 1.25, 3.125, 6.25, 10.625],
[0, 0.625, 2.5, 5.625, 10]],
x=[-9, -6, -5 , -3, -1], # horizontal axis
y=[0, 1, 4, 5, 7] # vertical axis
))
fig.show()
# > Recommended reading:
# > * [Plotly](https://plotly.com/python/)
# > * [Plotly Express in Python](https://plotly.com/python/plotly-express/)
# > * [Dash App Gallery](https://dash-gallery.plotly.host/Portal/)
# #### Handcalcs
#
# Handcalcs is a library for automatically rendering Python calculation code in $\LaTeX$. Because handcalcs shows the numeric substitution, the calculations become significantly easier to visualize and check by hand. The tool is useful in many contexts, but it stands out in teaching, where it can be used both by instructors producing course material and by students preparing assignments and reports.
import handcalcs.render
# We will see the library in practice shortly, characterized by code cells starting with the magic command `%%render`:
# %%render
a = 2 # I am an example
b = 3
c = 2 * a + b / 3 # Look at this result!
# > Complementary reading:
# > * [See it on GitHub](https://www.github.com/connorferster/handcalcs).
# #### Pint
#
# Pint is a Python package to define, operate on, and manipulate physical quantities: the product of a numerical value and a unit of measurement. It allows arithmetic operations between them and conversions to and from different units.
import pint
ureg = pint.UnitRegistry()
# See the example combining different units of measurement:
distancia = 3 * ureg("meter") + 4 * ureg("centimeter")
distancia
# We can now easily convert this distance to other units:
distancia.to("inch")
# Let's move on to a more applied example, with the material properties of silver:
# %%render
k = ( 429 * ureg("W/(m*K)") ) # Thermal conductivity
rho = ( 10.5e3 * ureg("kg/m**3") ) # Density
c_p = ( 235 * ureg("J/(kg*K)") ) # Specific heat
# Now we compute the thermal diffusivity (note the combination with Handcalcs):
# %%render
alpha = k / (rho * c_p) # Thermal diffusivity
# Unit simplification is not always automatic, but we can trigger it manually:
alpha.to_base_units()
# Note that the use of units is also compatible with NumPy arrays:
np.linspace(0, 10, num=11) * ureg("hour")
# > Recommended reading:
# > * [Pint: makes units easy](https://pint.readthedocs.io/en/stable/)
# + [markdown] colab={} colab_type="code" id="hOC-0GylxlgY"
# -----
#
# > **<NAME>**,<br>
# > Researcher in Computational Fluid Dynamics at PUCRS, with interests in: turbulent flows, heat and mass transfer, and fluid-structure interaction; data processing and visualization in Python; Jupyter Notebook as a tool for collaboration, research, and teaching.<br>
# > [<EMAIL>](mailto:<EMAIL> "Email") [@fschuch](https://twitter.com/fschuch "Twitter") [Aprenda.py](https://fschuch.github.io/aprenda.py "Blog") [@aprenda.py](https://www.instagram.com/aprenda.py/ "Instagram")<br>
#
# -----
| Aulas/01-Introducao.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
#
# libraries
#
import pandas as pd
#
#
# read the data
#
gas_turbine_data = pd.read_csv('../Datasets/gt_2015.csv')
print(gas_turbine_data.shape)
print(list(gas_turbine_data.columns))
#
#
# subset the emissions columns ('CO' and 'NOX')
# and the first 100 rows as a sample
#
print(gas_turbine_data.iloc[:100, :].loc[:, ['CO', 'NOX']].describe())
print(gas_turbine_data.loc[:, ['CO', 'NOX']].describe())
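# `.iloc` selects by integer position while `.loc` selects by label; a toy frame (illustrative values, not from `gt_2015.csv`) makes the distinction in the chained selection above explicit:

```python
import pandas as pd

toy = pd.DataFrame({'CO': [1.2, 0.8, 2.5],
                    'NOX': [60.1, 55.3, 70.9],
                    'AT': [10.0, 12.5, 9.1]})

sample = toy.iloc[:2, :]                   # positions: first 2 rows, all columns
emissions = sample.loc[:, ['CO', 'NOX']]   # labels: just the emissions columns
print(emissions.shape)  # (2, 2)
```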
#
| Chapter05/Exercise05_02/Exercise5_02.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Site-Level Roadway Safety Assessment Process
#
# ## Part 1: [Crash data analysis](1_Crash-data-analysis.ipynb)
# ### (1) Crash statistics description
# - Time: Year, Month, Day of Week, Hour
# - Crash Type, Type x Severity, Type x Time
# - Crash Severity, Severity x Time
#
# ### (2) Contributing factors analysis
# - Roadway engineering factors
# - Curvature
# - Lane width
# - Pavement conditions
# - Roadside hazards
# - Operation factors
# - Traffic signals/Traffic signs
# - Pavement markings
# - Environment conditions
# - Weather -> pavement conditions
# - Human factors
#
# ### (3) Crash reports and collision diagram
# - Read crash reports and perform a root cause analysis
# - Plot the collision diagram
# - Find temporal/spatial patterns
#
# ## Part 2: [Traffic operation analysis](2_Traffic-operation-analysis.ipynb)
# - Traffic volume and turning movements
# - Speed analysis
# - Level-of-Service
# - Pedestrian gap analysis
#
# ## Part 3: [Pedestrian and bicyclists activities](3_Pedestrian-and-bicyclists-activities.ipynb)
# - Volume of pedestrians and bicyclists
#
# ## Part 4: Existing conditions analysis
# - Location, land use
# - Physical conditions
# - Pavement conditions
# - Pavement markings
# - Visual obstructions
# - Traffic controls
#
# ## Part 5: Assessment of supporting documentation
# - Maintenance records
# - Public feedback
#
# ## Part 6: Field observation
# - Observation of evasive actions, conflicts
# - Observation of road user behaviors and characteristics
#
# ## Part 7: Problem identification and countermeasures selection
# - Identify safety concerns
# - Short term solutions
# - Engineering: Maintenance, vegetation, traffic control devices, pavement markings, ...
# - Enforcement
# - Education
# - Long term solutions
# - Engineering: Modify geometry
# - Education
# Sources: FHWA
| 0_Overview.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: julia-(4-threads)-1.6
# kernelspec:
# display_name: Julia (4 threads) 1.6
# language: julia
# name: julia-(4-threads)-1.6
# ---
# + [markdown] nteract={"transient": {"deleting": false}}
# ## Using sigma(...)
#
# This notebook demonstrates how to use the `sigma(...)` function to compare spectra on a channel-by-channel basis. When a spectrum is very similar to the other spectra in a collection, the result will be a `sigma(spec,specs,roc)` that is very close to a Normal distribution with a width of 1 and a center of zero. This demonstrates that count statistics are the only source of variation.
#
# The spectra in this example are extremely similar. This suggests that K412 is extremely homogeneous and the measurements were taken carefully.
#
# In brief, the `sigma(specs[1],specs,1:2000)` function calculates the mean spectrum (excluding `specs[1]`) over the range of channels 1:2000. It then calculates the difference of `specs[1]` from the mean and divides this by the uncertainty from count statistics alone.
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
using NeXLSpectrum
using StatsBase
using Distributions
using Gadfly
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
path = joinpath(@__DIR__,"..","test","K412 Spectra")
specs = loadspectrum.(joinpath(path,"III-E K412[$i][4].msa") for i in 0:4)
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
set_default_plot_size(8inch, 3inch)
plot(specs..., autoklms=true, xmax=8.0e3)
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
set_default_plot_size(6inch,12inch)
ss = [ sigma(specs[i],specs,1:2000) for i in eachindex(specs) ]
vstack( ( plot(x=eachindex(ss[i]),y=ss[i],Geom.point) for i in eachindex(ss) )...)
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}}
ns = fit.(Normal,ss)
# + jupyter={"outputs_hidden": false, "source_hidden": false} nteract={"transient": {"deleting": false}} tags=[]
set_default_plot_size(6inch, 12inch)
vstack( ( plot(
layer(x=-5.0:0.1:5.0, y=pdf.(ns[i],(-5.0:0.1:5.0)), Geom.line, Theme(default_color="red")),
layer(x=ss[i], Geom.histogram(bincount=100,density=true))
) for i in eachindex(ss) )...)
| notebook/sigma.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Estimating Equilibrium Climate Sensitivity (ECS) in CMIP6 models
#
# *Definition:* Equilibrium Climate Sensitivity is defined as the change in global-mean near-surface air temperature (GMST) due to an instantaneous doubling of CO$_{2}$ concentrations, once the coupled ocean-atmosphere-sea ice system has achieved a statistical equilibrium (i.e. at the top-of-atmosphere, incoming solar shortwave radiation is balanced by reflected solar shortwave and outgoing thermal longwave radiation).
#
# This notebook uses the ["Gregory method"](https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2003GL018747) to approximate the ECS of CMIP6 models based on the first 150 years after an abrupt quadrupling of CO$_{2}$ concentrations. The "Gregory Method" extrapolates the quasi-linear relationship between GMST and radiative imbalance at the top-of-atmosphere to estimate how much warming would occur if the system were in radiative balance at the top-of-atmosphere, which is by definition the equilibrium response. In particular, we extrapolate the linear relationship that occurs between 100 and 150 years after the abrupt quadrupling. Since the radiative forcing due to CO$_{2}$ is a logarithmic function of the CO$_{2}$ concentration, the GMST change from a first doubling is roughly the same as for a second doubling (to first order, we can assume feedbacks as constant), which means that the GMST change due to a quadrupling of CO$_{2}$ is roughly $\Delta T_{4 \times \text{CO}_{2}} = 2 \times \text{ECS}$. See also [Mauritsen et al. 2019](https://agupubs.onlinelibrary.wiley.com/doi/epdf/10.1029/2018MS001400) for a detailed application of the Gregory Method (with modifications) for the case of one specific CMIP6 model, the MPI-M Earth System Model.
#
# For another take on applying the Gregory method to estimate ECS, see [Angeline Pendergrass' code](https://github.com/apendergrass/cmip6-ecs).
#
# revised by <NAME> (2019-11-08)
# extrapolate linear relationship between GMST and radiative imbalance
# between gregory_limits[0] and gregory_limits[1]
gregory_limits = [0,150]
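# As a toy sketch of the Gregory extrapolation (synthetic numbers, not model output): sample points along a known linear GMST-imbalance relationship, fit a line, and recover the warming at which the imbalance vanishes.

```python
import numpy as np

# Synthetic Gregory relationship N = F + lam * T, with forcing F = 8 W/m^2 and
# net feedback lam = -1 W/m^2/K, so radiative balance (N = 0) occurs at T = 8 K.
rng = np.random.default_rng(0)
T = np.linspace(2.0, 7.0, 50)                           # GMST anomaly (K)
N = 8.0 - 1.0 * T + 0.1 * rng.standard_normal(T.size)   # TOA imbalance (W/m^2)

a, b = np.polyfit(T, N, 1)  # slope ~ lam, intercept ~ F
T_eq = -b / a               # extrapolated warming at N = 0 (~ 2 x ECS for 4xCO2)
print(T_eq)                 # close to 8.0
```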
# ### Python packages
# +
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import xarray as xr
import cartopy
from data import check
from xradd import *
import os
import ols
from tqdm.autonotebook import tqdm # Fancy progress bars for our loops!
import intake
import util
# %matplotlib inline
plt.rcParams['figure.figsize'] = 12, 6
# %config InlineBackend.figure_format = 'retina'
# +
from dask.distributed import Client
from dask_kubernetes import KubeCluster
cluster = KubeCluster()
cluster.adapt(minimum=4, maximum=40)
cluster
# -
client = Client(cluster)
client
# ## Data catalogs
#
# This notebook uses [`intake-esm`](https://intake-esm.readthedocs.io/en/latest/) to ingest and organize climate model output from the fresh-off-the-supercomputers Phase 6 of the Coupled Model Intercomparison Project (CMIP6).
#
# The file `https://storage.googleapis.com/cmip6/cmip6-zarr-consolidated-stores.csv` in google cloud storage contains thousands of lines of metadata, each describing an individual climate model experiment's simulated data.
#
# For example, the first line in the csv file contains the precipitation rate (`variable_id = 'pr'`), as a function of latitude, longitude, and time, in an individual climate model experiment with the BCC-ESM1 model (`source_id = 'BCC-ESM1'`) developed by the Beijing Climate Center (`institution_id = 'BCC'`). The model is *forced* by the forcing experiment SSP370 (`experiment_id = 'ssp370'`), which stands for the Shared Socio-Economic Pathway 3 that results in a change in radiative forcing of $\Delta F = 7.0$ W/m$^{2}$ from pre-industrial to 2100. This simulation was run as part of the `AerChemMIP` activity, which is a spin-off of the CMIP activity that focuses specifically on how aerosol chemistry affects climate.
df = pd.read_csv('https://storage.googleapis.com/cmip6/cmip6-zarr-consolidated-stores.csv')
df.head()
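# The metadata table can be filtered with ordinary pandas operations; a toy sketch with a hand-built frame mimicking a few catalog columns (the rows below are illustrative, not real catalog entries):

```python
import pandas as pd

catalog = pd.DataFrame({
    'variable_id':   ['pr', 'tas', 'tas'],
    'source_id':     ['BCC-ESM1', 'BCC-ESM1', 'CESM2'],
    'experiment_id': ['ssp370', 'piControl', 'abrupt-4xCO2'],
})

# All rows with near-surface air temperature from the abrupt-4xCO2 experiment
subset = catalog[(catalog['variable_id'] == 'tas') &
                 (catalog['experiment_id'] == 'abrupt-4xCO2')]
print(subset['source_id'].tolist())  # ['CESM2']
```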
# The file `pangeo-cmip6.json` describes the structure of the CMIP6 metadata and is formatted so as to be read in by the `intake.open_esm_datastore` method, which categorizes all of the data pointers into a tiered collection. For example, this collection contains the simulated data from 28691 individual experiments, representing 48 different models from 23 different scientific institutions. There are 190 different climate variables (e.g. sea surface temperature, sea ice concentration, atmospheric winds, dissolved organic carbon in the ocean, etc.) available for 29 different forcing experiments.
col = intake.open_esm_datastore("pangeo-cmip6.json")
col
# Here, we show the various forcing experiments that climate modellers ran in these simulations. A few examples are:
# - `piControl` which fixes CO2 levels at pre-industrial concentrations of 300 ppm
# - `historical` which includes the historical evolution of greenhouse concentrations as well as historical volcanic eruptions, changes in solar luminosity, and changes in atmospheric aerosol concentrations (and some other, less impactful forcings).
# - `abrupt-4xCO2` in which the CO2 concentrations in a pre-industrial control simulation are abruptly quadrupled from 300 ppm to 1200 ppm.
# - `ssp585`, a `worst-case scenario` in which fossil-fueled development leads to a disastrous increase of $\Delta F = 8.5$ W/m$^{2}$ in radiative forcing.
df['experiment_id'].unique()
# df['source_id'].unique()
# df['activity_id'].unique()
# # Analysis of Climate Model Output Data
# ### Loading data
#
# `intake-esm` enables loading data directly into an [xarray.DataArray](http://xarray.pydata.org/en/stable/api.html#dataset), a metadata-aware extension of numpy arrays. `xarray` objects leverage [dask](https://dask.org/) to only read data into memory as needed for any specific operation (i.e. lazy evaluation). Think of `xarray` Datasets as ways of conveniently organizing large arrays of floating point numbers (e.g. climate model data) on an n-dimensional discrete grid, with important metadata such as units, variable names, etc.
#
# Note that data on the cloud are in [zarr](https://zarr.readthedocs.io/en/stable/) format, an extension of the metadata-aware format [netcdf](https://www.unidata.ucar.edu/software/netcdf/) commonly used in geosciences.
#
# `intake-esm` has rules for aggegating datasets; these rules are defined in the collection-specification file.
#
# #### Choice of simulated forcing experiments
#
# Here, we choose the `piControl` experiment (in which CO2 concentrations are held fixed at a pre-industrial level of ~300 ppm) and the `abrupt-4xCO2` experiment (in which CO2 concentrations are instantaneously quadrupled - or doubled twice - from a pre-industrial control state). Since the radiative forcing of CO2 is roughly a logarithmic function of CO2 concentrations, the ECS is roughly independent of the initial CO2 concentration. Thus, if one doubling of CO2 results in $ECS$ of warming, then two doublings (or, a quadrupling) results in $2 \times ECS$ of warming.
#
# Ideally, we would choose the `abrupt-2xCO2` forcing experiment, but this seems to be currently unavailable in Google Cloud Storage.
# +
cat_tas = col.search(experiment_id=['abrupt-4xCO2','piControl'], # pick the `abrupt-4xCO2` and `piControl` forcing experiments
table_id='Amon', # choose to look at atmospheric variables (A) saved at monthly resolution (mon)
variable_id='tas', # choose to look at near-surface air temperature (tas) as our variable
member_id = 'r1i1p1f1') # arbitrarily pick one realization for each model (i.e. just one set of initial conditions)
cat_rad = col.search(experiment_id=['abrupt-4xCO2','piControl'], # pick the `abrupt-4xCO2` and `piControl` forcing experiments
table_id='Amon', # choose to look at atmospheric variables (A) saved at monthly resolution (mon)
                     variable_id=['rsut','rsdt','rlut'], # choose the TOA radiative flux variables (rsut, rsdt, rlut)
member_id = 'r1i1p1f1') # arbitrarily pick one realization for each model (i.e. just one set of initial conditions)
# -
# convert data catalog into a dictionary of xarray datasets
dset_dict_tas = cat_tas.to_dataset_dict(zarr_kwargs={'consolidated': True, 'decode_times': False})
dset_dict_rad = cat_rad.to_dataset_dict(zarr_kwargs={'consolidated': True, 'decode_times': False})
# +
#dset_dict_tas['CMIP.BCC.BCC-CSM2-MR.abrupt-4xCO2.Amon.gn'].tas
#modellist = [x for x,_ in dset_dict_tas.items()]
#modelname = [ii.split('.')[2]+'.'+ii.split('.')[3] for ii in modellist]
#print(modelname)
# -
# #### Get rid of any models that don't have both experiments available or all variables we need
# +
list_tas = [x for x,_ in dset_dict_tas.items()]
list_rad = []
for name, ds_rad in dset_dict_rad.items():
if not (('rsdt' in dset_dict_rad[name].keys()) & ('rsut' in dset_dict_rad[name].keys()) & ('rlut' in dset_dict_rad[name].keys())):
continue
list_rad.append(name)
list_set = sorted(list(set(list_tas) & set(list_rad)))
control = []
abrupt = []
for i in list_set:
if 'piControl' in i: control.append(i.split(".")[2])
if 'abrupt-4xCO2' in i: abrupt.append(i.split(".")[2])
list_model = sorted(list(set(control) & set(abrupt)))
print(list_model)
# -
# #### save variables to dictionaries
# +
# dictionary that will hold spliced DataArrays
ctrl_ds_dict = {}
ctrl_gmst_dict = {}
ctrl_imbalance_dict = {}
abrupt_ds_dict = {}
abrupt_gmst_dict = {}
abrupt_imbalance_dict = {}
for name, ds_rad in tqdm(dset_dict_rad.items()):
model_name = name.split(".")[2]
if (model_name not in list_model): continue
ds_tas = dset_dict_tas[name]
# rename spatial dimensions if necessary
if ('longitude' in ds_rad.dims) and ('latitude' in ds_rad.dims):
ds_rad = ds_rad.rename({'longitude':'lon', 'latitude': 'lat'}) # some models labelled dimensions differently...
ds_tas = ds_tas.rename({'longitude':'lon', 'latitude': 'lat'}) # some models labelled dimensions differently...
ds_rad = xr.decode_cf(ds_rad) # temporary hack, not sure why I need this but has to do with calendar-aware metadata on the time variable
ds_tas = xr.decode_cf(ds_tas)
# drop redundant variables (like "height: 2m")
for coord in ds_tas.coords:
if coord not in ['lat','lon','time']:
ds_tas = ds_tas.drop(coord)
# drop redundant variables (like "height: 2m")
for coord in ds_rad.coords:
if coord not in ['lat','lon','time']:
ds_rad = ds_rad.drop(coord)
## Calculate global-mean surface temperature (GMST)
gmst = gavg(ds_tas['tas'])
## Calculate global-mean top of atmosphere radiative imbalance (GMST)
net_toa = ds_rad['rsdt'] - ds_rad['rsut'] - ds_rad['rlut']
imbalance = gavg(net_toa)
# Add variable to dictionary
if 'piControl' in name:
ctrl_ds_dict[model_name] = ds_tas
ctrl_imbalance_dict[model_name] = imbalance.squeeze()
ctrl_gmst_dict[model_name] = gmst.squeeze()
if ('abrupt-4xCO2' in name):
abrupt_ds_dict[model_name] = ds_tas
abrupt_imbalance_dict[model_name] = imbalance.squeeze()
abrupt_gmst_dict[model_name] = gmst.squeeze()
# -
# #### Pick first model arbitrarily for example application of Gregory method
model = 'CESM2'
for name in ctrl_gmst_dict.keys():
if model == name:
ctrl_gmst = ctrl_gmst_dict[name].groupby('time.year').mean('time').compute()
ctrl_imbalance = ctrl_imbalance_dict[name].groupby('time.year').mean('time').compute()
abrupt_gmst = abrupt_gmst_dict[name].groupby('time.year').mean('time').compute() - ctrl_gmst.mean(dim='year')
abrupt_imbalance = abrupt_imbalance_dict[name].groupby('time.year').mean('time').compute() - ctrl_imbalance.mean(dim='year')
# +
plt.figure(figsize=(12,3.5))
plt.subplot(1,2,1)
abrupt_gmst.plot()
plt.ylabel('GMST ($^{\circ}C$)')
plt.subplot(1,2,2)
abrupt_imbalance.plot()
plt.ylabel('radiative imbalance (W/m$^{2}$)');
plt.savefig('4xCO2_temperature_evolution.png', dpi=100, bbox_inches='tight')
plt.figure(figsize=(12,3.5))
plt.subplot(1,2,1)
abrupt_gmst.plot()
plt.xlim(gregory_limits)
plt.ylabel('GMST ($^{\circ}C$)')
plt.subplot(1,2,2)
abrupt_imbalance.plot()
plt.xlim(gregory_limits)
plt.ylabel('radiative imbalance (W/m$^{2}$)');
# -
y_data = abrupt_imbalance.isel(year=slice(gregory_limits[0],gregory_limits[1]))
x_data = abrupt_gmst.isel(year=slice(gregory_limits[0],gregory_limits[1]))
a, b = np.polyfit(x_data,y_data,1)
# +
plt.scatter(abrupt_gmst, abrupt_imbalance, c = abrupt_gmst.values, cmap='viridis')
plt.scatter(x_data, y_data, marker='.', c = 'k', cmap='viridis')
x_extrapolate = np.arange(abrupt_gmst.min()-20,abrupt_gmst.min()+20., 0.1)
plt.plot(x_extrapolate, a*x_extrapolate + b, color='grey')
twoECS = b/(-a)
print("ECS = ", twoECS/2.)
plt.plot([0,twoECS + 1.], [0,0], "k--")
plt.xlim([0,twoECS + 1.])
plt.ylim([-2,8])
plt.xlabel('GMST ($^{\circ}C$)')
plt.ylabel('radiative imbalance (W/m$^{2}$)');
plt.savefig('Gregory_method_CESM2_example.png', dpi=100, bbox_inches='tight')
# -
outx1 = ols.ols(y_data.values,x_data.values);outx1.est_auto()
print('total feedback = {:.2f} +/- {:.2f}'.format(outx1.b[1],outx1.conf90[1]))
# #### Eagerly compute ECS for each model, in preparation for plotting
#
# The operations we have done up to this point to calculate the global-mean surface temperature were evaluated lazily. In other words, we have created a blueprint for how we want to evaluate the calculations, but have not yet computed them. This lets us do things like multiply two 1 TB arrays together even though they are each individually larger than memory.
#
# Now we call xarray's `compute()` method to carry out the computations we defined in the for loop above, calculating the global-mean surface temperature anomaly roughly 100-150 years (1200-1800 months) after the instantaneous quadrupling of CO2, relative to the last 50 years (600 months) of the control simulation.
# +
ECS_dict = {}
results = {'ECS': {},'lambda': {}, 'ctrl_gmst': {}, 'ctrl_imbalance': {}, 'abrupt_gmst': {}, 'abrupt_imbalance': {}}
for name in tqdm(ctrl_gmst_dict.keys()):
results['ctrl_gmst'][name] = ctrl_gmst_dict[name].groupby('time.year').mean(dim='time').compute()
results['ctrl_imbalance'][name] = ctrl_imbalance_dict[name].groupby('time.year').mean(dim='time').compute()
results['abrupt_gmst'][name] = (
abrupt_gmst_dict[name].groupby('time.year').mean(dim='time') -
results['ctrl_gmst'][name].isel(year=slice(gregory_limits[0],gregory_limits[1])).mean(dim='year')
).compute()
results['abrupt_imbalance'][name] = (
abrupt_imbalance_dict[name].groupby('time.year').mean(dim='time') -
results['ctrl_imbalance'][name].isel(year=slice(gregory_limits[0],gregory_limits[1])).mean(dim='year')
).compute()
# Apply Gregory method to estimate ECS
if results['abrupt_imbalance'][name].size >= gregory_limits[1]:
y_data = results['abrupt_imbalance'][name].isel(year=slice(gregory_limits[0],gregory_limits[1])).compute()
x_data = results['abrupt_gmst'][name].isel(year=slice(gregory_limits[0],gregory_limits[1])).compute()
a, b = np.polyfit(x_data,y_data,1)
results['ECS'][name] = b/(-a) / 2.0
outx1 = ols.ols(y_data.values,x_data.values);outx1.est_auto()
results['lambda'][name] = outx1.b[1]
# -
results['ECS']
plt.hist(np.array(list(results['ECS'].values())), bins=np.arange(0,max(list(results['ECS'].values()))+1.0,1.0))
plt.xlabel(r"ECS ($^{\circ}$C)")
plt.ylabel("number of models")
plt.title('Equilibrium Climate Sensitivity (ECS) in CMIP6 models')
plt.annotate(s=fr"$N = {len(results['ECS'])}$",xy=(0.025,0.90), xycoords="axes fraction", fontsize=14)
plt.savefig(f"ECS_Gregory_{gregory_limits[0]}-{gregory_limits[1]}_hist.png",dpi=100)
tmp = sorted(results['ECS'].items(), key=lambda x: x[1])
ordered_ECS = { pair[0]:pair[1] for pair in tmp }
plt.figure(figsize=(10,5))
plt.bar(np.arange(len(ordered_ECS)), np.array(list(ordered_ECS.values())), align='center', alpha=0.5)
plt.xticks(np.arange(len(ordered_ECS)),ordered_ECS.keys(), rotation=90)
plt.ylabel(r"ECS ($^{\circ}$C)")
plt.title('Equilibrium Climate Sensitivity (ECS) in CMIP6 models')
plt.tight_layout()
plt.savefig(f"ECS_Gregory_{gregory_limits[0]}-{gregory_limits[1]}_bar.png",dpi=100,bbox_inches='tight')
tmp = sorted(results['lambda'].items(), key=lambda x: x[1])
ordered_lambda = { pair[0]:pair[1] for pair in tmp }
plt.figure(figsize=(10,5))
plt.bar(np.arange(len(ordered_lambda)), np.array(list(ordered_lambda.values())), align='center', alpha=0.5)
plt.xticks(np.arange(len(ordered_lambda)),ordered_lambda.keys(), rotation=90)
plt.ylabel(r"W m$^{-2}$ K$^{-1}$")
plt.title('Total feedback in CMIP6 models')
plt.tight_layout()
# #### Speed up your code with dask-kubernetes (if available)
# ```python
# # Cluster was created via the dask labextension
# # Delete this cell and replace with a new one
#
# from dask.distributed import Client
# from dask_kubernetes import KubeCluster
#
# cluster = KubeCluster()
# cluster.adapt(minimum=1, maximum=10, interval='2s')
# client = Client(cluster)
# client
# ```
| notebooks/Pangeo_calculate_ECS_and_feedback.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data types
# ## Built-in Data Types
#
# Variables can store data of different types, and different types can do different things.
#
# Python has the following data types built-in by default, in these categories:
#
# - Text Type: str
# - Numeric Types: int, float, complex
# - Sequence Types: list, tuple, range
# - Mapping Type: dict
# - Set Types: set, frozenset
# - Boolean Type: bool
# - Binary Types: bytes, bytearray, memoryview
# +
# Text Type : str
firstname = "Neeraj"
#number: int
age = 31
#number: float
averagecoffeeperday = 1.5
#number : complex data type
complexdatatype = 1j
#list
leads = ["Neeraj","Ujen","Pooja","Bisheshta"]
#tuples
#note that values in a tuple cannot be changed, unlike a list
females = ("Smritis","Pooja","Bisheshta","Melissa","Mikita","Shweta","Kabita","Saloni","Pooja")
#range
six_numbers = range(1, 7)  # 1 through 6; the stop value is exclusive
#dict
singer = {'firstname':'Malcom','age':24,'genre':["Pop","Rock","Hip Hop"],'Popular':'Originals'}
#Set
device_you_can_take_home = {"Laptop", "Keyboard", "Monitor","Mouse"}
#frozenset
device_you_can_take_home = frozenset({"Laptop", "Keyboard", "Monitor","Mouse"})
#Boolean
Working_from_home = True
#Bytes
bytevalue = b"byte value"
#Byte Array
bArray = bytearray(5)
#MemoryView
x = memoryview(bytes(5))
# -
# ## Getting the Data Type
# You can get the data type of any object by using the `type()` function:
firstname = "Sujan"
print(firstname," is of type", type(firstname))
# ## Type Casting
#
# Converting the data type of one value to another data type
age = 12
print('type(age) ->', type(age))
print('type(str(age)) ->',type(str(age)))
#Examples of type casting
x = str("Hello World")# str
x = int(20) #int
x = float(20.5)#float
x = complex(1j)#complex
x = list(("apple", "banana", "cherry"))#list
x = tuple(("apple", "banana", "cherry"))#tuple
x = range(6)#range
x = dict(name="John", age=36)#dict
x = set(("apple", "banana", "cherry"))#set
x = frozenset(("apple", "banana", "cherry"))#frozenset
x = bool(5)#bool
x = bytes(5)#bytes
x = bytearray(5)#bytearray
x = memoryview(bytes(5))#memory view
# # Numbers
# - int
#   - Int, or integer, is a whole number, positive or negative, without decimals, of unlimited length.
# - float
#   - Float, or "floating point number", is a number, positive or negative, containing one or more decimals.
#   - Float can also be a scientific number with an "e" to indicate the power of 10.
# - complex
#   - Complex numbers are written with a "j" as the imaginary part.
#
# Variables of numeric types are created when we assign a value to them:
# +
integer = 1 # int
floatype = 2E8 # float
complextype = 1j # complex
print(integer,' is of ',type(integer))
print(floatype,' is of ',type(floatype))
print(complextype,' is of ',type(complextype))
# -
# ## Math Operators
#
# - Addition ( + )
# - Subtraction ( - )
# - Multiplication ( * )
# - Division ( / )
# - Modulus ( % )
# - Exponentiation ( ** )
# - Floor Division ( // )
17//5+20%2-10/5*3**2
#3+0-10/5*9
#3+0-2*9
#Assignment : find the order of the operator
#Note the output value type
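For the assignment above, here is one worked breakdown (spoiler warning) showing how Python applies its precedence rules to the expression:

```python
# Step-by-step evaluation of 17//5 + 20%2 - 10/5 * 3**2
step1 = 3 ** 2           # exponentiation binds tightest -> 9
step2 = 17 // 5          # floor division -> 3
step3 = 20 % 2           # modulus -> 0
step4 = 10 / 5 * step1   # / and * have equal precedence, left to right -> 18.0
result = step2 + step3 - step4   # + and - last -> -15.0
print(result)  # -15.0 (a float, because / produced a float)
```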
# ## String Literals
# > String literals in python are surrounded by either single quotation marks, or double quotation marks.
#
# ```
# 'hello' is the same as "hello".
# ```
#
# We can display a string literal with the print() function.
#
#
# ### Multiline Strings
# We can assign a multiline string to a variable by using three quotes:
#
multiline_string = """This is a multiline string
and we define a multiple string by enclosing the string value inside three quotes:
'''multiple line string''' or \"\"\" multiple line string \"\"\"
"""
print(multiline_string)
# ### Strings are Arrays
# > Strings in Python are sequences of Unicode characters.
#
# However, **Python does not have a character data type**, a single character is simply a string with a length of 1.
#
# Square brackets can be used to access elements of the string (**indexing starts from 0**).
title = "Python Training"
print(title[1])
# ### Slicing
# > Returns a range of characters by using the slice syntax.
#
# Specify the start index and the end index, separated by a colon, to return a part of the string.
print(title[2:5])
# ### Negative Indexing
# Use negative indexes to start the slice from the end of the string:
print(title[-5:-2])
#Other Functions
print(len(title))
# +
#string concatenation
firstname = "Melissa"
lastname = "Gurung"
fullname = firstname + " " + lastname
print(fullname)
# -
# **A string and an integer cannot be concatenated**
#
# +
age = 16
print("Age of "+fullname+" is "+ age)
# -
# **So needs to be type casted to string**
print("Age of "+fullname+" is "+ str(age))
#As the values are enclosed in quotes, they are treated as strings
"10"+"2"
10 + 2
# ### Check if string exists in string
description = "Sujan Shrestha is talented, multi-lingual and married"
is_married = "married" in description
print(is_married)
# ### String format
# > The `format()` method takes the passed arguments, formats them, and places them in the string where the placeholders {} are:
firstname = "Ujen"
number_of_son = 1
ten=10
description = "{} has {} son, not {} son. So he is chilled.".format(firstname, number_of_son, ten)
print(description)
#using the index
firstname = "Saloni"
sem = 2
college="Patan Campus"
score = 4.0
description = "{3} is studying in {1} in {2} semester. In the last semester she scored {0} GPA".format(score, college, sem, firstname)
print(description)
description = "{name} is studying in {college} in {semester} semester. In the last semester she scored {score} GPA".format(score=score, college=college, semester=sem,name=firstname)
print(description)
# ### Escape Character
# > To insert characters that are illegal in a string, use an escape character.
#
# An escape character is a backslash ***\*** followed by the character you want to insert.
#
# An example of an illegal character is a double quote inside a string that is surrounded by double quotes.
Extract_specification = "Please generate all the extract in the directory file:\\sharedrive\brand\new_folder"
print(Extract_specification)
Extract_specification = "Please generate all the extract in the directory file:\\sharedrive\\brand\\new_folder"
print(Extract_specification)
# Note the directory is still invalid
Extract_specification = "Please generate all the extract in the directory file:\\\\sharedrive\\brand\\new_folder"
print(Extract_specification)
# **we can also use raw type**
Extract_specification = r"Please generate all the extract in the directory file:\\sharedrive\brand\new_folder"
print(Extract_specification)
# The next line would raise a SyntaxError: the apostrophe in Melissa's ends the single-quoted string early
#melissa_joke = 'Melissa's jokes are difficult to understand by the normal peoples ;)'
melissa_joke = "Melissa's jokes are difficult to understand by the normal peoples ;)"
print(melissa_joke)
# ### Using split function
# > The `split()` method can be used to generate a list from a string based on a delimiter
nirmal_dai = "Features of Nirmal dai are Humble, To the point, rider, helpful, loved by all"
nirmal_dai_feature = nirmal_dai[27:].split(",")
print(nirmal_dai_feature)
print(type(nirmal_dai_feature))
# ### Input
#
# > Use the `input()` function to take input from the user
# **`input()` always returns a string**
name = input("Enter your name ")
print("Hello ! ",name,type(name))
num1 = input("Enter num1 ")
num2 = input("Enter num2 ")
print("Addition of numbers",num1, num2 ," Is ", int(num1)+int(num2))
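Since `input()` always returns a string, calling `int()` on something that is not a number raises a `ValueError`. A small defensive sketch (the `to_int` helper is our own illustration, not part of the lesson code):

```python
def to_int(text):
    """Convert text to an int, returning None when it is not a valid number."""
    try:
        return int(text)
    except ValueError:
        return None

print(to_int("42"))   # 42
print(to_int("abc"))  # None
```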
| Basics/Data Types.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="Tvh0xFcfPJb4" colab_type="code" colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 72} outputId="90874ade-c6bb-4422-d585-41902e371a85"
from google.colab import files
uploaded = files.upload()
# + [markdown] id="K1osNSEUOmbJ" colab_type="text"
# # EDA
# + id="wB00v5U4UKLH" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 428} outputId="d1556297-6472-49a4-dfa6-80aabc9433ee"
import pandas as pd
import numpy as np
# I will now read in the csv
# https://www.kaggle.com/dgomonov/new-york-city-airbnb-open-data
df = pd.read_csv("AB_NYC_2019 (7).csv")
# This function will check to make sure that it was read in correctly
print(df.shape)
df.head()
# + id="T-0KjQmSxVxN" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 102} outputId="c636aab1-60f3-4507-896a-1377d99a61bd"
df.rename(columns={'neighbourhood_group': 'Borough', 'neighbourhood': 'Neighbourhood', 'room_type': 'Room_type', 'minimum_nights': 'Minimum_nights', 'availability_365': 'Availability_365', 'price': 'Price'}, inplace=True)
df.columns
# + id="Z0DipDaqsqSB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="d6d0db39-5958-4b74-fadc-dcddff673b79"
df.drop(['id','name','host_id','host_name', 'number_of_reviews', 'last_review', 'reviews_per_month', 'calculated_host_listings_count', 'longitude', 'latitude'], axis=1, inplace=True)
df.head()
# + id="IAp4crgIVAjB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 119} outputId="03b3c2b2-0bea-49b2-e8e8-33be210a144e"
# Columns 'to_list()'
df.columns.to_list()
# + id="jucpL2yiwCSn" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="6a93f89b-f28e-415b-9b1b-27b95e5d4adf"
# This will rearrange the columns
df = df[["Borough",
"Neighbourhood",
"Room_type",
"Availability_365",
"Minimum_nights",
"Price"]]
# This will check to make sure that they were done correctly
print(df.shape)
df.head()
# + id="Z9TMwLhmXGEU" colab_type="code" colab={}
import warnings
warnings.filterwarnings(action='ignore', category=FutureWarning, module='numpy')
# + id="MRsPhmhcXR-i" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="16dd679e-9f52-4121-cab0-2bcf1906326b"
# Defining Function -- ** nunique() **
def data_distributions(df):
# predifined columns
cols = ["Borough",
"Neighbourhood",
"Room_type",
"Availability_365",
"Minimum_nights"]
# For loop for specified columns
for col in cols:
print(col + ":", df[col].nunique())
print("\n")
print(df[col].value_counts(normalize=True))
print("-----\n")
# Now we need to call the function on the df
data_distributions(df)
# + id="X5mX6MqlPSYy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="8668e8ea-9e6d-41be-bb8f-826b38055e56"
# Defining Function -- ** value_counts() **
def values_function(df):
# Predefined column
cols = ["Borough",
"Neighbourhood",
"Room_type",
"Availability_365",
"Minimum_nights"]
for c in cols:
print("---- %s ---" % c)
print(df[c].value_counts())
print("\n")
# Calling the Function
values_function(df)
# + id="Q34JaKpKYvaY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 136} outputId="d7b0a363-d8b3-461f-e705-1542fb7cded0"
# Now I need to check for any NaN's
df.isnull().sum()
# + id="fLaHfqmFakx1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 297} outputId="52dc7773-dcf3-4ae5-cc91-495ea8f65f2e"
# Next will be to describe the metrics
df.describe()
# + id="-rK6gY_9aq2v" colab_type="code" colab={}
import numpy as np
import matplotlib
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.cm as cmx
import matplotlib.colors as colors
# %matplotlib inline
# + id="T9oXCX_VguoD" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="c77b97c5-b63c-402a-cabe-91174489adc3"
# Not Removing Anything
dfx = df
# CHECK:
print(df.shape)
df.head()
# + id="xYY6ZYk2Qnyp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="d18c3c15-db01-410a-9320-ba91764e3e8c"
# Name Column -- CHECK
dfx["Price"][:10]
# + [markdown] id="Pi-5jOWfFukw" colab_type="text"
# ## Modeling
#
# #### PreProcessing
# + id="VoyXgHamia_d" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="e65411c3-23c7-42a7-d423-eb8dd422e047"
# I need to install category encoders
# !pip install category_encoders
# + id="ZmSUwXOiF6hZ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 221} outputId="ad649ed1-8402-4a8d-f437-46efe64b444d"
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
import category_encoders as ce
import warnings
warnings.filterwarnings("ignore")
# This will use a defining function
def preprocessing(df):
"""
    Preprocesses the data.
Input: DataFrame
Output: X_train, X_test, y_train, y_test
"""
# Copy the DF
dfx = df.copy()
# Targets and Features
target = "Price"
features = ["Borough",
"Neighbourhood",
"Room_type",
"Minimum_nights",
"Availability_365"]
# This will create the X Features Matrix
X = dfx[features]
# This will create the y target vector
y = dfx[target]
    # Map the 'Room_type' categories to ordinal values and keep the result
    room_type_dict = {"Shared room": 1, "Private room": 2, "Entire home/apt": 3}
    X["Room_type"] = X["Room_type"].map(room_type_dict)
# Now we will create the Train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=.2,
random_state=42)
    # Preprocess Pipeline -- OrdinalEncoder and StandardScaler
preprocess = make_pipeline(
ce.OrdinalEncoder(),
StandardScaler()
)
# Now we will Fit Transform and Transform Training and Testing Data
X_train = preprocess.fit_transform(X_train)
    X_test = preprocess.transform(X_test)  # transform only: never fit the encoder/scaler on test data
# I now need to create a DataFrame for X Matrices
X_train_df = pd.DataFrame(X_train, columns=features)
X_test_df = pd.DataFrame(X_test, columns=features)
print(X_train_df.shape,
X_test_df.shape,
X_train.shape,
X_test.shape,
y_train.shape,
y_test.shape)
# Return
return X_train_df, X_test_df, X_train, X_test, y_train, y_test
# Calling function
X_train_df, X_test_df, X_train, X_test, y_train, y_test = preprocessing(dfx)
# Check it out
X_train_df.head()
# + [markdown] id="zqTenzDRLgU3" colab_type="text"
# ## Linear Regression
# + id="wGRif_YoKq7S" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="21a6184f-c755-42d3-d674-92babd9410fe"
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score
# Instantiate Model
lin_reg = LinearRegression()
# Fitting the Model to the training data
lin_reg.fit(X_train_df, y_train)
# Predicting the Training Price based on Training Data
y_pred_train = lin_reg.predict(X_train_df)
y_pred_test = lin_reg.predict(X_test_df)
# Reporting Metrics Function
def reporting_metrics(y_vector, y_pred_vector):
mse = mean_squared_error(y_vector, y_pred_vector)
rmse = np.sqrt(mse)
mae = mean_absolute_error(y_vector, y_pred_vector)
r2 = r2_score(y_vector, y_pred_vector)
print("DATA METRICS \n")
print("Mean Absolute Error:", mae)
print("Mean Squared Error:", mse)
print("Root Mean Squared Error:", rmse)
print("R^2:", r2)
print(".....")
# Calling Function -- ** TRAINING DATA **
reporting_metrics(y_train, y_pred_train)
# Intercept
print("Intercept:", lin_reg.intercept_)
print("\n")
# Coefficients
coefs = pd.Series(lin_reg.coef_, X_train_df.columns)
print("Coefficients:")
print(coefs)
# + id="ThS5WOC0Nzrf" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 306} outputId="5eb2dac6-2738-4cd2-8087-c00b0e25eb5e"
# Calling Function -- ** TESTING DATA **
reporting_metrics(y_test, y_pred_test)
# Intercept
print("Intercept:", lin_reg.intercept_)
print("\n")
# Coefficients
coefs = pd.Series(lin_reg.coef_, X_test_df.columns)
print("Coefficients:")
print(coefs)
# + [markdown] id="bpbb1rvvOHuX" colab_type="text"
# ## Random Forest Regressor
# + id="_tdUy7beOEIC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="eaaa2177-835e-4d3e-bc94-69dd55ac71db"
# %matplotlib inline
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor
# Instantiate Model
rf_reg = RandomForestRegressor()
# Fitting the Model to the training data
rf_reg.fit(X_train_df, y_train)
# Predicting
y_pred_train = rf_reg.predict(X_train_df)
y_pred_test = rf_reg.predict(X_test_df)
# Reporting Metrics -- TRAINING DATA
reporting_metrics(y_train, y_pred_train)
# Get Feature Importances
importances = pd.Series(rf_reg.feature_importances_, X_train_df.columns)
# Plot Top N Features Importances
n = 5
plt.figure(figsize = (5, n/2))
plt.title(f"Top {n} Most Important Training Features")
importances.sort_values()[-n:].plot.barh(color = "green");
# + id="j4YRRudRYf48" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="1c7b4f03-5973-4e0c-a4b8-a6fc11b702c6"
from joblib import dump, load
dump(rf_reg, 'rf_reg.joblib')
# + id="SQqTOEpzk-U0" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 34} outputId="f3a017d3-51c5-4a42-e8eb-574ce344a25f"
rf_reg = load('rf_reg.joblib')
# + id="fE86MmcrOO2w" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 319} outputId="56a8ddb1-5686-4418-95c8-09465cfbdaf9"
# Reporting Metrics -- TESTING DATA
reporting_metrics(y_test, y_pred_test)
# Get Feature Importances
importances = pd.Series(rf_reg.feature_importances_, X_test_df.columns)
# Plot Top N Features Importances
n = 5
plt.figure(figsize = (5, n/2))
plt.title(f"Top {n} Most Important Testing Features")
importances.sort_values()[-n:].plot.barh(color = "red");
# + [markdown] id="KE8ADJlVUhjN" colab_type="text"
# ## We got the best scores and minimized errors most efficiently with the Random Forest Regressor, so we decided that it would be the best model to use for the prediction app.
| notebooks/EDA/optimal_airbnb_EDA (2).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''herbie'': conda)'
# name: python3
# ---
# # Pluck Points from Lat/Lon Grid
#
# This builds on my example from
#
# - Stack Overflow: https://stackoverflow.com/questions/58758480/xarray-select-nearest-lat-lon-with-multi-dimension-coordinates
#
# - MetPy Details: https://unidata.github.io/MetPy/latest/tutorials/xarray_tutorial.html?highlight=assign_y_x
from herbie.archive import Herbie
from metpy.units import units
import matplotlib.pyplot as plt
from toolbox.cartopy_tools import pc, common_features, ccrs
import numpy as np
import xarray as xr
from shapely.geometry import MultiPoint
from toolbox.gridded_data import pluck_points
H = Herbie('2021-9-23')
ds = H.xarray('TMP:2 m')
def nearest_points(ds, points, names=None, verbose=True):
"""
Pluck the nearest latitude/longitude points from a model grid.
Info
----
- Stack Overflow: https://stackoverflow.com/questions/58758480/xarray-select-nearest-lat-lon-with-multi-dimension-coordinates
- MetPy Details: https://unidata.github.io/MetPy/latest/tutorials/xarray_tutorial.html?highlight=assign_y_x
Parameters
----------
ds : a friendly xarray Dataset
points : tuple (lon, lat) or list of tuples
The longitude and latitude (lon, lat) coordinate pair (as a tuple)
for the points you want to pluck from the gridded Dataset.
A list of tuples may be given to return the values from multiple points.
names : list
A list of names for each point location (i.e., station name).
None will not append any names. names should be the same
length as points.
"""
# Check if MetPy has already parsed the CF metadata grid projection.
# Do that if it hasn't been done yet.
if 'metpy_crs' not in ds:
ds = ds.metpy.parse_cf()
# Apply the MetPy method `assign_y_x` to the dataset
# https://unidata.github.io/MetPy/latest/api/generated/metpy.xarray.html?highlight=assign_y_x#metpy.xarray.MetPyDataArrayAccessor.assign_y_x
ds = ds.metpy.assign_y_x()
# Convert the requested [(lon,lat), (lon,lat)] points to map projection.
# Accept a list of point tuples, or Shapely Points object.
# We want to index the dataset at a single point.
# We can do this by transforming a lat/lon point to the grid location
crs = ds.metpy_crs.item().to_cartopy()
# lat/lon input must be a numpy array, not a list or polygon
if isinstance(points, tuple):
        # If a tuple is given, turn it into a one-item list.
points = np.array([points])
if not isinstance(points, np.ndarray):
# Points must be a 2D numpy array
points = np.array(points)
lons = points[:,0]
lats = points[:,1]
transformed_data = crs.transform_points(ccrs.PlateCarree(), lons, lats)
xs = transformed_data[:,0]
ys = transformed_data[:,1]
# Select the nearest points from the projection coordinates.
# TODO: Is there a better way?
# There doesn't seem to be a way to get just the points like this
#ds = ds.sel(x=xs, y=ys, method='nearest')
# because it gives a 2D array, and not a point-by-point index.
    # Instead, I have to loop the ds.sel method
new_ds = xr.concat([ds.sel(x=xi, y=yi, method='nearest') for xi, yi in zip(xs, ys)], dim='point')
# Add list of names as a coordinate
if names is not None:
# Assign the point dimension as the names.
assert len(points) == len(names), '`points` and `names` must be same length.'
new_ds['point'] = names
return new_ds
# +
points = [(-114,45),(-115,45),(-116,45),(-117,45),(-118,45)]
names = ['hi', 'by', 'a', 'b', 'c']
dsp = nearest_points(ds, points, names)
dsp
# -
pluck_points(ds, (-115, 45))
plt.scatter(dsp.longitude, dsp.latitude)
dspp = pluck_points(ds, points, names=names)
dspp
plt.scatter(dsp.longitude-360, dsp.latitude, color='r')
plt.scatter(dspp.longitude, dspp.latitude, marker='.', color='k')
plt.scatter(np.array(points)[:,0], np.array(points)[:,1])
points = np.array([(-100,45), (-100.08, 45.05)])
names = ['hi', 'by']
dsp = nearest_points(ds, points, names)
# +
crs = ds.metpy.parse_cf().t2m.metpy.cartopy_crs
ax = common_features(crs=crs, figsize=[6,6]).STATES().ax
ax.center_extent(lat=45, lon=-100, pad=15000);
# Raw Grid
ax.pcolormesh(ds.longitude, ds.latitude, ds.t2m, transform=pc, edgecolors='k')
ax.scatter(ds.longitude, ds.latitude, transform=pc, color='k', marker='.')
# Requested Point
ax.scatter(points[:,0], points[:,1], transform=pc, color='b', s=100)
# Nearest Point
ax.scatter(dsp.longitude, dsp.latitude, transform=pc, color='r', s=100)
#plt.legend()
# -
points = np.array([(-100,45), (-100.08, 45.05)])
names = ['hi', 'by']
# %%timeit
dsp = nearest_points(ds, points, names)
# %%timeit
dspp = pluck_points(ds, points, names)
from synoptic.services import stations_nearesttime, stations_metadata
from datetime import datetime
a = stations_metadata(radius='UKBKB,60')
points = np.array(list(zip(a.loc['longitude'], a.loc['latitude'])))
names = a.loc['STID'].to_list()
#points, names
# %%time
a = 5
# %%time
dspp = pluck_points(ds, points, names)
# %%time
dsp = nearest_points(ds, points, names)
dsp
dspp
# +
ax = common_features(crs=crs, figsize=[6,6]).STATES().ax
ax.center_extent(lon=points[:,0].mean(), lat=points[:,1].mean(), pad=18000);
# Raw Grid
ax.pcolormesh(ds.longitude, ds.latitude, ds.t2m, transform=pc, edgecolors='k')
ax.scatter(ds.longitude, ds.latitude, transform=pc, color='k', marker='.')
# Requested Point
ax.scatter(points[:,0], points[:,1], transform=pc, color='b', s=100, alpha=.3)
# Nearest Point
ax.scatter(dsp.longitude, dsp.latitude, transform=pc, color='r', s=100, alpha=.3)
#plt.legend()
# +
ax = common_features(crs=crs, figsize=[6,6]).STATES().ax
ax.center_extent(lon=points[:,0].mean(), lat=points[:,1].mean(), pad=18000);
# Raw Grid
ax.pcolormesh(ds.longitude, ds.latitude, ds.t2m, transform=pc, edgecolors='k')
ax.scatter(ds.longitude, ds.latitude, transform=pc, color='k', marker='.')
# Requested Point
ax.scatter(points[:,0], points[:,1], transform=pc, color='b', s=100, alpha=.3)
# Nearest Point
ax.scatter(dspp.longitude, dspp.latitude, transform=pc, color='r', s=100, alpha=.3)
#plt.legend()
# -
dspp.longitude-dsp.longitude
dspp.latitude-dsp.latitude
# +
ax = common_features(crs=crs, figsize=[6,6], dpi=150).STATES().ax
ax.center_extent(lon=points[:,0].mean(), lat=points[:,1].mean(), pad=18000);
# Raw Grid
ax.scatter(ds.longitude, ds.latitude, transform=pc, color='k', marker='.')
# Nearest Point
ax.scatter(dsp.longitude, dsp.latitude, transform=pc, color='g', marker='s', s=150, alpha=.3)
ax.scatter(dspp.longitude, dspp.latitude, transform=pc, color='r', s=100, alpha=.3)
#plt.legend()
# -
| notebooks/pluck_point_NEW.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ekdnam/KalpanaLabs/blob/master/Day%207/KalpanaLabs_Day_7.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="ZmvQBQrDdd0j"
# # Comparison between Strings
#
# Last week we were amazed and bewildered when we saw that
#
# **"abc" > "ABC"**
#
# The reason behind that is that strings are stored in the memory using their ASCII values.
# + [markdown] id="nSBcNQyRexaJ"
# ## ASCII Values
#
# I am not going to go in depth about how the values came into being and their significance, it would make you go crazy like last week.
#
#
#
#
# + id="qSMrTLWadqeL" outputId="59c5f8b3-638c-4e8c-d00a-fc94bc185ae2" colab={"base_uri": "https://localhost:8080/", "height": 302}
from IPython.display import Image
Image(url='https://media.giphy.com/media/iiILHqWDQo8w0/giphy.gif')
# + [markdown] id="ByysUZ4Ph-tA"
# One piece of information: the characters from a-z have a larger ascii value than A-Z. And the value increases from a to z, and A to Z.
#
#
#
#
# + [markdown] id="RgxnEXQ9igZs"
# ## How the comparison works
#
# The strings are compared by using a Lexicographic Order. Lexicographic Order seems a big word, yes. But don't worry, it has a very simple meaning.
#
# It simply means that they are compared in the manner they would be written in a dictionary. Thus Lexicographic Order is also called Dictionary Order. And it is done using ASCII values.
#
# In a dictionary, "Delhi" would be before "Mumbai", right?
#
# Thus in a similar way,
# ```
# "Delhi" < "Mumbai"
# ```
#
# Also, "USA" would be before "us".
#
# Thus
# ```
# "USA" < "us"
# ```
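We can check these comparisons directly, using `ord()` to peek at the ASCII values involved:

```python
# ord() gives the ASCII (Unicode) code point of a character
print(ord('a'), ord('A'))   # 97 65
print("abc" > "ABC")        # True: 'a' (97) > 'A' (65)
print("Delhi" < "Mumbai")   # True: 'D' (68) < 'M' (77)
print("USA" < "us")         # True: 'U' (85) < 'u' (117)
```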
# + id="wk9k-d8pdxKT"
| Day 7/KalpanaLabs_Day_7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''pvtpy'': conda)'
# name: python3
# ---
# # PVT Object
#
# You can define a PVT object by providing a tabulated data indexed by pressure.
from pvtpy.pvt import PVT
from pvtpy.units import Pressure
import numpy as np
import pandas as pd
# ## Define some properties
p = np.linspace(10,3500,15)
rho = np.linspace(0.8,1.3,15)
tt = np.linspace(90,130,15)
p
# Create the PVT Object by providing a list of ordered pressure and corresponding properties in a dictionary form
# +
pvt1 = PVT(pressure=Pressure(value=p.tolist()), fields={'rho':rho.tolist(),'temp':tt.tolist()})
print(type(pvt1))
# -
# To export the pvt to a `pandas DataFrame` call the `df` method
print(pvt1.df())
# ### The pressure must be ordered either descending or ascending
#
# By using the syntax `[::-1]` you can reverse the order of a list
# +
pvt1_r = PVT(pressure=Pressure(value=p.tolist()[::-1]), fields={'rho':rho.tolist()[::-1],'temp':tt.tolist()[::-1]})
print(pvt1_r.df())
# -
try:
p_random = np.random.rand(15)
pvt_error = PVT(pressure=Pressure(value=p_random.tolist()), fields={'rho':rho.tolist(),'temp':tt.tolist()})
except Exception as e:
print(e)
print('Pressure is not sorted. It raises an error')
# ## Interpolate at a custom Pressure
pvt1.interpolate([1500,2100])
# ### Interpolate only certain columns
pvt1.interpolate([1500,2100, 2500,2700],cols=['temp'])
| docs/examples/1-pvt/1a-PVT_Tables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Import used modules
import pandas as pd
import sys
sys.path.insert(0, '../src')
import benchmark_utils as bu
import analysis_utils as au
# # Run Alignments for OpenCADD.superposition for the TK Structures
# Perform all pairwise alignments for the given sample structures. Every method performs 1225 alignments for the 50 tyrosine kinases. The benchmark is done with an Intel Core i5-1038NG7 CPU and 16 GB of RAM.
# +
#bu.run_alignments(sample1_path="../data/samples/TK_samples.txt",
# output_path="../data/OpenCADD_results/<NAME_OF_FILE>")
# -
# # Create a Dataframe containing the Alignments of all five Methods
# The alignments for PyMol and ChimeraX MatchMaker are done in the respective programs and are saved in separate files. For the analysis, the DataFrames are combined.
columns = ["reference_id", "mobile_id", "method", "rmsd",
"coverage", "reference_size", "mobile_size", "time",
"SI", "MI", "SAS", "ref_name", "ref_group", "ref_species",
"ref_chain", "mob_name", "mob_group", "mob_species", "mob_chain"]
superposer_TK_DFGin = pd.read_csv("../data/OpenCADD_results/superposer_benchmark_TK.csv", names=columns)
pymol_TK_DFGin = pd.read_csv("../data/PyMol_results/pymol_benchmark_TK_refinement.csv", names=columns)
chimerax_TK_DFGin = pd.read_csv("../data/ChimeraX_results/mmaker_benchmark_TK.csv", names=columns)
all_TK_DFGin = pd.concat([superposer_TK_DFGin, pymol_TK_DFGin, chimerax_TK_DFGin]).reset_index(drop=True)
# ### Compute the relative Coverage
# The relative coverage is computed the following way:
#
# coverage / min(length of structure 1, length of structure 2)
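The actual computation lives in `analysis_utils`; as a standalone sketch of the same formula (the numbers below are hypothetical, not benchmark data):

```python
def relative_coverage(coverage, ref_size, mob_size):
    """Relative coverage: aligned residues over the size of the smaller structure."""
    return coverage / min(ref_size, mob_size)

# Hypothetical example: 250 aligned residues, structures of 300 and 280 residues
print(round(relative_coverage(250, 300, 280), 3))  # 0.893
```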
au.compute_rel_cov(all_TK_DFGin)
# # Analysis
# ## General Checks
counts, nans, times = au.general_checks(all_TK_DFGin)
# Check if every value is present.
# It should be 1225 for every value, because there are 1225 alignments performed per method.
counts
# Next, we check for missing alignments. Some Methods have problems with some structures.
#
# In this case, all alignments worked and there is no alignment missing.
nans
# During the computation of the alignments, the time is measured. For all OpenCADD methods combined the CPU-time is about 4 hours. The time for downloading the structures is not included.
times
# ### Compute Mean and Median
mean, median = au.compute_mean_median(all_TK_DFGin)
mean
median
# ## Create basic plots
# PyMol performs better in terms of RMSD compared to the other methods, but also has much lower values for the relative coverage.
au.create_scatter_plot(all_TK_DFGin, path="../reports/figures/TK_refinement")
au.create_violine_plot(all_TK_DFGin, path="../reports/figures/TK_refinement")
# ## Check if data is normally distributed
# The Kolmogorov–Smirnov test shows that the values for RMSD, SI, MI, SAS and relative coverage are not normally distributed.
# MDA, Theseus and ChimeraX Matchmaker have similar distributions for all values.
# PyMol and MMLigner have distributions in about the same range, except for the relative coverage, where PyMol has lower values.
dist_tests = au.check_distribution(all_TK_DFGin, path="../reports/figures/TK_refinement")
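# `au.check_distribution` is a project helper; a normality check along these lines can be done with `scipy.stats.kstest` (sketch with synthetic data; standardizing first makes this a Lilliefors-style check, so treat the p-values as approximate):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ks_normality(values):
    # Standardize the sample, then compare against the standard normal CDF.
    z = (values - values.mean()) / values.std()
    return stats.kstest(z, "norm")

normal_like = rng.normal(size=500)
skewed = rng.exponential(size=500)   # clearly non-normal, like the RMSD values

print(ks_normality(normal_like).pvalue)  # large -> no evidence against normality
print(ks_normality(skewed).pvalue)       # tiny  -> reject normality
```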
# ## Compute Correlation
# Since the data is not normally distributed, the Spearman correlation is used.
#
# The three quality measures correlate strongly with each other and with the RMSD. The quality measures also correlate positively with the relative coverage: the higher the relative coverage, the higher the quality measures.
# This correlation is, however, biased by ChimeraX Matchmaker, MDA and Theseus compared to the other two methods.
#
# All three quality measures share the property that lower values indicate better alignments.
corr = au.compute_correlation(all_TK_DFGin, coeff="spearman", path="../reports/figures/TK_refinement")
corr
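# Spearman correlation is rank-based, so it captures any monotone relationship, not just linear ones. A minimal standalone illustration with pandas (toy column names):

```python
import pandas as pd

df = pd.DataFrame({
    "rmsd": [1.0, 2.0, 3.0, 4.0, 5.0],
    "sas":  [0.5, 1.1, 2.9, 3.2, 10.0],   # nonlinear, but monotonically increasing with rmsd
})
corr = df.corr(method="spearman")
print(corr.loc["rmsd", "sas"])  # 1.0: perfect monotone relationship
```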
# ## Check for significant differences
# Because the data is not normally distributed, an ANOVA is not suitable; therefore, the Kruskal–Wallis test is performed. The RMSD and the three quality measures differ significantly between the groups.
kruskal = au.compute_kruskal(all_TK_DFGin)
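# `au.compute_kruskal` wraps the test for this benchmark; the underlying call is `scipy.stats.kruskal`, sketched here with toy per-method samples:

```python
from scipy import stats

# One sample of a quality measure per method (toy numbers).
method_a = [1.1, 1.3, 1.2, 1.4]
method_b = [1.2, 1.1, 1.3, 1.2]
method_c = [5.1, 5.3, 5.0, 5.2]   # clearly shifted distribution

h, p = stats.kruskal(method_a, method_b, method_c)
print(p)  # small p-value: at least one group differs
```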
# ## Which groups are different
# All methods are significantly different from each other. However, as also visible in the figures above, ChimeraX Matchmaker, MDA and Theseus have values in the same range.
significant, non_significant = au.compute_mannwhitneyu(all_TK_DFGin)
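# `au.compute_mannwhitneyu` is project-specific; pairwise post-hoc comparisons of this kind are typically done with `scipy.stats.mannwhitneyu` over all method pairs, with a multiple-testing correction (sketch with toy data; a Bonferroni correction is assumed here):

```python
from itertools import combinations
from scipy import stats

groups = {
    "method_a": [1.1, 1.3, 1.2, 1.4, 1.0],
    "method_b": [1.2, 1.1, 1.3, 1.2, 1.1],
    "method_c": [5.1, 5.3, 5.0, 5.2, 5.4],
}
pairs = list(combinations(groups, 2))
alpha = 0.05 / len(pairs)  # Bonferroni correction for multiple pairwise tests
pvalues = {}
for name1, name2 in pairs:
    _, p = stats.mannwhitneyu(groups[name1], groups[name2])
    pvalues[(name1, name2)] = p
    print(name1, "vs", name2, "p =", p, "->", "significant" if p < alpha else "not significant")
```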
# # Count the best alignments
# For every pair of structures, the method with the best quality measure is selected. The following statistics show how often each method had the best result for the quality measures.
best_results = au.count_best_results(all_TK_DFGin)
| notebooks/OpenCADD_Benchmark_TK_refinement.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # List variable
letters = ['a', 'd', 'c']
print(letters[0])
print(letters[2])  # the last valid index is 2; letters[3] would raise an IndexError here
print(letters)
letters[1] = 'b'
print(letters)
letters.append(['x', 'y', 'z'])
print(letters)
print(letters[3])
print(letters[3][1])
# # Dictionary variable
even_nums = [0,2,4,6,8,10,12,14,16,18]
print(even_nums[3])
ceos = {'Apple': '<NAME>',
'Microsoft': 'Satya Nadella'}
print(ceos['Apple'])
print(ceos['Microsoft'])
csuite = {'Apple': ['<NAME>', '<NAME>'],
'Microsoft': ['Satya Nadella', '<NAME>']}
print(csuite['Apple'][1])
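# Indexing a dictionary with a key that does not exist raises a `KeyError`. A safer lookup uses `dict.get`, which returns a default instead (small extra example, not from the original notebook):

```python
ceos = {'Microsoft': 'Satya Nadella'}

print(ceos.get('Microsoft'))          # Satya Nadella
print(ceos.get('Amazon'))             # None: missing key, no error
print(ceos.get('Amazon', 'unknown'))  # unknown: custom default value
```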
| notebooks/JZ_02_list_dictionary_variables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
reviews = pd.read_csv("input_data/winemag-data-130k-v2.csv", index_col=0)
pd.set_option('display.max_rows', 5)
# -
reviews
reviews.iloc[:, 0]
reviews.iloc[:3, 0]
reviews.iloc[1:3, 0]
reviews.iloc[[0, 1, 2], 0]
reviews.iloc[-5:]
reviews.loc[0, 'country']
reviews.loc[:, ['taster_name', 'taster_twitter_handle', 'points']]
reviews.set_index("title")
reviews.country == 'Italy'
reviews.loc[reviews.country == 'Italy']
reviews.loc[(reviews.country == 'Italy') & (reviews.points >= 90)]
reviews.loc[reviews.price.notnull()]
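# The wine CSV is not bundled here, so a tiny hypothetical frame makes the `iloc`/`loc` distinction explicit: `iloc` selects by position, `loc` by label, and boolean masks work like the Italy filter above:

```python
import pandas as pd

df = pd.DataFrame(
    {"points": [87, 92, 90]},
    index=["riesling", "merlot", "syrah"],
)
print(df.iloc[0])                   # first row by position -> riesling
print(df.loc["merlot", "points"])   # row by label -> 92
print(df.loc[df["points"] >= 90])   # boolean selection -> merlot and syrah
```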
| Kaggle Courses/Pandas - Indexing, Selecting & Assigning.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.9.0 64-bit
# name: python3
# ---
import sys
# !{sys.executable} -m pip install pandas numpy matplotlib scikit-learn disarray
# +
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import disarray
from sklearn.metrics import ConfusionMatrixDisplay
# +
fig, axs = plt.subplots(1, 2, figsize=(15,5))
np.set_printoptions(precision=1)
fr_mat = np.array([
[11795, 35, 0, 16, 6, 36, 78, 34],
[41, 11922, 1, 12, 0, 6, 10, 8],
[3, 6, 11987, 0, 0, 3, 1, 0],
[29, 22, 0, 11921, 4, 8, 5, 11],
[10, 1, 0, 1, 11940, 16, 24, 8],
[35, 20, 0, 4, 26, 11804, 65, 46],
[36, 12, 0, 4, 7, 24, 11840, 77],
[30, 10, 0, 2, 9, 23, 130, 11796],
])
fr_labels = ['Stehen', 'Gehen', 'Rennen', 'Fahrrad', 'Auto', 'Bus', 'Zug', 'U-Bahn']
fr_disp = ConfusionMatrixDisplay(
confusion_matrix=fr_mat,
display_labels=fr_labels
)
fr_disp.plot(cmap=plt.cm.binary, ax=axs[0], colorbar=False, xticks_rotation=45)
df = pd.DataFrame(fr_mat, index=fr_labels, columns=fr_labels)
with open('fr_acc.tex', 'w') as f:
f.write(df.da.export_metrics(['accuracy']).to_latex())
hemminki_mat = np.array([
[62733, 755, 135, 58, 302, 361], # Standing
[1549, 63664, 456, 650, 723, 805], # Walking
[3976, 647, 32400, 18, 104, 8118], # Bus
[6730, 330, 874, 31921, 5907, 1894], # Train
[5057, 711, 2961, 10879, 41203, 2682], # Metro
[4341, 318, 10123, 87, 2067, 77715], # Tram
])
hemminki_labels = ['Stehen', 'Gehen', 'Bus', 'Zug', 'Metro', 'Tram']
hemminki_disp = ConfusionMatrixDisplay(
confusion_matrix=hemminki_mat,
display_labels=hemminki_labels
)
hemminki_disp.plot(cmap=plt.cm.binary, ax=axs[1], colorbar=False, xticks_rotation=45)
df = pd.DataFrame(hemminki_mat, index=hemminki_labels, columns=hemminki_labels)
with open('hemminki_acc.tex', 'w') as f:
f.write(df.da.export_metrics(['accuracy']).to_latex())
plt.savefig('conf-mat-rw.pdf', dpi=1200, bbox_inches='tight')
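# `disarray` derives its metrics directly from the confusion matrix; the overall accuracy in particular is just the diagonal sum over the total. A dependency-free sketch (toy matrix, not the data above):

```python
def accuracy(conf_mat):
    """Overall accuracy: correctly classified samples (diagonal) over all samples."""
    correct = sum(conf_mat[i][i] for i in range(len(conf_mat)))
    total = sum(sum(row) for row in conf_mat)
    return correct / total

toy = [
    [50, 10],
    [5, 35],
]
print(accuracy(toy))  # (50 + 35) / 100 = 0.85
```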
| src/auxiliary/conf-mat-rw.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
# $$
# u(x) = \frac{1-e^{x/\epsilon}}{1-e^{1/\epsilon}}
# $$
def exct_sol(x, eps):
return ((-np.e**(x/eps))+1)/((-np.e**(1/eps))+1)
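# A quick sanity check on the exact solution: it should satisfy the boundary conditions u(0) = 0 and u(1) = 1 for any ε. A pure-Python version of the same formula (mirrors `exct_sol` above):

```python
import math

def exact_sol(x, eps):
    return (1 - math.exp(x / eps)) / (1 - math.exp(1 / eps))

for eps in [10, 1, 0.1, 0.01]:
    assert abs(exact_sol(0, eps)) < 1e-12       # u(0) = 0
    assert abs(exact_sol(1, eps) - 1) < 1e-12   # u(1) = 1
```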
eps = 1
c = [0,0]
u0 = [0,1]
xs = np.arange(0,1,0.001)
eps = [10,1,0.1,0.01]
for e in eps:
plt.plot(xs, exct_sol(xs, e), label = e)
#plt.plot(xs, exct_sol(xs, eps[0]), label = e)
plt.legend()
plt.show()
# +
eps_array = [10,1,0.1,0.01]
eps = eps_array[2]
def bvp_solve(eps, n):
h = 1/(n+1)
u0 = ui = 0
u1 = 1
A = -2*eps+h
B = 4*eps
C = -2*eps-h  # was -2*e-h: 'e' leaked in from the outer loop variable
M_A = A*np.eye(n,k = 1)
M_B = B*np.eye(n)
M_C = C*np.eye(n, k = -1)
M = M_A + M_B + M_C
# M = np.array([
# [B, A, 0, 0],
# [C, B, A, 0],
# [0, C, B, A],
# [0, 0, C, B]
# ])
b = np.zeros(n)
b[0] = -C*u0
b[-1] = -A*u1
#b = np.array([-C*u0, 0, 0, -A*u1])
u = np.linalg.solve(M,b)
hs = [0]
while hs[-1] <= 1:
hs.append(hs[-1] + h)
u = np.insert(u,0,0)
u = np.append(u,1)
return u, hs
# -
for e in eps_array:
u, hs = bvp_solve(e, 5)
plt.plot(hs[:-1], u, label = r'$\epsilon$ = ' + str(e))
plt.legend()
plt.grid(alpha = 0.7)
plt.show()
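# The `np.eye(n, k=±1)` calls above assemble the tridiagonal system matrix from its three constant diagonals; a minimal standalone illustration of the trick:

```python
import numpy as np

n = 4
A, B, C = -1.0, 2.0, -1.0  # super-, main and sub-diagonal entries
M = A * np.eye(n, k=1) + B * np.eye(n) + C * np.eye(n, k=-1)
print(M)
# [[ 2. -1.  0.  0.]
#  [-1.  2. -1.  0.]
#  [ 0. -1.  2. -1.]
#  [ 0.  0. -1.  2.]]
```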
# +
eps_array = [10,1,0.1,0.01]
eps = eps_array[2]
def bvp_solve2(eps, n):
#n = 5
h = 1/(n+1)
u0 = ui = 0
u1 = 1
A = -eps
B = 2*eps+h
C = -eps-h
M_A = A*np.eye(n,k = 1)
M_B = B*np.eye(n)
M_C = C*np.eye(n, k = -1)
M = M_A + M_B + M_C
# M = np.array([
# [B, A, 0, 0],
# [C, B, A, 0],
# [0, C, B, A],
# [0, 0, C, B]
# ])
b = np.zeros(n)
b[0] = -C*u0
b[-1] = -A*u1
#b = np.array([-C*u0, 0, 0, -A*u1])
u = np.linalg.solve(M,b)
hs = [0]
while hs[-1] <= 1:
hs.append(hs[-1] + h)
u = np.insert(u,0,0)
u = np.append(u,1)
return u, hs
# -
for e in eps_array:
u, hs = bvp_solve2(e, 500)
plt.plot(hs[:-1], u, label = r'$\epsilon$ = ' + str(e))  # hs has one extra grid point, as in bvp_solve
plt.legend()
plt.grid(alpha = 0.7)
plt.show()
| comp-cientifica-II-2019-2/Aulas/.ipynb_checkpoints/Advection-Diffusion-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: ds-env2
# language: python
# name: ds-env2
# ---
import glob
import uuid
import json
import requests
import copy
import os
import cv2
import numpy as np
from time import sleep
import pandas as pd
import logging
from collections import Counter
from pytesseract import Output
from pytesseract import pytesseract
from difflib import SequenceMatcher
import time
from polyfuzz import PolyFuzz
#path = '/home/srihari/Desktop/data/data'
ocr_level = "LINE"
text_processing = True
REJECT_FILTER = 2
crop_factor= 7
crop_factor_y= 4
crop_save = True
digitization = True
vis_thresh=0.90
LANG_MAPPING = {
"en" : ["Latin","eng"],
"kn" : ['Kannada',"kan"],
"gu": ["guj"],
"or": ["ori"],
"hi" : ["Devanagari","hin","eng"],
"bn" : ["Bengali","ben"],
"mr": ["Devanagari","hin","eng"],
"ta": ['Tamil',"tam"],
"te" : ["Telugu","tel"],
"ml" :["Malayalam"],
"ma" :["Marathi"]
}
path = '/home/naresh/Tarento/testing_document_processor/test_pipeline/data/'
output_path = '/home/naresh/Tarento/testing_document_processor/test_pipeline/result/'
output_path_boxes= '/home/naresh/Tarento/testing_document_processor/test_word_boxes/'
base_path= '/home/naresh/Tarento/testing_document_processor/test_word_boxes/'
token = '<KEY>'
# +
#path = '/home/srihari/Desktop/data/data'
word_url = "https://auth.anuvaad.org/anuvaad-etl/wf-manager/v1/workflow/async/initiate"
google_url = "https://auth.anuvaad.org/anuvaad-etl/wf-manager/v1/workflow/async/initiate"
layout_url = "https://auth.anuvaad.org/anuvaad-etl/wf-manager/v1/workflow/async/initiate"
segmenter_url = "https://auth.anuvaad.org/anuvaad-etl/wf-manager/v1/workflow/async/initiate"
bs_url ="https://auth.anuvaad.org/anuvaad-etl/wf-manager/v1/workflow/jobs/search/bulk"
evaluator_url = "https://auth.anuvaad.org/anuvaad-etl/document-processor/evaluator/v0/process"
#evaluator_url = 'http://0.0.0.0:5001/anuvaad-etl/document-processor/evaluator/v0/process'
download_url ="https://auth.anuvaad.org/download/"
upload_url = 'https://auth.anuvaad.org/anuvaad-api/file-uploader/v0/upload-file'
headers = {
'auth-token' :token }
# +
class Draw:
def __init__(self,input_json,save_dir,regions,prefix='',color= (255,0,0),thickness=5):
self.json = input_json
self.save_dir = save_dir
self.regions = regions
self.prefix = prefix
self.color = color
self.thickness=thickness
if self.prefix == 'seg':
#print('drawing children')
self.draw_region_children()
else:
self.draw_region__sub_children()
def get_coords(self,page_index):
return self.json['outputs'][0]['pages'][page_index][self.regions]
def get_page_count(self):
return(self.json['outputs'][0]['page_info'])
def get_page(self,page_index):
page_path = self.json['outputs'][0]['page_info'][page_index]
page_path = page_path.split('upload')[1]#'/'.join(page_path.split('/')[1:])
#print(page_path)
return download_file(download_url,headers,page_path,f_type='image')
def draw_region(self):
font = cv2.FONT_HERSHEY_SIMPLEX
for page_index in range(len(self.get_page_count())) :
nparr = np.frombuffer(self.get_page(page_index), np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
for region in self.get_coords(page_index) :
ground = region['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
cv2.polylines(image, [np.array(pts)],True, self.color, self.thickness)
if 'class' not in region.keys():
region['class'] = 'TEXT'
cv2.putText(image, str(region['class']), (pts[0][0],pts[0][1]), font,
2, (0,125,255), 3, cv2.LINE_AA)
image_path = os.path.join(self.save_dir , '{}_{}_{}.png'.format(self.regions,self.prefix,page_index))
cv2.imwrite(image_path , image)
def draw_region_children(self):
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 2
thickness =3
for page_index in range(len(self.get_page_count())) :
nparr = np.frombuffer(self.get_page(page_index), np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
for region_index,region in enumerate(self.get_coords(page_index)) :
try:
ground = region['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
#print(pts)
region_color = (0 ,0,125+ 130*(region_index/ len(self.get_coords(page_index))))
cv2.polylines(image, [np.array(pts)],True, region_color, self.thickness)
cv2.putText(image, str(region_index), (pts[0][0],pts[0][1]), font,
fontScale, region_color, thickness, cv2.LINE_AA)
for line_index, line in enumerate(region['children']):
ground = line['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
line_color = (125 + 130*(region_index/ len(self.get_coords(page_index))) ,0,0)
cv2.polylines(image, [np.array(pts)],True, line_color, self.thickness -2)
cv2.putText(image, str(line_index), (pts[0][0],pts[0][1]), font,
fontScale, line_color, thickness, cv2.LINE_AA)
except Exception as e:
print(str(e))
print(region)
image_path = os.path.join(self.save_dir , '{}_{}.png'.format(self.prefix,page_index))
cv2.imwrite(image_path , image)
def draw_region__sub_children(self):
for page_index in range(len(self.get_page_count())) :
nparr = np.frombuffer(self.get_page(page_index), np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
print(image)
font = cv2.FONT_HERSHEY_SIMPLEX
fontScale = 2
# Blue color in BGR
color = (0 ,255,0)
# Line thickness of 2 px
thickness = 3
# Using cv2.putText() method
for region_index,region in enumerate(self.get_coords(page_index)) :
try:
ground = region['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
#print(pts)
region_color = (0,0,255)
cv2.polylines(image, [np.array(pts)],True, region_color, self.thickness)
for line_index, line in enumerate(region['regions']):
ground = line['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x'])-1 ,int(pt['y']) -1 ])
line_color = (255,0,0)
cv2.polylines(image, [np.array(pts)],True, line_color, self.thickness -2)
cv2.putText(image, str(line_index), (pts[0][0],pts[0][1]), font,
fontScale, (255,0,0), thickness, cv2.LINE_AA)
for word_index, word in enumerate(line['regions']):
ground = word['boundingBox']['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) -3,int(pt['y'])-3])
word_color = (0,255,0)
cv2.polylines(image, [np.array(pts)],True, word_color, self.thickness -2)
cv2.putText(image, str(word_index), (pts[0][0],pts[0][1]), font,
fontScale-1,(0,255,0), thickness, cv2.LINE_AA)
except Exception as e:
print(str(e))
print(region)
#print(self.prefix)
image_path = os.path.join(self.save_dir , '{}_{}_{}.png'.format(self.prefix,self.regions,page_index))
cv2.imwrite(image_path , image)
# -
# # Google Vision pipeline
def google_ocr_v15(url,headers,pdf_name):
file = {
"files": [
{
"locale": "en",
"path": pdf_name,
"type": "pdf",
"config":{
"OCR": {
"option": "HIGH_ACCURACY",
"language": "en",
"top_correction":"True",
"craft_word": "True",
"craft_line": "True",
}
}}
],
"workflowCode": "WF_A_FCWDLDBSOD15GV"
}
res = requests.post(url,json=file,headers=headers)
return res.json()
def upload_file(pdf_file,headers,url):
#url = 'https://auth.anuvaad.org/anuvaad-api/file-uploader/v0/upload-file'
files = [
('file',(open(pdf_file,'rb')))]
response = requests.post(url, headers=headers, files=files)
return response.json()
def download_file(download_url,headers,outputfile,f_type='json'):
download_url =download_url+str(outputfile)
res = requests.get(download_url,headers=headers)
if f_type == 'json':
return res.json()
else :
return res.content
# +
def save_json(path,res):
with open(path, "w", encoding='utf8') as write_file:
json.dump(res, write_file,ensure_ascii=False )
# +
def bulk_search(job_id,bs_url,headers):
bs_request = {
"jobIDs": [job_id],
"taskDetails":"true"
}
print(job_id)
res = requests.post(bs_url,json=bs_request,headers=headers, timeout = 10000)
print(res.json())
while(1):
in_progress = res.json()['jobs'][0]['status']
if in_progress == 'COMPLETED':
outputfile = res.json()['jobs'][0]['output'][0]['outputFile']
#print(outputfile)
print(in_progress)
return outputfile
break
sleep(0.5)
print(in_progress)
res = requests.post(bs_url,json=bs_request,headers=headers, timeout = 10000)
# +
def execute_module(module,url,input_file,module_code,pdf_dir,overwirte=True , draw=True):
output_path = os.path.join(pdf_dir,'{}.json'.format(module_code))
if os.path.exists(output_path) and not overwirte:
print(' loading *****************{}'.format(module_code ))
with open(output_path,'r') as wd_file :
response = json.load(wd_file)
wf_res = pdf_dir + '/{}_wf.json'.format(module_code)
with open(wf_res,'r') as wd_file :
json_file = json.load(wd_file)
#json_file = upload_file(output_path,headers,upload_url)['data']
else :
if module_code in ['wd','gv']:
res = upload_file(input_file,headers,upload_url)
print('upload response **********', res)
pdf_name = res['data']
response = module(url,headers,pdf_name)
else :
response = module(url,headers,input_file)
if 'eval' in module_code :
json_file = response['outputFile']
response = download_file(download_url,headers,json_file)
save_json(output_path,response)
return json_file,response
print(' response *****************{} {}'.format(module_code ,response ))
job_id = response['jobID']
json_file = bulk_search(job_id,bs_url,headers)
save_json(pdf_dir + '/{}_wf.json'.format(module_code),json_file)
print('bulk search response **************',json_file )
response = download_file(download_url,headers,json_file)
save_json(output_path,response)
if draw :
if module_code in ['wd','gv']:
Draw(response,pdf_dir,regions='lines',prefix=module_code)
else :
Draw(response,pdf_dir,regions='regions',prefix=module_code)
return json_file,response
# +
def evaluate__and_save_input(pdf_files,output_dir,headers,word_url,layout_url,download_url,upload_url,bs_url):
word_responses = {}
layout_responses = {}
segmenter_responses = []
for pdf in pdf_files:
#try :
pdf_name = pdf.split('/')[-1].split('.')[0]
print(pdf , ' is being processed')
pdf_output_dir = os.path.join(output_dir,pdf_name)
os.system('mkdir -p "{}"'.format(pdf_output_dir))
wd_json,_ = execute_module(google_ocr_v15,word_url,input_file=pdf,\
module_code='gv',pdf_dir=pdf_output_dir,overwirte=True , draw=False)
# -
def main(path,headers,word_url,layout_url,download_url,upload_url,bs_url):
pdf_names = glob.glob(path + '/*.pdf')
return evaluate__and_save_input(pdf_names,output_path,headers,word_url,layout_url,download_url,upload_url,bs_url)
if digitization:
main(path,headers,word_url,layout_url,download_url,upload_url,bs_url)
# +
def bound_coordinate(coordinate, max_value):
    if coordinate < 0:
        coordinate = 0
    if coordinate > max_value:
        coordinate = max_value - 2
    return int(coordinate)
def get_image_from_box(image, box, height=140):
#box = data['box']
#scale = np.sqrt((box[1, 1] - box[2, 1])**2 + (box[0, 1] - box[3, 1])**2) / height
#print("scale is ",scale)
#w = int(np.sqrt((box[0, 0] - box[1, 0])**2 + (box[2, 0] - box[3, 0])**2) / scale)
w = max(abs(box[0, 0] - box[1, 0]),abs(box[2, 0] - box[3, 0]))
height = max(abs(box[0, 1] - box[3, 1]),abs(box[1, 1] - box[2, 1]))
pts1 = np.float32(box)
#w=2266-376
pts2 = np.float32([[0, 0], [w, 0],[w,height],[0,height]])
M = cv2.getPerspectiveTransform(pts1, pts2)
result_img = cv2.warpPerspective(image,M,(w, height)) #flags=cv2.INTER_NEAREST
return result_img
def get_text(path,coord,language,mode_height,save_base_path,psm_val):
try:
path = path.split('upload')[1]
image = download_file(download_url,headers,path,f_type='image')
nparr = np.frombuffer(image, np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
#image = cv2.imread("/home/naresh/crop.jpeg",0)
height, width,channel = image.shape
# left = bound_coordinate(coord[0] , width)
# top = bound_coordinate(coord[1],height )
# right = bound_coordinate(coord[2] ,width)
# bottom = bound_coordinate(coord[3], height)
# region_width = abs(right-left)
# region_height = abs(bottom-top)
# if left==right==top==bottom==0 or region_width==0 or region_height==0:
# return ""
crop_image = get_image_from_box(image, coord, height=abs(coord[0,1]-coord[2,1]))
#crop_image = image[ top:bottom, left:right]
save_path = save_base_path+"/"+str(uuid.uuid4()) + '.jpg'
if crop_save:
cv2.imwrite(save_path,crop_image)
#if abs(bottom-top) > 3*mode_height:
if abs(coord[1,1]-coord[2,1])>3*mode_height:
text = pytesseract.image_to_string(crop_image,config='--psm 6', lang=LANG_MAPPING[language][1])
else:
text = pytesseract.image_to_string(crop_image,config='--psm '+str(psm_val), lang=LANG_MAPPING[language][1])
return text
except:
return ""
def merger_text(line):
text = ""
for word_idx, word in enumerate(line['regions']):
if "text" in word.keys():
text = text+" "+ word["text"]
return text
def get_coord(bbox):
temp_box = []
temp_box_cv = []
temp_box.append([bbox["boundingBox"]['vertices'][0]['x'],bbox["boundingBox"]['vertices'][0]['y']])
temp_box.append([bbox["boundingBox"]['vertices'][1]['x'],bbox["boundingBox"]['vertices'][1]['y']])
temp_box.append([bbox["boundingBox"]['vertices'][2]['x'],bbox["boundingBox"]['vertices'][2]['y']])
temp_box.append([bbox["boundingBox"]['vertices'][3]['x'],bbox["boundingBox"]['vertices'][3]['y']])
temp_box_cv.append(bbox["boundingBox"]['vertices'][0]['x'])
temp_box_cv.append(bbox["boundingBox"]['vertices'][0]['y'])
temp_box_cv.append(bbox["boundingBox"]['vertices'][2]['x'])
temp_box_cv.append(bbox["boundingBox"]['vertices'][2]['y'])
temp_box = np.array(temp_box)
return temp_box,temp_box_cv
def frequent_height(page_info):
text_height = []
if len(page_info) > 0 :
for idx, level in enumerate(page_info):
coord_crop,coord = get_coord(level)
if len(coord)!=0:
text_height.append(abs(coord[3]-coord[1]))
occurence_count = Counter(text_height)
return occurence_count.most_common(1)[0][0]
else :
return 0
def remove_space(a):
return a.replace(" ", "")
def seq_matcher(tgt_text,gt_text):
tgt_text = remove_space(tgt_text)
gt_text = remove_space(gt_text)
score = SequenceMatcher(None, gt_text, tgt_text).ratio()
matchs = list(SequenceMatcher(None, gt_text, tgt_text).get_matching_blocks())
match_count=0
match_lis = []
for match in matchs:
#match_lis.append(match.size)
#match_count = max(match_lis)
match_count = match_count + match.size
#gt_text_leng = len(gt_text)
#if gt_text_leng==0:
# gt_text_leng=1
#score = (score*match_count)/gt_text_leng
# if tgt_text == gt_text:
# score = 1.0
message = {"ground":True,"input":True}
if score==0.0:
if len(gt_text)>0 and len(tgt_text)==0:
message['input'] = "text missing in tesseract"
if len(gt_text)==0 and len(tgt_text)>0:
message['ground'] = "text missing in google vision"
if score==1.0 and len(gt_text)==0 and len(tgt_text)==0:
message['ground'] = "text missing in google vision"
message['input'] = "text missing in tesseract"
return score,message,match_count
def count_mismatch_char(gt ,tgt) :
count=0
gt_count = len(gt)
for i,j in zip(gt,tgt):
if i==j:
count=count+1
mismatch_char = abs(gt_count-count)
return mismatch_char
def correct_region(region):
box = region['boundingBox']['vertices']
region['boundingBox']= {'vertices' : [{'x':box[0]['x']-crop_factor,'y':box[0]['y']-crop_factor_y},\
{'x':box[1]['x']+crop_factor,'y':box[1]['y']-crop_factor_y},\
{'x':box[2]['x']+crop_factor,'y':box[2]['y']+crop_factor_y},\
{'x':box[3]['x']-crop_factor,'y': box[3]['y']+crop_factor_y}]}
return region
def sort_line(line):
line['regions'].sort(key=lambda x: x['boundingBox']['vertices'][0]['x'],reverse=False)
return line
def cell_ocr(lang, page_path, line,save_base_path,mode_height):
cell_text =""
for word_idx, word in enumerate(line['regions']):
word = correct_region(word)
coord_crop, coord = get_coord(word)
if len(coord)!=0 and abs(coord_crop[1,1]-coord_crop[2,1]) > REJECT_FILTER :
text = get_text(page_path, coord_crop, lang,mode_height,save_base_path,8)
cell_text = cell_text +" " +text
return cell_text
def text_extraction(df,lang, page_path, regions,save_base_path):
final_score = 0
total_words = 0
total_lines = 0
total_chars = 0
total_match_chars = 0
for idx, level in enumerate(regions):
mode_height = frequent_height(level['regions'])
if ocr_level=="WORD":
for line_idx, line in enumerate(level['regions']):
#word_regions = coord_adjustment(page_path, line['regions'],save_base_path)
for word_idx, word in enumerate(line['regions']):
word = correct_region(word)
coord_crop, coord = get_coord(word)
word_text = word['text']
if len(word_text)>0 and len(coord)!=0 and abs(coord_crop[1,1]-coord_crop[2,1]) > REJECT_FILTER :
text = get_text(page_path, coord_crop, lang,mode_height,save_base_path,8)
if text_processing:
text_list = text.split()
text = " ".join(text_list)
score,message,match_count = seq_matcher(text,word['text'])
final_score = final_score+score
total_words = total_words+1
total_chars = total_chars+len(remove_space(word['text']))
total_match_chars= total_match_chars+match_count
word['char_match'] = match_count
word['tess_text'] = text
word['score'] = score
word['message'] = message
columns = word.keys()
df2 = pd.DataFrame([word],columns=columns)
df = df.append(df2, ignore_index=True)
elif len(word_text)>0:
score,message,match_count = seq_matcher("",word['text'])
word['char_match'] = match_count
word['tess_text'] = " "
word['score'] = score
word['message'] = message
columns = word.keys()
df2 = pd.DataFrame([word],columns=columns)
df = df.append(df2, ignore_index=True)
if ocr_level=="LINE":
for line_idx, line in enumerate(level['regions']):
line = sort_line(line)
line_text = merger_text(line)
line = correct_region(line)
coord_crop, coord = get_coord(line)
if len(line_text)>0 and len(coord)!=0 and abs(coord_crop[1,1]-coord_crop[2,1]) > REJECT_FILTER :
if 'class' in line.keys() and (line['class']=="CELL" or line['class']=="CELL_TEXT"):
text = cell_ocr(lang, page_path, line,save_base_path,mode_height)
else:
text = get_text(page_path, coord_crop, lang,mode_height,save_base_path,7)
if text_processing:
text_list = text.split()
text = " ".join(text_list)
score,message,match_count = seq_matcher(text,line_text)
final_score = final_score+score
total_lines = total_lines+1
total_chars = total_chars+len(remove_space(line_text))
total_match_chars= total_match_chars+match_count
line['char_match'] = match_count
line['tess_text'] = text
line['text'] = line_text
line['score'] = score
line['message'] = message
columns = line.keys()
df2 = pd.DataFrame([line],columns=columns)
df = df.append(df2, ignore_index=True)
elif len(line_text)>0:
score,message,match_count = seq_matcher("",line_text)
line['char_match'] = match_count
line['tess_text'] = " "
line['text'] = line_text
line['score'] = score
line['message'] = message
columns = line.keys()
df2 = pd.DataFrame([line],columns=columns)
df = df.append(df2, ignore_index=True)
#return regions,final_score/total_words,df,total_chars,total_match_chars
return regions,final_score/total_lines,df,total_chars,total_match_chars
# -
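# `seq_matcher` above combines `difflib`'s similarity ratio with the summed sizes of the matching blocks. A standalone sketch of the same scoring idea (stdlib only, toy strings):

```python
from difflib import SequenceMatcher

def match_stats(gt_text, tgt_text):
    sm = SequenceMatcher(None, gt_text, tgt_text)
    ratio = sm.ratio()  # 2*M / (len(a) + len(b)), where M is the total match size
    match_count = sum(m.size for m in sm.get_matching_blocks())
    return ratio, match_count

ratio, match_count = match_stats("kitten", "sitting")
print(match_count)       # 4 characters align: "itt" and "n"
print(round(ratio, 4))   # 2*4 / (6+7) = 0.6154
```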
json_files_path = glob.glob(output_path+"/average/gv.json")
def tesseract(json_files):
output = []
dfs =[]
for json_file in json_files:
file_name = json_file.split('/')[-1].split('.json')[0]
pdf_name = json_file.split('/')[-2]
print("file name--------------------->>>>>>>>>>>>>>>>>>",pdf_name)
if not os.path.exists(base_path+pdf_name):
os.mkdir(base_path+pdf_name)
save_base_path = base_path+pdf_name
with open(json_file,'r+') as f:
data = json.load(f)
columns = ["page_path","page_data","file_eval_info"]
final_df = pd.DataFrame(columns=columns)
Draw(data,save_base_path,regions='regions')
lang = data['outputs'][0]['config']['OCR']['language']
total_page = len(data['outputs'][0]['pages'])
file_score = 0; total_chars_file = 0
file_data = []; total_match_chars_file = 0
page_paths = []
page_data_counts = []
for idx,page_data in enumerate(data['outputs'][0]['pages']):
t1 = time.time()
print("processing started for page no. ",idx)
page_path = page_data['path']
regions = page_data['regions'][1:]
df = pd.DataFrame()
regions,score,df,total_chars,total_match_chars = text_extraction(df,lang, page_path, regions,save_base_path)
file_score = file_score + score
total_chars_file =total_chars_file +total_chars
total_match_chars_file = total_match_chars_file+total_match_chars
file_data.append(df.to_csv())
page_paths.append(page_path)
char_details = {"total_chars":total_chars,"total_match_chars":total_match_chars}
page_data_counts.append(char_details)
data['outputs'][0]['pages'][idx]["regions"][1:] = copy.deepcopy(regions)
t2 = time.time() - t1
print("processing completed for page in {}".format(t2))
file_eval_info = {"total_chars":total_chars_file,"total_match_chars":total_match_chars_file,"score":total_match_chars_file/total_chars_file}
print(file_eval_info)
final_df["page_path"] = page_paths
final_df["page_data"] = file_data
final_df["file_eval_info"] = [file_eval_info]*len(page_paths)
print("file level evaluation result------------------->>>>>>>>>>>>>>>>>>>>>>>>>>>",file_eval_info)
data['outputs'][0]['score'] = file_score/total_page
with open(save_base_path+"/"+file_name+".json", 'w') as outfile:
json.dump(data, outfile)
final_df.to_csv(save_base_path+"/"+file_name+'.csv')
return output,final_df
output,dfs = tesseract(json_files_path)
# +
import ast
from io import StringIO

def draw_thresh_box(df,path,page_index,save_path):
path = path.split('upload')[1]
image = download_file(download_url,headers,path,f_type='image')
nparr = np.frombuffer(image, np.uint8)
image = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
font = cv2.FONT_HERSHEY_SIMPLEX
color= (255,0,0);thickness=5
df =df.reset_index()
for row in df.iterrows():
row2 = row[1].to_dict()
boxes = row2['boundingBox']
boxes2 = ast.literal_eval(boxes)
ground = boxes2['vertices']
pts = []
for pt in ground:
pts.append([int(pt['x']) ,int(pt['y'])])
cv2.polylines(image, [np.array(pts)],True, color, thickness)
cv2.putText(image, str(row2['text']), (pts[0][0],pts[0][1]), font,
2, (0,0,255), 2, cv2.LINE_AA)
cv2.putText(image, str(row2['tess_text']), (pts[1][0],pts[1][1]), font,
2, (0,255,0), 2, cv2.LINE_AA)
image_path = os.path.join(save_path , '{}.png'.format(page_index))
cv2.imwrite(image_path , image)
def visualize_results(df_paths,thresh):
for df_path in glob.glob(df_paths+"*/*.csv"):
save_path = base_path + df_path.split('/')[-2]+"/"
df = pd.read_csv(df_path)
for idx,(page_path,page_data) in enumerate(zip(df['page_path'],df['page_data'])):
df_string = StringIO(page_data)
page_df = pd.read_csv(df_string, sep=",")
filtered_df = page_df[page_df['score']<thresh]
draw_thresh_box(filtered_df,page_path,idx,save_path)
visualize_results(base_path,vis_thresh)
# -
# # Debugging locally
tgt_text = "TERRITORYOFDELHI GOVERNMENTOFNATIONALCAPITAL" #.replace(" ", "")
gt_text = "GOVERNMENTOFNATIONALCAPITALTERRITORYOFDELHI" #.replace(" ", "")
def remove_space(a):
return a.replace(" ", "")
score = list(SequenceMatcher(None, gt_text, tgt_text).get_matching_blocks())
score
len("GOVERNMENTOFNATIONALCAPITAL")
def longestSubstring(str1,str2):
# initialize SequenceMatcher object with
# input string
seqMatch = SequenceMatcher(None,str1,str2)
# find match of longest sub-string
# output will be like Match(a=0, b=0, size=5)
match = seqMatch.find_longest_match(0, len(str1), 0, len(str2))
# print longest substring
if (match.size!=0):
print (str1[match.a: match.a + match.size])
else:
print ('No longest common sub-string found')
longestSubstring(tgt_text,gt_text)
#d =pd.read_csv("/home/naresh/Tarento/testing_document_processor/test_word_boxes/average/gv.csv")
#d =pd.read_csv("/home/naresh/gv.csv")
d =pd.read_csv("/home/naresh/gv.csv")
from io import StringIO
from leven import levenshtein
df_string = StringIO(d['file_eval_info'][0])
age_df = pd.read_csv(df_string, sep=",")
# age_df = age_df[age_df['score']<0.90]
age_df
s1 = age_df.iloc[-5].tess_text #.replace(" ", "")
s2 = age_df.iloc[-5].text #.replace(" ", "")
# Scratch notes comparing runs:
# tess:
# {'total_words': 6705, 'total_chars': 52502, 'total_match_chars': 52161, 'score': 0.9935050093329778}
# tam_v4:
# {'total_words': 6705, 'total_chars': 52502, 'total_match_chars': 52113, 'score': 0.9925907584472973}
# similar =
# less = 1+1+1+1+1+1+1
# more =
levenshtein(s1, s2)
(len(s1)-4)/len(s1)
# +
def remove_space(a):
    return a.replace(" ", "")

def seq_matcher(tgt_text, gt_text):
    tgt_text = remove_space(tgt_text)
    gt_text = remove_space(gt_text)
    # score = SequenceMatcher(None, gt_text, tgt_text).ratio()  # superseded by the Levenshtein-based score below
    mismatch_count = levenshtein(tgt_text, gt_text)
    match_count = abs(max(len(gt_text), len(tgt_text)) - mismatch_count)
    score = match_count / max(len(gt_text), len(tgt_text))
    return score

def max_score(gt_text_lis, tgt_text):
    score = 0
    gt_text_updated = ""
    for gt_text in gt_text_lis:
        tmp_score = seq_matcher(tgt_text, gt_text)
        if score < tmp_score:
            score = tmp_score
            gt_text_updated = gt_text
    return score, gt_text_updated
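For a quick sanity check without the third-party `leven` package, a roughly equivalent character-accuracy score can be sketched with stdlib `difflib` (matching-block counts rather than Levenshtein distance, so results can differ slightly on reordered text; `char_accuracy` is a hypothetical helper name, not part of the notebook):

```python
from difflib import SequenceMatcher

def char_accuracy(gt_text, tgt_text):
    # spaces are ignored, mirroring remove_space() above
    gt = gt_text.replace(" ", "")
    tgt = tgt_text.replace(" ", "")
    if not gt and not tgt:
        return 1.0
    # total characters covered by the common matching blocks
    matched = sum(block.size for block in
                  SequenceMatcher(None, gt, tgt).get_matching_blocks())
    return matched / max(len(gt), len(tgt))

print(char_accuracy("model and system", "modelandsystem"))  # 1.0 after space removal
```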
# +
import ast

file_scores = []
file_confs = []
below_thresh_score = 0
high_thresh_score = 0
high_conf_low_score = 0
high_score_low_conf = 0
thresh = 90
total_word = 0
for page_idx, page in enumerate(d['page_data']):
    df_string = StringIO(d['page_data'][page_idx])
    df = pd.read_csv(df_string, sep=",")
    for row_idx, row in enumerate(df['conf_dict']):
        try:
            score = df.iloc[row_idx]['score']
            gt_text = df.iloc[row_idx]['text']
            gt_text_lis = gt_text.split(' ')
            confs = 0
            row2 = ast.literal_eval(row)
            for word_idx, word in enumerate(row2['text']):
                text = row2['text'][word_idx]
                conf = float(row2['conf'][word_idx])
                score, gt_text = max_score(gt_text_lis, text)
                file_confs.append(conf)
                file_scores.append(score*100)
                # confs = confs + float(row2['conf'][word_idx])
                # if len(row2['text'])!=0:
                #     avg_conf = confs/len(row2['text'])
                #     file_confs.append(avg_conf)
                #     file_scores.append(score*100)
                if score*100 < thresh and conf < thresh:
                    below_thresh_score += 1
                    total_word += 1
                elif score*100 < thresh and conf > thresh:
                    high_conf_low_score += 1
                    total_word += 1
                elif score*100 > thresh and conf < thresh:
                    high_score_low_conf += 1
                    total_word += 1
                elif score*100 > thresh and conf > thresh:
                    high_thresh_score += 1
                    total_word += 1
        except Exception:
            pass
# -
print("below_thresh_score", below_thresh_score/total_word)
print("high_thresh_score" , high_thresh_score/total_word)
print("high_score_low_conf", high_score_low_conf/total_word)
print("high_conf_low_score" , high_conf_low_score/total_word)
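The four counters above are a 2x2 split of word-level accuracy score vs. OCR confidence around a single threshold. The same logic as a standalone helper (`bucket` is a hypothetical name; score is assumed already scaled to 0-100):

```python
def bucket(score, conf, thresh=90):
    # 2x2 split of accuracy score vs. OCR confidence (both on 0-100);
    # values exactly at the threshold fall into no bucket, as in the loop above
    if score < thresh and conf < thresh:
        return "below_thresh_score"
    if score < thresh and conf > thresh:
        return "high_conf_low_score"
    if score > thresh and conf < thresh:
        return "high_score_low_conf"
    if score > thresh and conf > thresh:
        return "high_thresh_score"
    return None

print(bucket(95, 80))  # high_score_low_conf
```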
# +
import matplotlib.pyplot as plt
#plt.plot(file_scores,file_confs,'g')
#plt.hist((file_scores,file_confs),label = ("score", "confidence"),bins=[0.5,1.0])
plt.bar(file_scores, file_confs)
plt.xlabel("score")
plt.ylabel("confidence")
plt.show()
# -
print(SequenceMatcher(lambda x : x==".", s1, s2).ratio())
seq = SequenceMatcher(lambda x : x==".", s1, s2)
print(seq.find_longest_match(0,len(s1),0,len(s2)))
print(SequenceMatcher(None, s2, s1).ratio())
seq = SequenceMatcher(lambda x : x==".", s2, s1)
print(seq.find_longest_match(0,len(s2),0,len(s1)))
(188+110+17+30)/351
s3='.modelandandsystem.finalizingevaluation.stackb.Iwas,'
s2="modelandandsystemfinalizingevaluationstack..Iwas."
score = list(SequenceMatcher(None, s3, s2).get_matching_blocks())
score
s1
s2
levenshtein(s2, s1)
count = 0
for i in model.get_matches()['Similarity']:
    if i != 0.0:
        count = count + 1
len(model.get_matches()['Similarity'])
43/50
s="modelandandsystem.finalizingevaluation.stackb.Iwas,"
s3 = "modelandandsystemfinalizingevaluationstack..Iwas."
score = list(SequenceMatcher(None, s3, s).get_matching_blocks())
score
# +
# good line:    {'total_chars': 78024, 'total_match_chars': 77301, 'score': 0.9907336204244848}
# average line: {'total_chars': 200192, 'total_match_chars': 182177, 'score': 0.9100113890664961}
# bad line:     {'total_chars': 159366, 'total_match_chars': 141919, 'score': 0.8905224451890617}
# good_word:    {'total_chars': 78024, 'total_match_chars': 73859, 'score': 0.9466189890290168}
# average_word: {'total_chars': 200192, 'total_match_chars': 187318, 'score': 0.9356917359335039}
# bad_word:     {'total_chars': 159358, 'total_match_chars': 143968, 'score': 0.9034249927835439}
# good_craft_word:    {'total_chars': 78024, 'total_match_chars': 76006, 'score': 0.9741361632318261}
# average_craft_word: {'total_chars': 200148, 'total_match_chars': 190506, 'score': 0.9518256490197254}
# bad_craft_word:     {'total_chars': 159472, 'total_match_chars': 143604, 'score': 0.9004966389083977}
# good_craft_line:    {'total_chars': 78007, 'total_match_chars': 77143, 'score': 0.9889240709167126}
# average_craft_line: {'total_chars': 200164, 'total_match_chars': 173005, 'score': 0.8643162606662537}
# bad_craft_line:     {'total_chars': 159223, 'total_match_chars': 145184, 'score': 0.9118280650408547}
# updated line with word ocr for table:
# good_line_craft:    {'total_chars': 78024, 'total_match_chars': 77203, 'score': 0.9894775966369322}
# average_line_craft: {'total_chars': 200164, 'total_match_chars': 192864, 'score': 0.9635299054775085}
# bad_line_craft:     {'total_chars': 159473, 'total_match_chars': 151917, 'score': 0.9526189386291096}
# updated line with line and dataframe:
# good_line_craft:    {'total_chars': 78024, 'total_match_chars': 77100, 'score': 0.988157490003076}
# average_line_craft: {'total_chars': 200164, 'total_match_chars': 193285, 'score': 0.9656331807917508}
# bad_line_craft:     {'total_chars': 159473, 'total_match_chars': 151917, 'score': 0.9332363472186515}
# updated line with line and string:
# good_line_craft:    {'total_chars': 78024, 'total_match_chars': 77274, 'score': 0.9903875730544448}
# average_line_craft: {'total_chars': 200164, 'total_match_chars': 194295, 'score': 0.9706790431845886}
# bad_line_craft:     {'total_chars': 159473, 'total_match_chars': 151512, 'score': 0.9500793237726762}
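All `score` values in these notes are plain character accuracy, `total_match_chars / total_chars`. A quick check against the last "good_line_craft" entry (`ocr_score` is a hypothetical helper name):

```python
def ocr_score(total_match_chars, total_chars):
    # character-level accuracy used throughout these result notes
    return total_match_chars / total_chars

print(ocr_score(77274, 78024))  # 0.9903875730544448
```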
# +
# {'total_words': 10331, 'total_chars': 51109, 'total_match_chars': 48696, 'score': 0.9527871803400575, 'g_total_match_chars': 49480, 'g_score': 0.9681269443737893}
# +
##### english ocr evaluation
# good_craft_line_tess:       {'total_words': 6846, 'total_chars': 34109, 'total_match_chars': 33829, 'score': 0.9917910228971826}
# good_craft_line_google:     {'total_words': 6846, 'total_chars': 34109, 'g_total_match_chars': 33854, 'g_score': 0.9925239672813627}
# average_craft_line_tess:    {'total_words': 9316, 'total_chars': 47852, 'total_match_chars': 47037, 'score': 0.9829683189835325}
# average_craft_line_google:  {'total_words': 9316, 'total_chars': 47852, 'g_total_match_chars': 47190, 'g_score': 0.9861656775056424}
# bad_craft_line_tess:        {'total_words': 10336, 'total_chars': 51089, 'total_match_chars': 47752, 'score': 0.9346826126954921}
# bad_craft_line_google:      {'total_words': 10336, 'total_chars': 51089, 'g_total_match_chars': 48551, 'g_score': 0.9503219871205152}
# good_google_line_tess:      {'total_words': 6846, 'total_chars': 34109, 'total_match_chars': 33901, 'score': 0.9939019027236213}
# good_google_line_google:    {'total_words': 6846, 'total_chars': 34109, 'g_total_match_chars': 33951, 'g_score': 0.9953677914919816}
# average_google_line_tess:   {'total_words': 9319, 'total_chars': 47864, 'total_match_chars': 47147, 'score': 0.9850200568276785}
# average_google_line_google: {'total_words': 9319, 'total_chars': 47864, 'g_total_match_chars': 47372, 'g_score': 0.9897208758148086}
# bad_google_line_tess:       {'total_words': 10331, 'total_chars': 51109, 'total_match_chars': 48696, 'score': 0.9527871803400575}
# bad_google_line_google:     {'total_words': 10331, 'total_chars': 51109, 'g_total_match_chars': 49480, 'g_score': 0.9681269443737893}
# -
# +
############## hindi ocr evaluation
# good_line_v1: {'total_chars': 25527, 'total_match_chars': 23460, 'score': 0.9132185529047675}
# accuracy with dynamic adjustment:
# good_line_v2: {'total_chars': 25527, 'total_match_chars': 23731, 'score': 0.929643122967838}
# after horizontal merging integration in craft:
# good_craft_line_tess_model:   {'total_chars': 25527, 'total_match_chars': 24855, 'score': 0.9736749324244918}
# good_craft_line_indic_model:  {'total_words': 6021, 'total_chars': 25527, 'total_match_chars': 21621, 'score': 0.8469855447173581}
# good_craft_word_indic_model:  {'total_words': 6019, 'total_chars': 25527, 'total_match_chars': 20843, 'score': 0.816508011125475}
# good_craft_word_google_model: {'total_words': 6019, 'total_chars': 25527, 'g_total_match_chars': 24494, 'g_score': 0.9595330434441963}
# good_craft_word_tess_model:   {'total_words': 6019, 'total_chars': 25527, 'total_match_chars': 23699, 'score': 0.9283895483213852}
# good_craft_line_google_model: {'total_words': 6021, 'total_chars': 25527, 'g_total_match_chars': 25237, 'g_score': 0.9886394797665217}
# average_craft_line_tess_model:   {'total_words': 7203, 'total_chars': 34204, 'total_match_chars': 31215, 'score': 0.9126125599345106}
# average_craft_line_google_model: {'total_words': 7203, 'total_chars': 34204, 'g_total_match_chars': 32927, 'g_score': 0.9626651853584376}
# bad_craft_line_tess_model:       {'total_words': 7012, 'total_chars': 33491, 'total_match_chars': 26700, 'score': 0.7972291063270729}
# bad_craft_line_google_model:     {'total_words': 7012, 'total_chars': 33491, 'g_total_match_chars': 30540, 'g_score': 0.9118867755516408}
# average_craft_line_indic_model:  {'total_words': 7203, 'total_chars': 34223, 'total_match_chars': 25220, 'score': 0.7369313035093358}
# bad_craft_line_indic_model:      {'total_words': 7012, 'total_chars': 33491, 'total_match_chars': 21541, 'score': 0.6431877220745872}
# bad_craft_word_tess_model:       {'total_words': 7005, 'total_chars': 33491, 'total_match_chars': 27652, 'score': 0.825654653488997}
# bad_craft_word_google_model:     {'total_words': 7005, 'total_chars': 33491, 'g_total_match_chars': 30371, 'g_score': 0.9068406437550387}
# average_craft_word_tess_model:   {'total_words': 7190, 'total_chars': 34221, 'total_match_chars': 29902, 'score': 0.8737909470792788}
# average_craft_word_google_model: {'total_words': 7190, 'total_chars': 34221, 'g_total_match_chars': 31197, 'g_score': 0.9116332076794951}
# average_google_line_tess_model:  {'total_words': 7202, 'total_chars': 34196, 'total_match_chars': 31528, 'score': 0.9219791788513276}
# average_google_line_google_model: {'total_words': 7202, 'total_chars': 34196, 'g_total_match_chars': 33178, 'g_score': 0.9702304363083402}
# bad_google_line_tess_model:      {'total_words': 6996, 'total_chars': 33424, 'total_match_chars': 27385, 'score': 0.8193214456677836}
# bad_google_line_google_model:    {'total_words': 6996, 'total_chars': 33424, 'g_total_match_chars': 30950, 'g_score': 0.9259813307802777}
# +
########### ocr evaluation tamil
# good_craft_line_indic:  {'total_words': 3848, 'total_chars': 32767, 'total_match_chars': 31020, 'score': 0.9466841639454329}
# good_craft_line_google: {'total_words': 3848, 'total_chars': 32767, 'g_total_match_chars': 31964, 'g_score': 0.9754936368907743}
# average_craft_line_indic:  {'total_words': 3951, 'total_chars': 22381, 'total_match_chars': 19955, 'score': 0.8916044859479022}
# average_craft_line_google: {'total_words': 3951, 'total_chars': 22381, 'g_total_match_chars': 21687, 'g_score': 0.9689915553371163}
# bad_craft_line_tess:   {'total_words': 5653, 'total_chars': 39796, 'total_match_chars': 36193, 'score': 0.9094632626394612}
# bad_craft_line_google: {'total_words': 5653, 'total_chars': 39796, 'g_total_match_chars': 38257, 'g_score': 0.9613277716353402}
# good_craft_line_tess:    {'total_words': 3848, 'total_chars': 32767, 'total_match_chars': 31197, 'score': 0.9520859401226844}
# average_craft_line_tess: {'total_words': 3951, 'total_chars': 22381, 'total_match_chars': 20335, 'score': 0.908583173227291}
# bad_craft_line_indic:    {'total_words': 5653, 'total_chars': 39796, 'total_match_chars': 36055, 'score': 0.9059955774449694}
# bad_craft_word_indic:  {'total_words': 3848, 'total_chars': 32767, 'total_match_chars': 25359, 'score': 0.7739188818018128}
# bad_craft_word_google: {'total_words': 3848, 'total_chars': 32767, 'g_total_match_chars': 26457, 'g_score': 0.8074282052064577}
# bad_craft_word_tess:   {'total_words': 3848, 'total_chars': 32767, 'total_match_chars': 25589, 'score': 0.7809381389812922}
# average_craft_word_indic:  {'total_words': 3950, 'total_chars': 22381, 'total_match_chars': 17921, 'score': 0.8007238282471739}
# average_craft_word_google: {'total_words': 3950, 'total_chars': 22381, 'g_total_match_chars': 19895, 'g_score': 0.8889236405879988}
# average_craft_word_tess:   {'total_words': 3950, 'total_chars': 22381, 'total_match_chars': 18441, 'score': 0.8239578213663376}
# good_craft_word_indic:  {'total_words': 5647, 'total_chars': 39793, 'total_match_chars': 34343, 'score': 0.8630412384087653}
# good_craft_word_google: {'total_words': 5647, 'total_chars': 39793, 'g_total_match_chars': 36204, 'g_score': 0.9098082577337723}
# good_craft_word_tess:   {'total_words': 5647, 'total_chars': 39793, 'total_match_chars': 34534, 'score': 0.8678410775764581}
# good_google_line_tess:    {'total_words': 3848, 'total_chars': 32767, 'total_match_chars': 31375, 'score': 0.9575182348094119}
# average_google_line_tess: {'total_words': 3951, 'total_chars': 22383, 'total_match_chars': 20178, 'score': 0.9014877362283876}
# bad_google_line_tess:     {'total_words': 5646, 'total_chars': 39599, 'total_match_chars': 36327, 'score': 0.9173716507992626}
# good_google_line_indic:    {'total_words': 3848, 'total_chars': 32767, 'total_match_chars': 31471, 'score': 0.9604480117191077}
# average_google_line_indic: {'total_words': 3951, 'total_chars': 22383, 'total_match_chars': 20507, 'score': 0.9161863914578028}
# bad_google_line_indic:     {'total_words': 5646, 'total_chars': 39599, 'total_match_chars': 36192, 'score': 0.9139624737998434}
# good_google_line_google:    {'total_words': 3848, 'total_chars': 32767, 'g_total_match_chars': 32267, 'g_score': 0.9847407452620014}
# average_google_line_google: {'total_words': 3951, 'total_chars': 22383, 'g_total_match_chars': 21804, 'g_score': 0.9741321538667739}
# bad_google_line_google:     {'total_words': 5646, 'total_chars': 39599, 'g_total_match_chars': 38179, 'g_score': 0.964140508598702}
# good_google_word_tess:    {'total_words': 3848, 'total_chars': 32767, 'total_match_chars': 30459, 'score': 0.9295632801293985}
# average_google_word_tess: {'total_words': 3950, 'total_chars': 22381, 'total_match_chars': 19742, 'score': 0.8820874849202448}
# bad_google_word_tess:     {'total_words': 5627, 'total_chars': 39609, 'total_match_chars': 36142, 'score': 0.9124693882703426}
# good_google_word_indic:    {'total_words': 3848, 'total_chars': 32767, 'total_match_chars': 30695, 'score': 0.9367656483657338}
# average_google_word_indic: {'total_words': 3950, 'total_chars': 22381, 'total_match_chars': 20317, 'score': 0.90777891961932}
# bad_google_word_indic:     {'total_words': 5627, 'total_chars': 39609, 'total_match_chars': 36120, 'score': 0.9119139589487237}
# good_google_word_google:    {'total_words': 3848, 'total_chars': 32767, 'g_total_match_chars': 31601, 'g_score': 0.9644154179509873}
# average_google_word_google: {'total_words': 3950, 'total_chars': 22381, 'g_total_match_chars': 21673, 'g_score': 0.9683660247531388}
# bad_google_word_google:     {'total_words': 5627, 'total_chars': 39609, 'g_total_match_chars': 37843, 'g_score': 0.9554141735464162}
# +
########### ocr evaluation kannada
# good_craft_line_tess:   {'total_words': 3401, 'total_chars': 24497, 'total_match_chars': 23376, 'score': 0.954239294607503}
# good_craft_line_google: {'total_words': 3401, 'total_chars': 24497, 'g_total_match_chars': 23991, 'g_score': 0.979344409519533}
# average_craft_line_tess:   {'total_words': 4802, 'total_chars': 33476, 'total_match_chars': 31385, 'score': 0.9375373401840125}
# average_craft_line_google: {'total_words': 4802, 'total_chars': 33476, 'g_total_match_chars': 32518, 'g_score': 0.9713824829728761}
# bad_craft_line_tess:   {'total_words': 4455, 'total_chars': 29789, 'total_match_chars': 26543, 'score': 0.8910336030078216}
# bad_craft_line_google: {'total_words': 4455, 'total_chars': 29789, 'g_total_match_chars': 28634, 'g_score': 0.9612272986672933}
# good_craft_word_tess:   {'total_words': 3401, 'total_chars': 24497, 'total_match_chars': 20137, 'score': 0.8220190227374781}
# good_craft_word_google: {'total_words': 3401, 'total_chars': 24497, 'g_total_match_chars': 19932, 'g_score': 0.8136506511001347}
# average_craft_word_tess:   {'total_words': 4802, 'total_chars': 33476, 'total_match_chars': 29141, 'score': 0.8705042418449038}
# average_craft_word_google: {'total_words': 4802, 'total_chars': 33476, 'g_total_match_chars': 28329, 'g_score': 0.8462480583104314}
# bad_craft_word_tess:   {'total_words': 4452, 'total_chars': 29789, 'total_match_chars': 24072, 'score': 0.8080835207626976}
# bad_craft_word_google: {'total_words': 4452, 'total_chars': 29789, 'g_total_match_chars': 25572, 'g_score': 0.8584376783376414}
# good_google_line_tess:   {'total_words': 3401, 'total_chars': 24497, 'total_match_chars': 23551, 'score': 0.96138302649304}
# good_google_line_google: {'total_words': 3401, 'total_chars': 24497, 'g_total_match_chars': 24234, 'g_score': 0.9892639915091644}
# average_google_line_tess:   {'total_words': 4801, 'total_chars': 33470, 'total_match_chars': 31849, 'score': 0.9515685688676426}
# average_google_line_google: {'total_words': 4801, 'total_chars': 33470, 'g_total_match_chars': 33037, 'g_score': 0.9870630415297281}
# bad_google_line_tess:   {'total_words': 4455, 'total_chars': 29789, 'total_match_chars': 26842, 'score': 0.9010708650844271}
# bad_google_line_google: {'total_words': 4455, 'total_chars': 29789, 'g_total_match_chars': 28840, 'g_score': 0.9681426029742523}
# good_google_word_tess:   {'total_words': 3400, 'total_chars': 24495, 'total_match_chars': 21836, 'score': 0.8914472341294142}
# good_google_word_google: {'total_words': 3400, 'total_chars': 24495, 'g_total_match_chars': 21986, 'g_score': 0.8975709328434375}
# average_google_word_tess:   {'total_words': 4801, 'total_chars': 33470, 'total_match_chars': 30428, 'score': 0.9091126381834479}
# average_google_word_google: {'total_words': 4801, 'total_chars': 33470, 'g_total_match_chars': 29172, 'g_score': 0.8715864953689871}
# bad_google_word_tess:   {'total_words': 4452, 'total_chars': 29789, 'total_match_chars': 25816, 'score': 0.8666286213031656}
# bad_google_word_google: {'total_words': 4452, 'total_chars': 29789, 'g_total_match_chars': 26619, 'g_score': 0.8935848803249522}
# +
############## tamil average shree
# processing started for page no. 0
# page level score {'total_chars': 826, 'total_match_chars': 740, 'g_total_match_chars': 0}
# processing completed for page in 3241811887.59696
# processing started for page no. 1
# page level score {'total_chars': 1163, 'total_match_chars': 1131, 'g_total_match_chars': 0}
# processing completed for page in 3241812449.641782
# processing started for page no. 2
# page level score {'total_chars': 1938, 'total_match_chars': 1924, 'g_total_match_chars': 0}
# processing completed for page in 3241812587.320466
# processing started for page no. 3
# page level score {'total_chars': 605, 'total_match_chars': 547, 'g_total_match_chars': 0}
# processing completed for page in 3241813056.306098
# processing started for page no. 4
# page level score {'total_chars': 1803, 'total_match_chars': 1785, 'g_total_match_chars': 0}
# processing completed for page in 3241813530.625787
# processing started for page no. 5
# page level score {'total_chars': 1390, 'total_match_chars': 1352, 'g_total_match_chars': 0}
# processing completed for page in 3241813736.493059
# processing completed for page in 3241824472.8735504
# processing started for page no. 1
# page level score {'total_chars': 784, 'total_match_chars': 734, 'g_total_match_chars': 0}
# processing completed for page in 3241825261.708538
# processing started for page no. 2
# page level score {'total_chars': 852, 'total_match_chars': 802, 'g_total_match_chars': 0}
# processing completed for page in 3241826577.777154
# processing started for page no. 3
# page level score {'total_chars': 1050, 'total_match_chars': 1038, 'g_total_match_chars': 0}
# processing completed for page in 3241827256.8978252
# processing started for page no. 4
# page level score {'total_chars': 718, 'total_match_chars': 637, 'g_total_match_chars': 0}
# processing completed for page in 3241827694.849202
# processing started for page no. 5
# page level score {'total_chars': 759, 'total_match_chars': 698, 'g_total_match_chars': 0}
# processing completed for page in 3241828478.2017365
# processing started for page no. 6
# page level score {'total_chars': 485, 'total_match_chars': 437, 'g_total_match_chars': 0}
# processing completed for page in 3241828927.7678914
# processing started for page no. 7
# page level score {'total_chars': 622, 'total_match_chars': 537, 'g_total_match_chars': 0}
# processing completed for page in 3241829308.369837
# processing started for page no. 8
# page level score {'total_chars': 691, 'total_match_chars': 625, 'g_total_match_chars': 0}
# processing completed for page in 3241830018.1830215
# page level score {'total_chars': 1711, 'total_match_chars': 1642, 'g_total_match_chars': 0}
# processing completed for page in 3241849648.098544
# processing started for page no. 1
# page level score {'total_chars': 1545, 'total_match_chars': 1502, 'g_total_match_chars': 0}
# processing completed for page in 3241849787.963669
# processing started for page no. 2
# page level score {'total_chars': 1652, 'total_match_chars': 1585, 'g_total_match_chars': 0}
# processing completed for page in 3241849934.5317726
# processing started for page no. 3
# page level score {'total_chars': 1677, 'total_match_chars': 1612, 'g_total_match_chars': 0}
# processing completed for page in 3241850090.2137995
# processing started for page no. 4
# page level score {'total_chars': 1927, 'total_match_chars': 1834, 'g_total_match_chars': 0}
# processing completed for page in 3241850241.3729124
# processing started for page no. 5
# page level score {'total_chars': 349, 'total_match_chars': 330, 'g_total_match_chars': 0}
# processing completed for page in 3241850335.562466
# processing started for page no. 6
# page level score {'total_chars': 1446, 'total_match_chars': 1358, 'g_total_match_chars': 0}
# processing completed for page in 3241850431.1224203
# +
import glob

txt_path = "/home/naresh/Tarento/testing_document_processor/ocr_benchamark_data/tamil/curated_training_data/*/*.txt"
csv_path = "/home/naresh/Tarento/testing_document_processor/ocr_benchamark_data/tamil/curated_training_data/*/*.csv"
for text_file, csv_file in zip(sorted(glob.glob(txt_path)), sorted(glob.glob(csv_path))):
    with open(text_file, "r") as txt_file:
        lines = txt_file.readlines()
    df = pd.read_csv(csv_file)
    df = df.reset_index(drop=True)
    for line in lines:
        for idx, row in df.iterrows():
            if row['key'] == line.rstrip("\n"):
                df = df.drop(index=idx, axis=0)
    df.to_csv(csv_file)
# -
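The row-by-row drop in the cell above can be done in one vectorized step with pandas `isin`; a sketch on a toy frame (the `key` column name matches the CSVs above, everything else here is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "b", "c"], "val": [1, 2, 3]})
lines = ["b\n", "c\n"]  # lines as read from a .txt file

# drop every row whose key matches a stripped line
keys = {line.rstrip("\n") for line in lines}
df = df[~df["key"].isin(keys)].reset_index(drop=True)
print(df["key"].tolist())  # ['a']
```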
# source: anuvaad-etl/anuvaad-extractor/document-processor/evaluator/evaluator_string/src/notebooks/tesseract_ocr_evaluation.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
#     display_name: Python 3
#     language: python
#     name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="-zJhryIy2OHz" outputId="9dce4ad4-ae52-49dd-d3f0-66b90112e0b6"
# !nvidia-smi
# + [markdown] id="v4OdNCeP18-F"
# # Imports
# + id="Eo-Pfm2BApZU" colab={"base_uri": "https://localhost:8080/"} outputId="c880e8ed-af24-4fd7-8c2c-2a4adf95de83"
from google.colab import drive
drive.mount('/content/drive')
# + id="or1bXxRcBqn4"
# !cp '/content/drive/My Drive/GIZ Zindi/Train.csv' .
# !cp '/content/drive/My Drive/GIZ Zindi/SampleSubmission.csv' .
# + id="LZlxM2g-1dzv"
# !cp '/content/drive/My Drive/GIZ Zindi/AdditionalUtterances.zip' AdditionalUtterances.zip
# + id="uAWDjYdh1m0m"
# !unzip -q AdditionalUtterances.zip
# + id="QgLBGRGz1yq2"
# Copy the files in and unzip
# !cp '/content/drive/My Drive/GIZ Zindi/audio_files.zip' audio_files.zip
# !unzip -q audio_files.zip
# + id="H7GH-9qUm3_k"
# !cp "/content/drive/My Drive/GIZ Zindi/nlp_keywords_29Oct2020.zip" nlp_keywords_29Oct2020.zip
# !unzip -q nlp_keywords_29Oct2020.zip
# + id="sBv1Gkw2Rje3" colab={"base_uri": "https://localhost:8080/"} outputId="703bcb8a-11a2-4e02-ca93-cb500a6129a8"
# !pip -q install efficientnet_pytorch
# + id="t-5agYag6nPg" colab={"base_uri": "https://localhost:8080/"} outputId="81b55288-3e09-4e8c-e6e8-c2c402970ec3"
# !pip install -q python_speech_features
# + id="i0epTZBG7Zr_" colab={"base_uri": "https://localhost:8080/"} outputId="9f869927-d24e-40ef-8bd1-8bce5477ec4e"
# !pip -q install albumentations --upgrade
# + id="w24RQCaX0Zyi"
import os
from PIL import Image
from sklearn.model_selection import train_test_split
from torchvision import datasets, models
from torch.utils.data import DataLoader, Dataset
import torch.nn as nn
import torch
import torchvision.models as models
from efficientnet_pytorch import EfficientNet
from torch.optim.lr_scheduler import MultiStepLR
from torch.optim.lr_scheduler import OneCycleLR
import pandas as pd
import numpy as np
import sklearn
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score
from tqdm.notebook import tqdm as tqdm
import librosa
import librosa.display as display
import python_speech_features as psf
from matplotlib import pyplot as plt
import albumentations
from torch.nn import Module,Sequential
import gc
import cv2
import multiprocessing as mp
from multiprocessing import Pool
from albumentations.augmentations.transforms import Lambda
import IPython.display as ipd
# + id="h5X002A-P4-i"
N_WORKERS = mp.cpu_count()
LOAD_TRAIN_DATA = None
LOAD_TEST_DATA = None
# + id="Ba854myQBcfU"
import random
import numpy as np
SEED_VAL = 1000
# Set the seed value all over the place to make this reproducible.
def seed_all(SEED=SEED_VAL):
    random.seed(SEED)
    np.random.seed(SEED)
    torch.manual_seed(SEED)
    torch.cuda.manual_seed_all(SEED)
    os.environ['PYTHONHASHSEED'] = str(SEED)
    torch.backends.cudnn.deterministic = True
# + [markdown] id="jd9HGTz31yKi"
# ## New utterances
# + id="SHkm4SML11Ek" colab={"base_uri": "https://localhost:8080/"} outputId="a40806e7-a11c-41e9-dfae-024d79b2f2bf"
def create_new_train():
    import glob
    dirs = glob.glob('latest_keywords/*')
    # first wave
    new_atterances = pd.DataFrame()
    labels = []
    fn = []
    for dir in dirs:
        wav_paths = glob.glob(dir+'/*')
        fn.extend(wav_paths)
        labels.extend(len(wav_paths) * [dir.split('/')[-1]])
    new_atterances['fn'] = fn
    new_atterances['label'] = labels
    new_atterances = new_atterances.sample(frac=1, random_state=SEED_VAL).reset_index(drop=True)
    # second wave
    dirs = glob.glob('nlp_keywords/*')
    new_atterances1 = pd.DataFrame()
    labels = []
    fn = []
    for dir in dirs:
        wav_paths = glob.glob(dir+'/*')
        fn.extend(wav_paths)
        labels.extend(len(wav_paths) * [dir.split('/')[-1]])
    new_atterances1['fn'] = fn
    new_atterances1['label'] = labels
    new_atterances1 = new_atterances1.sample(frac=1, random_state=SEED_VAL).reset_index(drop=True)
    # combine all
    train = pd.read_csv('Train.csv')
    train = pd.concat([train, new_atterances, new_atterances1])
    train = train.sample(frac=1, random_state=SEED_VAL).reset_index(drop=True)
    train.to_csv('new_train.csv', index=False)
    print(train.head())
    print(train.shape)
create_new_train()
# + [markdown] id="tZniD6ThCw6a"
# # DataLoader
# + id="mwQd_y6hQvIU"
class conf:
    sampling_rate = 44100
    duration = 3  # sec
    hop_length = 200 * duration  # to make time steps 128
    fmin = 20
    fmax = sampling_rate // 2
    n_mels = 128
    n_fft = n_mels * 20
    padmode = 'constant'
    samples = sampling_rate * duration

def get_default_conf():
    return conf

conf = get_default_conf()
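A quick arithmetic check on the spectrogram shape these settings imply: `n_mels = 128` rows, and (assuming librosa's default centered framing, `1 + samples // hop_length` frames) about 221 columns for a 3 s clip, so the "128 time steps" note on `hop_length` looks stale:

```python
sampling_rate = 44100
duration = 3                  # sec
hop_length = 200 * duration   # 600
samples = sampling_rate * duration

# centered STFT framing gives 1 + samples // hop_length frames
n_frames = samples // hop_length + 1
print(n_frames)  # 221
```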
# + id="LyGR5S46S5S0"
def melspectogram_dB(file_path, cst=3, top_db=80.):
    row_sound, sr = librosa.load(file_path, sr=conf.sampling_rate)
    # pad or truncate to exactly cst seconds
    sound = np.zeros((cst*sr,))
    if row_sound.shape[0] < cst*sr:
        sound[:row_sound.shape[0]] = row_sound[:]
    else:
        sound[:] = row_sound[:cst*sr]
    spec = librosa.feature.melspectrogram(sound,
                                          sr=conf.sampling_rate,
                                          n_mels=conf.n_mels,
                                          hop_length=conf.hop_length,
                                          n_fft=conf.n_fft,
                                          fmin=conf.fmin,
                                          fmax=conf.fmax)
    spec_db = librosa.power_to_db(spec)
    spec_db = spec_db.astype(np.float32)
    return spec_db

def spec_to_image(spec, eps=1e-6):
    mean = spec.mean()
    std = spec.std()
    spec_norm = (spec - mean) / (std + eps)
    spec_min, spec_max = spec_norm.min(), spec_norm.max()
    spec_img = 255 * (spec_norm - spec_min) / (spec_max - spec_min)
    return spec_img.astype(np.uint8)

def preprocess_audio(audio_path):
    spec = melspectogram_dB(audio_path)
    spec = spec_to_image(spec)
    return spec
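`spec_to_image` standardizes the spectrogram and then min-max scales it into the 8-bit range, so the extremes always map to 0 and 255; restated here on a toy array so the snippet runs standalone:

```python
import numpy as np

def spec_to_image(spec, eps=1e-6):
    # standardize, then min-max scale into the uint8 range (as above)
    spec_norm = (spec - spec.mean()) / (spec.std() + eps)
    smin, smax = spec_norm.min(), spec_norm.max()
    return (255 * (spec_norm - smin) / (smax - smin)).astype(np.uint8)

img = spec_to_image(np.array([[0.0, 1.0], [2.0, 3.0]]))
print(img.dtype, img.min(), img.max())  # uint8 0 255
```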
# + colab={"base_uri": "https://localhost:8080/", "height": 280} id="tlgD1PPLs8fA" outputId="5f7fc973-187c-40a8-c680-e883d0acbc73"
spec = preprocess_audio('/content/audio_files/IV38R7F.wav')
print(spec.shape)
plt.imshow(spec)
# + id="OmO42IhrTeIu" colab={"base_uri": "https://localhost:8080/", "height": 280} outputId="689eefd5-8a46-472b-acd8-8e3af3a52199"
spec = preprocess_audio('audio_files/B6NYZM0.wav')
print(spec.shape)
plt.imshow(spec)
# + id="fFdXzGpuFeQI"
def get_data(df, mode='train'):
    """
    :param df: dataframe of train or test
    :return: images_list: spec images of all the data
    :return: label_list: label list of all the data (train mode only)
    """
    audio_paths = df.fn.values
    with mp.Pool(N_WORKERS) as pool:
        images_list = pool.map(preprocess_audio, tqdm(audio_paths))
    if mode == 'train':
        label_list = df.label.values
        return images_list, label_list
    return images_list
# + id="PV6u_nW3pc31"
class ImageDataset(Dataset):
    def __init__(self, images_list, labels_list=None, transform=None):
        self.images_list = images_list
        self.transform = transform
        self.labels_list = labels_list

    def __getitem__(self, index):
        spec = self.images_list[index]
        if self.transform is not None:
            spec = self.transform(image=spec)
            spec = spec['image']
        if self.labels_list is not None:
            label = self.labels_list[index]
            return {'image': torch.tensor(spec, dtype=torch.float),
                    'label': torch.tensor(label, dtype=torch.long)}
        return {'image': torch.tensor(spec, dtype=torch.float)}

    def __len__(self):
        return len(self.images_list)
# + [markdown] id="vOQv1YlR3jJu"
# # Models and train functions
# + id="njGRGejm2i6D"
class Net(nn.Module):
def __init__(self,name):
super(Net, self).__init__()
self.name = name
#self.convert_3_channels = nn.Conv2d(1,3,2,padding=1)
if name == 'b0':
self.arch = EfficientNet.from_pretrained('efficientnet-b0')
self.arch._fc = nn.Linear(in_features=1280, out_features=193, bias=True)
elif name == 'b1':
self.arch = EfficientNet.from_pretrained('efficientnet-b1')
self.arch._fc = nn.Linear(in_features=1280, out_features=193, bias=True)
elif name == 'b2':
self.arch = EfficientNet.from_pretrained('efficientnet-b2')
self.arch._fc = nn.Linear(in_features=1408, out_features=193, bias=True)
elif name =='b3':
self.arch = EfficientNet.from_pretrained('efficientnet-b3')
self.arch._fc = nn.Linear(in_features=1536, out_features=193, bias=True)
elif name =='b4':
self.arch = EfficientNet.from_pretrained('efficientnet-b4')
self.arch._fc = nn.Linear(in_features=1792, out_features=193, bias=True,)
elif name =='b5':
self.arch = EfficientNet.from_pretrained('efficientnet-b5')
self.arch._fc = nn.Linear(in_features=2048, out_features=193, bias=True)
elif name =='b6':
self.arch = EfficientNet.from_pretrained('efficientnet-b6')
self.arch._fc = nn.Linear(in_features=2304, out_features=193, bias=True)
elif name =='b7':
self.arch = EfficientNet.from_pretrained('efficientnet-b7')
self.arch._fc = nn.Linear(in_features=2560, out_features=193, bias=True)
elif name == 'densenet121':
self.arch = models.densenet121(pretrained=True)
num_ftrs = self.arch.classifier.in_features
self.arch.classifier = nn.Linear(num_ftrs,193,bias=True)
elif name == 'densenet169':
self.arch = models.densenet169(pretrained=True)
num_ftrs = self.arch.classifier.in_features
self.arch.classifier = nn.Linear(num_ftrs,193,bias=True)
elif name == 'densenet201':
self.arch = models.densenet201(pretrained=True)
num_ftrs = self.arch.classifier.in_features
self.arch.classifier = nn.Linear(num_ftrs,193,bias=True)
elif name == 'resnet50':
self.arch = models.resnet50(pretrained=True)
num_ftrs = self.arch.fc.in_features
self.arch.fc = nn.Linear(num_ftrs,193,bias=True)
elif name == 'resnet101':
self.arch = models.resnet101(pretrained=True)
num_ftrs = self.arch.fc.in_features
self.arch.fc = nn.Linear(num_ftrs,193,bias=True)
elif name == 'resnet152':
self.arch = models.resnet152(pretrained=True)
num_ftrs = self.arch.fc.in_features
self.arch.fc = nn.Linear(num_ftrs,193,bias=True)
elif name == 'resnet18':
self.arch = models.resnet18(pretrained=True)
my_weight = self.arch.conv1.weight.mean(dim=1, keepdim=True)
self.arch.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
self.arch.conv1.weight = torch.nn.Parameter(my_weight)
num_ftrs = self.arch.fc.in_features
self.arch.fc = nn.Linear(num_ftrs,193,bias=True)
elif name == 'resnet34':
self.arch = models.resnet34(pretrained=True)
num_ftrs = self.arch.fc.in_features
self.arch.fc = nn.Linear(num_ftrs,193,bias=True)
elif name == 'resnext101':
self.arch = models.resnext101_32x8d(pretrained=True)
my_weight = self.arch.conv1.weight.mean(dim=1, keepdim=True)
self.arch.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
self.arch.conv1.weight = torch.nn.Parameter(my_weight)
num_ftrs = self.arch.fc.in_features
self.arch.fc = nn.Linear(num_ftrs,193,bias=True)
elif name == 'resnext50':
self.arch = models.resnext50_32x4d(pretrained=True)
my_weight = self.arch.conv1.weight.mean(dim=1, keepdim=True)
self.arch.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
self.arch.conv1.weight = torch.nn.Parameter(my_weight)
num_ftrs = self.arch.fc.in_features
self.arch.fc = nn.Linear(num_ftrs,193,bias=True)
elif name == 'rexnetv1':
# NOTE: the original stored this model in a local variable, so forward()
# could never reach it; assign to self.arch instead, and size the output
# head for 193 classes to match the other branches.
self.arch = rexnetv1.ReXNetV1(width_mult=1.0)
self.arch.output.conv2D = nn.Conv2d(1280, 193, kernel_size=(1, 1), stride=(1, 1))
def forward(self, x):
"""Run a batch of images through the selected backbone and return the logits."""
#x = self.convert_3_channels(x)
x = self.arch(x)
return x
# + id="QtSFM_tZKcna"
class AverageMeter():
"""Computes and stores the average and current value"""
def __init__(self):
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
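As a quick illustration of how the meter accumulates a batch-size-weighted running average (the class is repeated here so the snippet runs on its own):

```python
# Self-contained copy of the AverageMeter above, plus a tiny usage example.
class AverageMeter():
    """Computes and stores the average and current value"""
    def __init__(self):
        self.reset()

    def reset(self):
        self.val = 0
        self.avg = 0
        self.sum = 0
        self.count = 0

    def update(self, val, n=1):
        self.val = val
        self.sum += val * n   # weight each update by its batch size n
        self.count += n
        self.avg = self.sum / self.count

meter = AverageMeter()
meter.update(2.0, n=4)  # e.g. a batch of 4 samples with mean loss 2.0
meter.update(1.0, n=1)  # a single sample with loss 1.0
print(meter.avg)        # (2.0 * 4 + 1.0 * 1) / 5 = 1.8
```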
# + id="sw4G_vuUGb3Z"
def loss_fn(outputs,targets):
criterion = nn.CrossEntropyLoss()
return criterion(outputs,targets)
# + id="4ilJ_wZt4UVw"
def train_fn(train_data_loader,model,optimizer,device,scheduler = None):
model.train()
tot_loss = 0
for bi,d in enumerate(train_data_loader):
images = d['image']
labels = d['label']
#send them to device
images = images.to(device,dtype=torch.float)
labels = labels.to(device,dtype=torch.long)
optimizer.zero_grad()
outputs = model(images.unsqueeze(dim=1))
loss = loss_fn(outputs,labels)
loss.backward()
optimizer.step()
tot_loss = tot_loss + loss.item()
if scheduler is not None:
scheduler.step()
logloss_score = tot_loss/len(train_data_loader)
return logloss_score
# + id="Vk-5rsdI493E"
def eval_fn(valid_data_loader,model,device):
model.eval()
tot_loss = 0
with torch.no_grad():
for bi,d in enumerate(valid_data_loader):
images = d['image']
labels = d['label']
#send them to device
images = images.to(device,dtype=torch.float)
labels = labels.to(device,dtype=torch.long)
outputs = model(images.unsqueeze(dim=1))
loss = loss_fn(outputs,labels)
tot_loss = tot_loss + loss.item()
logloss_score = tot_loss/len(valid_data_loader)
return logloss_score
# + [markdown] id="r70JhJyW8LUb"
# # Training
# + id="MldtKN8sKLbU" outputId="d1298899-0473-4609-ea88-5ed1084f43c4"
#Takes 8 minutes
# %%time
if LOAD_TRAIN_DATA is None:
gc.collect()
train = pd.read_csv('new_train.csv')
ss = pd.read_csv('/content/SampleSubmission.csv')
cols = ss.columns[1:].values
dict_cols = {}
for i, col in enumerate(cols):
dict_cols[col] = i
dict_cols
train.label = train.label.map(dict_cols)
print(train.head())
train_images,train_labels = get_data(train,mode='train')
LOAD_TRAIN_DATA = True
else:
print('Data Already Loaded')
# + id="zxqJubi35FM1"
HEIGHT = 128
WIDTH = 600
def get_transforms():
train_transform = albumentations.Compose([
#albumentations.PadIfNeeded(HEIGHT,WIDTH,border_mode = cv2.BORDER_CONSTANT,value=0),
albumentations.Resize(HEIGHT,WIDTH),
#albumentations.Lambda(NM(),always_apply=True)
#Lambda(image=SpecAugment(num_mask=2,freq_masking=0.1,time_masking=0.1),mask=None,p=0.2),
#Lambda(image=GaussNoise(2),mask=None,p=0.2),
#albumentations.Lambda(image=CONVERTRGB(),always_apply=True),
#albumentations.CenterCrop(100,140,p=1)
#albumentations.RandomCrop(120,120)
#albumentations.VerticalFlip(p=0.2),
#albumentations.HorizontalFlip(p=0.2),
#albumentations.RandomContrast(p=0.2),
#AT.ToTensor()
])
val_transform = albumentations.Compose([
#albumentations.PadIfNeeded(HEIGHT,WIDTH,border_mode = cv2.BORDER_CONSTANT,value=0),
albumentations.Resize(HEIGHT,WIDTH),
#albumentations.Lambda(NM(),always_apply=True)
#albumentations.Lambda(image=CONVERTRGB(),always_apply=True),
#AT.ToTensor()
])
return train_transform,val_transform
# + [markdown] id="cYUscqqeslvk"
# ## KFOLDS
# + id="uEMLmvvp0m4K"
NAME = 'resnext101'
EPOCHS = 15
TRAIN_BATCH_SIZE = 8
LR = 0.0001
NFOLDS = 10
# + id="WjunYADTtEO9"
skf = StratifiedKFold(NFOLDS,random_state=SEED_VAL,shuffle=True)
all_scores = []
def run_folds(train_images,train_labels):
seed_all(SEED_VAL)
train_transform,val_transform = get_transforms()
for i,(train_index,val_index) in enumerate(skf.split(train_images,y=train_labels)):
if i in [7,8,9]:
print(f"######################### Fold {i+1}/{skf.n_splits} #########################")
tr_images,valid_images = [train_images[i] for i in train_index] ,[train_images[i] for i in val_index]
tr_labels,valid_labels = [train_labels[i] for i in train_index],[train_labels[i] for i in val_index]
train_dataset = ImageDataset(tr_images,tr_labels,transform=train_transform)
valid_dataset = ImageDataset(valid_images,valid_labels,transform = val_transform)
train_data_loader = DataLoader(dataset=train_dataset,shuffle=True,batch_size=TRAIN_BATCH_SIZE)
valid_data_loader = DataLoader(dataset=valid_dataset,shuffle=False,batch_size=32)
device = torch.device("cuda")
model = Net(NAME)
model.to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, patience=0, factor=0.7,min_lr=1e-6,verbose=True)
best_log_loss = float('inf')  # sentinel: any real validation loss will be lower
for epoch in range(EPOCHS):
print(f"----------------FOLD {i+1} : EPOCH {epoch+1}---------------------")
train_fn(train_data_loader, model, optimizer, device,scheduler=None)
log_loss_val = eval_fn(valid_data_loader ,model, device)
scheduler.step(log_loss_val)
print("val loss: ",log_loss_val)
if log_loss_val<best_log_loss:
best_log_loss = log_loss_val
torch.save(model.state_dict(),f"best_model_{i}")
#torch.save(model.state_dict(),f"/content/drive/MyDrive/Resnext101GIZ/best_model_{i}")
all_scores.append(best_log_loss)
print(f'best VAL_LOGLOSS for fold {i+1}: ',best_log_loss)
else:
pass
print(f"MEAN over all FOLDS: {np.mean(all_scores)}")
return np.mean(all_scores)
# + id="yt39SFentE23" outputId="cbb9241b-fa98-4ace-cc12-a99ed0eb51bc"
CV_SCORE = run_folds(train_images,train_labels)
| Competition-Solutions/Audio/GIZ NLP Agricultural Keyword Spotter/Solution 2/resnext101/part3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cv2
import numpy as np
from matplotlib import pyplot as plt
def cluster(img, K=8):
# Z = img.reshape((-1,3))
Z = img.reshape((-1))
# convert to np.float32
Z = np.float32(Z)
# define criteria, number of clusters(K) and apply kmeans()
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
ret, label, center=cv2.kmeans(Z,K,None,criteria,10,cv2.KMEANS_RANDOM_CENTERS)
# Now convert back into uint8, and make original image
center = np.uint8(center)
res = center[label.flatten()]
res2 = res.reshape((img.shape))
return res2, label
# +
img_bg = cv2.imread('BG.jpg')
img_1 = cv2.imread('1.jpg')
img_bg = cv2.medianBlur(img_bg, 5)
img_1 = cv2.medianBlur(img_1, 5)
img_bg_hsv = cv2.cvtColor(img_bg, cv2.COLOR_BGR2HSV)
img_1_hsv = cv2.cvtColor(img_1, cv2.COLOR_BGR2HSV)
K = 6
# +
img_bg_clust, label_bg = cluster(img_bg_hsv[:,:,0], K)
img_1_clust, label_1 = cluster(img_1_hsv[:,:,0], K)  # also cluster the scene image; img_1_clust is used in the plots below
img_bg_clust_label = np.zeros(img_bg_clust.shape, dtype=np.uint8)
label_bg_shaped = label_bg.reshape(img_bg_clust.shape)
plt.imshow(img_bg_clust)
print(img_bg_clust.shape, label_bg.shape)
# img_bg_hsv = cv2.cvtColor(img_bg_clust, cv2.COLOR_BGR2HSV)
# img_1_hsv = cv2.cvtColor(img_1_clust, cv2.COLOR_BGR2HSV)
# -
dst = []
for label_val in range(K):
bg_hist = cv2.calcHist([img_bg_hsv[:,:,0][label_bg_shaped==label_val]], [0], None, [180], [0, 180])
dst.append(cv2.calcBackProject([img_1_hsv[:,:,0]], [0], bg_hist, [0, 180], 1))
fig, ax = plt.subplots(nrows=K, ncols=1, figsize=(30, 30))
for i in range(K):
dst_log = np.log10(1+dst[i])
ax[i].imshow(np.uint8(dst_log), cmap='gray')
# calculating object histogram
bg_hist = cv2.calcHist([img_bg_hsv], [0, 1], None, [180, 256], [0, 180, 0, 256])
# normalize histogram and apply backprojection
# cv2.normalize(bg_hist, bg_hist, 0, 255, cv2.NORM_MINMAX)
dst = cv2.calcBackProject([img_1_hsv], [0, 1], bg_hist, [0, 180, 0, 256], 1)
# Now convolute with circular disc
# disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,(5,5))
# cv2.filter2D(dst, -1, disc, dst)
# # threshold and binary AND
# ret,thresh = cv2.threshold(dst,50,255,0)
# thresh = cv2.merge((thresh,thresh,thresh))
# res = cv2.bitwise_and(target,thresh)
# print(dst)
# print(np.log10(1+dst))
# dst = np.log10(1+dst)
plt.imshow(np.uint8(dst), cmap='gray')
# +
fig, ax = plt.subplots(nrows=3, ncols=2, figsize=(30, 30))
# convert BGR to RGB so matplotlib displays the colors correctly
img_bg_rgb = cv2.cvtColor(img_bg, cv2.COLOR_BGR2RGB)
ax[0][0].set_axis_off()
ax[0][0].set_title('img_bg', fontsize=16, color="red")
ax[0][0].imshow(img_bg_rgb)
# img_bg_clust is already single-channel (hue), so no color conversion is needed
ax[0][1].set_axis_off()
ax[0][1].set_title('img_bg_clust', fontsize=16, color="red")
ax[0][1].imshow(img_bg_clust)
img_1_rgb = cv2.cvtColor(img_1, cv2.COLOR_BGR2RGB)
ax[1][0].set_axis_off()
ax[1][0].set_title('img_1', fontsize=16, color="red")
ax[1][0].imshow(img_1_rgb)
# img_1_clust is assumed to come from cluster() on the hue channel of img_1
ax[1][1].set_axis_off()
ax[1][1].set_title('img_1_clust', fontsize=16, color="red")
ax[1][1].imshow(img_1_clust)
# 'diff' was never computed in the original notebook; an absolute difference of
# the two clustered hue images is assumed here
diff = cv2.absdiff(img_1_clust, img_bg_clust)
ax[2][0].set_axis_off()
ax[2][0].set_title('diff', fontsize=16, color="red")
ax[2][0].imshow(diff)
plt.show()
| Week11.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
from allensdk.brain_observatory.ecephys.ecephys_project_cache import EcephysProjectCache
import os
from sqlalchemy import delete,insert
from sqlalchemy.orm import sessionmaker
import json
import numpy as np
import pandas as pd
from datetime import date,datetime,timedelta
from sqla_schema import *
import ingest
data_directory = 'C:\\Users\\yoni.browning\\Documents\\DataJoint\\AllenData'
manifest_path = os.path.join(data_directory, "manifest.json")
def query_to_df(Q):
df = pd.read_sql(Q.statement, Q.session.bind)
return df
def spike_count(start_trigger,end_trigger,spike_ts):
count = [None]*len(start_trigger)
for ii,trigger in enumerate(start_trigger):
count[ii] = np.sum(np.logical_and(spike_ts>=trigger,spike_ts<end_trigger[ii]))
return count
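When there are many triggers, the per-trigger loop can be replaced with two `np.searchsorted` calls; a sketch of this alternative (it sorts the timestamps first, and reproduces the `>= start`, `< end` window used above):

```python
import numpy as np

def spike_count_vec(start_trigger, end_trigger, spike_ts):
    # Vectorized equivalent of the loop above: count spikes with
    # start <= t < end for each trigger pair.
    spike_ts = np.sort(np.asarray(spike_ts))
    lo = np.searchsorted(spike_ts, start_trigger, side='left')
    hi = np.searchsorted(spike_ts, end_trigger, side='left')
    return hi - lo

ts = [0.1, 0.2, 0.5, 0.9, 1.4]
print(spike_count_vec([0.0, 1.0], [1.0, 2.0], ts))  # 4 spikes in [0, 1), 1 in [1, 2)
```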
# +
engine = ingest.connect_to_db()
S = sessionmaker(engine)()
#Base.metadata.drop_all(engine, tables=(TrialSpikeCount.__table__,))
#Base.metadata.create_all(engine, tables=(TrialSpikeCount.__table__,))
# -
# Get the unique sessions for which we have spiketimes
unique_sessions = query_to_df(S.query(Session.id).\
join(SessionProbe).\
join(Channel).\
join(Unit).\
join(UnitSpikeTimes,Unit.id==UnitSpikeTimes.unit_id).group_by(Session.id)).values
unique_sessions
# +
for ii,this_session in enumerate(unique_sessions):
print('Session = ' + str(this_session[0]))
stim_table = query_to_df(S.query(Session.id.label('session_id'),\
StimulusPresentation.id.label("stimulus_id"),\
StimulusPresentation.stop_time,\
StimulusPresentation.start_time).\
join(StimulusPresentation).filter(Session.id==int(this_session[0])))
unit_table = query_to_df(S.query(Session.id.label('session_id'),\
Unit.id.label("unit_id"),\
UnitSpikeTimes.spike_times).join(SessionProbe).\
join(Channel).join(Unit).\
join(UnitSpikeTimes,Unit.id==UnitSpikeTimes.unit_id).filter(Session.id==int(this_session[0])))
duration = stim_table.stop_time-stim_table.start_time
for jj,row in unit_table.iterrows():
print('Unit = ' + str(row.unit_id) + ' is ' + str(jj) + ' of ' + str(len(unit_table)) )
count = spike_count(stim_table.start_time,stim_table.stop_time,np.array(row.spike_times))
this_df = pd.DataFrame(data = \
{'unit_id':int(row.unit_id),\
'stimulus_id':np.array(stim_table.stimulus_id.values).astype(int),\
'spike_count':count,\
'spike_rate':np.divide(count,duration)})
this_df.to_sql('trial_spike_count', engine,index = False, if_exists='append')
S.commit()
# -
query_to_df(S.query(TrialSpikeCount.spike_count,StimulusPresentation.stimulus_type_id).\
join(StimulusPresentation).\
filter(TrialSpikeCount.unit_id ==950907205 ))
query_to_df(S.query(Session.id,Structure.name).\
join(SessionProbe).join(Channel).join(Structure).group_by(Session.id,Structure.name))
query_to_df(S.query(Session.id,Structure.name).\
join(SessionProbe).\
join(Channel).\
join(Structure).\
filter(Session.id==715093703).\
group_by(Session.id,Structure.name))
BS = S.query(Structure).filter(Structure.name == "Field CA1")
query_to_df(S.query(Session.id,UnitSpikeTimes.unit_id).\
join(SessionProbe).\
join(Channel).\
join(Structure).\
join(Unit).\
join(UnitSpikeTimes,Unit.id==UnitSpikeTimes.unit_id).\
filter(Session.id==int(715093703)).\
filter(Structure.name == "Field CA1"))
S.query(UnitSpikeTimes.spike_times).\
filter(UnitSpikeTimes.unit_id==950911195).all()
query_to_df(S.query(Session.id,TrialSpikeCount.unit_id).\
join(StimulusPresentation,Session.id ==StimulusPresentation.session_id).\
join(TrialSpikeCount,StimulusPresentation.id == TrialSpikeCount.stimulus_id).\
group_by(Session.id,TrialSpikeCount.unit_id))
data = query_to_df(S.query(TrialSpikeCount.spike_rate,\
StimulusPresentation.orientation,\
StimulusPresentation.spatial_frequency).\
join(StimulusPresentation).join(StimulusType).\
filter(TrialSpikeCount.unit_id == 950907205).
filter(StimulusType.name=='static_gratings'))
data = data.groupby(['spatial_frequency','orientation']).mean()
data.index.get_level_values(0)
data = query_to_df(S.query(TrialSpikeCount.spike_rate,\
StimulusPresentation.orientation,\
StimulusPresentation.spatial_frequency).\
join(StimulusPresentation).join(StimulusType).\
filter(TrialSpikeCount.unit_id == 950907205).
filter(StimulusType.name=='drifting_gratings'));
StimulusPresentation.temporal_frequency
unit_df = query_to_df(S.query(Session.id,TrialSpikeCount.unit_id).\
join(SessionProbe).\
join(Channel).\
join(Structure).\
join(Unit).\
join(TrialSpikeCount,Unit.id==TrialSpikeCount.unit_id).group_by(Session.id,TrialSpikeCount.unit_id))
unit_df
| pgaf/YoniTests.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # The Lasso
#
# ## StatML: Lecture 5
#
# ### Prof. <NAME>
#
# - Some content and images are from "The Elements of Statistical Learning" by <NAME>, Friedman
# - Reading ESL Chapter 3
# + [markdown] slideshow={"slide_type": "slide"}
# ### Recall Convex Optimization
#
# **Def** A function $f : \mathbb R^p \to \mathbb R$ is convex if for any $0 \le \alpha \le 1$, $x_0, x_1 \in \mathbb R^p$,
# $$
# f(\alpha x_0 + (1 - \alpha) x_1) \le \alpha f(x_0) + (1 - \alpha) f(x_1).
# $$
#
# > For convex functions, local minima are global minima
# + [markdown] slideshow={"slide_type": "fragment"}
# Recall **1st Order Condition**. If f is differentiable then it is convex if
# $$
# f(x) \ge f(x_0) + \nabla f(x_0)^\top (x - x_0), \forall x,x_0
# $$
# and when $\nabla f(x_0) = 0$ then
# $$
# f(x) \ge f(x_0), \forall x
# $$
# so any fixed point of gradient descent is a global min (for convex, differentiable f)
# + [markdown] slideshow={"slide_type": "slide"}
# ### Subdifferential
#
# **Def.** $g(x_0) \in \mathbb R^p$ is a *subgradient* of $f$ at $x_0$ if
# $$
# f(x) \ge f(x_0) + g(x_0)^\top (x - x_0), \forall x.
# $$
# The set of all subgradients at $x_0$ is called the *subdifferential*, denoted $\partial f(x_0)$.
#
# > For any global optima, $0 \in \partial f(x_0)$.
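For intuition, take $f(x) = |x|$: every $g \in [-1, 1]$ is a subgradient at $x_0 = 0$, while slopes outside that interval are not. A small numerical check of the subgradient inequality (illustrative only, not part of the lecture code):

```python
import numpy as np

def is_subgradient(g, x0, f, xs):
    # Check f(x) >= f(x0) + g * (x - x0) over a grid of test points
    # (up to a tiny numerical tolerance).
    return all(f(x) >= f(x0) + g * (x - x0) - 1e-12 for x in xs)

f = abs
xs = np.linspace(-2, 2, 401)

# Every g in [-1, 1] satisfies the inequality at x0 = 0 ...
print(all(is_subgradient(g, 0.0, f, xs) for g in [-1.0, -0.5, 0.0, 0.5, 1.0]))
# ... while a slope outside [-1, 1] violates it somewhere.
print(is_subgradient(1.5, 0.0, f, xs))
```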
# + [markdown] slideshow={"slide_type": "slide"}
# ### Wavelet denoising
#
# Soft thresholding is commonly used for orthonormal bases.
# - Suppose that we have a vector $y_1,\ldots, y_T$ (like a time series).
# - And we want to reconstruct $y$ with $W \beta$ where $\beta$ has a small sum of absolute values $\sum_i |\beta_i|$
# - $W$ is $T \times T$ and $W W^\top = W^\top W = I$ (orthonormal full rank design)
#
# Want to minimize
# $$
# \frac 12 \sum_{i=1}^T (y - W \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|.
# $$
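Because $W$ is orthonormal, $\|y - W\beta\|^2 = \|W^\top y - \beta\|^2$, so the objective separates coordinate-wise and the minimizer is soft-thresholding of the transformed coefficients, $\hat\beta = \mathrm{soft}(W^\top y, \lambda)$. A quick numerical check of this closed form, using a random orthonormal basis rather than the wavelet basis built below:

```python
import numpy as np

def soft(z, lamb):
    # Soft-thresholding: the coordinate-wise minimizer for an orthonormal design.
    return np.sign(z) * np.maximum(np.abs(z) - lamb, 0.0)

rng = np.random.default_rng(0)
T = 8
W, _ = np.linalg.qr(rng.standard_normal((T, T)))  # random orthonormal basis
y = rng.standard_normal(T)
lamb = 0.3

beta_hat = soft(W.T @ y, lamb)

def obj(beta):
    return 0.5 * np.sum((y - W @ beta) ** 2) + lamb * np.sum(np.abs(beta))

# The closed form should beat any small random perturbation of itself.
perturbed = [obj(beta_hat + 0.01 * rng.standard_normal(T)) for _ in range(200)]
print(obj(beta_hat) <= min(perturbed))
```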
# + slideshow={"slide_type": "skip"}
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# + slideshow={"slide_type": "skip"}
## Explore Turkish stock exchange dataset
tse = pd.read_excel('../../data/data_akbilgic.xlsx',skiprows=1)
tse = tse.rename(columns={'ISE':'TLISE','ISE.1':'USDISE'})
# + slideshow={"slide_type": "skip"}
def const_wave(T,a,b):
wave = np.zeros(T)
s1 = (b-a) // 2
s2 = (b-a) - s1
norm_C = (s1*s2 / (s1+s2))**0.5
wave[a:a+s1] = norm_C / s1
wave[a+s1:b] = -norm_C / s2
return wave
# + slideshow={"slide_type": "skip"}
def _const_wave_basis(T,a,b):
if b-a < 2:
return []
wave_basis = []
wave_basis.append(const_wave(T,a,b))
mid_pt = a + (b-a)//2
wave_basis += _const_wave_basis(T,a,mid_pt)
wave_basis += _const_wave_basis(T,mid_pt,b)
return wave_basis
# + slideshow={"slide_type": "skip"}
def const_wave_basis(T,a,b):
father = np.ones(T) / T**0.5
return [father] + _const_wave_basis(T,a,b)
# + slideshow={"slide_type": "skip"}
# Construct discrete Haar wavelet basis
T,p = tse.shape
wave_basis = const_wave_basis(T,0,T)
W = np.array(wave_basis).T
# + slideshow={"slide_type": "slide"}
_ = plt.plot(W[:,:3])
# + slideshow={"slide_type": "skip"}
def soft(y,lamb):
pos_part = (y - lamb) * (y > lamb)
neg_part = (y + lamb) * (y < -lamb)
return pos_part + neg_part
# + slideshow={"slide_type": "skip"}
## Volatility seems most interesting
## will construct local measure of volatility
## remove rolling window estimate (local centering)
## square the residuals
tse = tse.set_index('date')
tse_trem = tse - tse.rolling("7D").mean()
tse_vol = tse_trem**2.
## Make wavelet transformation and soft threshold
tse_wave = W.T @ tse_vol.values
lamb = .001
tse_soft = soft(tse_wave,lamb)
tse_rec = W @ tse_soft
tse_den = tse_vol.copy()
tse_den.iloc[:,:] = tse_rec
# + slideshow={"slide_type": "slide"}
_ = tse_vol.plot(subplots=True,figsize=(10,10))
# + slideshow={"slide_type": "slide"}
_ = tse_den.plot(subplots=True,figsize=(10,10))
# -
# ### Wavelet reconstruction
#
# Can reconstruct the sequence by
# $$
# \hat y = W \hat \beta.
# $$
# The objective is likelihood term + L1 penalty term,
# $$
# \frac 12 \sum_{i=1}^T (y - W \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|.
# $$
# > The L1 penalty "forces" some $\beta_i = 0$, inducing sparsity
# + slideshow={"slide_type": "slide"}
plt.plot(tse_soft[:,4])
high_idx = np.where(np.abs(tse_soft[:,4]) > .0001)[0]
print(high_idx)
# + slideshow={"slide_type": "slide"}
fig, axs = plt.subplots(len(high_idx) + 1,1)
for i, idx in enumerate(high_idx):
axs[i].plot(W[:,idx])
plt.plot(tse_den['FTSE'],c='r')
# + [markdown] slideshow={"slide_type": "slide"}
# ### Non-orthogonal design
#
# The objective is likelihood term + L1 penalty term,
# $$
# \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|.
# $$
# does not have closed form for $X$ that is non-orthogonal.
#
# - it is convex
# - it is non-smooth (recall $|x|$)
# - has tuning parameter $\lambda$
# + [markdown] slideshow={"slide_type": "fragment"}
# Compare to best subset selection (NP-hard):
# $$
# \min \frac 12 \sum_{i=1}^T (y - X \beta)_i^2.
# $$
# for
# $$
# \| \beta \|_0 = |{\rm supp}(\beta)| < s.
# $$
# + [markdown] slideshow={"slide_type": "slide"}
# ### Image of Lasso solution
#
# <img src="lasso_soln.PNG" width=100%>
# + [markdown] slideshow={"slide_type": "slide"}
# ### Solving the Lasso
#
# The lasso can be written in *regularized form*,
# $$
# \min \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 + \lambda \sum_{i=1}^T |\beta_i|,
# $$
# or in *constrained form*,
# $$
# \min \frac 12 \sum_{i=1}^T (y - X \beta)_i^2, \quad \textrm{s.t.} \sum_{i=1}^T |\beta_i| \le C,
# $$
#
# - For every $\lambda$ there is a $C$ such that the regularized form and constrained form have the same argmin
# - This correspondence is data dependent
# + [markdown] slideshow={"slide_type": "slide"}
# ### Exercise 5.1. Solving the Lasso
#
# A quadratic program (QP) is any convex optimization of the form
# $$
# \min \beta^\top Q \beta + \beta^\top a \quad \textrm{ s.t. } A\beta \le c
# $$
# where $Q$ is positive semi-definite.
#
# Show that the lasso in constrained form is a QP. (Hint: write $\beta = \beta_+ - \beta_-$ where $\beta_{+,j} = \beta_{j} 1\{ \beta_j > 0\}$ and $\beta_{-,j} = - \beta_{j} 1\{ \beta_j < 0\}$).
# + [markdown] slideshow={"slide_type": "slide"}
# **Solution to 5.1**
#
# The objective is certainly quadratic...
# $$
# \frac 12 \sum_{i=1}^T (y - X \beta)_i^2 = \frac 12 \beta^\top (X^\top X) \beta - \beta^\top (X^\top y) + C
# $$
# and we know that $X^\top X$ is PSD because $a^\top X^\top X a = \| X a\|^2 \ge 0$.
#
# What about $\| \beta \|_1$?
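The PSD claim in the solution is easy to confirm numerically (a throwaway check, not part of the lecture code):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((10, 4))
Q = X.T @ X

# All eigenvalues of X^T X are non-negative (up to floating-point error),
# confirming the quadratic term of the lasso QP is PSD.
eigvals = np.linalg.eigvalsh(Q)
print(np.all(eigvals >= -1e-10))  # True
```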
# + [markdown] slideshow={"slide_type": "slide"}
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ### Solving the lasso
#
# For a single $\lambda$ (or $C$ in constrained form) can solve the lasso with many specialized methods
# - quadratic program solver
# - proximal gradient
# - alternating direction method of multipliers
#
# but $\lambda$ is a tuning parameter. Options
# 1. Construct a grid of $\lambda$ and solve each lasso
# 2. Solve for all $\lambda$ values - path algorithm
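One further specialized method, not listed above but simple enough to sketch, is cyclic coordinate descent: holding all other coefficients fixed, each coordinate update is a closed-form soft-threshold. A minimal illustration at a single fixed $\lambda$ (synthetic data; not a production solver):

```python
import numpy as np

def lasso_cd(X, y, lamb, n_iter=500):
    # Cyclic coordinate descent for
    #   0.5 * ||y - X beta||^2 + lamb * ||beta||_1
    # Each coordinate update is a one-dimensional soft-threshold.
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ beta + X[:, j] * beta[j]  # residual excluding feature j
            z = X[:, j] @ r_j
            beta[j] = np.sign(z) * max(abs(z) - lamb, 0.0) / col_sq[j]
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 5))
true_beta = np.array([2.0, 0.0, -1.0, 0.0, 0.0])
y = X @ true_beta + 0.1 * rng.standard_normal(50)

beta_reg = lasso_cd(X, y, lamb=5.0)
print(np.round(beta_reg, 2))  # estimate; coefficients are shrunk toward zero by the penalty
```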
# + [markdown] slideshow={"slide_type": "slide"}
# ### Active sets and why lasso works better
#
# - Let $\hat \beta_\lambda$ be the $\hat \beta$ at tuning parameter $\lambda$.
# - Define $\mathcal A_\lambda = {\rm supp}(\hat \beta_\lambda)$ the non-zero elements of $\hat \beta_\lambda$.
# 1. For large $\lambda \rightarrow \infty$, $|\mathcal A_\lambda| = 0$
# 2. For small $\lambda = 0$, $|\mathcal A_\lambda| = p$ (when OLS solution has full support)
#
# Forward greedy selection only adds elements to the active set, does not remove elements.
#
# ### Exercise 5.2.1
# Verify 1 and 2 above.
# + [markdown] slideshow={"slide_type": "slide"}
# 
# + [markdown] slideshow={"slide_type": "slide"}
# ### Lasso Path
#
# 1. Start at $\lambda = +\infty, \hat \beta = 0$.
# 2. Decrease $\lambda$ until $\hat \beta_{j_1} \ne 0$, $\mathcal A \gets \{j_1\}$. (Hitting event)
# 3. Continue decreasing $\lambda$ updating $\mathcal A$ with hitting and leaving events
#
#
# - $x_{j_1}$ is the predictor variable most correlated with $y$
# - Hitting events are when element is added to $\mathcal A$
# - Leaving events are when element is removed from $\mathcal A$
# - $\hat \beta_{\lambda,j}$ is piecewise linear, continuous, as a function of $\lambda$
# - knots are at "hitting" and "leaving" events
# + [markdown] slideshow={"slide_type": "slide"}
# 
# from scikit-learn.org
# + [markdown] slideshow={"slide_type": "slide"}
# ### Least Angle Regression (LAR)
#
# 1. Standardize predictors and start with residual $r = y - \bar y$, $\hat \beta = 0$
# 2. Find $x_j$ most correlated with $r$
# 3. Move $\beta_j$ in the direction of $x_j^\top r$ until the residual is more correlated with another $x_k$
# 4. Move $\beta_j,\beta_k$ in the direction of their joint OLS coefficients of $r$ on $(x_j,x_k)$ until some other competitor $x_l$ has as much correlation with the current residual
# 5. Continue until all predictors have been entered.
#
# ### Exercise 5.2.2
# How do we know that LAR does not give us the Lasso solution?
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Lasso modification
#
# 4.5 If a non-zero coefficient drops to 0 then remove it from the active set and recompute the restricted OLS.
# + [markdown] slideshow={"slide_type": "slide"}
# 
# from ESL
# + slideshow={"slide_type": "skip"}
# # %load ../standard_import.txt
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn import preprocessing, model_selection, linear_model
# %matplotlib inline
# + slideshow={"slide_type": "slide"}
## Modified from the github repo: https://github.com/JWarmenhoven/ISLR-python
## which is based on the book by James et al. Intro to Statistical Learning.
df = pd.read_csv('../../data/Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
# + slideshow={"slide_type": "slide"}
## Simulate a dataset for lasso
n=100
p=1000
X = np.random.randn(n,p)
X = preprocessing.scale(X)
# + slideshow={"slide_type": "slide"}
## Subselect true active set
sprob = 0.02
Sbool = np.random.rand(p) < sprob
s = np.sum(Sbool)
print("Number of non-zero's: {}".format(s))
# + slideshow={"slide_type": "slide"}
## Construct beta and y
mu = 100.
beta = np.zeros(p)
beta[Sbool] = mu * np.random.randn(s)
eps = np.random.randn(n)
y = X.dot(beta) + eps
# -
# ### Exercise 5.3
#
# - Run the lasso using `linear_model.lars_path` with the lasso modification (see docstring with ?linear_model.lars_path)
# - Plot the lasso coefficients that are learned as a function of lambda. You should have a plot with the x-axis being lambda and the y-axis being the coefficient value, with $p=1000$ lines plotted. Highlight the $s$ coefficients that are truly non-zero by plotting them in red.
# ?linear_model.lars_path
# + slideshow={"slide_type": "slide"}
## Answer to exercise 5.3
## Run lars with lasso mod, find active set
larper = linear_model.lars_path(X,y,method="lasso")
S = set(np.where(Sbool)[0])
def plot_it():
for j in S:
_ = plt.plot(larper[0],larper[2][j,:],'r')
for j in set(range(p)) - S:
_ = plt.plot(larper[0],larper[2][j,:],'k',linewidth=.75)
_ = plt.title('Lasso path for simulated data')
_ = plt.xlabel('lambda')
_ = plt.ylabel('Coef')
# + slideshow={"slide_type": "slide"}
plot_it()
# + slideshow={"slide_type": "slide"}
## Hitters dataset
df = pd.read_csv('../../data/Hitters.csv', index_col=0).dropna()
df.index.name = 'Player'
df.info()
# + slideshow={"slide_type": "slide"}
df.head()
# + slideshow={"slide_type": "slide"}
dummies = pd.get_dummies(df[['League', 'Division', 'NewLeague']])
dummies.info()
print(dummies.head())
# + slideshow={"slide_type": "slide"}
y = df.Salary
# Drop the column with the independent variable (Salary), and columns for which we created dummy variables
X_ = df.drop(['Salary', 'League', 'Division', 'NewLeague'], axis=1).astype('float64')
# Define the feature set X.
X = pd.concat([X_, dummies[['League_N', 'Division_W', 'NewLeague_N']]], axis=1)
X.info()
# + slideshow={"slide_type": "slide"}
X.head(5)
# -
# ### Exercise 5.4
#
# You should cross-validate to select the lambda just like any other tuning parameter. Sklearn gives you the option of using their fast cross-validation script via `linear_model.LassoCV`, see the documentation. You can create a leave-one-out cross validator with `model_selection.LeaveOneOut` then pass this to `LassoCV` with the `cv` argument. Do this, and see what the returned fit and selected lambda are.
# + slideshow={"slide_type": "slide"}
## Answer to 5.4
## Fit the lasso and cross-validate, increased max_iter to achieve convergence
loo = model_selection.LeaveOneOut()
looiter = loo.split(X)
hitlasso = linear_model.LassoCV(cv=looiter,max_iter=2000)
hitlasso.fit(X,y)
# + slideshow={"slide_type": "slide"}
print("The selected lambda value is {:.2f}".format(hitlasso.alpha_))
# + slideshow={"slide_type": "fragment"}
hitlasso.coef_
# + [markdown] slideshow={"slide_type": "slide"}
# We can also compare this to the selected model from forward stagewise regression:
#
# ```
# [-0.21830515, 0.38154135, 0. , 0. , 0. ,
# 0.16139123, 0. , 0. , 0. , 0. ,
# 0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. ,
# 0. , 0. , -0.19429699, 0. ]
# ```
#
# This is not exactly the same model: the two fits differ on whether they include AtBat, HmRun, Runs, RBI, Years, CHmRun, Errors, League_N, Division_W, NewLeague_N.
# + slideshow={"slide_type": "skip"}
bforw = [-0.21830515, 0.38154135, 0. , 0. , 0. ,
0.16139123, 0. , 0. , 0. , 0. ,
0.09994524, 0.56696569, -0.16872682, 0.16924078, 0. ,
0. , 0. , -0.19429699, 0. ]
# + slideshow={"slide_type": "fragment"}
print(", ".join(X.columns[(hitlasso.coef_ != 0.) != (np.array(bforw) != 0.)]))  # np.array() makes the comparison elementwise; a bare list compared to 0. yields a single bool
| lectures/lecture5/lecture5.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from bs4 import BeautifulSoup
import requests
import pandas as pd
from multiprocessing import Pool
import datetime
import scrapers
# +
import itertools
import tqdm.notebook as tqdm
def p_imap_unordered_set(pool, func, iterable):
iterable = list(iterable)
temp = tqdm.tqdm(pool.imap_unordered(func, iterable), total=len(iterable))
return set(itertools.chain.from_iterable(temp))
the_time = lambda: datetime.datetime.now().time().strftime('%H:%M:%S')
# +
riders_done = set()
teams_done = set()
results = set()
teams_to_do = set()
print(the_time())
with Pool(50) as p:
top_mens_riders = p_imap_unordered_set(
p, scrapers.scrape_top, itertools.product(range(1960, 2021), range(1, 4), [True])
)
top_womens_riders = p_imap_unordered_set(
p, scrapers.scrape_top, itertools.product(range(2018, 2021), range(1, 4), [False])
)
top_riders = top_mens_riders | top_womens_riders
riders_to_do = top_riders.copy()
for i in range(50):
if not riders_to_do:
break
print(f"{the_time()} - iter {i+1}, riders_to_do: {len(riders_to_do)}")
with Pool(50) as p:
fresh_results = p_imap_unordered_set(p, scrapers.scrape_rider, riders_to_do)
results.update(fresh_results)
riders_done.update(riders_to_do)
teams_to_do = set(team for _, team in fresh_results) - teams_done
print(f"{the_time()} - teams_to_do: {len(teams_to_do)}")
with Pool(50) as p:
riders_to_do = p_imap_unordered_set(p, scrapers.scrape_team, teams_to_do) - riders_done
teams_done.update(teams_to_do)
print(the_time())
# +
with open("top_riders.txt", "w") as f:
f.write("\n".join(str(i) for i in sorted(top_riders)))
with open("top_mens_riders.txt", "w") as f:
f.write("\n".join(str(i) for i in sorted(top_mens_riders)))
with open("top_womens_riders.txt", "w") as f:
f.write("\n".join(str(i) for i in sorted(top_womens_riders)))
df = pd.DataFrame(iter(results), columns=["rider", "team"])
df.to_csv("scraped.csv", index=False)
| scraping_and_analysis/01_scrape.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/tvo003/freeCodeCamp/blob/main/Python_Language_Basics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="0mlVux1bKw4W"
# boolean
is_game_over = False
print(is_game_over)
is_game_over = True
print(is_game_over)
is_game_over = 5 > 6
print(is_game_over)
is_player_at_halfway_point = False
print(is_player_at_halfway_point)
is_player_at_halfway_point = True
print(is_player_at_halfway_point)
beat_game = False
print(beat_game)
beat_game = True
print(beat_game)
# 01/26 examples
month_of_january = True
print(month_of_january)
month_of_january = False
print(month_of_january)
# + id="V_ZoWTVHMMvK"
# int
# float
num_lives = 5
print(num_lives)
percent_health = 0.5
print(percent_health)
extra_lives = 1
print(extra_lives)
lost_a_life = -1
print(lost_a_life)
donate_lives_to_other_players = -1
print(donate_lives_to_other_players)
received_lives_from_other_players = 1
print(received_lives_from_other_players)
# 01/26 examples
fever_temperature = 103
print(fever_temperature)
healthy_temperature = 95.2
print(healthy_temperature)
# + id="MJ8vYuIDNL3b"
# strings
player_name = "Tuan"
print(player_name)
player_name = 'Kaison'
print(player_name)
new_player_name = "Enter Name"
print(new_player_name)
player_quits_game_name = 'Enter Name'
print(player_quits_game_name)
name_of_player_joining_your_team = "Bob"
print(name_of_player_joining_your_team)
name_of_player_leaving_your_team = 'Jason'
print(name_of_player_leaving_your_team)
#01/26 examples
near_by_park = "Overlook"
print(near_by_park)
school_district_name = "Tumwater"
print(school_district_name)
favorite_movie = "Sleepless in Seattle"
print(favorite_movie)
# + id="u8JabNapN24x"
# type
print(type(is_game_over))
print(type(num_lives))
print(type(percent_health))
print(type('player_name'))
# + id="28VLXsmFTgeY"
num_lives = 5
# num_lives = "5"
# you can convert any type to a string by using str(expression)
str_num_lives = str(num_lives)
print(type(num_lives))
print(type(str_num_lives))
# Feb 02
num_cats = 4
str_num_cats = str(num_cats)
print(type(num_cats))
print(type(str_num_cats))
num_kids = 6
str_num_kids = str(num_kids)
print(type(num_kids))
print(type(str_num_kids))
# + id="Iu_eDh6-VRvb"
# converting to a boolean (1).
# 0 will always give you False
# almost everything else will give you True
print(bool(0))
print(bool(1))
# + id="CxCG7iraWAHC"
# converting to a boolean (2).
# 0 will always give you False
# almost everything else will give you True
print(bool(0))
print(bool(1))
# + id="3jmWZEocWBs_"
# converting to a boolean (3).
# 0 will always give you False
# almost everything else will give you True
print(bool(0))
print(bool(1))
print(bool(6))
print(bool(0.1))
print(bool("Newman")) # string
print(bool("False")) # Even though the string is false, when coverting to a bool, it will come back as true
#01/26 examples
print(bool('money'))
print(bool('pool'))
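# +
# Hedged aside (not from the lesson): bool("False") is True because any
# non-empty string is truthy. To treat the text "False" as a boolean,
# compare the string yourself:
text = "False"
parsed = text.strip().lower() == "true"
print(parsed)  # prints False
# -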
# + id="zbMACSRFWa7Z"
# Converting to float
print(float("1"))
print(float(5))
print(float(False))
print(float(True))
#01/26 examples
print(float(3))
print(float(6))
# + id="fEOAtSlZ65OU"
# arithmetic Operators
health = 50
new_health = health + 20
print(health)
health = health + 20
print(health)
xPos = 5
print(xPos % 2)
print(xPos // 2)
print(xPos ** 2)
print(xPos)
first_name = "Tuan"
last_name = "Vo"
print(first_name + " " + last_name) # string concatenation
# 01/26 examples
money = 100
new_money = money + 12 # adding more money
print(money) # this print will only show 100
money = money + 12 # Need to add this condition to have it equal $112
print(money) # this will put everything together
uPos = 9
print(uPos % 1)
print(uPos // 3)
print(uPos ** 2)
address_numbers = '1234'
address_street_name = 'Abc Street'
address_city = 'New York'
address_state = 'NY'
address_zip_code = '12345'
print(address_numbers + " " + address_street_name)
print(address_city + ", " + address_state + " " + address_zip_code)
# + id="QizgNQmkAWVk"
# Assignment Operators
health = 50
health += 20
health -= 50
print(health)
name = "Tuan"
name += " Vo"
print(name)
xPos = 10
xPos %= 2
#xPos *= 2
#xPos /= 2
#xPos //= 2
#xPos **= 2
print(xPos)
# + id="EIoCgw1SBwLz"
# comparison operators should return a boolean
# > >= < <= == !=
result = 5 > 2
print(result)
print(type(result))
print( 5 == 2)
print( 5 != 2)
# + id="iEemv5cfD-Hl"
# comparison operators part 2
a = "a"
a_ = "A"
first_name = 'Sonja'
last_name = 'Silversten'
print(True == False)
print(first_name == last_name)
print(a == a_)
b = "b"
print(a > b) # False: 'b' sorts after 'a' alphabetically
# + id="ghjF2LU8B3yB"
# logical operators
# not, and, or
is_game_over = False
is_game_over = not is_game_over
print(is_game_over)
health = 0
lives = 1
print(health <= 0 and lives <= 0)
health = 0
lives = 0
print(health <= 0 or lives <= 0)
#02/02
seahawks_playoffs = False
seahawks_playoffs = not seahawks_playoffs
print(seahawks_playoffs)
wins = 0
loss = 1
print(wins >= 0 and loss <= 0)
superBowl = 1
backToBack_superBowl = 0
print(superBowl >= 1 or backToBack_superBowl <= 0)
# + id="hqlV5piRXOb3"
# List types
inventory = ['Sword', 'Bread', 'Boots', 'Shirts']
sword = inventory[0] # Sword being at index 0
print(sword)
inventory[1] = 'Apples' # change bread at index 1 to apples
print(inventory)
#02/02
# Make a list of all cats in household
# Please find what index Finny is in and print his name
name_of_cats = ['Newman', 'Luna', 'Finny', 'Earl']
finny = name_of_cats[2]
print(finny)
# Halo is the new cat, please add him to the end of list using .append
# print new list with Halo added
name_of_cats.append('Halo') # .append is to add a name to the end of a list
print(name_of_cats)
# + id="aiH2pzoIkTi7"
inventory = ['Sword', 'Bread', 'Boots', 'Shirts']
print(len(inventory))
print(max(inventory)) # 'Sword' is max: alphabetically last ('Sw' beats 'Sh')
print(min(inventory)) # 'Boots' is min: alphabetically first ('Bo' before 'Br')
# Add another item to our list
inventory.append('Hat') # .append will let us add to the end of our list
print(inventory)
inventory.insert(0, 'Knife') # This adds an item at the given index
print(inventory)
# Remove item from list
inventory.pop() # .pop will remove last item on list
print(inventory)
inventory.remove('Sword') # .remove a specific item from list
print(inventory)
inventory.clear() # to remove all items from list
inventory = [] # this is another way. But using .clear is overall better to understand
# + id="av4bS2gzp8oT"
# Multidimensional list. List within a list
universe = [[1, 2, 3], # Row 0
[1, 2], # Row 1
[1, 2, 3, 4, 5], # Row 2
[1, 2, 3, 4]] # Row 3
ninth_world = universe[2][1] # Row 2, Index 1
universe.append([1, 2, 3, 4, 5, 6]) # to add at end of list
universe[1].pop() # to remove item at end of list
print(universe)
# 02/02
# names of each student from 9th-12th grade
name_ofEachStudent_perGrade = [['Tom', 'Tim', 'Annie'],
['Jim', 'Jon', 'Sophie', 'Malia'],
['Jaxsen', 'Garrett', 'Kaison', 'Rylon'],
['Austin', 'Caleb', 'Lawrence', 'Jason', 'Justin' ]]
# Have a new student tranferring to 10th grade (Brett), please add him to list
name_ofEachStudent_perGrade[1].append('Brett') # Make sure to understand how and why we needed to add index 1 after per grade
# Caleb from 12th is moving, please remove him from list
name_ofEachStudent_perGrade[3].pop(1) # remove based on row and index within the row. Don't pass a name: .pop takes an index, so a name would throw an error
print(name_ofEachStudent_perGrade)
# + id="ShHxwzcXqVID"
# Tuples are immutable: they cannot change once created. Holds a value
item = ("Health Kit", 4)
name = item[0] # index placement
quantity = item[1] # index placement
print(name)
print(quantity)
item = ("knife", 1) # Since we're not able to change value, this is a new value
print(item)
print(item.count('Knife')) # 0: count is case-sensitive and the tuple holds 'knife'
print(item.index(1))
# practice
lunch_items = ("Meat Loaf", 6)
print(lunch_items)
name = lunch_items[0]
quantity = lunch_items[1]
print(name)
print(quantity)
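# +
# Hedged aside (not from the lesson): a tuple can also be unpacked in
# one step instead of indexing each position separately
lunch = ("Meat Loaf", 6)
food, servings = lunch
print(food)      # prints Meat Loaf
print(servings)  # prints 6
# -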
# + id="knMHiGzxwIV6"
# Dictionaries
inventory = {"Knife": 1, "Health Kit": 3, "Wood": 5}
# Exp inventory {"Key": Value, "Key": Value}
# with dictionaries, how could you access the value?
print(inventory["Knife"]) # This should give me the value of 1
# what if I wanted to change the value of an item?
inventory["Knife"] = 2 # this will change it from 1 to 2
print(inventory)
# what if we receive a new item. How would I add it to the list?
inventory["Gold"] = 50
print(inventory) # this adds the item to the end of the dictionary
# + id="_0Fpp_107Tfg"
# Dictionaries 2
inventory = {"Knife": 1, "Health Kit": 3, "Wood": 5}
# what if I'm looking for an item that does not exist on my inventory list?
# print(inventory['Gold']) would be one way, but a missing key raises a KeyError (a crash)
# if i had a very long list of 10k plus items, how would I know I'm missing gold?
# Another way would be using the GET function, exp below
print(inventory.get("Gold"))
# another way to get a list of keys and values
print(inventory.keys())
print(inventory.values())
# How to remove a key item from your list
inventory.pop("Knife")
print(inventory)
# to clear our inventory list
# use - inventory.clear()
# I can also do the following
print(len(inventory))
print(min(inventory))
print(max(inventory))
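# +
# Hedged aside (not from the lesson): .get also accepts a default value,
# so a missing key returns something useful instead of None
stock = {"Knife": 1, "Wood": 5}
print(stock.get("Gold", 0))  # prints 0
# -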
# + id="QvH4yJHzCAC3"
# Ranges, list of whole consecutive numbers
# What if you wanted the first 10 numbers
first_ten = range(10)
print(first_ten[0])
print(first_ten[9])
# you will only get up to 9 because range excludes the stop value. Use range(1, 11) to include the number 10, as below
first_ten = range(1,11)
print(first_ten[0])
print(first_ten[9])
# how to skip numbers in a list. Exp every 2 numbers
print(list(range(1,11,2)))
# How to reverse the list
reversed_ten = reversed(first_ten)
# would also need to convert to list
reversed_ten = list(reversed_ten)
print(reversed_ten)
# in and not in operators will return a boolean answer
print(5 in first_ten)
print(5 not in first_ten)
# + id="J98qwEiWDgG-"
# Conditionals (if, elif, else statements)
# example: How to move right when the correct key is pressed?
key_press = "l" # button player is pushing to get result. If incorrect button is pushed, nothing will happen
if key_press == "r":
print("Move right") # With Python, make sure it's indented
elif key_press == "l":
print("Move left")
else: # the else statement will only execute if the other 2 fail
print("Invalid key")
# A ternary operator requires both a possible value and an alternative value; it forces an else branch
command = "Move right" if key_press == "r" else "Move left"
# Practice codes
drinking_age = 24
if drinking_age >= 21:
print("Welcome In")
elif drinking_age <= 20:
print("Not old enough")
lunch_item = "p"
if lunch_item == "h":
print("hamburger")
elif lunch_item == "p":
print('pizza')
else:
print("Wrong selection, please choose again")
# things I missed, == and "" for strings
# + id="z1ikjo32S5Aj"
# conditionals 2 - If statement variants
num_lives = 3
num_health_kits = 3
health = 44
if health <= 0 and num_health_kits <= 0:
num_lives -= 1 # nested if statments
print('You lose a life!')
if num_lives <= 0:
print('Game over!')
elif health < 9:
print('Warning, health less than 10%! Find health recharge pack')
elif health < 49:
print('Warning, health less than 50%!')
# Make sure to have the smaller number on top if you're doing less-than checks. If not, the code will not work correctly
# Practice code
# Banking account information
spending_balance = 10
checking_balance = 4500
savings_balance = 15000
if spending_balance <= 0 or checking_balance <= 1000:
print('Transfer money from savings account!')
if spending_balance == 0:
print('You have a zero balance in your spending account!')
elif spending_balance >= 1 and spending_balance <= 99:
    print('You have less than $100 in your account')
elif spending_balance <= 500 and spending_balance >= 100:
print('You have spent more than 50% of your monthly spending limit')
# had to use the OR operator in the first if statement for the code to display the message.
# + id="7DhUvTxssYe4"
# While Loops (What we will cover in this exercise)
# create, run and break out of a while loop
start_position = 0
end_position = 5
while start_position < end_position: # While loop needs to have a condition
start_position += 1
print(start_position)
print("You've reached the end of your list!")
# example # 2. Code with more than one action
start_position = 0
end_position = 5
enemy_position = 2 # added code
while start_position < end_position: # While loop needs to have a condition
start_position += 1
print(start_position)
if start_position == enemy_position: # added code
print("You collided with the enemy") # added code
break # to break out of loop
print("Game over")
# Exp 3: What if there was a way to jump over the enemy?
start_position = 0
jump_position = 1 # added code
end_position = 5
enemy_position = 2
while start_position < end_position: # While loop needs to have a condition
start_position += 1
print(start_position)
if jump_position > 0: # added code
continue # added code
if start_position == enemy_position:
print("You collided with the enemy")
break # to break out of loop
print("You've reached the end of your list!")
# + id="z-bPMyVpXEGR"
# my practice code for while loops
# Basketball, shooting a free throw. Make 5 straight in a row before moving to the 3 point line. If any free throw is missed, you will start at zero
# start of code
free_throws_made = 0
three_pointers_made = 5
while free_throws_made < three_pointers_made:
free_throws_made += 1 # This is to add one until you reach 5
print(free_throws_made)
print("Please move to 3 point line!") # Once you reached 5, this note will let user know what to do next
# Now I will be adding a missed free throw action. Which will require player to start over.
free_throws_made = 0
missed_free_throws = 2
three_pointers_made = 5
while free_throws_made < three_pointers_made:
free_throws_made += 1 # This is to add one until you reach 5
print(free_throws_made)
if free_throws_made == missed_free_throws: # what would happen when you miss a free throw
print("Missed free throw, please start from beginning")
break # to stop loop
print("Must make 5 in a row before moving on")
# + id="baJTzQR3fw9d"
# For loop examples in python is called (For in loop) (19)
# create, run and must work with ranges and lists
for numbers in range(1,6): # this will print numbers 1-5, not 6.
print(numbers)
# now make a list, exp: inventory
inventory = ['Tent', 'Sleeping bag', 'Wood', 'Food', 'Bug spray']
for item in inventory:
print(item)
# using an if statement with a break
for item in range(4):
print(inventory[item])
if item == 1:
break
# converting from a for loop to a while loop
item = 0
while item < len(inventory):
print(inventory[item])
item += 1
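# +
# Hedged aside (not from the lesson): enumerate pairs each index with
# its item, which avoids the manual counter used in the while-loop
# version above
gear = ['Tent', 'Sleeping bag', 'Wood']
for position, thing in enumerate(gear):
    print(position, thing)
# -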
# + id="3QUk3ELQOPEx"
# Functions (20)
# Some examples of implementing and calling functions. Also variable scope
x_pos = 0 # Variable outside of the function.
def move():
    global x_pos # Needed since x_pos is defined outside the function; global wouldn't be needed if the variable lived inside the function.
    x_pos += 1 # A variable created inside the function is considered local, since it resides in the function
move() # this will give me 1. If I wanted to move more, I would just add more move() to the function and so on
print(x_pos)
# + id="6aXxd2cAUpJM"
# Functions (20-2)
# Example of Parameters and Return Values
# Adding parameters and return statements. Also passing in parameters and return values when calling functions
x_pos = 0
def move(by_amount): # function with parameters.
global x_pos
x_pos += by_amount
move(4) # You can pass any integer as the argument; it will move by that amount
print(x_pos)
x_pos = 0
def move(by_amount=1): # parameter with a default value, used in case you don't pass a value to move()
global x_pos
x_pos += by_amount
move() # With no argument, the default value of 1 is used
print(x_pos)
# + id="LCwR5a3-5kpx"
# Function (20-3)
start_position = 0
end_position = 10
x_pos = 0
def move(x_pos, by_amount=1):
x_pos += by_amount
    return x_pos # this return exits the function immediately, regardless of whether there is code below this line.
final_x_pos = move(x_pos, 5)
print(final_x_pos)
# example 2
start_position = 0
end_position = 10
x_pos = 0
def move(x_pos, by_amount=1):
global start_position, end_position
x_pos += by_amount
if x_pos < start_position:
return start_position
elif x_pos > end_position:
return end_position
    # no else needed: if either condition above is true, the function has already returned; otherwise execution falls through to the final return below
    return x_pos # this return exits the function immediately, regardless of whether there is code below this line.
final_x_pos = move(x_pos, 10) # change the interger value to make changes
print(final_x_pos)
# + id="TtVTwBCAnhUq"
# Class examples
# create a custom class to represent an object, add some fields, an initializer and methods
class GameCharacter:
# x_pos
# health
# name
speed = 1.0
    # def defines a constructor/function; __init__ (double underscores on each side) is the initializer
    # self refers to the GameCharacter instance being initialized
    # setting up the player with the following
def __init__(self, name, x_pos, health):
self.name = name
self.x_pos = x_pos
self.health = health
# movement within the game
def move(self, by_amount):
self.x_pos += by_amount
# to see what kind of damage was done to character
def take_damage(self, amount):
self.health -= amount
if self.health <= 0:
self.health = 0
# to check if player is dead or not
def check_is_dead(self):
return self.health <= 0
    # Need to add a function for speed using a static method; a static method is used to change a static variable
def change_speed(to_speed):
GameCharacter.speed = to_speed
# + id="BJxo-vRI-64K"
# Objects examples: create an instance of a class
# (an object) from our class, access object fields and call the object's methods
# We will use the information from above
# value of the character
game_character = GameCharacter('Tuan', 6, 100)
other_character = GameCharacter('Kaison', 2, 85)
# print(type(game_character))
# need to get more information about the character. What are we trying to find? e.g. name, position of player or health of player.
print(game_character.name)
print(other_character.name)
# Now move each character 3 times. Use .move to change players location
game_character.move(3)
other_character.move(3)
print(game_character.x_pos) # Don't forget to add x_pos
print(other_character.x_pos)
# each player is going to take some damage (-20)
game_character.take_damage(20) # .take_damage is from above code
other_character.take_damage(85)
print(game_character.health) # .health is from above as well. You can also see this when you hover your mouse over game character
print(other_character.health)
# check if any or all players are dead
print(game_character.check_is_dead()) # because this is a method, make sure to use (())
print(other_character.check_is_dead())
# + id="4dIzgejVGfME"
# inheritance examples.
# Create a subclass, compare to the superclass and go over the difference in usage between the two
# Create a class with a superclass for Player Character
class PlayerCharacter(GameCharacter): # the superclass goes in parentheses. The subclass (PlayerCharacter) now has all the info from the superclass
def __init__(self, name, x_pos, health, num_lives):
        super().__init__(name, x_pos, health) # superclass initializer (__init__); call it when reusing setup code from the superclass (see __init__ in GameCharacter above)
self.max_health = health
self.num_lives = num_lives
# to see what kind of damage was done to character
def take_damage(self, amount):
self.health -= amount
if self.health <= 0:
            self.num_lives -= 1 # when health reaches 0, it will take 1 life and replenish health to max.
            self.health = self.max_health # Replenish health to max
# to check if player is dead or not
def check_is_dead(self):
return self.health <= 0 and self.num_lives <= 0 # when this code is true, player will be dead and out of game
# + id="5lby6lEUcVKg"
# create a subclass instance alongside the superclass
# A subclass has everything the superclass has to offer, but not the other way around
pc = PlayerCharacter('Tuan', 0, 100, 4)
gc = GameCharacter('Tiger', 0, 100)
print(pc.max_health)
# print(gc.max_health) will not work for gc; see the note above for the explanation
# move both characters by any amount
pc.move(2)
print(pc.x_pos)
gc.move(2)
print(gc.x_pos)
# both characters take health damage
pc.take_damage(150) # with this being more than 100, we should lose a life but health will replenish to 100
gc.take_damage(150)
print(pc.health)
print(gc.health)
# check characters is dead using the method call
print(pc.check_is_dead()) # A method call uses parentheses
print(gc.check_is_dead())
# + id="6lkwe3FfhwPw"
# Static Members. Create and use some static members
# Static variables, static functions and how to access and use the static members
gc_1 = GameCharacter('Lion', 0, 100)
gc_2 = GameCharacter('Fox', 2, 100)
# Get speed for each character
print(gc_1.speed)
print(gc_2.speed)
# you can also do the following to get speed of game character
print(GameCharacter.speed) # this will give me the speed of all game character
# how to change the speed of game characters. Python is flexible and will let you assign it directly, as commented below:
# GameCharacter.speed = 2.0
GameCharacter.change_speed(3.0) # Adding change is a better way of changing the speed of character
print(gc_1.speed)
# gc_2.speed = 10
print(gc_2.speed)
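# +
# Hedged aside (a separate sketch, not part of GameCharacter): Python
# usually marks static methods explicitly with the @staticmethod decorator
class Counter:
    count = 0  # static (class) variable shared by all instances
    @staticmethod
    def reset():
        Counter.count = 0
Counter.count = 5
Counter.reset()
print(Counter.count)  # prints 0
# -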
| Python_Language_Basics.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# !pip install matplotlib --upgrade
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import datetime
import matplotlib
matplotlib.__version__
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
btc = pd.read_csv('btc_april.csv')
btc.head()
# +
#look at Issuance Count to determine exact dates where block reward halvings occurred
#1st halving: 2012-11-29
#2nd halving: 2016-07-10
# ~3rd halving: 2020-05-12
btc[['date', 'IssContNtv']].head()
# +
#we are only interested in btc volatility
btc_vol = btc[['date', 'VtyDayRet180d', 'VtyDayRet60d', 'VtyDayRet30d']]
# -
btc_vol.head()
#fill NaNs with 0, then confirm none remain
btc_vol = btc_vol.fillna(0)
btc_vol.isna().sum()
btc_vol.head()
# +
#had to convert 'date' column to datetime since the 'str' format wasn't being handled well by 'plt' or 'sns'
btc_vol['date'] = pd.to_datetime(btc_vol['date'])
# -
ax = plt.plot(btc_vol['date'], btc_vol['VtyDayRet180d'])
#using seaborn now
vol_plot_fast = sns.lineplot(x='date', y='VtyDayRet30d', data=btc_vol)
vol_plot_med = sns.lineplot(x='date', y='VtyDayRet60d', data=btc_vol)
vol_plot_slow = sns.lineplot(x='date', y='VtyDayRet180d', data=btc_vol)
# +
#might be more useful to zoom in
#divide df for period between inception and 1st halving
#and from 1st halving to 2nd halving
#and finally from 2nd halving to present
#creating masks
halv_1 = btc_vol[btc_vol['date'] == '2012-11-28']
halv_2 = btc_vol[btc_vol['date'] == '2016-07-09']
# -
vol_1st_halv = btc_vol[0:halv_1.index[0]+1]
vol_2nd_halv = btc_vol[halv_1.index[0]:halv_2.index[0]+1]
vol_pre = btc_vol[halv_2.index[0]:-1]
vol_1st_halv.head()
vol_2nd_halv.head()
vol_pre.head()
plt.figure(figsize=(20, 8))
vol_halv1_plot = sns.lineplot(x='date', y='VtyDayRet180d', data=vol_1st_halv)
plt.figure(figsize=(20, 8))
vol_halv2_plot = sns.lineplot(x='date', y='VtyDayRet180d', data=vol_2nd_halv)
plt.figure(figsize=(20, 8))
vol_pre_plot = sns.lineplot(x='date', y='VtyDayRet180d', data=vol_pre)
# +
#let's further zoom in and analyze volatility 1 month prior and 1 month post 1st halving
first_halv_prior_post = btc_vol[halv_1.index[0]-30:halv_1.index[0]+31]
# +
#30 days prior and 30 days post 1st halving
first_halv_prior_post.head()
# -
max_vol = max(first_halv_prior_post['VtyDayRet30d'])
max_vol
# +
#creating mask in order to map index to date of max volatility
max_v = first_halv_prior_post[['VtyDayRet30d']][first_halv_prior_post[['VtyDayRet30d']] == max_vol]
max_v = max_v.dropna()
max_v
# -
max_val_index = max_v.index[0]
max_val_index
halv1 = first_halv_prior_post[['date', 'VtyDayRet30d']].loc[max_val_index][0]
halv1
# +
#red line shows us max volatility for the current period
#blue line shows exact date of halving
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv1, color='r', linestyle='--', lw=2)
plt.axvline('2012-11-29', color='b', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_halv_prior_post)
# +
#repeating the process from above to the data from the 2nd halving
first_halv2_prior_post = btc_vol[halv_2.index[0]-30:halv_2.index[0]+31]
# -
first_halv2_prior_post.head()
max_vol2 = max(first_halv2_prior_post['VtyDayRet30d'])
max_vol2
max_v2 = first_halv2_prior_post[['VtyDayRet30d']][first_halv2_prior_post[['VtyDayRet30d']] == max_vol2]
max_v2 = max_v2.dropna()
max_v2
max_val2_index = max_v2.index[0]
max_val2_index
halv2 = first_halv2_prior_post[['date', 'VtyDayRet30d']].loc[max_val2_index][0]
halv2
# +
#red line shows us max volatility for the current period
#blue line shows exact date of halving
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv2, color='r', linestyle='--', lw=2)
plt.axvline('2016-07-10', color='b', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_halv2_prior_post)
# +
#since we don't have the data yet for the 3rd halving we compute the last 30 days of btc volatility
first_pre_prior_post = btc_vol[-31:-1]
# -
first_pre_prior_post.head()
max_vol3 = max(first_pre_prior_post['VtyDayRet30d'])
max_vol3
max_v3 = first_pre_prior_post[['VtyDayRet30d']][first_pre_prior_post[['VtyDayRet30d']] == max_vol3]
max_v3 = max_v3.dropna()
max_v3
max_val3_index = max_v3.index[0]
max_val3_index
halv3 = first_pre_prior_post[['date', 'VtyDayRet30d']].loc[max_val3_index][0]
halv3
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv3, color='r', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_pre_prior_post)
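# +
# The three look-ups above repeat one pattern: take the max of a
# volatility column, then map its index back to a date. A small helper
# condenses this; a sketch, assuming the same column names as above:
import pandas as pd

def max_vol_date(window, col='VtyDayRet30d'):
    # idxmax returns the row label of the column's maximum value
    return window.loc[window[col].idxmax(), 'date']
# e.g. halv3 = max_vol_date(first_pre_prior_post)
# -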
# +
############################################################################################################
############################################################################################################
# +
#let's further zoom out and analyze volatility 3 months prior and 3 months post 1st halving
first_halv_prior_post = btc_vol[halv_1.index[0]-90:halv_1.index[0]+91]
# +
#90 days prior and 90 days post 1st halving
first_halv_prior_post.head()
# -
max_vol = max(first_halv_prior_post['VtyDayRet30d'])
max_vol
# +
#creating mask in order to map index to date of max volatility
max_v = first_halv_prior_post[['VtyDayRet30d']][first_halv_prior_post[['VtyDayRet30d']] == max_vol]
max_v = max_v.dropna()
max_v
# -
max_val_index = max_v.index[0]
max_val_index
halv1 = first_halv_prior_post[['date', 'VtyDayRet30d']].loc[max_val_index][0]
halv1
# +
#red line shows us max volatility for the current period
#blue line shows exact date of halving
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv1, color='r', linestyle='--', lw=2)
plt.axvline('2012-11-29', color='b', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_halv_prior_post)
# +
#repeating the process from above to the data from the 2nd halving
first_halv2_prior_post = btc_vol[halv_2.index[0]-90:halv_2.index[0]+91]
# -
first_halv2_prior_post.head()
max_vol2 = max(first_halv2_prior_post['VtyDayRet30d'])
max_vol2
max_v2 = first_halv2_prior_post[['VtyDayRet30d']][first_halv2_prior_post[['VtyDayRet30d']] == max_vol2]
max_v2 = max_v2.dropna()
max_v2
max_val2_index = max_v2.index[0]
max_val2_index
halv2 = first_halv2_prior_post[['date', 'VtyDayRet30d']].loc[max_val2_index][0]
halv2
# +
#red line shows us max volatility for the current period
#blue line shows exact date of halving
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv2, color='r', linestyle='--', lw=2)
plt.axvline('2016-07-10', color='b', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_halv2_prior_post)
# +
#since we don't have the data yet for the 3rd halving we compute the last 90 days of btc volatility
first_pre_prior_post = btc_vol[-91:-1]
# -
first_pre_prior_post.head()
max_vol3 = max(first_pre_prior_post['VtyDayRet30d'])
max_vol3
max_v3 = first_pre_prior_post[['VtyDayRet30d']][first_pre_prior_post[['VtyDayRet30d']] == max_vol3]
max_v3 = max_v3.dropna()
max_v3
max_val3_index = max_v3.index[0]
max_val3_index
halv3 = first_pre_prior_post[['date', 'VtyDayRet30d']].loc[max_val3_index][0]
halv3
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv3, color='r', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_pre_prior_post)
# +
############################################################################################################
############################################################################################################
# +
#now let's analyze other hardforked coins: LTC, BCH, BSV,
# -
ltc = pd.read_csv('ltc_april.csv')
ltc.head(1)
ltc = ltc.rename(columns={'time': 'date'})
ltc['date'] = [i[0:10] for i in ltc['date']]
ltc['date'] = pd.to_datetime(ltc['date'])
ltc.head(1)
#confirm halving dates
ltc[['date', 'BlkCnt', 'IssContNtv']].head(1)
# +
#we are only interested in btc volatility
ltc_vol = ltc[['date', 'VtyDayRet180d', 'VtyDayRet60d', 'VtyDayRet30d']]
# -
type(ltc_vol['date'][0])
# +
#might be more useful to zoom in
#divide df for period between inception and 1st halving
#and from 1st halving to 2nd halving
#and finally from 2nd halving to present
#creating masks
ltc_halv_1 = ltc_vol[ltc_vol['date'] == '2015-08-25']
ltc_halv_2 = ltc_vol[ltc_vol['date'] == '2019-08-05']
# -
vol_1st_halv = ltc_vol[0:ltc_halv_1.index[0]+1]
vol_2nd_halv = ltc_vol[ltc_halv_1.index[0]:ltc_halv_2.index[0]+1]
vol_pre = ltc_vol[ltc_halv_2.index[0]:-1]
# +
#let's further zoom in and analyze volatility 1 month prior and 1 month post 1st halving
first_halv_prior_post = ltc_vol[ltc_halv_1.index[0]-30:ltc_halv_1.index[0]+31]
# +
#30 days prior and 30 days post 1st halving
first_halv_prior_post.head()
# -
max_vol = max(first_halv_prior_post['VtyDayRet30d'])
max_vol
# +
#creating mask in order to map index to date of max volatility
max_v = first_halv_prior_post[['VtyDayRet30d']][first_halv_prior_post[['VtyDayRet30d']] == max_vol]
max_v = max_v.dropna()
max_v
# -
max_val_index = max_v.index[0]
max_val_index
halv1 = first_halv_prior_post[['date', 'VtyDayRet30d']].loc[max_val_index][0]
halv1
# +
#red line shows us max volatility for the current period
#blue line shows exact date of halving
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv1, color='r', linestyle='--', lw=2)
plt.axvline('2015-08-25', color='b', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_halv_prior_post)
# +
#repeating the process from above to the data from the 2nd halving
first_halv2_prior_post = ltc_vol[ltc_halv_2.index[0]-30:ltc_halv_2.index[0]+31]
# -
first_halv2_prior_post.head()
max_vol2 = max(first_halv2_prior_post['VtyDayRet30d'])
max_vol2
max_v2 = first_halv2_prior_post[['VtyDayRet30d']][first_halv2_prior_post[['VtyDayRet30d']] == max_vol2]
max_v2 = max_v2.dropna()
max_v2
max_val2_index = max_v2.index[0]
max_val2_index
halv2 = first_halv2_prior_post[['date', 'VtyDayRet30d']].loc[max_val2_index][0]
halv2
# +
#red line shows us max volatility for the current period
#blue line shows exact date of halving
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv2, color='r', linestyle='--', lw=2)
plt.axvline('2019-08-05', color='b', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_halv2_prior_post)
# +
#let's further zoom out and analyze volatility 3 months prior and 3 months post 1st halving
first_halv_prior_post = ltc_vol[ltc_halv_1.index[0]-90:ltc_halv_1.index[0]+91]
# +
#90 days prior and 90 days post 1st halving
first_halv_prior_post.head()
# -
max_vol = max(first_halv_prior_post['VtyDayRet30d'])
max_vol
# +
#creating mask in order to map index to date of max volatility
max_v = first_halv_prior_post[['VtyDayRet30d']][first_halv_prior_post[['VtyDayRet30d']] == max_vol]
max_v = max_v.dropna()
max_v
# -
max_val_index = max_v.index[0]
max_val_index
halv1 = first_halv_prior_post[['date', 'VtyDayRet30d']].loc[max_val_index][0]
halv1
# +
#red line shows us max volatility for the current period
#blue line shows exact date of halving
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv1, color='r', linestyle='--', lw=2)
plt.axvline('2015-08-25', color='b', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_halv_prior_post)
# +
############################################################################################################
# +
#let's further zoom out and analyze volatility 3 months prior and 3 months post 2nd halving
first_halv2_prior_post = ltc_vol[ltc_halv_2.index[0]-90:ltc_halv_2.index[0]+91]
# +
# 90 days prior and 90 days post the 2nd halving
first_halv2_prior_post.head()
# -
max_vol = max(first_halv2_prior_post['VtyDayRet30d'])
max_vol
# +
#creating mask in order to map index to date of max volatility
max_v = first_halv2_prior_post[['VtyDayRet30d']][first_halv2_prior_post[['VtyDayRet30d']] == max_vol]
max_v = max_v.dropna()
max_v
# -
max_val_index = max_v.index[0]
max_val_index
halv2 = first_halv2_prior_post[['date', 'VtyDayRet30d']].loc[max_val_index][0]
halv2
# +
# red line marks the date of max volatility in this window
# blue line marks the exact date of the halving
fig, ax = plt.subplots(figsize = (15,6))
fig.autofmt_xdate()
plt.axvline(halv2, color='r', linestyle='--', lw=2)
plt.axvline('2019-08-05', color='b', linestyle='--', lw=2)
fig = sns.lineplot(x='date', y='VtyDayRet30d', data=first_halv2_prior_post)
| .ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import pandas as pd
import os
from pathlib import Path
import tqdm
import matplotlib.pyplot as plt
# %matplotlib inline
# -
# ### Data preparation
def txt_to_matrix(filename, line_skip = 5):
f = open(filename, 'r')
# Lineskip, cleaning, conversion
data = f.readlines()[line_skip:]
f.close()
return np.asarray(
[l.replace("\n", "").split() for l in data]
).astype(np.float32)
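To sanity-check the parser, a tiny synthetic file (5 header lines, then whitespace-separated floats) round-trips as expected. This sketch inlines a copy of the parsing logic so it is self-contained; the `.DEP` suffix and 5-line header are taken from the notebook's conventions:

```python
import os
import tempfile
import numpy as np

def txt_to_matrix(filename, line_skip=5):
    # same logic as above: drop header lines, parse whitespace-separated floats
    with open(filename, 'r') as f:
        data = f.readlines()[line_skip:]
    return np.asarray([l.split() for l in data]).astype(np.float32)

# synthetic decoded file: 5 header lines followed by a 2x3 block of floats
with tempfile.NamedTemporaryFile('w', suffix='.DEP', delete=False) as tmp:
    tmp.write('h1\nh2\nh3\nh4\nh5\n')
    tmp.write('1.0 2.0 3.0\n4.0 5.0 6.0\n')
    path = tmp.name

mat = txt_to_matrix(path)
os.unlink(path)
```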
# +
def get_time_step(root, index):
wse = txt_to_matrix(root + '/decoded--' + index + '.WSE')
dep = txt_to_matrix(root + '/decoded--' + index + '.DEP')
vvx = txt_to_matrix(root + '/decoded--' + index + '.VVX')
vvy = txt_to_matrix(root + '/decoded--' + index + '.VVY')
# timestep: 801 rows x 4 measurements x 1256 columns (values)
return np.array(list(zip(wse, dep, vvx, vvy)))
def get_dep_time_step(root, index):
dep = txt_to_matrix(root + '/decoded--' + index + '.DEP')
# timestep: 801 rows x 1256 columns (DEP values only)
return np.array(dep)
# +
rootdir = '../output/'
timesteps = []
paths = [p for p in sorted(os.listdir('../output'))]
x = 0
ceiling = 50
# Read all dirs and process them
for path in tqdm.tqdm(paths):
if x >= ceiling: break
# Processing
path = rootdir + path
timesteps.append(
get_dep_time_step(
path, ("{:04d}".format(x))
)
)
x += 1
timesteps = np.asarray(timesteps).astype(np.float32)
# -
plt.plot(timesteps[:, 401, 600])
# +
from skimage.transform import rescale, resize, downscale_local_mean
sample = timesteps[10, :, :]
ratio = 1
s_x = int(sample.shape[0] * ratio)
s_y = int(sample.shape[1] * ratio)
sample = resize(sample, (s_x, s_y))
plt.imshow(sample)
# +
dataset = timesteps[:, :, :].astype(np.float64)
plt.plot(dataset[:, 400, 600])
# +
'''
dataset_min = dataset.min(axis=(1, 2), keepdims=True)
dataset_max = dataset.max(axis=(1, 2), keepdims=True)
norm = ((dataset - dataset_min)/(dataset_max - dataset_min))
plt.plot(norm[:, 400, 600])
'''
dataset[dataset > 10] = 0
# -
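The commented-out normalization above scales each frame independently via `keepdims=True`. As a NumPy sketch of the same trick on toy data (not the simulation output), each frame ends up spanning [0, 1]:

```python
import numpy as np

# toy stack of 4 "frames", each 3x5
dataset = np.arange(60, dtype=np.float64).reshape(4, 3, 5)

# per-frame min/max; keepdims keeps shape (4, 1, 1) so broadcasting lines up
d_min = dataset.min(axis=(1, 2), keepdims=True)
d_max = dataset.max(axis=(1, 2), keepdims=True)
norm = (dataset - d_min) / (d_max - d_min)  # every frame now spans [0, 1]
```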
# ### Building the net
import torch.nn as nn
import torch.nn.functional as F
import torch
# +
if torch.cuda.is_available():
dev = "cuda:0"
else:
dev = "cpu"
device = torch.device(dev)
# +
class CNNet(torch.nn.Module):
# Input is a single-channel depth frame of shape (1, 1, height, width)
def __init__(self, w = 801, h = 1256):
super(CNNet, self).__init__()
# input channels, output channels
self.conv1 = torch.nn.Conv2d(1, 5, 3)
self.pool = torch.nn.MaxPool2d(kernel_size=2, stride=2, padding=0)
# DNN
self.fc1 = torch.nn.Linear(5 * 399 * 627, 64)
self.fc2 = torch.nn.Linear(64, s_x * s_y)
def forward(self, x):
x = F.relu(self.conv1(x))
x = self.pool(x)
x = x.flatten()
x = F.relu(self.fc1(x))
x = self.fc2(x)
return(x)
net = CNNet().to(device)
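The hard-coded `5 * 399 * 627` in `fc1` comes from the conv/pool output sizes for an 801 x 1256 input. A quick check of the arithmetic (3x3 kernel, stride 1, no padding, then 2x2 max-pool with stride 2):

```python
# Conv2d with kernel k, stride 1, no padding: out = in - k + 1
def conv_out(n, k=3):
    return n - k + 1

# MaxPool2d with kernel 2, stride 2: out = floor(in / 2)
def pool_out(n):
    return n // 2

h = pool_out(conv_out(801))    # 801 -> 799 -> 399
w = pool_out(conv_out(1256))   # 1256 -> 1254 -> 627
flat = 5 * h * w               # 5 output channels from conv1
```

If the input resolution (or `ratio` in the resize cell) changes, this flattened size must be recomputed or the `fc1` matmul will fail.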
# +
t = torch.cuda.get_device_properties(0).total_memory
c = torch.cuda.memory_reserved(0)  # memory_cached was renamed to memory_reserved
a = torch.cuda.memory_allocated(0)
f = t - a - c  # free memory: neither allocated nor reserved
print((f/t)*100)
# +
import torch.optim as optim
from torchvision import transforms, utils
criterion = nn.MSELoss()
optimizer = optim.Adam(net.parameters(), lr=0.001)
# Training
# primo frame
in_x = dataset[0, :, :]
# +
from copy import deepcopy
losses = []
for epoch in range(10):
frame = 0
epoch_loss = []
for out_x in dataset[1:, :,:]:
sample = torch.FloatTensor(resize(in_x, (s_x, s_y))).expand(1, 1, -1, -1).to(device)
y = torch.FloatTensor(resize(out_x, (s_x, s_y))).flatten().to(device)
#sample = torch.FloatTensor(in_x).to(device)
#y = torch.FloatTensor(out_x).flatten().to(device)
# Compute
y_hat = net(
sample
).to(device)
# Loss: clear stale gradients before backprop, otherwise they accumulate across iterations
optimizer.zero_grad()
loss = criterion(y_hat, y)
loss.backward()
optimizer.step()
epoch_loss.append(loss.item())
# Next step
in_x = deepcopy(out_x)
frame += 1
# print("Loss {}".format(loss.item()))
avg = np.asarray(epoch_loss).mean()
losses.append(avg)
print("Epoch {} - avg.loss: {}".format(epoch, avg))
# -
plt.plot(losses)
| notebooks/02-cnn-pytorch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# [View in Colaboratory](https://colab.research.google.com/github/saicodes/Style-Transfer-on-google-colab/blob/master/MinorProject.ipynb)
# + id="9flzHOROSJZJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 104} outputId="79d0f93b-ec36-4bd9-cbaf-6d4447df407e"
# !rm -r ComputerVision NeuralStyleTransfer
# !git clone https://github.com/ldfrancis/ComputerVision.git
# !cp -r ComputerVision/NeuralStyleTransfer .
# + id="ObrIcvJjR-9c" colab_type="code" colab={}
# %load https://github.com/ldfrancis/ComputerVision/blob/master/NeuralStyleTransfer/implementNTS.py
# + id="Ucr8PfBsLLTs" colab_type="code" colab={}
from google.colab import files
files.upload()
# + id="aX-ZmbZULQv6" colab_type="code" colab={}
CONTENT_IMAGE = "content1.jpg"
STYLE_IMAGE = "style1.jpg"
IMAGE_HEIGHT = 300
IMAGE_WIDTH = 400
ITERATION = 200
# + id="RhmoYqQILnIs" colab_type="code" colab={}
path_to_content_image = "/content/"+CONTENT_IMAGE
path_to_style_image = "/content/"+STYLE_IMAGE
# + id="2_n92fG5Lpw5" colab_type="code" colab={}
import matplotlib.pyplot as plt
c_image = plt.imread(path_to_content_image)
s_image = plt.imread(path_to_style_image)
# + id="I4uTspjRLsHL" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 385} outputId="8f4d88a5-dde6-4e09-bfff-77bef98bc65e"
print("Content Image of size (height, width) => {0}".format(c_image.shape[:-1]))
plt.imshow(c_image)
# + id="ewBF1WhaLuX_" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 385} outputId="4f9777f1-58a4-4e3b-9515-9f543bf89ccb"
print("Style Image of size (height, width) => {0}".format(s_image.shape[:-1]))
plt.imshow(s_image)
# + id="BgpOjJxALxQH" colab_type="code" colab={}
from NeuralStyleTransfer import implementNTS as NST
# + id="rf0jsAa6WypO" colab_type="code" colab={}
NST.setImageDim(IMAGE_WIDTH,IMAGE_HEIGHT)
# + id="8E24PpMKYLZK" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1112} outputId="91c9c751-844f-4f56-ae1f-4a2fba98bfeb"
NST.run(ITERATION, style_image=path_to_style_image, content_image=path_to_content_image)
# + id="4UggxnJML0m2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 368} outputId="fd3230c6-4f07-4a1f-8db6-dbb808aea507"
generated_image_path = "/content/NeuralStyleTransfer/output/generated_image.jpg"
image = plt.imread(generated_image_path)
plt.imshow(image)
# + id="5ug6EOWmM93n" colab_type="code" colab={}
files.download("NeuralStyleTransfer/output/generated_image.jpg")
| notebooks/style-transfer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + nbpresent={"id": "270863f4-df4b-4cdc-8f75-891493ba3c17"}
print('Hello world!')
# + nbpresent={"id": "19474935-710c-4cfa-b95e-db20a636a02e"}
import numpy as np
import pandas as pd
# + nbpresent={"id": "1c4b3d45-aafd-4121-8c65-1b9cf97fd36f"}
import os
import tarfile
# + nbpresent={"id": "a57aa152-21b5-4ed9-a602-0cb27eb26b2b"}
HOUSING_PATH = 'datasets/housing'
# + nbpresent={"id": "680d9d0b-899b-4c4b-98a4-e783de68107b"}
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml/master/"
HOUSING_PATH = "datasets/housing"
HOUSING_URL = DOWNLOAD_ROOT + HOUSING_PATH + "/housing.tgz"
def fetch_housing_data(housing_url=HOUSING_URL, housing_path=HOUSING_PATH):
housing_csv_path = os.path.join(housing_path, 'housing.csv')
housing_tgz_path = os.path.join(housing_path, 'housing.tgz')
if os.path.isfile(housing_csv_path):
print(f'Found {housing_csv_path}; nothing to do')
return
if os.path.isfile(housing_tgz_path):
print(f'Found {housing_tgz_path}; extracting it')
housing_tgz = tarfile.open(housing_tgz_path)
housing_tgz.extractall(path=housing_path)
housing_tgz.close()
return
print(f'Cannot find {housing_csv_path}')
fetch_housing_data()
# + nbpresent={"id": "f0045e07-2c66-4dfc-b3d9-a7857b67daae"}
def load_housing_data(housing_path=HOUSING_PATH):
csv_path = os.path.join(housing_path, "housing.csv")
return pd.read_csv(csv_path)
# + nbpresent={"id": "719d1c1e-ce52-4bed-acc0-c4b427df562a"}
housing = load_housing_data()
# + nbpresent={"id": "2753a5b3-0fdb-45fd-8aec-1f1cc94c6992"}
housing.head()
# + nbpresent={"id": "0f0755bc-5e9b-454e-94dd-8f56934c0240"}
housing.info()
# + nbpresent={"id": "3b8b94e9-5f2c-4ce1-9ad8-b05741464994"}
housing.ocean_proximity.value_counts()
# + nbpresent={"id": "29da4d17-81ae-4b7d-a656-cce82addda4b"}
housing['ocean_proximity'].value_counts()
# + nbpresent={"id": "d857d7f4-97ab-4ea3-bb99-bb9e03cde1e8"}
housing.describe()
# + nbpresent={"id": "65ccb1ee-6e6a-43d7-81f3-4b2fbb2260a8"}
# %matplotlib inline
import matplotlib.pyplot as plt
housing.hist(bins=50, figsize=(20,15))
plt.show()
# + nbpresent={"id": "dd9f1eec-7e2a-4a3f-8559-c595d6dd6f41"}
housing.median_income.describe()
# + nbpresent={"id": "f53bd07b-3492-4463-bf34-869ace37fdca"}
housing.median_income.hist(bins=15)
plt.show()
# + nbpresent={"id": "41c39473-c192-4b7c-9752-c71eb9c9675f"}
income_cat = np.ceil(housing.median_income / 1.5)
# + nbpresent={"id": "fd055754-0c23-4bc7-be6e-c8270c576221"}
income_cat.where(income_cat < 5.0, 5.0, inplace=True)
# + nbpresent={"id": "fe7d513e-9cc4-41ae-a057-e7aeb4b9cc46"}
# The above operations can be replaced by the following
income_cat2 = np.ceil(housing.median_income / 1.5)
income_cat2[income_cat2 > 5.0] = 5.0
(income_cat2 == income_cat).all()
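The ceil-and-cap recipe above is equivalent to binning income directly, and newer pandas code usually expresses it with `pd.cut`. A sketch on made-up incomes, with bin edges chosen to match the `/1.5` scaling and the cap at category 5 (this is an alternative formulation, not what the notebook runs):

```python
import numpy as np
import pandas as pd

# toy incomes, in the same units as median_income (tens of thousands)
income = pd.Series([0.5, 1.6, 3.1, 4.6, 6.1, 12.0])

# right-closed bins (0, 1.5], (1.5, 3], ... reproduce ceil(income / 1.5) capped at 5
income_cat = pd.cut(income,
                    bins=[0., 1.5, 3.0, 4.5, 6.0, np.inf],
                    labels=[1, 2, 3, 4, 5])
```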
# + nbpresent={"id": "d3507c78-efbd-4b84-91eb-9f0fab2897bd"}
income_cat.describe()
# + nbpresent={"id": "90197af0-7f1c-4a50-9dff-309ffc16044e"}
income_cat.value_counts() / len(income_cat)
# + nbpresent={"id": "7f55dc1d-df5f-4f96-9a9e-76b56ffa529e"}
from sklearn.model_selection import StratifiedShuffleSplit
# + nbpresent={"id": "dd9466aa-8869-4ae7-a880-4a37fa1d8411"}
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
# + nbpresent={"id": "974852e1-4c66-4fec-b360-7f1c1adae9eb"}
housing['income_cat'] = income_cat
for train_index, test_index in split.split(housing, housing['income_cat']):
strat_train_set = housing.loc[train_index]
strat_test_set = housing.loc[test_index]
# + nbpresent={"id": "75312c54-d9b8-488d-985f-7880c16482a0"}
Stratified = strat_test_set['income_cat'].value_counts().sort_index() / len(strat_test_set)
Overall = housing['income_cat'].value_counts().sort_index() / len(housing)
data = pd.DataFrame({'Overall': Overall, 'Stratified' : Stratified})
data['Strat. %error'] = (data['Overall'] - data['Stratified']) / data['Overall'] * 100
data
# + [markdown] nbpresent={"id": "d938597d-4e3a-4bc9-95ae-8ade8e5b96ab"}
# ## Visualizing Data
# + nbpresent={"id": "670e673e-a87e-4766-94bb-23d264c895d2"}
strat_train_set_copy = strat_train_set.copy()
# + nbpresent={"id": "73a5594e-2890-4f4a-9015-7a8b35b8bfb9"}
housing.plot(kind="scatter", x='longitude', y='latitude')
# + nbpresent={"id": "2b9d6051-bb97-4e4e-9529-85db64bbd72a"}
housing.plot(kind="scatter", x='longitude', y='latitude', alpha=0.1)
# + nbpresent={"id": "fd9cff1a-d7e7-469b-8761-8a565e8b7bb1"}
strat_train_set_copy.plot(kind='scatter', x='longitude', y='latitude', alpha=0.4,
s=strat_train_set_copy.population/100,
c=strat_train_set_copy.median_house_value,
cmap=plt.get_cmap("jet"),
label="population", figsize=(15, 15),
colorbar=True)
plt.legend()
# + nbpresent={"id": "a00d55c1-814a-4e07-97c7-2a65a917ac7b"}
corr_matrix = strat_train_set_copy.corr()
# + nbpresent={"id": "92e63c67-babb-44c9-843c-718c3b378981"}
corr_matrix.median_house_value.sort_values(ascending=False)
# + nbpresent={"id": "5355aa4e-722a-4363-b2e1-2ab91eb57c40"}
from pandas.plotting import scatter_matrix
# + nbpresent={"id": "f13e1a19-d7a8-4e29-8a4a-685653911f60"}
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
scatter_matrix(housing[attributes], figsize=(12, 8))
# + nbpresent={"id": "8eade140-23f9-4487-9d01-58133cc8cb02"}
strat_train_set_copy.plot.scatter(x="median_income", y="median_house_value", alpha=0.1)
# -
# ### Experimenting with Attribute Combinations
housing["rooms_per_household"] = housing["total_rooms"] / housing["households"]
housing["bedrooms_per_room"] = housing["total_bedrooms"]/housing["total_rooms"]
housing["population_per_household"]=housing["population"]/housing["households"]
housing.info()
corr_matrix = housing.corr()
corr_matrix['median_house_value'].sort_values(ascending=False)
# ## 2.5 Prepare the Data for Machine Learning Algorithms
housing = strat_train_set.drop('median_house_value', axis=1)
housing_labels = strat_train_set['median_house_value'].copy()
housing.info()
housing.dropna(subset=['total_bedrooms']).info()
housing.drop('total_bedrooms', axis=1).info()
housing['total_bedrooms'].fillna(housing['total_bedrooms'].median()).describe()
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(strategy='median')
housing_num = housing.drop("ocean_proximity", axis=1)
imputer.fit(housing_num)
imputer.statistics_
imputer.strategy
housing.drop("ocean_proximity", axis=1).median().values
X = imputer.transform(housing_num)
X
housing_tr = pd.DataFrame(X, columns=housing_num.columns)
housing_tr.head()
# ### Handling Text and Categorical Attributes
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
housing_cat = housing.ocean_proximity
housing_cat.describe()
housing_cat.value_counts()
housing_cat_encoded = encoder.fit_transform(housing_cat)
housing_cat_encoded
type(housing_cat_encoded)
print(encoder.classes_)
# #### One hot encoding
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
print(housing_cat_encoded.shape)
print(type(housing_cat_encoded))
(housing_cat_encoded.reshape(-1, 1)).shape
housing_cat_1hot = encoder.fit_transform(housing_cat_encoded.reshape(-1, 1))
housing_cat_1hot
type(housing_cat_1hot)
housing_cat_1hot.toarray()
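The `LabelEncoder` then `OneHotEncoder` detour above exists because older scikit-learn `OneHotEncoder` only accepted integer inputs; current versions accept string categories directly. A sketch on toy categories rather than the housing data:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# a 2-D column of string categories, as OneHotEncoder expects
cats = np.array(['INLAND', 'NEAR BAY', 'INLAND']).reshape(-1, 1)

encoder = OneHotEncoder()
one_hot = encoder.fit_transform(cats).toarray()  # default output is sparse
```

Categories are ordered alphabetically in `encoder.categories_`, so the first column corresponds to `'INLAND'`.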
# ### Combine
from sklearn.preprocessing import LabelBinarizer
encoder = LabelBinarizer(sparse_output=False)
housing_cat_1hot = encoder.fit_transform(housing_cat)
housing_cat_1hot
type(housing_cat_1hot)
# ### Custom Transformers
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
housing.head()
housing.iloc[:, 3]
X = housing.values
# The same selection can be done with iloc instead of indexing into .values
housing.iloc[:, [rooms_ix, bedrooms_ix, households_ix, population_ix]].head()
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
np.c_[X, rooms_per_household, population_per_household]
np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
# +
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedrooms_per_room=False):
self.add_bedrooms_per_room = add_bedrooms_per_room
def fit(self, X, y=None):
return self
def transform(self, X, y=None):
rooms_per_household = X[:, rooms_ix] / X[:, households_ix]
population_per_household = X[:, population_ix] / X[:, households_ix]
if self.add_bedrooms_per_room:
bedrooms_per_room = X[:, bedrooms_ix] / X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
# -
attr_adder = CombinedAttributesAdder(add_bedrooms_per_room=False)
housing_extra_attribs = attr_adder.transform(X)
print(housing_extra_attribs.shape)
print(housing.shape)
# Convert back to data frame -- My way
new_columns = housing.columns.append(
pd.Index(['rooms_per_household', 'population_per_household'])
)
new_columns
housing_extra_attribs_df = pd.DataFrame(housing_extra_attribs, columns=new_columns)
housing_extra_attribs_df.head()
# ### 2.5.4 Feature Scaling
housing.describe()
housing.total_rooms.describe()
# +
from sklearn.preprocessing import MinMaxScaler
scalar = MinMaxScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)), columns=["total_rooms"])["total_rooms"].describe()
# +
from sklearn.preprocessing import StandardScaler
scalar = StandardScaler()
scalar.fit(housing["total_rooms"].values.reshape(-1, 1))
pd.DataFrame(scalar.transform(housing["total_rooms"].values.reshape(-1, 1)), columns=["total_rooms"])["total_rooms"].describe()
# -
# ### 2.5.5 Transformation Pipeline
# +
from sklearn.pipeline import Pipeline
num_pipeline = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
# +
# I want to verify that the pipelined version
# does the same thing as the separate steps
num_pipeline_stage1 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
])
X_pipeline = num_pipeline_stage1.fit_transform(housing_num)
X = imputer.transform(housing_num)
X_pipeline
np.array_equal(X, X_pipeline)
# +
num_pipeline_stage2 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
])
Y = attr_adder.fit_transform(X)
Y_pipeline = num_pipeline_stage2.fit_transform(housing_num)
np.array_equal(Y, Y_pipeline)
# +
num_pipeline_stage3 = Pipeline([
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
Z = scalar.fit_transform(Y)
Z.std(), Z.mean()
Z_pipeline = num_pipeline_stage3.fit_transform(housing_num)
np.array_equal(Z, Z_pipeline)
# +
from sklearn.base import BaseEstimator, TransformerMixin
class DataFrameSelector(BaseEstimator, TransformerMixin):
def __init__(self, attribute_names):
self.attribute_names = attribute_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.attribute_names].values
class CustomizedLabelBinarizer(BaseEstimator, TransformerMixin):
def __init__(self, sparse_output=False):
self.encode = LabelBinarizer(sparse_output = sparse_output)
def fit(self, X, y=None):
self.encode.fit(X)
return self  # return the wrapper so Pipeline keeps this transformer, not the inner encoder
def transform(self, X):
return self.encode.transform(X)
num_attribs = list(housing_num)
cat_attribs = ["ocean_proximity"]
num_pipeline = Pipeline([
('selector', DataFrameSelector(num_attribs)),
('imputer', SimpleImputer(strategy="median")),
('attr_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler()),
]
)
cat_pipeline = Pipeline([
('selector', DataFrameSelector(cat_attribs)),
('label_binarizer', CustomizedLabelBinarizer()),
]
)
# LabelBinarizer().fit_transform(DataFrameSelector(cat_attribs).fit_transform(housing))
# num_pipeline.fit_transform(housing)
# cat_pipeline.fit_transform(housing)
from sklearn.pipeline import FeatureUnion
full_pipeline = FeatureUnion(transformer_list=[
('num_pipeline', num_pipeline),
('cat_pipeline', cat_pipeline),
])
housing_prepared = full_pipeline.fit_transform(housing)
print(housing_prepared.shape)
housing_prepared
# -
# ### 2.6.1 Training and Evaluating on the Training Set
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(housing_prepared, housing_labels)
some_data = housing[:5]
some_data
some_labels = housing_labels[:5]
some_labels
some_data_prepared = full_pipeline.transform(some_data)
some_data_prepared
print(f'Prediction:\t{lin_reg.predict(some_data_prepared)}')
print(f'Labels:\t\t{list(some_labels)}')
from sklearn.metrics import mean_squared_error
housing_prediction = lin_reg.predict(housing_prepared)
lin_mse = mean_squared_error(housing_prediction, housing_labels)
lin_rmse = np.sqrt(lin_mse)
lin_rmse
# #### Tree model
# +
from sklearn.tree import DecisionTreeRegressor
tree_reg = DecisionTreeRegressor()
tree_reg.fit(housing_prepared, housing_labels)
tree_predictions = tree_reg.predict(housing_prepared)
tree_mse = mean_squared_error(tree_predictions, housing_labels)
tree_rmse = np.sqrt(tree_mse)
tree_rmse
# -
# ### 2.6.2 Better Evaluation Using Cross-Validation
from sklearn.model_selection import cross_val_score
scores = cross_val_score(tree_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
rmse_scores = np.sqrt(-scores)
rmse_scores
# +
def display_scores(scores):
print(f'Scores: {scores}')
print(f'Mean: {scores.mean()}')
print(f'STD: {scores.std()}')
display_scores(rmse_scores)
# -
# #### Random Forest
# +
from sklearn.ensemble import RandomForestRegressor
forest_reg = RandomForestRegressor()
forest_reg.fit(housing_prepared, housing_labels)
forest_scores = cross_val_score(forest_reg, housing_prepared, housing_labels, scoring="neg_mean_squared_error", cv=10)
forest_rmse_scores = np.sqrt(-forest_scores)
display_scores(forest_rmse_scores)
# -
forest_prediction = forest_reg.predict(housing_prepared)
forest_rmse = np.sqrt(mean_squared_error(forest_prediction, housing_labels))
forest_rmse
# #### Save models
import joblib  # sklearn.externals.joblib was removed in newer scikit-learn versions
joblib.dump(forest_reg, 'forest_reg.pkl')
forest_reg_loaded = joblib.load('forest_reg.pkl')
np.sqrt(mean_squared_error(forest_reg_loaded.predict(housing_prepared), housing_labels))
# ### 2.7.1 Grid Search
from sklearn.model_selection import GridSearchCV
param_grid = [
{'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]},
{'bootstrap': [False], 'n_estimators': [3, 10, 30], 'max_features': [2,4,6,8]}
]
forest_reg = RandomForestRegressor()
grid_search = GridSearchCV(forest_reg, param_grid, cv=5, scoring="neg_mean_squared_error")
grid_search.fit(housing_prepared, housing_labels)
grid_search.best_params_
grid_search.best_estimator_
cvres = grid_search.cv_results_
for mean_score, params in zip(cvres["mean_test_score"], cvres["params"]):
print(np.sqrt(-mean_score), params)
# ### 2.7.4 Analyze the best models and their errors
feature_importances = grid_search.best_estimator_.feature_importances_
feature_importances
extra_attribs = ['rooms_per_hhold', 'pop_per_hhold']
cat_one_hot_attribs = list(encoder.classes_)
cat_one_hot_attribs
attributes = num_attribs + extra_attribs + cat_one_hot_attribs
attributes, len(attributes)
sorted(zip(feature_importances, attributes), reverse=True)
# ### 2.7.5 Evaluate Your System on the Test Set
# +
final_model = grid_search.best_estimator_
X_test = strat_test_set.drop("median_house_value", axis=1)
y_test = strat_test_set.median_house_value.copy()
X_test_prepared = full_pipeline.transform(X_test)
final_predictions = final_model.predict(X_test_prepared)
final_mse = mean_squared_error(final_predictions, y_test)
final_rmse = np.sqrt(final_mse)
final_rmse
| HandsOnML/ch02/Housing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="hc4hWMaL3aKJ"
import pandas as pd
import spacy, json
from sklearn import model_selection
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras
from tensorflow.keras import layers, models
# + id="kF0xF2bIGlBA"
df = pd.read_json(open("data.json", "r", encoding="utf8"))
with open('themes.json', 'r') as f:
theme_codes = json.load(f)
# + id="6xob1V-IoUpr"
metrics = pd.read_csv('metrics.csv', index_col=0)
# + colab={"base_uri": "https://localhost:8080/", "height": 639} id="XaigsoMuoYOH" outputId="96a196ca-14c9-49cd-d350-d90aa5215d9e"
metrics.sort_values(by=['val_auc'], ascending=False).head(20)
# + id="GFBUrLKMGkzV"
texts_len = df['text'].apply(len)
df.drop(df[texts_len<50].index, inplace=True)
# + id="fpgSIfUzGkip"
max_features = 5001 # maximum number of words in the vocabulary (5000)
max_len = 150 # maximum sequence length
# + id="TCfGpuwyGkhK"
joined_text = df['title'] + df['text']
X = keras.preprocessing.sequence.pad_sequences(list(joined_text), maxlen=max_len, padding='post')
# + id="ZhTFE6P3qIpx"
# + colab={"base_uri": "https://localhost:8080/"} id="_jVg00mCGke-" outputId="997dde84-3ac3-4ab4-f885-b87d8e020bc3"
embedding_dim = 128
bin_mod = keras.models.Sequential([
keras.layers.Embedding(input_dim=max_features,
output_dim=embedding_dim,
input_length=max_len),
keras.layers.Flatten(),
keras.layers.Dense(2000,activation='relu'),
keras.layers.Dense(500,activation='relu'),
keras.layers.Dense(100,activation='relu'),
keras.layers.Dense(1, activation='sigmoid')
])
bin_mod.compile(optimizer='nadam',
loss='binary_crossentropy',
metrics=['binary_accuracy'])
bin_mod.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="tslfeZ8VzyOj" outputId="dfb8bd6f-7941-4cd9-eaac-a9b25d761b8a"
len(df['themes'][0])
# + id="Pm8Vp99I1poT"
df = df.reset_index(drop=True)
# + id="SI9gkiNgzyMN"
frequencies = []
for i in range(93):
t = 0
print(i, ' theme')
for j in range(df.shape[0]):
t += df['themes'][j][i]
frequencies.append(t)
print(t)
# + id="4t7f-aNQzyJt"
frequencies = np.array(frequencies)
# + colab={"base_uri": "https://localhost:8080/"} id="ijVrnhhIzyCi" outputId="f1b630b7-6eb1-42db-c1c0-a8bd5a0850c1"
frequencies[(-frequencies).argsort()[:15]]
# + colab={"base_uri": "https://localhost:8080/"} id="Q8MnzlY74Kj0" outputId="ecc00e52-edaa-42f4-804f-e03e4241bb23"
for i in (-frequencies).argsort()[:15]:
for key, value in theme_codes.items():
if value == i:
print(key)
# + colab={"base_uri": "https://localhost:8080/", "height": 195} id="sltkdSzK4KfP" outputId="283f888f-d76c-4035-efe9-6284a170a96e"
metrics.loc[np.isin(metrics['theme'], ['nature', 'family', 'love', 'body', 'animals'])]
# + id="wasidxSe3dak"
['nature', 'family', 'love', 'body', 'animals']
# + id="SLQO6SK-Gk1p"
themes_to_predict = ['nature', 'family', 'love', 'body', 'animals']
# + colab={"base_uri": "https://localhost:8080/", "height": 940} id="7WckbtH0sRGL" outputId="2a538a37-bdc9-43e9-a3f0-f2c098caad74"
epochs = 1
result = {}
preds = {}
bin_models = {}
for theme in themes_to_predict:
print('Theme #:', theme)
theme_index = theme_codes[theme]
Y = np.array([row['themes'][theme_index] for index, row in df.iterrows()])
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=0.1, random_state=42)
bin_mod_ = models.clone_model(bin_mod)
bin_mod_.compile(optimizer='nadam',
loss='binary_crossentropy',
metrics=['binary_accuracy'])
bin_mod_.fit(np.array(X_train), np.array(Y_train),
#batch_size=128,
validation_data=(np.array(X_test),np.array(Y_test)),
epochs=epochs)
score = bin_mod_.evaluate(np.array(X_test), np.array(Y_test))
print("Test Score Model1:", score[0])
print("Test Accuracy Model1:", score[1])
y_pred = bin_mod_.predict(X_test)
bin_models['bin_model_' + theme] = bin_mod_
#class_names = ['out of cat ', 'in cat']
#cm = confusion_matrix(Y_test, np.rint(y_pred))
#disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=class_names)
#disp.plot()
#plt.show()
pd.Series(y_pred.flatten()).hist(bins=100, figsize=(14, 8)).figure
preds[theme] = y_pred
result[theme] = score[1]
print(result)
# + colab={"base_uri": "https://localhost:8080/"} id="tE8z7sbYsQ-3" outputId="dab51851-ae9d-4030-a2ff-5d2878c52024"
result2 = {}
preds2 = {}
for theme in themes_to_predict:
print('Theme #:', theme)
theme_index = theme_codes[theme]
Y = np.array([row['themes'][theme_index] for index, row in df.iterrows()])
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=0.1, random_state=42)
bin_mod = bin_models['bin_model_' + theme]
#print(np.array(X_test))
score = bin_mod.evaluate(np.array(X_test), np.array(Y_test))
print("Test Score Model1:", score[0])
print("Test Accuracy Model1:", score[1])
y_pred = bin_mod.predict(X_test)
preds2[theme] = y_pred
result2[theme] = score[1]
print(result2)
# + id="zEpkPJiRgxjO"
for theme in themes_to_predict:
print(np.where(preds2[theme] >= 0.4))
# + colab={"base_uri": "https://localhost:8080/"} id="Pp9m0X_8hQ8R" outputId="139e5d29-413b-433a-d3a3-987f97c9435a"
preds2['nature'][[17, 23, 24, 28, 32, 37, 40, 45, 61, 62, 64]]
# + colab={"base_uri": "https://localhost:8080/"} id="ShTAGK44sQ4o" outputId="a705f71c-634a-49cd-df18-394e5008fd39"
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, df['themes'], test_size=0.1, random_state=42)
for theme in themes_to_predict:
print('Theme #:', theme)
theme_index = theme_codes[theme]
y_test = np.array([row[theme_index] for index, row in Y_test.items()])
bin_mod = bin_models['bin_model_' + theme]
#print(np.array(X_test))
score = bin_mod.evaluate(np.array(X_test),
np.array(y_test))
print("Test Score Model1:", score[0])
print("Test Accuracy Model1:", score[1])
y_pred = bin_mod.predict(X_test)
# + colab={"base_uri": "https://localhost:8080/"} id="G9-ZLpbZNoTx" outputId="78a52e64-2a24-45b6-862e-3168ecf08d80"
bin_models
# + id="cfPUQoE6NoRX"
# + colab={"base_uri": "https://localhost:8080/"} id="CF9KQCskNoPD" outputId="50e5a775-8538-4ab8-bcfc-c04bd6ffbdfc"
X_test[0]
print(y_test[15])
# + id="Ub7PKbvgd79k"
indices = np.where(y_test == 1)[0]
# + colab={"base_uri": "https://localhost:8080/"} id="wH2UJV6EeIUq" outputId="1ec2319a-2d17-4506-8bcc-ef29874d33ab"
y_test[indices]
# + id="_rLKBCS-sQxp"
with np.printoptions(threshold=np.inf):
print(y_test)
# + colab={"base_uri": "https://localhost:8080/"} id="vSSp9v18SpVo" outputId="6c074a12-1d10-4de3-e2e7-534a4694368f"
X_test[15]
# + colab={"base_uri": "https://localhost:8080/"} id="5GUAsAQiTVeN" outputId="683a8d2e-1663-4c00-8d51-2b2c75879dd0"
X_test.shape
# + colab={"base_uri": "https://localhost:8080/"} id="N5QOGuIqTXf4" outputId="8a4ea906-2c83-476b-fdc5-8070923b182e"
X_test[15].shape
# + colab={"base_uri": "https://localhost:8080/"} id="rH9qVfO6TiVo" outputId="d379aa3c-1426-4ddf-cf6b-dba6e03c744b"
np.reshape(X_test[15], (1, 150))
# + colab={"base_uri": "https://localhost:8080/"} id="lau1MDyZew_q" outputId="a214e588-f8c4-4dfc-b894-692ad3e38d3a"
X_test[indices]
# + colab={"base_uri": "https://localhost:8080/"} id="JTveA9xXsQqB" outputId="baecb40e-38aa-4565-bdcb-0b1931cb1cfc"
for name, model in bin_models.items():
print(name)
print(np.where(model.predict(X_test) >= 0.5))
# + id="qObZruQ6mBp3"
# + id="kEgKdhVOde6u"
# + id="TUwnC8uZdvyR"
df = pd.read_json(open("data.json", "r", encoding="utf8"))
with open('themes.json', 'r') as f:
theme_codes = json.load(f)
# + id="PW8PwJWIdvvQ"
texts_len = df['text'].apply(len)
df.drop(df[texts_len<50].index, inplace=True)
# + id="32VagpnJdvsi"
themes_to_predict = ['nature', 'family', 'love', 'body', 'animals']
# + id="Qptlfd-TdvqC"
max_features = 10000 # maximum number of words in the vocabulary
max_len = 150 # maximum sequence length
# + id="<KEY>"
joined_text = df['title'] + df['text']
X = keras.preprocessing.sequence.pad_sequences(list(joined_text), maxlen=max_len, padding='post')
# + colab={"base_uri": "https://localhost:8080/"} id="wSFaO_CJGkbQ" outputId="d70b0e75-3bd2-4de2-91ad-d16e54668740"
embedding_dim = 64
mult_mod = keras.models.Sequential([
keras.layers.Embedding(input_dim=max_features,
output_dim=embedding_dim,
input_length=max_len),
keras.layers.Flatten(),
keras.layers.Dense(2000, activation='relu'),
keras.layers.Dropout(0.3),
keras.layers.Dense(500, activation='relu'),
keras.layers.Dropout(0.3),
keras.layers.Dense(100, activation='relu'),
keras.layers.Dense(5, activation='sigmoid')
])
mult_mod.compile(optimizer='nadam',
loss='categorical_crossentropy',
metrics=['categorical_accuracy'])
mult_mod.summary()
# + [markdown] id="F8ue_TT1KQ9A"
# Create list with indices for themes to predict
# + colab={"base_uri": "https://localhost:8080/"} id="Tn0-6lHEmb7I" outputId="99822ea3-4365-4282-8692-dce08d68c12e"
theme_indices = []
for theme in themes_to_predict:
    print('Theme:', theme)
    theme_index = theme_codes[theme]
    theme_indices.append(theme_index)
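The loop above can equivalently be written as a single list comprehension; a minimal sketch with a hypothetical `theme_codes` mapping standing in for the one loaded from `themes.json`:

```python
# hypothetical mapping, standing in for the real theme_codes dict
theme_codes = {'nature': 0, 'family': 1, 'love': 2, 'body': 3, 'animals': 4}
themes_to_predict = ['nature', 'family', 'love', 'body', 'animals']

# same result as the explicit loop above
theme_indices = [theme_codes[theme] for theme in themes_to_predict]
print(theme_indices)  # [0, 1, 2, 3, 4]
```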
# + colab={"base_uri": "https://localhost:8080/"} id="LdOc0seiGkX-" outputId="de3c0a43-f34f-4bf9-eaad-cc866cc2b736"
Y = np.array([[row['themes'][theme_index] for theme_index in theme_indices] for index, row in df.iterrows()])
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(X, Y, test_size=0.1, random_state=42)
epochs = 5
mult_mod.fit(np.array(X_train), np.array(Y_train),
#batch_size=128,
validation_data=(np.array(X_test),np.array(Y_test)),
epochs=epochs)
# + colab={"base_uri": "https://localhost:8080/"} id="mm4zVvQCGkU_" outputId="63f56781-1a97-486d-c4d3-ebddd2825551"
X_test[15]
# + colab={"base_uri": "https://localhost:8080/"} id="HgznuQL0-Hz6" outputId="c8eab3e6-02fe-4452-d4f6-34979928d6cb"
Y_test[13:15]
# + colab={"base_uri": "https://localhost:8080/"} id="ch7Z7X1k_L06" outputId="4ec76c6f-2898-4d39-b73a-f603fc2d5579"
X_test.shape
# + colab={"base_uri": "https://localhost:8080/"} id="-VWkrZvc_OMr" outputId="5d5b7c8c-8961-41dc-816d-f90b4c26136b"
X_test[15].shape
# + [markdown] id="6hk7R0jWKAdw"
# This shows that the model outputs only zeros (likely because `categorical_crossentropy` was paired with independent sigmoid outputs; `binary_crossentropy` is the usual loss for multi-label targets)
# + colab={"base_uri": "https://localhost:8080/"} id="3KMYaxrF_fIr" outputId="5166241b-a7c4-4b89-cd63-8355996a3e33"
np.unique(mult_mod.predict(X_test))
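The `np.unique` check above is a quick way to inspect the range of raw scores. The 0.5 thresholding applied to the binary models earlier can be sketched with plain numpy on a hypothetical score matrix (not real model output):

```python
import numpy as np

# hypothetical sigmoid scores for 3 samples x 5 themes
scores = np.array([[0.10, 0.70, 0.40, 0.90, 0.20],
                   [0.30, 0.20, 0.10, 0.40, 0.60],
                   [0.05, 0.10, 0.20, 0.30, 0.10]])

# multi-label prediction: each theme is thresholded independently
preds = (scores >= 0.5).astype(int)
print(preds.tolist())  # [[0, 1, 0, 1, 0], [0, 0, 0, 0, 1], [0, 0, 0, 0, 0]]

# if every score stays below 0.5, the prediction matrix is all zeros,
# which is the symptom observed above
print(np.unique((scores * 0.4 >= 0.5).astype(int)))  # [0]
```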
# + id="FZ6Gybu_GkOF"
# + id="5h_9pR8eNJxB"
# + id="8XZWJOUeNJqj"
# + id="yEQ4h3y3NJhn"
# + id="tAwRKXNPNJZ3"
# + id="8-tGPWFONJRX"
# + id="omFMs4JWNJKh"
# + id="qwFChvw6NJEd"
# + id="YAv1vM21NI_A"
# + id="RXqtUOkFoCEC"
| train/danylo/Choosing_themes_and_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
print('Hello There, Python!')
print("One")
print("Two")
print("Three")
print("Michael", end=' ')
print("Fudge")
print('Mike','Fudge')
first_name = 'Tony'
last_name = 'Stark'
print(first_name, last_name)
print('"')
print("\"")
print("\\")
print("Mike's House!!")
print("\n1\n2\n3")
print("Mike\tFudge\tAwesome")
input()
input("What is your name?")
your_name = input("What is your name?")
print("Hello", your_name)
your_name
x
print("hi bob")
name = input("What is your name: ")
print("hi", name)
x = input('Enter Social Security Number')
greeting = input("Enter a greeting ")
name = input("Enter your name ")
print(greeting, name)
age = 50
age = age + 1
| content/lessons/02/Watch-Me-Code/in-class-watch-me-code.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + nbsphinx="hidden"
#Please execute this cell
import sys
sys.path.append('../')
import jupman
from soft import draw_mat
from soft import draw_nx
# %matplotlib inline
# -
# # Visualization 1
#
# ## [Download exercises zip](../_static/generated/visualization.zip)
#
# [Browse files online](https://github.com/DavidLeoni/softpython-en/tree/master/visualization)
#
#
# ## Introduction
#
# We will review the famous library Matplotlib which allows to display a variety of charts, and it is the base of many other visualization libraries.
#
#
# ### What to do
#
# - unzip exercises in a folder, you should get something like this:
#
# ```
#
# visualization
# visualization1.ipynb
# visualization1-sol.ipynb
# visualization2-chal.ipynb
# visualization-images.ipynb
# visualization-images-sol.ipynb
# jupman.py
# ```
#
# <div class="alert alert-warning">
#
# **WARNING**: to correctly visualize the notebook, it MUST be in an unzipped folder !
# </div>
#
#
# - open Jupyter Notebook from that folder. Two things should open, first a console and then browser. The browser should show a file list: navigate the list and open the notebook `visualization/visualization1.ipynb`
#
# <div class="alert alert-warning">
#
# **WARNING 2**: DO NOT use the _Upload_ button in Jupyter, instead navigate in Jupyter browser to the unzipped folder !
# </div>
#
#
# - Go on reading that notebook, and follow the instructions inside.
#
#
# Shortcut keys:
#
# - to execute Python code inside a Jupyter cell, press `Control + Enter`
# - to execute Python code inside a Jupyter cell AND select next cell, press `Shift + Enter`
# - to execute Python code inside a Jupyter cell AND create a new cell afterwards, press `Alt + Enter`
# - If the notebooks look stuck, try to select `Kernel -> Restart`
# ## First example
#
# Let's start with a very simple plot:
# +
# this is *not* a python command, it is a Jupyter-specific magic command,
# to tell jupyter we want the graphs displayed in the cell outputs
# %matplotlib inline
# imports matplotlib
import matplotlib.pyplot as plt
# we can give coordinates as simple numberlists
# these are (x, y) pairs for the function y = 2 * x
xs = [1, 2, 3, 4, 5, 6]
ys = [2, 4, 6, 8,10,12]
plt.plot(xs, ys)
# we can add this after plot call, it doesn't matter
plt.title("my function")
plt.xlabel('x')
plt.ylabel('y')
# prevents showing '<matplotlib.text.Text at 0x7fbcf3c4ff28>' in Jupyter
plt.show()
# -
# ### Plot style
#
# To change the way the line is displayed, you can set dot styles with another string parameter. For example, to display red dots, you would add the string `ro`, where `r` stands for red and `o` stands for dot.
# +
# %matplotlib inline
import matplotlib.pyplot as plt
xs = [1, 2, 3, 4, 5, 6]
ys = [2, 4, 6, 8,10,12]
plt.plot(xs, ys, 'ro') # NOW USING RED DOTS
plt.title("my function")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# -
# ### x power 2 exercise
#
# Try to display the function `y = x**2` (x power 2) using green dots and for integer xs going from -10 to 10
# +
# write here the solution
# +
# SOLUTION
# %matplotlib inline
import matplotlib.pyplot as plt
xs = range(-10, 10)
ys = [x**2 for x in xs ]
plt.plot(xs, ys, 'go')
plt.title("x squared")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# -
# ### Axis limits
#
# If you want to change the x axis, you can use `plt.xlim`:
# +
# %matplotlib inline
import matplotlib.pyplot as plt
xs = [1, 2, 3, 4, 5, 6]
ys = [2, 4, 6, 8,10,12]
plt.plot(xs, ys, 'ro')
plt.title("my function")
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(-5, 10) # SETS LOWER X DISPLAY TO -5 AND UPPER TO 10
plt.ylim(-7, 26) # SETS LOWER Y DISPLAY TO -7 AND UPPER TO 26
plt.show()
# -
# ### Axis size
# +
# %matplotlib inline
import matplotlib.pyplot as plt
xs = [1, 2, 3, 4, 5, 6]
ys = [2, 4, 6, 8,10,12]
fig = plt.figure(figsize=(10,3)) # width: 10 inches, height 3 inches
plt.plot(xs, ys, 'ro')
plt.title("my function")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# -
# ### Changing tick labels
#
# You can also change labels displayed on ticks on axis with `plt.xticks` and `plt.yticks` functions:
#
# **Note:** instead of `xticks` you might directly use [categorical variables](https://matplotlib.org/gallery/lines_bars_and_markers/categorical_variables.html) IF you have matplotlib >= 2.1.0
#
# Here we use `xticks` as sometimes you might need to fiddle with them anyway
# +
# %matplotlib inline
import matplotlib.pyplot as plt
xs = [1, 2, 3, 4, 5, 6]
ys = [2, 4, 6, 8,10,12]
plt.plot(xs, ys, 'ro')
plt.title("my function")
plt.xlabel('x')
plt.ylabel('y')
# FIRST NEEDS A SEQUENCE WITH THE POSITIONS, THEN A SEQUENCE OF SAME LENGTH WITH LABELS
plt.xticks(xs, ['a', 'b', 'c', 'd', 'e', 'f'])
plt.show()
# -
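As mentioned in the note above, with matplotlib >= 2.1.0 you can skip `xticks` entirely and pass the category strings directly as x values; a minimal sketch (using the non-interactive `Agg` backend so it also runs outside Jupyter):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, safe without a display
import matplotlib.pyplot as plt

labels = ['a', 'b', 'c', 'd', 'e', 'f']
ys = [2, 4, 6, 8, 10, 12]

# strings as x values: matplotlib treats them as ordered categories
line, = plt.plot(labels, ys, 'ro')
plt.title("my function with categorical x")
plt.close()
```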
# ## Introducing numpy
# For functions involving reals, vanilla Python starts showing its limits and it's better to switch to the numpy library. Matplotlib can easily handle both vanilla Python sequences like lists and numpy arrays.
# Let's see an example without numpy and one with it.
# ### Example without numpy
#
# If we only use _vanilla_ Python (that is, Python without extra libraries like numpy), to display the function `y = 2x + 1` we can come up with a solution like this
# +
# %matplotlib inline
import matplotlib.pyplot as plt
xs = [x*0.1 for x in range(10)] # notice we can't do a range with float increments
# (and it would also introduce rounding errors)
ys = [(x * 2) + 1 for x in xs]
plt.plot(xs, ys, 'bo')
plt.title("y = 2x + 1 with vanilla python")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# -
# ### Example with numpy
#
# With numpy, we have at our disposal several new methods for dealing with arrays.
#
# First we can generate an interval of values with one of these methods.
#
# Since Python's `range` does not allow float increments, we can use `np.arange`:
# +
import numpy as np
xs = np.arange(0,1.0,0.1)
xs
# -
# Equivalently, we could use `np.linspace`:
# +
xs = np.linspace(0,0.9,10)
xs
# -
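As a quick sanity check, the `arange` and `linspace` calls above produce (numerically) the same array; `linspace` is often the safer choice with floats because it fixes the number of points instead of accumulating a float step:

```python
import numpy as np

xs_arange = np.arange(0, 1.0, 0.1)     # step-based: 0.0, 0.1, ..., 0.9
xs_linspace = np.linspace(0, 0.9, 10)  # count-based: 10 evenly spaced points

# equal up to float rounding
print(np.allclose(xs_arange, xs_linspace))  # True
```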
# Numpy allows us to easily write functions on arrays in a natural manner. For example, to calculate `ys` we can now do like this:
# +
ys = 2*xs + 1
ys
# -
# Let's put everything together:
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
xs = np.linspace(0,0.9,10) # left end: 0 *included* right end: 0.9 *included* number of values: 10
ys = 2*xs + 1
plt.plot(xs, ys, 'bo')
plt.title("y = 2x + 1 with numpy")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# -
# ### y = sin(x) + 3 exercise
#
# ✪✪✪ Try to display the function `y = sin(x) + 3` for x at pi/4 intervals, starting from 0. Use exactly 8 ticks.
#
# **NOTE**: 8 is the _number of x ticks_ (telecom people would use the term 'samples'), **NOT** the x of the last tick !!
#
# a) try to solve it without using numpy. For pi, use constant `math.pi` (first you need to import `math` module)
#
# b) try to solve it with numpy. For pi, use constant `np.pi` (which is exactly the same as `math.pi`)
#
# b.1) solve it with `np.arange`
#
# b.2) solve it with `np.linspace`
#
# c) For each tick, use the label sequence `"0π/4", "1π/4" , "2π/4", "3π/4" , "4π/4", "5π/4", ....` . Obviously writing them by hand is easy, try instead to devise a method that works for any number of ticks. What is changing in the sequence? What is constant? What is the type of the part changes ? What is final type of the labels you want to obtain ?
#
# d) If you are in the mood, try to display them better like 0, π/4 , π/2 π, 3π/4 , π, 5π/4 possibly using Latex (requires some search, [this example](https://stackoverflow.com/a/40642200) might be a starting point)
#
# **NOTE**: Latex often involves the usage of the `\` bar, like in `\frac{2,3}`. If we use it directly, Python will interpret `\f` as a special character and will not send to the Latex processor the string we meant:
'\frac{2,3}'
# One solution would be to double the slashes, like this:
'\\frac{2,3}'
# An even better one is to prepend the string with the `r` character, which allows to write slashes only once:
r'\frac{2,3}'
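To see the difference between the three spellings above, compare the actual characters each string stores:

```python
s_plain = '\frac{2,3}'    # \f is interpreted as the form-feed character: broken
s_double = '\\frac{2,3}'  # escaped backslash: stores \frac{2,3}
s_raw = r'\frac{2,3}'     # raw string: also stores \frac{2,3}

print(s_double == s_raw)  # True: same 10 characters
print(len(s_plain))       # 9: the backslash and the f collapsed into one char
```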
# +
# write here solution for a) y = sin(x) + 3 with vanilla python
# +
# SOLUTION a) y = sin(x) + 3 with vanilla python
# %matplotlib inline
import matplotlib.pyplot as plt
import math
xs = [x * (math.pi)/4 for x in range(8)]
ys = [math.sin(x) + 3 for x in xs]
plt.plot(xs, ys)
plt.title("a) solution y = sin(x) + 3 with vanilla python ")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# +
# write here solution b.1) y = sin(x) + 3 with numpy, arange
# +
# SOLUTION b.1) y = sin(x) + 3 with numpy, linspace
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# left end = 0 right end = 7/4 pi 8 points
# notice numpy.pi is exactly the same as vanilla math.pi
xs = np.arange(0, # included
8 * np.pi/4, # *not* included (we put 8, as we actually want 7 to be included)
np.pi/4 )
ys = np.sin(xs) + 3 # notice we now operate on arrays. All numpy functions can operate on them
plt.plot(xs, ys)
plt.title("b.1 solution y = sin(x) + 3 with numpy arange")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# +
# write here solution b.2) y = sin(x) + 3 with numpy, linspace
# +
# SOLUTION b.2) y = sin(x) + 3 with numpy, linspace
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# left end = 0 right end = 7/4 pi 8 points
# notice numpy.pi is exactly the same as vanilla math.pi
xs = np.linspace(0, (np.pi/4) * 7 , 8)
ys = np.sin(xs) + 3 # notice we now operate on arrays. All numpy functions can operate on them
plt.plot(xs, ys)
plt.title("b2 solution y = sin(x) + 3 with numpy , linspace")
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# -
# write here solution c) y = sin(x) + 3 with numpy and pi xlabels
# +
# SOLUTION c) y = sin(x) + 3 with numpy and pi xlabels
# %matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
xs = np.linspace(0, (np.pi/4) * 7 , 8) # left end = 0 right end = 7/4 pi 8 points
ys = np.sin(xs) + 3 # notice we now operate on arrays. All numpy functions can operate on them
plt.plot(xs, ys)
plt.title("c) solution y = sin(x) + 3 with numpy and pi xlabels")
plt.xlabel('x')
plt.ylabel('y')
# FIRST NEEDS A SEQUENCE WITH THE POSITIONS, THEN A SEQUENCE OF SAME LENGTH WITH LABELS
plt.xticks(xs, ["%sπ/4" % x for x in range(8) ])
plt.show()
# -
# ### Showing degrees per node
# Going back to the indegrees and outdegrees as seen in [Relational data - Simple statistics paragraph](https://en.softpython.org/relational/relational1-intro-sol.html#Simple-statistics), we will try to study the distributions visually.
#
# Let's take an example networkx DiGraph:
# +
import networkx as nx
G1=nx.DiGraph({
'a':['b','c'],
'b':['b','c', 'd'],
'c':['a','b','d'],
'd':['b', 'd']
})
draw_nx(G1)
# -
# ### indegree per node
#
# ✪✪ Display a plot for graph `G` where the xtick labels are the nodes, and the y is the indegree of those nodes.
#
# **Note:** instead of `xticks` you might directly use [categorical variables](https://matplotlib.org/gallery/lines_bars_and_markers/categorical_variables.html) IF you have matplotlib >= 2.1.0
#
# Here we use `xticks` as sometimes you might need to fiddle with them anyway
#
#
# To get the nodes, you can use the `G1.nodes()` function:
G1.nodes()
# It gives back a `NodeView` which is not a list, but you can still iterate through it with a `for ... in` loop:
for n in G1.nodes():
    print(n)
# Also, you can get the indegree of a node with
G1.in_degree('b')
# write here the solution
# +
# SOLUTION
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(G1.number_of_nodes())
ys_in = [G1.in_degree(n) for n in G1.nodes() ]
plt.plot(xs, ys_in, 'bo')
plt.ylim(0,max(ys_in) + 1)
plt.xlim(0,max(xs) + 1)
plt.title("G1 Indegrees per node solution")
plt.xticks(xs, G1.nodes())
plt.xlabel('node')
plt.ylabel('indegree')
plt.show()
# -
# ## Bar plots
#
# The previous plot with dots doesn't look so good - we might try a bar plot instead. First look at this example, then proceed with the next exercise
# +
import numpy as np
import matplotlib.pyplot as plt
xs = [1,2,3,4]
ys = [7,5,8,2 ]
plt.bar(xs, ys,
0.5, # the width of the bars
color='green', # someone suggested the default blue color is depressing, so let's put green
align='center') # bars are centered on the xtick
plt.show()
# -
# ### indegree per node bar plot
#
# ✪✪ Display a [bar plot](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.bar.html) for graph `G1` where the xtick labels are the nodes, and the y is the indegree of those nodes.
# +
# write here
# +
# SOLUTION
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(G1.number_of_nodes())
ys_in = [G1.in_degree(n) for n in G1.nodes() ]
plt.bar(xs, ys_in, 0.5, align='center')
plt.title("G1 Indegrees per node solution")
plt.xticks(xs, G1.nodes())
plt.xlabel('node')
plt.ylabel('indegree')
plt.show()
# -
# ### indegree per node sorted alphabetically
#
# ✪✪ Display the same bar plot as before, but now sort nodes alphabetically.
#
# NOTE: you cannot run the `.sort()` method on the result given by `G1.nodes()`, because nodes in a networkx graph have no inherent order by default. To use `.sort()` you first need to convert the result to a `list` object.
# +
# SOLUTION
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(G1.number_of_nodes())
xs_labels = list(G1.nodes())
xs_labels.sort()
ys_in = [G1.in_degree(n) for n in xs_labels ]
plt.bar(xs, ys_in, 0.5, align='center')
plt.title("G1 Indegrees per node, sorted labels solution")
plt.xticks(xs, xs_labels)
plt.xlabel('node')
plt.ylabel('indegree')
plt.show()
# -
# write here
# ### indegree per node sorted
#
# ✪✪✪ Display the same bar plot as before, but now sort nodes according to their indegree. This is more challenging, to do it you need to use some sort trick. First read the [Python documentation](https://docs.python.org/3/howto/sorting.html#key-functions) and then:
#
# 1. create a list of couples (list of tuples) where each tuple is the node identifier and the corresponding indegree
# 2. sort the list by using the second value of the tuples as a key.
#
#
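The two steps can be tried first on a toy list of couples (not the real `G1` values):

```python
# toy (node, indegree) couples
couples = [('a', 3), ('b', 1), ('c', 2)]

# sort in place, using the second element of each tuple as the key
couples.sort(key=lambda c: c[1])
print(couples)  # [('b', 1), ('c', 2), ('a', 3)]
```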
# write here
# +
# SOLUTION
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(G1.number_of_nodes())
coords = [(v, G1.in_degree(v)) for v in G1.nodes() ]
coords.sort(key=lambda c: c[1])
ys_in = [c[1] for c in coords]
plt.bar(xs, ys_in, 0.5, align='center')
plt.title("G1 Indegrees per node, sorted by indegree solution")
plt.xticks(xs, [c[0] for c in coords])
plt.xlabel('node')
plt.ylabel('indegree')
plt.show()
# -
#
# ### out degrees per node sorted
#
# ✪✪✪ Do the same graph as before for the outdegrees.
# You can get the outdegree of a node with:
G1.out_degree('b')
# +
# SOLUTION
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(G1.number_of_nodes())
coords = [(v, G1.out_degree(v)) for v in G1.nodes() ]
coords.sort(key=lambda c: c[1])
ys_out = [c[1] for c in coords]
plt.bar(xs, ys_out, 0.5, align='center')
plt.title("G1 Outdegrees per node sorted solution")
plt.xticks(xs, [c[0] for c in coords])
plt.xlabel('node')
plt.ylabel('outdegree')
plt.show()
# -
# write here
# ### degrees per node
#
# ✪✪✪ We might check as well the sorted degrees per node, intended as the sum of in_degree and out_degree. To get the sum, use `G1.degree(node)` function.
# +
# write here the solution
# +
# SOLUTION
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(G1.number_of_nodes())
coords = [(v, G1.degree(v)) for v in G1.nodes() ]
coords.sort(key=lambda c: c[1])
ys_deg = [c[1] for c in coords]
plt.bar(xs, ys_deg, 0.5, align='center')
plt.title("G1 degrees per node sorted SOLUTION")
plt.xticks(xs, [c[0] for c in coords])
plt.xlabel('node')
plt.ylabel('degree')
plt.show()
# -
# ✪✪✪✪ **EXERCISE**: Look at [this example](https://matplotlib.org/gallery/lines_bars_and_markers/barchart.html#sphx-glr-gallery-lines-bars-and-markers-barchart-py), and make a double bar chart sorting nodes by their _total_ degree. To do so, in the tuples you will need `vertex`, `in_degree`, `out_degree` and also `degree`.
# write here
# +
# SOLUTION
import numpy as np
import matplotlib.pyplot as plt
xs = np.arange(G1.number_of_nodes())
coords = [(v, G1.degree(v), G1.in_degree(v), G1.out_degree(v) ) for v in G1.nodes() ]
coords.sort(key=lambda c: c[1])
ys_deg = [c[1] for c in coords]
ys_in = [c[2] for c in coords]
ys_out = [c[3] for c in coords]
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(xs - width/2, ys_in, width,
color='SkyBlue', label='indegrees')
rects2 = ax.bar(xs + width/2, ys_out, width,
color='IndianRed', label='outdegrees')
# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_title('G1 in and out degrees per node SOLUTION')
ax.set_xticks(xs)
ax.set_xticklabels([c[0] for c in coords])
ax.legend()
plt.show()
# -
# ## Frequency histogram
# Now let's try to draw degree frequencies, that is, for each degree present in the graph we want to display a bar as high as the number of times that particular degree appears.
#
# To do so, we will need a matplotlib histogram; see the [documentation](https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html)
#
# We will need to tell matplotlib how many columns we want, which in histogram terms are called _bins_. We also need to give the histogram a series of numbers so it can count how many times each number occurs. Let's consider this graph `G2`:
# +
import networkx as nx
G2=nx.DiGraph({
'a':['b','c'],
'b':['b','c', 'd'],
'c':['a','b','d'],
'd':['b', 'd','e'],
'e':[],
'f':['c','d','e'],
'g':['e','g']
})
draw_nx(G2)
# -
# If we take the degree sequence of `G2` we get this:
# +
degrees_G2 = [G2.degree(n) for n in G2.nodes()]
degrees_G2
# -
# We see 3 appears four times, 6 twice, and 7 once.
#
# Let's try to determine a good number for the bins. First we can check the boundaries our x axis should have:
min(degrees_G2)
max(degrees_G2)
# So our histogram's x axis must go at least from 3 to 7. If we want integer columns (bins), we need ticks at least for 3,4,5,6,7. For a precise display, when we have integer x it is best to also manually provide the sequence of bin edges, remembering it should start from the minimum _included_ (in our case, 3) and arrive at the maximum + 1 _included_ (in our case, 7 + 1 = 8)
#
# **NOTE**: precise histogram drawing can be quite tricky, please do read [this StackOverflow post](https://stackoverflow.com/a/27084005) for more details about it.
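Concretely, the rule above can be checked with `np.histogram`, which does the same counting as `plt.hist` without drawing anything (degree values as computed for `G2`, remembering that in a DiGraph a self-loop adds 1 to both the in- and out-degree):

```python
import numpy as np

degrees_G2 = [3, 7, 6, 6, 3, 3, 3]  # degree sequence of G2

# bin edges: from the minimum included (3) to the maximum + 1 included (8)
edges = range(min(degrees_G2), max(degrees_G2) + 2)

counts, bins = np.histogram(degrees_G2, bins=edges)
print(list(bins))    # [3, 4, 5, 6, 7, 8]
print(list(counts))  # [4, 0, 0, 2, 1]
```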
# +
import matplotlib.pyplot as plt
import numpy as np
degrees = [G2.degree(n) for n in G2.nodes()]
# add histogram
# in this case hist returns a tuple of three values
# we put in three variables
n, bins, columns = plt.hist(degrees_G2,
bins=range(3,9), # 3 *included* , 4, 5, 6, 7, 8 *included*
width=1.0) # graphical width of the bars
plt.xlabel('Degrees')
plt.ylabel('Frequency counts')
plt.title('G2 Degree distribution')
plt.xlim(0, max(degrees) + 2)
plt.show()
# -
# As expected we see 3 is counted four times, 6 twice, and 7 once.
#
#
#
# ✪✪✪ **EXERCISE**: Still, it would be visually better to align the x ticks to the middle of the bars with `xticks`, and also to make the graph tighter by setting the `xlim` appropriately. This is not always easy to do.
#
# Read carefully [this StackOverflow post](https://stackoverflow.com/a/27084005) and try do it by yourself.
#
# **NOTE**: set _one thing at a time_ and check that it works (i.e. first `xticks` and then `xlim`); doing everything at once might get quite confusing
# +
# write here the solution
# +
# SOLUTION
import matplotlib.pyplot as plt
import numpy as np
degrees = [G2.degree(n) for n in G2.nodes()]
# add histogram
min_x = min(degrees) # 3
max_x = max(degrees) # 7
bar_width = 1.0
# in this case hist returns a tuple of three values
# we put in three variables
n, bins, columns = plt.hist(degrees_G2,
bins= range(3,9), # 3 *included* to 9 *excluded*
# it is like the xs, but with one number more !!
# to understand why read this
# https://stackoverflow.com/questions/27083051/matplotlib-xticks-not-lining-up-with-histogram/27084005#27084005
width=bar_width) # graphical width of the bars
plt.xlabel('Degrees')
plt.ylabel('Frequency counts')
plt.title('G2 Degree distribution, tight graph SOLUTION')
xs = np.arange(min_x,max_x + 1) # 3 *included* to 8 *excluded*
# used numpy so we can later reuse it for float vector operations
plt.xticks(xs + bar_width / 2, # position of ticks
xs ) # labels of ticks
plt.xlim(min_x, max_x + 1) # 3 *included* to 8 *excluded*
plt.show()
# -
# ## Showing plots side by side
#
# You can display plots on a grid. Each cell in the grid is identified by a single number. For example, for a grid of two rows and three columns, you would have cells indexed like this:
#
# ```
# 1 2 3
# 4 5 6
# ```
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import math
xs = [1,2,3,4,5,6]
# cells:
# 1 2 3
# 4 5 6
plt.subplot(2, # 2 rows
3, # 3 columns
1) # plotting in first cell
ys1 = [x**3 for x in xs]
plt.plot(xs, ys1)
plt.title('first cell')
plt.subplot(2, # 2 rows
3, # 3 columns
            2) # plotting in second cell
ys2 = [2*x + 1 for x in xs]
plt.plot(xs,ys2)
plt.title('2nd cell')
plt.subplot(2, # 2 rows
3, # 3 columns
3) # plotting in third cell
ys3 = [-2*x + 1 for x in xs]
plt.plot(xs,ys3)
plt.title('3rd cell')
plt.subplot(2, # 2 rows
3, # 3 columns
4) # plotting in fourth cell
ys4 = [-2*x**2 for x in xs]
plt.plot(xs,ys4)
plt.title('4th cell')
plt.subplot(2, # 2 rows
3, # 3 columns
5) # plotting in fifth cell
ys5 = [math.sin(x) for x in xs]
plt.plot(xs,ys5)
plt.title('5th cell')
plt.subplot(2, # 2 rows
3, # 3 columns
6) # plotting in sixth cell
ys6 = [-math.cos(x) for x in xs]
plt.plot(xs,ys6)
plt.title('6th cell')
plt.show()
# -
# ### Graph models
# Let's study frequencies of some known network types.
# #### Erdős–Rényi model
#
# ✪✪ A simple graph model we can think of is the so-called [Erdős–Rényi model](https://en.wikipedia.org/wiki/Erd%C5%91s%E2%80%93R%C3%A9nyi_model): it is an _undirected_ graph with `n` nodes, where each pair of nodes is connected with probability `p`. In networkx, we can generate a random one by issuing this command:
G = nx.erdos_renyi_graph(10, 0.5)
# In the drawing, the absence of arrows confirms it is undirected:
draw_nx(G)
# Try plotting degree distribution for different values of `p` (0.1, 0.5, 0.9) with a fixed `n=1000`, putting them side by side on the same row. What does their distribution look like ? Where are they centered ?
#
# To avoid rewriting the same code again and again, define a `plot_erdos(n,p,j)` function to be called three times.
# +
# write here the solution
# +
# SOLUTION
import matplotlib.pyplot as plt
import numpy as np
def plot_erdos(n, p, j):
    G = nx.erdos_renyi_graph(n, p)
    plt.subplot(1, # 1 row
                3, # 3 columns
                j) # plotting in jth cell
    degrees = [G.degree(n) for n in G.nodes()]
    num_bins = 20
    n, bins, columns = plt.hist(degrees, num_bins, width=1.0)
    plt.xlabel('Degrees')
    plt.ylabel('Frequency counts')
    plt.title('p = %s' % p)
n = 1000
fig = plt.figure(figsize=(15,6)) # width: 15 inches, height 6 inches
plot_erdos(n, 0.1, 1)
plot_erdos(n, 0.5, 2)
plot_erdos(n, 0.9, 3)
print()
print(" Erdős–Rényi degree distribution SOLUTION")
plt.show()
# -
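As a sanity check on where those histograms are centered: in an Erdős–Rényi graph each node's degree is Binomial(n - 1, p), so the peak sits near p * (n - 1). A quick simulation with plain numpy (no networkx needed):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 0.5

# each node's degree counts successes over the other n - 1 possible neighbours
degrees = rng.binomial(n - 1, p, size=n)

# the sample mean lands very close to the theoretical center 499.5
print(abs(degrees.mean() - p * (n - 1)) < 5)  # True
```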
# ## Other plots
#
# Matplotlib can display pretty much any chart you might like; here we collect some we use in the course. For others, see the [extensive Matplotlib documentation](https://matplotlib.org/gallery/index.html)
# ### Pie chart
# +
# %matplotlib inline
import matplotlib.pyplot as plt
labels = ['Oranges', 'Apples', 'Cucumbers']
fracs = [14, 23, 5] # how much for each sector, note doesn't need to add up to 100
plt.pie(fracs, labels=labels, autopct='%1.1f%%', shadow=True)
plt.title("Super strict vegan diet (good luck)")
plt.show()
# -
# ## Fancy plots
#
# You can enhance your plots with some eye candy; here are some examples.
# ### Background color
# CHANGES THE BACKGROUND COLOR FOR *ALL* SUBSEQUENT PLOTS
plt.rcParams['axes.facecolor'] = 'azure'
plt.plot([1,2,3],[4,5,6])
plt.show()
plt.rcParams['axes.facecolor'] = 'white' # restores the white for all following plots
plt.plot([1,2,3],[4,5,6])
plt.show()
# ### Text
# +
plt.xlim(0,450) # important to set when you add text
plt.ylim(0,600) # as matplotlib doesn't automatically resize to show them
plt.text(250,
450,
"Hello !",
fontsize=40,
fontweight='bold',
color="lightgreen",
ha='center', # centers text horizontally
va='center') # centers text vertically
plt.show()
# -
# ## Images
#
# Let's try adding the image [clef.png](clef.png)
# +
# %matplotlib inline
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(7,7))
# NOTE: if you don't see anything, check position and/or zoom factor
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
plt.xlim(0,150) # important to set when you add images
plt.ylim(0,200) # as matplotlib doesn't automatically resize to show them
ax=fig.gca()
img = plt.imread('clef.png')
ax.add_artist(AnnotationBbox(OffsetImage(img, zoom=0.5),
(50, 100),
frameon=False))
plt.show()
# -
# ### Color intensity
#
# To tweak the color intensity we can use the `alpha` parameter, which varies from `0.0` to `1.0`
plt.plot([150,175], [25,400],
color='green',
alpha=1.0, # full color
linewidth=10)
plt.plot([100,125],[25,400],
color='green',
alpha=0.3, # lighter
linewidth=10)
plt.plot([50,75], [25,400],
color='green',
alpha=0.1, # almost invisible
linewidth=10)
plt.show()
# ## Exercise - Be fancy
#
# Try writing some code to visualize the image down here
# +
# %matplotlib inline
import matplotlib.pyplot as plt
# write here
fig = plt.figure(figsize=(10,10))
# CHANGES BACKGROUND COLOR
plt.rcParams['axes.facecolor'] = 'azure'
# SHOWS TEXT
plt.text(250,
450,
"Be fancy",
fontsize=40,
fontweight='bold',
color="pink",
ha='center',
va='center')
# CHANGES COLOR INTENSITY WITH alpha
plt.plot([25,400], [300,300],
color='blue',
alpha=1.0, # full color
linewidth=10)
plt.plot([25,400], [200,200],
color='blue',
alpha=0.3, # softer
linewidth=10)
plt.plot([25,400], [100,100],
color='blue',
alpha=0.1, # almost invisible
linewidth=10)
# NOTE: if you don't see anything, check position and/or zoom factor
from matplotlib.offsetbox import OffsetImage, AnnotationBbox
plt.xlim(0,450) # important to set when you add images
plt.ylim(0,600) # as matplotlib doesn't automatically resize to show them
ax=fig.gca()
img = plt.imread('clef.png')
ax.add_artist(AnnotationBbox(OffsetImage(img, zoom=0.5),
(100, 200),
frameon=False))
plt.show()
# -
# ## Continue
#
# Go on with [the AlgoRhythm challenge](https://en.softpython.org/visualization/visualization2-chal.html) or the [numpy images tutorial](https://en.softpython.org/visualization/visualization-images-sol.html)
#
| visualization/visualization1-sol.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Wired Elements: experimenting Web Components
# https://github.com/rough-stuff/wired-elements
# ## Libraries
from httpserver import start, stop
from http.server import SimpleHTTPRequestHandler, HTTPServer
import os
from urllib.parse import urlparse, parse_qs
import ipywidgets as widgets
import time
from IPython.display import HTML
# ## Backend: the HTTP server
# + [markdown] tags=[]
# ### Widgets python implementation
# -
logsw=widgets.Textarea(layout=widgets.Layout(width='50%'))
logsw.value = ""
bag ={}
class myHTTPRequestHandler(SimpleHTTPRequestHandler):
    callback = {}

    def __init__(self, *args, directory=None, bag=bag, **kwargs):
        self.bag = bag
        self.directory = os.path.join(os.getcwd(), "widgets")
        super().__init__(*args, directory=self.directory, **kwargs)
        # print(self.directory)

    def end_headers(self):
        super().end_headers()

    def do_GET(self):
        self.parsed_path = urlparse(self.path)
        self.queryparams = parse_qs(self.parsed_path.query)
        if self.path.endswith('/version'):
            self.version()
        elif self.parsed_path.path.endswith('/setvalue'):
            self.setvalue()
        else:
            super().do_GET()

    def version(self):
        ans = '{"version": "0.0"}'
        eans = ans.encode()
        self.send_response(200)
        self.send_header("Content-type", "application/json")
        self.send_header("Content-Length", len(eans))
        self.end_headers()
        self.wfile.write(eans)

    def setvalue(self):
        self.bag.update({self.queryparams['variable'][0]: self.queryparams['value'][0]})
        if self.queryparams['variable'][0] in myHTTPRequestHandler.callback:
            self.callback[self.queryparams['variable'][0]](self.queryparams['value'][0])
        self.version()

    def log_message(self, format, *args):
        global logsw
        v = format % args
        t = time.localtime()
        current_time = time.strftime("%H:%M:%S", t)
        logsw.value += current_time + " " + v + "\n"
logsw
start(handler_class=myHTTPRequestHandler, port=8085)
# ## Frontend
# ### Widget javascript implementation: Using Web Components
class LoadModule(object):
    """This class loads the required JS and CSS modules"""
    def _repr_javascript_(self):
        return '''
debugger;
const modulename = 'wired-elements';
    let module = document.getElementById(modulename + '.js');
if (module === null){
let css_urls = [{"url": 'https://fonts.googleapis.com/css?family=Gloria+Hallelujah&display=swap', "rel": "stylesheet"},
{"url": "/proxy/8085/wired-elements/style.css", "rel": "stylesheet", "type": "text/css"}
];
let js_urls = [{"url": 'https://unpkg.com/wired-elements?module', "id": modulename+'.js'}
];
for (const css_url in css_urls){
let fileref = document.createElement('link');
fileref.rel = css_urls[css_url].rel;
if ('type' in css_urls[css_url]){
fileref.type = css_urls[css_url].type;
}
fileref.href = css_urls[css_url].url;
document.head.append(fileref);
}
for (const js_url in js_urls){
let script_= document.createElement('script');
script_.src = js_urls[js_url].url;
script_.type = "module"
script_.id = js_urls[js_url].id
document.head.appendChild(script_);
}
}
'''
LoadModule()
def cb_monnom(value):
global monnom
monnom.value = value
myHTTPRequestHandler.callback.update({'monnom': cb_monnom})
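The callback mechanism above is a simple dictionary dispatch: when `/setvalue` arrives, the server looks up the variable name and, if a handler is registered, calls it with the new value. The pattern in isolation (with a hypothetical `received` list standing in for the Label widget):

```python
# minimal sketch of the callback-dispatch pattern used by the request handler
callbacks = {}
received = []

def cb_monnom(value):
    received.append(value)

callbacks['monnom'] = cb_monnom
variable, value = 'monnom', 'Bob'
if variable in callbacks:       # only dispatch if a handler is registered
    callbacks[variable](value)
print(received)  # ['Bob']
```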
wired = HTML("""
<main>
<wired-card elevation="5">
<h1>wired-elements demo</h1>
</wired-card>
<section>
<wired-input placeholder="your name"></wired-input>
<wired-button elevation="2">Submit</wired-button>
</section>
</main>
""")
monnom = widgets.Label()
display(wired,monnom)
class ExecuteModule(object):
"""This class execute the code once the module is loaded"""
def _repr_javascript_(self):
return '''debugger;
let script_= document.createElement('script');
script_.src = "/proxy/8085/wired-elements/main.js";
script_.type = "module"
document.head.appendChild(script_);
'''
ExecuteModule()
# +
#stop()
# -
| wired-elements.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import matplotlib.pylab as plt
import numpy as np
# +
def Weight(phi, A=5, phi_o=0, delta=50):
    # linear hat weighting: 1 at phi = 0, 0 at |phi| = delta
    # (A and phi_o are accepted for interface compatibility but unused here)
    return 1-np.abs(phi)/delta
def Weight2(phi, A=5, phi_o=0, width=50, n=100):
    # smooth polynomial weighting of order n over a band of the given width
    # (A and phi_o are likewise unused here)
    return ((n+1.0)/(n-1.0))*(1.0 - n * (np.abs(2.0 * phi/width))**( n - 1.0) + (n - 1.0) * (np.abs(2.0*phi/width))**( n))
def annot_max(x,y, ax=None):
x=np.array(x)
y=np.array(y)
xmax = x[np.argmax(y)]
ymax = y.max()
text= "x={:.3f}, y={:.3f}".format(xmax, ymax)
if not ax:
ax=plt.gca()
bbox_props = dict(boxstyle="square,pad=0.3", fc="w", ec="k", lw=0.72)
arrowprops=dict(arrowstyle="->",connectionstyle="angle,angleA=0,angleB=60")
kw = dict(xycoords='data',textcoords="axes fraction",
arrowprops=arrowprops, bbox=bbox_props, ha="right", va="top")
ax.annotate(text, xy=(xmax, ymax), xytext=(0.94,0.96), **kw)
def plotweighting(philist, A, delta, phi_o, enumeration,color):
label=enumeration
plt.plot(philist,[Weight(phi, A = A, phi_o = phi_o, delta=delta) for phi in philist], label = label,color=color)
#def plotweighting(philist, A, delta, phi_o, enumeration,color):
# label=enumeration
# plt.plot(philist,[Weight2(phi, A = A, phi_o = phi_o, width=2*delta) for phi in philist], label = label,color=color)
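A quick sanity check of the linear weighting defined above: it should equal 1 at $\phi=0$ and fall to 0 at $|\phi|=\delta$. A minimal standalone sketch:

```python
import numpy as np

def Weight(phi, delta=50):
    # linear hat function: 1 at phi = 0, 0 at |phi| = delta
    return 1 - np.abs(phi) / delta

print(Weight(0))    # 1.0
print(Weight(50))   # 0.0
print(Weight(-25))  # 0.5
```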
# +
from palettable.scientific.sequential import GrayC_20_r
from matplotlib.colors import ListedColormap, LinearSegmentedColormap
cmapCurve = ListedColormap(GrayC_20_r.mpl_colors[7:])
cmap = LinearSegmentedColormap.from_list("cont",GrayC_20_r.mpl_colors[10:])
#cmap = ListedColormap(GrayC_20_r.mpl_colors[10:])
# +
def PlotLineVariation(delta,A_factor,phi_o_factor,p = 3,**kwargs):
A = A_factor*p/delta
phi_o = delta*phi_o_factor
philist=np.arange(-(delta),(delta),.5).tolist()
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
    plotweighting(philist, A, delta, phi_o,r"$\omega(\phi,\phi_o,A)$",cmapCurve.colors[0])
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
AnnotatePhi_o = kwargs.get("AnnotatePhi_o",False)
AnnotateA = kwargs.get("AnnotateA",False)
if AnnotatePhi_o:
        plt.text(phi_o+0.04*delta,0.3,r'$\phi_o={}\delta$'.format(phi_o_factor),rotation=-60,fontsize='small')
if AnnotateA:
        plt.text(phi_o*1.0-1/(1.5*A),0.75+np.log(A_factor)/15,r'$A={}\delta$'.format(A_factor),rotation=-30,fontsize='small',horizontalalignment='center')
def BackgroundNPlotFormat(delta):
philist=np.arange(-(delta),(delta),.5).tolist()
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
N = 10000
X, Y = np.mgrid[-delta:delta:complex(0, N), 0:1:complex(0, 5)]
Z = np.abs(X)
plt.pcolormesh(X, Y, Z, cmap=cmap)
#%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
TitleText = r"$\delta$ = {delta}m".format(delta=delta)
plt.axvline([delta],c="k",ls=":");plt.axvline([-delta],c="k",ls=":");
plt.text(delta*(0.97),0.95,TitleText,rotation=90)
plt.axvline([0],c="k",ls="--");
plt.xlabel("$\phi(x)$")
plt.grid()
plt.xticks(np.arange(-delta , delta , 10))
plt.ylabel("$\omega(\phi,\phi_o,A)$",fontsize='large')
#plt.legend(title=TitleText,loc='lower left')
# +
bbox=dict(facecolor='white', edgecolor='black', boxstyle='round,pad=0.3')
plt.figure(figsize= [15, 4],dpi=100)
delta = 50.05;
PlotLineVariation(delta, A_factor=3, phi_o_factor=0.5, p = 3)
PlotLineVariation(delta, A_factor=2, phi_o_factor=0.5, p = 3)
BoxAnnotation = r"""$w(\phi,\delta)$"""
plt.text(-delta*(.95), 0.70, BoxAnnotation, color='black', bbox=bbox)
BackgroundNPlotFormat(delta)
plt.title("Weighting for plastic multiplier")
plt.xlim(-delta-5,delta+5)
plt.show()
# -
def plotweighting(philist, A, delta, phi_o, enumeration,color):
label=enumeration
plt.plot(philist,[Weight2(phi, A = A, phi_o = phi_o, width=2*delta, n=50) for phi in philist], label = label,color=color)
# +
bbox=dict(facecolor='white', edgecolor='black', boxstyle='round,pad=0.3')
plt.figure(figsize= [15, 4],dpi=100)
delta = 50.05;
PlotLineVariation(delta, A_factor=50, phi_o_factor=0.5)
BoxAnnotation = r"""$w(\phi,\delta)$"""
plt.text(-delta*(.95), 0.70, BoxAnnotation, color='black', bbox=bbox)
BackgroundNPlotFormat(delta)
plt.title("Weighting for plastic multiplier")
plt.xlim(-delta-5,delta+5)
plt.show()
# -
| PythonCodes/Utilities/WeightingPlots/PM_Plottings.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import argparse
import os
import pprint
import shutil
import sys
import logging
import time
import timeit
from pathlib import Path
import numpy as np
import torch
import torch.nn as nn
import torch.backends.cudnn as cudnn
import torch.optim
from tensorboardX import SummaryWriter
import _init_paths
import models
import datasets
from config import config
from config import update_config
from core.criterion import CrossEntropy, OhemCrossEntropy
from core.function import train, validate
from utils.modelsummary import get_model_summary
from utils.utils import create_logger, FullModel
import pickle
import glob
from torchvision import transforms
import PIL.Image as Image
# -
with open('../train_args.pkl', 'rb') as f:
args = pickle.load(f)
args.cfg = '../'+args.cfg #changed
update_config(config, args)
config.CUDNN.keys()
logger, final_output_dir, tb_log_dir = create_logger(
config, args.cfg, 'train')
logger.info(pprint.pformat(args))
logger.info(config)
writer_dict = {
'writer': SummaryWriter(tb_log_dir),
'train_global_steps': 0,
'valid_global_steps': 0,
}
'datasets.'+config.DATASET.DATASET
# +
cudnn.benchmark = config.CUDNN.BENCHMARK
cudnn.deterministic = config.CUDNN.DETERMINISTIC
cudnn.enabled = config.CUDNN.ENABLED
# gpus = list(config.GPUS)
gpus = [0,1] # changed
# build model
model = eval('models.'+config.MODEL.NAME +
'.get_seg_model')(config)
config.TRAIN.IMAGE_SIZE[0] = int(512)
config.TRAIN.IMAGE_SIZE[1] = int(256)
dump_input = torch.rand(
(1, 3, config.TRAIN.IMAGE_SIZE[1], config.TRAIN.IMAGE_SIZE[0])
)
logger.info(get_model_summary(model.cuda(), dump_input.cuda()))
# # copy model file
this_dir = os.path.dirname('./') #changed
models_dst_dir = os.path.join(final_output_dir, 'models')
if os.path.exists(models_dst_dir):
shutil.rmtree(models_dst_dir)
shutil.copytree(os.path.join(this_dir, '../lib/models'), models_dst_dir)
# prepare data
crop_size = (int(config.TRAIN.IMAGE_SIZE[1]), int(config.TRAIN.IMAGE_SIZE[0]))
train_dataset = eval('datasets.'+config.DATASET.DATASET)(
root = '../' + config.DATASET.ROOT, #changed
# root=config.DATASET.ROOT,
list_path=config.DATASET.TRAIN_SET,
num_samples=None,
num_classes=config.DATASET.NUM_CLASSES,
multi_scale=config.TRAIN.MULTI_SCALE,
flip=config.TRAIN.FLIP,
ignore_label=config.TRAIN.IGNORE_LABEL,
base_size=config.TRAIN.BASE_SIZE,
crop_size=crop_size,
downsample_rate=config.TRAIN.DOWNSAMPLERATE,
scale_factor=config.TRAIN.SCALE_FACTOR)
trainloader = torch.utils.data.DataLoader(
train_dataset,
batch_size=config.TRAIN.BATCH_SIZE_PER_GPU*len(gpus),
shuffle=config.TRAIN.SHUFFLE,
num_workers=0, #config.WORKERS,
pin_memory=True,
drop_last=True)
if config.DATASET.EXTRA_TRAIN_SET:
extra_train_dataset = eval('datasets.'+config.DATASET.DATASET)(
root = '../' + config.DATASET.ROOT, #changed
# root=config.DATASET.ROOT,
list_path=config.DATASET.EXTRA_TRAIN_SET,
num_samples=None,
num_classes=config.DATASET.NUM_CLASSES,
multi_scale=config.TRAIN.MULTI_SCALE,
flip=config.TRAIN.FLIP,
ignore_label=config.TRAIN.IGNORE_LABEL,
base_size=config.TRAIN.BASE_SIZE,
crop_size=crop_size,
downsample_rate=config.TRAIN.DOWNSAMPLERATE,
scale_factor=config.TRAIN.SCALE_FACTOR)
extra_trainloader = torch.utils.data.DataLoader(
extra_train_dataset,
batch_size=config.TRAIN.BATCH_SIZE_PER_GPU*len(gpus),
shuffle=config.TRAIN.SHUFFLE,
num_workers=config.WORKERS,
pin_memory=True,
drop_last=True)
test_size = (config.TEST.IMAGE_SIZE[1], config.TEST.IMAGE_SIZE[0])
test_dataset = eval('datasets.'+config.DATASET.DATASET)(
root = '../' + config.DATASET.ROOT, #changed
# root=config.DATASET.ROOT,
list_path=config.DATASET.TEST_SET,
num_samples=config.TEST.NUM_SAMPLES,
num_classes=config.DATASET.NUM_CLASSES,
multi_scale=False,
flip=False,
ignore_label=config.TRAIN.IGNORE_LABEL,
base_size=config.TEST.BASE_SIZE,
crop_size=test_size,
downsample_rate=1)
testloader = torch.utils.data.DataLoader(
test_dataset,
batch_size=config.TEST.BATCH_SIZE_PER_GPU*len(gpus),
shuffle=False,
num_workers=0,#config.WORKERS,
pin_memory=True)
# -
#config.TRAIN.LR=0.0001
LR = 0.001
# +
from models import segnet_vj
domain_network = segnet_vj.segnet_domain_adapt(config)
domain_network.init_weights(config.MODEL.PRETRAINED)
pretrained_dict = torch.load('../hrnet_w48_cityscapes_cls19_1024x2048_ohem_trainvalset.pth')
model_dict = domain_network.state_dict()
pretrained_dict = {k[6:]: v for k, v in pretrained_dict.items()
if k[6:] in model_dict.keys()}
model_dict.update(pretrained_dict)
domain_network.load_state_dict(model_dict)
optimizer = torch.optim.SGD([{'params':
filter(lambda p: p.requires_grad,
domain_network.parameters()),
'lr': LR/10}],
lr=LR/10,
momentum=config.TRAIN.MOMENTUM,
weight_decay=config.TRAIN.WD,
nesterov=config.TRAIN.NESTEROV,
)
classifier_criterion = CrossEntropy(ignore_label=config.TRAIN.IGNORE_LABEL,
weight=train_dataset.class_weights)
# find output dimension of our model
domain_network.eval()
domain_network.float()
domain_network.cuda()
with torch.no_grad():
feat = domain_network.forward_feature(dump_input.cuda())
pred = domain_network.forward_classifier(feat)
disc_input_size = feat.shape[1]
discriminator = segnet_vj.FCN_discriminator(in_channels=disc_input_size, out_classes=2)
# Define discriminator loss
disc_criterion = nn.CrossEntropyLoss()
disc_optimizer = torch.optim.SGD([{'params':
filter(lambda p: p.requires_grad,
discriminator.parameters()),
'lr': LR}],
lr=LR,
momentum=config.TRAIN.MOMENTUM,
weight_decay=config.TRAIN.WD,
nesterov=config.TRAIN.NESTEROV,
)
full_network = segnet_vj.Full_Adaptation_Model(network = domain_network, loss_classifier=classifier_criterion,\
discriminator=discriminator, loss_discriminator=disc_criterion)
# full_network.cuda()
# +
try:
    full_network.network.load_state_dict(torch.load('../adapt_classification_net_1.pth'))
    full_network.discriminator.load_state_dict(torch.load('../adapt_discriminator_net_1.pth'))
    print("Network loaded from saved models")
except Exception:
    # no saved adaptation checkpoint; keep the pretrained classification weights
    print("Network loaded from scratch (except classification net)")
full_network_model = nn.DataParallel(full_network, device_ids=gpus).cuda()
# -
(config.TRAIN.IMAGE_SIZE[0], config.TRAIN.IMAGE_SIZE[1])
# +
import glob
from PIL import Image
import torchvision.transforms as transforms
class Image_loader():
def __init__(self, folder_path, img_shape):
self.folder_path = folder_path
self.img_shape = img_shape
self.file_list = glob.glob(folder_path + "/*jpg")
self.trans = transforms.ToTensor()
self.order = np.random.choice([i for i in range(len(self.file_list))], len(self.file_list), replace=False)
self.current_index = 0
def get_images(self,batch_size):
img_list = []
for i in range(batch_size):
img = Image.open(self.file_list[self.order[self.current_index]])
self.current_index = (self.current_index+1)%len(self.file_list)
img_list.append( self.trans(img.resize(self.img_shape)) )
img_dataset = torch.stack(img_list)
return img_dataset
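The loader above cycles through a shuffled file order using modular arithmetic, so requesting more images than there are files simply wraps around to the start. The index logic can be checked in isolation (no image files needed):

```python
import numpy as np

n_files = 5
order = np.random.choice(range(n_files), n_files, replace=False)
current = 0
seen = []
for _ in range(7):                        # request more images than files
    seen.append(order[current])
    current = (current + 1) % n_files     # wraps back to the start
print(len(seen))  # 7
```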
# +
folder_path = '../domain2_images'
img_shape = (config.TRAIN.IMAGE_SIZE[0], config.TRAIN.IMAGE_SIZE[1])
img_loader = Image_loader(folder_path = '../domain2_images', img_shape=img_shape)
# -
full_network.network.conv2.weight[5,0]
full_network.network.train()
full_network.discriminator.train()
full_network_model.train()
for i_iter, batch in enumerate(trainloader, 0):
disc_optimizer.zero_grad()
optimizer.zero_grad()
images, labels, _, _ = batch
d2_images = img_loader.get_images(batch_size=len(images))
label_discriminator = np.zeros([len(images)*2])
label_discriminator[-len(d2_images):] = 1
label_discriminator = torch.from_numpy(label_discriminator)
loss = full_network_model(input_d1=images, label_d1=labels.cuda().long(), input_d2=d2_images,\
label_discriminator=label_discriminator.cuda().long(), lamda=0.25)
final_loss = torch.mean(loss)
final_loss.backward()
disc_optimizer.step()
optimizer.step()
print("final loss:", final_loss.item(), "\n")
if (i_iter == 25):
break
full_network.network.conv2.weight[5,0]
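The discriminator labels built inside the loop are just a concatenated 0/1 vector: zeros for the domain-1 batch, ones for the domain-2 batch. In isolation, with a hypothetical batch size:

```python
import numpy as np

batch = 4  # hypothetical batch size
label_discriminator = np.zeros(batch * 2)
label_discriminator[-batch:] = 1  # second half = domain-2 images
print(label_discriminator)  # [0. 0. 0. 0. 1. 1. 1. 1.]
```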
import torch.cuda as cuda
cuda.memory_allocated(0) /(1024*1024)
# Save the model
torch.save(full_network.network.state_dict(), '../adapt_classification_net_1.pth')
torch.save(full_network.discriminator.state_dict(), '../adapt_discriminator_net_1.pth')
# Now we have a module that takes a list of image paths and compares the segmentation output from the domain-adapted network and the initial network
# +
base_network = segnet_vj.segnet_domain_adapt(config)
base_network.init_weights(config.MODEL.PRETRAINED)
pretrained_dict = torch.load('../hrnet_w48_cityscapes_cls19_1024x2048_ohem_trainvalset.pth')
model_dict = base_network.state_dict()
pretrained_dict = {k[6:]: v for k, v in pretrained_dict.items()
if k[6:] in model_dict.keys()}
model_dict.update(pretrained_dict)
base_network.load_state_dict(model_dict)
adapted_network = full_network.network
test_device = 'cuda:0'
base_network.to(test_device)
adapted_network.to(test_device)
base_network.eval()
adapted_network.eval()
print("Networks ready")
# +
import matplotlib.pyplot as plt
img = img_loader.get_images(batch_size=1)
img = img.to(test_device)
op1 = base_network(img)
op2 = adapted_network(img)
fig_img1 = op1.detach().cpu().numpy().squeeze().argmax(axis=0)
fig_img2 = op2.detach().cpu().numpy().squeeze().argmax(axis=0)
# Ensure the color range is always same
fig_img1[0,0],fig_img1[0,1] = 0,18
fig_img2[0,0],fig_img2[0,1] = 0,18
img_trnsfm = transforms.ToPILImage()
# +
plt.figure(figsize=(15,10))
plt.subplot(1,3,1)
plt.imshow(img_trnsfm(img.detach().cpu()[0]))
plt.title("Actual Image")
plt.subplot(1,3,2)
plt.imshow(fig_img1)
plt.title("Base network")
plt.subplot(1,3,3)
plt.imshow(fig_img2)
plt.title("Adapted network")
plt.show()
# -
# To free memory
del op1, op2, base_network
| tools/train_vj.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
from __future__ import division
from nltk.book import *
import nltk, re, pprint, urllib
#from nltk import word_tokenize
# One useful way we can process our words is to look at the "term frequencies" of the document. "Term frequency" is just another word for how frequently a given word appears in a document.
# NLTK has a built-in function which creates a frequency distribution of all the words in our document.
myFreqDist = nltk.FreqDist(text4)
# We can use this frequency distribution to get the most frequent words in our text, or get the frequency of a particular term inside of a text.
# +
print myFreqDist["the"] #print the number of instances of the word "the" in the text
print myFreqDist.freq("the") #print the frequency of the word "the" in the text
print myFreqDist.most_common(50)
# -
# Another option for analysing term frequencies in documents is using NLTK's "Text Collection" class. This class will allow us to compare the frequencies of words relative to their respective frequencies in other documents. This will give us a more useful measure of how relevant a given term is to a document.
#
# The statistic we use to represent this idea is "tf-idf", short for "term frequency-inverse document frequency." Inverse document frequency is a measure of how frequently a term appears across all documents in our collection.
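The statistic is simple enough to compute by hand. A minimal from-scratch sketch (Python 3 syntax, using the common log-idf variant; NLTK's `TextCollection` implements the same idea, though its exact formula may differ):

```python
import math

# toy corpus: three "documents" as word lists
docs = [
    "the nation speaks to the nation".split(),
    "the whale swims".split(),
    "a nation of the readers".split(),
]

def tf(term, doc):
    # term frequency: share of the document's words that are this term
    return doc.count(term) / len(doc)

def idf(term, docs):
    # inverse document frequency: rarer across documents -> larger
    n_containing = sum(1 for d in docs if term in d)
    return math.log(len(docs) / n_containing)

def tf_idf(term, doc, docs):
    return tf(term, doc) * idf(term, docs)

print(tf_idf("nation", docs[0], docs))  # distinctive for doc 0, so > 0
print(tf_idf("the", docs[0], docs))     # appears everywhere, so 0.0
```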
# +
from nltk.text import TextCollection
myTextCollection = TextCollection([text1,text2,text3,text4,text5,text6,text7,text8,text9])
print myTextCollection.tf_idf("nation",text7)
print myTextCollection.tf_idf("nation",text4)
print myTextCollection.tf_idf("nation",text1)
# -
# tf-idf is commonly used in search engines to find the most relevant results of a search.
| term frequencies.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <NAME>, Data Scientist - Who has fun LEARNING, EXPLORING & GROWING
# __INSTRUCTIONS__
#
# This assignment tests whether you have a basic level of competency in Python. The assignment has one required part. Note that you will have an unlimited number of submissions.
#
# ## __I. Word Count (100 points)__
#
# The question can be answered using material we've covered in class and with the help of the functions listed in the Notes to the question.
#
# __I. Counting the frequency of Words__
# In this assignment, you will write a function named word_distribution that takes a string as an input and returns a dictionary containing the frequency of each word in the text. Your function should convert all words to lower case and should remove punctuation that occurs at the end of a word.
#
# For example if the argument to the function is:
#
#
#
# text_string = “Hello. How are you? Please say hello if you don’t love me!”
#
# your function should return (note that dictionaries are unordered and you may see a different ordering of keys in what your function returns):
#
#
#
# _{‘hello’: 2, ‘how’:1, ‘are’:1, ‘you’:2, ’please’:1, “don’t”: 1, 'say':1, 'if':1, 'love':1,'me':1}_
#
#
#
# For the purposes of this assignment, you can assume that words have at most one punctuation symbol at the end or one punctuation symbol at the beginning and ignore punctuation that appears anywhere else. For example:
#
# _text_string = "That's when I saw Jane (John's sister)!"_
#
# should return:
#
#
# _{"that's":1, "when":1,"i":1,"saw":1,"jane":1, "john's":1, "sister)":1}_
#
# Though __sister)__ is not really a word, we'll accept it for the purposes of this assignment. Don't try to remove more than 1 punctuation symbol at the end of the words, because it would be rejected by the grader.
#
#
#
# Notes:
#
#
# 1. __word.split()__ splits a string on spaces and returns a list. Note that this will concatenate punctuation with words. For the above example, you will get:
#
# [‘Hello.’,’How’,’are’,’you?’,…]
#
# 2. __word = word.lower()__ converts a string into lower case.
#
# 3. __x.isalpha()__ returns __True__ if a character is a letter of the alphabet and __False__ otherwise
#
# <u>What To Submit</u>. wordcount.py which should behave as specified above. Before you submit, the RUN button on Vocareum should help you determine whether or not your program executes correctly on the platform. Submit only one solution for the problem.
#
#
text_string = "Hello. How are you? Please say hello if you don’t love me!"
#text_string = "That's when I saw Jane (John's sister)!"
text_string
words_list = text_string.split()
#words_list
words_list = [words_list[i].lower() for i in range(len(words_list))]
#words_list
for i in range(len(words_list)):
if not words_list[i].isalpha():
word = words_list[i]
print(word)
for j in word:
if j != "\'" and not j.isalpha():
idx = word.find(j)
words_list[i] = word.replace(word[idx],"")
break
print('----------')
words_list
words_dict = {}
#words_count
for word in words_list:
if word in words_dict:
words_dict[word] += 1
else:
words_dict[word] = 1
print(words_dict)
# ## Project 2 Assignment
# +
text_string = "Hello. How are you? Please say hello if you don’t love me!"
text_string = "That's when I saw Jane (John's sister)!"
text_string
def word_distribution(text_string):
words_list = text_string.split()
words_list = [words_list[i].lower() for i in range(len(words_list))]
for i in range(len(words_list)):
if not words_list[i].isalpha():
word = words_list[i]
for j in word:
if j != "\'" and j != "’" and not j.isalpha():
idx = word.find(j)
words_list[i] = word.replace(word[idx],"")
#break
words_dict = {}
for word in words_list:
if word in words_dict:
words_dict[word] += 1
else:
words_dict[word] = 1
print(words_dict)
result = words_dict
return result
# -
word_distribution(text_string)
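For comparison, a more compact version of the same word-frequency idea using the standard library. This is a sketch, not the required assignment solution; it strips at most one punctuation mark from each end of a word:

```python
from collections import Counter
import string

def word_distribution_compact(text):
    words = []
    for w in text.lower().split():
        # remove at most one punctuation mark at each end
        if w and w[0] in string.punctuation:
            w = w[1:]
        if w and w[-1] in string.punctuation:
            w = w[:-1]
        if w:
            words.append(w)
    return dict(Counter(words))

print(word_distribution_compact("Hello. How are you? Say hello!"))
# {'hello': 2, 'how': 1, 'are': 1, 'you': 1, 'say': 1}
```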
# Expected outputs for the two test strings (dictionaries are unordered):
# {'hello': 2, 'how': 1, 'are': 1, 'you': 2, 'please': 1, "don't": 1, 'say': 1, 'if': 1, 'love': 1, 'me': 1}
# {"that's": 1, 'when': 1, 'i': 1, 'saw': 1, 'jane': 1, "john's": 1, 'sister)': 1}
| Analytics & Python/Week 2 - A Crash Course In Python Part 2/Week 2 Assignment (word count).ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
NCI_Thesaurus = pd.read_csv(r'D:\Python_Database\Thesaurus.csv')
# del NCI_Thesaurus['Unnamed: 5']
# del NCI_Thesaurus['Unnamed: 6']
# del NCI_Thesaurus['Description 3']
del NCI_Thesaurus['ID 1']
del NCI_Thesaurus['ID 2']
NCI_Thesaurus.head()
# +
term = raw_input('What is the item you wished described? ')
class color:
PURPLE = '\033[95m'
CYAN = '\033[96m'
DARKCYAN = '\033[36m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
BOLD = '\033[1m'
UNDERLINE = '\033[4m'
END = '\033[0m'
temp = NCI_Thesaurus.apply(lambda row: row.astype(str).str.contains(term).any(), axis=1)
for i in range(len(temp.tolist())):
if temp.tolist()[i] == True:
print '\n'
print color.BOLD + term + color.END
print NCI_Thesaurus.loc[i]['Description 2']
print NCI_Thesaurus.loc[i]['Description 3']
# -
| timdrop/Myeloma Terms - NCI Thesaurus.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import toto
from toto.inputs.mat import MATfile
filename=r'../_tests/mat_file/tidal.mat'
tx=MATfile(filename)
df=tx._toDataFrame()
dfout=df[0].TideAnalysis.skew_surge(mag='el_tide',\
args={'Minimum SNR':2,\
'Latitude':-36.0,
})
print(dfout)
# -
| notebook/create_skew_surge.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Examine if the important mobility ties are structural
#
# - Combine three networks: Spatial + (strong or insig weak) + (sig weak pos or neg).
# - Compute the edge_betweenness_centralities
# - On second thought: maybe it is fine to combine only two networks: spatial + one of the others.
import numpy as np
import pandas as pd
import geopandas as gpd
import networkx as nx
import matplotlib.pyplot as plt
import pickle
import copy
from scipy.sparse import csr_matrix
import time
from sklearn.preprocessing import normalize
import sys
sys.path.append("../")
import utils
import importlib
importlib.reload(utils)
# +
# read files
with open("../../data/02_intermediate/boston_stays.pickle", 'rb') as f:
df_boston = pickle.load(f)
with open("../../data/02_intermediate/miami_stays.pickle", 'rb') as f:
df_miami = pickle.load(f)
with open("../../data/02_intermediate/chicago_stays.pickle", 'rb') as f:
df_chicago = pickle.load(f)
with open("../../data/03_processed/A_home_activity_three_cities_unweighted_dic.pickle", 'rb') as f:
A_home_activity_unweighted_dic = pickle.load(f)
with open("../../data/03_processed/A_home_activity_three_cities_weighted_dic.pickle", 'rb') as f:
A_home_activity_weighted_dic = pickle.load(f)
# -
with open("../../data/03_processed/spatial_network_boston_miami_chicago_dic.pickle", 'rb') as f:
spatial_network_dic = pickle.load(f)
# read shapefiles
with open("../../data/02_intermediate/boston_miami_chicago_ct_shp_dic.pickle", 'rb') as f:
shp_dic = pickle.load(f)
# read evaluation files
with open("../../data/05_model_outputs/lasso_coefficients.pickle", 'rb') as f:
lasso_coef = pickle.load(f)
# +
# activity counts for the three cities
# boston
activity_counts_boston = np.unique(df_boston.cat, return_counts = True)
# miami
activity_counts_miami = np.unique(df_miami.cat, return_counts = True)
# chicago
activity_counts_chicago = np.unique(df_chicago.cat, return_counts = True)
# convert the counts to df
activity_counts_dic = {}
activity_counts_dic['boston']=activity_counts_boston
activity_counts_dic['miami']=activity_counts_miami
activity_counts_dic['chicago']=activity_counts_chicago
# turn them to dataframes
activity_counts_df_dic = {}
for key_ in activity_counts_dic.keys():
activity_counts = activity_counts_dic[key_]
activity_count_df = pd.DataFrame(activity_counts[1],
index = activity_counts[0],
columns = ['count'])
sorted_activity_count_df = activity_count_df.sort_values('count', ascending=False)
activity_counts_df_dic[key_] = sorted_activity_count_df
# -
shp_dic['boston']
# ### Part 1. Compute edge bet centralities for three graphs (each iteration)
#
# - Part 1. Combine spatial + strong or weak insig + weak pos or neg networks
# - Reasoning: we compare the significant mobility networks to other two simultaneously.
# - However, it seems that we could compare the four mobility networks separately.
#
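The network-integration step performed in the cell below reduces to summing adjacency matrices element-wise and clipping the result back to valid 0/1 entries. The same operation on toy 3-node adjacency DataFrames:

```python
import pandas as pd

nodes = ['a', 'b', 'c']
spatial = pd.DataFrame([[0, 1, 0], [1, 0, 1], [0, 1, 0]],
                       index=nodes, columns=nodes)
mobility = pd.DataFrame([[0, 1, 1], [1, 0, 0], [1, 0, 0]],
                        index=nodes, columns=nodes)

# element-wise sum, then clip so overlapping ties stay valid 0/1 entries
integrated = spatial.add(mobility, fill_value=0.0)
integrated.values[integrated.values > 1.0] = 1.0
print(integrated)
```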
# +
# total time: about 2 hours.
# init
network_property_dic = {}
# fixed parameters
spatial_net_name = 'queen_contiguity_adj_df'
model_type = 'lasso (no socio-demographics)'
threshold = 1.0
top_K_as_strong_mobility_ties = 50
sampling_size = 10 # number of samples we need from each activity list (strong, weak insig, etc.)
# five layers of iteration. It is slow...
for city in ['boston','chicago','miami']:
network_property_dic[city] = {}
# need to try and test if the spatial net is connected.
# if not, use only the largest component for the analysis.
spatial_net = spatial_network_dic[city][spatial_net_name]
G_spatial = nx.from_pandas_adjacency(spatial_net)
if nx.number_connected_components(G_spatial) > 1:
# if city = chicago or miami, then the network is disconnected. We choose the giant component.
# find the giant component
Gcc = sorted(nx.connected_components(G_spatial), key=len, reverse=True)
for G_sub in Gcc:
print(len(G_sub)) # print the size of the components.
G0 = G_spatial.subgraph(Gcc[0])
giant_component_node_list = sorted(list(G0.nodes))
giant_component_node_list
# replace the input shapefile and the spatial networks
spatial_network_dic[city][spatial_net_name] = spatial_network_dic[city][spatial_net_name].loc[giant_component_node_list, giant_component_node_list]
shp_dic[city] = shp_dic[city].loc[giant_component_node_list, :]
# recreate the spatial net and spatial graph
spatial_net = spatial_network_dic[city][spatial_net_name]
G_spatial = nx.from_pandas_adjacency(spatial_net)
print(city)
print("Baseline average distance: ", nx.average_shortest_path_length(G_spatial))
for output_var in ['inc_median_household_2018', 'property_value_median_2018', 'rent_median_2018']:
network_property_dic[city][output_var] = {}
# create four lists of activities: strong, weak insig, weak sig pos, weak sig neg.
strong_activity_list = list(activity_counts_df_dic[city].index[:top_K_as_strong_mobility_ties])
weak_sig_activity_list = list(lasso_coef[city][output_var][model_type].index)
weak_sig_neg_activities = list(lasso_coef[city][output_var][model_type]['value'].loc[lasso_coef[city][output_var][model_type]['value'] < 0.0].index)
weak_sig_pos_activities = list(lasso_coef[city][output_var][model_type]['value'].loc[lasso_coef[city][output_var][model_type]['value'] > 0.0].index)
weak_insig_activity_list = list(set(activity_counts_df_dic[city].index).difference(set(strong_activity_list)).difference(set(weak_sig_activity_list)))
#
activity_type_list = ['strong', 'weak_sig_neg', 'weak_sig_pos', 'weak_insig']
activity_list_dic = {}
activity_list_dic['strong'] = strong_activity_list
activity_list_dic['weak_sig_neg'] = weak_sig_neg_activities
activity_list_dic['weak_sig_pos'] = weak_sig_pos_activities
activity_list_dic['weak_insig'] = weak_insig_activity_list
# combine spatial, benchmark, and target networks to compute the edge_bet_centralities.
for activity_type_benchmark in ['strong', 'weak_insig']:
for activity_type_target in ['weak_sig_neg', 'weak_sig_pos']:
print(activity_type_benchmark, activity_type_target)
network_property_dic[city][output_var][(activity_type_benchmark, activity_type_target)] = {}
activity_type_benchmark_list = activity_list_dic[activity_type_benchmark]
activity_type_target_list = activity_list_dic[activity_type_target]
for i in range(sampling_size):
activity_benchmark_name = np.random.choice(activity_type_benchmark_list)
activity_target_name = np.random.choice(activity_type_target_list)
print(activity_benchmark_name, activity_target_name)
spatial_net = spatial_network_dic[city][spatial_net_name]
mobility_benchmark_net = utils.turn_df_to_adj(A_home_activity_unweighted_dic[city][threshold][activity_benchmark_name], shp_dic[city])
mobility_target_net = utils.turn_df_to_adj(A_home_activity_unweighted_dic[city][threshold][activity_target_name], shp_dic[city])
# integrate networks and compute betweenness centralities
integrated_adj = spatial_net.add(mobility_benchmark_net, fill_value = 0.0).add(mobility_target_net, fill_value = 0.0)
integrated_adj.values[integrated_adj.values > 1.0] = 1.0 # valid adj matrices
# Get the edge betweenness metrics
G = nx.from_pandas_adjacency(integrated_adj)
edge_bet_centrality_graph = nx.edge_betweenness_centrality(G) # joint centrality metrics
# turn the graph info to dataframe
edge_bet_centrality_df = pd.DataFrame(edge_bet_centrality_graph.values(),
index = list(edge_bet_centrality_graph.keys()),
columns = ['edge_bet_centrality'])
# separate the spatial, strong, and weak ties
G_spatial = nx.from_pandas_adjacency(spatial_net)
G_mobility_benchmark = nx.from_pandas_adjacency(mobility_benchmark_net)
G_mobility_target = nx.from_pandas_adjacency(mobility_target_net)
spatial_edges = list(G_spatial.edges())
mobility_benchmark_edges = list(G_mobility_benchmark.edges())
mobility_target_edges = list(G_mobility_target.edges())
                    # Boston - income - four types of activities - specific edge bet centralities
network_property_dic[city][output_var][(activity_type_benchmark, activity_type_target)][('spatial', activity_benchmark_name, activity_target_name)]=(edge_bet_centrality_df.loc[spatial_edges, 'edge_bet_centrality'].mean(),
edge_bet_centrality_df.loc[mobility_benchmark_edges, 'edge_bet_centrality'].mean(),
edge_bet_centrality_df.loc[mobility_target_edges, 'edge_bet_centrality'].mean())
# -
# ## save
with open('../../data/05_model_outputs/network_property_edge_bet_centrality.pickle', 'wb') as f:
pickle.dump(network_property_dic, f)
# ## open
with open('../../data/05_model_outputs/network_property_edge_bet_centrality.pickle', 'rb') as f:
network_property_dic = pickle.load(f)
network_property_dic['boston']['inc_median_household_2018']
# ### Part 2. Compute edge bet centralities for two graphs (each iteration)
#
# - Combine spatial + strong or weak insig or weak pos or neg networks
#
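# Before the full loops below, here is a minimal, self-contained sketch of the edge-betweenness
# bookkeeping this section uses: two toy 0/1 adjacency layers are integrated, clipped back to a
# valid adjacency matrix, and the mean betweenness is read off per layer. The node names and
# values are illustrative only, and the adjacency DataFrames are built directly rather than via
# `utils.turn_df_to_adj`.

```python
import networkx as nx
import pandas as pd

# Toy tract ids and two 0/1 adjacency layers (illustrative values only).
nodes = ['t0', 't1', 't2', 't3']
spatial = pd.DataFrame(0.0, index=nodes, columns=nodes)
mobility = pd.DataFrame(0.0, index=nodes, columns=nodes)
for i, j in [('t0', 't1'), ('t1', 't2'), ('t2', 't3')]:  # spatial chain
    spatial.loc[i, j] = spatial.loc[j, i] = 1.0
mobility.loc['t0', 't3'] = mobility.loc['t3', 't0'] = 1.0  # one mobility tie

# Integrate the layers and clip overlaps back to a valid 0/1 adjacency matrix.
combined = spatial.add(mobility, fill_value=0.0)
combined.values[combined.values > 1.0] = 1.0

# Edge betweenness on the integrated graph, then layer-wise means.
G = nx.from_pandas_adjacency(combined)
ebc = nx.edge_betweenness_centrality(G)
ebc_df = pd.DataFrame(ebc.values(), index=list(ebc.keys()),
                      columns=['edge_bet_centrality'])
spatial_edges = list(nx.from_pandas_adjacency(spatial).edges())
mobility_edges = list(nx.from_pandas_adjacency(mobility).edges())
print(ebc_df.loc[spatial_edges, 'edge_bet_centrality'].mean())   # 1/3
print(ebc_df.loc[mobility_edges, 'edge_bet_centrality'].mean())  # 1/3
```

# The integrated graph here is a symmetric 4-cycle, so every edge (and hence both layer means)
# has the same normalized betweenness of 1/3.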
A_home_activity_unweighted_dic[city][threshold].keys()
# +
# time
beginning_time = time.time()
# init
network_property_edge_centrality_dic = {}
# fixed parameters
spatial_net_name = 'queen_contiguity_adj_df'
model_type = 'lasso (no socio-demographics)'
threshold = 1.0
top_K_as_strong_mobility_ties = 50
# sampling_size = 10 # number of samples we need from each activity list (strong, weak insig, etc.)
# five layers of iteration. It is slow...
for city in ['boston','chicago','miami']:
network_property_edge_centrality_dic[city] = {}
# need to try and test if the spatial net is connected.
# if not, use only the largest component for the analysis.
spatial_net = spatial_network_dic[city][spatial_net_name]
G_spatial = nx.from_pandas_adjacency(spatial_net)
if nx.number_connected_components(G_spatial) > 1:
# for chicago and miami the spatial network is disconnected, so we keep only the giant component.
# find the giant component
Gcc = sorted(nx.connected_components(G_spatial), key=len, reverse=True)
for G_sub in Gcc:
print(len(G_sub)) # print the size of the components.
G0 = G_spatial.subgraph(Gcc[0])
giant_component_node_list = sorted(list(G0.nodes))
# replace the input shapefile and the spatial networks
spatial_network_dic[city][spatial_net_name] = spatial_network_dic[city][spatial_net_name].loc[giant_component_node_list, giant_component_node_list]
shp_dic[city] = shp_dic[city].loc[giant_component_node_list, :]
# recreate the spatial net and spatial graph
spatial_net = spatial_network_dic[city][spatial_net_name]
G_spatial = nx.from_pandas_adjacency(spatial_net)
print(city)
print("Baseline average distance: ", nx.average_shortest_path_length(G_spatial))
for idx in range(len(list(A_home_activity_unweighted_dic[city][threshold].keys()))):
activity_name = list(A_home_activity_unweighted_dic[city][threshold].keys())[idx]
network_property_edge_centrality_dic[city][activity_name] = {}
current_time = time.time()
elapse_time = current_time - beginning_time
print(idx, activity_name, elapse_time/60.0, "minutes", end = '\r')
spatial_net = spatial_network_dic[city][spatial_net_name]
mobility_net = utils.turn_df_to_adj(A_home_activity_unweighted_dic[city][threshold][activity_name], shp_dic[city])
# integrate networks and compute betweenness centralities
integrated_adj = spatial_net.add(mobility_net, fill_value = 0.0)
integrated_adj.values[integrated_adj.values > 1.0] = 1.0 # valid adj matrices
# Get the edge betweenness metrics
G = nx.from_pandas_adjacency(integrated_adj)
edge_bet_centrality_graph = nx.edge_betweenness_centrality(G) # joint centrality metrics
# turn the graph info to dataframe
edge_bet_centrality_df = pd.DataFrame(edge_bet_centrality_graph.values(),
index = list(edge_bet_centrality_graph.keys()),
columns = ['edge_bet_centrality'])
# separate the spatial, strong, and weak ties
G_spatial = nx.from_pandas_adjacency(spatial_net)
G_mobility = nx.from_pandas_adjacency(mobility_net)
spatial_edges = list(G_spatial.edges())
mobility_edges = list(G_mobility.edges())
# per city and activity: mean edge betweenness for spatial vs. mobility ties
network_property_edge_centrality_dic[city][activity_name][('spatial', activity_name)]=(edge_bet_centrality_df.loc[spatial_edges, 'edge_bet_centrality'].mean(),
edge_bet_centrality_df.loc[mobility_edges, 'edge_bet_centrality'].mean())
# -
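# The connectivity guard at the top of the loop above (test the spatial network, and restrict a
# disconnected one to its giant component) can be illustrated in isolation. The five-node
# adjacency below is hypothetical.

```python
import networkx as nx
import pandas as pd

# Two components: a triangle {a, b, c} and an isolated pair {d, e}.
nodes = list('abcde')
adj = pd.DataFrame(0.0, index=nodes, columns=nodes)
for i, j in [('a', 'b'), ('b', 'c'), ('a', 'c'), ('d', 'e')]:
    adj.loc[i, j] = adj.loc[j, i] = 1.0

G = nx.from_pandas_adjacency(adj)
if nx.number_connected_components(G) > 1:
    giant = max(nx.connected_components(G), key=len)  # largest component's node set
    keep = sorted(giant)
    adj = adj.loc[keep, keep]          # restrict the adjacency matrix
    G = nx.from_pandas_adjacency(adj)  # rebuild the graph on the giant component

print(sorted(G.nodes))  # → ['a', 'b', 'c']
```

# In the notebook the same restriction is also applied to the shapefile rows so that later
# adjacency/label lookups stay aligned.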
# ### save
with open('../../data/05_model_outputs/network_property_edge_bet_centrality_simpler.pickle', 'wb') as f:
pickle.dump(network_property_edge_centrality_dic, f)
# ### Analysis
network_property_dic[city][output_var].keys()
network_property_dic[city][output_var][('strong', 'weak_sig_neg')]
# +
# Check
city = 'boston'
output_var = 'inc_median_household_2018'
edge_bet_df_dic = {}
for activity_tuple in list(network_property_dic[city][output_var].keys()):
print(activity_tuple)
edge_bet_df_dic[activity_tuple] = pd.DataFrame(network_property_dic[city][output_var][activity_tuple].values(),
columns = ['spatial', activity_tuple[0], activity_tuple[1]],
index = network_property_dic[city][output_var][activity_tuple].keys())
# normalize by using the spatial column (Q: Is this approach correct?)
edge_bet_df_dic[activity_tuple][activity_tuple[0]] = edge_bet_df_dic[activity_tuple][activity_tuple[0]]/edge_bet_df_dic[activity_tuple]['spatial']
edge_bet_df_dic[activity_tuple][activity_tuple[1]] = edge_bet_df_dic[activity_tuple][activity_tuple[1]]/edge_bet_df_dic[activity_tuple]['spatial']
edge_bet_df_dic[activity_tuple]['spatial'] = np.ones(edge_bet_df_dic[activity_tuple].shape[0])
# edge_bet_df_dic[activity_type]['spatial'] = edge_bet_df_dic[activity_type][activity_type]/edge_bet_df_dic[activity_type]['spatial']
# print
print(edge_bet_df_dic[activity_tuple].describe())
# edge_bet_df_dic
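# The normalization in this cell (and the analogous "Check" cells that follow) divides each
# activity column by the row-wise spatial baseline and then sets the baseline itself to 1.0.
# A toy sketch with made-up numbers:

```python
import numpy as np
import pandas as pd

# Hypothetical mean edge-betweenness values per sampled network pair.
df = pd.DataFrame({'spatial': [0.020, 0.025],
                   'strong': [0.030, 0.020],
                   'weak_sig_neg': [0.010, 0.050]})

# Express each tie type relative to the spatial baseline of the same row,
# then set the baseline column itself to 1.0.
for col in ['strong', 'weak_sig_neg']:
    df[col] = df[col] / df['spatial']
df['spatial'] = np.ones(df.shape[0])
print(df)
# strong: [1.5, 0.8]; weak_sig_neg: [0.5, 2.0]; spatial: all 1.0
```

# Values above 1.0 mean those ties sit on more shortest paths than the spatial ties of the same
# sample; values below 1.0 mean fewer.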
# +
# Check
city = 'boston'
output_var = 'property_value_median_2018'
edge_bet_df_dic = {}
for activity_tuple in list(network_property_dic[city][output_var].keys()):
print(activity_tuple)
edge_bet_df_dic[activity_tuple] = pd.DataFrame(network_property_dic[city][output_var][activity_tuple].values(),
columns = ['spatial', activity_tuple[0], activity_tuple[1]],
index = network_property_dic[city][output_var][activity_tuple].keys())
# normalize by using the spatial column (Q: Is this approach correct?)
edge_bet_df_dic[activity_tuple][activity_tuple[0]] = edge_bet_df_dic[activity_tuple][activity_tuple[0]]/edge_bet_df_dic[activity_tuple]['spatial']
edge_bet_df_dic[activity_tuple][activity_tuple[1]] = edge_bet_df_dic[activity_tuple][activity_tuple[1]]/edge_bet_df_dic[activity_tuple]['spatial']
edge_bet_df_dic[activity_tuple]['spatial'] = np.ones(edge_bet_df_dic[activity_tuple].shape[0])
# edge_bet_df_dic[activity_type]['spatial'] = edge_bet_df_dic[activity_type][activity_type]/edge_bet_df_dic[activity_type]['spatial']
# print
print(edge_bet_df_dic[activity_tuple].describe())
# edge_bet_df_dic
# +
# Check
city = 'boston'
output_var = 'rent_median_2018'
edge_bet_df_dic = {}
for activity_tuple in list(network_property_dic[city][output_var].keys()):
print(activity_tuple)
edge_bet_df_dic[activity_tuple] = pd.DataFrame(network_property_dic[city][output_var][activity_tuple].values(),
columns = ['spatial', activity_tuple[0], activity_tuple[1]],
index = network_property_dic[city][output_var][activity_tuple].keys())
# normalize by using the spatial column (Q: Is this approach correct?)
edge_bet_df_dic[activity_tuple][activity_tuple[0]] = edge_bet_df_dic[activity_tuple][activity_tuple[0]]/edge_bet_df_dic[activity_tuple]['spatial']
edge_bet_df_dic[activity_tuple][activity_tuple[1]] = edge_bet_df_dic[activity_tuple][activity_tuple[1]]/edge_bet_df_dic[activity_tuple]['spatial']
edge_bet_df_dic[activity_tuple]['spatial'] = np.ones(edge_bet_df_dic[activity_tuple].shape[0])
# edge_bet_df_dic[activity_type]['spatial'] = edge_bet_df_dic[activity_type][activity_type]/edge_bet_df_dic[activity_type]['spatial']
# print
print(edge_bet_df_dic[activity_tuple].describe())
# edge_bet_df_dic
# +
# Check
city = 'miami'
# output_var = 'inc_median_household_2018'
# output_var = 'property_value_median_2018'
output_var = 'rent_median_2018'
edge_bet_df_dic = {}
for activity_tuple in list(network_property_dic[city][output_var].keys()):
print(activity_tuple)
edge_bet_df_dic[activity_tuple] = pd.DataFrame(network_property_dic[city][output_var][activity_tuple].values(),
columns = ['spatial', activity_tuple[0], activity_tuple[1]],
index = network_property_dic[city][output_var][activity_tuple].keys())
# normalize by using the spatial column (Q: Is this approach correct?)
edge_bet_df_dic[activity_tuple][activity_tuple[0]] = edge_bet_df_dic[activity_tuple][activity_tuple[0]]/edge_bet_df_dic[activity_tuple]['spatial']
edge_bet_df_dic[activity_tuple][activity_tuple[1]] = edge_bet_df_dic[activity_tuple][activity_tuple[1]]/edge_bet_df_dic[activity_tuple]['spatial']
edge_bet_df_dic[activity_tuple]['spatial'] = np.ones(edge_bet_df_dic[activity_tuple].shape[0])
# edge_bet_df_dic[activity_type]['spatial'] = edge_bet_df_dic[activity_type][activity_type]/edge_bet_df_dic[activity_type]['spatial']
# print
print(edge_bet_df_dic[activity_tuple].describe())
# -
# # Save
# repo_path: src/04_modelling/model_02_weak_ties_structural_three_cities.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Activate cell with Y key to enable module imports if not running notebook within Docker
# Deactivate cell with R key
import os
import sys
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path)
# %load_ext autoreload
# %autoreload 2
# + active=""
# # Optional
# import warnings
# warnings.filterwarnings('ignore')
# +
import pandas as pd
import numpy as np
df = pd.read_csv('../data/raw/train.csv')
dfc = df.copy()
target = dfc.loc[:, 'TARGET_5Yrs':]
dfc_labeled = dfc.loc[:, 'GP':'TARGET_5Yrs']
dfc = dfc.loc[:, 'GP':'TOV']
# +
from src.features.kpw_build_features import build
from sklearn.preprocessing import StandardScaler
dfc = build(dfc)
scaler = StandardScaler()
dfc = scaler.fit_transform(dfc)
# -
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(dfc, target, test_size=0.2, random_state=5)
X_train, X_valid, y_train, y_valid = train_test_split(X_train, y_train, test_size=0.2, random_state=10)
# +
from sklearn.linear_model import LogisticRegression
#log_reg = LogisticRegression(max_iter=8_000)
log_reg = LogisticRegression()
log_reg.fit(X_train, y_train.values.ravel())
# +
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train.values.ravel())
# +
from sklearn.ensemble import RandomForestClassifier
# Must use non-default hyperparameters to reduce overfitting
rfc = RandomForestClassifier(max_depth=5)
rfc.fit(X_train, y_train.values.ravel())
# +
from sklearn.metrics import accuracy_score, mean_squared_error as mse, mean_absolute_error as mae
from src.models.eval_ratio_accuracy import ratioAccuracy
y_mean = y_train.mean()
y_base = np.full((len(y_train), 1), y_mean)
print('RMSE:' ,mse(y_train, y_base, squared=False))
print('MAE:', mae(y_train, y_base))
print('Base accuracy:', ratioAccuracy(y_train))
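# `ratioAccuracy` is a project helper; the assumption here is that it reports the majority-class
# baseline for the binary target. A stand-alone sketch of that baseline on hypothetical labels:

```python
import numpy as np

# Hypothetical binary target: always predicting the majority class is the
# usual accuracy baseline for an imbalanced classification task.
y = np.array([1, 1, 1, 0, 1, 0, 1, 1])
majority = np.bincount(y).argmax()       # most frequent label
baseline_acc = (y == majority).mean()    # accuracy of the constant predictor
print(majority, baseline_acc)  # → 1 0.75
```

# Any tuned model below should clear this constant-predictor accuracy to be worth keeping.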
# +
from src.models.eval_model import eval_model
print('Logistic Regression:')
eval_model(log_reg, X_train, y_train, X_valid, y_valid)
print('K Nearest Neighbors:')
eval_model(knn, X_train, y_train, X_valid, y_valid)
print('Random Forest:')
eval_model(rfc, X_train, y_train, X_valid, y_valid)
# -
from hyperopt import fmin, hp, STATUS_OK, tpe, Trials
from sklearn.model_selection import cross_val_score
# +
log_reg_space = {
'C' : hp.quniform('C', 0.01, 100, 0.05),
'fit_intercept' : hp.choice('fit_intercept', [False, True]),
'multi_class': hp.choice('multi_class', ['auto', 'ovr', 'multinomial']),
'l1_ratio': hp.quniform('l1_ratio', 0.0, 1.0, 0.05)
}
def tune_log_reg(space):
log_reg_hyperopt = LogisticRegression(
penalty = 'elasticnet',
C = space['C'],
fit_intercept = space['fit_intercept'],
solver = 'saga',
max_iter = 10_000,
multi_class = space['multi_class'],
n_jobs = -1,
l1_ratio = space['l1_ratio']
)
acc = cross_val_score(log_reg_hyperopt, X_train, y_train.values.ravel(), scoring="accuracy", cv=10).mean()
return {"loss": -acc, "status": STATUS_OK}
log_reg_best = fmin(
fn = tune_log_reg,
space = log_reg_space,
algo = tpe.suggest,
max_evals = 50
)
print('Best Logistic Regression Elasticnet hyperparameter values:\n', log_reg_best)
# +
knn_space = {
'n_neighbors' : hp.quniform('n_neighbors', 1, 100, 1),
'weights' : hp.choice('weights', ['uniform', 'distance']),
'algorithm' : hp.choice('algorithm', ['ball_tree', 'kd_tree', 'brute']),
'p' : hp.choice('p', [1, 2]),
}
def tune_knn(space):
knn_hyperopt = KNeighborsClassifier(
n_neighbors = int(space['n_neighbors']),
weights = space['weights'],
algorithm = space['algorithm'],
p = space['p'],
n_jobs = -1
)
acc = cross_val_score(knn_hyperopt, X_train, y_train.values.ravel(), scoring="accuracy", cv=10).mean()
return {"loss": -acc, "status": STATUS_OK}
knn_best = fmin(
fn = tune_knn,
space = knn_space,
algo = tpe.suggest,
max_evals = 50
)
print('Best KNN hyperparameter values:\n', knn_best)
# +
rfc_space = {
'n_estimators' : hp.quniform('n_estimators', 1, 10_000, 10),
'criterion' : hp.choice('criterion', ['gini', 'entropy']),
'max_depth' : hp.quniform('max_depth', 5, 100, 5),
'max_features' : hp.choice('max_features', ['sqrt', 'log2', None])
}
def tune_rfc(space):
rfc_hyperopt = RandomForestClassifier(
n_estimators = int(space['n_estimators']),
criterion = space['criterion'],
max_depth = int(space['max_depth']),  # hp.quniform returns a float; sklearn expects an int here
max_features = space['max_features'],
n_jobs = -1
)
acc = cross_val_score(rfc_hyperopt, X_train, y_train.values.ravel(), scoring="accuracy", cv=10).mean()
return {"loss": -acc, "status": STATUS_OK}
rfc_best = fmin(
fn = tune_rfc,
space = rfc_space,
algo = tpe.suggest,
max_evals = 50
)
print('Best RFC hyperparameter values:\n', rfc_best)
# +
log_reg_l2_optimised = LogisticRegression(
penalty = 'l2',
C = 97.95,
fit_intercept = True,
solver = 'saga',
max_iter = 10_000,
multi_class = 'multinomial',
n_jobs = -1
)
log_reg_l2_optimised.fit(X_train, y_train.values.ravel())
log_reg_elastic_optimised = LogisticRegression(
penalty = 'elasticnet',
C = 1.4,
fit_intercept = True,
solver = 'saga',
max_iter = 10_000,
multi_class = 'ovr',
l1_ratio = 1.0,
n_jobs = -1
)
log_reg_elastic_optimised.fit(X_train, y_train.values.ravel())
knn_optimised = KNeighborsClassifier(
n_neighbors = 42,
weights = 'uniform',
algorithm = 'brute',
p = 2,
n_jobs = -1
)
knn_optimised.fit(X_train, y_train.values.ravel())
rfc_optimised = RandomForestClassifier(
n_estimators = 4230,
criterion = 'entropy',
max_depth = 10,
max_features = None,
n_jobs = -1
)
rfc_optimised.fit(X_train, y_train.values.ravel())
# +
print('Logistic Regression L2 Optimised:')
eval_model(log_reg_l2_optimised, X_train, y_train, X_valid, y_valid)
print('Logistic Regression Elastic Optimised:')
eval_model(log_reg_elastic_optimised, X_train, y_train, X_valid, y_valid)
print('K Nearest Neighbors Optimised:')
eval_model(knn_optimised, X_train, y_train, X_valid, y_valid)
print('Random Forest Optimised:')
eval_model(rfc_optimised, X_train, y_train, X_valid, y_valid)
# +
from src.models.save_predictions import save_predictions
test_data = pd.read_csv('../data/raw/test.csv')
test_data = test_data.loc[:, 'GP':'TOV']
test_data = build(test_data)
test_data = scaler.transform(test_data)
save_predictions('au-ron_week2_rfc.csv', rfc_optimised, test_data)
pd.read_csv('../data/predictions/au-ron_week2_rfc.csv')
# repo_path: notebooks/Au_Ron-week2_kaggle-nba.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/GoogleCloudPlatform/training-data-analyst/blob/master/courses/fast-and-lean-data-science/01_MNIST_TPU_Keras.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="xqLjB2cy5S7m" colab_type="text"
# ## MNIST on TPU (Tensor Processing Unit)<br>or GPU using tf.Keras and tf.data.Dataset
# <table><tr><td><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/keras-tensorflow-tpu300px.png" width="300" alt="Keras+Tensorflow+Cloud TPU"></td></tr></table>
#
#
# This sample trains an "MNIST" handwritten digit
# recognition model on a GPU or TPU backend using a Keras
# model. Data are handled using the tf.data.Dataset API. This is
# a very simple sample provided for educational purposes. Do
# not expect outstanding TPU performance on a dataset as
# small as MNIST.
#
# <h3><a href="https://cloud.google.com/gpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/gpu-hexagon.png" width="50"></a> Train on GPU or TPU <a href="https://cloud.google.com/tpu/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/tpu-hexagon.png" width="50"></a></h3>
#
# 1. Select a GPU or TPU backend (Runtime > Change runtime type)
# 1. Runtime > Run All (Watch out: the "Colab-only auth" cell requires user input)
#
# <h3><a href="https://cloud.google.com/ml-engine/"><img valign="middle" src="https://raw.githubusercontent.com/GoogleCloudPlatform/tensorflow-without-a-phd/master/tensorflow-rl-pong/images/mlengine-hexagon.png" width="50"></a> Deploy to ML Engine</h3>
# 1. At the bottom of this notebook you can deploy your trained model to ML Engine for a serverless, autoscaled, REST API experience. You will need a GCP project and a GCS bucket for this last part.
#
# TPUs are located in Google Cloud, for optimal performance, they read data directly from Google Cloud Storage (GCS)
# + [markdown] id="Lvo0t7XVIkWZ" colab_type="text"
# ### Parameters
# + id="cCpkS9C_H7Tl" colab_type="code" colab={}
BATCH_SIZE = 128 # On TPU, this will be the per-core batch size. A Cloud TPU has 8 cores so the global TPU batch size is 1024
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
# + [markdown] id="qpiJj8ym0v0-" colab_type="text"
# ### Imports
# + id="AoilhmYe1b5t" colab_type="code" outputId="914f12e4-ca4e-4b92-ddf5-acdf57c0b13b" colab={"base_uri": "https://localhost:8080/", "height": 34}
import os, re, math, json, shutil, pprint
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow.python.platform import tf_logging
print("Tensorflow version " + tf.__version__)
# + id="qhdz68Xm3Z4Z" colab_type="code" cellView="form" colab={}
#@title visualization utilities [RUN ME]
"""
This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.
"""
# Matplotlib config
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=0)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0')
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
unbatched_train_ds = training_dataset.apply(tf.data.experimental.unbatch())
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = unbatched_train_ds.batch(N).make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
# utility to display training and validation curves
def display_training_curves(training, validation, title, subplot):
if subplot%10==1: # set up the subplots on the first call
plt.subplots(figsize=(10,10), facecolor='#F0F0F0')
plt.tight_layout()
ax = plt.subplot(subplot)
ax.grid(linewidth=1, color='white')
ax.plot(training)
ax.plot(validation)
ax.set_title('model '+ title)
ax.set_ylabel(title)
ax.set_xlabel('epoch')
ax.legend(['train', 'valid.'])
# + [markdown] id="Lzd6Qi464PsA" colab_type="text"
# ### Colab-only auth for this notebook and the TPU
# + id="MPx0nvyUnvgT" colab_type="code" cellView="both" colab={}
IS_COLAB_BACKEND = 'COLAB_GPU' in os.environ # this is always set on Colab, the value is 0 or 1 depending on GPU presence
if IS_COLAB_BACKEND:
from google.colab import auth
auth.authenticate_user() # Authenticates the backend and also the TPU using your credentials so that they can access your private GCS buckets
# + [markdown] id="Lz1Zknfk4qCx" colab_type="text"
# ### tf.data.Dataset: parse files and prepare training and validation datasets
# Please read the [best practices for building](https://www.tensorflow.org/guide/performance/datasets) input pipelines with tf.data.Dataset
# + id="ZE8dgyPC1_6m" colab_type="code" colab={}
def read_label(tf_bytestring):
label = tf.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(-1) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# For TPU, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
# + [markdown] id="_fXo6GuvL3EB" colab_type="text"
# ### Let's have a look at the data
# + id="yZ4tjPKvL2eh" colab_type="code" outputId="11c2414a-ab78-4716-b3e1-5942a90239c6" colab={"base_uri": "https://localhost:8080/", "height": 177}
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
# + [markdown] id="KIc0oqiD40HC" colab_type="text"
# ### Keras model: 3 convolutional layers, 2 dense layers
# If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: [Tensorflow and deep learning without a PhD](https://github.com/GoogleCloudPlatform/tensorflow-without-a-phd/#featured-code-sample)
# + id="56y8UNFQIVwj" colab_type="code" outputId="9881b784-7da6-406c-8756-e1b9b71ec5c7" colab={"base_uri": "https://localhost:8080/", "height": 672}
# This model trains to 99.4% sometimes 99.5% accuracy in 10 epochs (with a batch size of 32)
model = tf.keras.Sequential(
[
tf.keras.layers.Reshape(input_shape=(28*28,), target_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(filters=6, kernel_size=3, padding='same', use_bias=False), # no bias necessary before batch norm
tf.keras.layers.BatchNormalization(scale=False, center=True), # no batch norm scaling necessary before "relu"
tf.keras.layers.Activation('relu'), # activation after batch norm
tf.keras.layers.Conv2D(filters=12, kernel_size=6, padding='same', use_bias=False, strides=2),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Conv2D(filters=24, kernel_size=6, padding='same', use_bias=False, strides=2),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(200, use_bias=False),
tf.keras.layers.BatchNormalization(scale=False, center=True),
tf.keras.layers.Activation('relu'),
tf.keras.layers.Dropout(0.5), # Dropout on dense layer only
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam', # learning rate will be set by LearningRateScheduler
loss='categorical_crossentropy',
metrics=['accuracy'])
# print model layers
model.summary()
# set up learning rate decay
lr_decay = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 0.0001 + 0.02 * math.pow(0.5, 1+epoch), verbose=True)
# + [markdown] id="aeyQ4ipM5qJ1" colab_type="text"
# ### Detect TPU, adapt model
# + id="FP961-b75EFp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 356} outputId="a2a25c9f-2f21-4391-d934-28a100e8622e"
# TPUClusterResolver() automatically detects a connected TPU on all Google's
# platforms: Colaboratory, AI Platform (ML Engine), Kubernetes and Deep Learning
# VMs created through the 'ctpu up' utility. If auto-detection is not available,
# you can pass the name of your TPU explicitly:
# tf.contrib.cluster_resolver.TPUClusterResolver('MY_TPU_NAME')
# tip: on a VM created with "ctpu up" the TPU has the same name as the VM.
try:
tpu = tf.contrib.cluster_resolver.TPUClusterResolver() # TPU detection
strategy = tf.contrib.tpu.TPUDistributionStrategy(tpu)
train_model = tf.contrib.tpu.keras_to_tpu_model(model, strategy)
except ValueError:
tpu = None
train_model = model
print("Running on GPU or CPU")
# + [markdown] id="CuhDh8ao8VyB" colab_type="text"
# ### Train and validate the model
# + id="TTwH_P-ZJ_xx" colab_type="code" outputId="a5be8502-8a51-4f68-80c7-ec2d586d200b" colab={"base_uri": "https://localhost:8080/", "height": 1162}
EPOCHS = 10
steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset
# Counting steps and batches on TPU: the tpu.keras_to_tpu_model API regards the batch size of the input dataset
# as the per-core batch size. The effective batch size is 8x more because Cloud TPUs have 8 cores. It increments
# the step by +8 every time a global batch (8 per-core batches) is processed. Therefore batch size and steps_per_epoch
# settings can stay as they are for TPU training. The training will just go faster.
# Warning: this might change in the final version of the Keras/TPU API.
if tpu:
# Work in progress: reading directly from dataset object not yet implemented
# for Keras/TPU. Keras/TPU needs a function that returns a dataset.
history = train_model.fit(training_input_fn, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_input_fn, validation_steps=1, callbacks=[lr_decay])
else:
history = train_model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1, callbacks=[lr_decay])
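# The per-core vs. global batch arithmetic described in the comment above can be sketched
# directly (assuming the 8-core Cloud TPU mentioned there):

```python
# Batch-size bookkeeping for Keras-on-TPU as described above (8 cores assumed).
BATCH_SIZE = 128                        # per-core batch size on TPU
TPU_CORES = 8
global_batch = BATCH_SIZE * TPU_CORES   # effective batch processed per step
steps_per_epoch = 60000 // BATCH_SIZE   # epoch length counted in per-core batches
print(global_batch, steps_per_epoch)    # → 1024 468
```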
# + [markdown] id="Q961axfSIMRB" colab_type="text"
# ### Visualize training and validation curves
# + id="ji4ZVkwaKQMD" colab_type="code" outputId="c9d484a5-84bc-4389-a31c-0f70a98dd01d" colab={"base_uri": "https://localhost:8080/", "height": 770}
print(history.history.keys())
display_training_curves(history.history['acc'], history.history['val_acc'], 'accuracy', 211)
display_training_curves(history.history['loss'], history.history['val_loss'], 'loss', 212)
# + [markdown] id="9jFVovcUUVs1" colab_type="text"
# ### Visualize predictions
# + id="w12OId8Mz7dF" colab_type="code" outputId="6f5a05f0-dac5-4bea-a713-32e51f85fc79" colab={"base_uri": "https://localhost:8080/", "height": 790}
# recognize digits from local fonts
probabilities = train_model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = train_model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
# + [markdown] id="5tzVi39ShrEL" colab_type="text"
# ## Deploy the trained model to ML Engine
#
# Push your trained model to production on ML Engine for a serverless, autoscaled, REST API experience.
#
# You will need a GCS bucket and a GCP project for this.
# Models deployed on ML Engine autoscale to zero if not used. There will be no ML Engine charges after you are done testing.
# Google Cloud Storage incurs charges. Empty the bucket after deployment if you want to avoid these. Once the model is deployed, the bucket is not useful anymore.
# + [markdown] id="3Y3ztMY_toCP" colab_type="text"
# ### Configuration
# + id="iAZAn7yIhqAS" colab_type="code" cellView="both" colab={}
PROJECT = "" #@param {type:"string"}
BUCKET = "gs://" #@param {type:"string", default:"jddj"}
NEW_MODEL = True #@param {type:"boolean"}
MODEL_NAME = "colabmnist" #@param {type:"string"}
MODEL_VERSION = "v0" #@param {type:"string"}
assert PROJECT, 'For this part, you need a GCP project. Head to http://console.cloud.google.com/ and create one.'
assert re.search(r'gs://.+', BUCKET), 'For this part, you need a GCS bucket. Head to http://console.cloud.google.com/storage and create one.'
# + [markdown] id="GxQTtjmdIbmN" colab_type="text"
# ### Export the model for serving from ML Engine
# + id="GOgh7Kb7SzzG" colab_type="code" colab={}
class ServingInput(tf.keras.layers.Layer):
# the important detail in this boilerplate code is "trainable=False"
def __init__(self, name, dtype, batch_input_shape=None):
super(ServingInput, self).__init__(trainable=False, name=name, dtype=dtype, batch_input_shape=batch_input_shape)
def get_config(self):
return {'batch_input_shape': self._batch_input_shape, 'dtype': self.dtype, 'name': self.name }
def call(self, inputs):
# When the deployed model is called through its REST API,
# the JSON payload is parsed automatically, transformed into
# a tensor and passed to this input layer. You can perform
# additional transformations, such as decoding JPEGs for example,
# before sending the data to your model. However, you can only
# use tf.xxxx operations.
return inputs
# little wrinkle: must copy the model from TPU to CPU manually. This is a temporary workaround.
tf_logging.set_verbosity(tf_logging.INFO)
restored_model = model
restored_model.set_weights(train_model.get_weights()) # this copies the weights from the TPU; it does nothing on GPU
tf_logging.set_verbosity(tf_logging.WARN)
# add the serving input layer
serving_model = tf.keras.Sequential()
serving_model.add(ServingInput('serving', tf.float32, (None, 28*28)))
serving_model.add(restored_model)
export_path = tf.contrib.saved_model.save_keras_model(serving_model, os.path.join(BUCKET, 'keras_export')) # export the model to your bucket
export_path = export_path.decode('utf-8')
print("Model exported to: ", export_path)
# + [markdown] id="zy3T3zk0u2J0" colab_type="text"
# ### Deploy the model
# This uses the command-line interface. You can do the same thing through the ML Engine UI at https://console.cloud.google.com/mlengine/models
#
# + id="nGv3ITiGLPL3" colab_type="code" colab={}
# Create the model
if NEW_MODEL:
# !gcloud ml-engine models create {MODEL_NAME} --project={PROJECT} --regions=us-central1
# + id="o3QtUowtOAL-" colab_type="code" colab={}
# Create a version of this model (you can add --async at the end of the line to make this call non-blocking)
# Additional config flags are available: https://cloud.google.com/ml-engine/reference/rest/v1/projects.models.versions
# You can also deploy a model that is stored locally by providing a --staging-bucket=... parameter
# !echo "Deployment takes a couple of minutes. You can watch your deployment here: https://console.cloud.google.com/mlengine/models/{MODEL_NAME}"
# !gcloud ml-engine versions create {MODEL_VERSION} --model={MODEL_NAME} --origin={export_path} --project={PROJECT} --runtime-version=1.10
# + [markdown] id="jE-k1Zn6kU2Z" colab_type="text"
# ### Test the deployed model
# Your model is now available as a REST API. Let us try to call it. The cells below use the "gcloud ml-engine"
# command line tool but any tool that can send a JSON payload to a REST endpoint will work.
# + id="zZCt0Ke2QDer" colab_type="code" colab={}
# prepare digits to send to online prediction endpoint
digits = np.concatenate((font_digits, validation_digits[:100-N]))
labels = np.concatenate((font_labels, validation_labels[:100-N]))
with open("digits.json", "w") as f:
for digit in digits:
# the format for ML Engine online predictions is: one JSON object per line
data = json.dumps({"serving_input": digit.tolist()}) # "serving_input" because the ServingInput layer was named "serving". Keras appends "_input"
f.write(data+'\n')
# + id="n6PqhQ8RQ8bp" colab_type="code" outputId="434953b5-c1b0-4964-8dcf-2119361839e9" colab={"base_uri": "https://localhost:8080/", "height": 331}
# Request online predictions from deployed model (REST API) using the "gcloud ml-engine" command line.
# predictions = !gcloud ml-engine predict --model={MODEL_NAME} --json-instances digits.json --project={PROJECT} --version {MODEL_VERSION}
print(predictions)
probabilities = np.stack([json.loads(p) for p in predictions[1:]]) # first line is the name of the input layer: drop it, parse the rest
predictions = np.argmax(probabilities, axis=1)
display_top_unrecognized(digits, predictions, labels, N, 100//N)
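The note above says any tool that can send a JSON payload to the REST endpoint will work, not just `gcloud ml-engine predict`. As a hedged sketch (the endpoint URL pattern and the idea of POSTing it yourself are assumptions, not part of this notebook), the same request can be built for the REST `:predict` method, whose body is a single `{"instances": [...]}` object rather than one JSON object per line:

```python
import json
import numpy as np

def build_predict_body(digits, input_name="serving_input"):
    # The REST :predict method takes {"instances": [...]}; each instance
    # mirrors one line of the digits.json file written above.
    instances = [{input_name: d.tolist()} for d in digits]
    return json.dumps({"instances": instances})

body = build_predict_body(np.zeros((2, 28 * 28)))
# A client would then POST `body` with an OAuth2 bearer token to (hypothetical
# values substituted from the config cell):
#   https://ml.googleapis.com/v1/projects/<PROJECT>/models/<MODEL_NAME>/versions/<MODEL_VERSION>:predict
```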
# + [markdown] id="SVY1pBg5ydH-" colab_type="text"
# ## License
# + [markdown] id="hleIN5-pcr0N" colab_type="text"
#
#
# ---
#
#
# author: <NAME><br>
# twitter: @martin_gorner
#
#
# ---
#
#
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#
# ---
#
#
# This is not an official Google product but sample code provided for an educational purpose
#
| courses/fast-and-lean-data-science/01_MNIST_TPU_Keras.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# [View in Colaboratory](https://colab.research.google.com/github/nikhiljangam/machine-learning-cheatsheet/blob/master/External_data_Drive,_Sheets,_and_Cloud_Storage.ipynb)
# + [markdown] id="7Z2jcRKwUHqV" colab_type="text"
# This notebook provides recipes for loading and saving data from external sources.
# + id="G-AE-NEHDjRp" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 1013} outputId="2e0cc48e-c93e-4ee5-ad15-7344719cd86c"
# Install the PyDrive wrapper & import libraries.
# This only needs to be done once per notebook.
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# Authenticate and create the PyDrive client.
# This only needs to be done once per notebook.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# Download a file based on its file ID.
#
# A file ID looks like: laggVyWshwcyP6kEI-y_W3P8D26sz
file_id = 'REPLACE_WITH_YOUR_FILE_ID'
downloaded = drive.CreateFile({'id': file_id})
print('Downloaded content "{}"'.format(downloaded.GetContentString()))
# + [markdown] id="eikfzi8ZT_rW" colab_type="text"
# # Local file system
# + [markdown] id="BaCkyg5CV5jF" colab_type="text"
# ## Uploading files from your local file system
#
# `files.upload` returns a dictionary of the files which were uploaded.
# The dictionary is keyed by the file name, the value is the data which was uploaded.
# + id="vz-jH8T_Uk2c" colab_type="code" colab={}
from google.colab import files
uploaded = files.upload()
for fn in uploaded.keys():
print('User uploaded file "{name}" with length {length} bytes'.format(
name=fn, length=len(uploaded[fn])))
# + [markdown] id="hauvGV4hV-Mh" colab_type="text"
# ## Downloading files to your local file system
#
# `files.download` will invoke a browser download of the file to the user's local computer.
#
# + id="p2E4EKhCWEC5" colab_type="code" colab={}
from google.colab import files
with open('example.txt', 'w') as f:
f.write('some content')
files.download('example.txt')
# + [markdown] id="c2W5A2px3doP" colab_type="text"
# # Google Drive
#
# You can access files in Drive using the [native REST API](https://developers.google.com/drive/v3/web/about-sdk) or a wrapper like [PyDrive](https://gsuitedevs.github.io/PyDrive/docs/build/html/index.html). We'll describe each, starting with PyDrive.
# + [markdown] id="7taylj9wpsA2" colab_type="text"
# ## PyDrive
#
# The example below shows 1) authentication, 2) file upload, and 3) file download. More examples are available in the [PyDrive documentation](https://gsuitedevs.github.io/PyDrive/docs/build/html/index.html)
# + id="zU5b6dlRwUQk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="60570a07-c670-4cd1-abb7-19504ee91e86"
# !pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
# 1. Authenticate and create the PyDrive client.
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
# PyDrive reference:
# https://gsuitedevs.github.io/PyDrive/docs/build/html/index.html
# 2. Create & upload a text file.
uploaded = drive.CreateFile({'title': 'Sample upload.txt'})
uploaded.SetContentString('Sample upload file content')
uploaded.Upload()
print('Uploaded file with ID {}'.format(uploaded.get('id')))
# 3. Load a file by ID and print its contents.
downloaded = drive.CreateFile({'id': uploaded.get('id')})
print('Downloaded content "{}"'.format(downloaded.GetContentString()))
# + [markdown] id="jRQ5_yMcqJiV" colab_type="text"
# ## Drive REST API
#
# The first step is to authenticate.
# + id="r-exJtdG3XwJ" colab_type="code" colab={}
from google.colab import auth
auth.authenticate_user()
# + [markdown] id="57uSvdv48bp7" colab_type="text"
# Now we can construct a Drive API client.
# + id="1aNyFO958V13" colab_type="code" colab={}
from googleapiclient.discovery import build
drive_service = build('drive', 'v3')
# + [markdown] id="eDLm7MHQEr2U" colab_type="text"
# With the client created, we can use any of the functions in the [Google Drive API reference](https://developers.google.com/drive/v3/reference/). Examples follow.
#
# + [markdown] id="bRFyEsdfBxJ9" colab_type="text"
# ## Creating a new Drive file with data from Python
# + id="F1-nafvN-NwW" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="4466d27b-edd0-4ef1-8a10-0e180a231af2"
# Create a local file to upload.
with open('/tmp/to_upload.txt', 'w') as f:
f.write('my sample file')
print('/tmp/to_upload.txt contains:')
# !cat /tmp/to_upload.txt
# + id="3Jv6jh6HEpP8" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="cfc1186f-68a1-4620-a18f-99bc6d83e82e"
# Upload the file to Drive. See:
#
# https://developers.google.com/drive/v3/reference/files/create
# https://developers.google.com/drive/v3/web/manage-uploads
from googleapiclient.http import MediaFileUpload
file_metadata = {
'name': '<NAME>',
'mimeType': 'text/plain'
}
media = MediaFileUpload('/tmp/to_upload.txt',
mimetype='text/plain',
resumable=True)
created = drive_service.files().create(body=file_metadata,
media_body=media,
fields='id').execute()
print('File ID: {}'.format(created.get('id')))
# + [markdown] id="j5VyISCKFrqU" colab_type="text"
# After executing the cell above, a new file named 'Sample file' will appear in your [drive.google.com](https://drive.google.com/) file list. Your file ID will differ since you will have created a new, distinct file from the example above.
# + [markdown] id="P3KX0Sm0E2sF" colab_type="text"
# ## Downloading data from a Drive file into Python
# + id="KHeruhacFpSU" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="dfe7154f-249c-4344-bd62-a3b294ce02b4"
# Download the file we just uploaded.
#
# Replace the assignment below with your file ID
# to download a different file.
#
# A file ID looks like: 1uBtlaggVyWshwcyP6kEI-y_W3P8D26sz
file_id = 'target_file_id'
import io
from googleapiclient.http import MediaIoBaseDownload
request = drive_service.files().get_media(fileId=file_id)
downloaded = io.BytesIO()
downloader = MediaIoBaseDownload(downloaded, request)
done = False
while done is False:
# _ is a placeholder for a progress object that we ignore.
# (Our file is small, so we skip reporting progress.)
_, done = downloader.next_chunk()
downloaded.seek(0)
print('Downloaded file contents are: {}'.format(downloaded.read()))
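Once the download loop finishes, the rewound `BytesIO` buffer can be handed to any reader, not only `.read()`. For example (a sketch with made-up CSV content, not the sample file uploaded above), pandas can parse it directly:

```python
import io
import pandas as pd

# Pretend this buffer was filled by MediaIoBaseDownload, as above.
downloaded = io.BytesIO(b"col_a,col_b\n1,2\n3,4\n")
downloaded.seek(0)  # rewind before reading, as in the download example

df = pd.read_csv(downloaded)
```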
# + [markdown] id="sOm9PFrT8mGG" colab_type="text"
# # Google Sheets
#
# Our examples below will use the existing open-source [gspread](https://github.com/burnash/gspread) library for interacting with Sheets.
#
# First, we'll install the package using `pip`.
# + id="Mwu_sWHv4jEo" colab_type="code" colab={}
# !pip install --upgrade -q gspread
# + [markdown] id="qzi9VsEqzI-o" colab_type="text"
# Next, we'll import the library, authenticate, and create the interface to sheets.
# + id="6d0xJz3VzLOo" colab_type="code" colab={}
from google.colab import auth
auth.authenticate_user()
import gspread
from oauth2client.client import GoogleCredentials
gc = gspread.authorize(GoogleCredentials.get_application_default())
# + [markdown] id="yjrZQUrt6kKj" colab_type="text"
# Below is a small set of gspread examples. Additional examples are shown on the [gspread Github page](https://github.com/burnash/gspread#more-examples).
# + [markdown] id="WgXqE02UofZG" colab_type="text"
# ## Creating a new sheet with data from Python
# + id="tnnYKhGfzGeP" colab_type="code" colab={}
sh = gc.create('A new spreadsheet')
# + [markdown] id="v9Ia9JVc6Zvk" colab_type="text"
# After executing the cell above, a new spreadsheet will be shown in your sheets list on [sheets.google.com](http://sheets.google.com/).
# + id="rkwijiH72BCm" colab_type="code" colab={}
# Open our new sheet and add some data.
worksheet = gc.open('A new spreadsheet').sheet1
cell_list = worksheet.range('A1:C2')
import random
for cell in cell_list:
cell.value = random.randint(1, 10)
worksheet.update_cells(cell_list)
# + [markdown] id="vRWjiZka9IvT" colab_type="text"
# After executing the cell above, the sheet will be populated with random numbers in the assigned range.
# + [markdown] id="k9q0pp33dckN" colab_type="text"
# ## Downloading data from a sheet into Python as a Pandas DataFrame
#
# We'll read back the data that we inserted above and convert the result into a [Pandas DataFrame](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html).
#
# (The data you observe will differ since the contents of each cell are random numbers.)
# + id="JiJVCmu3dhFa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 126} outputId="783cdfff-087b-48c4-c1aa-637045b07980"
# Open our new sheet and read some data.
worksheet = gc.open('A new spreadsheet').sheet1
# get_all_values gives a list of rows.
rows = worksheet.get_all_values()
print(rows)
# Convert to a DataFrame and render.
import pandas as pd
pd.DataFrame.from_records(rows)
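`from_records` above labels the columns 0..N-1. If the first sheet row holds column names (an assumption about your sheet's layout, not true of the random-number sheet created above), that row can be promoted to the header instead:

```python
import pandas as pd

# Hypothetical rows as returned by worksheet.get_all_values():
rows = [["name", "score"], ["ada", "10"], ["grace", "7"]]

# Promote the first row to column labels; the remaining rows become the data.
df = pd.DataFrame(rows[1:], columns=rows[0])
```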
# + [markdown] id="S7c8WYyQdh5i" colab_type="text"
# # Google Cloud Storage (GCS)
#
# We'll start by authenticating to GCS and creating the service client.
# + id="xM70QWdxeE7q" colab_type="code" colab={}
from google.colab import auth
auth.authenticate_user()
# + [markdown] id="NAM6vyXAfVUj" colab_type="text"
# ## Upload a file from Python to a GCS bucket
#
# We'll start by creating the sample file to be uploaded.
# + id="LADpx7LReOMk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 53} outputId="46db7cfa-9ad8-405b-f715-b466a3b2cb7a"
# Create a local file to upload.
with open('/tmp/to_upload.txt', 'w') as f:
f.write('my sample file')
print('/tmp/to_upload.txt contains:')
# !cat /tmp/to_upload.txt
# + [markdown] id="BCiCo3v9fwch" colab_type="text"
# Next, we'll upload the file using the `gsutil` command, which is included by default on Colab backends.
# + id="VYC5CyAbAtU7" colab_type="code" colab={}
# First, we need to set our project. Replace the assignment below
# with your project ID.
project_id = 'Your_project_ID_here'
# + id="TpnuFITI6Tzu" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="04f1dd6d-4d7e-4264-b37b-1e8f645d6d38"
# !gcloud config set project {project_id}
# + id="Bcpvh_R_6jKB" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="e1c132b8-6a5c-46db-b1fa-f5768089890c"
import uuid
# Make a unique bucket to which we'll upload the file.
# (GCS buckets are part of a single global namespace.)
bucket_name = 'colab-sample-bucket-' + str(uuid.uuid1())
# Full reference: https://cloud.google.com/storage/docs/gsutil/commands/mb
# !gsutil mb gs://{bucket_name}
# + id="L5cMl7XV65be" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 92} outputId="bb51e51d-7f5f-4e2b-935c-203b8d314115"
# Copy the file to our new bucket.
# Full reference: https://cloud.google.com/storage/docs/gsutil/commands/cp
# !gsutil cp /tmp/to_upload.txt gs://{bucket_name}/
# + id="pJGU6gX-7M-N" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="38db2bd3-0879-4a2e-8f41-c9495ba570a9"
# Finally, dump the contents of our newly copied file to make sure everything worked.
# !gsutil cat gs://{bucket_name}/to_upload.txt
# + [markdown] id="0ENMqxq25szn" colab_type="text"
# ### Using Python
# + [markdown] id="YnN-iG9y56V-" colab_type="text"
# This section demonstrates how to upload files using the native Python API rather than `gsutil`.
#
# This snippet is based on [a larger example](https://github.com/GoogleCloudPlatform/storage-file-transfer-json-python/blob/master/chunked_transfer.py) with additional uses of the API.
# + id="YsXBVQqkArHD" colab_type="code" colab={}
# The first step is to create a bucket in your cloud project.
#
# Replace the assignment below with your cloud project ID.
#
# For details on cloud projects, see:
# https://cloud.google.com/resource-manager/docs/creating-managing-projects
project_id = 'Your_project_ID_here'
# + id="YFVbF4cdhd9Y" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="ffa2a4ec-ee02-4fc2-8ed4-a4d73d04e6be"
# Authenticate to GCS.
from google.colab import auth
auth.authenticate_user()
# Create the service client.
from googleapiclient.discovery import build
gcs_service = build('storage', 'v1')
# Generate a random bucket name to which we'll upload the file.
import uuid
bucket_name = 'colab-sample-bucket' + str(uuid.uuid1())
body = {
'name': bucket_name,
# For a full list of locations, see:
# https://cloud.google.com/storage/docs/bucket-locations
'location': 'us',
}
gcs_service.buckets().insert(project=project_id, body=body).execute()
print('Done')
# + [markdown] id="ppkrR7p4mx_P" colab_type="text"
# The cell below uploads the file to our newly created bucket.
# + id="cFAq-F2af5TJ" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="d07f7059-5767-4d58-ac71-e457a76e8c07"
from googleapiclient.http import MediaFileUpload
media = MediaFileUpload('/tmp/to_upload.txt',
mimetype='text/plain',
resumable=True)
request = gcs_service.objects().insert(bucket=bucket_name,
name='to_upload.txt',
media_body=media)
response = None
while response is None:
# _ is a placeholder for a progress object that we ignore.
# (Our file is small, so we skip reporting progress.)
_, response = request.next_chunk()
print('Upload complete')
# + [markdown] id="cyqTizOtnZEf" colab_type="text"
# Once the upload has finished, the data will appear in the cloud console storage browser for your project:
#
# https://console.cloud.google.com/storage/browser?project=YOUR_PROJECT_ID_HERE
# + [markdown] id="Q2CWQGIghDux" colab_type="text"
# ## Downloading a file from GCS to Python
#
# Next, we'll download the file we just uploaded in the example above. It's as simple as reversing the order in the `gsutil cp` command.
# + id="lPdTf-6O73ll" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 110} outputId="a6da299e-00ff-42a7-f845-9a93a4ccce61"
# Download the file.
# !gsutil cp gs://{bucket_name}/to_upload.txt /tmp/gsutil_download.txt
# Print the result to make sure the transfer worked.
# !cat /tmp/gsutil_download.txt
# + [markdown] id="s6nDq8Nk7aPN" colab_type="text"
# ### Using Python
# + [markdown] id="P6aWjfTv7bit" colab_type="text"
# We repeat the download example above using the native Python API.
# + id="z1_FuDjAozF1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="ab14cf25-7b51-41c9-d88c-9a94f5c79dfc"
# Authenticate to GCS.
from google.colab import auth
auth.authenticate_user()
# Create the service client.
from googleapiclient.discovery import build
gcs_service = build('storage', 'v1')
from apiclient.http import MediaIoBaseDownload
with open('/tmp/downloaded_from_gcs.txt', 'wb') as f:
request = gcs_service.objects().get_media(bucket=bucket_name,
object='to_upload.txt')
media = MediaIoBaseDownload(f, request)
done = False
while not done:
# _ is a placeholder for a progress object that we ignore.
# (Our file is small, so we skip reporting progress.)
_, done = media.next_chunk()
print('Download complete')
# + id="DxLyhaiBpAGX" colab_type="code" colab={}
# Inspect the file we downloaded to /tmp
# !cat /tmp/downloaded_from_gcs.txt
| External_data_Drive,_Sheets,_and_Cloud_Storage.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''venv'': venv)'
# language: python
# name: python3
# ---
import torch
from boltzmanngen.model._build import model_from_config
from boltzmanngen.train.loss import Loss, LossStat
import numpy as np
from boltzmanngen.data import DataConfig, IndexBatchIterator
from boltzmanngen.distribution import Energy, GaussianMCMCSampler, BoltzmannGenerator
from boltzmanngen.utils.types import assert_numpy
import matplotlib
from matplotlib import pyplot as plt
import scipy.stats as stats
# +
def plot_energy(energy, extent=(-4., 4.), resolution=100, dim=2):
""" Plot energy functions in 2D """
xs = torch.meshgrid([torch.linspace(*extent, resolution) for _ in range(2)])
xs = torch.stack(xs, dim=-1).view(-1, 2)
xs = torch.cat([
xs,
torch.Tensor(xs.shape[0], dim - xs.shape[-1]).zero_()
], dim=-1)
us = energy.energy(xs).view(resolution, resolution)
us = torch.exp(-us)
plt.imshow(assert_numpy(us), extent=extent * 2)
plt.xlim(extent[0], extent[1])
plt.ylim(extent[0], extent[1])
del xs, us
def plot_samples(samples, weights=None, range=None):
""" Plot sample histogram in 2D """
samples = assert_numpy(samples)
h = plt.hist2d(
samples[:, 0],
samples[:, 1],
weights=assert_numpy(weights) if weights is not None else weights,
bins=100,
norm=matplotlib.colors.LogNorm(),
range=range,
)
plt.colorbar(h[3])
def plot_bg(bg, target, n_samples=10000, range=[-4., 4.], dim=2):
""" Plot target energy, bg energy and bg sample histogram"""
plt.figure(figsize=(12, 4))
plt.subplot(1, 3, 1)
plot_energy(target, extent=range, dim=dim)
plt.title("Target energy")
plt.subplot(1, 3, 2)
plot_energy(bg, extent=range, dim=dim)
plt.title("BG energy")
plt.subplot(1, 3, 3)
plot_samples(bg.sample(n_samples)["x"], range=[range, range])
plt.title("BG samples")
def plot_weighted_energy_estimate(bg: BoltzmannGenerator, target: Energy, dim: int, n_samples=10000, n_bins=100, range=[-2.5, 2.5]):
""" Plot weighed energy from samples """
result = bg.sample(n_samples)
samples, latent, dlogp = result["x"], result["z"], result["dlogp"]
log_weights = bg.log_weights(samples, latent, dlogp)
plt.figure(figsize=(12, 4))
plt.subplot(1, 3, 1)
_, bins, _ = plt.hist(assert_numpy(samples[:, 0]), histtype="step", log=True, bins=n_bins, weights=None, density=True, label="samples", range=range)
xs = torch.linspace(*range, n_bins).view(-1, 1)
xs = torch.cat([xs, torch.zeros(xs.shape[0], dim - 1)], dim=-1).view(-1, dim)
us = target.energy(xs).view(-1)
us = torch.exp(-us)
us = us / torch.sum(us * (bins[-1] - bins[0]) / n_bins)
plt.plot(xs[:, 0], us, label=r"$\log p(x)$")
plt.xlabel("$x0$")
plt.ylabel("log density")
plt.legend()
plt.title("unweighed energy")
plt.subplot(1, 3, 2)
_, bins, _ = plt.hist(assert_numpy(samples[:, 0]), histtype="step", log=True, bins=n_bins, weights=assert_numpy(log_weights.exp()), density=True, label="samples", range=range)
plt.plot(xs[:, 0], us, label=r"$\log p(x)$")
plt.xlabel("$x0$")
plt.legend()
plt.title("weighed energy")
plt.subplot(1, 3, 3)
plt.xlabel("$x0$")
plt.ylabel("$x1$")
plot_samples(samples, weights=log_weights.exp(), range=[range, range])
plt.title("weighed samples")
del result, samples, latent, dlogp, log_weights
def plot_potential(X: torch.Tensor, cbar=True, orientation='vertical', figsize=(4, 5.5), rng=[-5, 5], vmax=300):
# 2D potential
xgrid = torch.linspace(rng[0], rng[1], 100)
ygrid = torch.linspace(rng[0], rng[1], 100)
Xgrid, Ygrid = torch.meshgrid(xgrid, ygrid)
grid = torch.vstack([Xgrid.flatten(), Ygrid.flatten()]).T
E = double_well.energy(grid)
E = E.reshape((100, 100))
E = torch.min(E, torch.tensor(vmax))
plt.figure(figsize=figsize)
plt.contourf(Xgrid, Ygrid, E, 50, cmap='jet', vmax=vmax)
if cbar:
if orientation == 'horizontal':
cbar = plt.colorbar(orientation='horizontal', shrink=0.3, aspect=10, anchor=(0.5, 7.5), use_gridspec=False)#, anchor=(0, 0.5))
cbar.outline.set_linewidth(1)
cbar.outline.set_color('white')
cbar.outline.fill = False
plt.setp(plt.getp(cbar.ax.axes, 'xticklabels'), color='w')
cbar.ax.xaxis.set_tick_params(color='white')
#cbar.set_label('Energy / kT', labelpad=0, y=0.0, color='white')
else:
cbar = plt.colorbar()
cbar.set_label('Energy / kT', labelpad=-15, y=0.6)
cbar.set_ticks([0, vmax/2, vmax])
plt.scatter(X[:, 0], X[:, 1], c=range(len(X)), cmap='viridis', marker='+', s=1)
plt.xticks([rng[0], 0, rng[1]])
plt.yticks([rng[0], 0, rng[1]])
plt.xlabel('$x_1$', labelpad=0)
plt.ylabel('$x_2$', labelpad=-10)
def plot_prior(Z: torch.Tensor, cbar=True, orientation='vertical', figsize=(4, 5.5)):
# 2D potential
xgrid = torch.linspace(-5, 5, 100)
ygrid = torch.linspace(-5, 5, 100)
Xgrid, Ygrid = torch.meshgrid(xgrid, ygrid)
grid = torch.vstack([Xgrid.flatten(), Ygrid.flatten()]).T
E = torch.from_numpy(stats.multivariate_normal.pdf(grid, mean=[0, 0], cov=[1, 1]))
E = E.reshape((100, 100))
E = torch.min(E, torch.tensor(1.0))
plt.figure(figsize=figsize)
plt.contourf(Xgrid, Ygrid, E, 50, cmap='jet', vmax=torch.max(E))
if cbar:
if orientation == 'horizontal':
cbar = plt.colorbar(orientation='horizontal', shrink=0.3, aspect=10, anchor=(0.5, 7.5), use_gridspec=False)#, anchor=(0, 0.5))
cbar.outline.set_linewidth(1)
cbar.outline.set_color('white')
cbar.outline.fill = False
plt.setp(plt.getp(cbar.ax.axes, 'xticklabels'), color='w')
cbar.ax.xaxis.set_tick_params(color='white')
#cbar.set_label('Energy / kT', labelpad=0, y=0.0, color='white')
else:
cbar = plt.colorbar()
cbar.set_label('Energy / kT', labelpad=-15, y=0.6)
cbar.set_ticks([0, torch.max(E)/2, torch.max(E)])
plt.scatter(Z[:, 0], Z[:, 1], c=range(len(Z)), cmap='viridis', marker='+', s=1)
plt.xticks([-5, 0, 5])
plt.yticks([-5, 0, 5])
plt.xlabel('$x_1$', labelpad=0)
plt.ylabel('$x_2$', labelpad=-10)
def hist_weights(X, log_weights):
bins = np.linspace(-2.5, 2.5, 100 + 1)
bin_means = 0.5 * (bins[:-1] + bins[1:])
sample_x_index = np.digitize(X[:, 0], bins)
whist = np.zeros(len(bins) + 1)
for i in range(len(log_weights)):
whist[sample_x_index[i]] += np.exp(log_weights[i])
return bin_means, whist[1:-1]
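The per-sample loop in `hist_weights` can also be written without a Python loop using `np.bincount`; a sketch that mirrors the binning above (same fixed `[-2.5, 2.5]` range, same `whist[1:-1]` slice dropping the two overflow bins):

```python
import numpy as np

def hist_weights_vectorized(X, log_weights, lo=-2.5, hi=2.5, n_bins=100):
    bins = np.linspace(lo, hi, n_bins + 1)
    bin_means = 0.5 * (bins[:-1] + bins[1:])
    # np.digitize yields 0 for values below `lo` and n_bins + 1 for values
    # above `hi`, so accumulate into n_bins + 2 slots and drop both overflow
    # slots at the end, exactly as hist_weights does.
    idx = np.digitize(X[:, 0], bins)
    whist = np.bincount(idx, weights=np.exp(log_weights), minlength=n_bins + 2)
    return bin_means, whist[1:-1]
```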
def plot_network(X_left, X_transition, X_right, weight_cutoff=1e-2):
fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(16, 3.5))
plt.subplots_adjust(wspace=0.25)
# Plot X distribution
axis = axes[0]
axis.plot(X_left[:, 0], X_left[:, 1], linewidth=0, marker='.', markersize=3, color='blue')
axis.plot(X_transition[:, 0], X_transition[:, 1], linewidth=0, marker='.', markersize=3, color='orange')
axis.plot(X_right[:, 0], X_right[:, 1], linewidth=0, marker='.', markersize=3, color='red')
axis.set_xlabel('$x_1$')
axis.set_xlim(-4, 4)
axis.set_ylabel('$x_2$', labelpad=-12)
axis.set_ylim(-4, 4)
axis.set_yticks([-4, -2, 0, 2, 4])
# Plot Z distribution
axis = axes[1]
zs = []
for x in [X_left, X_transition, X_right]:
data = {
DataConfig.INPUT_KEY: x.to(device)
}
_out = model(data, inverse=True)
zs.append(_out[DataConfig.OUTPUT_KEY].detach().cpu().numpy())
for c, z in zip(['blue', 'orange', 'red'], zs):
axis.plot(z[:, 0], z[:, 1], linewidth=0, marker='.', markersize=3, color=c)
circle = plt.Circle((0, 0), radius=1.0, color='black', alpha=0.4, fill=True)
axis.add_artist(circle)
circle = plt.Circle((0, 0), radius=2.0, color='black', alpha=0.25, fill=True)
axis.add_artist(circle)
circle = plt.Circle((0, 0), radius=3.0, color='black', alpha=0.1, fill=True)
axis.add_artist(circle)
axis.set_xlabel('$z_1$')
axis.set_xlim(-4, 4)
axis.set_ylabel('$z_2$', labelpad=-12)
axis.set_ylim(-4, 4)
axis.set_yticks([-4, -2, 0, 2, 4])
del _out, zs
# Plot proposal distribution
result = bg.sample(10000)
X, log_weights = result["x"].detach().cpu(), result["log_weights"].detach().cpu()
temperature = 1.0
H, bins = np.histogram(X[:, 0], bins=np.linspace(-2.5, 2.5, 101))  # same bin edges as hist_weights, so X1 and Y1 stay aligned
bin_means = 0.5*(bins[:-1] + bins[1:])
Eh = -np.log(H) / temperature
X1, Y1 = bin_means, Eh
X1, W1 = hist_weights(X, log_weights)
axis = axes[2]
x_grid = np.linspace(-3, 3, num=200)
x_grid = np.c_[ x_grid, np.ones(len(x_grid))]
E = assert_numpy(double_well._energy(torch.from_numpy(x_grid)) / 1.0)
axis.plot(x_grid[:, 0], E, linewidth=3, color='black')  # plot against the x1 column only; x_grid has two columns
Y1 = Y1 - Y1.min() + E.min()
Inan = np.where(W1 < weight_cutoff)
Y1[Inan] = np.nan
axis.plot(X1, Y1, color='orange', linewidth=2, label='ML+KL')
axis.set_xlim(-3, 3)
axis.set_ylim(-12, 5.5)
axis.set_yticks([])
axis.set_xlabel('$x_1$')
axis.set_ylabel('Energy / kT')
plt.legend(ncol=1, loc=9, fontsize=12, frameon=False)
del result, X, log_weights
return fig, axes
def plot_transition_traj(A, B, points = 10000, show_prior = False, rng=[-5, 5], vmax=300):
X1 = A.to(device)
X2 = B.to(device)
x = torch.stack([X1, X2])
data = {
DataConfig.INPUT_KEY: x.to(device)
}
data = model(data, inverse=True)
z = data[DataConfig.OUTPUT_KEY]
z0 = torch.linspace(z[0,0].item(), z[1, 0].item(), points)
z1 = torch.linspace(z[0,1].item(), z[1, 1].item(), points)
z = torch.stack([z0, z1]).T
data = {
DataConfig.INPUT_KEY: z.to(device)
}
data = model(data)
x = data[DataConfig.OUTPUT_KEY]
R = X2 - X1
centered_x = x - X1
# scalar projection of each sample onto the A->B segment, normalized to [0, 1]
path_positions = torch.matmul(centered_x, R).div((R).pow(2).sum(0) + 1e-10)
path_evolution = torch.linspace(0.0, 1.0, points).to(device)
# for each target position along the path, keep the sample whose projection lies closest to it
data[DataConfig.OUTPUT_KEY] = x[torch.argmin(torch.abs(path_evolution[..., None] - path_positions[None, ...]), dim=1)]
loss, loss_contrib = bg._loss(pred=data, temperature=1.0, direction=DataConfig.Z_TO_X_KEY, explore=1.0)
print(loss.mean())
if show_prior:
plot_prior(Z=z.detach().cpu(), orientation="horizontal")
plot_potential(X=x.detach().cpu(), orientation="horizontal", rng=rng, vmax=vmax)
# +
config = {
"model_builders": ["InvertibleModel", "ModelJacobian"],
"num_layers": 10,
"loss_params": {
("y", "J", "zx"): [
"JKLLoss",
1.0,
{
"energy_model": "MultimodalEnergy",
"params": {
'dim': 2,
}
# "energy_model": "ShiftedDoubleWellEnergy",
# "params": {
# 'a' : 0.0,
# 'b' : -2.0,
# 'c' : 0.5,
# 'dim' : 2,
# },
},
],
("y", "J", "xz"): [
"MLLoss",
1.0,
{
"energy_model": "NormalDistribution",
"params": {
'mean': torch.tensor([0.0, 0.0]),
'cov': torch.tensor([[1.0, 0.0],[0.0, 1.0]]),
'dim' : 2,
},
},
],
("y", "J", "saddle"): [
"HessianLoss",
1.0,
{
"energy_model": "MultimodalEnergy",
"params": {
'dim': 2,
},
},
],
("y", "J", "path"): [
"PathLoss",
1.0,
{
"energy_model": "MultimodalEnergy",
"params": {
'dim': 2,
},
"sigmas": [1.0, 1.0],
"logp_eps": [1e-5, 1e-2],
"hist_volume_expansions": [0.2, 2.0],
},
],
},
"lr": 1e-4,
"epochs": 150,
"path_starting_epoch": 50,
"path_weight": 0.5,
"kll_starting_epoch": 120
}
device = "cuda:1"
model = model_from_config(config).to(device)
model.train()
loss_f, target, prior, _, _ = Loss.from_config(config)
loss_stat = LossStat().to(device)
double_well = loss_f.funcs[(DataConfig.OUTPUT_KEY, DataConfig.JACOB_KEY, DataConfig.Z_TO_X_KEY)].energy_model
target = target.to(device)
prior = prior.to(device)
bg = BoltzmannGenerator(prior, model, target, loss_f).to(device)
# -
rng = [-5.0, 5.0]
vmax = 300
plot_bg(bg, target, dim=2, range=rng)
# plot_weighted_energy_estimate(bg, target, dim=2, range=rng)
# +
init_state = torch.Tensor([[-2., -3.], [3., 3.]]) # init_state = torch.Tensor([[4., -2.], [-3., -3.]]) <- multimodal
target_sampler = GaussianMCMCSampler(target, init_state=init_state, noise_std=0.5, uniform_range=[0, 1e-1])
data = target_sampler.sample(10000)
X_left = data[data[:, 0] < 0]
X_right = data[data[:, 0] > 0]
plot_samples(data, range=[rng, rng])
# +
n_kl_samples = 1000
n_batch = 1000
batch_iter = IndexBatchIterator(len(data), n_batch)
optim = torch.optim.Adam(bg.parameters(), lr=config["lr"])
epochs = 50#config["epochs"]
path_starting_epoch = 2#config["path_starting_epoch"]
path_weight = config["path_weight"]
kll_starting_epoch = 100#config["kll_starting_epoch"]
batch_log_freq = 5
lambdas = torch.linspace(0.1, 0.0, epochs).to(device)
# -
model.train()
for epoch, lamb in enumerate(lambdas):
for it, idxs in enumerate(batch_iter):
batch = data[idxs]
optim.zero_grad()
nll = bg.energy(batch).mean()
loss_stat(nll, bg.loss_contrib)
(lamb * nll).backward()
if epoch >= kll_starting_epoch:
kll = bg.kldiv(n_kl_samples, explore=1.0).mean()
loss_stat(kll, bg.loss_contrib)
((1. - lamb) * kll).backward()
if epoch >= path_starting_epoch - 1:
left = batch[batch[:, 0] < 0]
right = batch[batch[:, 0] > 0]
x = torch.vstack([left[0], right[0]])
path = bg.path(n_kl_samples, path_weight=path_weight, x=x).mean()
if epoch >= path_starting_epoch:
loss_stat(path, bg.loss_contrib)
path.backward()
# hess = bg.saddle(n_kl_samples).mean()
# loss_stat(hess, bg.loss_contrib)
# hess.backward()
optim.step()
if it % batch_log_freq == 0:
print("\repoch: {0}, iter: {1}/{2}, lambda: {3}".format(
epoch + 1,
it,
len(batch_iter),
lamb,
), loss_stat.current_result(), end="")
mw = [
"multimodal_nll+kll_transition.pth",
"multimodal_nll+kll_no_transition.pth",
"multimodal_nll+hess_no_transition.pth",
"multimodal_nll+hess+kll_no_transition.pth",
"multimodal_nll+path_no_transition.pth",
"multimodal_nll+path+orth_no_transition.pth",
"multimodal_nll+path+orth+kll_no_transition.pth",
"multimodal_nll+path+orth+kll_no_transition_final.pth",
"holders_nll+path+orth+kll_no_transition_final.pth",
"bird_nll+kll.pth",
"bird_nll+kll+path+orth.pth",
]
model_weights = mw[8]
torch.save(model.state_dict(), model_weights)
model.load_state_dict(torch.load(model_weights))
model = model.eval()
# +
# plot_bg(bg, target, dim=2, range=[-4., 4.])
# plot_weighted_energy_estimate(bg, target, dim=2)
result = bg.sample(10000)
X, Z = result["x"], result["z"]
plot_potential(X=X.detach().cpu(), orientation="horizontal", rng=rng, vmax=vmax)
plot_prior(Z=Z.detach().cpu(), orientation="horizontal")
del result, X, Z
# +
X_transition = torch.zeros((500, 2), dtype=torch.float32)
X_transition[:, 1] = torch.randn(500)
plot_network(X_left=X_left, X_transition=X_transition, X_right=X_right)
# -
plot_transition_traj(X_left[0], X_right[0], show_prior=True, rng=rng, vmax=vmax)
for _ in range(3):
plot_transition_traj(X_left[np.random.randint(0, len(X_left))], X_right[np.random.randint(0, len(X_right))], rng=rng, vmax=vmax)
# +
X1 = torch.tensor([-3, 3.5]).to(device)
X2 = torch.tensor([3, 3.0]).to(device)
x0 = torch.linspace(X1[0].item(), X2[0].item(), 1000)
x1 = torch.linspace(X1[1].item(), X2[1].item(), 1000)
x = torch.stack([x0, x1]).T
data = {
DataConfig.INPUT_KEY: x.to(device)
}
data = model(data, inverse=True)
z = data[DataConfig.OUTPUT_KEY]
plot_prior(Z=z.detach().cpu(), orientation="horizontal")
plot_potential(X=x.detach().cpu(), orientation="horizontal", rng=rng, vmax=vmax)
# -
with open(model_weights.split(".")[0] + ".txt", 'w') as f:
f.write(str(config))
| double_well.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Load Spatial Data Frame Tutorial
#
# This is the completed sample for the [Load spatial data frame](https://developers.arcgis.com/labs/python/load-spatial-data-frame/) ArcGIS tutorial.
#
# [ArcGIS tutorials](https://developers.arcgis.com/labs/) are short guides demonstrating the three phases of building geospatial apps: Data, Design, Develop.
#
# __Note__: Please complete the [Import Data](https://developers.arcgis.com/labs/python/import-data/) tutorial if you have not done so already. You will use the output feature layer from this lab to learn how to find and share an item.
from arcgis.gis import GIS
import pandas as pd
# Log into ArcGIS Online by making a GIS connection using your developer account. Replace `username` and `password` with your own credentials.
# +
import getpass
password = getpass.getpass("Enter password, please: ")
gis = GIS('https://arcgis.com', 'username', password)
# -
# Search for the *Trailheads* layer used in the [Import Data](https://developers.arcgis.com/labs/python/import-data/) tutorial.
feature_service_srch_results = gis.content.search(query='title: "Trailheads*" AND type: "Feature Service"')
feature_service_srch_results
# Retrieve the feature service item from the list of results. Then, get the layer from that service.
feature_service_item = feature_service_srch_results[0]
feature_layer = feature_service_item.layers[0]
feature_layer
# Build the Spatial Pandas Dataframe
sdf = pd.DataFrame.spatial.from_layer(feature_layer)
sdf.head()
| labs/load_spatial_data_frame.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Illustrative exercises: the inverse transform and acceptance-rejection methods
# +
# Import the libraries used throughout the simulations
import matplotlib.pyplot as plt
import numpy as np
from itertools import cycle # Library for building cycles
import scipy.stats as st # Statistics library
from math import factorial as fac # Import the factorial operation
# %matplotlib inline
# -
# ### Illustrating the inverse transform method with the `stats` package
plt.hist(st.norm.ppf(np.random.rand(1000)), label='MTI')
plt.hist(st.norm.rvs(size=1000), label='M.RVS', alpha=0.5)
plt.legend()
# +
# Choose the distribution
name_dist = 'chi'
# Explore the getattr function: fetch the distribution object by name
dist = getattr(st, name_dist)
# Distribution parameters (tuple)
params = (4, 0, 1)
# Number of samples
N = 5000
# Dictionary of distribution arguments
args = {'df': 6, 'loc': 0, 'scale': 1}
# Generate random variates from the chosen distribution
x = dist(**args).rvs(size=1000)
# Compare histograms
# 1. Histogram of the original distribution
plt.figure(figsize=[10,5])
plt.plot(np.arange(0, 5, 0.1), dist(**args).pdf(np.arange(0, 5, 0.1)))
plt.hist(x, bins=50, density=True, label='original distribution');
# 2. Inverse transform method implemented with the 'ppf' function
U = np.random.rand(N)
f_inv = dist(**args).ppf(U)
plt.hist(f_inv, bins=50, density=True, label='inverse transform method', alpha=0.5);
plt.legend()
# -
# ## <font color ='red'> **Exercise 2**
# 1. Generating a continuous random variable
#
# $$
# h(x)=
# \begin{cases}
# 0, & x<0 \\
# x, & 0 \le x < 1 \\
# 2-x, & 1\le x \le 2 \\
# 0,& x>2
# \end{cases}
# $$
#
# Generate random samples distributed according to the given function using the inverse transform method, plot the histogram of 100 samples generated with the method, and compare it with the given function $h(x)$ in order to validate that the procedure was carried out correctly
#
# ### Inverse transform method
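# Before coding, it helps to write down the CDF and its inverse explicitly (a short derivation from the given $h(x)$; note that $2-\sqrt{4-2(1+u)}$, as used in the code, simplifies to $2-\sqrt{2(1-u)}$):
#
# $$
# H(x)=\int_0^x h(t)\,dt=
# \begin{cases}
# \frac{x^2}{2}, & 0 \le x < 1 \\
# 2x-\frac{x^2}{2}-1, & 1 \le x \le 2
# \end{cases}
# \qquad
# H^{-1}(u)=
# \begin{cases}
# \sqrt{2u}, & 0 \le u \le \frac{1}{2} \\
# 2-\sqrt{2(1-u)}, & \frac{1}{2} < u \le 1
# \end{cases}
# $$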
# +
h = lambda x: 0 if x < 0 else (x if 0 < x < 1 else (2 - x if 1 <= x <= 2 else 0))
x = np.arange(-0.5, 2.5, 0.01)
plt.plot(x, [h(xi) for xi in x])
# +
U = np.random.rand(N)
H_inv = lambda u: np.sqrt(2 * u) if 0 <= u <= 0.5 else 2 - np.sqrt(4 - 2 *(1 + u))
H_inv_values = [H_inv(ui) for ui in U]
# Validate the inverse function (plot)
h = lambda x: 0 if x < 0 else (x if 0 < x < 1 else (2 - x if 1 <= x <= 2 else 0))
plt.plot(x, [h(xi) for xi in x], label='pdf')
plt.hist(H_inv_values, bins=50, density=True, label='MTI')
# +
N = 500
# Define the cumulative distribution function
H = lambda x: 0 if x<0 else (x ** 2 / 2 if 0 <= x < 1 else (-x **2 / 2 + 2 * x -1 if 1 <= x <= 2 else 0) )
# Plot the CDF
x = np.arange(0, 2, 0.01)
plt.plot(x, [H(xi) for xi in x], label='$H(x)$')
# Define the inverse CDF
# Vector of uniform random numbers
U = np.random.rand(N)
H_inv = lambda u: np.sqrt(2 * u) if 0 <= u <= 0.5 else 2 - np.sqrt(4 - 2 *(1 + u))
# Samples generated with the inverse transform method
H_inv_values = [H_inv(ui) for ui in U]
# Validate the inverse function (plot)
# Plot the histogram of the generated samples
plt.hist(H_inv_values, bins=50, density=True, label='MTI')
# Density function h(x)
h = lambda x: 0 if x < 0 else (x if 0 < x < 1 else (2 - x if 1 <= x <= 2 else 0))
plt.plot(x, [h(xi) for xi in x], label='pdf')
plt.legend();
# -
# ### Acceptance-rejection method
# +
N = 100
# Plot the probability density h(x)
# Implement the acceptance-rejection method
# Plot the accepted points
# Store the accepted numbers in a variable and plot their histogram
# -
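# A minimal sketch of how the skeleton above can be filled in, using a uniform proposal on [0, 2] with envelope constant c = max h(x) = 1 (variable names here are illustrative):

```python
import numpy as np

# Triangular density from the exercise
h = lambda x: x if 0 <= x < 1 else (2 - x if 1 <= x <= 2 else 0)

def accept_reject(n, rng=np.random.default_rng(0)):
    """Sample n points from h via acceptance-rejection with a U(0, 2) proposal."""
    samples = []
    c = 1.0  # max of h(x), attained at x = 1
    while len(samples) < n:
        x = rng.uniform(0, 2)   # candidate from the proposal
        u = rng.uniform(0, c)   # uniform height under the envelope
        if u <= h(x):           # accept when the point falls under h
            samples.append(x)
    return np.array(samples)

xs = accept_reject(1000)
```

# Accepted points follow h exactly; the acceptance rate here is the area under h divided by the envelope area, i.e. 1/2.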
# ## Exercise 3
# Suppose you have the following probability mass function
# $$
# P(X=k) =
# \begin{cases}
# \frac{1}{3}\left( \frac{2}{3}\right)^{k-1}, & \text{if } k=1, 2, \cdots \\
# 0, & \text{otherwise}
# \end{cases}
# $$
# +
N = 700
# PMF p(x)
p = lambda k: (1 / 3) * (2 / 3) ** (k-1)
# Plot of the pmf
k = np.arange(1, 20)
plt.plot(k, p(k), 'r*')
# Discrete acceptance-rejection method
max_p = p(1)
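# As a cross-check for this exercise: the pmf above is the geometric distribution with success probability p = 1/3, so the discrete inverse transform also has a closed form (a hedged sketch, not part of the original exercise):

```python
import numpy as np

# P(X = k) = (1/3) * (2/3)**(k-1) is Geometric(p = 1/3); its inverse CDF is
# k = ceil(ln(1 - u) / ln(1 - p)) for u ~ U(0, 1).
def geometric_inverse_transform(n, p=1/3, rng=np.random.default_rng(0)):
    u = rng.uniform(size=n)
    k = np.ceil(np.log(1 - u) / np.log(1 - p)).astype(int)
    return np.maximum(k, 1)  # guard the measure-zero case u == 0

ks = geometric_inverse_transform(5000)
```

# The empirical mean should be close to 1/p = 3, and the empirical P(X = 1) close to 1/3, which can be compared against the acceptance-rejection results.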
| TEMA-2/Clase11_Ejercicios_MTI_MAR.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import os
import glob
import json
import sox  # needed by sox.Combiner below
# #### To create power chords, we need information about sample pitch (note height/frequency). We need to read annotations csv file to map sample pitch to sample filename.
# +
# reading pitch information
annot=pd.read_csv("/home/stjepan/file_meta.csv")
annot2=annot[["audioFileName", "pitch"]]
annot2.head(3)
# renaming filenames to include pitch information
for i in range(0, annot2.shape[0]):
filename = str(annot2.iat[i,0])
pitch = str(annot2.iat[i,1])
for file in (glob.glob('*.wav')):
if file == filename:
os.rename(filename, filename[:-4]+"_"+pitch+".wav")
# -
# #### Powerchord = sample1(pitch 1)+sample2(pitch1+7)+sample3(pitch1 +12) - mixed together
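# The +7 and +12 offsets are semitone steps in 12-tone equal temperament (a perfect fifth and an octave). A small sketch of the underlying frequency relationship — the base frequency below is illustrative, not taken from the dataset:

```python
# In 12-tone equal temperament, n semitones above a base pitch multiplies
# the frequency by 2 ** (n / 12).
def semitones_up(f0, n):
    return f0 * 2 ** (n / 12)

base = 110.0                     # e.g. A2, purely illustrative
fifth = semitones_up(base, 7)    # perfect fifth, ratio ~1.4983
octave = semitones_up(base, 12)  # octave, ratio exactly 2
```

# Mixing the three samples therefore superimposes frequencies in roughly a 1 : 1.5 : 2 ratio, which is what gives a power chord its sound.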
# +
# creating json file containing power chord filenames
test_path = "/home/stjepan/guitar_data/"
f_ix = lambda x: int(x.split('/')[-1].strip('.wav')[-2:])
files = glob.glob(test_path + '*.wav')
notes = list(map(f_ix, files))
power_chords = {}
p = lambda f: f.split('/')[-1]
get_index = lambda x: files[int(str(notes.index(x)))]
for ix, base in enumerate(files):
#print(f_ix(base))
try:
fifth = p(get_index(f_ix(base) + 7))
octave = p(get_index(f_ix(base) + 12))
power_chords[p(base)] = [fifth, octave]
except Exception as e:
print (e)
continue
json.dump(fp = open('power_chords_.json', 'w'), obj = power_chords)
json_data =json.load(fp = open('power_chords_.json'))
# -
# #### Reading filenames from json to create powerchords with sox.Combiner
df=pd.read_json("power_chords_.json")
df=df.T
df = df.reset_index()
PATH = test_path  # assumed: the chord samples live in the same directory used above
for i in range(0, df.shape[0]):
base = PATH+str(df.iat[i,0])
fifth = PATH+str(df.iat[i,1])
octave = PATH+str(df.iat[i,2])
cbn = sox.Combiner()
cbn.build([base, fifth, octave], PATH+'pwrchrd_'+str(i)+'.wav', "mix" )
| notebooks/exploration/Power_chrods_creation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''base'': conda)'
# name: python385jvsc74a57bd0dce69896fdb445434427c12e791455610f9ef8e6bb07ea975426634cd43b3db3
# ---
# +
import pandas as pd
import numpy as np
import pickle
import scipy
import matplotlib.pyplot as plt
import joblib
import warnings
# seaborn
import seaborn as sns
# Sk learn model
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import cross_validate
from sklearn.model_selection import train_test_split
# load data provided by school
from load_data import *
# +
""" use ingredients as features only """
# preprocessing
arr_x = df_train['ingredients'].to_numpy()
for i in range(len(arr_x)):
arr_x[i] = str(arr_x[i]).replace("[", "").replace("]", "").replace(",", "").replace("'", "").split(" ")
ingrs = list(vocab_ingr_dict_train.keys())
mlb = MultiLabelBinarizer(classes = ingrs)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
X = mlb.fit_transform(arr_x)
y = df_train['duration_label'].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
# +
""" use steps as features only """
# preprocessing
arr_x = df_train['steps'].to_numpy()
for i in range(len(arr_x)):
arr_x[i] = str(arr_x[i]).replace("[", "").replace("]", "").replace(",", "").replace("'", "").split(" ")
ingrs = list(vocab_steps_dict_train.keys())
mlb = MultiLabelBinarizer(classes = ingrs)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
X = mlb.fit_transform(arr_x)
y = df_train['duration_label'].to_numpy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
# +
from sklearn.linear_model import LogisticRegression
lg = LogisticRegression(random_state=0, max_iter=1000)
lg.fit(X_train, y_train)
lg.score(X_test,y_test)
# -
| test/ztom-001.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Dataset Guide (incl. Integration to new API)
# With Delira v0.3.2 a new dataset API was introduced to allow for more
# flexibility and add some features. This notebook shows the difference
# between the new and the old API and provides some examples for newly added
# features.
# ## Overview Old API
# The old dataset API was based on the assumption that the underlying structure
# of the data can be described as followed:
# * root
# * sample1
# * img1
# * img2
# * label
# * sample2
# * img1
# * img2
# * label
# * ...
#
#
# A single sample was constructed from multiple images which are all located in
# the same subdirectory.
# The corresponding signature of the `AbstractDataset` was given by
# `data_path, load_fn, img_extensions, gt_extensions`.
# While most datasets need a `load_fn` to load a single sample and a
# `data_path` to the root directory, `img_extensions` and `gt_extensions`
# were often unused.
# As a consequence, a new dataset needed to be created which initialises the
# unused variables with arbitrary values.
# ## Overview New API
# The new dataset API was refactored to a more general approach where only a
# `data_path` to the root directory and a `load_fn` for a single sample need
# to be provided.
# A simple loading function (`load_fn`) to generate random data independent
# from the given path might be realized as below.
# +
import numpy as np
def load_random_data(path: str) -> dict:
"""Load random data
Parameters
----------
path : str
path to sample (not used in this example)
Returns
-------
dict
return data inside a dict
"""
return {
'data': np.random.rand(3, 512, 512),
'label': np.random.randint(0, 10),
'path': path,
}
# -
# When used with the provided BaseDatasets, the return value of the load
# function is not limited to dictionaries and might be of any type which can be
# added to a list with the `append` method.
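# To make the cached-vs-lazy trade-off concrete, here is a minimal pure-Python stand-in (these classes are illustrative sketches, not delira's actual implementation):

```python
class MiniCacheDataset:
    """Call load_fn for every path up front and keep the results in memory."""
    def __init__(self, paths, load_fn):
        self.data = [load_fn(p) for p in paths]  # all loading cost paid here
    def __getitem__(self, i):
        return self.data[i]                      # pure lookup afterwards

class MiniLazyDataset:
    """Store only the paths; load_fn runs on every access."""
    def __init__(self, paths, load_fn):
        self.paths, self.load_fn = list(paths), load_fn
    def __getitem__(self, i):
        return self.load_fn(self.paths[i])       # loading cost paid per access

calls = []
load = lambda p: (calls.append(p), p)[1]         # record every load
cached = MiniCacheDataset(range(3), load)        # three loads happen immediately
n_after_init = len(calls)
lazy = MiniLazyDataset(range(3), load)           # no loads yet
first = lazy[0]                                  # one load happens now
```

# The same trade-off applies to the real classes: caching pays the full loading cost (and memory) at construction time, laziness spreads it over training.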
# ### New Datasets
# Some basic datasets are already implemented inside Delira and should be
# suitable for most cases. The `BaseCacheDataset` saves all samples inside the
# RAM and thus can only be used if everything fits inside the memory.
# `BaseLazyDataset` loads the individual samples just in time when they are needed,
# but might lead to slower training due to the additional loading time.
# +
from delira.data_loading import BaseCacheDataset, BaseLazyDataset
# because `load_random_data` does not use the path argument, they can have
# arbitrary values in this example
paths = list(range(10))
# create cached dataset
cached_set = BaseCacheDataset(paths, load_random_data)
# create lazy dataset
lazy_set = BaseLazyDataset(paths, load_random_data)
# print cached data
print(cached_set[0].keys())
# print lazy data
print(lazy_set[0].keys())
# -
# In the above example a list of multiple paths is used as the `data_path`.
# `load_fn` is called for every element inside the provided list (can be any
# iterator). If `data_path` is a single string, it is assumed to be the path
# to the root directory. In this case, `load_fn` is called for every element
# inside the root directory.
# Sometimes, a single file/folder contains multiple samples.
# `BaseExtendCacheDataset` uses the `extend` function to add elements to the
# internal list. Thus it is assumed that `load_fn` provides an iterable object,
# where each item represents a single data sample.
# `AbstractDataset` is now iterable and can be used directly in combination
# with for loops.
for cs in cached_set:
print(cs["path"])
# ## New Utility Function (Integration to new API)
# The behavior of the old API can be replicated with the `LoadSample`,
# `LoadSampleLabel`functions. `LoadSample` assumes that all needed images and
# the label (for a single sample) are located in a directory. Both functions
# return a dictionary containing the loaded data.
# `sample_ext` maps keys to iterables. Each iterable defines the names of the
# images which should be loaded from the directory. `sample_fn` is used to load
# the images which are then stacked inside a single array.
# +
from delira.data_loading import LoadSample, LoadSampleLabel
def load_random_array(path: str):
"""Return random data
Parameters
----------
path : str
path to image
Returns
-------
np.ndarray
loaded data
"""
return np.random.rand(128, 128)
# define the function to load a single sample from a directory
load_fn = LoadSample(
sample_ext={
# load 3 data channels
'data': ['red.png', 'green.png', 'blue.png'],
# load a singel segmentation channel
'seg': ['seg.png']
},
sample_fn=load_random_array,
# optionally: assign individual keys a datatype
dtype={"data": "float", "seg": "uint8"},
# optionally: normalize individual samples
normalize=["data"])
# Note: in general the function should be called with the path of the
# directory where the images are located
sample0 = load_fn(".")
print("data shape: {}".format(sample0["data"].shape))
print("segmentation shape: {}".format(sample0["seg"].shape))
print("data type: {}".format(sample0["data"].dtype))
print("segmentation type: {}".format(sample0["seg"].dtype))
print("data min value: {}".format(sample0["data"].min()))
print("data max value: {}".format(sample0["data"].max()))
# -
# By default the range is normalized to (-1, 1), but `norm_fn` can be
# changed to achieve other normalization schemes. Some examples are included
# in `delira.data_loading.load_utils`.
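# A custom normalization can be any array-to-array function; below is a hedged numpy sketch of the default (-1, 1) scaling next to a (0, 1) variant (function names are illustrative, not delira's API):

```python
import numpy as np

def norm_minus_one_one(x):
    """Scale an array linearly to the range (-1, 1)."""
    x = x.astype(float)
    return 2 * (x - x.min()) / (x.max() - x.min()) - 1

def norm_zero_one(x):
    """Scale an array linearly to the range (0, 1)."""
    x = x.astype(float)
    return (x - x.min()) / (x.max() - x.min())

img = np.arange(9).reshape(3, 3)  # toy "image" with values 0..8
```

# Either function could be plugged in wherever a normalization callable is accepted; the choice mainly depends on the activation functions of the downstream network.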
#
# `LoadSampleLabel` takes an additional argument for the label and a function
# to load a label. This functions can be used in combination with the provided
# BaseDatasets to replicate (and extend) the old API.
| notebooks/tutorial_dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Imports
# +
import pandas as pd
import sys
sys.path.insert(0,'../satori')
from postprocess import *
# -
# ### Load the interaction results
# +
# For SATORI based interactions
#df_A = pd.read_csv('../results/TAL-GATA_binaryFeat_Analysis_customTFs_euclidean_v8/Interactions_SATORI/interactions_summary_attnLimit-0.12.txt',sep='\t')
#df_A = pd.read_csv('../results/TAL-GATA_binaryFeat_Analysis_allTFs_euclidean_v8/Interactions_SATORI/interactions_summary_attnLimit-0.12.txt',sep='\t')
#df_A = pd.read_csv('../../TAL-GATA_binaryFeat_Analysis_euclidean_v8/Interactions_Results_v9_run1_1000/interactions_summary_attnLimit-0.12.txt',sep='\t')
df_A = pd.read_csv('../results/TAL-GATA_binaryFeat_Analysis_allTFs_euclidean_v8_from_customTFs/Interactions_SATORI/interactions_summary_attnLimit-0.08.txt',sep='\t')
# For FIS based interactions
#df_B = pd.read_csv('../results/TAL-GATA_binaryFeat_Analysis_customTFs_euclidean_v8/Interactions_FIS/interactions_summary_attnLimit-0.txt',sep='\t')
#df_B = pd.read_csv('../results/TAL-GATA_binaryFeat_Analysis_allTFs_euclidean_v8/Interactions_FIS/interactions_summary_attnLimit-0.txt',sep='\t')
#df_B = pd.read_csv('../../DFIM_TAL-GATA-allTFs_experiment/Interactions_v10_1000/interactions_summary_attnLimit-0.txt',sep='\t')
df_B = pd.read_csv('../results/TAL-GATA_binaryFeat_Analysis_allTFs_euclidean_v8_from_customTFs/Interactions_FIS/interactions_summary_attnLimit-0.3.txt',sep='\t')
# -
# ### Load the annotation file
df_annotate = pd.read_csv('../../../Basset_Splicing_IR-iDiffIR/Analysis_For_none_network-typeB_lotus_posThresh-0.60/MEME_analysis/Homo_sapiens_2019_01_14_4_17_pm/TF_Information_all_motifs.txt',sep='\t')
# ### Pre-process the interactions
ATTN = preprocess_for_comparison(df_A, annotation_df=df_annotate)
DFIM = preprocess_for_comparison(df_B, annotation_df=df_annotate)
ATTN.shape, DFIM.shape
# ### Individual interactions analysis
# #### Get unique interactions per method and their intersection
ATTN_unique, DFIM_unique, intersected = get_comparison_stats(DFIM, ATTN, intr_type='TF_Interaction')
print(f"DFIM: {len(set(DFIM_unique.keys()))}, SATORI: {len(set(ATTN_unique.keys()))}, Common: {len(intersected)}")
# #### Comparison plot: individual interactions
# The order of the arguments determine which method (used for inferring interactions) is being compared to the other
df_res = common_interaction_stats(DFIM_unique, ATTN_unique)
plot_interaction_comparison(df_res, first_n=15, xlabel='TF interaction', store_pdf_path='output/talgata_satori-vs-fis_TFs.pdf', fig_size=(9,6))
# ### Family interactions analysis
# #### Get unique family interactions per method and their intersection
ATTN_unique, DFIM_unique, intersected = get_comparison_stats(DFIM, ATTN, intr_type='Family_Interaction')
print(f"DFIM: {len(set(DFIM_unique.keys()))}, SATORI: {len(set(ATTN_unique.keys()))}, Common: {len(intersected)}")
# #### Comparison plot: family interactions
# The order of the arguments determine which method (used for inferring interactions) is being compared to the other
df_res = common_interaction_stats(DFIM_unique, ATTN_unique)
plot_interaction_comparison(df_res, first_n=10, xlabel='Family interaction', store_pdf_path='output/talgata_satori-vs-fis_Fams.pdf', fig_size=(9,6))
| analysis/DFIM-vs-ATTN_TAL-GATA.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="BH5X7qoVEMXe"
#Standard imports
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import pickle
# + id="oFewHMpuKpcE"
#Model Evaluation libraries
from sklearn.model_selection import train_test_split
# + id="GmXhSxh5M4oD"
#Machine Learning libraries
from sklearn.linear_model import LinearRegression
#Performance validation libraries
from sklearn.metrics import mean_squared_error
# + colab={"base_uri": "https://localhost:8080/", "height": 231} id="Ugi0YT8AFA3B" outputId="6be39269-7d6e-4135-ac7a-e3208be7ced2"
#Reading the .csv file of the data
df_titanic = pd.read_csv("titanic.csv")
df_titanic.head()
# + colab={"base_uri": "https://localhost:8080/"} id="0utt5qSWFfeX" outputId="21723b45-0d21-4e1a-a636-a89921dbf255"
#DATA CLEANING
#Finding the missing values in the data
df_titanic.isna().sum()
# + id="XqsbnhZgFsNB"
#Name and Ticket are unique for each passenger
df_titanic.drop(["Name", "Ticket"], axis=1, inplace=True)
# + id="PEMQh-RDGoor"
#More than 77% missing values
df_titanic.drop(["Cabin"], axis=1, inplace=True)
# + id="RwPhwOT2HGD-"
#Filling the missing values by the mode of the data
df_titanic.Embarked = df_titanic.Embarked.fillna(value='S')
# + id="Ni1XN5XkHOgq"
#Filling the missing values by the mean age
mean = df_titanic["Age"].mean()
df_titanic.Age = df_titanic.Age.fillna(value=mean)
# + colab={"base_uri": "https://localhost:8080/"} id="9m7lHy1LHkXz" outputId="790da89b-5c2e-46f3-85c0-dcd84f740303"
#Cleaned Data
df_titanic.isna().sum()
# + colab={"base_uri": "https://localhost:8080/", "height": 406} id="6KELJ1zOILDr" outputId="1dee5f0b-72ca-4064-a372-dc1c0711f234"
df_titanic
# + id="xrrbg5NkI-S1"
#Categorizing the data
numerical_columns = ['Age', 'Fare']
categorical_columns = ['Pclass', 'Sex', 'SibSp', 'Parch', 'Embarked']
# + id="UO5vnS6hKh22"
#Converting sex to a numerical attribute
df_titanic['Sex'] = df_titanic['Sex'].map({'male': 0, 'female': 1})
# + id="GsKSyfLtPXAJ"
#Converting Embarked to a numerical attribute
for i in range(len(df_titanic.Embarked)):
    if(df_titanic['Embarked'][i] == 'S'):
        df_titanic['Embarked'][i] = 0
    elif(df_titanic['Embarked'][i] == 'C'):
        df_titanic['Embarked'][i] = 1
    else:
        df_titanic['Embarked'][i] = 2
# + id="uAyfH-JJXvBJ"
#Converting Age from float to int
df_titanic["Age"] = df_titanic["Age"].astype(int)
# + id="B1UR1TkIZD3p"
#Converting Fare from float to int
df_titanic["Fare"] = df_titanic["Fare"].astype(int)
# + colab={"base_uri": "https://localhost:8080/", "height": 197} id="Lq53HivSLoem" outputId="8ffa620d-70a1-4da6-c84f-b1a8d1b28ddb"
df_titanic.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 406} id="lrgeUfF5L-yq" outputId="eadc4f5e-0fa7-4e08-edbb-81a325d338c5"
#Selecting the features of the data
X = df_titanic.drop(['Survived'], axis=1)
X
# + colab={"base_uri": "https://localhost:8080/"} id="HreCsplWMNZA" outputId="af91ed0e-d792-4f66-c016-dcec15d4c08f"
#Selecting the target class of the data
y = df_titanic.Survived
y
# + id="TEyNQI4LMabO"
#Performing a train test split on the data
X_train, X_test, y_train, y_test = train_test_split(X,y, random_state=40)
# + id="OYIYcQKPMk7I"
#Model Build and fitting
reg = LinearRegression().fit(X_train, y_train)
# + id="9ebo0KVNNAMc"
#Model Predictions
y_pred = reg.predict(X_test)
y_pred[y_pred > 0.5] = 1
y_pred[y_pred < 0.5] = 0
# + colab={"base_uri": "https://localhost:8080/"} id="SJMGElG4W_1x" outputId="b4834d6c-0b85-466d-831d-56700c14d277"
#Model Validation
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# + id="sodOr2o6VNjb"
| ML_Pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import torch
import torch.utils.data
import torch.nn as nn
import torch.optim as optim
import torchdiffeq
from tensorboard_utils import Tensorboard
from tensorboard_utils import tensorboard_event_accumulator
import transformer.Constants as Constants
from transformer.Layers import EncoderLayer, DecoderLayer
from transformer.Modules import ScaledDotProductAttention
from transformer.Models import Decoder, get_attn_key_pad_mask, get_non_pad_mask, get_sinusoid_encoding_table
from transformer.SubLayers import PositionwiseFeedForward
import dataset
import model_process
import checkpoints
from node_transformer import NodeTransformer
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
# #%matplotlib notebook
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
print("Torch Version", torch.__version__)
# %load_ext autoreload
# %autoreload 2
# -
seed = 1
torch.manual_seed(seed)
device = torch.device("cuda")
print("device", device)
data = torch.load("/home/mandubian/datasets/multi30k/multi30k.atok.low.pt")
max_token_seq_len = data['settings'].max_token_seq_len
print(max_token_seq_len)
train_loader, val_loader = dataset.prepare_dataloaders(data, batch_size=16)
# ### Create an experiment with a name and a unique ID
exp_name = "transformer_6_layers_multi30k"
unique_id = "2019-06-07_1000"
# ### Create Model
model = None
# +
src_vocab_sz = train_loader.dataset.src_vocab_size
print("src_vocab_sz", src_vocab_sz)
tgt_vocab_sz = train_loader.dataset.tgt_vocab_size
print("tgt_vocab_sz", tgt_vocab_sz)
if model:
del model
model = NodeTransformer(
n_src_vocab=max(src_vocab_sz, tgt_vocab_sz),
n_tgt_vocab=max(src_vocab_sz, tgt_vocab_sz),
len_max_seq=max_token_seq_len,
n_layers=6,
#emb_src_tgt_weight_sharing=False,
#d_word_vec=128, d_model=128, d_inner=512,
n_head=8, method='dopri5-ext', rtol=1e-3, atol=1e-3,
has_node_encoder=False, has_node_decoder=False)
model = model.to(device)
# -
# ### Create basic optimizer
optimizer = optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.995), eps=1e-9)
# ### Restore best checkpoint (to restart past training)
# +
state = checkpoints.restore_best_checkpoint(
exp_name, unique_id, "validation", model, optimizer)
print("accuracy", state["acc"])
print("loss", state["loss"])
model = model.to(device)
# -
fst = next(iter(val_loader))
print(fst)
en = ' '.join([val_loader.dataset.src_idx2word[idx] for idx in fst[0][0].numpy()])
ge = ' '.join([val_loader.dataset.tgt_idx2word[idx] for idx in fst[2][0].numpy()])
print(en)
print(ge)
# +
timesteps = np.linspace(0., 1, num=6)
timesteps = torch.from_numpy(timesteps).float().to(device)
qs = fst[0]
qs_pos = fst[1]
resp = model_process.predict_single(qs, qs_pos, model, timesteps, device, max_token_seq_len)
# -
idx = 5
print("score", resp[idx]["score"])
en = ' '.join([val_loader.dataset.src_idx2word[idx] for idx in qs[idx].cpu().numpy()])
ge = ' '.join([val_loader.dataset.tgt_idx2word[idx] for idx in resp[idx]["resp"]])
print("[EN]", en)
print("[GE]", ge)
# +
import itertools
import codecs
timesteps = np.linspace(0., 1, num=6)
timesteps = torch.from_numpy(timesteps).float().to(device)
resps = []
f = codecs.open(f"{exp_name}_{unique_id}_prediction.txt","w+", "utf-8")
def cb(batch_idx, batch, all_hyp, all_scores):
for i, idx_seqs in enumerate(all_hyp):
for j, idx_seq in enumerate(idx_seqs):
s = all_scores[i][j].cpu().item()
b = batch[0][i].cpu().numpy()
b = list(filter(lambda x: x != Constants.BOS and x!=Constants.EOS and x!=Constants.PAD, b))
idx_seq = list(filter(lambda x: x != Constants.BOS and x!=Constants.EOS and x!=Constants.PAD, idx_seq))
en = ' '.join([val_loader.dataset.src_idx2word[idx] for idx in b])
ge = ' '.join([val_loader.dataset.tgt_idx2word[idx] for idx in idx_seq])
resps.append({"en":en, "ge":ge, "score":s})
f.write(ge + "\n")
resp = model_process.predict_dataset(val_loader, model, timesteps, device,
cb, max_token_seq_len)
f.close()
| node-transformer-deprecated/transformer-6-layers-predict-v0.1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Identifying elements that are not the field
#
#
# To identify the elements belonging to the field, every pixel that was not green was discarded
# +
import cv2
import os
import numpy as np

# HSV bounds for the green pitch (defined here so this cell runs on its own;
# the next cell defines the same bounds)
upper_green = np.array([70, 255, 255])
lower_green = np.array([40, 40, 40])

vidcap = cv2.VideoCapture('../videos/video6.mp4')
success,image = vidcap.read()
success = True
while success:
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_green, upper_green)
res = cv2.bitwise_and(image, image, mask=mask)
cv2.imshow('partida',res)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
success,image = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
# -
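For intuition, `cv2.inRange` keeps a pixel only when every HSV channel lies inside the given bounds. A pure-Python sketch of that per-pixel test, using the same green bounds as the cell above (the sample pixel values are illustrative):

```python
LOWER_GREEN = (40, 40, 40)      # H, S, V lower bounds
UPPER_GREEN = (70, 255, 255)    # H, S, V upper bounds

def in_range(pixel, lower, upper):
    """Mimic cv2.inRange for a single HSV pixel: 255 if inside the box, else 0."""
    inside = all(lo <= ch <= hi for ch, lo, hi in zip(pixel, lower, upper))
    return 255 if inside else 0

grass = (55, 200, 180)   # a typical pitch-green pixel
shirt = (0, 200, 200)    # a red-shirt pixel
print(in_range(grass, LOWER_GREEN, UPPER_GREEN))  # 255
print(in_range(shirt, LOWER_GREEN, UPPER_GREEN))  # 0
```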
# # Identifying each team's players
#
# * To identify the players of each team, the shirt color was considered
# * Non-matching pixels were set to black, and the number of black pixels was counted to make the identification
# +
import cv2
import os
import time
import numpy as np
vidcap = cv2.VideoCapture('../videos/video6.mp4')
success,image = vidcap.read()
success = True
upper_green = np.array([70,255, 255])
lower_green = np.array([40,40, 40])
lower_blue = np.array([110,50,50])
upper_blue = np.array([130,255,255])
lower_red = np.array([0,31,255])
upper_red = np.array([176,255,255])
lower_white = np.array([0,0,0])
upper_white = np.array([0,0,255])
lower_yellow = np.array([20, 100, 100])
upper_yellow = np.array([30, 255, 255])
lower_black = np.array([20, 100, 100])
upper_black = np.array([180, 255, 30])
def detect_player(player_img, low_color, upper_color):
player_hsv = cv2.cvtColor(player_img,cv2.COLOR_BGR2HSV)
mask = cv2.inRange(player_hsv, low_color, upper_color)
res = cv2.bitwise_and(player_img, player_img, mask=mask)
res = cv2.cvtColor(res,cv2.COLOR_BGR2GRAY)
nzCount = cv2.countNonZero(res)
return nzCount
while success:
time.sleep(0.01)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_green, upper_green)
res = cv2.bitwise_and(image, image, mask=mask)
res_gray = cv2.cvtColor(res,cv2.COLOR_BGR2GRAY)
contours, _ = cv2.findContours(res_gray ,cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
x,y,w,h = cv2.boundingRect(c)
if(h>=(1.5)*w):
if(w>15 and h>= 15):
player_img = image[y:y+h,x:x+w]
defensor = detect_player(player_img, lower_red, upper_red)
atacante = detect_player(player_img, lower_white, upper_white)
if(defensor >= 20):
cv2.putText(image, 'Defensor', (x-2, y-2), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,0,0), 2, cv2.LINE_AA)
cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),3)
if(atacante>=20):
cv2.putText(image, 'Atacante', (x-2, y-2), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0,0,255), 2, cv2.LINE_AA)
cv2.rectangle(image,(x,y),(x+w,y+h),(0,0,255),3)
cv2.imshow('partida',image)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
success,image = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
# -
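The `detect_player` helper above classifies a crop by counting how many pixels survive a color mask. The same idea stripped down to plain Python over a list of HSV pixels (the threshold and sample pixels are illustrative, not the notebook's real values):

```python
def count_in_range(pixels, lower, upper):
    """Count pixels whose three HSV channels all fall inside [lower, upper]."""
    return sum(
        1 for p in pixels
        if all(lo <= ch <= hi for ch, lo, hi in zip(p, lower, upper))
    )

# Same red bounds as the cell above.
LOWER_RED, UPPER_RED = (0, 31, 255), (176, 255, 255)

crop = [(0, 100, 255), (10, 200, 255), (55, 200, 180), (60, 180, 170)]
red_count = count_in_range(crop, LOWER_RED, UPPER_RED)
label = "Defensor" if red_count >= 2 else "unknown"  # notebook uses >= 20 on real crops
print(red_count, label)  # 2 Defensor
```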
# # Identifying players without static colors
# * Based on the histogram
# +
import cv2
import os
import numpy as np
import time
vidcap = cv2.VideoCapture('../videos/video9.mp4')
success,image = vidcap.read()
success = True
# HSV bounds for the green pitch (needed by the masking step below)
upper_green = np.array([70, 255, 255])
lower_green = np.array([40, 40, 40])
CURRENT = None
while success:
time.sleep(0.01)
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, lower_green, upper_green)
res = cv2.bitwise_and(image, image, mask=mask)
res_gray = cv2.cvtColor(res,cv2.COLOR_BGR2GRAY)
contours, _ = cv2.findContours(res_gray ,cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
x,y,w,h = cv2.boundingRect(c)
if(h>=(1.5)*w):
if(w>15 and h>= 15):
player_img = image[y:y+h,x:x+w]
if(CURRENT is None):
hist = cv2.calcHist([player_img],[0],None,[256],[0,256])
CURRENT = hist
if(CURRENT is not None):
hist = cv2.calcHist([player_img],[0],None,[256],[0,256])
result = cv2.compareHist( hist, CURRENT, cv2.HISTCMP_BHATTACHARYYA)
CURRENT = hist
if(result < 0.49):
cv2.putText(image, 'Defensor', (x-2, y-2), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (255,0,0), 2, cv2.LINE_AA)
cv2.rectangle(image,(x,y),(x+w,y+h),(255,0,0),3)
else:
cv2.putText(image, 'Atacante', (x-2, y-2), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0,0,255), 2, cv2.LINE_AA)
cv2.rectangle(image,(x,y),(x+w,y+h),(0,0,255),3)
cv2.imshow('partida',image)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
success,image = vidcap.read()
vidcap.release()
cv2.destroyAllWindows()
# -
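`cv2.compareHist` with `HISTCMP_BHATTACHARYYA` returns 0 for identical histograms and 1 for fully disjoint ones, which is why the cell above treats a small distance as "same player type". A pure-Python version of OpenCV's formula, written from my reading of the OpenCV documentation, so treat it as a sketch:

```python
import math

def bhattacharyya(h1, h2):
    """Bhattacharyya distance as defined by OpenCV's HISTCMP_BHATTACHARYYA."""
    n = len(h1)
    mean1 = sum(h1) / n
    mean2 = sum(h2) / n
    s = sum(math.sqrt(a * b) for a, b in zip(h1, h2))
    # Guard against tiny negative values from floating-point error.
    return math.sqrt(max(0.0, 1.0 - s / math.sqrt(mean1 * mean2 * n * n)))

same = [4.0, 1.0, 3.0]
print(round(bhattacharyya(same, same), 6))     # 0.0
print(bhattacharyya([1.0, 0.0], [0.0, 1.0]))   # 1.0
```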
| player_detection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Appointment Optimization
# ***
# <br></br>
# Please refer to MH4702_Project.pdf for the complete problem description and results.
#
# In this notebook we simulate a clinic operation with one doctor as the server. The system can be modeled as a single-server queue with exponentially distributed service times and fixed (deterministic) inter-arrival times (D/M/1).
#
# required packages :
# - pandas
# - numpy
# - bayes_opt (https://github.com/fmfn/BayesianOptimization)
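The D/M/1 system described above also has a known analytical answer from GI/M/1 theory, which is a useful sanity check on the simulation. Stated here from memory, so verify against a queueing text: with service rate μ and deterministic inter-arrival time D, the mean wait is σ/(μ(1−σ)), where σ is the root in (0, 1) of σ = e^(−μD(1−σ)). A quick fixed-point iteration:

```python
import math

def dm1_sigma_and_wait(mu, D, tol=1e-12):
    """Solve sigma = exp(-mu*D*(1-sigma)) by fixed-point iteration, then mean wait."""
    assert mu * D > 1, "queue must be stable (arrival rate 1/D below service rate mu)"
    sigma = 0.5
    for _ in range(10_000):
        new = math.exp(-mu * D * (1.0 - sigma))
        if abs(new - sigma) < tol:
            sigma = new
            break
        sigma = new
    mean_wait = sigma / (mu * (1.0 - sigma))
    return sigma, mean_wait

# Default notebook parameters: service_rate = 3, schedule_interval = 0.5.
sigma, w = dm1_sigma_and_wait(3.0, 0.5)
print(round(sigma, 4), round(w, 4))
```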
# +
import pandas as pd
import numpy as np
import math
import warnings
from functools import partial
from bayes_opt import BayesianOptimization
warnings.simplefilter("ignore")
np.random.seed(5)
rand_unif = np.random.uniform
# -
# ## Simulation
# ***
#
# Below are the descriptions of function parameters used in the simulation : <br>
# - service_rate : exponentially distributed service rate of the doctor.
# - schedule_interval : patient inter-arrival time.
# - opening_hours : clinic opening hours. The clinic cannot accept new appointments beyond its opening hours.
# - n_days : simulation duration in days. The simulation is generated on a daily basis. <br>
# - no_show_prob : probability that a given patient will not appear at his or her appointment time.
#
#
# The first customer arrives at time schedule_interval.
#
# Assumptions :
# - Patients arrive on time or not at all.
# - Patients are willing to wait indefinitely until served by the doctor.
#
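The service times below are drawn with `-log(1-U)/service_rate`, which is inverse-transform sampling of an Exponential(service_rate) variable. A quick, seeded sanity check of that identity in plain Python:

```python
import math
import random

def exp_sample(rate, rng):
    """Inverse-transform sample: if U ~ Uniform(0,1), -ln(1-U)/rate ~ Exp(rate)."""
    return -math.log(1.0 - rng.random()) / rate

rng = random.Random(5)
rate = 3.0
samples = [exp_sample(rate, rng) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 3))  # should be close to 1/rate = 0.333
```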
def simulate(service_rate = 3 , schedule_interval = 0.5 , opening_hours = 8 , n_days = 100 , no_show_prob = 0.0) :
columns = [
'Inter-arrival Time',
'Arrival Time per Day',
'Show up' ,
'Service Starting Time',
'Service Time',
'Service Ending Time',
'Waiting Time',
'Sojourn Time',
'Doctors Idle Time',
'Number of People in the System',
'Number of People Waiting',
]
cust_per_day = math.floor(opening_hours/schedule_interval)
df_dict = {col : list() for col in columns}
for i in range(n_days) :
#### First Customer of the day
inter_time = [schedule_interval]
arrive_time = [schedule_interval]
###if random number bigger than no show probability mark as show up
show_up = [1 if rand_unif() > no_show_prob else 0]
service_s_time = [schedule_interval]
##If show up do random roll for service time otherwise 0
service_time = [-np.log(1-rand_unif())/service_rate if show_up[0] else 0]
service_n_time = [service_s_time[0] + service_time[0]]
wait_time = [service_s_time[0] - arrive_time[0]]
sojourn_time = [service_n_time[0] - arrive_time[0]]
doctors_idle = [0]
n_people_sys = [1]
n_people_wait = [max([n_people_sys[0] - 1 , 0])]
for i in range(1, cust_per_day) :
inter_time.append(schedule_interval)
arrive_time.append(schedule_interval + arrive_time[i-1])
if rand_unif() > no_show_prob :
show_up.append(1)
else :
show_up.append(0)
service_s_time.append(max([service_n_time[i-1] , arrive_time[i]]))
if show_up[i] :
service_time.append(-np.log(1-rand_unif())/service_rate)
else :
service_time.append(0)
service_n_time.append(service_s_time[i] + service_time[i])
if show_up[i] :
wait_time.append(service_s_time[i] - arrive_time[i])
else :
wait_time.append(0)
sojourn_time.append(service_n_time[i] - arrive_time[i])
doctors_idle.append(max([0 , arrive_time[i] - service_n_time[i-1]]))
n_people_sys.append(sum((np.array(service_n_time[:i]) > arrive_time[i])*show_up[:i]) + show_up[i])
n_people_wait.append(max([n_people_sys[i] - 1, 0]))
df_dict['Inter-arrival Time'].extend(inter_time)
df_dict['Arrival Time per Day'].extend(arrive_time)
df_dict['Show up'].extend(show_up)
df_dict['Service Starting Time'].extend(service_s_time)
df_dict['Service Time'].extend(service_time)
df_dict['Service Ending Time'].extend(service_n_time)
df_dict['Waiting Time'].extend(wait_time)
df_dict['Sojourn Time'].extend(sojourn_time)
df_dict['Doctors Idle Time'].extend(doctors_idle)
df_dict['Number of People in the System'].extend(n_people_sys)
df_dict['Number of People Waiting'].extend(n_people_wait)
return pd.DataFrame(df_dict)
# ### Simulation Sample
#
simulate(n_days = 1)
# To optimize the system, we define a scenario. Our optimization objective is profit maximization.
#
# We will split profit parts into income and cost :
#
# <b>Income</b> :
# - Each patient who arrives at the clinic yields a fixed income of 600. <br>
# - A patient who does not appear at his or her appointment yields an income of 0 (a sunk cost).
#
# <b>Cost</b> :
# - Doctor hiring cost is assumed to be $(1000 + 1800 \times service\_rate) \times \frac{opening\_hours}{8} \times n\_days$
# - The doctor is paid overtime at $\frac{1}{5}$ of his daily wage per unit of overtime
# - Operations cost will cover any other cost incurred by the clinic (utilities, rental, etc)
#
# <b>Variables to optimize</b> :
# - service_rate (default (unoptimized) = 3)
# - schedule_interval (default = 0.5)
# - opening_hours (optional) (default = 8)
#
# To simplify, we will not apply any constraints beyond the limits of our search space.
#
# For demonstration purposes we perform the optimization with no-show probabilities of 0 and 0.2, and compare the results before and after optimization.
#
# Each simulate_scheme call performs a simulation and returns the profit over an n_days time window.
#
# +
def income(show_up , fee = 500) :
    # Calculate the income based on the number of patients who show up.
return fee * show_up.sum()
def operation_cost(over_time , over_time_multiplier , hiring_cost, op_hour , op_h_cost = 200) :
over_time_cost = (over_time*over_time_multiplier).sum()
op_cost = 0
for op in op_hour :
op_cost += op_h_cost * op
return over_time_cost + op_cost + hiring_cost
def simulate_scheme(service_rate , schedule_interval , opening_hours ,
op_h_cost = 200 , no_show_prob = 0.2 , fee = 400 , n_days = 60) :
#### Simulate a scheme with given configuration of service rate schedule interval and opening hour.
#### This simulation will return the profit for a given simulated period.
    #### op_h_cost : hourly operational cost for the clinic.
    #### no_show_prob : probability that a patient does not show up; such patients are not chargeable.
    #### n_days : number of days each simulation is evaluated over. A higher n_days yields a more stable result.
    #### hiring_cost : the cost of the doctor.
    #### *Assumption : a doctor with higher skill serves faster and can hence handle more patients.
    #### Define the cost for the doctor's skill; the doctor's daily pay is scaled according to his work hours,
#### using 8 as base working hour
hiring_cost = (1000 + 1800*(service_rate))*(opening_hours/8)*n_days
#### Assume the doctor is paid for 1/5 of his wage if he is doing overtime
over_time_multiplier = hiring_cost/(n_days*5.0)
#### simulate the data for n_days
ref = simulate(service_rate , schedule_interval , opening_hours , n_days = n_days , no_show_prob = no_show_prob)
    #### How long the doctor works overtime. Values can be negative if he leaves earlier
    #### than his designated work hours end
ot_count = (ref[ref['Arrival Time per Day'] == ref['Arrival Time per Day'].max()]['Service Ending Time'] \
- opening_hours - schedule_interval).values
    #### The function below implies that if the doctor finishes serving his last patient
    #### before his designated work hours end, he can leave early. He is only entitled to overtime
    #### pay if he works past his hours. This is the rectified linear unit (ReLU),
    #### a function often used in machine learning
relu = lambda x : max(x,0)
overtime = np.array(list(map(relu , ot_count)))
op_hour = np.array(ot_count + opening_hours)
inc = income(ref['Show up'] , fee)
cst = operation_cost(overtime , over_time_multiplier , hiring_cost , op_hour , op_h_cost)
return inc - cst
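The `relu` mapping used inside `simulate_scheme` clips negative "overtime" (the doctor leaving early) to zero before costing it. Spelled out on a small example, with an illustrative pay multiplier:

```python
def relu(x):
    """Overtime is only paid when positive; leaving early earns nothing back."""
    return max(x, 0)

ot_count = [-0.8, 0.0, 1.2, 0.3]   # hours past closing per day (can be negative)
overtime = [relu(x) for x in ot_count]
over_time_multiplier = 40.0        # illustrative pay per unit of overtime
print(overtime)                                          # [0, 0.0, 1.2, 0.3]
print(sum(x * over_time_multiplier for x in overtime))   # 60.0
```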
# +
optimize_op_hour = False
print('Single Server Optimization ...')
base_parameter = {
'service_rate' : 3 ,
'schedule_interval' : 0.5
}
if optimize_op_hour :
base_parameter['opening_hours'] = 8
optimized = {}
print()
for no_show in [0,0.2] :
print('With %.3f no show probability.' % no_show )
fixed_params = {
'op_h_cost' : 300 ,
'no_show_prob' : no_show,
'fee' : 600 ,
'n_days' : 30 ,
}
if not optimize_op_hour :
fixed_params['opening_hours'] = 8
print()
print('Base Parameters : ')
for key in base_parameter.keys() :
print('%s : %.3f' %(key ,base_parameter[key]))
print()
print('Fixed Parameters :')
for key in fixed_params.keys() :
print('%s : %.3f' % (key , fixed_params[key]))
print()
#### Simulate the scheme for 100 months to assess the clinic's profit.
print('Before Optimization :')
print()
sim = simulate(**base_parameter , n_days = 30 , opening_hours = 8 , no_show_prob = no_show)
# if no_show > 0 :
# sim.to_csv('Unoptimized with Probability.csv' , index = False)
# else :
# sim.to_csv('Unoptimized without Probability.csv' , index = False)
simulate_scheme_fixed = partial(simulate_scheme , **fixed_params)
res = []
for i in range(100) :
res.append(simulate_scheme_fixed(**base_parameter))
print('Monthly Profit Mean : %.3f' % np.mean(res))
print('Monthly Profit Variance : %.3f' % np.var(res))
print('Monthly Profit std : %.3f' % np.std(res))
mean_service_time = sim[sim['Show up'] == 1]['Service Time'].mean()
var_service_time = sim[sim['Show up'] == 1]['Service Time'].var()
std_service_time = sim[sim['Show up'] == 1]['Service Time'].std()
print()
search_space = {'service_rate' : (1.0,5.0) ,
'schedule_interval' : (0.2,1.0)}
if optimize_op_hour :
search_space['opening_hours'] = (6.0,10.0)
bo = BayesianOptimization(simulate_scheme_fixed ,
search_space,
random_state = 10,
verbose = 0)
bo.maximize(init_points = 5 , n_iter = 50)
optimized[no_show] = bo.res['max']['max_params']
print('After Optimization :')
print()
print('Best Parameters :')
for key in optimized[no_show].keys() :
print('%s : %.3f' %(key , optimized[no_show][key]))
print()
res = []
for i in range(100) :
res.append(simulate_scheme_fixed(**optimized[no_show]))
print('Monthly Profit Mean : %.3f' % np.mean(res))
print('Monthly Profit Variance : %.3f' % np.var(res))
print('Monthly Profit std : %.3f' % np.std(res))
avg_idle = sim['Doctors Idle Time'].mean()
var_idle = sim['Doctors Idle Time'].var()
std_idle = sim['Doctors Idle Time'].std()
avg_waiting = sim['Waiting Time'].mean()
var_waiting = sim['Waiting Time'].var()
std_waiting = sim['Waiting Time'].std()
print()
if optimize_op_hour :
opening_hours = optimized[no_show]['opening_hours']
else :
opening_hours = fixed_params['opening_hours']
#### Assess the queuing system performance in 30 days.
print('Queueing performance :')
sim = simulate(**optimized[no_show] , n_days = 30 , opening_hours = 8 , no_show_prob = no_show)
maxval = sim[['Arrival Time per Day']].max().values[0]
avg_overtime = np.mean([max(num , 0) for num in (sim[sim['Arrival Time per Day'] == maxval]['Service Ending Time'] - opening_hours - optimized[no_show]['schedule_interval']).values])
var_overtime = np.var([max(num , 0) for num in (sim[sim['Arrival Time per Day'] == maxval]['Service Ending Time'] - opening_hours - optimized[no_show]['schedule_interval']).values])
std_overtime = np.std([max(num , 0) for num in (sim[sim['Arrival Time per Day'] == maxval]['Service Ending Time'] - opening_hours - optimized[no_show]['schedule_interval']).values])
print('Mean Service Time : %.3f' % mean_service_time)
print('Variance Service Time : %.3f' % var_service_time)
print('Standard Deviation Service Time :%.3f' % std_service_time)
print()
print('Average Waiting Time : %.3f' % avg_waiting)
print('Variance Waiting Time : %.3f' % var_waiting)
print('Standard Deviation Waiting Time : %.3f' % std_waiting)
print()
print('Average Idle Time : %.3f' % avg_idle)
print('Variance Idle Time : %.3f'% var_idle)
print('Standard Deviation Idle Time : %.3f'% std_idle)
print()
print('Average Overtime: %.3f ' % avg_overtime)
print('Variance Overtime : %.3f' % var_overtime)
print('Standard Deviation Overtime : %.3f' % std_overtime)
print()
print('Average Show up : %.3f' % sim['Show up'].mean())
print('Variance Show up : %.3f' % sim['Show up'].var())
print('Standard Deviation Show up : %.3f' % sim['Show up'].std())
print('=======================================================================')
print()
# if no_show > 0 :
# sim.to_csv('Optimized with No Show.csv' , index = False)
# else :
# sim.to_csv('Optimized without No Show.csv' , index = False)
# -
| Demo Notebook.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] button=false new_sheet=false run_control={"read_only": false} slideshow={"slide_type": "slide"}
# # Part V: Abstracting Failures
#
# In this part, we show how to determine abstract failure conditions.
#
# * [Statistical Debugging](StatisticalDebugger.ipynb) is a technique to determine events during execution that _correlate with failure_ – for instance, the coverage of individual code locations.
#
# * [Mining Function Specifications](DynamicInvariants.ipynb) shows how to learn preconditions and postconditions from given executions.
#
# * In [Generalizing Failure Circumstances](DDSetDebugger.ipynb), we discuss how to automatically determine the (input) conditions under which the failure occurs.
| docs/notebooks/05_Abstracting.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.6 64-bit (''vela'': pipenv)'
# language: python
# name: python37664bitvelapipenvde09592071074af6a70ce3b1ce38af95
# ---
# # 20-06-17: Daily Data Practice
# ---
#
# ## Daily Practices
#
# * Practice with DS/ML tools and processes
# * [fast.ai course](https://course.fast.ai/)
# * HackerRank SQL or Packt SQL Data Analytics
# * Hands-on ML | NLP In Action | Dive Into Deep Learning | Coursera / guided projects
# * Read, code along, take notes
# * _test yourself on the concepts_ — i.e. do all the chapter exercises
# * Try to hit benchmark accuracies with [UCI ML datasets](https://archive.ics.uci.edu/ml/index.php) or Kaggle
# * Meta Data: Review and write
# * Focus on a topic, review notes and resources, write a blog post about it
# * 2-Hour Job Search
# * Interviewing
# * Behavioral questions and scenarios
# * Business case walk-throughs
# * Hot-seat DS-related topics for recall practice (under pressure)
# ---
#
# ## Meta Data: Review and Write
#
# > Focus on a topic or project, learn/review the concepts, write (a blog post) about it
# Finishing up with the final polish on each of my artifacts. I started out with a few edits to the FYI page copy I wrote yesterday, then put the finishing touches on the new version of my resume.
#
# Now, I'm updating my LinkedIn description and anything else that needs a final look over.
#
# Here is my new LinkedIn "About" section:
# ### About
#
# I'm a data scientist and engineer, software and machine learning engineer, writer, and musician currently living in Denver, Colorado. I hold a BS in Economics from Cal Poly, SLO, where I also competed as a Division I springboard diver.
#
# I've worked on a number of teams to develop useful and usable applications, such as a subreddit recommendation system and an object detection API that classifies over 70 types of waste (see my Projects section below). Building things and working with people is how I learn best, and I'm always looking for opportunities to do so.
#
# I relish any opportunity to solve a problem, simple or complex, and always hold myself and those around me to a high standard.
#
# As a child I was encouraged to teach myself everything I wanted to learn and to entertain myself without a TV (we didn’t have one). Contrary to what a surprising number of people believe, these two areas of life go hand in hand. Learning was my entertainment—and still is.
#
# My unique upbringing instilled in me what I believe is the most important lesson of all: how to be a lifelong learner. For me, learning did not start nor end with my academic education; life is a continuous learning process. I have a deep passion not only for the challenge of learning new skills and studying new disciplines, but also for practicing and refining those I already have.
#
# From anchoring the news to analyzing millions of data points; from training and performing as a vocalist to developing a market for wall-climbing robots; from shooting and editing video to writing technical essays on topics such as the economics of real estate or photosynthetic production of plastic; from surfing trips in exotic locations around the world to flying all over the country on a weekly basis implementing enterprise-level software; from designing, printing, and selling t-shirts to saving enough to try out semi-retirement for a couple years—I am very proud of the variety of academic, professional, and life experiences I’ve had.
#
# In case you're curious, you can find out more about me by visiting my site: tobias.fyi.
#
# Hope to see you there!
# ---
#
# ## DS + ML Practice
#
# * Pick a dataset and try to do X with it
# * Try to hit benchmark accuracies with [UCI ML datasets](https://archive.ics.uci.edu/ml/index.php) or Kaggle
# * Practice with the common DS/ML tools and processes
# * Hands-on ML | NLP In Action | Dive Into Deep Learning
# * Machine learning flashcards
#
# #### _The goal is to be comfortable explaining the entire process._
#
# * Data access / sourcing, cleaning
# * Exploratory data analysis
# * Data wrangling techniques and processes
# * Inference
# * Statistics
# * Probability
# * Visualization
# * Modeling
# * Implement + justify choice of model / algorithm
# * Track performance + justify choice of metrics
# * Communicate results as relevant to the goal
# ### Image Dataset
#
# Decided to spend a little time working on a new project. I'm going through the fast.ai course, and the first assignment is to train a classifier. My idea, which I've barely started working on, is to train a model that can recognize different exercises—e.g. push-ups, pull-ups, sit-ups.
#
# Looked through the [thread about image dataset gathering](https://forums.fast.ai/t/tips-for-building-large-image-datasets/26688/34) and found some good resources for making this first step of gathering the images a little quicker and easier.
#
# * [More](https://forums.fast.ai/t/generating-image-datasets-quickly/19079)
# * [Threads](https://forums.fast.ai/t/how-to-scrape-the-web-for-images/7446/8)
#
# I determined what packages I want to use to scrape the images. I'm going to start out by trying the [fastclass](https://github.com/cwerner/fastclass) package written by someone in the thread. If that doesn't work well I'll try the official fast.ai tutorial on the subject.
# ---
#
# ## Coding
#
# > Work through practice problems on HackerRank or similar
# ### Python
# ### SQL
#
# ---
#
# ## Interviewing
#
# > Practice answering the most common behavioral and technical interview questions
#
# ### Technical
#
# * Business case walk-throughs
# * Hot-seat DS-related topics for recall practice (under pressure)
# + [markdown] jupyter={"source_hidden": true}
# ### Behavioral
#
# * "Tell me a bit about yourself"
# * "Tell me about a project you've worked on and are proud of"
# * "What do you know about our company?"
# * "Where do you see yourself in 3-5 years?"
# * "Why do you want to work here / want this job?"
# * "What makes you most qualified for this role?"
# * "What is your greatest strength/weakness?"
# * "What is your greatest technical strength?"
# * "Tell me about a time when you had conflict with someone and how you handled it"
# * "Tell me about a mistake you made and how you handled it"
# * Scenario questions (STAR: situation, task, action, result)
# * Success story / biggest accomplishment
# * Greatest challenge (overcome)
# * Persuaded someone who did not agree with you
# * Dealt with and resolved a conflict (among team members)
# * Led a team / showed leadership skills or aptitude
# * How you've dealt with stress / stressful situations
# * Most difficult problem encountered in previous job; how you solved it
# * Solved a problem creatively
# * Exceeded expectations to get a job done
# * Showed initiative
# * Something that's not on your resume
# * Example of important goal you set and how you reached it
# * A time you failed
# * "Do you have any questions for me?"
# * What is your favorite aspect of working here?
# * What has your journey looked like at the company?
# * What are some challenges you face in your position?
# -
#
# ---
#
# ## 2-Hour Job Search
#
| ds/practice/daily_practice/20-06/20-06-17-daily.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: tf-gpu
# language: python
# name: tf-gpu
# ---
# # Post-process
#
# We can join all the prediction results into one VRT
# !gdalbuildvrt ./data/data_results_tiles_800_800.vrt ./data/data_result/800_800/*.tif
# ### Removing water zones
#
# The burn area index is also sensitive to water bodies. Thus, one option to remove these areas from the prediction is to apply a mask, which can be built using the NDWI index.
#
# For this example, we use "water-zone/w07small.tif" file. It has to have the same size and resolution as the prediction image.
#
#
# !mkdir -p ./data/water-zone/
# !wget https://storage.googleapis.com/dym-datasets-public/fires-bariloche/water-zone/w07small.tif -O ./data/water-zone/w07small.tif
# We can also apply a threshold to the prediction
# +
results = "./data/data_results_tiles_800_800.vrt"
water_mask = "./data/water-zone/w07small.tif"
# !gdal_calc.py -A $results -B $water_mask --calc="(A*(1-B)>205)*A*255" --outfile=./data/results_up205.tif
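The `gdal_calc.py` expression `(A*(1-B)>205)*A*255` is plain per-pixel arithmetic: zero out water pixels (B = 1), keep only predictions above 205, and scale the survivors. A pure-Python rendering of the same expression, on illustrative pixel values:

```python
def postprocess(a, b, threshold=205):
    """Apply the water mask b (1 = water) and the threshold to prediction value a."""
    keep = a * (1 - b) > threshold
    return a * 255 if keep else 0

pixels = [(250, 0), (250, 1), (100, 0)]   # (prediction, water_mask) pairs
print([postprocess(a, b) for a, b in pixels])  # [63750, 0, 0]
```

Note that the expression scales the surviving prediction value by 255 rather than writing a flat 255; whether that is the intended output encoding is worth double-checking.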
| 4_Post-process.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/juanignaciorey/Best-README-Template/blob/master/README_md.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + cellView="form" id="07OGwGY_kFpf"
MODE = "MOUNT" #@param ["MOUNT", "UNMOUNT"]
#Mount your Gdrive!
from google.colab import drive
drive.mount._DEBUG = False
if MODE == "MOUNT":
drive.mount('/content/drive', force_remount=True)
elif MODE == "UNMOUNT":
try:
drive.flush_and_unmount()
except ValueError:
pass
get_ipython().system_raw("rm -rf /root/.config/Google/DriveFS")
# + cellView="form" id="WTHJrdkDntCV"
DRIVE_FOLDER = "content/drive/MyDrive/proyectos/" #@param {type:"string"}
DRIVE_FOLDER = "../../../../../../../" + DRIVE_FOLDER
# %mkdir --parents $DRIVE_FOLDER
GIT_USER = "juanignaciorey" #@param {type:"string"}
GIT_PROYECT = "colab-proyect-template" #@param {type:"string"}
GIT_REPO = "https://github.com/{user}/{proyect}.git".format(user=GIT_USER, proyect=GIT_PROYECT)
# !git clone $GIT_REPO
# %cd $GIT_PROYECT
import sys
sys.path.append(GIT_PROYECT)
# %pwd
# + [markdown] id="addn0NIok<PASSWORD>"
# ### Settings after installing WordPress
# <header>Settings</header>
# <blockquote>
# Permalinks
# <li>Post name</li>
# <li>or custom: /category/postname</li>
#
# Reading
# <li>Discourage search engines from indexing</li>
#
# Media
# <li>Uncheck "organize uploads into month- and year-based folders"</li>
# </blockquote>
#
# POSTS
# <blockquote>
# <li>Delete the default posts</li>
# <li>Create a Legal Notice page</li>
# <li>Create a DMCA page</li>
# </blockquote>
#
# PLUGINS
# <blockquote>
# <li>Delete the default plugins</li>
# <li>WP Favs key: <KEY></li>
# <li>Create a DMCA page</li>
# </blockquote>
#
# TODO: Add and upload to WP Favs
# <blockquote>
# <li>Super Progressive Web Apps -> requires flex theme and SSL</li><li>
# OneSignal – Web Push Notifications</li><li>
# Schema App Structured Data</li><li>
# AMP (AMP Project Contributors)</li>
# </blockquote>
| README_md.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 0.4.5
# language: julia
# name: julia-0.4
# ---
using Gadfly
using ColorBrewer
using DataFrames
df = readtable("../data/bigiron_scan_rep_space_a496065_clean.csv");
grouped_info = by(df, [:method, :measure, :genetype, :screen, :representation, :bottleneck_representation, :seq_depth, :crisprtype]) do grouped_df
n = size(grouped_df, 1)
mean_score = clamp(mean(grouped_df[:score]), 0, 1)
std_score = std(grouped_df[:score])
conf_int = 1.96 * std_score./sqrt(n)
DataFrame(
std_score = std_score,
mean_score = mean_score,
score_max = mean_score + conf_int,
score_min = mean_score - conf_int,
n = n
)
end
rename!(grouped_info, Dict(:representation => :transfection_representation,
:seq_depth => :sequencing_representation))
n_steps = length(unique(grouped_info[:transfection_representation]))
rep_vals = hcat(map(x->round(Int64, x), logspace(0,4,n_steps)), logspace(0, 4, n_steps))
for repcol in [:transfection_representation, :bottleneck_representation, :sequencing_representation]
arr = zeros(size(grouped_info, 1))
for i in 1:size(rep_vals, 1)
arr[grouped_info[repcol] .== rep_vals[i, 1]] = rep_vals[i, 2]
end
grouped_info[repcol] = arr
end
head(grouped_info)
# +
selection = [(:genetype, "all"), (:method, "venn"), (:measure, "incdec"), (:crisprtype, "CRISPRi")]
data = grouped_info[vec(all(hcat([(grouped_info[item[1]] .== item[2]) for item in selection]...), 2)), :];
data = vcat([data[data[:sequencing_representation] .== seq_depth, :] for seq_depth in [1.0, 10.0, 100.0, 1000.0, 10000.0]]...)
data[!isfinite(data[:mean_score]), :mean_score] = 0.0
p = plot(data, ygroup=:sequencing_representation, xgroup=:screen, x=:transfection_representation,
y=:bottleneck_representation, color=:mean_score, Geom.subplot_grid(Geom.rectbin,
Coord.cartesian(xmin=0, xmax=4, ymin=0, ymax=4)),
Scale.x_log10, Scale.y_log10, Scale.color_continuous(),
Guide.title("$(unique(data[:crisprtype])[1]) using $(unique(data[:method])[1]) scoring"))
draw(SVG("../plots/scan_rep_space.svg", 12cm, 20cm), p)
draw(SVG(12cm, 20cm), p)
# -
selection = Tuple{Symbol, Any}[
(:genetype, "all"),
(:method, "auprc"),
(:measure, "incdec"),
]
colors = palette("Accent", 4)
reps = [:transfection_representation, :bottleneck_representation, :sequencing_representation]
datas = []
for reptype in combinations(reps, 2)
push!(selection, (reptype[1], 10^4))
push!(selection, (reptype[2], 10^4))
data = grouped_info[vec(all(hcat([(grouped_info[item[1]] .== item[2]) for item in selection]...), 2)), :]
colname = pop!(setdiff(Set(reps), Set(reptype)))
data[:reptype] = colname
data[:representation] = data[colname]
push!(datas, data)
pop!(selection)
pop!(selection)
end
datas = vcat(datas...)
p = plot(datas, xgroup=:screen, ygroup=:crisprtype, x=:representation,
y=:mean_score, ymax=:score_max, ymin=:score_min,
color=:reptype, Scale.x_log10, Geom.subplot_grid(Geom.line, Geom.ribbon,
Guide.yticks(ticks=collect(0:0.2:1)), Coord.cartesian(ymax=1.0, ymin=0.0)),
Scale.color_discrete_manual(colors[1:3]..., levels=reps),
Theme(line_width=2pt), Guide.title("$(unique(datas[:method])[1])"))
draw(SVG("../plots/scan_rep_space_line.svg", 20cm, 20cm), p)
draw(SVG(20cm, 20cm), p)
| meta/scan_rep_space_analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# This notebook retrieves information (like beats and tempo) about the music using the librosa library.
# +
import os
import subprocess
import pandas as pd
import glob
import numpy, scipy, matplotlib.pyplot as plt, IPython.display as ipd
import librosa, librosa.display
from ipywidgets import interact
# -
audio_info_df = pd.read_csv('ten_sec_audio_analysis_df.csv', index_col = 0)
audio_info_df.head()
# Append a YTID column that holds just the YouTube id: yid without the .wav extension
audio_info_df['YTID'] = audio_info_df['yid'].apply(lambda x: str(x).split('.')[0])
# +
#audio_info_df = audio_info_df.append({'YTID':(audio_info_df['yid'].iloc[1])[:-4]}, ignore_index=True) # wrong
#(audio_info_df['yid'].iloc[0])[:-4] # 'CgCBHTl1BB0'
#for i in len(audio_info_df):
#audio_info_df = audio_info_df.append({'YTID': audio_info_df.yid.iloc[i][:-4]}, ignore_index=True)
# -
audio_info_df.head(5)
#Check if any value of yid is not a string
x = audio_info_df['yid']
x[x.apply(type) != str].any()
print(audio_info_df.size)
audio_info_df.shape
#
# Let's combine clips_metadata.csv and audio_info_df.csv, since we need one dataframe that has the file name, the YTID, and the music_class information.
#
#
# clips_metadata includes clips shorter than 10 seconds, while audio_info_df has only 10-second clips; the difference is roughly 300 rows. So when joining, use audio_info_df as the left table.
clips_metadata_df = pd.read_csv('clips_metadata', sep =' ')
clips_metadata_df = clips_metadata_df.drop(columns=['start_seconds','end_seconds','positive_labels','type'])
clips_metadata_df.head()
print(clips_metadata_df.size, clips_metadata_df.shape)#73143, (10449, 7)
print(audio_info_df.size, audio_info_df.shape)#30438, (10146, 3)
# +
#result = pd.merge(audio_info_df, clips_metadata_df, how = 'left', left_on = audio_info_df.yid[:-4] , right_on = 'YTID')
audio_info_df = pd.merge(audio_info_df, clips_metadata_df, how='left', on=['YTID'])
audio_info_df.head()
# -
audio_info_df.shape
# ## Generate Tempogram
# +
#Syntax
#tempo, beat_times = librosa.beat.beat_track(x, sr=sample_rate, start_bpm=30, units='time') # start_bpm = initial guess for the tempo estimator (in beats per minute)
# -
from pathlib import Path
audio_path = Path('/Users/Amulya/workspace/Fastai/MusicMoodClassification/google_audioset/')
tempogram_path = Path('/Users/Amulya/workspace/Fastai/MusicMoodClassification/tempogram/')
# create the output directory if it does not exist yet
# (don't bind the name `tempogram` to a Path here, since it is used as the function name below)
if not tempogram_path.exists():
    tempogram_path.mkdir()
def tempogram(audio_file_name):
fpath = Path(str(audio_path) + '/' + audio_file_name)
samples, sample_rate = librosa.load(fpath)
fig = plt.figure(figsize=[0.92,0.92])
ax = fig.add_subplot(111)
onset_env = librosa.onset.onset_strength(samples, sr=sample_rate, hop_length=200, n_fft=2048)
tempogram = librosa.feature.tempogram(onset_envelope=onset_env, sr=sample_rate, hop_length=200, win_length=400)
librosa.display.specshow(tempogram, sr=sample_rate, hop_length=200, x_axis='time', y_axis='tempo')
ax.axes.get_xaxis().set_visible(False)
ax.axes.get_yaxis().set_visible(False)
ax.set_frame_on(False)
fname = audio_file_name.replace('.wav','.png')
filename = Path(str(tempogram_path) + '/' + fname)
plt.savefig(filename, format="png", dpi=400, bbox_inches='tight', pad_inches=0)
plt.close('all')
processed_files = [f.split('.png')[0] + ".wav" for f in os.listdir('tempogram/')]
len(processed_files)
# +
to_process = []
all_files = list(audio_info_df['yid'].values)
for f in all_files :
if f not in processed_files:
to_process.append(f)
len(to_process)
# +
# TESTING
##for i in range (0,2):
## tempogram(audio_info_df['yid'].values[i])
## i=i+1
# -
# My laptop was running out of memory when processing all the files at once, so I generated only 2000 tempograms at a time
# +
import multiprocessing as mp
import numpy as np
mp.cpu_count()
with mp.Pool(2) as pool:
pool.map(tempogram, to_process[:2000])
#pool.map(tempogram, audio_info_df['yid'].values)
# -
| 4_Music_Mood_Classification/music_mood_generate_tempogram.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # The beta-binomial model
# Consider the problem of inferring the probability that a coin shows up heads, given a series of observed coin tosses. This model forms the basis of many methods, including naive Bayes classifiers and Markov models. Let's specify the likelihood and prior, and derive the posterior and posterior predictive.
#
# ### Likelihood
# Suppose $X_i\sim \mathrm{Ber}(\theta)$, where $X_i = 1$ represents "heads", $X_i=0$ represents "tails", and $\theta\in [0, 1]$ is the rate parameter (probability of heads). If the data are iid, the likelihood has the form
#
# $$
# p(\mathcal{D}|\theta) = \theta^{N_1}(1-\theta)^{N_0}
# $$
#
# where we have $N_1=\sum_{i=1}^N\mathbb{I}(x_i=1)$ heads and $N_0 = \sum_{i=1}^N\mathbb{I}(x_i=0)$ tails. These two counts are called the **sufficient statistics** of the data, since this is all we need to know about $\mathcal{D}$ to infer $\theta$.
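# The sufficient statistics and the log-likelihood can be computed directly. A minimal sketch with simulated tosses (the true rate 0.7 and the random seed are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 0.7                               # arbitrary "true" probability of heads
x = rng.binomial(1, theta_true, size=100)      # 100 simulated coin tosses

N1 = int(x.sum())                              # number of heads
N0 = len(x) - N1                               # number of tails

def log_likelihood(theta):
    # log p(D | theta) = N1 log(theta) + N0 log(1 - theta)
    return N1 * np.log(theta) + N0 * np.log(1 - theta)

# The log-likelihood is maximised at the empirical fraction of heads, N1/N
grid = np.linspace(0.01, 0.99, 99)
theta_hat = grid[np.argmax(log_likelihood(grid))]
print(N1, N0, theta_hat)
```

# Only the two counts enter the likelihood, which is exactly what "sufficient statistics" means here.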
# ### Prior
# We need a prior which has support over the interval [0, 1]. To make the math easier, it would be convenient if the prior had the same form as the likelihood, i.e. if the prior looked like
#
# $$
# p(\theta)\propto\theta^{\gamma_1}(1-\theta)^{\gamma_2}
# $$
#
# for some prior parameters $\gamma_1$ and $\gamma_2$; in that case we can evaluate the posterior by simply adding up the exponents. When the prior and posterior have the same form, we say that the prior is a **conjugate prior** for the corresponding likelihood.
#
# In the case of the Bernoulli, the conjugate prior is the beta distribution:
#
# $$
# \mathrm{Beta}(\theta|a, b) \propto \theta^{a-1}(1-\theta)^{b-1}
# $$
# ### Posterior
# If we multiply the likelihood by the beta prior we get the following posterior:
#
# \begin{eqnarray}
# p(\theta|\mathcal{D}) &\propto & p(\mathcal{D}|\theta)p(\theta) \propto \theta^{N_1}(1-\theta)^{N_0}\theta^{a-1}(1-\theta)^{b-1} \\
# & \propto & \theta^{N_1+a-1}(1-\theta)^{N_0+b-1}\\
# & \propto & \mathrm{Beta}(N_1+a, N_0+b)
# \end{eqnarray}
#
# In particular, the posterior is obtained by adding the prior hyperparameters to the empirical counts.
#
# #### Posterior mean and mode
# The MAP estimate is given by
# $$
# \hat{\theta}_{MAP} = \frac{a + N_1 -1}{a + b + N - 2}
# $$
# (from the definition of the Beta distribution). If we use a uniform prior (i.e. $a = 1$, $b = 1$), then the MAP estimate reduces to the MLE which is just the empirical fraction of heads:
# $$
# \hat{\theta}_{MLE} = \frac{N_1}{N}
# $$
#
# **Exercise 3.1**: We can derive this by maximising the likelihood function $p(\mathcal{D}|\theta) = \theta^{N_1}(1-\theta)^{N_0}$. Taking the derivative of the $\log$ of the likelihood we obtain:
# $$
# \frac{d\log p}{d\theta} = \frac{N_1}{\theta} - \frac{N_0}{1-\theta}
# $$
# To find the maximum set the derivative to zero, where we find that
# $$
# \hat{\theta}_{MLE} = \frac{N_1}{N_0+N_1} = \frac{N_1}{N}
# $$
# The posterior mean is given by
# $$
# \bar{\theta} = \frac{a + N_1}{a + b + N}
# $$
#
# We show that the posterior mean is a convex combination of the prior mean and the MLE, which captures the notion that the posterior is a compromise between what we previously believed and what the data is telling us.
#
# Let $\alpha_0 = a + b$ be the equivalent sample size of the prior, which controls its strength, and let the prior mean be $m_1 = a/\alpha_0$. Then the posterior mean is given by
#
# $$
# \mathbb{E}[\theta|\mathcal{D}] = \frac{\alpha_0m_1 + N_1}{N+ \alpha_0} = \frac{\alpha_0}{N+\alpha_0}m_1 + \frac{N}{N+\alpha_0}\frac{N_1}{N} = \lambda m_1 + (1-\lambda)\hat{\theta}_{MLE}
# $$
#
# where $\lambda=\frac{\alpha_0}{N + \alpha_0}$ is the ratio of the prior to posterior equivalent sample size. So the weaker the prior, the smaller is $\lambda$, and hence the closer the posterior mean is to the MLE as $N\rightarrow\infty$.
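# A quick numerical check of the convex-combination identity above (the counts and hyperparameters are arbitrary illustrative values):

```python
a, b = 2.0, 2.0          # illustrative prior hyperparameters
N1, N0 = 30, 70          # illustrative counts of heads and tails
N = N1 + N0

post_mean = (a + N1) / (a + b + N)   # E[theta | D]

alpha0 = a + b                       # prior equivalent sample size
m1 = a / alpha0                      # prior mean
lam = alpha0 / (N + alpha0)
mle = N1 / N
convex = lam * m1 + (1 - lam) * mle  # lambda * m1 + (1 - lambda) * MLE

print(post_mean, convex)             # the two expressions agree exactly
```

# With a stronger prior (larger $a+b$), $\lambda$ grows and the posterior mean is pulled further from the MLE towards the prior mean.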
# #### Posterior Variance
# The variance of the Beta posterior is given by
# $$
# \mathrm{var}[\theta|\mathcal{D}] = \frac{(a+N_1)(b+N_0)}{(a+N_1+b+N_0)^2(a+N_1+b+N_0+1)}
# $$
# In the case that $N \gg a, b$, this simplifies to
# $$
# \mathrm{var}[\theta|\mathcal{D}]\approx\frac{N_1N_0}{N^3}=\frac{\hat{\theta}(1-\hat{\theta})}{N}
# $$
# where $\hat{\theta}$ is the MLE. Hence the error bar (posterior standard deviation) is given by
# $$
# \sigma = \sqrt{\mathrm{var}[\theta|\mathcal{D}]}\approx\sqrt{\frac{\hat{\theta}(1-\hat{\theta})}{N}}
# $$
# So uncertainty goes down at a rate of $1/\sqrt{N}$. Note however the uncertainty is maximized when $\hat{\theta} = 0.5$, which means it is easier to be sure that a coin is biased than to be sure that it is fair.
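# We can check the $1/\sqrt{N}$ behaviour against the exact posterior standard deviation from `scipy.stats.beta` (a uniform prior and $\hat{\theta}=0.5$ are assumed purely for illustration):

```python
import numpy as np
from scipy.stats import beta

a, b = 1.0, 1.0               # uniform prior
theta_hat = 0.5               # illustrative empirical fraction of heads

sds = []
for N in [100, 400, 1600]:
    N1 = int(theta_hat * N)
    N0 = N - N1
    sd = beta(a + N1, b + N0).std()                    # exact posterior std
    approx = np.sqrt(theta_hat * (1 - theta_hat) / N)  # the 1/sqrt(N) approximation
    sds.append(sd)
    print(N, round(sd, 4), round(approx, 4))
# quadrupling N roughly halves the posterior standard deviation
```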
# ### Predicting the outcome of future trials
# The posterior predictive is given by the following, known as the compound **beta-binomial** distribution
# $$
# Bb(x|a, b, M) \triangleq \binom{M}{x}\frac{B(x+a, M-x+b)}{B(a, b)}
# $$
# This distribution has the following mean and variance
# $$
# \mathbb{E}[x] = M\frac{a}{a+b}, \mathrm{var}[x] = \frac{Mab}{(a+b)^2}\frac{(a+b+M)}{a + b + 1}
# $$
#
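# `scipy.stats.betabinom` implements this compound distribution, so the mean and variance formulas above can be checked numerically (the values of $a$, $b$ and $M$ are arbitrary illustrative choices):

```python
from scipy.stats import betabinom

a, b, M = 2.0, 3.0, 10
rv = betabinom(M, a, b)      # compound beta-binomial distribution Bb(x | a, b, M)

mean_formula = M * a / (a + b)
var_formula = (M * a * b / (a + b) ** 2) * (a + b + M) / (a + b + 1)

print(rv.mean(), mean_formula)   # both equal M*a/(a+b)
print(rv.var(), var_formula)     # both equal the variance formula above
```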
| generative-models/Beta-Binomial-Model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.8.10 64-bit
# name: python3
# ---
import pickle
import sklearn.metrics as met
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
# +
trainData = np.load('../../../dataFinal/npy_files/fin_t2_train.npy')
trialData = np.load('../../../dataFinal/npy_files/fin_t2_trial.npy')
testData = np.load('../../../dataFinal/npy_files/fin_t2_test.npy')
noWEtrainData = np.load('../../../dataFinal/npy_files//noWE_t2_train.npy')
noWEtrialData = np.load('../../../dataFinal/npy_files//noWE_t2_trial.npy')
noWEtestData = np.load('../../../dataFinal/npy_files//noWE_t2_test.npy')
trainLabels = open('../../../dataFinal/finalTrainLabels.labels', 'r').readlines()
trialLabels = open('../../../dataFinal/finalDevLabels.labels','r').readlines()
testLabels = open('../../../dataFinal/finalTestLabels.labels', 'r').readlines()
# +
for i in tqdm(range(len(trainLabels))):
trainLabels[i] = int(trainLabels[i])
for i in tqdm(range(len(testLabels))):
testLabels[i] = int(testLabels[i])
for i in tqdm(range(len(trialLabels))):
trialLabels[i] = int(trialLabels[i])
trainLabels = np.array(trainLabels)
testLabels = np.array(testLabels)
trialLabels = np.array(trialLabels)
trainLabels = trainLabels.reshape((-1, ))
testLabels = testLabels.reshape((-1, ))
trialLabels = trialLabels.reshape((-1, ))
# -
with open('svm_score.txt','a') as file:
file.write("SVM - Bag of Words + Word Embeddings:\n\n")
for c in [0.1,0.2,0.25,0.3,0.35,0.4,0.5,1,2,5]:
file.write("C = {}:".format(c))
        # load the model trained with this C value and generate predictions
        model = pickle.load(open('./models/SVM_linear_{}'.format(c), 'rb'))
        train_predict = model.predict(trainData)
        trial_predict = model.predict(trialData)
        test_predict = model.predict(testData)
train_acc = met.accuracy_score(trainLabels,train_predict)
trial_acc = met.accuracy_score(trialLabels,trial_predict)
test_acc = met.accuracy_score(testLabels,test_predict)
train_prec = met.precision_score(trainLabels,train_predict,average='weighted')
trial_prec = met.precision_score(trialLabels,trial_predict,average='weighted')
test_prec = met.precision_score(testLabels,test_predict,average='weighted')
train_rec = met.recall_score(trainLabels, train_predict,average='weighted')
trial_rec = met.recall_score(trialLabels, trial_predict,average='weighted')
test_rec = met.recall_score(testLabels, test_predict,average='weighted')
train_f1 = met.f1_score(trainLabels,train_predict,average='weighted')
trial_f1 = met.f1_score(trialLabels,trial_predict,average='weighted')
test_f1 = met.f1_score(testLabels,test_predict,average='weighted')
file.write("\n\tTrain Accuracy: {}".format(train_acc))
file.write("\n\tTrain Precision: {}".format(train_prec))
file.write("\n\tTrain Recall: {}".format(train_rec))
file.write("\n\tTrain F1: {}".format(train_f1))
file.write("\n")
file.write("\n\tTrial Accuracy: {}".format(trial_acc))
file.write("\n\tTrial Precision: {}".format(trial_prec))
file.write("\n\tTrial Recall: {}".format(trial_rec))
file.write("\n\tTrial F1: {}".format(trial_f1))
file.write("\n")
file.write("\n\tTest Accuracy: {}".format(test_acc))
file.write("\n\tTest Precision: {}".format(test_prec))
file.write("\n\tTest Recall: {}".format(test_rec))
file.write("\n\tTest F1: {}".format(test_f1))
file.write("\n\n")
with open('svm_score.txt','a') as file:
file.write("\n\nSVM - Bag of Words only:\n\n")
for c in [0.1,0.2,0.25,0.3,0.35,0.4,0.5,1,2,5]:
file.write("C = {}:".format(c))
model = pickle.load(open('./models/noWE_SVM_linear_{}'.format(c),'rb'))
train_predict = model.predict(noWEtrainData)
trial_predict = model.predict(noWEtrialData)
test_predict = model.predict(noWEtestData)
train_acc = met.accuracy_score(trainLabels,train_predict)
trial_acc = met.accuracy_score(trialLabels,trial_predict)
test_acc = met.accuracy_score(testLabels,test_predict)
train_prec = met.precision_score(trainLabels,train_predict,average='weighted')
trial_prec = met.precision_score(trialLabels,trial_predict,average='weighted')
test_prec = met.precision_score(testLabels,test_predict,average='weighted')
train_rec = met.recall_score(trainLabels, train_predict,average='weighted')
trial_rec = met.recall_score(trialLabels, trial_predict,average='weighted')
test_rec = met.recall_score(testLabels, test_predict,average='weighted')
train_f1 = met.f1_score(trainLabels,train_predict,average='weighted')
trial_f1 = met.f1_score(trialLabels,trial_predict,average='weighted')
test_f1 = met.f1_score(testLabels,test_predict,average='weighted')
file.write("\n\tTrain Accuracy: {}".format(train_acc))
file.write("\n\tTrain Precision: {}".format(train_prec))
file.write("\n\tTrain Recall: {}".format(train_rec))
file.write("\n\tTrain F1: {}".format(train_f1))
file.write("\n")
file.write("\n\tTrial Accuracy: {}".format(trial_acc))
file.write("\n\tTrial Precision: {}".format(trial_prec))
file.write("\n\tTrial Recall: {}".format(trial_rec))
file.write("\n\tTrial F1: {}".format(trial_f1))
file.write("\n")
file.write("\n\tTest Accuracy: {}".format(test_acc))
file.write("\n\tTest Precision: {}".format(test_prec))
file.write("\n\tTest Recall: {}".format(test_rec))
file.write("\n\tTest F1: {}".format(test_f1))
file.write("\n\n")
c_list = [0.1,0.2,0.25,0.3,0.35,0.4,0.5,1,2,5]
train_acc_list = []
trial_acc_list = []
test_acc_list = []
for c in c_list:
model = pickle.load(open('./models/SVM_linear_{}'.format(c),'rb'))
train_predict = model.predict(trainData)
trial_predict = model.predict(trialData)
test_predict = model.predict(testData)
train_acc = met.accuracy_score(trainLabels,train_predict)
trial_acc = met.accuracy_score(trialLabels,trial_predict)
test_acc = met.accuracy_score(testLabels,test_predict)
train_acc_list.append(train_acc)
trial_acc_list.append(trial_acc)
test_acc_list.append(test_acc)
plt.plot(c_list,train_acc_list,label='Train')
plt.plot(c_list,test_acc_list,label='Test')
plt.plot(c_list,trial_acc_list,label='Trial')
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('SVM word2vec: Accuracy vs C')
plt.legend()
plt.show()
plt.plot(c_list,test_acc_list)
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('SVM word2vec: Testing Accuracy vs C')
plt.show()
plt.plot(c_list,trial_acc_list)
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('SVM word2vec: Trial Accuracy vs C')
plt.show()
# reset the accuracy lists before evaluating the bag-of-words-only models
train_acc_list = []
trial_acc_list = []
test_acc_list = []
for c in c_list:
    model = pickle.load(open('./models/noWE_SVM_linear_{}'.format(c),'rb'))
train_predict = model.predict(noWEtrainData)
trial_predict = model.predict(noWEtrialData)
test_predict = model.predict(noWEtestData)
train_acc = met.accuracy_score(trainLabels,train_predict)
trial_acc = met.accuracy_score(trialLabels,trial_predict)
test_acc = met.accuracy_score(testLabels,test_predict)
train_acc_list.append(train_acc)
trial_acc_list.append(trial_acc)
test_acc_list.append(test_acc)
plt.plot(c_list,train_acc_list,label='Train')
plt.plot(c_list,test_acc_list,label='Test')
plt.plot(c_list,trial_acc_list,label='Trial')
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('SVM: Accuracy vs C')
plt.legend()
plt.show()
plt.plot(c_list,test_acc_list)
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('SVM: Testing Accuracy vs C')
plt.show()
plt.plot(c_list,trial_acc_list)
plt.xlabel('C')
plt.ylabel('Accuracy')
plt.title('SVM: Trial Accuracy vs C')
plt.show()
| src/finalModels/SVM/svm_evaluate.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="WmH6ZFnRi8H2" colab_type="code" colab={}
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
# + id="HcG5sWFljaUY" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2b9afd61-10ca-4522-867b-78ec11e6d76f" executionInfo={"status": "ok", "timestamp": 1581614334721, "user_tz": -60, "elapsed": 396, "user": {"displayName": "<NAME>142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
# cd "/content/drive/My Drive/Colab Notebooks/dw_matrix"
# + id="zpWVNm6Jj4Cz" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="a1bab5e1-9bbc-47df-d572-fe89094ea1e3" executionInfo={"status": "ok", "timestamp": 1581614344934, "user_tz": -60, "elapsed": 1604, "user": {"displayName": "<NAME>\u0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
# ls data
# + id="lAMFGx0ij6Pa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="4c44a383-d0b9-4175-f525-044c1e28e7c2" executionInfo={"status": "ok", "timestamp": 1581614392149, "user_tz": -60, "elapsed": 1243, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
df = pd.read_csv('data/men_shoes.csv', low_memory=False)
df.shape
# + id="P85P0cQNkBLy" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 231} outputId="b32efd90-8dfe-4e12-ed05-e33aa62f477b" executionInfo={"status": "ok", "timestamp": 1581614409307, "user_tz": -60, "elapsed": 408, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
df.columns
# + id="cM63FLHakKPk" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="0d6436ff-8342-4d5f-b10f-e5bc4e7d3dfb" executionInfo={"status": "ok", "timestamp": 1581614472931, "user_tz": -60, "elapsed": 377, "user": {"displayName": "<NAME>\u0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
mean_price = np.mean(df['prices_amountmin'])
mean_price
# + id="gpJqaGCikXE1" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="d53f471b-3f26-4007-ef3e-e01e1095f61e" executionInfo={"status": "ok", "timestamp": 1581614514505, "user_tz": -60, "elapsed": 459, "user": {"displayName": "<NAME>\u0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
[3] * 5
# + id="fmbFlROlkhiM" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="df387f7f-abbc-46c1-fcab-63882052046c" executionInfo={"status": "ok", "timestamp": 1581614595003, "user_tz": -60, "elapsed": 380, "user": {"displayName": "<NAME>\u0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
y_true = df['prices_amountmin']
y_pred = [mean_price] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
# + id="WD12l0Rvks2L" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 283} outputId="c71b50e8-ba91-4fa7-e937-7217c21cfe18" executionInfo={"status": "ok", "timestamp": 1581614651255, "user_tz": -60, "elapsed": 717, "user": {"displayName": "<NAME>0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
df['prices_amountmin'].hist(bins=100)
# + id="2KIVEePMlFPr" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 286} outputId="62d4c36c-1dda-4ee4-92ec-12b32ea09890" executionInfo={"status": "ok", "timestamp": 1581614707984, "user_tz": -60, "elapsed": 736, "user": {"displayName": "<NAME>\u0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
np.log1p(df['prices_amountmin']).hist(bins=100)  # log1p already adds 1, so no extra +1 is needed
# + id="S_SpPHGtlTE2" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="ab55380a-7db4-46b1-d062-bf1bd0183170" executionInfo={"status": "ok", "timestamp": 1581614760867, "user_tz": -60, "elapsed": 381, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
y_true = df['prices_amountmin']
y_pred = [np.median(y_true)] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
# + id="0eoN8blWlfHT" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="75f46139-198c-4224-b1b8-ade846aad508" executionInfo={"status": "ok", "timestamp": 1581615016399, "user_tz": -60, "elapsed": 559, "user": {"displayName": "<NAME>0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
y_true = df['prices_amountmin']
price_log_mean = np.expm1(np.mean(np.log1p(y_true)))
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
# + id="xruvnLnCmXis" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 231} outputId="92c3693a-efbb-49e5-d32d-356a55fadf1c" executionInfo={"status": "ok", "timestamp": 1581615050495, "user_tz": -60, "elapsed": 434, "user": {"displayName": "<NAME>0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
df.columns
# + id="SBuaRNVnmmxT" colab_type="code" colab={}
df['brand_cat'] = df['brand'].factorize()[0]
# + id="vxmLUOOmmvbS" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="07228332-a86f-4dc5-aacb-6fda8c9af174" executionInfo={"status": "ok", "timestamp": 1581615451775, "user_tz": -60, "elapsed": 582, "user": {"displayName": "<NAME>\u0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
feats = ['brand_cat']
X = df[feats].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
# + id="f9hvZ2BpoAZK" colab_type="code" colab={}
def run_model(feats):
X = df[feats].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
# + id="-BRnTDSUofCa" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="2c58ea24-af0c-4365-ce99-b0a2e6ada104" executionInfo={"status": "ok", "timestamp": 1581615572151, "user_tz": -60, "elapsed": 401, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
run_model(['brand_cat'])
# + id="FuT7r8dmomJC" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="7b52b5e9-3118-4e3c-ca55-bd64144583d6" executionInfo={"status": "ok", "timestamp": 1581615738467, "user_tz": -60, "elapsed": 453, "user": {"displayName": "<NAME>142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]
run_model(['manufacturer_cat'])
# + id="9OAjY-w-pOvq" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="0eb7908f-f757-4167-ea90-878cc2727cf5" executionInfo={"status": "ok", "timestamp": 1581615772273, "user_tz": -60, "elapsed": 381, "user": {"displayName": "<NAME>142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
run_model(['manufacturer_cat', 'brand_cat'])
# + id="HIo63I9FpXAE" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 35} outputId="4b7bc3eb-be16-44be-af73-de9f0d43e5e4" executionInfo={"status": "ok", "timestamp": 1581615825495, "user_tz": -60, "elapsed": 438, "user": {"displayName": "<NAME>0142oma", "photoUrl": "https://lh3.googleusercontent.com/a-/AAuE7mCGKNYprpKO2J5ef0g4ErRZs7AMbJGPsqSbPBiyVA=s64", "userId": "16137752290493992603"}}
df['merchants_cat'] = df['merchants'].factorize()[0]
run_model(['merchants_cat'])
# + id="yGGW5dkKpj_C" colab_type="code" colab={}
| matrix_one/day4.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:anaconda3]
# language: python
# name: conda-env-anaconda3-py
# ---
import json
import geojsonio
import geojson
import geopandas as gpd
import pandas as pd
pd.options.display.max_seq_items = 500
data = gpd.read_file('./data/nc/counties.geojson')
print(data.head())
data['geometry'][20:40]
geojsonio.display(data.to_json())  # geojsonio expects a GeoJSON string, not a GeoDataFrame
# json.loads parses a string, so read the file contents first
with open('./data/VA/counties.geojson') as f:
    data = json.load(f)
data['features'][0]  # your first feature
| geojson scratchwork.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# import data
from keras.datasets import imdb
(x_train, y_train), (x_test, y_test) = imdb.load_data(path="/home/super-workstation/program/LearnDeepLearning/data/imdb.npz",
num_words=10000,
skip_top=0,
maxlen=None,
seed=113,
start_char=1,
oov_char=2,
index_from=3)
word_index = imdb.get_word_index(path="imdb_word_index.json")
# +
# preprocessing
from keras.preprocessing.sequence import pad_sequences
maxl = 80
'''
for s in x_train:
maxl = max(len(s), maxl)
for s in x_test:
maxl = max(len(s), maxl)
'''
# Pad all inputs to a uniform input_length (the inputs are 1-D sequences)
pad_train = pad_sequences(x_train, maxlen=maxl, padding='pre')
pad_test = pad_sequences(x_test, maxlen=maxl, padding='pre')
print(pad_train[0])
print(pad_test[0])
print(maxl)
print(pad_train.shape)
print(pad_test.shape)
# +
# setup net
import keras
from keras.models import Sequential, Model
from keras.layers import Dense, Embedding, Flatten, GlobalAveragePooling1D, LSTM
voc_size = 20000
embedding_size = 128
model = Sequential()
model.add(Embedding(voc_size, embedding_size, input_length=maxl, name='layer_x'))
#model.add(GlobalAveragePooling1D())
model.add(LSTM(100, name='layer_lstm'))
#model.add(AveragePooling1D(pool_size=maxl))
#model.add(Flatten())
#model.add(Dense(16, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
'''
layer_name = 'my_layer'
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer('layer_lstm').output)
intermediate_output = intermediate_layer_model.predict(pad_test)
print(intermediate_output.shape)
print(intermediate_output[0][0])
'''
# -
# train and evaluate
import sys
#print(sys.path)
sys.path.insert(0, '../')
from dpl import utils
utils.trainAndEvaluateData(model, pad_train, y_train, pad_test, y_test, 20)
# test
import numpy as np
test_data_c = ['a little confused as to how this movie is so extolled worldwide.', 'Basically there\'s a family where a little boy \
(Jake) thinks there\'s a zombie in his closet & his parents are fighting all the time. \
This movie is slower than a soap opera... and suddenly, Jake decides to become Rambo and kill the zombie. \
OK, first of all when you\'re going to make a film you must Decide if its a thriller or a drama! \
As a drama the movie is watchable. Parents are divorcing & arguing like in real life. \
And then we have Jake with his closet which totally ruins all the film! \
I expected to see a BOOGEYMAN similar movie, and instead i watched a \
drama with some meaningless thriller spots.\
3 out of 10 just for the well playing parents & descent dialogs. As for\
the shots with Jake: just ignore them',
'This movie is too over rated. I think it suite for 5/10. Its boring \
and had a bad ending for a 2.5 hours Movie. I think it better for you \
to just watch from the review of other people.',
'terrible bad','A very boring movie, waited for so long for the movie to end quickly. I was expecting something more at \
the start of the movie, but it ends up getting more and more unwanted to watch.',
'''This show was an amazing, fresh & innovative idea in the 70\'s when it first aired. \
The first 7 or 8 years were brilliant, but things dropped off after that. By 1990, the show was \
not really funny anymore, and it\'s continued its decline further to the complete waste of time\
it is today. It\'s truly disgraceful how far this show has fallen. The writing is painfully bad \
the performances are almost as bad - if not for the mildly entertaining respite of the guest-hosts, \
this show probably wouldn\'t still be on the air. I find it so hard to believe that the same creator \
that hand-selected the original cast also chose the band of hacks that followed. How can one recognize \
such brilliance and then see fit to replace it with such mediocrity? I felt I must give 2 stars out of \
respect for the original cast that made this show such a huge success. As it is now, the show is just awful. \
I can\'t believe it\'s still on the air.''']
test_label =[1, 0, 1, 0]
print(type(x_train[0]))
test_data = np.zeros([len(test_data_c)], dtype=object)
print(test_data.shape)
row_count = 0
for re in test_data_c:
splits = re.split()
len_s = len(splits)
count = 0
test_data[row_count] = []
for char_s in splits:
        # use the integer word index, falling back to 0 for unknown words
        # (also avoids shadowing the builtin `str`)
        idx = word_index.get(char_s, 0)
        test_data[row_count].append(idx)
#print(row[0][count])
count+=1
row_count+=1
pad_test = pad_sequences(test_data, maxlen=maxl, padding='pre')
results = model.predict(pad_test)
print(results)
# +
from keras.datasets import reuters
(x_train, y_train), (x_test, y_test) = reuters.load_data(path="/home/super-workstation/program/LearnDeepLearning/data/reuters.npz",
num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=113,
start_char=1,
oov_char=2,
index_from=3)
word_index = reuters.get_word_index(path="reuters_word_index.json")
import os
print(os.path.abspath('.'))
| src/traditional/setup_rnn.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# =============================================
# Interpolate bad channels for MEG/EEG channels
# =============================================
#
# This example shows how to interpolate bad MEG/EEG channels
#
# - Using spherical splines as described in [1]_ for EEG data.
# - Using field interpolation for MEG data.
#
# The bad channels will still be marked as bad. Only the data in those channels
# is removed.
#
# References
# ----------
# .. [1] <NAME>., <NAME>., <NAME>. and <NAME>. (1989)
# Spherical splines for scalp potential and current density mapping.
# Electroencephalography and Clinical Neurophysiology, Feb; 72(2):184-7.
#
#
# +
# Authors: <NAME> <<EMAIL>>
# <NAME> <<EMAIL>>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/MEG/sample/sample_audvis-ave.fif'
evoked = mne.read_evokeds(fname, condition='Left Auditory',
baseline=(None, 0))
# plot with bads
evoked.plot(exclude=[], time_unit='s')
# compute interpolation (also works with Raw and Epochs objects)
evoked.interpolate_bads(reset_bads=False, verbose=False)
# plot interpolated (previous bads)
evoked.plot(exclude=[], time_unit='s')
| 0.17/_downloads/6142afaa6c1e4f659fc4014ad01a9c75/plot_interpolate_bad_channels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Build a simple trading strategy
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
# ### 1. Munging the stock data and adding two columns - MA10 and MA50
#import FB's stock data, add two columns - MA10 and MA50
#use dropna to remove any "Not a Number" data
fb = pd.read_csv('../data/facebook.csv', index_col=0, parse_dates=True)
fb['MA10'] = fb['Close'].rolling(10).mean()
fb['MA50'] = fb['Close'].rolling(50).mean()
fb = fb.dropna()
fb.head()
# ### 2. Add "Shares" column to make decisions based on the strategy
# +
#Add a new column "Shares", if MA10>MA50, denote as 1 (long one share of stock), otherwise, denote as 0 (do nothing)
fb['Shares'] = [1 if fb.loc[ei, 'MA10']>fb.loc[ei, 'MA50'] else 0 for ei in fb.index]
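# The same column can be built without a list comprehension using `np.where`. A quick sketch on a toy frame (the values below are made up for illustration, not taken from facebook.csv):

```python
import numpy as np
import pandas as pd

# Toy stand-in for `fb` with just the two moving-average columns.
toy = pd.DataFrame({'MA10': [3.0, 5.0, 2.0], 'MA50': [4.0, 4.0, 1.0]})

# Vectorized equivalent of the list comprehension: 1 where MA10 > MA50, else 0.
toy['Shares'] = np.where(toy['MA10'] > toy['MA50'], 1, 0)
print(toy['Shares'].tolist())  # [0, 1, 1]
```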
# +
#Add a new column "Profit" using a list comprehension: for each row in fb, if Shares=1, the profit is
#tomorrow's close price minus today's close price; otherwise the profit is 0.
#Plot a graph to show the Profit/Loss
fb['Close1'] = fb['Close'].shift(-1)
fb['Profit'] = [fb.loc[ei, 'Close1'] - fb.loc[ei, 'Close'] if fb.loc[ei, 'Shares']==1 else 0 for ei in fb.index]
fb['Profit'].plot()
plt.axhline(y=0, color='red')
# -
# ### 3. Use .cumsum() to display our model's performance if we follow the strategy
# +
#Use .cumsum() to calculate the accumulated wealth over the period
fb['wealth'] = fb['Profit'].cumsum()
fb.tail()
# +
#plot the wealth to show the growth of profit over the period
fb['wealth'].plot()
plt.title('Total money you win is {}'.format(fb.loc[fb.index[-2], 'wealth']))
# -
# ## You can create your own simple trading strategy by copying the code above and modifying it to use the Microsoft data (microsoft.csv).
| HKUST - Python and Statistics for Financial Analysis/module1- Visualizing and Munging Stock Data/Build+a+simple+trading+strategy.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.6.0
# name: julia-1.6
# ---
# # SSPs Calibration: Leach
#
# Our working local folders for this calibration are contained in the parent folder of this notebook, in _calibration/Leach_
# ## Socioeconomic Data
#
# N/A
# ## Emissions Data
# The emissions data for the runs is pulled directly from the MimiFAIRv2 repository here: https://github.com/FrankErrickson/MimiFAIRv2.jl, which describes its source as follows:
#
# - Code Source: Extracted using default Python model version of FAIR2.0, available at https://github.com/njleach/FAIR/tree/47c6eec031d2edcf09424394dbb86581a1b246ba
# - Paper Reference: Leach et al. 2021. FaIRv2.0.0: a generalized impulse response model for climate uncertainty and future scenario exploration, Geoscientific Model Development. https://doi.org/10.5194/gmd-14-3007-2021
#
# We rename the five relevant data source files from this repository as _Leach_SSPXX_ to obtain input files for the MimiSSPs components.
| calibration/Leach/Leach_Calibration.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# File: Hotel-Recommendations.ipynb
# Names: <NAME>
# Date: 10/18/20
# Usage: Program previews and summarizes Expedia Hotel Recommendations data, generates exploratory visualizations, and uses predictive models to predict hotel groups based on user data.
# +
import matplotlib
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import seaborn as sns
import yellowbrick
from yellowbrick.features import Rank2D # correlation visualization package
from yellowbrick.style import set_palette # color for yellowbrick visualizer
from scipy.stats import spearmanr
from scipy.stats import kendalltau
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
from sklearn.naive_bayes import GaussianNB
# -
# # Creating Optimal Hotel Recommendations
#
# ### Objective: Predict which “hotel cluster” the user is likely to book, given their search details.
#
# **Data source:**
# Expedia Hotel Recommendations
# <https://www.kaggle.com/c/expedia-hotel-recommendations>
# 
# ## Loading and Exploring Data
# To understand the data, I reviewed the columns and descriptions provided by the Kaggle data overview tab:
#
# <https://www.kaggle.com/c/expedia-hotel-recommendations/data?select=train.csv>
# 
#
# 
# ### Loading Data
#
# The dataset is very large, with over 37 million observations, so I will only load a smaller subset.
#
# Besides the target variable of hotel_cluster, the columns I'm going to explore are user_id, is_package, site_name, user_location_country, hotel_continent, srch_adults_cnt, srch_children_cnt, and srch_destination_id.
# Loading subset of data into pandas dataframe, choosing columns and specifying data types
hotels_train_df = pd.read_csv('train.csv',
usecols=['user_id', 'is_package', 'site_name', 'user_location_country',
'srch_adults_cnt', 'srch_children_cnt', 'srch_destination_id',
'hotel_cluster', 'hotel_continent'],
dtype={'is_package':bool}, # changing data type to boolean
nrows = 500000)
# Previewing data
hotels_train_df.head(10)
# Summary of data
hotels_train_df.info()
# Summary information for columns
hotels_train_df.describe()
# +
# Summary information for columns without scientific notation
with pd.option_context('float_format', '{:f}'.format):
print(hotels_train_df.describe())
# -
# Displaying summary information for boolean 'is_package' column
print(hotels_train_df.describe(include=[bool]))
# Checking missing data sums
hotels_train_df.isna().sum()
# ### Exploratory Visualizations
# +
# User country frequency using seaborn countplot
plt.figure(figsize=(15, 9))
plt.xticks(rotation=90)
sns.countplot(x='user_location_country', data=hotels_train_df)
# -
# From this bar graph, together with the quartile summary statistics, we can tell that the vast majority of users in this subset come from country 66, even though the x-axis is too crowded to read. I will confirm this with further plotting next.
#
# This may be a bias introduced from only selecting a subset, so for future exploration I could try selecting another subset, or loading all of the data in chunks in order to see if the data represent a more diverse sample. For the purposes of this assignment and learning, I'm going to stick with this smaller subset.
#
# +
# Bar graph of the number of users in each country for the top ten countries
# Selecting and storing user country column
country_count = hotels_train_df['user_location_country'].value_counts()
# Limiting to top 10 countries
countries_topten = country_count[:10,]
plt.figure(figsize=(12,9))
sns.barplot(x=countries_topten.index, y=countries_topten.values, alpha=0.8)
plt.title('Top 10 Countries of Users')
plt.ylabel('Number of Observations', fontsize=12)
plt.xlabel('Country', fontsize=12)
plt.show()
# -
# After limiting the data to the top ten country count values, we can clearly confirm that our users mostly come from country 66.
# +
# Boxplot of hotel cluster by hotel continent
plt.figure(figsize=(12,9))
sns.boxplot(x = hotels_train_df["hotel_continent"], y = hotels_train_df["hotel_cluster"], palette="Blues");
plt.show()
# -
# Interpreting this box plot is difficult because the data are not represented very well. Hotel cluster is more of a discrete categorical variable and this treats it as continuous, which isn't very helpful. We can see that continent 0 represents a wider range of hotel clusters while continent 1 represents a smaller range, but we don't have enough information on the hotel clusters themselves to make this insight useful. I'm going to try looking at frequency of hotel clusters instead.
#
# +
# Plot frequency of each hotel cluster
hotels_train_df["hotel_cluster"].value_counts().plot(kind='bar',colormap="Set3",figsize=(15,7))
plt.xlabel('Hotel Cluster', fontsize=15)
plt.ylabel('Count', fontsize=15)
plt.title('Frequency of Hotel Clusters', fontsize=20)
plt.show()
# -
# From this bar chart we can see that hotel clusters 91 and 41 are the most frequent groups, and the least common group is cluster 74.
# ### Checking Correlation
#
# I'm going to calculate correlation to get a sense of the relationships between some of the variables, which will help in data understanding and determining which predictive models might be most effective.
#
#
# +
# Pearson Ranking
# Setting up figure size
plt.rcParams['figure.figsize'] = (15, 9)
# Choosing attributes to compare
features = ['srch_destination_id', 'user_location_country', 'srch_adults_cnt', 'srch_children_cnt',
'hotel_continent', 'site_name']
# Extracting numpy arrays
X = hotels_train_df[features].values
# Instantiating, fitting, and transforming the visualizer with covariance ranking algorithm
visualizer = Rank2D(features=features, algorithm='pearson')
visualizer.fit(X)
visualizer.transform(X)
visualizer.poof(outpath="pearson_ranking.png") # Drawing the data and saving the output
plt.show()
# -
# It looks like the strongest correlation is a positive relationship of roughly 0.25 between hotel continent and site name; the remaining relationships are weaker still. This tells us that we don't have to worry about multicollinearity when choosing predictive models.
#
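# As a refresher, Pearson's r measures linear association on a -1..1 scale; a quick check with numpy on two made-up vectors (not data from this notebook):

```python
import numpy as np

# Two made-up vectors with a strong linear relationship.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([2.0, 4.1, 5.9, 8.0])

# np.corrcoef returns the 2x2 correlation matrix; entry [0, 1] is Pearson's r.
r = np.corrcoef(a, b)[0, 1]
print(r)  # close to 1, since b is nearly a linear function of a
```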
# ## Predictive Modeling
#
# Since we are trying to predict the unique hotel cluster, we are dealing with a multi-class classification problem. First, I will look at how many hotel clusters exist.
#
#
# Convert hotel_cluster column to string
hotel_clusters = hotels_train_df['hotel_cluster'].astype(str)
hotel_clusters.describe()
# Our target variable, hotel_cluster, consists of 100 unique values.
# ### Splitting 'hotels_train_df' into train and test set
# +
# Splitting the data into independent and dependent variables
features = ['srch_destination_id', 'user_location_country', 'srch_adults_cnt', 'srch_children_cnt',
'hotel_continent', 'site_name']
X = hotels_train_df[features].values
y = hotels_train_df['hotel_cluster'].values
# +
# Creating the Training and Test set from data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 21)
# Number of samples in each set
print("No. of samples in training set: ", X_train.shape[0])
print("No. of samples in test set:", X_test.shape[0])
# -
# ### Random Forest Classifier
#
# I chose to use a random forest classifier because it is an ensemble of trees that tends to be more accurate and less biased than a single tree, and because I'm working with a larger amount of data.
#
# Fitting Random Forest Classification to the Training set
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 42)
classifier.fit(X_train, y_train)
# ### Evaluating Random Forest Classifier
# +
# Predicting test set results
y_pred = classifier.predict(X_test)
# Confusion Matrix
pd.crosstab(y_test, y_pred, rownames=['Actual Hotel Cluster'], colnames=['Predicted Hotel Cluster'])
# -
# Confusion Matrix
confusion_matrix(y_test,y_pred)
# In these two displays of the confusion matrix, we can see a good number of high values along the diagonal, which is a good sign. However, the larger version also shows high values dispersed off the diagonal, which means there are many incorrect predictions.
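# In miniature, the crosstab confusion matrix works like this (the labels below are made up for illustration):

```python
import pandas as pd

# Made-up true and predicted class labels.
actual = pd.Series([0, 0, 1, 1, 2], name='Actual')
pred = pd.Series([0, 1, 1, 1, 2], name='Predicted')

# Rows are true classes, columns are predictions; the diagonal holds correct calls.
cm = pd.crosstab(actual, pred)
print(cm)
```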
# Classification Report
print("Classification Report:\n\n", classification_report(y_test,y_pred))
# Accuracy of model
print("Accuracy:", accuracy_score(y_test, y_pred))
# ### Naive Bayes Classifier
#
# I'm going to try a Naive Bayes Classifier next, since my features are independent and because it tends to perform well with multiple classes.
#
# Training Naive Bayes classifier
gnb = GaussianNB().fit(X_train, y_train)
gnb_predictions = gnb.predict(X_test)
# ### Evaluating Naive Bayes Classifier
# Accuracy
accuracy = gnb.score(X_test, y_test)
print("Accuracy:", accuracy)
# Confusion matrix
confusion_matrix(y_test, gnb_predictions)
# ## Results
# Overall, my predictive models performed quite poorly. The Random Forest Classifier resulted in a 22% accuracy and the Naive Bayes Classifier only gave a 5% accuracy. The highest precision score from the Random Forest Classifier was 91% for hotel cluster 74, but the rest were mostly very low. To improve predictive power, I think it would help to have more information on what the attributes represent. For example, it would be nice to know how the hotel groups are determined and which locations correspond to country and continent numbers. This way, the results might be more interpretable. In addition, I could experiment with a different combination of features and different parameters when modeling. Finally, I could try building different ensembles of models to try achieving better accuracy and interpretability.
| Projects/Hotel-Recommendations/Hotel-Recommendations.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Important installation
# This notebook requires unusual packages: `LightGBM`, `SHAP` and `LIME`.
# For installation, do:
#
# `conda install lightgbm lime shap`
# ## Initial classical imports
import os
import numpy as np
import pandas as pd
import warnings
warnings.simplefilter(action='ignore', category=Warning)
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# # Read in data
# The stock data first:
# +
dfs = {}
for ticker in ['BLK', 'GS', 'MS']:
dfs[ticker] = pd.read_pickle('stock_{}.pickle'.format(ticker)).set_index('Date')
# -
# The media data:
df_media = pd.read_csv('MediaAttention_Mini.csv', parse_dates=[0], index_col='Time')
df_media.info()
# +
import glob
df_media_long = pd.DataFrame()
for fle in glob.glob('MediaAttentionLong*.csv'):
dummy = pd.read_csv(fle, parse_dates=[0], index_col='Time')
df_media_long = pd.concat([df_media_long,dummy])
df_media_long = df_media_long['2007':'2013']
df_media_long = df_media_long.resample('1D').sum()
df_media_long.shape
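# The `resample('1D').sum()` call buckets all rows falling on the same calendar day and sums them. A minimal sketch on a toy frame (made-up timestamps and counts, standing in for the media-attention data):

```python
import pandas as pd

# Toy intraday frame: two rows on Jan 1, one row on Jan 2.
idx = pd.to_datetime(['2008-01-01 06:00', '2008-01-01 18:00', '2008-01-02 09:00'])
toy = pd.DataFrame({'mentions': [2, 3, 5]}, index=idx)

# Resample to one row per day, summing the intraday counts.
daily = toy.resample('1D').sum()
print(daily['mentions'].tolist())  # [5, 5]
```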
# +
#df_media_long= pd.read_csv('MediaAttentionLong_2008.csv', parse_dates=[0], index_col='Time')
# -
df_media_long.columns = ['{}_Long'.format(c) for c in df_media_long.columns]
df_media_long.loc['2008',:].info()
for ticker in dfs:
dfs[ticker] = dfs[ticker].merge(df_media, how='inner', left_index=True, right_index=True)
dfs[ticker] = dfs[ticker].merge(df_media_long, how='inner', left_index=True, right_index=True)
dfs['BLK'].head(10)
# # Model
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score, make_scorer
from sklearn.model_selection import cross_val_score, KFold
model = RandomForestClassifier(n_estimators=100, max_depth=4, random_state=314, n_jobs=4)
for ticker in dfs:
X = dfs[ticker].drop(['Label', 'Close'], axis=1).fillna(-1)
y = dfs[ticker].loc[:,'Label']
scores = cross_val_score(model, X, y,
scoring=make_scorer(accuracy_score),
cv=KFold(10, shuffle=True, random_state=314),
n_jobs=1
)
print('{} prediction performance in accuracy = {:.3f}+-{:.3f}'.format(ticker,
np.mean(scores),
np.std(scores)
))
plt.figure(figsize=(15,12))
sns.heatmap(dfs['BLK'].corr(), vmin=-0.2, vmax=0.2)
# ### Train a model on all data
df = pd.concat(list(dfs.values()), axis=0)
X = df.drop(['Label', 'Close'], axis=1).fillna(-1)
y = df.loc[:,'Label']
scores = cross_val_score(model, X, y,
scoring=make_scorer(accuracy_score),
cv=KFold(10, shuffle=True, random_state=314),
n_jobs=1
)
print('{} prediction performance in accuracy = {:.3f}+-{:.3f}'.format('ALL',
np.mean(scores),
np.std(scores)
))
scores
mdl = model.fit(X,y)
import shap
shap.initjs()
explainer=shap.TreeExplainer(mdl)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X, plot_type="bar")
# ## LightGBM
# + code_folding=[5, 14, 118]
from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold
from sklearn.base import clone, ClassifierMixin, RegressorMixin
import lightgbm as lgb
def train_single_model(clf_, X_, y_, random_state_=314, opt_parameters_={}, fit_params_={}):
'''
A wrapper to train a model with particular parameters
'''
c = clone(clf_)
c.set_params(**opt_parameters_)
c.set_params(random_state=random_state_)
return c.fit(X_, y_, **fit_params_)
def train_model_in_CV(model, X, y, metric, metric_args={},
model_name='xmodel',
seed=31416, n=5,
opt_parameters_={}, fit_params_={},
verbose=True,
groups=None, y_eval=None):
    # the list of classifiers for voting ensemble
clfs = []
# performance
perf_eval = {'score_i_oof': 0,
'score_i_ave': 0,
'score_i_std': 0,
'score_i': []
}
# full-sample oof prediction
y_full_oof = pd.Series(np.zeros(shape=(y.shape[0],)),
index=y.index)
sample_weight=None
if 'sample_weight' in metric_args:
sample_weight=metric_args['sample_weight']
index_weight=None
if 'index_weight' in metric_args:
index_weight=metric_args['index_weight']
del metric_args['index_weight']
doSqrt=False
if 'sqrt' in metric_args:
doSqrt=True
del metric_args['sqrt']
if groups is None:
cv = KFold(n, shuffle=True, random_state=seed) #Stratified
else:
cv = GroupKFold(n)
# The out-of-fold (oof) prediction for the k-1 sample in the outer CV loop
y_oof = pd.Series(np.zeros(shape=(X.shape[0],)),
index=X.index)
scores = []
clfs = []
for n_fold, (trn_idx, val_idx) in enumerate(cv.split(X, (y!=0).astype(np.int8), groups=groups)):
X_trn, y_trn = X.iloc[trn_idx], y.iloc[trn_idx]
X_val, y_val = X.iloc[val_idx], y.iloc[val_idx]
if 'LGBMRanker' in type(model).__name__ and groups is not None:
G_trn, G_val = groups.iloc[trn_idx], groups.iloc[val_idx]
if fit_params_:
# use _stp data for early stopping
fit_params_["eval_set"] = [(X_trn,y_trn), (X_val,y_val)]
fit_params_['verbose'] = verbose
if index_weight is not None:
fit_params_["sample_weight"] = y_trn.index.map(index_weight).values
fit_params_["eval_sample_weight"] = [None, y_val.index.map(index_weight).values]
if 'LGBMRanker' in type(model).__name__ and groups is not None:
fit_params_['group'] = G_trn.groupby(G_trn, sort=False).count()
fit_params_['eval_group'] = [G_trn.groupby(G_trn, sort=False).count(),
G_val.groupby(G_val, sort=False).count()]
#display(y_trn.head())
clf = train_single_model(model, X_trn, y_trn, 314+n_fold, opt_parameters_, fit_params_)
clfs.append(('{}{}'.format(model_name,n_fold), clf))
# oof predictions
if isinstance(clf, RegressorMixin):
y_oof.iloc[val_idx] = clf.predict(X_val)
elif isinstance(clf, ClassifierMixin) and metric.__name__=='roc_auc_score':
y_oof.iloc[val_idx] = clf.predict_proba(X_val)[:,1]
else:
y_oof.iloc[val_idx] = clf.predict(X_val)
# prepare weights for evaluation
if sample_weight is not None:
metric_args['sample_weight'] = y_val.map(sample_weight)
elif index_weight is not None:
metric_args['sample_weight'] = y_val.index.map(index_weight).values
# prepare target values
y_true_tmp = y_val if 'LGBMRanker' not in type(model).__name__ and y_eval is None else y_eval.iloc[val_idx]
y_pred_tmp = y_oof.iloc[val_idx] if y_eval is None else y_oof.iloc[val_idx]
#store evaluated metric
scores.append(metric(y_true_tmp, y_pred_tmp, **metric_args))
#cleanup
del X_trn, y_trn, X_val, y_val, y_true_tmp, y_pred_tmp
# Store performance info for this CV
if sample_weight is not None:
metric_args['sample_weight'] = y_oof.map(sample_weight)
elif index_weight is not None:
metric_args['sample_weight'] = y_oof.index.map(index_weight).values
perf_eval['score_i_oof'] = metric(y, y_oof, **metric_args)
perf_eval['score_i'] = scores
if doSqrt:
for k in perf_eval.keys():
if 'score' in k:
perf_eval[k] = np.sqrt(perf_eval[k])
scores = np.sqrt(scores)
perf_eval['score_i_ave'] = np.mean(scores)
perf_eval['score_i_std'] = np.std(scores)
return clfs, perf_eval, y_oof
def print_perf_clf(name, perf_eval):
print('Performance of the model:')
print('Mean(Val) score inner {} Classifier: {:.4f}+-{:.4f}'.format(name,
perf_eval['score_i_ave'],
perf_eval['score_i_std']
))
print('Min/max scores on folds: {:.4f} / {:.4f}'.format(np.min(perf_eval['score_i']),
np.max(perf_eval['score_i'])))
print('OOF score inner {} Classifier: {:.4f}'.format(name, perf_eval['score_i_oof']))
print('Scores in individual folds: {}'.format(perf_eval['score_i']))
# -
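# The `train_single_model` wrapper above relies on `sklearn.base.clone` to start from a fresh, unfitted copy of the estimator before applying per-fold parameters and seeds. In isolation the pattern looks like this (parameter values are illustrative):

```python
from sklearn.base import clone
from sklearn.ensemble import RandomForestClassifier

base = RandomForestClassifier(n_estimators=100, random_state=314)

c = clone(base)                   # fresh, unfitted copy; `base` is untouched
c.set_params(**{'max_depth': 3})  # per-call hyperparameter overrides
c.set_params(random_state=42)     # per-fold seed
print(c.get_params()['max_depth'], c.get_params()['random_state'])  # 3 42
```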
mdl_inputs = {
'lgbm1_reg': (lgb.LGBMClassifier(max_depth=-1, min_child_samples=400, random_state=314, silent=True, metric='None',
n_jobs=4, n_estimators=1000, learning_rate=0.1),
{'colsample_bytree': 0.9, 'min_child_weight': 10.0, 'num_leaves': 20, 'reg_alpha': 1, 'subsample': 0.9},
{"early_stopping_rounds":20,
"eval_metric" : 'binary_logloss',
'eval_names': ['train', 'early_stop'],
'verbose': False,
#'callbacks': [lgb.reset_parameter(learning_rate=learning_rate_decay_power)],
},#'categorical_feature': 'auto'},
y,
None
),
'rf1': (
RandomForestClassifier(n_estimators=100, max_depth=4, random_state=314, n_jobs=4),
{},
{},
y,
None
)
}
# +
# %%time
mdls = {}
results = {}
y_oofs = {}
for name, (mdl, mdl_pars, fit_pars, y_, g_) in mdl_inputs.items():
print('--------------- {} -----------'.format(name))
mdl_, perf_eval_, y_oof_ = train_model_in_CV(mdl, X, y_, accuracy_score,
metric_args={},
model_name=name,
opt_parameters_=mdl_pars,
fit_params_=fit_pars,
n=10, seed=3146,
verbose=500,
groups=g_)
results[name] = perf_eval_
mdls[name] = mdl_
y_oofs[name] = y_oof_
print_perf_clf(name, perf_eval_)
# -
# # Train LGBM model on a simple train/val/test split (70/15/15)
from sklearn.model_selection import train_test_split
X_1, X_tst, y_1, y_tst = train_test_split(X, y, test_size=0.15, shuffle=True, random_state=314)
X_trn, X_val, y_trn, y_val = train_test_split(X_1, y_1, test_size=0.15, shuffle=True, random_state=31)
mdl = lgb.LGBMClassifier(max_depth=-1, min_child_samples=400, random_state=314,
silent=True, metric='None',
n_jobs=4, n_estimators=1000, learning_rate=0.1,
**{'colsample_bytree': 0.9, 'min_child_weight': 10.0, 'num_leaves': 20, 'reg_alpha': 1, 'subsample': 0.9}
)
mdl.fit(X_trn, y_trn,
**{"early_stopping_rounds":20,
"eval_metric" : 'binary_logloss',
'eval_set': [(X_trn, y_trn), (X_val, y_val)],
'eval_names': ['train', 'early_stop'],
'verbose': 100
})
print('Accuracy score on train/validation/test samples is: {:.3f}/{:.3f}/{:.3f}'.format(accuracy_score(y_trn, mdl.predict(X_trn)),
accuracy_score(y_val, mdl.predict(X_val)),
accuracy_score(y_tst, mdl.predict(X_tst))
))
# ## LGBM model explanation
# ### SHAP
explainer=shap.TreeExplainer(mdl)
shap_values = explainer.shap_values(X_tst)
shap.summary_plot(shap_values, X_tst, plot_type="bar")
# _To understand how a single feature effects the output of the model we can plot **the SHAP value of that feature vs. the value of the feature** for all the examples in a dataset. Since SHAP values represent a feature's responsibility for a change in the model output, **the plot below represents the change in predicted label as either of chosen variables changes**._
for f in ['negative_frac', 'BlackRock_count_Long', 'positive_count', 'diff_7d']:
shap.dependence_plot(f, shap_values, X_tst)
# ### LIME
import lime
from lime.lime_tabular import LimeTabularExplainer
explainer = LimeTabularExplainer(X_trn.values,
feature_names=X_trn.columns,
class_names=['Down','Up'],
verbose=False,
mode='classification')
exp= []
for i in range(5):
e = explainer.explain_instance(X_trn.iloc[i,:].values, mdl.predict_proba)
_ = e.as_pyplot_figure(label=1)
#exp.append(e)
import pickle
with open('model_lightgbm.pkl', 'wb') as fout:
pickle.dump(mdl, fout)
| python/money_making_model.ipynb |