# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.3
# language: julia
# name: julia-1.5
# ---
# # 6.S083 / 18.S190 Problem set 2: Probability and modelling recovery
#
# ## Submission deadline: 11:59pm on Tuesday, April 14
#
#
# In this problem set we will look at probability distributions and how to model recovery from an infection.
#
# Note: If you are unable to get `Interact.jl` to work to create interactive visualizations, note this fact on your pset submission. Where it says to create an interactive visualization, instead write a function that creates one of the plots, and run that function for different parameter values manually, showing the resulting plots.
#
# Similarly, if you have problems running simulations for larger values of the parameters, just do what you can and make a note.
# ## Exercise 1: Frequencies
#
# In this exercise we will write a more general function to
# count occurrences of values in a set of data. In class we used
# a `Vector`, but that is difficult to use with data whose values are spread far apart (like `0` and `1000`).
#
# 1. Write a function `counts` that accepts a vector `data` and calculates the number of times each value in `data` occurs.
#
# To do so, use a **dictionary** called `counts`.
# A dictionary maps a **key** to a **value**.
# We want to map integers to integers, so we create an empty dictionary with
#
# ```jl
# d = Dict{Int, Int}()
# ```
#
# Use the `haskey` function to find out whether the `Dict` contains a given key or not.
#
# Use indexing to add a new (key, value) pair, or to retrieve a value:
#
# ```jl
# d[3] = 1
# d[3]
# ```
#
# The function should return the dictionary.
#
# 2. Test that your code is correct by applying it to obtain the counts of the data vector `vv = [1, 0, 1, 0, 1000, 1, 1, 1000]`. What
# should the result be?
#
#
# The dictionary contains the information as a sequence of **pairs** mapping keys to values. This is not a particularly useful form for us. Instead we would prefer a vector of the keys and a vector of the values, sorted in order of the key.
#
# Make a new version of the `counts` function where you do the following (below). Start off by just running the following commands each in their own cell on the dictionary you got by running the previous `counts` function on the vector `vv` so that you see the result of running each command. Once you have understood what's happening at *each* step, add them to the `counts` function in a new cell.
#
# 3. Extract vectors `ks` of keys and `vs` of values using the `keys()` and `values()` functions and convert the results into a vector using the `collect` function.
#
# 4. Define a variable `p` as the result of running the `sortperm` on the keys. This gives a **permutation** that tells you in which order you need to take the keys to give a sorted version.
#
# 5. Use indexing `ks[p]` to return the sorted keys and values vectors.
#
# [Here we are passing in a *vector* as the index. Julia extracts the values at the indices given in that vector]
#
# 6. Test that your new `counts` function gives the correct result for the vector `vv` by comparing it to the true result (that you get by doing the counting by hand!)
#
# 7. Make a function `probability_distribution` that normalizes the result of `counts` to calculate the relative frequency, i.e. to give a probability distribution (i.e. such that the sum of the resulting vector is 1).
#
# The function should return the keys (the unique data that was in the original data set, as calculated in `counts`) and the probabilities (relative frequencies).
#
# Test that it gives the correct result for the vector `vv`.
#
# We will use this function in the rest of the exercises.
function counts(data)
    dict = Dict{Int, Int}()
    for x in data
        if !haskey(dict, x)
            dict[x] = 1
        else
            dict[x] += 1
        end
    end
    return dict
end
vv = [1, 0, 1, 0, 1000, 1, 1, 1000]
counts(vv)
function counts(data)
    dict = Dict{Int, Int}()
    for x in data
        if !haskey(dict, x)
            dict[x] = 1
        else
            dict[x] += 1
        end
    end
    ks = collect(keys(dict))
    vs = collect(values(dict))
    p = sortperm(ks)   # permutation that puts the keys in sorted order
    return ks[p], vs[p]
end
vv = [1, 0, 1, 0, 1000, 1, 1, 1000]
counts(vv)
function probability_distribution(data)
    ks, vs = counts(data)
    vs = vs / sum(vs)
    return ks, vs
end
probability_distribution(vv)
# ### Exercise 2: Modelling recovery
#
# In this exercise, we will investigate the simple model of recovery from an infection that was described in lectures. We
# want to study the time $\tau$ to recover.
#
# In this model, an individual who is infected has probability $p$ to recover each day. If they recover on day $n$ then $\tau = n$.
# We see that $\tau$ is a random variable, so we need to study its **probability distribution**.
#
# 1. Define the function `bernoulli(p)` from lectures. Recall that this generates `true` with probability $p$ and `false` with probability $(1 - p)$.
#
# 2. Write a function `geometric(p)`. This should run a simulation with probability $p$ to recover and wait *until* the individual recovers, at which point it returns the time taken to recover.
#
# 3. Write a function `experiment(p, N)` that runs the function from [2] `N` times and collects the results into a vector.
#
# 4. Run an experiment with $p=0.25$ and $N=10,000$. Plot the resulting probability distribution, i.e. plot $P(\tau = n)$ against $n$, where $n$ is the recovery time.
#
# 5. Calculate the mean recovery time and add it to the plot using the `vline!()` function and the `ls=:dash` argument to make a dashed line.
#
# Note that `vline!` requires a *vector* of values where you wish to draw vertical lines.
#
# 6. What shape does the distribution seem to have? Can you verify that by using one or more log scales?
#
# 7. Write an interactive visualization that repeats [4] for $p$ varying between $0$ and $1$ and $N$ between $0$ and $100,000$.
#
# As you vary $p$, what do you observe? Does that make sense?
#
# Note that you can make a range for $p$ using something like `0:0.01:1.0`, and you can make a `@manipulate` with additional sliders using a double `for` loop: `for a in 1:10, b in 1:10`.
#
# 8. Fix $N = 10,000$ and calculate the *mean* time $\langle \tau(p) \rangle$ to recover. Plot this as a function of $p$.
# Can you find the relationship between the two quantities?
function bernoulli(p)
    r = rand()
    return r < p
end

function geometric(p)
    i = 0
    recover = false
    while !recover
        i = i + 1
        recover = bernoulli(p)
    end
    return i
end
geometric(0.001)
function experiment(p, N)
    v = [geometric(p) for i in 1:N]
    return v
end
results = experiment(0.25, 10000)   # naming this `exp` would shadow Base.exp
a, b = probability_distribution(results)
using Plots
using Statistics
using Interact
plot(a, b, fmt=:png)
vline!([mean(results)], ls=:dash)   # vline! takes a *vector* of x-positions
plot(a, b, fmt=:png, yscale=:log10)
vline!([mean(results)], ls=:dash)
@manipulate for p in 0:0.01:1, N in 1000:1000:100000   # start at 1000 to avoid an empty experiment
    results = experiment(p, N)
    a, b = probability_distribution(results)
    plot(a, b, fmt=:png)
    vline!([mean(results)], ls=:dash)
end
means = Float64[]
ps = 0.001:0.01:1
for p in ps
    results = experiment(p, 10_000)   # N = 10,000 as specified in [8]
    push!(means, mean(results))
end
plot(ps, means, fmt=:png)
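# The curve above is consistent with $\langle \tau(p) \rangle = 1/p$, the mean of
# the geometric distribution. A short derivation (this is not part of the original
# solution):
#
# $$
# \langle \tau(p) \rangle = \sum_{n=1}^{\infty} n \, p (1-p)^{n-1} = \frac{p}{(1-(1-p))^2} = \frac{1}{p},
# $$
#
# using $\sum_{n \ge 1} n x^{n-1} = 1/(1-x)^2$, obtained by differentiating the
# geometric series $\sum_{n \ge 0} x^n = 1/(1-x)$ term by term.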
# ### Exercise 3: More efficient geometric distributions
#
# Let's use the notation $P_n := \mathbb{P}(\tau = n)$ for the probability to recover on the $n$th step.
#
# Probability theory tells us that in the limit of an infinite number of trials, we have the exact results $P_1 = p$, that $P_2 = p (1-p)$, and in general $P_n = p (1 - p)^{n-1}$.
#
# 1. Fix $p = 0.25$. Make a vector of the values $P_n$ for $n=1, \dots, 50$. You must use a loop or similar construction; do *not* do this by hand!
#
# 2. Plot $P_n$ as a function of $n$. Compare it to the result from the previous exercise (i.e. plot them both on the same graph).
#
# How could we measure the *error*, i.e. the distance between the two graphs? What do you think determines it?
#
#
# If $p$ is *small*, say $p=0.001$, then the algorithm we used in
# Exercise 2 to sample from the geometric distribution will be very slow, since it just sits there calculating a lot of `false`s!
# (The average amount of time taken is what you hopefully found in [1.8])
#
# Let's make a better algorithm. Think of each probability $P_n$ as a "bin" of length $P_n$. If we lay those bins next to each other starting from $P_1$ on the left, then $P_2$, etc., there will be an *infinite* number of bins filling up the interval between $0$ and $1$. (In principle there is no upper limit on how many days it will take to recover, although the probability becomes *very* small.)
#
# Now suppose we take a uniform random number $r$ between $0$ and $1$. That will fall into one of the bins. If it falls into the bin corresponding to $P_n$, then we return $n$ as the recovery time!
#
# 3. To draw this picture, we need to add up the lengths of the lines from 1 to $n$ for each $n$, i.e. calculate the **cumulative sum**. Write a function `cumulative_sum`, which returns a new vector. (Of course, you should only do this for a finite number of values! Say those that you found in [2]. )
#
# 4. Plot the resulting values on a horizontal line. Generate a few random points and plot those. Convince yourself that the probability that a point hits a bin is equal to the length of that bin.
#
# 5. Calculate analytically the sum of $P_1$ up to $P_n$. (Hint: This should be a calculation that you did in high school or in Calculus I.)
#
# 6. Use the result of [5] to find analytically which bin $n$ a given value of $r \in [0, 1]$ falls into, using the inequality $C_{n-1} \le r < C_n$, where $C_n := \sum_{k=1}^{n} P_k$ is the cumulative sum from [3].
#
# 7. Implement this using the `floor` function.
p = 0.25
v = Vector()
for i in 1:50
prob = p*(1-p)^(i-1)
push!(v, prob)
end
plot(v, fmt=:png)
plot!(a,b)
function cumulative_sum(vec)
    new_vec = Float64[]
    total = 0.0
    for x in vec
        total += x
        push!(new_vec, total)
    end
    return new_vec
end
v2 = cumulative_sum(v)
plot(v2, m=:o, fmt=:png)
z = zeros(length(v2))
z[rand(1:length(v2))] = 1   # rand(1:n) avoids the index-0 error that rounding rand()*n can produce
plot!(z)
# Analytically, the cumulative sum is $C_n = 1 - (1-p)^n$ (a finite geometric series).
#
# Solving $C_{n-1} \le r < C_n$ for $n$ gives $n - 1 \le \log(1-r)/\log(1-p) < n$, so $n = \lfloor \log(1-r)/\log(1-p) \rfloor + 1$.
function tell_bin(p)
    # the "+ 1" is needed because recovery times start at n = 1, not n = 0
    return floor(Int, log(1 - rand()) / log(1 - p)) + 1
end
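# As a check on the closed form used above, the finite geometric sum (standard
# algebra, not part of the original solution):
#
# $$
# C_n = \sum_{k=1}^{n} p (1-p)^{k-1} = p \cdot \frac{1 - (1-p)^n}{1 - (1-p)} = 1 - (1-p)^n .
# $$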
mean([tell_bin(0.5) for i in 1:100])
# ### Exercise 4: A simple infection model
#
# In this exercise we will investigate a *highly* simplified model of the process of infection and recovery. (In the next problem set we will develop a much better model.)
#
# The model is as follows: An individual starts in state `S` ("susceptible"). When they are in state `S`, they have a probability $p_E$ to become exposed (state `E`) at each step. Once they are exposed, they have probability $p_I$ to become infectious (state `I`). When they are infectious, they have a probability $p_R$ to recover at each step.
#
# Let's denote by $\tau_S$ the length of time spent in state `S`, and similarly for $\tau_E$ and $\tau_I$.
#
# 1. How does the total time $\tau_\text{total}$ to go from `S` to `R` relate to these times? What is the relation with the geometric random variables from the previous exercises?
#
# 2. Write a function `total_time(p_E, p_I, p_R)` that calculates the total time to go from state `S` to state `R`.
#
# 3. Run a Monte Carlo simulation to calculate and plot the probability distribution of $\tau_\text{total}$ for $p_E = 0.25$, $p_I = 0.1$ and $p_R = 0.05$.
#
# 4. What happens to the probability distribution of the total time as you add more states, say `E_1`, `E_2`? You may suppose that the probability to move to the next state is the same value $p$ for all states. How could you use $s$ such states?
#
# [This is a visual representation of a famous theorem that we will discuss in class.]
#
# 5. **Extra credit:** Write a simulation that runs $N$ individuals and keeps track at each step of how many people are in which state. Plot the resulting graph of the number of people in the `S`, `I`, `E` and `R` states as a function of time.
# tau_total = tau_S + tau_E + tau_I (once in R, the individual stays there)
# Sum of geometric RVs
function total_time(p_E, p_I, p_R)
    return geometric(p_E) + geometric(p_I) + geometric(p_R)
end
vec = [total_time(0.25, 0.1, 0.05) for i in 1:10000]
a, b = probability_distribution(vec)
plot(a, b, fmt=:png)
function total_time2(p_E, p_E1, p_E2, p_I, p_R)
    return geometric(p_E) + geometric(p_E1) + geometric(p_E2) + geometric(p_I) + geometric(p_R)
end
vec = [total_time(0.25, 0.25, 0.25) for i in 1:10000]
a, b = probability_distribution(vec)
plot(a, b, fmt=:png)
vec = [total_time2(0.25, 0.25, 0.25, 0.25, 0.25) for i in 1:10000]
a, b = probability_distribution(vec)
plot(a, b, fmt=:png)
N = 10000
n_s = N
n_e = 0
n_i = 0
n_r = 0
v_s = Float64[]
v_e = Float64[]
v_i = Float64[]
v_r = Float64[]
for i in 1:100
    # deterministic mean-field update: at each step a fraction of each
    # compartment flows to the next one (S -> E -> I -> R)
    new_e = 0.3 * n_s
    new_i = 0.2 * n_e
    new_r = 0.05 * n_i
    n_s -= new_e
    n_e += new_e - new_i
    n_i += new_i - new_r
    n_r += new_r
    push!(v_s, n_s)
    push!(v_e, n_e)
    push!(v_i, n_i)
    push!(v_r, n_r)
end
plot(v_s, fmt=:png)
plot!(v_e)
plot!(v_i)
plot!(v_r)
# ### Exercise 5: Helping with transcripts
#
# Correct another 10 + 20 lines of the transcripts (see problem set 1 for details) and report which ones you did.
| PS2.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="axmewNekXEkc" outputId="ffe428ea-e48e-49f0-d80b-00541825e1e8"
try:
    from google.colab import drive
    drive.mount('/content/drive')
    data_dir = '/content/drive/MyDrive/data/task-1'
    bert_dir = '/content/drive/MyDrive/BERT'
    roberta_dir = '/content/drive/MyDrive/RoBERTa'
except ImportError:  # not running on Colab, fall back to local paths
    data_dir = './data/'
    bert_dir = './BERT/BERT'
    roberta_dir = './RoBERTa/RoBERTa'
# + colab={"base_uri": "https://localhost:8080/"} id="tV6UcBDWEGUr" outputId="172f13cd-c0f1-4b87-e56d-39c9acb115a0"
# !pip install ekphrasis
# + colab={"base_uri": "https://localhost:8080/"} id="MbZQUEp1Xk31" outputId="a5189d5f-a8f5-47ff-8a3b-a2a4c1ef78fc"
# !pip install transformers
# + colab={"base_uri": "https://localhost:8080/"} id="wzCRdDeahuqk" outputId="704b0c15-1af0-4de7-ecaf-6ef98e6b9b8e"
# !python -m spacy download en_core_web_lg
# + [markdown] id="Kucw77SvTQEo"
# ### Setting up
# + id="WX9TqmK7lDoK"
# Imports
import torch
import torch.nn as nn
import pandas as pd
import numpy as np
import codecs
import re
import spacy
import nltk
import torch.nn.functional as F
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from torch.utils.data import Dataset, random_split
from ekphrasis.classes.preprocessor import TextPreProcessor
from ekphrasis.classes.tokenizer import SocialTokenizer
from transformers import BertForSequenceClassification, BertTokenizer, RobertaForSequenceClassification, RobertaTokenizer
# + id="X09jt8VRlDoM"
# Setting random seed and device
SEED = 1
torch.manual_seed(SEED)
torch.cuda.manual_seed(SEED)
torch.backends.cudnn.deterministic = True
use_cuda = torch.cuda.is_available()
device = torch.device("cuda:0" if use_cuda else "cpu")
# + [markdown] id="8Dnji6Do7bAy"
# ### Loading the data
# + id="AqhlzLl6lDoO"
# Load data
test_df = pd.read_csv(f'{data_dir}/test.csv')
# + id="3d9SWnJpOqL0"
nlp = spacy.load('en_core_web_lg')
# + id="kN0NukhFcraW" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="37b72001-ab45-48b3-8041-610e0581f30b"
test_df.head()
# + [markdown] id="T9ci8eyzThNF"
# ### Preprocessing functions
# + id="EGhbGB_Phpu1"
def capitalisation_by_ner(sentence, entities=['GPE', 'ORG', 'NORP', 'PERSON']):
    edited_row = []
    trial_doc = nlp(sentence)
    for tok in trial_doc:
        if tok.ent_type_ in entities:
            edited_row.append(tok.text)
        else:
            edited_row.append(tok.text.lower())
    return ' '.join(edited_row)
# + id="q8pCEBigg47h"
# Word replacement
# Join the contractions
# Tokenize
# remove stop words
# remove punct EXCEPT ! ? #
# Twitter handles
def preprocessor(df):
    _df = pd.DataFrame(index=df.index, columns=['edited_sentences', 'meanGrade'])
    _df['meanGrade'] = df.meanGrade
    text_processor = TextPreProcessor(
        fix_html=True,  # fix HTML tokens
        # corpus from which the word statistics are going to be used
        # for word segmentation
        segmenter="english",
        # corpus from which the word statistics are going to be used
        # for spell correction
        corrector="english",
        unpack_hashtags=False,  # perform word segmentation on hashtags
        unpack_contractions=False,  # Unpack contractions (can't -> can not)
        spell_correct=True,  # spell correction for elongated words
    )
    punct = r"[\.,:;\(\)\[\]@\-\$£]"
    nltk.download('stopwords')
    stops = stopwords.words('english')
    # Word replacement + join the contractions
    # NOTE: need to deal with ' '
    # NOTE: Numbers/digits have not been removed
    # NOTE: We have removed all stop words. We analysed the sentiment of the stop
    # words in the training set to determine if removing them would negatively
    # affect our results. The motivation for this check was that any word with a
    # sentiment would affect the funniness score of the sentence.
    # Since stop words have no sentiment, they have been removed
    # This doesn't retain any twitter handles, but retains the hashtags
    _df['edited_sentences'] = df[['original', 'edit']] \
        .apply(lambda x: re.subn("<.*/>", x[1], x[0])[0], axis=1) \
        .apply(lambda x: capitalisation_by_ner(x)) \
        .str.replace(r" (?P<one>\w*'\w+)", lambda x: x.group("one")) \
        .apply(lambda x: text_processor.pre_process_doc(x)) \
        .str.replace("#", "# ") \
        .str.replace("[‘’]", "'") \
        .str.replace("'s", "") \
        .str.replace(punct, "") \
        .apply(lambda x: " ".join([w for w in x.split(" ") if w not in stops])) \
        .str.replace("[0-9]", "")
    return _df
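# The `re.subn("<.*/>", edit, original)` step above is what swaps the edited word
# into the headline. A minimal standalone sketch of just that substitution (the
# sample headline and edit word below are invented for illustration):

```python
import re

# In this task's data, the word to replace is wrapped as <word/>
original = "Trump <vows/> to rebuild the economy"
edit = "dances"

# re.subn returns (new_string, number_of_substitutions); the pipeline keeps [0]
edited, n_subs = re.subn("<.*/>", edit, original)
print(edited)   # Trump dances to rebuild the economy
print(n_subs)   # 1
```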
# + [markdown] id="Yxi4X4YP7kSe"
# ### Setting up the models and the evaluation functions
# + id="NzXeDgHmlDob"
def model_eval(data_loader, model):
    model.eval()
    preds = []
    targets = []
    model = model.to(device)
    with torch.no_grad():
        for batch in data_loader:
            input_ids = batch['input_ids'].to(device)
            attention_mask = batch['attention_mask'].to(device)
            labels = batch['labels'].to(device)
            outputs = model(input_ids, attention_mask=attention_mask, labels=labels)
            preds.extend(outputs.logits.squeeze(1).detach().cpu().numpy())
            targets.extend(labels.detach().cpu().numpy())
    preds = np.array(preds)
    targets = np.array(targets)
    model_performance(preds, targets, print_output=True)
    return preds, targets
# + id="2_22fHHElDog"
# How we print the model performance
def model_performance(output, target, print_output=False):
    """
    Returns the SSE and MSE over all predictions (optionally printing MSE and RMSE).
    """
    sq_error = (output - target)**2
    sse = np.sum(sq_error)
    mse = np.mean(sq_error)
    rmse = np.sqrt(mse)
    if print_output:
        print(f'| MSE: {mse:.2f} | RMSE: {rmse:.2f} |')
    return sse, mse
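# On a toy pair of prediction/target arrays (values invented for illustration),
# the arithmetic in `model_performance` reduces to:

```python
import numpy as np

output = np.array([1.0, 2.0, 3.0])
target = np.array([1.0, 1.0, 5.0])

sq_error = (output - target) ** 2   # [0, 1, 4]
sse = np.sum(sq_error)              # 5.0
mse = np.mean(sq_error)             # 5/3
rmse = np.sqrt(mse)
print(sse, mse, rmse)
```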
# + id="jzQ0KLXslDoq"
class Task1Dataset(Dataset):
    def __init__(self, train_data, labels):
        self.x_train = train_data
        self.y_train = labels

    def __len__(self):
        return len(self.y_train)

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.x_train.items()}
        item['labels'] = torch.tensor(self.y_train[idx], dtype=torch.float)
        return item
# + colab={"base_uri": "https://localhost:8080/"} id="xeLPXxWThOod" outputId="f63648e3-dadc-4ef6-d237-c5cecd576496"
clean_test_df = preprocessor(test_df)
# + id="3DUPjPIzCl8P"
test_data = clean_test_df['edited_sentences']
# + id="X9U4LSkzTk43"
bert_model = BertForSequenceClassification.from_pretrained(bert_dir)
# + id="L3cGviFaTvgP"
roberta_model = RobertaForSequenceClassification.from_pretrained(roberta_dir)
# + colab={"base_uri": "https://localhost:8080/"} id="ZAjup_B1UR06" outputId="7b27919b-04fe-4f90-f5f4-1a51966a4afd"
tokenizer_bert = BertTokenizer.from_pretrained('bert-base-uncased')
tokenizer_roberta = RobertaTokenizer.from_pretrained('roberta-base')
# + [markdown] id="uifYAIdtUc7k"
# ### Evaluating the BERT model on unseen test data
# + id="sd58T-FpCuvz"
test_X = tokenizer_bert(test_data.to_list(), add_special_tokens=False, padding=True, return_tensors="pt")
# + id="oOlgvA1xC8ES"
test_dataset = Task1Dataset(test_X, test_df['meanGrade'])
# + id="81WbQ7fdDaDx"
bert_model = bert_model.to(device)
# + id="bp9d32sOlDo6" colab={"base_uri": "https://localhost:8080/"} outputId="09a89e5d-d478-4c82-a34e-18702555fcc2"
BATCH_SIZE = 32
test_loader = torch.utils.data.DataLoader(test_dataset, batch_size=BATCH_SIZE)
print("Dataloaders created.")
# + colab={"base_uri": "https://localhost:8080/"} id="LCondAvEi4Ro" outputId="8370c117-7b1b-41d6-c1bb-1287541458dc"
predictions, target = model_eval(test_loader, bert_model)
model_performance(predictions, target, print_output=True)
# + id="gcydCZnXVN9Y"
clean_test_df['predictions_bert'] = predictions
# + colab={"base_uri": "https://localhost:8080/", "height": 501} id="Clg4NNpsVRxX" outputId="301907b5-0ccf-4822-d79e-ab9e544cc1ee"
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(20, 10))
fig.suptitle('Final Test Dataset Analysis')
sns.boxplot(x='meanGrade', y='predictions_bert', data=clean_test_df, ax=ax1)
sns.scatterplot(x='meanGrade', y='predictions_bert', data=clean_test_df, ax=ax2)
plt.show()
# + [markdown] id="c8TMh92YVb9T"
# ### Evaluating the RoBERTa model on unseen test data
# + id="wnktU8FWVmeA"
test_X_roberta = tokenizer_roberta(test_data.to_list(), add_special_tokens=False, padding=True, return_tensors="pt")
# + id="ozSdmVfGVmeJ"
test_dataset_roberta = Task1Dataset(test_X_roberta, test_df['meanGrade'])
# + id="japtAu56VmeJ"
roberta_model = roberta_model.to(device)
# + colab={"base_uri": "https://localhost:8080/"} id="xpEXk54aVmeJ" outputId="ac7f72a8-8ff0-4cd4-ccee-5e0f2492a911"
BATCH_SIZE = 32
test_loader_roberta = torch.utils.data.DataLoader(test_dataset_roberta, batch_size=BATCH_SIZE)
print("Dataloaders created.")
# + colab={"base_uri": "https://localhost:8080/"} id="ywPmNmqvVmeK" outputId="9fc1dcda-f4e8-4dc8-e777-d1b2be9dc898"
predictions_r, target_r = model_eval(test_loader_roberta, roberta_model)
model_performance(predictions_r, target_r, print_output=True)
# + id="bJVyvqUE9uYB" colab={"base_uri": "https://localhost:8080/", "height": 202} outputId="a7ab97d9-42cb-417e-f9f3-4e48a046d77b"
clean_test_df['predictions_roberta'] = predictions_r
clean_test_df.head()
# + id="s5Nnuij3Vfjo" colab={"base_uri": "https://localhost:8080/", "height": 501} outputId="0805a3d1-c9c3-43fb-db2e-3a5794bae7c7"
fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(20, 10))
fig.suptitle('Final Test Dataset Analysis for RoBERTa model')
sns.boxplot(x='meanGrade', y='predictions_roberta', data=clean_test_df, ax=ax1)
sns.scatterplot(x='meanGrade', y='predictions_roberta', data=clean_test_df, ax=ax2)
plt.show()
| Evaluating_BERT_and_RoBERTa_on_the_unseen_test_data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: dev
# language: python
# name: dev
# ---
import numpy as np
from sklearn.datasets import load_boston  # NOTE: removed in scikit-learn 1.2; requires an older scikit-learn
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import SGDRegressor as skSGDRegressor
# ### Implementation 1
# - scikit-learn loss = "squared_loss", penalty="l2"/"none"
# - similar to sklearn.linear_model.LinearRegression
# +
def _loss(x, y, coef, intercept):
    p = np.dot(x, coef) + intercept
    return 0.5 * (p - y) * (p - y)


def _grad(x, y, coef, intercept):
    p = np.dot(x, coef) + intercept
    # clip gradient (consistent with scikit-learn)
    dloss = np.clip(p - y, -1e12, 1e12)
    coef_grad = dloss * x
    intercept_grad = dloss
    return coef_grad, intercept_grad
# -
class SGDRegressor():
    def __init__(self, penalty="l2", alpha=0.0001, max_iter=1000, tol=1e-3,
                 shuffle=True, random_state=0,
                 eta0=0.01, power_t=0.25, n_iter_no_change=5):
        self.penalty = penalty
        self.alpha = alpha
        self.max_iter = max_iter
        self.tol = tol
        self.shuffle = shuffle
        self.random_state = random_state
        self.eta0 = eta0
        self.power_t = power_t
        self.n_iter_no_change = n_iter_no_change

    def fit(self, X, y):
        coef = np.zeros(X.shape[1])
        intercept = 0
        best_loss = np.inf
        no_improvement_count = 0
        t = 1
        rng = np.random.RandomState(self.random_state)
        for epoch in range(self.max_iter):
            # different from how data is shuffled in scikit-learn
            if self.shuffle:
                ind = rng.permutation(X.shape[0])
                X, y = X[ind], y[ind]
            sumloss = 0
            for i in range(X.shape[0]):
                sumloss += _loss(X[i], y[i], coef, intercept)
                eta = self.eta0 / np.power(t, self.power_t)
                coef_grad, intercept_grad = _grad(X[i], y[i], coef, intercept)
                if self.penalty == "l2":
                    coef *= 1 - eta * self.alpha
                coef -= eta * coef_grad
                intercept -= eta * intercept_grad
                t += 1
            if sumloss > best_loss - self.tol * X.shape[0]:
                no_improvement_count += 1
            else:
                no_improvement_count = 0
            if no_improvement_count == self.n_iter_no_change:
                break
            if sumloss < best_loss:
                best_loss = sumloss
        self.coef_ = coef
        self.intercept_ = np.array([intercept])
        self.n_iter_ = epoch + 1
        return self

    def predict(self, X):
        y_pred = np.dot(X, self.coef_) + self.intercept_
        return y_pred
# shuffle=False penalty="none"
X, y = load_boston(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDRegressor(shuffle=False, penalty="none").fit(X, y)
clf2 = skSGDRegressor(shuffle=False, penalty="none").fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.allclose(pred1, pred2)
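# The learning rate used inside `fit` follows scikit-learn's "invscaling"
# schedule, eta_t = eta0 / t**power_t. A quick sketch with the defaults above:

```python
# eta0=0.01, power_t=0.25 are the defaults used by the class above
eta0, power_t = 0.01, 0.25

def eta(t):
    # per-update learning rate: decays with the total number of updates t
    return eta0 / t ** power_t

print(eta(1))    # 0.01
print(eta(16))   # 0.005  (16**0.25 == 2)
```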
# shuffle=False penalty="l2"
for alpha in [0.1, 1, 10]:
    X, y = load_boston(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    clf1 = SGDRegressor(shuffle=False, alpha=alpha).fit(X, y)
    clf2 = skSGDRegressor(shuffle=False, alpha=alpha).fit(X, y)
    assert np.allclose(clf1.coef_, clf2.coef_)
    assert np.allclose(clf1.intercept_, clf2.intercept_)
    pred1 = clf1.predict(X)
    pred2 = clf2.predict(X)
    assert np.allclose(pred1, pred2)
# ### Implementation 2
# - scikit-learn loss = "huber", penalty="l2"/"none"
# +
def _loss(x, y, coef, intercept, epsilon):
    p = np.dot(x, coef) + intercept
    r = p - y
    if np.abs(r) <= epsilon:
        return 0.5 * r * r
    else:
        return epsilon * (np.abs(r) - 0.5 * epsilon)


def _grad(x, y, coef, intercept, epsilon):
    p = np.dot(x, coef) + intercept
    r = p - y
    if np.abs(r) <= epsilon:
        dloss = r
    elif r > epsilon:
        dloss = epsilon
    else:
        dloss = -epsilon
    dloss = np.clip(dloss, -1e12, 1e12)
    coef_grad = dloss * x
    intercept_grad = dloss
    return coef_grad, intercept_grad
# -
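# A quick sanity check on the Huber derivative used above: compare the piecewise
# analytic derivative against a central finite difference, away from the kink at
# |r| == epsilon. The helper names below restate the Huber pieces for a
# self-contained check; they are not part of the implementation above.

```python
def huber_loss(r, epsilon):
    # Huber loss as a function of the residual r = p - y
    if abs(r) <= epsilon:
        return 0.5 * r * r
    return epsilon * (abs(r) - 0.5 * epsilon)

def huber_dloss(r, epsilon):
    # piecewise derivative of the Huber loss wrt r
    if abs(r) <= epsilon:
        return r
    return epsilon if r > epsilon else -epsilon

eps, h = 0.1, 1e-7
for r in [-0.5, -0.05, 0.02, 0.3]:
    numeric = (huber_loss(r + h, eps) - huber_loss(r - h, eps)) / (2 * h)
    assert abs(numeric - huber_dloss(r, eps)) < 1e-5
print("gradient check passed")
```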
class SGDRegressor():
    def __init__(self, penalty="l2", alpha=0.0001, max_iter=1000, tol=1e-3,
                 shuffle=True, epsilon=0.1, random_state=0,
                 eta0=0.01, power_t=0.25, n_iter_no_change=5):
        self.penalty = penalty
        self.alpha = alpha
        self.max_iter = max_iter
        self.tol = tol
        self.shuffle = shuffle
        self.epsilon = epsilon
        self.random_state = random_state
        self.eta0 = eta0
        self.power_t = power_t
        self.n_iter_no_change = n_iter_no_change

    def fit(self, X, y):
        coef = np.zeros(X.shape[1])
        intercept = 0
        best_loss = np.inf
        no_improvement_count = 0
        t = 1
        rng = np.random.RandomState(self.random_state)
        for epoch in range(self.max_iter):
            # different from how data is shuffled in scikit-learn
            if self.shuffle:
                ind = rng.permutation(X.shape[0])
                X, y = X[ind], y[ind]
            sumloss = 0
            for i in range(X.shape[0]):
                sumloss += _loss(X[i], y[i], coef, intercept, self.epsilon)
                eta = self.eta0 / np.power(t, self.power_t)
                coef_grad, intercept_grad = _grad(X[i], y[i], coef, intercept, self.epsilon)
                if self.penalty == "l2":
                    coef *= 1 - eta * self.alpha
                coef -= eta * coef_grad
                intercept -= eta * intercept_grad
                t += 1
            if sumloss > best_loss - self.tol * X.shape[0]:
                no_improvement_count += 1
            else:
                no_improvement_count = 0
            if no_improvement_count == self.n_iter_no_change:
                break
            if sumloss < best_loss:
                best_loss = sumloss
        self.coef_ = coef
        self.intercept_ = np.array([intercept])
        self.n_iter_ = epoch + 1
        return self

    def predict(self, X):
        y_pred = np.dot(X, self.coef_) + self.intercept_
        return y_pred
# shuffle=False penalty="none"
X, y = load_boston(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf1 = SGDRegressor(shuffle=False, penalty="none").fit(X, y)
clf2 = skSGDRegressor(loss="huber", shuffle=False, penalty="none").fit(X, y)
assert np.allclose(clf1.coef_, clf2.coef_)
assert np.allclose(clf1.intercept_, clf2.intercept_)
pred1 = clf1.predict(X)
pred2 = clf2.predict(X)
assert np.allclose(pred1, pred2)
# shuffle=False penalty="l2"
for alpha in [0.1, 1, 10]:
    X, y = load_boston(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    clf1 = SGDRegressor(shuffle=False, alpha=alpha).fit(X, y)
    clf2 = skSGDRegressor(loss="huber", shuffle=False, alpha=alpha).fit(X, y)
    assert np.allclose(clf1.coef_, clf2.coef_)
    assert np.allclose(clf1.intercept_, clf2.intercept_)
    pred1 = clf1.predict(X)
    pred2 = clf2.predict(X)
    assert np.allclose(pred1, pred2)
| linear_model/SGDRegressor.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# <!-- dom:TITLE: Demo - Working with Functions -->
# # Demo - Working with Functions
# <!-- dom:AUTHOR: <NAME> Email:<EMAIL> at Department of Mathematics, University of Oslo. -->
# <!-- Author: -->
# **<NAME>** (email: `<EMAIL>`), Department of Mathematics, University of Oslo.
#
# Date: **Oct 23, 2020**
#
# Copyright 2020, <NAME>. Released under CC Attribution 4.0 license
#
# **Summary.** This is a demonstration of how the Python module [shenfun](https://github.com/spectralDNS/shenfun) can be used to work with
# global spectral functions in one and several dimensions.
#
#
# ## Construction
#
# A global spectral function $u(x)$ is represented on a one-dimensional
# domain (a line) as
# $$
# u(x) = \sum_{k=0}^{N-1} \hat{u}_k \psi_k(x)
# $$
# where $\psi_k(x)$ is the $k$'th basis function and $x$ is a
# position inside the domain. $\{\hat{u}_k\}_{k=0}^{N-1}$ are the
# expansion coefficients for the series, often referred to as the
# degrees of freedom. There is one degree of freedom per basis function.
# We can use any number of basis functions,
# and the span of the chosen basis is then a function space. Also part of the
# function space is the domain, which is
# specified when a function space is created. To create a function space
# $T=\text{span}\{T_k\}_{k=0}^{N-1}$ for
# the first N Chebyshev polynomials of the first kind on the default domain $[-1, 1]$,
# do
from shenfun import *
N = 8
T = FunctionSpace(N, 'Chebyshev', domain=(-1, 1))
# The function $u(x)$ can now be created with all N coefficients
# equal to zero as
u = Function(T)
# When using Chebyshev polynomials the computational domain is always
# $[-1, 1]$. However, we can still use a different physical domain,
# like
T = FunctionSpace(N, 'Chebyshev', domain=(0, 1))
# and under the hood shenfun will then map this domain to the reference
# domain through
# $$
# u(x) = \sum_{k=0}^{N-1} \hat{u}_k \psi_k(2(x-0.5))
# $$
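# shenfun performs this affine mapping internally. The same map can be checked
# with NumPy's Chebyshev module instead of shenfun (this standalone sketch is
# not part of the shenfun API):

```python
import numpy as np
from numpy.polynomial import chebyshev

# coefficients of u = T_1, i.e. u(X) = X on the reference domain [-1, 1]
c = [0.0, 1.0]

# physical domain (0, 1): map x to the reference coordinate X = 2*(x - 0.5)
x = 0.75
X = 2 * (x - 0.5)
print(chebyshev.chebval(X, c))   # 0.5
```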
# ## Approximating analytical functions
#
# The `u` function above was created with only zero
# valued coefficients, which is the default. Alternatively,
# a [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function) may be initialized using a constant
# value
T = FunctionSpace(N, 'Chebyshev', domain=(-1, 1))
u = Function(T, val=1)
# but that is not very useful. A third method to initialize
# a [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function) is to interpolate using an analytical
# Sympy function.
import sympy as sp
x = sp.Symbol('x', real=True)
u = Function(T, buffer=4*x**3-3*x)
print(u)
# Here the analytical Sympy function will first be evaluated
# on the entire quadrature mesh of the `T` function space,
# and then forward transformed to get the coefficients. This
# corresponds to a projection to `T`. The projection is
#
# Find $u_h \in T$, such that
# $$
# (u_h - u, v)_w = 0 \quad \forall v \in T,
# $$
# where $v \in \{T_j\}_{j=0}^{N-1}$ is a test function,
# $u_h=\sum_{k=0}^{N-1} \hat{u}_k T_k$ is a trial function and the
# notation $(\cdot, \cdot)_w$ represents a weighted inner product.
# In this projection $u_h$ is the [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function), $u$ is the sympy function and we use sympy
# to exactly evaluate $u$ on all quadrature points
# $\{x_j\}_{j=0}^{N-1}$. With quadrature we then have
# $$
# (u, v)_w = \sum_{j\in\mathcal{I}^N} u(x_j) v(x_j) w_j \quad \forall \, v \in T,
# $$
# where $\mathcal{I}^N = (0, 1, \ldots, N-1)$ and $\{w_j\}_{j\in \mathcal{I}^N}$
# are the quadrature weights. The left hand side of the projection is
# $$
# (u_h, v)_w = \sum_{j\in\mathcal{I}^N} u_h(x_j) v(x_j) w_j \quad \forall \, v \in T.
# $$
# A linear system of equations arises when we insert the basis
# functions
# $$
# \left(u, T_i\right)_w = \tilde{u}_i \quad \forall \, i \in \mathcal{I}^N,
# $$
# and
# $$
# \begin{align*}
# \left(u_h, T_i \right)_w &= (\sum_{k\in \mathcal{I}^N} \hat{u}_k T_k , T_i)_w \\
# &= \sum_{k\in \mathcal{I}^N} \left( T_k, T_i\right)_w \hat{u}_k
# \end{align*}
# $$
# with the mass matrix
# $$
# a_{ik} = \left( T_k, T_i\right)_w \quad \forall \, (i, k) \in \mathcal{I}^N \times \mathcal{I}^N,
# $$
# we can now solve to get the unknown
# expansion coefficients. In matrix notation
# $$
# \hat{u} = A^{-1} \tilde{u},
# $$
# where $\hat{u}=\{\hat{u}_i\}_{i\in \mathcal{I}^N}$,
# $\tilde{u}=\{\tilde{u}_i\}_{i \in \mathcal{I}^N}$ and
# $A=\{a_{ik}\}_{(i,k) \in \mathcal{I}^N \times \mathcal{I}^N}$.
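# The same projection can be reproduced by hand with Numpy's Chebyshev utilities
# and Chebyshev-Gauss quadrature. Since $4x^3 - 3x = T_3(x)$, solving the linear
# system should recover a coefficient vector with a one in position 3. A sketch
# independent of shenfun:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

N = 8
xj, wj = cheb.chebgauss(N)            # Chebyshev-Gauss quadrature points/weights
V = cheb.chebvander(xj, N - 1)        # V[j, k] = T_k(x_j)
uj = 4*xj**3 - 3*xj                   # u evaluated on the quadrature mesh

A = (V * wj[:, None]).T @ V           # mass matrix a_{ik} = (T_k, T_i)_w
u_tilde = V.T @ (wj * uj)             # right-hand side (u, T_i)_w
u_hat = np.linalg.solve(A, u_tilde)   # expansion coefficients

print(np.round(u_hat, 12))            # close to [0 0 0 1 0 0 0 0]
```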
#
# ## Adaptive function size
#
# The number of basis functions can also be left open during creation
# of the function space, through
T = FunctionSpace(0, 'Chebyshev', domain=(-1, 1))
# This is useful if you want to approximate a function and
# are uncertain how many basis functions are required.
# For example, you may want to approximate the function $\cos(20 x)$.
# You can then find the required [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function) using
u = Function(T, buffer=sp.cos(20*x))
print(len(u))
# We see that $N=45$ is required to resolve this function. This agrees
# well with what is also reported by [Chebfun](https://www.chebfun.org/docs/guide/guide01.html).
# Note that in this process a new [FunctionSpace()](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.FunctionSpace) has been
# created under the hood. The function space of `u` can be
# extracted using
Tu = u.function_space()
print(Tu.N)
# To further show that shenfun is compatible with Chebfun we can also
# approximate the Bessel function
T1 = FunctionSpace(0, 'Chebyshev', domain=(0, 100))
u = Function(T1, buffer=sp.besselj(0, x))
print(len(u))
# which gives 83 basis functions, in close agreement with Chebfun (89).
# The difference lies only in the cut-off criterion. By default we cut
# coefficients with a relative tolerance of 1e-12, but if we make this
# criterion a little stricter, we arrive at a slightly higher number:
u = Function(T1, buffer=sp.besselj(0, x), reltol=1e-14)
print(len(u))
# Plotting the function on its quadrature points looks
# a bit ragged, though:
# +
# %matplotlib inline
import matplotlib.pyplot as plt
Tu = u.function_space()
plt.plot(Tu.mesh(), u.backward())
# -
# To improve the quality of this plot we can instead evaluate the
# function on more points
xj = np.linspace(0, 100, 1000)
plt.plot(xj, u(xj))
# Alternatively, we can refine the function, which simply
# pads zeros to $\hat{u}$
up = u.refine(200)
Tp = up.function_space()
plt.plot(Tp.mesh(), up.backward())
# The padded expansion coefficients are now given as
print(up)
# ## More features
#
# Since we have used a regular Chebyshev basis above, there
# are many more features that could be explored simply by going through
# [Numpy's Chebyshev module](https://numpy.org/doc/stable/reference/routines.polynomials.chebyshev.html).
# For example, we can create a Chebyshev series like
import numpy.polynomial.chebyshev as cheb
c = cheb.Chebyshev(u, domain=(0, 100))
# The Chebyshev series in Numpy has a wide range of possibilities,
# see [here](https://numpy.org/doc/stable/reference/generated/numpy.polynomial.chebyshev.Chebyshev.html#numpy.polynomial.chebyshev.Chebyshev).
# However, we may also work directly with the Chebyshev
# coefficients already in `u`. To find the roots of the
# polynomial that approximates the Bessel function on
# domain $[0, 100]$, we can do
z = Tu.map_true_domain(cheb.chebroots(u))
# Note that the roots are found on the reference domain $[-1, 1]$
# and as such we need to move the result to the physical domain using
# `map_true_domain`. The resulting roots `z` contain both real and
# complex values, so to extract the real roots we need to filter a little:
z2 = z[np.where((z.imag == 0)*(z.real > 0)*(z.real < 100))].real
print(z2[:5])
# Here `np.where` returns the indices where the condition is true. The condition
# is that the imaginary part is zero, whereas the real part is within the
# true domain $[0, 100]$.
#
# **Notice.**
#
# Calling `cheb.chebroots(c)` directly does not work as expected (even though
# the series has been generated with the non-standard domain), because
# Numpy only looks for roots in the reference domain $[-1, 1]$.
#
#
#
# We could also use a function space with boundary conditions built
# in, like
Td = FunctionSpace(0, 'C', bc=(sp.besselj(0, 0), sp.besselj(0, 100)), domain=(0, 100))
ud = Function(Td, buffer=sp.besselj(0, x))
print(len(ud))
# As we can see this leads to a function space of dimension
# very similar to the orthogonal space.
#
# The major advantages of working with a space with boundary conditions
# built in only comes to life when solving differential equations. As
# long as we are only interested in approximating functions, we may just
# as well stick to the orthogonal spaces. This goes for Legendre as
# well as Chebyshev.
#
# ## Multidimensional functions
#
# Multidimensional tensor product spaces are created
# by taking the tensor products of one-dimensional function spaces.
# For example
C0 = FunctionSpace(20, 'C')
C1 = FunctionSpace(20, 'C')
T = TensorProductSpace(comm, (C0, C1))
u = Function(T)
# Here $\text{T} = \text{C0} \otimes \text{C1}$, the basis function is
# $T_i(x) T_j(y)$ and the Function `u` is
# $$
# u(x, y) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} \hat{u}_{ij} T_i(x) T_j(y).
# $$
# Multidimensional Functions work more or less exactly like in the
# 1D case. Here we can interpolate 2D Sympy functions
y = sp.Symbol('y', real=True)
u = Function(T, buffer=sp.cos(10*x)*sp.cos(10*y))
X = T.local_mesh(True)
plt.contourf(X[0], X[1], u.backward())
# Like for 1D the coefficients are computed through projection,
# where the exact function is evaluated on all quadrature points
# in the mesh.
#
# The Cartesian mesh represents the quadrature points of the
# two function spaces, and can be visualized as follows
X = T.mesh()
for xj in X[0]:
    for yj in X[1]:
        plt.plot((xj, xj), (X[1][0, 0], X[1][0, -1]), 'k')
        plt.plot((X[0][0], X[0][-1]), (yj, yj), 'k')
# We may alternatively plot on a uniform mesh
X = T.local_mesh(broadcast=True, uniform=True)
plt.contourf(X[0], X[1], u.backward(kind='uniform'))
# ## Curvilinear coordinates
#
# With shenfun it is possible to use curvilinear coordinates,
# and not necessarily with orthogonal basis vectors. With
# curvilinear coordinates the computational coordinates are
# always straight lines, rectangles and cubes. But the physical
# coordinates can be very complex.
#
# Consider the unit disc with polar coordinates. Here
# the position vector $\mathbf{r}$ is given by
# $$
# \mathbf{r} = r\cos \theta \mathbf{i} + r\sin \theta \mathbf{j}
# $$
# The physical domain is $\Omega = \{(x, y): x^2 + y^2 < 1\}$,
# whereas the computational domain is the Cartesian product
# $D = \{(r, \theta) \in [0, 1] \times [0, 2 \pi]\}$.
#
# We create this domain in shenfun through
r, theta = psi = sp.symbols('x,y', real=True, positive=True)
rv = (r*sp.cos(theta), r*sp.sin(theta))
B0 = FunctionSpace(20, 'C', domain=(0, 1))
F0 = FunctionSpace(20, 'F')
T = TensorProductSpace(comm, (B0, F0), coordinates=(psi, rv))
# Note that we are using a Fourier space for the azimuthal
# direction, since the solution here needs to be periodic.
# We can now create functions on the space using an
# analytical function in computational coordinates
u = Function(T, buffer=(1-r)*r*sp.sin(sp.cos(theta)))
# However, when this is plotted it may not be what you expect
X = T.local_mesh(True)
plt.contourf(X[0], X[1], u.backward(), 100)
# We see that the function has been plotted in computational coordinates,
# and not on the disc, as you probably expected. To plot on
# the disc we need the physical mesh, and not the computational
X = T.local_cartesian_mesh()
plt.contourf(X[0], X[1], u.backward(), 100)
# **Notice.**
#
# The periodic plot does not wrap all the way around the circle. This is
# not wrong, since the periodic grid does not include the same point twice,
# but it does not look very good. To overcome this problem we can wrap the
# grid all the way around and re-plot.
up = u.backward()
xp, yp, up = wrap_periodic([X[0], X[1], up], axes=[1])
plt.contourf(xp, yp, up, 100)
# ## Adaptive functions in multiple dimensions
#
# If you want to find a good resolution for a function in multiple
# dimensions, the procedure is exactly like in 1D. First create function
# spaces with 0 quadrature points, and then call [Function](https://shenfun.readthedocs.io/en/latest/shenfun.forms.html#shenfun.forms.arguments.Function)
B0 = FunctionSpace(0, 'C', domain=(0, 1))
F0 = FunctionSpace(0, 'F')
T = TensorProductSpace(comm, (B0, F0), coordinates=(psi, rv))
u = Function(T, buffer=((1-r)*r)**2*sp.sin(sp.cos(theta)))
print(u.shape)
# The algorithm used to find the approximation in multiple dimensions
# simply treats the problem one direction at a time. So in this case
# it first finds a resolved space along the first direction using
# a function ` ~ ((1-r)*r)**2`, and then along the second using
# a function ` ~ sp.sin(sp.cos(theta))`.
#
#
# <!-- ======= Bibliography ======= -->
| docs/source/functions.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # FloPy
#
# ### A quick demo of how to control the ASCII format of numeric arrays written by FloPy
# load and run the Freyberg model
# +
import sys
import os
import platform
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
# run installed version of flopy or add local path
try:
    import flopy
except ImportError:
    fpth = os.path.abspath(os.path.join('..', '..'))
    sys.path.append(fpth)
    import flopy
# Set name of MODFLOW exe
# assumes executable is in users path statement
version = 'mf2005'
exe_name = 'mf2005'
if platform.system() == 'Windows':
    exe_name = 'mf2005.exe'
mfexe = exe_name
# Set the paths
loadpth = os.path.join('..', 'data', 'freyberg')
modelpth = os.path.join('data')
# make sure the modelpth directory exists
if not os.path.exists(modelpth):
    os.makedirs(modelpth)
print(sys.version)
print('numpy version: {}'.format(np.__version__))
print('matplotlib version: {}'.format(mpl.__version__))
print('flopy version: {}'.format(flopy.__version__))
# -
ml = flopy.modflow.Modflow.load('freyberg.nam', model_ws=loadpth,
                                exe_name=exe_name, version=version)
ml.model_ws = modelpth
ml.write_input()
success, buff = ml.run_model()
if not success:
    print('Something bad happened.')
files = ['freyberg.hds', 'freyberg.cbc']
for f in files:
    if os.path.isfile(os.path.join(modelpth, f)):
        msg = 'Output file located: {}'.format(f)
        print(msg)
    else:
        errmsg = 'Error. Output file cannot be found: {}'.format(f)
        print(errmsg)
# Each ``Util2d`` instance now has a ```.format``` attribute, which is an ```ArrayFormat``` instance:
print(ml.lpf.hk[0].format)
# The ```ArrayFormat``` class exposes each of the attributes seen in the ```ArrayFormat.__str__()``` call. ```ArrayFormat``` also exposes ``.fortran``, ``.py`` and ``.numpy`` attributes, which are the respective format descriptors:
print(ml.dis.botm[0].format.fortran)
print(ml.dis.botm[0].format.py)
print(ml.dis.botm[0].format.numpy)
# #### (re)-setting ```.format```
#
# We can reset the format using a standard fortran type format descriptor
ml.dis.botm[0].format.fortran = "(6f10.4)"
print(ml.dis.botm[0].format.fortran)
print(ml.dis.botm[0].format.py)
print(ml.dis.botm[0].format.numpy)
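# Conceptually, a descriptor like `(6f10.4)` means six fields per line, each 10
# characters wide with 4 decimals. A plain-numpy sketch of that layout (an
# illustration only, not FloPy's actual writer):

```python
import numpy as np

def write_fixed(arr, ncol=6, width=10, decimal=4):
    """Render a 1-D array as fixed-format text, ncol fields per line."""
    fmt = '%{}.{}f'.format(width, decimal)
    lines = []
    for i in range(0, arr.size, ncol):
        lines.append(''.join(fmt % v for v in arr[i:i + ncol]))
    return '\n'.join(lines)

print(write_fixed(np.arange(8.0)))
```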
ml.write_input()
success, buff = ml.run_model()
# Let's load the model we just wrote and check that the desired ```botm[0].format``` was used:
ml1 = flopy.modflow.Modflow.load("freyberg.nam",model_ws=modelpth)
print(ml1.dis.botm[0].format)
# We can also reset individual format components (we can also generate some warnings):
ml.dis.botm[0].format.width = 9
ml.dis.botm[0].format.decimal = 1
print(ml.dis.botm[0].format)
# We can also select ``free`` format. Note that setting to free format resets the format attributes to the default, max precision:
ml.dis.botm[0].format.free = True
print(ml.dis.botm[0].format)
ml.write_input()
success, buff = ml.run_model()
ml1 = flopy.modflow.Modflow.load("freyberg.nam",model_ws=modelpth)
print(ml1.dis.botm[0].format)
| examples/Notebooks/flopy3_array_outputformat_options.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import json
import os
import numpy as np
import csv
fileList = ['../Datasets/' + f for f in os.listdir('../Datasets') if f.endswith('.json')]
fileList
combinedList=[]
commitList=[]
commitInfoFileList = ['../Datasets/' + f for f in os.listdir('../Datasets') if f.endswith('.csv')]
for f in fileList:
    commitInfoFile = f[:-5] + ".jsoncommMap.csv"
    if commitInfoFile in commitInfoFileList:
        fileObj = open(f)
        data = json.load(fileObj)
        combinedL = []
        for event in data:
            try:
                if event["type"] == "IssuesEvent" and event["payload"]["action"] == "closed":
                    combinedL.append([0, event["payload"]["issue"]])
                if event["type"] == "PullRequestEvent" and event["payload"]["action"] == "closed":
                    combinedL.append([1, event["payload"]["pull_request"]])
            except (KeyError, TypeError):
                # some events lack the expected payload fields
                pass
        fileObj.close()
        combinedList.append(combinedL)
        commit = []
        with open(commitInfoFile, 'r') as fo:
            csvFile = csv.reader(fo)
            for lines in csvFile:
                commit.append((int(lines[0]), [] if lines[1] == '[]' else [y[1:-1] for y in lines[1][1:-1].split(', ')]))
        commitList.append(commit)
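# The bracket-stripping above assumes single-quoted SHAs with no embedded commas.
# `ast.literal_eval` is a more robust way to parse such a stringified list
# (a sketch with a made-up cell value):

```python
import ast

cell = "['abc123', 'def456']"   # hypothetical CSV cell holding a stringified list
shas = ast.literal_eval(cell)
print(shas)                     # ['abc123', 'def456']
```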
issuePrArr=[]
count=0
commitList
for i in range(len(commitList)):
    commitL = dict(commitList[i])
    combinedL = combinedList[i]
    count = 0
    combinedIndex = 0
    for item in combinedL:
        combinedIndex += 1
        if item[0] == 1:
            continue
        ev = item[1]
        count += 1
        if count in commitL:
            for commit in commitL[count]:
                j = max(0, combinedIndex - 10)
                while j < min(len(combinedL), combinedIndex + 10):  # stay within bounds near the end of the list
                    if combinedL[j][0] == 1 and combinedL[j][1]["merge_commit_sha"] == commit:
                        issuePrArr.append([ev["title"], ev["body"], ','.join([label["name"] for label in ev["labels"]]), combinedL[j][1]["title"], combinedL[j][1]["body"]])
                    j += 1
issuePrArr
len(issuePrArr)
postproc=[]
for ipr in issuePrArr:
    postproc.append([ipr[i].encode('UTF-8').encode('string-escape') for i in range(5)])
postproc
f=open("../Datasets/"+"Dataset.csv", "wb")
w = csv.writer(f,quoting=csv.QUOTE_ALL)
for line in postproc:
    w.writerow(line)
f.close()
postnp = np.array(postproc)
np.savetxt("../Datasets/"+"Dataset.csv", postnp, fmt='%s')  # fmt='%s' is required for string data; note this overwrites the csv.writer output above
g=open("../Datasets/"+"Dataset.csv", "rb")
w = csv.reader(g,quoting=csv.QUOTE_ALL)
for line in w:
    print(len(line))
g.close()
| Notebooks/Combine.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# > **Copyright (c) 2020 <NAME>**<br><br>
# > **Copyright (c) 2021 Skymind Education Group Sdn. Bhd.**<br>
# <br>
# Licensed under the Apache License, Version 2.0 (the "License");
# <br>you may not use this file except in compliance with the License.
# <br>You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0/
# <br>
# <br>Unless required by applicable law or agreed to in writing, software
# <br>distributed under the License is distributed on an "AS IS" BASIS,
# <br>WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# <br>See the License for the specific language governing permissions and
# <br>limitations under the License.
# <br>
# <br>
# **SPDX-License-Identifier: Apache-2.0**
# <br>
# # Sentiment Analysis
# ## Introduction
# So far, all of the analysis we've done has been pretty generic - looking at counts, creating scatter plots, etc. These techniques could be applied to numeric data as well.
#
# When it comes to text data, there are a few popular techniques that we'll be going through in the next few notebooks, starting with sentiment analysis. A few key points to remember about sentiment analysis:
#
# 1. **TextBlob Module:** Linguistic researchers have labeled the sentiment of words based on their domain expertise. The sentiment of a word can vary based on where it appears in a sentence. The TextBlob module allows us to take advantage of these labels.
# 2. **Sentiment Labels:** Each word in a corpus is labeled in terms of polarity and subjectivity (there are more labels as well, but we're going to ignore them for now). A corpus' sentiment is the average of these.
# * **Polarity**: How positive or negative a word is. -1 is very negative. +1 is very positive.
# * **Subjectivity**: How subjective, or opinionated a word is. 0 is fact. +1 is very much an opinion.
#
# For more info on how TextBlob computes sentiment, see this write-up on its [sentiment function](https://planspace.org/20150607-textblob_sentiment/).
#
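# Conceptually (ignoring TextBlob's real lexicon, intensifiers and negations),
# a text's polarity is an average over its scored words. A toy sketch with a
# made-up lexicon:

```python
# Made-up scores; TextBlob's actual values come from its pattern lexicon.
lexicon = {'great': 0.8, 'terrible': -1.0, 'okay': 0.2}

def toy_polarity(text):
    scores = [lexicon[w] for w in text.lower().split() if w in lexicon]
    return sum(scores) / len(scores) if scores else 0.0

print(toy_polarity('The show was great but the ending was terrible'))  # (0.8 - 1.0) / 2
```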
# Let's take a look at the sentiment of the various transcripts, both overall and throughout the comedy routine.
# # Notebook Content
#
# * [Sentiment of Routine](#Sentiment-of-Routine)
#
#
# * [Sentiment of Routine Over Time](#Sentiment-of-Routine-Over-Time)
#
#
# * [Additional Exercises](#Additional-Exercises)
# ## Sentiment of Routine
# +
# We'll start by reading in the corpus, which preserves word order
import pandas as pd
data = pd.read_pickle('models/corpus.pkl')
data
# +
# Create quick lambda functions to find the polarity and subjectivity of each routine
# Terminal / Anaconda Navigator: conda install -c conda-forge textblob
from textblob import TextBlob
pol = lambda x: TextBlob(x).sentiment.polarity
sub = lambda x: TextBlob(x).sentiment.subjectivity
data['polarity'] = data['transcript'].apply(pol)
data['subjectivity'] = data['transcript'].apply(sub)
data
# +
# Let's plot the results
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [10, 8]
for index, comedian in enumerate(data.index):
    x = data.polarity.loc[comedian]
    y = data.subjectivity.loc[comedian]
    plt.scatter(x, y, color='blue')
    plt.text(x+.001, y+.001, data['full_name'][index], fontsize=10)
plt.xlim(-.01, .12)
plt.title('Sentiment Analysis', fontsize=20)
plt.xlabel('<-- Negative -------- Positive -->', fontsize=15)
plt.ylabel('<-- Facts -------- Opinions -->', fontsize=15)
plt.show()
# -
# ## Sentiment of Routine Over Time
# Instead of looking at the overall sentiment, let's see if there's anything interesting about the sentiment over time throughout each routine.
# +
# Split each routine into 10 parts
import numpy as np
import math
def split_text(text, n=10):
    '''Takes in a string of text and splits into n equal parts, with a default of 10 equal parts.'''
    # Calculate length of text, the size of each chunk of text and the starting points of each chunk of text
    length = len(text)
    size = math.floor(length / n)
    start = np.arange(0, length, size)
    # Pull out equally sized pieces of text and put it into a list
    split_list = []
    for piece in range(n):
        split_list.append(text[start[piece]:start[piece]+size])
    return split_list
# -
# Let's take a look at our data again
data
# +
# Let's create a list to hold all of the pieces of text
list_pieces = []
for t in data.transcript:
    split = split_text(t)
    list_pieces.append(split)
list_pieces
# -
# The list has 10 elements, one for each transcript
len(list_pieces)
# Each transcript has been split into 10 pieces of text
len(list_pieces[0])
# +
# Calculate the polarity for each piece of text
polarity_transcript = []
for lp in list_pieces:
    polarity_piece = []
    for p in lp:
        polarity_piece.append(TextBlob(p).sentiment.polarity)
    polarity_transcript.append(polarity_piece)
polarity_transcript
# -
# Show the plot for one comedian
plt.plot(polarity_transcript[0])
plt.title(data['full_name'].index[0])
plt.show()
# +
# Show the plot for all comedians
plt.rcParams['figure.figsize'] = [16, 12]
for index, comedian in enumerate(data.index):
    plt.subplot(3, 4, index+1)
    plt.plot(polarity_transcript[index])
    plt.plot(np.arange(0,10), np.zeros(10))
    plt.title(data['full_name'][index])
    plt.ylim(ymin=-.2, ymax=.3)
plt.show()
# -
# <NAME> stays generally positive throughout her routine. Similar comedians are Louis C.K. and <NAME>.
#
# On the other hand, you have some pretty different patterns here like <NAME> who gets happier as time passes and <NAME> who has some pretty down moments in his routine.
# ## Additional Exercises
# 1. Modify the number of sections the comedy routine is split into and see how the charts over time change.
# # Contributors
#
# **Author**
# <br><NAME>
# # References
#
# 1. [Natural Language Processing in Python](https://www.youtube.com/watch?v=xvqsFTUsOmc&t=6s)
| nlp-labs/Day_04/Statistical_Models/3- Sentiment-Analysis.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="RfCwVSjFui2i"
import pandas as pd
# + [markdown] id="CFf-zxDj22na"
# Load the data into a pandas DataFrame
# + id="nqY2NZCJuvvS"
college_placement_df = pd.read_csv("collegePlace.csv")
# + [markdown] id="aPVGeplW2-5g"
# Extract the first 5 rows of the dataframe
# + id="Xfcg-kZfu2T-" colab={"base_uri": "https://localhost:8080/", "height": 204} outputId="37bbc849-ec40-442b-af8f-dd56656411cc"
college_placement_df.head(5)
# + [markdown] id="FQoIApWV3Md3"
# Display the column names of the dataframe
# + id="dO9m7FR1u_Tc" colab={"base_uri": "https://localhost:8080/"} outputId="4d23b2ff-d330-4755-9080-ae0ee1b4bb3b"
print(college_placement_df.columns)
# + [markdown] id="2qoXGDOO3S8k"
# Filter the dataframe by Age
# + id="uxAwWVR53W4F"
college_place_21_df = college_placement_df[college_placement_df["Age"] == 21]
college_place_22_df = college_placement_df[college_placement_df["Age"] == 22]
# + [markdown] id="Dts7i-pl3m2u"
# Find the dimensions of the filtered pandas dataframes
# + colab={"base_uri": "https://localhost:8080/"} id="_KrE0C8g3mCI" outputId="25ebb30f-ccd8-4cce-ca1e-80790deca3fe"
print(college_place_21_df.shape)
print(college_place_22_df.shape)
# + [markdown] id="S2hT3Dxh30f9"
# Find the average CGPA, the number of students placed and the total number of students in each stream
# + id="BEx3IoQv3zmx"
college_place_by_stream = college_placement_df.groupby("Stream").agg({'CGPA':'mean', 'PlacedOrNot':'sum'}).reset_index()
college_student_by_stream = college_placement_df.groupby("Stream").agg({'PlacedOrNot':'count'}).reset_index()
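# On a toy frame with made-up values, the mean/sum aggregation above behaves like this:

```python
import pandas as pd

toy = pd.DataFrame({'Stream': ['ECE', 'ECE', 'CS'],
                    'CGPA': [7.0, 9.0, 8.0],
                    'PlacedOrNot': [1, 0, 1]})
out = toy.groupby('Stream').agg({'CGPA': 'mean', 'PlacedOrNot': 'sum'}).reset_index()
print(out)   # one row per Stream: mean CGPA and number of placed students
```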
# + colab={"base_uri": "https://localhost:8080/"} id="U7jyzLhD4w8Y" outputId="1d764736-e35e-42be-fc4c-9f3d5a58997e"
print(college_place_by_stream.head(2))
print(college_student_by_stream.head(2))
# + [markdown] id="-1JOFoaS4A1v"
# Rename columns in the pandas dataframe
# + id="LCoO6Qp44F_J"
college_place_by_stream = college_place_by_stream.rename(columns = {"PlacedOrNot":"Number_of_Students_Placed"})
college_student_by_stream = college_student_by_stream.rename(columns = {"PlacedOrNot":"Number_of_Students"})
# + [markdown] id="9qcsIOqW5ehx"
# Join college_place_by_stream and college_student_by_stream
# + id="5A0CMKay5feJ"
college_placement_join = pd.merge(college_place_by_stream,college_student_by_stream,on = ['Stream'],how = "inner")
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="VCpM5Y-G5UJq" outputId="d4a2cabf-f174-45b8-ce4a-1773ea334978"
college_placement_join.head(5)
# + [markdown] id="HUER58Ba6A01"
# Change the datatype of columns
# + id="pmogGepU6KNp"
college_placement_join['Number_of_Students_Placed'] = college_placement_join['Number_of_Students_Placed'].astype('int')
college_placement_join['Number_of_Students'] = college_placement_join['Number_of_Students'].astype('int')
# + [markdown] id="BuO0gooF6jTt"
# Create derived column - Percentage of students placed
# + id="_HF_WnKR6dkB"
college_placement_join['percent_placed'] = round((college_placement_join['Number_of_Students_Placed']/college_placement_join['Number_of_Students'])*100,2)
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="fwltGmVa65zT" outputId="492b68c6-2821-47ef-e028-dda623796839"
college_placement_join.head()
# + [markdown] id="GM0AdfVs7PQl"
# Find which Stream has the highest number of placed students
# + id="R1PPcdw07QIh"
college_placement_join = college_placement_join.sort_values(['Number_of_Students_Placed'], ascending=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="GRXfjdf-7uow" outputId="d7beba83-7393-4273-f2be-df70b472a220"
college_placement_join.head()
# + id="stWIGhHn7v4B"
college_placement_join = college_placement_join.sort_values(['percent_placed'], ascending=False)
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="wcpmHW6w7x-H" outputId="196ea440-ddac-4d12-c87a-80ed033f37df"
college_placement_join.head()
| Spark_Vs_Pandas_Placement_Data_Analysis/Placement_data_analysis_using_Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Project: Train a Quadcopter How to Fly
#
# Design an agent that can fly a quadcopter, and then train it using a reinforcement learning algorithm of your choice! Try to apply the techniques you have learnt, but also feel free to come up with innovative ideas and test them.
#
# 
#
# ## Instructions
#
# > **Note**: If you haven't done so already, follow the steps in this repo's README to install ROS, and ensure that the simulator is running and correctly connecting to ROS.
#
# When you are ready to start coding, take a look at the `quad_controller_rl/src/` (source) directory to better understand the structure. Here are some of the salient items:
#
# - `src/`: Contains all the source code for the project.
# - `quad_controller_rl/`: This is the root of the Python package you'll be working in.
# - ...
# - `tasks/`: Define your tasks (environments) in this sub-directory.
# - `__init__.py`: When you define a new task, you'll have to import it here.
# - `base_task.py`: Generic base class for all tasks, with documentation.
# - `takeoff.py`: This is the first task, already defined for you, and set to run by default.
# - ...
# - `agents/`: Develop your reinforcement learning agents here.
# - `__init__.py`: When you define a new agent, you'll have to import it here, just like tasks.
# - `base_agent.py`: Generic base class for all agents, with documentation.
# - `policy_search.py`: A sample agent has been provided here, and is set to run by default.
# - ...
#
# ### Tasks
#
# Open up the base class for tasks, `BaseTask`, defined in `tasks/base_task.py`:
#
# ```python
# class BaseTask:
# """Generic base class for reinforcement learning tasks."""
#
# def __init__(self):
# """Define state and action spaces, initialize other task parameters."""
# pass
#
# def set_agent(self, agent):
# """Set an agent to carry out this task; to be called from update."""
# self.agent = agent
#
# def reset(self):
# """Reset task and return initial condition."""
# raise NotImplementedError
#
# def update(self, timestamp, pose, angular_velocity, linear_acceleration):
# """Process current data, call agent, return action and done flag."""
# raise NotImplementedError
# ```
#
# All tasks must inherit from this class to function properly. You will need to override the `reset()` and `update()` methods when defining a task, otherwise you will get `NotImplementedError`'s. Besides these two, you should define the state (observation) space and the action space for the task in the constructor, `__init__()`, and initialize any other variables you may need to run the task.
#
# Now compare this with the first concrete task `Takeoff`, defined in `tasks/takeoff.py`:
#
# ```python
# class Takeoff(BaseTask):
# """Simple task where the goal is to lift off the ground and reach a target height."""
# ...
# ```
#
# In `__init__()`, notice how the state and action spaces are defined using [OpenAI Gym spaces](https://gym.openai.com/docs/#spaces), like [`Box`](https://github.com/openai/gym/blob/master/gym/spaces/box.py). These objects provide a clean and powerful interface for agents to explore. For instance, they can inspect the dimensionality of a space (`shape`), ask for the limits (`high` and `low`), or even sample a bunch of observations using the `sample()` method, before beginning to interact with the environment. We also set a time limit (`max_duration`) for each episode here, and the height (`target_z`) that the quadcopter needs to reach for a successful takeoff.
#
# The `reset()` method is meant to give you a chance to reset/initialize any variables you need in order to prepare for the next episode. You do not need to call it yourself; it will be invoked externally. And yes, it will be called once before each episode, including the very first one. Here `Takeoff` doesn't have any episode variables to initialize, but it must return a valid _initial condition_ for the task, which is a tuple consisting of a [`Pose`](http://docs.ros.org/api/geometry_msgs/html/msg/Pose.html) and [`Twist`](http://docs.ros.org/api/geometry_msgs/html/msg/Twist.html) object. These are ROS message types used to convey the pose (position, orientation) and velocity (linear, angular) you want the quadcopter to have at the beginning of an episode. You may choose to supply the same initial values every time, or change it a little bit, e.g. `Takeoff` drops the quadcopter off from a small height with a bit of randomness.
#
# > **Tip**: Slightly randomized initial conditions can help the agent explore the state space faster.
#
# Finally, the `update()` method is perhaps the most important. This is where you define the dynamics of the task and engage the agent. It is called by a ROS process periodically (roughly 30 times a second, by default), with current data from the simulation. A number of arguments are available: `timestamp` (you can use this to check for timeout, or compute velocities), `pose` (position, orientation of the quadcopter), `angular_velocity`, and `linear_acceleration`. You do not have to include all these variables in every task, e.g. `Takeoff` only uses pose information, and even that requires a 7-element state vector.
#
# Once you have prepared the state you want to pass on to your agent, you will need to compute the reward, and check whether the episode is complete (e.g. agent crossed the time limit, or reached a certain height). Note that these two things (`reward` and `done`) are based on actions that the agent took in the past. When you are writing your own agents, you have to be mindful of this.
#
# Now you can pass in the `state`, `reward` and `done` values to the agent's `step()` method and expect an action vector back that matches the action space that you have defined, in this case a `Box(6,)`. After checking that the action vector is non-empty, and clamping it to the space limits, you have to convert it into a ROS `Wrench` message. The first 3 elements of the action vector are interpreted as force in x, y, z directions, and the remaining 3 elements convey the torque to be applied around those axes, respectively.
#
# Return the `Wrench` object (or `None` if you don't want to take any action) and the `done` flag from your `update()` method (note that when `done` is `True`, the `Wrench` object is ignored, so you can return `None` instead). This will be passed back to the simulation as a control command, and will affect the quadcopter's pose, orientation, velocity, etc. You will be able to gauge the effect when the `update()` method is called in the next time step.
#
# ### Agents
#
# Reinforcement learning agents are defined in a similar way. Open up the generic agent class, `BaseAgent`, defined in `agents/base_agent.py`, and the sample agent `RandomPolicySearch` defined in `agents/policy_search.py`. They are actually even simpler to define - you only need to implement the `step()` method that is discussed above. It needs to consume `state` (vector), `reward` (scalar value) and `done` (boolean), and produce an `action` (vector). The state and action vectors must match the respective space indicated by the task. And that's it!
#
# Well, that's just to get things working correctly! The sample agent given `RandomPolicySearch` uses a very simplistic linear policy to directly compute the action vector as a dot product of the state vector and a matrix of weights. Then, it randomly perturbs the parameters by adding some Gaussian noise, to produce a different policy. Based on the average reward obtained in each episode ("score"), it keeps track of the best set of parameters found so far, how the score is changing, and accordingly tweaks a scaling factor to widen or tighten the noise.
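# As a concrete illustration of this search strategy, here is a minimal, self-contained numpy sketch. All names, sizes, and the noise-scale bounds are illustrative assumptions, not the actual `RandomPolicySearch` code:

```python
import numpy as np

rng = np.random.default_rng(0)
state_size, action_size = 7, 6

w = rng.normal(size=(state_size, action_size))  # current policy weights
best_w, best_score = w, -np.inf
noise_scale = 0.1

def act(state, w):
    # linear policy: the action vector is a dot product of state and weights
    return np.dot(state, w)

def learn(score, w, best_w, best_score, noise_scale):
    # keep the best parameters found so far; widen/tighten the noise accordingly
    if score > best_score:
        best_w, best_score = w, score
        noise_scale = max(0.5 * noise_scale, 1e-3)   # tighten around a good policy
    else:
        noise_scale = min(2.0 * noise_scale, 3.2)    # widen to explore more
    w = best_w + noise_scale * rng.normal(size=best_w.shape)  # perturb parameters
    return w, best_w, best_score, noise_scale

# after one episode with score 1.0, the noise tightens and w is re-perturbed
w, best_w, best_score, noise_scale = learn(1.0, w, best_w, best_score, noise_scale)
```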
# + raw_mimetype="text/html" language="html"
# <div style="width: 100%; text-align: center;">
# <h3>Teach a Quadcopter How to Tumble</h3>
# <video poster="images/quadcopter_tumble.png" width="640" controls muted>
# <source src="images/quadcopter_tumble.mp4" type="video/mp4" />
# <p>Video: Quadcopter tumbling, trying to get off the ground</p>
# </video>
# </div>
# -
# Obviously, this agent performs very poorly on the task. It does manage to move the quadcopter, which is good, but instead of a stable takeoff, it often leads to dizzying cartwheels and somersaults! And that's where you come in - your first _task_ is to design a better agent for this takeoff task. Instead of messing with the sample agent, create a new file in the `agents/` directory, say `policy_gradients.py`, and define your own agent in it. Remember to inherit from the base agent class, e.g.:
#
# ```python
# class DDPG(BaseAgent):
# ...
# ```
#
# You can borrow whatever you need from the sample agent, including ideas on how you might modularize your code (using helper methods like `act()`, `learn()`, `reset_episode_vars()`, etc.).
#
# > **Note**: This setup may look similar to the common OpenAI Gym paradigm, but there is one small yet important difference. Instead of the agent calling a method on the environment (to execute an action and obtain the resulting state, reward and done value), here it is the task that is calling a method on the agent (`step()`). If you plan to store experience tuples for learning, you will need to cache the last state ($S_{t-1}$) and last action taken ($A_{t-1}$), then in the next time step when you get the new state ($S_t$) and reward ($R_t$), you can store them along with the `done` flag ($\left\langle S_{t-1}, A_{t-1}, R_t, S_t, \mathrm{done?}\right\rangle$).
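# A hedged sketch of this caching pattern (the class name, buffer size, and placeholder policy are illustrative assumptions, not the project code):

```python
from collections import deque

class ReplayAgent:
    def __init__(self, buffer_size=100000):
        self.memory = deque(maxlen=buffer_size)  # experience tuples
        self.last_state = None
        self.last_action = None

    def step(self, state, reward, done):
        # the incoming (reward, state) pair completes the *previous* transition
        if self.last_state is not None:
            self.memory.append((self.last_state, self.last_action, reward, state, done))
        action = None if done else [0.0] * 6  # placeholder for a Box(6,) policy
        if done:
            self.last_state, self.last_action = None, None  # episode boundary
        else:
            self.last_state, self.last_action = state, action
        return action

agent = ReplayAgent()
agent.step([0.0] * 7, 0.0, False)   # first call: nothing to store yet
agent.step([0.1] * 7, -1.0, False)  # stores <S_0, A_0, R_1, S_1, done>
agent.step([0.2] * 7, -0.5, True)   # final call: stores last transition, returns None
```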
#
# When an episode ends, the agent receives one last call to the `step()` method with `done` set to `True` - this is your chance to perform any cleanup/reset/batch-learning (note that no reset method is called on an agent externally). The action returned on this last call is ignored, so you may safely return `None`. The next call would be the beginning of a new episode.
#
# One last thing - in order to run your agent, you will have to edit `agents/__init__.py` and import your agent class in it, e.g.:
#
# ```python
# from quad_controller_rl.agents.policy_gradients import DDPG
# ```
#
# Then, while launching ROS, you will need to specify this class name on the commandline/terminal:
#
# ```bash
# roslaunch quad_controller_rl rl_controller.launch agent:=DDPG
# ```
#
# Okay, now the first task is cut out for you - follow the instructions below to implement an agent that learns to take off from the ground. For the remaining tasks, you get to define the tasks as well as the agents! Use the `Takeoff` task as a guide, and refer to the `BaseTask` docstrings for the different methods you need to override. Use some debug print statements to understand the flow of control better. And just like creating new agents, new tasks must inherit from `BaseTask`, need to be imported into `tasks/__init__.py`, and must be specified on the command line when running:
#
# ```bash
# roslaunch quad_controller_rl rl_controller.launch task:=Hover agent:=DDPG
# ```
#
# > **Tip**: You typically need to launch ROS and then run the simulator manually. But you can automate that process by either copying/symlinking your simulator to `quad_controller_rl/sim/DroneSim` (`DroneSim` must be an executable/link to one), or by specifying it on the command line, as follows:
# >
# > ```bash
# > roslaunch quad_controller_rl rl_controller.launch task:=Hover agent:=DDPG sim:=<full path>
# > ```
# ## Task 1: Takeoff
#
# ### Implement takeoff agent
#
# Train an agent to successfully lift off from the ground and reach a certain threshold height. Develop your agent in a file under `agents/` as described above, implementing at least the `step()` method, and any other supporting methods that might be necessary. You may use any reinforcement learning algorithm of your choice (note that the action space consists of continuous variables, so that may somewhat limit your choices).
#
# The task has already been defined (in `tasks/takeoff.py`), and you should not edit it. The default target height (Z-axis value) to reach is 10 units above the ground, and the reward function is essentially the negative absolute distance from that set point (up to some threshold). An episode ends when the quadcopter reaches the target height (x and y values, orientation, velocity, etc. are ignored), or when the maximum duration (5 seconds) is exceeded. See `Takeoff.update()` for more details, including the episode bonus/penalty.
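# For illustration, the reward and termination logic described above can be sketched as follows. The target height and clip value are taken from the text; treat this as an approximation of `tasks/takeoff.py`, not a copy:

```python
target_z = 10.0      # default target height above the ground
max_duration = 5.0   # maximum episode duration in seconds

def takeoff_reward(z, timestamp):
    # negative absolute distance from the set point, clipped at 20 units
    reward = -min(abs(target_z - z), 20.0)
    # episode ends on reaching the target height or on timeout
    done = z >= target_z or timestamp > max_duration
    return reward, done
```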
#
# As you develop your agent, it's important to keep an eye on how it's performing. Build in a mechanism to log/save the total rewards obtained in each episode to file. Once you are satisfied with your agent's performance, return to this notebook to plot episode rewards, and answer the questions below.
#
# ### Plot episode rewards
#
# Plot the total rewards obtained in each episode, either from a single run, or averaged over multiple runs.
import pandas as pd
takOff = pd.read_csv('takoff-stats_2018-02-05_08-29-02.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Takeoff Task, N-Version
# > - Network size: 300 units in the first hidden layer and 600 in the second
# > - No constraints on the x, y axes
# > - Start point on z is: 0.0
# > - Learning push around episode 250
# > - A lot of noise in the network; the reward range is not clear/stable
import pandas as pd
takOff = pd.read_csv('takeoff-final-stats_2018-02-06_01-37-30.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Takeoff Task, Final-Version
# > - Network size: 128 units in each of the two hidden layers
# > - Constraints on the x, y axes with a ±0.2 threshold value
# ```python
# if not -self.threshold < pose.position.x < self.threshold:
#     reward -= 10.0
#     done = True
# if not -self.threshold < pose.position.y < self.threshold:
#     reward -= 10.0
#     done = True
# ```
# > - Start point on z is: 0.0
# > - Basic implementation of a constraint on speed: if the time is not over but position-z is greater than target-z, punish with -10 units
# ```python
# if timestamp < self.max_duration and pose.position.z > self.target_z:
#     reward -= 10.0
#     done = True
# ```
# > - Very fast learning in a short time, less noise
# **Q**: What algorithm did you use? Briefly discuss why you chose it for this task.
#
# **A**: I'm using Deep Deterministic Policy Gradient (DDPG). In this project we work with continuous action spaces, and according to various examples and publications, this algorithm is one of the most promising ways to solve such tasks. I'm using the provided project code from Udacity with the actor-critic model. The paper "Continuous control with deep reinforcement learning" by Lillicrap et al. was the primary source for my implementation.
#
# **Q**: Using the episode rewards plot, discuss how the agent learned over time.
#
# - Was it an easy task to learn or hard?
# - Was there a gradual learning curve, or an aha moment?
# - How good was the final performance of the agent? (e.g. mean rewards over the last 10 episodes)
#
# **A**:
# - The noise factor coupled with the explorative search makes it harder to find a solution. The agent behaves somewhat like a GAN generator. Training is very difficult and time-consuming; for me it was very hard, since the agent does not always react the same way and is very sensitive.
# - The different attempts show that nothing happens for a long time; then the reward suddenly rises and falls again. The reward stays low for a long time.
# - The final performance in my case was OK: stable for almost 8-12 episodes
# > ***FYI***: The provided software for this project is not stable, especially the simulator and the networking setup (Win10 in my case). That's why I had to run the simulator in a VM to get any results.
# ## Task 2: Hover
#
# ### Implement hover agent
#
# Now, your agent must take off and hover at the specified set point (say, 10 units above the ground). Same as before, you will need to create an agent and implement the `step()` method (and any other supporting methods) to apply your reinforcement learning algorithm. You may use the same agent as before, if you think your implementation is robust, and try to train it on the new task. But then remember to store your previous model weights/parameters, in case your results were worth keeping.
#
# ### States and rewards
#
# Even if you can use the same agent, you will need to create a new task, which will allow you to change the state representation you pass in, how you verify when the episode has ended (the quadcopter needs to hover for at least a few seconds), etc. In this hover task, you may want to pass in the target height as part of the state (otherwise how would the agent know where you want it to go?). You may also need to revisit how rewards are computed. You can do all this in a new task file, e.g. `tasks/hover.py` (remember to follow the steps outlined above to create a new task):
#
# ```python
# class Hover(BaseTask):
# ...
# ```
#
# **Q**: Did you change the state representation or reward function? If so, please explain below what worked best for you, and why you chose that scheme. Include short code snippet(s) if needed.
#
# **A**: I created a new reward/penalty model based on the takeoff task. The start point is at 0.0. I created a hover zone of ±1 unit around the target position-z (10); it works like a box model. The same applies to position-x and position-y: if the quadcopter moves outside the threshold, it is punished.
#
# ```python
# reward = -min(abs(self.target_z - pose.position.z), 20.0)
# if pose.position.z > self.target_z + self.threshold and timestamp < self.max_duration:
#     timestamp += 2.5
#     reward -= 10.0
#     done = True
# if -self.threshold + self.target_z < pose.position.z < self.threshold + self.target_z:
#     timestamp -= 2.5
#     reward += 10.0
#     done = True
# if not -self.threshold < pose.position.x < self.threshold:
#     reward -= 10.0
#     done = True
# if not -self.threshold < pose.position.y < self.threshold:
#     reward -= 10.0
#     done = True
# elif timestamp > self.max_duration:
#     reward -= 10.0
#     done = True
#
# ### Implementation notes
#
# **Q**: Discuss your implementation below briefly, using the following questions as a guide:
#
# - What algorithm(s) did you try? What worked best for you?
# - What was your final choice of hyperparameters (such as $\alpha$, $\gamma$, $\epsilon$, etc.)?
# - What neural network architecture did you use (if any)? Specify layers, sizes, activation functions, etc.
#
# **A**:
# - I'm using Deep Deterministic Policy Gradient (DDPG). The model reminds me of the DCGAN project. I tried different activation functions (linear, leaky ReLU, ReLU) and different model sizes (64, 128, 300, 600). My final reward system is based on the takeoff task with additional adjustments. It works best for me; learning improved sharply, but it takes more than 2000 episodes to see a stable hover effect. I decided to set the starting point of position-z to 0.0 units, so that I can use it as the starting point for the last task.
# - My hyperparameters are based on the paper "Continuous control with deep reinforcement learning" by Lillicrap et al.
#
# ```python
#
# LEARNING_RATE = 0.001
# BUFFER_SIZE = 100000
# gamma = 0.99
# tau = 0.001
# batch_size = 64
#
# noise:
# theta=0.15,
# sigma=0.2
#
# ```
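# The `theta`/`sigma` values above suggest an Ornstein-Uhlenbeck exploration process, a common choice for DDPG. A minimal sketch (the class name and seeding are assumptions, not the project code):

```python
import numpy as np

class OUNoise:
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, seed=0):
        self.mu = mu * np.ones(size)
        self.theta, self.sigma = theta, sigma
        self.rng = np.random.default_rng(seed)
        self.state = self.mu.copy()

    def reset(self):
        self.state = self.mu.copy()

    def sample(self):
        # mean-reverting drift plus Gaussian diffusion: temporally correlated noise
        dx = self.theta * (self.mu - self.state) + self.sigma * self.rng.normal(size=self.state.shape)
        self.state = self.state + dx
        return self.state

noise = OUNoise(size=6)
noisy_action = np.zeros(6) + noise.sample()  # add exploration noise to a Box(6,) action
```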
#
# - My final network has 3 layers with ReLU activations and batch normalization. The output layer uses a sigmoid function. This configuration comes from the DCGAN project; I only changed the activation function from leaky ReLU to ReLU.
#
#
# ```python
# states = layers.Input(shape=(self.state_size,), name='states')
# net = layers.Dense(units=128, activation=None)(states)
# net = layers.BatchNormalization()(net)
# net = layers.Activation('relu')(net)
# net = layers.Dense(units=128, activation=None)(net)
# net = layers.BatchNormalization()(net)
# net = layers.Activation('relu')(net)
# net = layers.Dense(units=128, activation=None)(net)
# net = layers.BatchNormalization()(net)
# net = layers.Activation('relu')(net)
#
# raw_actions = layers.Dense(units=self.action_size, activation='sigmoid', name='raw_actions')(net)
#
# ```
#
# ### Plot episode rewards
#
# As before, plot the episode rewards, either from a single run, or averaged over multiple runs. Comment on any changes in learning behavior.
import pandas as pd
takOff = pd.read_csv('hover-stats_2018-02-05_12-15-39.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Hover Task, N-Version
# > - Network size: 300 units in the first hidden layer and 600 in the second
# > - No constraints on the x, y axes
# > - Start point on z is: 10.0
# > - Learning push around episode 380
# > - A lot of noise in the network; the reward range is not clear/stable, like my first version for takeoff
import pandas as pd
takOff = pd.read_csv('hover_600_stats_2018-02-06_11-58-53.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Hover Task, N-Version
# > - Network size: 600 units in each of the two hidden layers
# > - Constraints on the x, y axes with a ±0.2 threshold value
# > - Has some hover spots
# > - Probably not stable, maybe it needs more training
import pandas as pd
takOff = pd.read_csv('hover-stats_2018-02-06_06-30-25.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Hover Task, N-Version
# > - Network size: 64 units in each of the two hidden layers
# > - Constraints on the x, y axes with a ±0.2 threshold value
# > - Implemented a new reward for position-z
# ```python
# if pose.position.z >= self.target_z + self.threshold:
#     reward += 10.0
#     done = True
# if pose.position.z > self.target_z + 2:
#     reward -= 10.0
#     done = True
# if pose.position.z < self.target_z - 2:
#     reward -= 10.0
#     done = True
# ```
# > - Start point on z is: +10.0 units
# > - Harder to train, less noise
# > - Hovering around episode 240
# > - Probably not stable, maybe it needs more training
# +
import pandas as pd
takOff = pd.read_csv('hover-final-stats_2018-02-10_06-06-43.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# -
# > **Info**: Hover Task, Final-Version
# > - Network size: 128 units in each of the two hidden layers
# > - Constraints on the x, y axes with a ±1 threshold value
# > - Implemented a new reward setup for position-z, based on the takeoff implementation
# > - Start point on z is: 0.0 units
# > - Harder to train, less noise
# > - Hover effect around episode 70 and again around episode 1100
# > - Stable hover effect from around episode 2000 onward
# ## Task 3: Landing
#
# What goes up, must come down! But safely!
#
# ### Implement landing agent
#
# This time, you will need to edit the starting state of the quadcopter to place it at a position above the ground (at least 10 units). And change the reward function to make the agent learn to settle down _gently_. Again, create a new task for this (e.g. `Landing` in `tasks/landing.py`), and implement the changes. Note that you will have to modify the `reset()` method to return a position in the air, perhaps with some upward velocity to mimic a recent takeoff.
#
# Once you're satisfied with your task definition, create another agent or repurpose an existing one to learn this task. This might be a good chance to try out a different approach or algorithm.
#
# ### Initial condition, states and rewards
#
# **Q**: How did you change the initial condition (starting state), state representation and/or reward function? Please explain below what worked best for you, and why you chose that scheme. Were you able to build in a reward mechanism for landing gently?
#
# **A**: I set the starting position to +10 units on position-z and the landing point to +1.0 unit on position-z, with soft learning. The time factor is responsible for a gentle landing. At the beginning I set the landing point to 0.0, but the learning behavior was very poor. After that, I set the landing point higher to be more flexible with the penalty/reward setup. This made me successful, and that's why I stuck with this approach; it's very similar to the hover setup, just turned around.
#
# ### Implementation notes
#
# **Q**: Discuss your implementation below briefly, using the same questions as before to guide you.
#
# **A**: I'm using Deep Deterministic Policy Gradient (DDPG), with the same architecture and network as in the previous tasks; only the penalty/reward setup has changed. Hyperparameters and model are the same. The task was difficult and time-consuming to learn. I set the target z to +1.0 unit with a threshold of 0.8.
#
# ```python
# self.target_z = 1.0
# self.threshold = 0.8
#
# reward = -min(abs(pose.position.z - self.target_z), 20)
# if -self.threshold + self.target_z < pose.position.z < self.threshold + self.target_z:
#     if (self.max_duration - timestamp) <= 2.0:
#         reward += 500.0
#         done = True
# if not -self.threshold < pose.position.x < self.threshold:
#     reward -= 10.0
#     done = True
# if not -self.threshold < pose.position.y < self.threshold:
#     reward -= 10.0
#     done = True
# elif timestamp > self.max_duration:
#     reward -= 200.0
#     done = True
# ```
#
# ### Plot episode rewards
#
# As before, plot the episode rewards, either from a single run, or averaged over multiple runs. This task is a little different from the previous ones, since you're starting in the air. Was it harder to learn? Why/why not?
import pandas as pd
takOff = pd.read_csv('landing-stats_2018-02-10_13-52-38.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Landing Task, N-Version
# > - Network size: 128 units in each of the two hidden layers
# > - Constraints on the x, y axes with a ±1 threshold value
# > - Start point on z is: 10.0
# > - Test run to see if the reward setup is working
import pandas as pd
takOff = pd.read_csv('stats_2018-02-10_16-00-23.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Landing Task, N-Version
# > - Network size: 128 units in each of the two hidden layers
# > - Constraints on the x, y axes with a ±1 threshold value
# > - Start point on z is: 10.0
# > - very strong punish/reward setup
import pandas as pd
takOff = pd.read_csv('landing-stats_2018-02-10_18-04-31.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Landing Task, N-Version
# > - Network size: 128 units in each of the two hidden layers
# > - Constraints on the x, y axes with a ±1 threshold value
# > - Start point on z is: 10.0
# > - The punish/reward setup is now more open; I removed the position-z 0.0 punishment
# > - Good results; it needs to learn longer, but the sim crashed again :-(
# > - Interestingly, there is strong learning at the beginning and again around episode ~1200
# ## Task 4: Combined
#
# In order to design a complete flying system, you will need to incorporate all these basic behaviors into a single agent.
#
# ### Setup end-to-end task
#
# The end-to-end task we are considering here is simply to takeoff, hover in-place for some duration, and then land. Time to create another task! But think about how you might go about it. Should it be one meta-task that activates appropriate sub-tasks, one at a time? Or would a single combined task with something like waypoints be easier to implement? There is no right or wrong way here - experiment and find out what works best (and then come back to answer the following).
#
# **Q**: What setup did you ultimately go with for this combined task? Explain briefly.
#
# **A**: I use a chain of sub-tasks and award a reward for each successfully completed sub-task. After the hover task, I additionally reset the time factor, as a kind of reward, to give the agent more time to learn the landing task. This is my strategy for solving this challenge.
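# A minimal, dependency-free sketch of this chaining idea (purely illustrative; the thresholds and bonuses mirror the snippets in this notebook, not the actual task code):

```python
def combined_step(phase, z, timestamp,
                  target_top=10.0, target_down=1.0,
                  threshold=1.0, max_duration=5.0):
    # advance a tiny sub-task state machine: takeoff -> hover -> landing
    reward, done = 0.0, False
    if phase == "takeoff" and abs(z - target_top) < threshold:
        reward, phase = 10.0, "hover"
    elif phase == "hover" and (max_duration - timestamp) <= 2.0:
        reward, phase, timestamp = 10.0, "landing", 0.0  # reset clock for landing
    elif phase == "landing" and abs(z - target_down) < threshold:
        reward, done = 500.0, True
    return phase, reward, done, timestamp

phase = "takeoff"
phase, r, d, t = combined_step(phase, z=10.2, timestamp=1.0)  # enters the hover phase
```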
#
# ### Implement combined agent
#
# Using your end-to-end task, implement the combined agent so that it learns to takeoff (at least 10 units above ground), hover (again, at least 10 units above ground), and gently come back to ground level.
#
# ### Combination scheme and implementation notes
#
# Just like the task itself, it's up to you whether you want to train three separate (sub-)agents, or a single agent for the complete end-to-end task.
#
# **Q**: What did you end up doing? What challenges did you face, and how did you resolve them? Discuss any other implementation notes below.
#
# **A**: I'm using a single agent with the DDPG architecture and a basic model with 3 layers of size 128. The penalty/reward setup is a combination of all previous tasks. The challenge was handling the time factor and tuning the reward behavior. In this setup I'm using a second position-z target (top/down). After a successful hover task, I reset the timestamp to allow additional learning time.
#
# ```python
# self.target_top_z = 10.0
# self.target_down_z = 1.0
# self.threshold = 1
# # Takeoff
# if -self.threshold + self.target_top_z < pose.position.z < self.threshold + self.target_top_z:
#     reward += 10
#     print("Takeoff")
#     # Hover
#     if (self.max_duration - timestamp) <= 2.0:
#         reward += 10.0
#         timestamp = 0  # reset time
#         print("Hover")
# # Landing
# if -self.threshold + self.target_down_z < pose.position.z < self.threshold + self.target_down_z:
#     if (self.max_duration - timestamp) <= 2.0:
#         reward += 500.0
#         done = True
#         print("Landing")
#
# ```
#
# ### Plot episode rewards
#
# As before, plot the episode rewards, either from a single run, or averaged over multiple runs.
import pandas as pd
takOff = pd.read_csv('combined-stats_2018-02-11_13-05-03.csv')
takOff[['total_reward']].plot(title="Episode Rewards")
# > **Info**: Combined Task, N-Version
# > - Network size: 128 units in each of the two hidden layers
# > - Constraints on the x, y axes with a ±1 threshold value
# > - Start point on z is: 0.0
# > - Successful single round of takeoff/hover/landing (episode 50-60, ~ -450 reward)
# > - Successful round of takeoff/hover (episode ~580, ~ -950 reward)
# > - Good results; it needs to learn longer, but the sim crashed again :-(
# ## Reflections
#
# **Q**: Briefly summarize your experience working on this project. You can use the following prompts for ideas.
#
# - What was the hardest part of the project? (e.g. getting started, running ROS, plotting, specific task, etc.)
# - How did you approach each task and choose an appropriate algorithm/implementation for it?
# - Did you find anything interesting in how the quadcopter or your agent behaved?
#
# **A**:
# The whole project is difficult without experience in Linux, ROS, networking, mechanics and RL.
# The unstable infrastructure, especially the simulator, makes it very difficult.
# You never know exactly where the mistake lies, because you have no experience.
# I worked on the project almost every day for 2-4 hours over the last 3 weeks.
# I'm completely unsure of the result; I had to improvise to get results at all.
# I could only manage to run the simulator in a VM. As a result,
# each run ran at 2-4 fps. I lost a lot of time and could not really focus on the topic itself.
# That's why I stopped now, because I'm tired and have no motivation left.
# My interest in this area has disappeared at the moment.
# I do not recommend the last project. If the quality is not improved, then the whole project should be removed.
#
# I did not try anything else, since I had so many problems.
# RL is like a GAN, or rather a DCGAN: each result is unique and often cannot be reproduced.
# It takes a lot of time, and the whole thing is difficult.
#
# Pros:
# - I've learned to debug with Python, which is certainly very valuable knowledge
# - Working with the community and helping others
# - The idea and the topic are exciting
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # States
#
# A Riemann Problem is specified by the state of the material to the left and right of the interface. In this hydrodynamic problem, the state is fully determined by an [equation of state](eos_defns.html) and the variables
#
# $$
# {\bf U} = \begin{pmatrix} \rho_0 \\ v_x \\ v_t \\ \epsilon \end{pmatrix},
# $$
#
# where $\rho_0$ is the rest-mass density, $v_x$ the velocity normal to the interface, $v_t$ the velocity tangential to the interface, and $\epsilon$ the specific internal energy.
# ## Defining a state
# In `r3d2` we define a state from an equation of state and the values of the key variables:
from r3d2 import eos_defns, State
eos = eos_defns.eos_gamma_law(5.0/3.0)
U = State(1.0, 0.1, 0.0, 2.0, eos)
# Inside the notebook, the state will automatically display the values of the key variables:
U
# Adding a label to the state for output purposes requires an extra keyword:
U2 = State(10.0, -0.3, 0.1, 5.0, eos, label="L")
U2
# ## Reactive states
#
# If the state has energy available for reactions, that information is built into the equation of state. The definition of the equation of state changes: the definition of the state itself does not:
q_available = 0.1
t_ignition = 10.0
Cv = 1.0
eos_reactive = eos_defns.eos_gamma_law_react(5.0/3.0, q_available, Cv, t_ignition, eos)
U_reactive = State(5.0, 0.1, 0.1, 2.0, eos_reactive, label="Reactive")
U_reactive
# ## Additional functions
#
# A state knows its own wavespeeds. Given a wavenumber (the left acoustic wave is `0`, the middle contact or advective wave is `1`, and the right acoustic wave is `2`), we have:
print("Left wavespeed of first state is {}".format(U.wavespeed(0)))
print("Middle wavespeed of second state is {}".format(U2.wavespeed(1)))
print("Right wavespeed of reactive state is {}".format(U_reactive.wavespeed(2)))
# A state will return the key *primitive* variables ($\rho, v_x, v_t, \epsilon$):
print("Primitive variables of first state are {}".format(U.prim()))
# A state will return all the variables it computes, which is $\rho, v_x, v_t, \epsilon, p, W, h, c_s$: the primitive variables as above, the pressure $p$, Lorentz factor $W$, specific enthalpy $h$, and speed of sound $c_s$:
print("All variables of second state are {}".format(U2.state()))
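# For the $\Gamma$-law equation of state used in these examples, the derived quantities follow from the primitive variables via the standard relativistic-hydrodynamics relations (quoted here as a reminder, not taken from the `r3d2` source):
#
# $$
# p = (\Gamma - 1)\,\rho_0\,\epsilon, \qquad
# h = 1 + \epsilon + \frac{p}{\rho_0}, \qquad
# W = \frac{1}{\sqrt{1 - v_x^2 - v_t^2}}, \qquad
# c_s^2 = \frac{\Gamma\, p}{\rho_0\, h}.
# $$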
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] id="BGDENCpVoVap"
# # Cytoscape and igraph
# [](https://colab.research.google.com/github/cytoscape/py4cytoscape/blob/0.0.11/doc/tutorials/Cytoscape-and-iGraph.ipynb)
#
# **by <NAME>, <NAME>, <NAME>**
#
# **py4cytoscape 0.0.11**
#
# This notebook will show you how to convert networks between igraph and Cytoscape.
#
# ## Prerequisites
# In addition to this package (py4cytoscape), you will need:
#
# - Cytoscape 3.8 or greater, which can be downloaded from https://cytoscape.org/download.html. Simply follow the installation instructions on screen.
# - Complete installation wizard
# - Launch Cytoscape
# - If your Cytoscape is 3.8.2 or earlier, install FileTransfer App (Follow [here](https://py4cytoscape.readthedocs.io/en/0.0.10/tutorials/index.html) to do it.)
#
# **NOTE: To run this notebook, you must manually start Cytoscape first – don’t proceed until you have started Cytoscape.**
#
# ### Setup required only in a remote notebook environment
# If you're using a remote Jupyter Notebook environment such as Google Colab, run the cell below.
# (If you're running a local Jupyter Notebook server on the same desktop machine as Cytoscape, you don't need to do this.)
# + id="rW3kPNE_oBVi"
_PY4CYTOSCAPE = 'git+https://github.com/cytoscape/py4cytoscape@0.0.11'
import requests
exec(requests.get("https://raw.githubusercontent.com/cytoscape/jupyter-bridge/master/client/p4c_init.py").text)
IPython.display.Javascript(_PY4CYTOSCAPE_BROWSER_CLIENT_JS) # Start browser client
# + [markdown] id="MJuRPckpoVDG"
# Note that to use the current py4cytoscape release (instead of v0.0.11), remove the _PY4CYTOSCAPE= line in the snippet above.
#
#
# ### Sanity test to verify Cytoscape connection
# By now, the connection to Cytoscape should be up and available. To verify this, try a simple operation that doesn't alter the state of Cytoscape.
# + id="nJg--PWSpkoh"
import py4cytoscape as p4c
p4c.cytoscape_ping()
p4c.cytoscape_version_info()
# + [markdown] id="q_NZMRsMpvIQ"
# ## From igraph to Cytoscape
#
# The igraph package is a popular network tool among Python users. With py4cytoscape, you can easily translate igraph networks to Cytoscape networks!
#
# Here is a basic igraph network construction from the Graph.DataFrame docs, https://igraph.org/python/doc/tutorial/generation.html#from-pandas-dataframe-s
# + id="BNvFcCX5pqKA"
import pandas as pd
from igraph import Graph
actors = pd.DataFrame(data={'name': ["Alice", "Bob", "Cecil", "David", "Esmeralda"],
'age': [48,33,45,34,21],
'gender': ["F","M","F","M","F"]
})
relations = pd.DataFrame(data={'from': ["Bob", "Cecil", "Cecil", "David", "David", "Esmeralda"],
'to': ["Alice", "Bob", "Alice", "Alice", "Bob", "Alice"],
'same_dept': [False, False, True, False, False, True],
'friendship': [4,5,5,2,1,1],
'advice': [4,5,5,4,2,3]
})
ig = Graph.DataFrame(relations, directed=True, vertices=actors)
# + [markdown] id="bqPaK1tStDO3"
# You now have an igraph network, `ig`.
# In order to translate the network together with all vertex (node) and edge attributes over to Cytoscape, simply use:
# + id="sp03WTLdq6yX"
p4c.create_network_from_igraph(ig, "myIgraph")
# -
p4c.notebook_export_show_image()
# + [markdown] id="2DtLHnxMrIMk"
# ## From Cytoscape to igraph
#
# Inversely, you can use create_igraph_from_network() in py4cytoscape to retrieve vertex (node) and edge DataFrames to construct an igraph network.
#
# + id="Ldcold8sq5Vh"
ig2 = p4c.create_igraph_from_network("myIgraph")
# + [markdown] id="MRmXpqvmrfQ1"
# Compare the round-trip result for yourself…
# + id="y6pzE0G8rYSZ"
print(ig)
# + id="p2G2tZF9rhNq"
print(ig2)
# + [markdown] id="oVjLiutaryAM"
# Note that a few additional attributes are present which are used by Cytoscape to support node/edge selection and network collections.
#
# **Also note: All networks in Cytoscape are implicitly modeled as *directed*. This means that if you start with an *undirected* network in igraph and then convert it round-trip (like described above), then you will end up with a *directed* network.**
#
# + id="un5TFtROrlPY"
| doc/tutorials/Cytoscape-and-iGraph.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# - Train-Validation-Test Set Split
# - Train models
# - parameter search over random space
# - Evaluation of fit
# # Setup Computational Environment
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
from rdkit import Chem
from rdkit.Chem import Draw
# %matplotlib inline
# # Look at Chemicals
m = Chem.MolFromSmiles('Cc1ccccc1')
Chem.Kekulize(m)
Chem.MolToSmiles(m,kekuleSmiles=True)
fig = Draw.MolToMPL(m)
m2 = Chem.MolFromSmiles('C1=C2C(=CC(=C1Cl)Cl)OC3=CC(=C(C=C3O2)Cl)Cl')
fig2 = Draw.MolToMPL(m2)
m3 = Chem.MolFromSmiles('O=C1OC2=C(C=C1)C1=C(C=CCO1)C=C2')
fig3 = Draw.MolToMPL(m3)
# # Look at a grid of chemicals
smiles = ("O=C(NCc1cc(OC)c(O)cc1)CCCC/C=C/C(C)C", "CC(C)CCCCCC(=O)NCC1=CC(=C(C=C1)O)OC", "c1(C(=O)O)cc(OC)c(O)cc1")
mols = [Chem.MolFromSmiles(x) for x in smiles]
Draw.MolsToGridImage(mols)
suppl = Chem.SDMolSupplier('data/cdk2.sdf')
# +
d_train = pd.read_csv("train-0.1m.csv")
d_test = pd.read_csv("test.csv")
d_train_test = pd.concat([d_train, d_test])
vars_categ = ["Month","DayofMonth","DayOfWeek","UniqueCarrier", "Origin", "Dest"]
vars_num = ["DepTime","Distance"]
def get_dummies(d, col):
dd = pd.get_dummies(d.loc[:, col])
dd.columns = [col + "_%s" % c for c in dd.columns]
return(dd)
# %time X_train_test_categ = pd.concat([get_dummies(d_train_test, col) for col in vars_categ], axis = 1)
X_train_test = pd.concat([X_train_test_categ, d_train_test.loc[:, vars_num]], axis = 1)
y_train_test = np.where(d_train_test["dep_delayed_15min"]=="Y", 1, 0)
X_train = X_train_test[0:d_train.shape[0]]
y_train = y_train_test[0:d_train.shape[0]]
X_test = X_train_test[d_train.shape[0]:]
y_test = y_train_test[d_train.shape[0]:]
md = LogisticRegression(tol=0.00001, C=1000)
# %time md.fit(X_train, y_train)
phat = md.predict_proba(X_test)[:,1]
metrics.roc_auc_score(y_test, phat)
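# AUC has a useful interpretation: it is the probability that a randomly chosen positive example gets a higher score than a randomly chosen negative one. A quick sanity check of `roc_auc_score` against that pairwise definition, on toy scores (illustrative values, not from the flight-delay data):

```python
import numpy as np
from sklearn import metrics

# Toy labels and predicted probabilities
y_true = np.array([0, 0, 1, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8])

# AUC via sklearn
auc = metrics.roc_auc_score(y_true, y_score)

# AUC as the fraction of (positive, negative) pairs ranked correctly
pos = y_score[y_true == 1]
neg = y_score[y_true == 0]
auc_manual = np.mean([(p > n) + 0.5 * (p == n) for p in pos for n in neg])

print(auc, auc_manual)  # 0.75 0.75
```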
| scikit-learn classification pipeline.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # CSAL4243: Introduction to Machine Learning
# <NAME> (<EMAIL>)
# + [markdown] slideshow={"slide_type": "slide"}
# # Lecture 4: Multinomial Regression
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Overview
# + [markdown] slideshow={"slide_type": "fragment"}
# - [Machine Learning pipeline](#Machine-Learning-pipeline)
# - [Linear Regression with one variable](#Linear-Regression-with-one-variable)
# - [Model Representation](#Model-Representation)
# - [Cost Function](#Cost-Function)
# - [Gradient Descent](#Gradient-Descent)
# - [Linear Regression Example](#Linear-Regression-Example)
# - [Read data](#Read-data)
# - [Plot data](#Plot-data)
# - [Lets assume $\theta_0 = 0$ and $\theta_1=0$](#Lets assume $\theta_0 = 0$ and $\theta_1=0$)
# - [Plot it](#Plot-it)
# - [$\theta_1$ vs Cost](#$\theta_1$ vs Cost)
# - [Gradient Descent](#Gradient-Descent)
# - [Run Gradient Descent](#Run-Gradient-Descent)
# - [Plot Convergence](#Plot-Convergence)
# - [Predict output using trained model](#Predict-output-using-trained-model)
# - [Plot Results](#Plot-Results)
# - [Resources](#Resources)
# - [Credits](#Credits)
# -
# <br>
# <br>
# + [markdown] slideshow={"slide_type": "slide"}
# # Machine Learning pipeline
# + [markdown] slideshow={"slide_type": "fragment"}
# <img style="float: left;" src="images/model.png">
# + [markdown] slideshow={"slide_type": "fragment"}
# - x is called input variables or input features.
#
# - y is called output or target variable. Also sometimes known as label.
#
# - h is called hypothesis or model.
#
# - pair (x<sup>(i)</sup>,y<sup>(i)</sup>) is called a sample or training example
#
# - dataset of all training examples is called training set.
#
# - m is the number of samples in a dataset.
#
# - n is the number of features in a dataset excluding label.
# + [markdown] slideshow={"slide_type": "fragment"}
# <img style="float: left;" src="images/02_02.png" width=400>
# -
# <br>
# <br>
# + [markdown] slideshow={"slide_type": "slide"}
# # Linear Regression with one variable
# + [markdown] slideshow={"slide_type": "fragment"}
# ## Model Representation
#
# - Model is represented by h<sub>$\theta$</sub>(x) or simply h(x)
#
# - For Linear regression with one input variable h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
#
# <img style="float: left;" src="images/02_01.png">
# + [markdown] slideshow={"slide_type": "fragment"}
# - $\theta$<sub>0</sub> and $\theta$<sub>1</sub> are called weights or parameters.
# - Need to find $\theta$<sub>0</sub> and $\theta$<sub>1</sub> that maximize the performance of the model.
# -
# <br>
# <br>
# <br>
# + [markdown] slideshow={"slide_type": "slide"}
# ## Cost Function
# + [markdown] slideshow={"slide_type": "fragment"}
# Let $\hat{y}$ = h(x) = $\theta$<sub>0</sub> + $\theta$<sub>1</sub>x
#
# Error for a single sample (x,y) = $\hat{y}$ - y = h(x) - y
#
# Cumulative error over all m samples = $\sum_{i=1}^{m} (h(x^i) - y^i)^2$
#
# Finally, the mean error or cost function = J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
#
# <img style="float: left;" src="images/03_01.png" width=300> <img style="float: right;" src="images/03_02.png" width=300>
# -
# <br>
# <br>
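# The cost function above is easy to vectorize with NumPy. A minimal sketch using the toy dataset from the example below (where y = 0.8x, so the cost is zero at $\theta_1 = 0.8$):

```python
import numpy as np

def cost(theta0, theta1, x, y):
    """J(theta) = 1/(2m) * sum((h(x) - y)^2)."""
    h = theta0 + theta1 * x
    return np.sum((h - y) ** 2) / (2 * len(x))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.8, 1.6, 2.4, 3.2])

print(cost(0, 0, x, y))    # 2.4 -- both weights zero
print(cost(0, 0.8, x, y))  # ~0 -- the line y = 0.8x fits (up to float error)
```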
# # Gradient Descent
#
# Cost function:
#
# J($\theta$) = $\frac{1}{2m}\sum_{i=1}^{m} (h(x^i) - y^i)^2$
#
# Gradient descent equation:
#
# $\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$
# <br>
# Replacing J($\theta$) for each j
#
# \begin{align*} \text{repeat until convergence: } \lbrace & \newline \theta_0 := & \theta_0 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}(h_\theta(x_{i}) - y_{i}) \newline \theta_1 := & \theta_1 - \alpha \frac{1}{m} \sum\limits_{i=1}^{m}\left((h_\theta(x_{i}) - y_{i}) x_{i}\right) \newline \rbrace& \end{align*}
#
# ---
# <br>
# <img style="float: left;" src="images/03_04.gif">
# <br>
# <br>
# + [markdown] slideshow={"slide_type": "slide"}
# # Linear Regression Example
# -
# | x | y |
# | ------------- |:-------------:|
# | 1 | 0.8 |
# | 2 | 1.6 |
# | 3 | 2.4 |
# | 4 | 3.2 |
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Read data
# + slideshow={"slide_type": "fragment"}
# %matplotlib inline
import pandas as pd
import numpy as np
from sklearn import linear_model
import matplotlib.pyplot as plt
# read data in pandas frame
dataframe = pd.read_csv('datasets/example1.csv', encoding='utf-8')
# assign x and y
X = np.array(dataframe[['x']])
y = np.array(dataframe[['y']])
m = y.size # number of training examples
# + slideshow={"slide_type": "fragment"}
# check data by printing first few rows
dataframe.head()
# + [markdown] slideshow={"slide_type": "subslide"}
# ## Plot data
# + slideshow={"slide_type": "fragment"}
#visualize results
plt.scatter(X, y)
plt.title("Dataset")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
# -
# ## Let's assume $\theta_0 = 0$ and $\theta_1=0$
# +
theta0 = 0
theta1 = 0
cost = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
# -
# ## Plot it
# +
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 0")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
# -
# ## Plot $\theta_1$ vs Cost
# +
# save theta1 and cost in a vector
cost_log = []
theta1_log = []
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("Theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Let's assume $\theta_0 = 0$ and $\theta_1=1$
# + slideshow={"slide_type": "fragment"}
theta0 = 0
theta1 = 1
cost = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
# -
# ## Plot it
# +
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 1")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
# -
# ## Plot $\theta_1$ vs Cost again
# +
# save theta1 and cost in a vector
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("Theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
# -
# ## Let's assume $\theta_0 = 0$ and $\theta_1=2$
# +
theta0 = 0
theta1 = 2
cost = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
print (cost)
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for theta1 = 2")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
# +
# save theta1 and cost in a vector
cost_log.append(cost)
theta1_log.append(theta1)
# plot
plt.scatter(theta1_log, cost_log)
plt.title("theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
# -
# ## Run it for a while
# +
theta0 = 0
theta1 = -3.1
cost_log = []
theta1_log = []
inc = 0.1
for j in range(61):
theta1 = theta1 + inc;
cost = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
cost = cost/(2*m)
cost_log.append(cost)
theta1_log.append(theta1)
# -
# ## Plot $\theta_1$ vs Cost
plt.scatter(theta1_log, cost_log)
plt.title("theta1 vs Cost")
plt.xlabel("Theta1")
plt.ylabel("Cost")
plt.show()
# <br>
# <br>
# # Let's do it with Gradient Descent now
#
# +
theta0 = 0
theta1 = -3
alpha = 0.1
iterations = 100
cost_log = []
iter_log = []
for j in range(iterations):
cost = 0
grad = 0
for i in range(m):
hx = theta1*X[i,0] + theta0
cost += pow((hx - y[i,0]),2)
grad += ((hx - y[i,0]))*X[i,0]
cost = cost/(2*m)
grad = grad/m # gradient of J w.r.t. theta1 is (1/m) * sum((h - y) * x)
theta1 = theta1 - alpha*grad
cost_log.append(cost)
# -
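# The loop above can also be written in vectorized NumPy. A sketch on the toy data (theta0 fixed at 0, as in the loop); it converges to the exact slope 0.8:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([0.8, 1.6, 2.4, 3.2])
m = len(x)

theta1 = -3.0
alpha = 0.1
for _ in range(100):
    grad = np.sum((theta1 * x - y) * x) / m  # dJ/dtheta1
    theta1 -= alpha * grad

print(theta1)  # ~0.8
```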
theta1
# ## Plot Convergence
plt.plot(cost_log)
plt.title("Convergence of Cost Function")
plt.xlabel("Iteration number")
plt.ylabel("Cost function")
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Predict output using trained model
# + slideshow={"slide_type": "fragment"}
# predict using model
y_pred = theta1*X + theta0
# plot
plt.scatter(X, y)
plt.plot(X, y_pred)
plt.title("Line for Theta1 from Gradient Descent")
plt.xlabel("x")
plt.ylabel("y")
plt.show()
# + [markdown] slideshow={"slide_type": "slide"}
# # Resources
#
# Course website: [https://w4zir.github.io/ml17s/](https://w4zir.github.io/ml17s/)
#
# [Course resources](https://github.com/w4zir/ml17s)
# + [markdown] slideshow={"slide_type": "fragment"}
# # Credits
# Raschka, Sebastian. Python machine learning. Birmingham, UK: Packt Publishing, 2015. Print.
#
# [<NAME>, Machine Learning, Coursera](https://www.coursera.org/learn/machine-learning)
#
# [<NAME>](https://github.com/icrtiou/Coursera-ML-AndrewNg)
#
# [<NAME>](https://github.com/kaleko/CourseraML)
| lectures/.ipynb_checkpoints/lec04-multinomial-regression-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/holic1021/Advanced-Computer-Vision-with-TensorFlow/blob/main/C3_W1_Lab_1_transfer_learning_cats_dogs.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="fYJqjq66JVQQ"
# # Basic transfer learning with cats and dogs data
#
#
# + [markdown] id="0oWuHhhcJVQQ"
# ### Import tensorflow
# + id="ioLbtB3uGKPX"
try:
# # %tensorflow_version only exists in Colab.
# %tensorflow_version 2.x
except Exception:
pass
# + [markdown] id="gjfMJAHPJVQR"
# ### Import modules and download the cats and dogs dataset.
# + id="y23ucAFLoHop"
import urllib.request
import os
import zipfile
import random
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import layers
from tensorflow.keras import Model
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.optimizers import RMSprop
from shutil import copyfile
data_url = "https://download.microsoft.com/download/3/E/1/3E1C3F21-ECDB-4869-8368-6DEBA77B919F/kagglecatsanddogs_3367a.zip"
data_file_name = "catsdogs.zip"
download_dir = '/tmp/'
urllib.request.urlretrieve(data_url, data_file_name)
zip_ref = zipfile.ZipFile(data_file_name, 'r')
zip_ref.extractall(download_dir)
zip_ref.close()
# + [markdown] id="JNVXCUNUJVQR"
# Check that the dataset has the expected number of examples.
# + id="AwMoZHxWOynx"
print("Number of cat images:",len(os.listdir('/tmp/PetImages/Cat/')))
print("Number of dog images:", len(os.listdir('/tmp/PetImages/Dog/')))
# Expected Output:
# Number of cat images: 12501
# Number of dog images: 12501
# + [markdown] id="_0riaptkJVQR"
# Create some folders that will store the training and test data.
# - There will be a training folder and a testing folder.
# - Each of these will have a subfolder for cats and another subfolder for dogs.
# + id="qygIo4W5O1hQ"
try:
os.mkdir('/tmp/cats-v-dogs')
os.mkdir('/tmp/cats-v-dogs/training')
os.mkdir('/tmp/cats-v-dogs/testing')
os.mkdir('/tmp/cats-v-dogs/training/cats')
os.mkdir('/tmp/cats-v-dogs/training/dogs')
os.mkdir('/tmp/cats-v-dogs/testing/cats')
os.mkdir('/tmp/cats-v-dogs/testing/dogs')
except OSError:
pass
# + [markdown] id="1ZHD_c-sJVQR"
# ### Split data into training and test sets
#
# - The following code first checks whether an image file is empty (zero length)
# - Of the files that are not empty, it puts 90% of the data into the training set, and 10% into the test set.
# + id="M90EiIu0O314"
import random
from shutil import copyfile
def split_data(SOURCE, TRAINING, TESTING, SPLIT_SIZE):
files = []
for filename in os.listdir(SOURCE):
file = SOURCE + filename
if os.path.getsize(file) > 0:
files.append(filename)
else:
print(filename + " is zero length, so ignoring.")
training_length = int(len(files) * SPLIT_SIZE)
testing_length = int(len(files) - training_length)
shuffled_set = random.sample(files, len(files))
training_set = shuffled_set[0:training_length]
testing_set = shuffled_set[training_length:]
for filename in training_set:
this_file = SOURCE + filename
destination = TRAINING + filename
copyfile(this_file, destination)
for filename in testing_set:
this_file = SOURCE + filename
destination = TESTING + filename
copyfile(this_file, destination)
CAT_SOURCE_DIR = "/tmp/PetImages/Cat/"
TRAINING_CATS_DIR = "/tmp/cats-v-dogs/training/cats/"
TESTING_CATS_DIR = "/tmp/cats-v-dogs/testing/cats/"
DOG_SOURCE_DIR = "/tmp/PetImages/Dog/"
TRAINING_DOGS_DIR = "/tmp/cats-v-dogs/training/dogs/"
TESTING_DOGS_DIR = "/tmp/cats-v-dogs/testing/dogs/"
split_size = .9
split_data(CAT_SOURCE_DIR, TRAINING_CATS_DIR, TESTING_CATS_DIR, split_size)
split_data(DOG_SOURCE_DIR, TRAINING_DOGS_DIR, TESTING_DOGS_DIR, split_size)
# Expected output
# 666.jpg is zero length, so ignoring
# 11702.jpg is zero length, so ignoring
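# The 90/10 split above boils down to shuffling the file list and slicing it. A minimal sketch of the same logic on a dummy file list (no file I/O):

```python
import random

files = ["img%d.jpg" % i for i in range(10)]  # dummy file names

split_size = 0.9
random.seed(0)  # fixed seed so the sketch is reproducible
shuffled = random.sample(files, len(files))
training_set = shuffled[:int(len(files) * split_size)]
testing_set = shuffled[int(len(files) * split_size):]

print(len(training_set), len(testing_set))  # 9 1
```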
# + [markdown] id="KMx_pePuJVQR"
# Check that the training and test sets are the expected lengths.
# + id="cl8sQpM1O9xK"
print("Number of training cat images", len(os.listdir('/tmp/cats-v-dogs/training/cats/')))
print("Number of training dog images", len(os.listdir('/tmp/cats-v-dogs/training/dogs/')))
print("Number of testing cat images", len(os.listdir('/tmp/cats-v-dogs/testing/cats/')))
print("Number of testing dog images", len(os.listdir('/tmp/cats-v-dogs/testing/dogs/')))
# expected output
# Number of training cat images 11250
# Number of training dog images 11250
# Number of testing cat images 1250
# Number of testing dog images 1250
# + [markdown] id="pNz89__rJVQR"
# ### Data augmentation (try adjusting the parameters)!
#
# Here, you'll use the `ImageDataGenerator` to perform data augmentation.
# - Transformations like rotating and flipping the existing images allow you to generate training data that is more varied, which can help the model generalize better during training.
# - You can also use the data generator to apply data augmentation to the validation set.
#
# You can use the default parameter values for a first pass through this lab.
# - Later, try to experiment with the parameters of `ImageDataGenerator` to improve the model's performance.
# - Try to reach 99.9% validation accuracy or better.
# + id="TVO1l8vAPE14"
TRAINING_DIR = "/tmp/cats-v-dogs/training/"
# Experiment with your own parameters to reach 99.9% validation accuracy or better
train_datagen = ImageDataGenerator(rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_generator = train_datagen.flow_from_directory(TRAINING_DIR,
batch_size=100,
class_mode='binary',
target_size=(150, 150))
VALIDATION_DIR = "/tmp/cats-v-dogs/testing/"
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(VALIDATION_DIR,
batch_size=100,
class_mode='binary',
target_size=(150, 150))
# + [markdown] id="WchwDzWNJVQR"
# ### Get and prepare the model
#
# You'll be using the `InceptionV3` model.
# - Since you're making use of transfer learning, you'll load the pre-trained weights of the model.
# - You'll also freeze the existing layers so that they aren't trained on your downstream task with the cats and dogs data.
# - You'll also get a reference to the last layer, 'mixed7' because you'll add some layers after this last layer.
# + id="tiPK1LlMOvm7"
weights_url = "https://storage.googleapis.com/mledu-datasets/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5"
weights_file = "inception_v3.h5"
urllib.request.urlretrieve(weights_url, weights_file)
# Instantiate the model
pre_trained_model = InceptionV3(input_shape=(150, 150, 3),
include_top=False,
weights=None)
# load pre-trained weights
pre_trained_model.load_weights(weights_file)
# freeze the layers
for layer in pre_trained_model.layers:
layer.trainable = False
# pre_trained_model.summary()
last_layer = pre_trained_model.get_layer('mixed7')
print('last layer output shape: ', last_layer.output_shape)
last_output = last_layer.output
# + [markdown] id="3edBz_IxJVQR"
# ### Add layers
# Add some layers that you will train on the cats and dogs data.
# - `Flatten`: This will take the output of the `last_layer` and flatten it to a vector.
# - `Dense`: You'll add a dense layer with a relu activation.
# - `Dense`: After that, add a dense layer with a sigmoid activation. The sigmoid will scale the output to range from 0 to 1, and allow you to interpret the output as a prediction between two categories (cats or dogs).
#
# Then create the model object.
# + id="oDidHXO1JVQR"
# Flatten the output layer to 1 dimension
x = layers.Flatten()(last_output)
# Add a fully connected layer with 1,024 hidden units and ReLU activation
x = layers.Dense(1024, activation='relu')(x)
# Add a final sigmoid layer for classification
x = layers.Dense(1, activation='sigmoid')(x)
model = Model(pre_trained_model.input, x)
# + [markdown] id="asCm8okXJVQR"
# ### Train the model
# Compile the model, and then train it on the training data using `model.fit`
# - Feel free to adjust the number of epochs. This project was originally designed with 20 epochs.
# - For the sake of time, you can use fewer epochs (2) to see how the code runs.
# - You can ignore the warnings about some of the images having corrupt EXIF data. Those will be skipped.
# + id="3nxUncKWPRhR"
# compile the model
model.compile(optimizer=RMSprop(learning_rate=0.0001),
loss='binary_crossentropy',
metrics=['acc'])
# train the model (adjust the number of epochs from 1 to improve performance)
history = model.fit(
train_generator,
validation_data=validation_generator,
epochs=2,
verbose=1)
# + [markdown] id="H6Oo6kM-JVQR"
# ### Visualize the training and validation accuracy
#
# You can see how the training and validation accuracy change with each epoch on an x-y plot.
# + id="erDopoQ5eNL7"
# %matplotlib inline
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
#-----------------------------------------------------------
# Retrieve a list of list results on training and test data
# sets for each training epoch
#-----------------------------------------------------------
acc=history.history['acc']
val_acc=history.history['val_acc']
loss=history.history['loss']
val_loss=history.history['val_loss']
epochs=range(len(acc)) # Get number of epochs
#------------------------------------------------
# Plot training and validation accuracy per epoch
#------------------------------------------------
plt.plot(epochs, acc, 'r', label="Training Accuracy")
plt.plot(epochs, val_acc, 'b', label="Validation Accuracy")
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
# + [markdown] id="xKc_1Qm8JVQR"
# ### Predict on a test image
#
# You can upload any image and have the model predict whether it's a dog or a cat.
# - Find an image of a dog or cat
# - Run the following code cell. It will ask you to upload an image.
# - The model will print "is a dog" or "is a cat" depending on the model's prediction.
# + id="_0R9fsf4w29e"
import numpy as np
from google.colab import files
from tensorflow.keras.preprocessing import image
uploaded = files.upload()
for fn in uploaded.keys():
# predicting images
path = '/content/' + fn
img = image.load_img(path, target_size=(150, 150))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
image_tensor = np.vstack([x])
classes = model.predict(image_tensor)
print(classes)
print(classes[0])
if classes[0]>0.5:
print(fn + " is a dog")
else:
print(fn + " is a cat")
| C3_W1_Lab_1_transfer_learning_cats_dogs.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Merging Dataframes
#
# 
import pandas as pd
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR'},
{'Name': 'Sally', 'Role': 'Course liaison'},
{'Name': 'James', 'Role': 'Grader'}])
staff_df = staff_df.set_index('Name')
staff_df
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business'},
{'Name': 'Mike', 'School': 'Law'},
{'Name': 'Sally', 'School': 'Engineering'}])
student_df = student_df.set_index('Name')
student_df
# ## Outer Join
# FULL (OUTER) JOIN: Returns all records when there is a match in either left or right table
# 
pd.merge(staff_df, student_df, how='outer' , left_index=True, right_index=True)
# ## Inner Join
# (INNER) JOIN: Returns records that have matching values in both tables
# 
pd.merge(staff_df, student_df, how='inner', left_index=True, right_index=True)
# ## Left Join
# LEFT (OUTER) JOIN: Returns all records from the left table, and the matched records from the right table
# 
pd.merge(staff_df, student_df, how='left', left_index=True, right_index=True)
# ## Right Join
# RIGHT (OUTER) JOIN: Returns all records from the right table, and the matched records from the left table
# 
pd.merge(staff_df, student_df, how='right', left_index=True, right_index=True)
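# Passing `indicator=True` to `pd.merge` adds a `_merge` column recording which table each row came from, which is handy for auditing an outer join. A sketch on the same two frames:

```python
import pandas as pd

staff = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR'},
                      {'Name': 'Sally', 'Role': 'Course liaison'},
                      {'Name': 'James', 'Role': 'Grader'}]).set_index('Name')
students = pd.DataFrame([{'Name': 'James', 'School': 'Business'},
                         {'Name': 'Mike', 'School': 'Law'},
                         {'Name': 'Sally', 'School': 'Engineering'}]).set_index('Name')

merged = pd.merge(staff, students, how='outer',
                  left_index=True, right_index=True, indicator=True)
print(merged)
# Kelly is left_only, Mike is right_only, James and Sally are both
```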
staff_df
student_df
staff_df = staff_df.reset_index()
student_df = student_df.reset_index()
staff_df
student_df
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
staff_df = pd.DataFrame([{'Name': 'Kelly', 'Role': 'Director of HR', 'Location': 'State Street'},
{'Name': 'Sally', 'Role': 'Course liaison', 'Location': 'Washington Avenue'},
{'Name': 'James', 'Role': 'Grader', 'Location': 'Washington Avenue'}])
student_df = pd.DataFrame([{'Name': 'James', 'School': 'Business', 'Location': '1024 Billiard Avenue'},
{'Name': 'Mike', 'School': 'Law', 'Location': 'Fraternity House #22'},
{'Name': 'Sally', 'School': 'Engineering', 'Location': '512 Wilson Crescent'}])
pd.merge(staff_df, student_df, how='left', left_on='Name', right_on='Name')
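# When both frames share a non-key column like Location, pandas disambiguates with `_x`/`_y` suffixes by default; the `suffixes` parameter lets you pick clearer names. A small sketch:

```python
import pandas as pd

staff = pd.DataFrame([{'Name': 'Sally', 'Location': 'Washington Avenue'}])
students = pd.DataFrame([{'Name': 'Sally', 'Location': '512 Wilson Crescent'}])

merged = pd.merge(staff, students, how='left', on='Name',
                  suffixes=('_office', '_home'))
print(list(merged.columns))  # ['Name', 'Location_office', 'Location_home']
```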
staff_df = pd.DataFrame([{'First Name': 'Kelly', 'Last Name': 'Desjardins', 'Role': 'Director of HR'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'Role': 'Course liaison'},
{'First Name': 'James', 'Last Name': 'Wilde', 'Role': 'Grader'}])
student_df = pd.DataFrame([{'First Name': 'James', 'Last Name': 'Hammond', 'School': 'Business'},
{'First Name': 'Mike', 'Last Name': 'Smith', 'School': 'Law'},
{'First Name': 'Sally', 'Last Name': 'Brooks', 'School': 'Engineering'}])
staff_df
student_df
pd.merge(staff_df, student_df, how='inner', left_on=['First Name','Last Name'], right_on=['First Name','Last Name'])
# ## Concatenating DataFrames
men2004 = pd.read_csv("../data/men2004.csv")
men2008 = pd.read_csv("../data/men2008.csv")
men2004.head()
men2004.shape
men2008.head()
men2008.shape
df_new = pd.concat([men2004, men2008], ignore_index=True)
df_new.head()
df_new.shape
62 + 59  # rows in men2004 + rows in men2008 = rows in df_new
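# `pd.concat` can also tag each source frame with a key, so you can still tell which rows came from which year. A sketch on small dummy frames:

```python
import pandas as pd

a = pd.DataFrame({'Athlete': ['A', 'B'], 'Medals': [1, 2]})
b = pd.DataFrame({'Athlete': ['C', 'D', 'E'], 'Medals': [3, 0, 1]})

combined = pd.concat([a, b], keys=['2004', '2008'])
print(combined.shape)              # (5, 2)
print(combined.loc['2008'].shape)  # (3, 2)
```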
| 04 - Data Analysis With Pandas/notebooks/09_MergingJoiningConcat.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ========================================
#
# __Contents__
# * How to search
# 1. Import modules and load data
# 2. Set the parameter search space
# 3. Set the feature-selection space (optional)
# 4. Run the search
#
# * How to use the logs
# 1. Extract search results
# 2. Generate meta-features for stacking
#
# ========================================
# # How to search
# ## 1. Import modules and load data
# Here we use "the breast cancer wisconsin dataset" (binary classification) as sample data.
# First, split the dataset into Train and Test sets.
# +
import os ,sys
import numpy as np, pandas as pd, scipy as sp
from sklearn import datasets
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.linear_model import LogisticRegression
from cvopt.model_selection import SimpleoptCV
from cvopt.search_setting import search_category, search_numeric
dataset = datasets.load_breast_cancer()
Xtrain, Xtest, ytrain, ytest = train_test_split(dataset.data, dataset.target, test_size=0.3, random_state=0)
print("Train features shape:", Xtrain.shape)
print("Test features shape:", Xtest.shape)
# -
from bokeh.io import output_notebook
output_notebook() # Run output_notebook() when you need search visualization
# ## 2. Set the parameter search space
# The search space can be configured with a format common to all cv classes.
param_distributions = {
"penalty": search_category(['l1', 'l2']),
"C": search_numeric(0.01, 3.0, "float"),
"tol" : search_numeric(0.0001, 0.001, "float"),
"class_weight" : search_category([None, "balanced"]),
}
# ### 2.A Other formats
# You can also use the same format as each cv class's base module.
#
# ### for HyperoptCV (base module: Hyperopt)
# ```python
# param_distributions = {
# "penalty": hp.choice("penalty", ["l1", "l2"]),
# "C": hp.loguniform("C", 0.01, 3.0),
# "tol" : hp.loguniform("tol", 0.0001, 0.001),
# "class_weight" : hp.choice("class_weight", [None, "balanced"]),
# }
# ```
# ### for BayesoptCV (base module: GpyOpt)
# __NOTE:__
# * In GpyOpt, the search space is specified as a list of dicts; in this module it is specified as a dict of dicts (key: param name, value: GpyOpt's standard search-space dict).
# * For a categorical parameter, you must add key:`categories`, value:`list of category names` to its search-space dict.
# ```python
# param_distributions = {
# "penalty" : {"name": "penalty", "type":"categorical", "domain":(0,1), "categories":["l1", "l2"]},
# "C": {"name": "C", "type":"continuous", "domain":(0.01, 3.0)},
# "tol" : {"name": "tol", "type":"continuous", "domain":(0.0001, 0.001)},
# "class_weight" : {"name": "class_weight", "type":"categorical", "domain":(0,1), "categories":[None, "balanced"]},
# }
# ```
#
# ### for GASearchCV, RandomoptCV
# __NOTE:__
# * The supported classes are `search_setting.search_numeric`, `search_setting.search_category`, and the `scipy.stats` classes.
# ```python
# param_distributions = {
# "penalty" : search_category(["l1", "l2"]),
# "C": sp.stats.uniform(loc=0.01, scale=2.99),
# "tol" : sp.stats.uniform(loc=0.0001, scale=0.0009),
# "class_weight" : search_category([None, "balanced"]),
# }
# ```
# ## 3. Set the feature-selection space (optional)
# Feature selection is performed per `feature_group`.
# __If a `feature_group` is "-1", the features in that group are always selected.__
# Groups can be defined, for example, randomly, by feature-engineering method, or by data source.
# If `feature_group` is not set, all features are used.
#
# ------------------------------------
#
# ### Example.
# Suppose the data has 5 features (columns) and you want to set `feature_group` as follows.
#
# | feature index(data col index) | feature group |
# |:------------:|:------------:|
# | 0 | 0 |
# | 1 | 0 |
# | 2 | 0 |
# | 3 | 1 |
# | 4 | 1 |
#
# In this case, define the list as follows.
#
# ```
# feature_groups = [0, 0, 0, 1, 1]
# ```
#
# As a search result, you get a boolean for each `feature_group` indicating whether it was selected.
#
# ```
# feature_groups0: True
# feature_groups1: False
# ```
#
# This result means that the group-0 features are selected and the group-1 features are not.
#
# ------------------------------------
#
feature_groups = np.random.randint(0, 5, Xtrain.shape[1])
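# A per-group selection flag expands into a per-column mask like this (a pure-NumPy sketch of the idea, not cvopt's internal code):

```python
import numpy as np

feature_groups = np.array([0, 0, 0, 1, 1])  # group id of each column
selected = {0: True, 1: False}              # search result per group

# expand the per-group decision into a per-column boolean mask
mask = np.array([selected[g] for g in feature_groups])
print(mask)  # [ True  True  True False False]
```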
# ## 4. Run the search
# cvopt has the same API as scikit-learn's cross-validation classes.
# If you are familiar with scikit-learn, the cvopt classes are easy to use.
#
# See the [API reference](https://genfifth.github.io/cvopt/) for details of each class.
# +
estimator = LogisticRegression()
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
opt = SimpleoptCV(estimator, param_distributions,
scoring="roc_auc", # Objective of search
cv=cv, # Cross validation setting
max_iter=32, # Number of search iterations
n_jobs=3, # Number of jobs to run in parallel.
verbose=2, # 0: don't display status, 1:display status by stdout, 2:display status by graph
logdir="./search_usage_jp", # If this path is specified, save the log.
model_id="search_usage_jp", # Dir and file name used when saving estimators
save_estimator=2, # estimator save setting.
backend="hyperopt", # hyperopt,bayesopt, gaopt or randomopt.
)
opt.fit(Xtrain, ytrain, validation_data=(Xtest, ytest),
# validation_data is optional.
# This data is only used to compute the validation score (it is not fitted on).
# When this data is input & save_estimator > 1, the estimator fitted on the whole Xtrain is saved.
feature_groups=feature_groups,
)
ytest_pred = opt.predict(Xtest)
# -
# # How to use the logs
# ## 1. Extract search results
# cvopt provides helper functions that make it easy to use the search logs.
# To use the search results, run the following.
# +
from cvopt.utils import extract_params
target_index = pd.DataFrame(opt.cv_results_)[pd.DataFrame(opt.cv_results_)["mean_test_score"] == opt.best_score_]["index"].values[0]
estimator_params, feature_params, feature_select_flag = extract_params(logdir="./search_usage_jp",
model_id="search_usage_jp",
target_index=target_index,
feature_groups=feature_groups)
estimator.set_params(**estimator_params) # Set estimator parameters
Xtrain_selected = Xtrain[:, feature_select_flag] # Extract selected feature columns
print(estimator)
print("Train features shape:", Xtrain.shape)
print("Train selected features shape:",Xtrain_selected.shape)
# -
# ## 2. Generate meta-features for stacking
# If you want to do [stacking](https://mlwave.com/kaggle-ensembling-guide/) using meta-features, you can obtain them as follows.
# To do this, you must save the estimator of each cv fold during the search by setting the parameter `save_estimator` greater than 0. In addition, to generate meta-features for data that was not fitted, you must also save an estimator fitted on the whole train data by setting `save_estimator` greater than 1.
# +
from cvopt.utils import mk_metafeature
target_index = pd.DataFrame(opt.cv_results_)[pd.DataFrame(opt.cv_results_)["mean_test_score"] == opt.best_score_]["index"].values[0]
Xtrain_meta, Xtest_meta = mk_metafeature(Xtrain, ytrain,
logdir="./search_usage_jp",
model_id="search_usage_jp",
target_index=target_index,
cv=cv,
validation_data=(Xtest, ytest),
feature_groups=feature_groups,
estimator_method="predict_proba")
print("Train features shape:", Xtrain.shape)
print("Train meta features shape:", Xtrain_meta.shape)
print("Test features shape:", Xtest.shape)
print("Test meta features shape:", Xtest_meta.shape)
| notebooks/basic_usage_jp.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import requests
import pandas as pd
import tweepy
import json
import os
# First, you will need to enable OAuth 2.0 in your App’s auth settings in the Developer Portal to get your client ID. You will also need your callback URL, which can be obtained from your App's auth settings.
# %env CLIENT_ID your_client_id
oauth2_user_handler = tweepy.OAuth2UserHandler(
client_id=os.environ.get("CLIENT_ID"),
redirect_uri="your-callback-url",  # replace with your App's callback URL
scope=["users.read", "tweet.read", "offline.access", "bookmark.read"]
)
# Visit the following URL in a browser to authorize your App on behalf of your Twitter account.
print(oauth2_user_handler.get_authorization_url())
# Replace the string below with the full redirect URL you are sent to after authorizing your App.
access_token = oauth2_user_handler.fetch_token(
"Paste in the full URL after you've authorized your App"
)
access = access_token['access_token']
user_me = requests.request("GET", "https://api.twitter.com/2/users/me", headers={'Authorization': 'Bearer {}'.format(access)}).json()
user_me
user_id = user_me['data']['id']
url = "https://api.twitter.com/2/users/{}/bookmarks".format(user_id)
headers = {
'Content-Type': 'application/json',
'Authorization': 'Bearer {}'.format(access)
}
response = requests.request("GET", url, headers=headers).json()
bookmarked_tweets = pd.DataFrame(response["data"])
bookmarked_tweets
username = user_me['data']['username']
print(username)
# First, you will need to set up an integration directly in your Notion account and share your integration with the database you are looking to update. You can learn more about Notion’s API and how to get started in their [developer documentation](https://developers.notion.com/).
# %env NOTION_DATABASE_ID your_database_id
# %env NOTION_API_KEY your_api_key
def notion_update(text):
url = "https://api.notion.com/v1/pages"
payload = json.dumps({
"parent": {
"database_id": "{}".format(os.environ.get("NOTION_DATABASE_ID"))
},
"properties": {
"title": {
"title": [
{
"text": {
"content": "{}".format(text)
}
}
]
}
}
})
headers = {
'Content-Type': 'application/json',
'Notion-Version': '2022-02-22',
'Authorization': 'Bearer {}'.format(os.environ.get("NOTION_API_KEY"))
}
response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
for index1, row1 in bookmarked_tweets.iterrows():
text = "https://twitter.com/{}/status/{}\n{}".format(username, row1["id"], row1["text"])
notion_update(text)
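# The per-row formatting in the loop above can be factored into a small pure helper (the name `format_bookmark` is illustrative), which is easy to test without calling either API:

```python
def format_bookmark(username, tweet_id, text):
    # Build the "tweet URL, newline, tweet text" string sent to Notion.
    return "https://twitter.com/{}/status/{}\n{}".format(username, tweet_id, text)

print(format_bookmark("jack", "20", "just setting up my twttr"))
```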
| Bookmarks lookup and Notion.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import pickle
from sklearn.metrics.pairwise import linear_kernel
from sklearn.feature_extraction.text import TfidfVectorizer
import csv
# -
class Jaden:
_model = None
_vector = None
_vocabulary = None
_unique_stemmed_words = None
def __init__(self):
self._model = pickle.load(open('finishjadenapp/_model.sav', 'rb'))
self._vector = pickle.load(open('finishjadenapp/_vectorized.sav', 'rb'))
with open('finishjadenapp/_tarih.csv', newline='', encoding='utf8') as f:
reader = csv.reader(f)
_vocabulary = list(reader)
self._vocabulary = _vocabulary
with open('finishjadenapp/_unique_stemmed_words.csv', newline='', encoding='utf8') as f:
reader = csv.reader(f)
_unique_stemmed_words = list(reader)
self._unique_stemmed_words = _unique_stemmed_words
def find_answer(self, question):
_cos_sim = linear_kernel(self._model.transform([self.stemming(self._unique_stemmed_words, question)]), self._vector).flatten()
        _cos_sim = np.argsort(-_cos_sim)[:5]
_result = []
for i in _cos_sim:
_result.append(self._vocabulary[i+1][1])
return _result
def stemming(self, doc1, doc2):
alldocin = doc1
docin = doc2.split(' ')
result = []
for i in range(len(alldocin)):
for j in range(len(docin)):
s = self.comparison(alldocin[i][0], docin[j])
if(len(s) > 3):
docin[j] = s
return " ".join(str(x) for x in docin)
def comparison(self, word1, word2):
length = len(word2) if len(word1) > len(word2) else len(word1)
result = ''
for i in range(length):
if(word1[i] == word2[i]):
result = result + word1[i]
else:
break
return result
_jaden = Jaden()
_jaden.find_answer('шоқынды алтынсариннің')
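# The `comparison` method above extracts the longest common prefix of two words; a standalone sketch of the same logic (illustrative name `common_prefix`) makes the behaviour easy to verify:

```python
def common_prefix(word1, word2):
    # Walk both words in lockstep and stop at the first mismatch;
    # zip() also stops at the end of the shorter word.
    result = []
    for a, b in zip(word1, word2):
        if a != b:
            break
        result.append(a)
    return "".join(result)

print(common_prefix("алтынсарин", "алтынсариннің"))  # алтынсарин
```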
| Last_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _cell_guid="b1076dfc-b9ad-4769-8c92-a6c4dae69d19" _uuid="8f2839f25d086af736a60e9eeb907d3b93b6e0e5"
# This Python 3 environment comes with many helpful analytics libraries installed
# It is defined by the kaggle/python docker image: https://github.com/kaggle/docker-python
# For example, here's several helpful packages to load in
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import matplotlib.pyplot as plt
# %matplotlib inline
import cv2
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list the files in the input directory
import os
print(os.listdir("../input"))
# Any results you write to the current directory are saved as output.
# + _cell_guid="79c7e3d0-c299-4dcb-8224-4455121ee9b0" _uuid="d629ff2d2480ee46fbb7e2d37f6b5fab8052498a"
from os.path import join
from sklearn.model_selection import train_test_split
from tensorflow.python import keras
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D, Dropout
from tensorflow.python.keras.preprocessing.image import load_img, img_to_array
# + _uuid="8f2ecb108eeb4edc249f7662f79457c585b8a0fc"
labels = pd.read_csv("../input/labels.csv")
labels.head()
# + _uuid="e27878d9742a0ff697e593f585f718c708bd7f38"
#convert breed from categories to numbers
dogs=labels.breed.unique()
breeds={}
num=0
for item in dogs:
breeds[item]=num
num+=1
labels.breed=[breeds[item] for item in labels.breed]
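# The same category-to-number mapping can be built more idiomatically with `enumerate` (equivalent result; `pd.factorize` or sklearn's `LabelEncoder` are common alternatives):

```python
dogs_demo = ["beagle", "pug", "husky"]  # stand-in for labels.breed.unique()
breeds_demo = {breed: i for i, breed in enumerate(dogs_demo)}
print(breeds_demo)  # {'beagle': 0, 'pug': 1, 'husky': 2}
```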
# + _uuid="f50f3f1bc588939ba1eefa69572faa147ae85c13"
#creating a dataframe with full Image Path
img_paths = [join("../input/train/", id+".jpg") for id in labels["id"]]
# + _uuid="15b0d769d03da3a3b57cbc145840ef6ca8d47729"
def read_and_prep_images(img_paths, img_height=100, img_width=100):
imgs = [load_img(img_path, target_size=(img_height, img_width)) for img_path in img_paths]
img_array = np.array([img_to_array(img) for img in imgs])
return img_array
# + _uuid="b9d4f20ea01f8f72d56639a637c1424d9b6b107b"
train_data = read_and_prep_images(img_paths)
# + _uuid="7ef19ad64fb986505ebfe61a1c58cfaaa9f8d6d2"
out_y=keras.utils.to_categorical(labels["breed"])
# + _uuid="aff4210badb5967fec1d6aca27c2888d1da0189d"
X_train, X_test, y_train, y_test = train_test_split(train_data, out_y, test_size=0.33, random_state=42)
# + _uuid="e84fc94c8f1216284ec986fab0ea7cb8d94646d5"
model=Sequential()
model.add(Conv2D(64,kernel_size=(3,3),strides=2,activation='relu',input_shape=(100,100,3)))
model.add(Conv2D(128,kernel_size=(3,3),strides=2,activation='relu'))
model.add(Conv2D(256,kernel_size=(3,3),strides=2,activation='relu'))
model.add(Dropout(.30))
model.add(Conv2D(64,kernel_size=(3,3),activation='relu'))
model.add(Conv2D(64,kernel_size=(3,3),activation='relu'))
model.add(Conv2D(128,kernel_size=(3,3),activation='relu'))
model.add(Dropout(.50))
model.add(Conv2D(32,kernel_size=(3,3),activation='relu'))
model.add(Conv2D(32,kernel_size=(3,3),activation='relu'))
model.add(Flatten())
model.add(Dense(256,activation='relu'))
model.add(Dense(128,activation='relu'))
model.add(Dropout(.30))
model.add(Dense(256,activation='relu'))
model.add(Dense(128,activation='relu'))
model.add(Dropout(.30))
model.add(Dense(256,activation='relu'))
model.add(Dense(128,activation='relu'))
model.add(Flatten())
model.add(Dense(128,activation='relu'))
model.add(Dense(120,activation='softmax'))
# + _uuid="f41d9a6b0d3f1c96bb09a76aad80d4b3ed495bef"
model.compile(loss=keras.losses.categorical_crossentropy,optimizer='adam',metrics=['accuracy'])
# + _uuid="05040dbb3f7d5cfafe279e16d04d3147fc004bf9"
H=model.fit(X_train, y_train,
batch_size=128,
epochs=10,
validation_split = 0.2)
# + _uuid="66d8164c24586d4d428b4d4e14bf6ff39e52fad3"
# plot the training loss and accuracy
plt.style.use("ggplot")
plt.figure()
N = 10
plt.plot(np.arange(0, N), H.history["loss"], label="train_loss")
plt.plot(np.arange(0, N), H.history["val_loss"], label="val_loss")
plt.title("Training Loss")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="upper left")
#plt.savefig(args["plot"])
# + _uuid="dde6242a144dff7242981010d2b8287ce0e82b24"
plt.plot(np.arange(0, N), H.history["acc"], label="train_acc")
plt.plot(np.arange(0, N), H.history["val_acc"], label="val_acc")
plt.title("Training Accuracy")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend(loc="upper left")
# + _uuid="ce808b4e3fd16692293b11695bac4fd2119cf05f"
reversebreed={}
num=0
for item in range(120):
reversebreed[item]=dogs[num]
num+=1
#labels.breed=[reversebreed[item] for item in labels.breed]
# + _uuid="77d50f4af5cbd5ca82abaf683a8ef38cf1bee5cd"
img_paths_test = "../input/test"
# + _uuid="7bed3e869eeeb0668f960f6b2e53c4de60eb0ff1"
testimgs = [load_img(img_paths_test+"/"+filename, target_size=(100, 100)) for filename in os.listdir(img_paths_test)]
test_img_array = np.array([img_to_array(img) for img in testimgs])
# + _uuid="a4173552a039b4aefb157b4ef0de1a1ee01f0690"
test_img_array.shape
# + _uuid="e22442f3972e9a3aa607ca494f93bc0625ff20da"
preds = model.predict(test_img_array, batch_size=64)  # predict returns class probabilities for a softmax output
# + _uuid="1dfb331cf3d520847004f7790567c71001c9ac8a"
probDF=pd.DataFrame(preds)
# + _uuid="6c2c70b5b54f49bf27b5c8e755ddc36195012ebb"
probDF.rename(reversebreed,axis=1,inplace=True)
# + _uuid="5f13f2dc4dbce5d13e8612ab94abd42ec048c5d1"
probDF = probDF.reindex(sorted(probDF.columns), axis=1)  # reindex_axis is deprecated
# + _uuid="77281a0830a76f08a9d487c4402cdc80f541b867"
probDF.head()
# + _uuid="698db2e59c012b54be1045cdf9736fcc8651fd07"
imgnames=[]
for filename in os.listdir(img_paths_test):
imgnames.append(filename.split('.')[0])
# + _uuid="02758814320e88f20e80230e2029b35eecb39992"
imgnames=pd.DataFrame(imgnames)
# + _uuid="ecf6aaa22ebc5b698481d267c7fdd382dcfb09de"
submission=imgnames.join(probDF)
# + _uuid="f80f93e7cc93620f74e57e559de29db16690b8a5"
submission.rename(columns={0:"id"},inplace=True)
# + _uuid="b80ec9ddc9b9c9a09c6686f96a6b08d453af6298"
submission.to_csv("Submission.csv",sep=',',encoding='utf-8',index=False)
# + _uuid="8ad36e7667d4b5a0d09f0005436258d37c726ff4"
'''for pred in preds:
top_indices = pred.argsort()[-3:][::-1]
result = [ (pred[i],) for i in top_indices]
result.sort(key=lambda x: x[0], reverse=True)
print(result)'''
# + _uuid="febbbe161a75dfc05bc114d509cdf51345bc5a91"
#model.save("dogbreedmodel1.h5")
# + _uuid="6ed3b13e69a6d78dba4648b75ed3790e47a4ff29"
| kernel_testTrain.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from gensim.models import KeyedVectors
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from collections import defaultdict
from nltk.tokenize import RegexpTokenizer
# +
indices = [0, 5, 6]
train = []
tokenizer = RegexpTokenizer(r'\w+')
for line in open("../snli_1.0/snli_1.0_test.txt"):
entry = line.split("\t")
label, sentence1, sentence2 = [entry[i] for i in indices]
sentence1 = tokenizer.tokenize(sentence1.lower())
sentence2 = tokenizer.tokenize(sentence2.lower())
train.append((sentence1, sentence2, label))
# -
for entry in train[1:3]:
s1, s2, l = entry
print("sentence1: {}, sentence2: {}, label: {} END".format(s1, s2, l))
w2i = defaultdict(lambda: len(w2i))
l2i = defaultdict(lambda: len(l2i))
UNK = w2i["<unk>"]
w2i["test"]
w2i
x = torch.rand(3, 4)
softmax = nn.Softmax(dim = 1)
sims = torch.Tensor([0.5, 0.3, 0.8])
sims
weighted = torch.sum(torch.t(torch.t(x) * sims), dim = 0)
np.dot(sims, x)
torch.mv(torch.t(x), sims)
torch.t(torch.t(x) * sims)
sims = torch.Tensor([[0.5, 0.3, 0.8], [0.2, 0.7, 0.4]])
softmax(sims)
torch.matmul(sims, x)
def get_align(sims, embeddings):
assert(sims.size()[1] == embeddings.size()[0])
softmax = nn.Softmax(dim = 1)
sims = softmax(sims)
reweighted = torch.matmul(sims, embeddings)
return(reweighted)
get_align(sims, x)
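# `get_align` row-normalises the similarity matrix with a softmax and uses it to mix the embedding rows. The same computation in numpy (an illustrative check, not part of the model) confirms that each softmax row sums to 1 and that the output keeps one mixed row per input row:

```python
import numpy as np

def get_align_np(sims, embeddings):
    # Numerically stable row-wise softmax over similarities.
    e = np.exp(sims - sims.max(axis=1, keepdims=True))
    weights = e / e.sum(axis=1, keepdims=True)
    # Each output row is a convex combination of the embedding rows.
    return weights @ embeddings

sims_np = np.array([[0.5, 0.3, 0.8], [0.2, 0.7, 0.4]])
emb_np = np.random.rand(3, 4)
print(get_align_np(sims_np, emb_np).shape)  # (2, 4)
```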
# ## Example
# First sentence = "Hey how are you?"
#
# Second sentence = "hey there!"
glove = KeyedVectors.load_word2vec_format("../pretrained_vectors/glove_50_word2vec.txt")
glove_tensor = torch.FloatTensor(glove.vectors)
emb = nn.Embedding.from_pretrained(glove_tensor)
# +
def read_test_data(file):
with open(file) as f:
for line in f:
words = line.strip()
yield([w2i[x] for x in words.split(" ")])
example_train = [list(read_test_data("test_sentences.txt"))]
# -
pretrained_glove = np.random.uniform(-.25, .25, (len(w2i), 50))
print(len(pretrained_glove))
pretrained_glove[0] = 0
print(len(pretrained_glove))
for key in glove.vocab.keys():
if key in w2i:
print("key: {}".format(w2i[key]))
pretrained_glove[w2i[key]] = glove[key]
else:
continue
pretrained_glove
# +
# fake embedding for first sentence: 4 words 6 dims
first = torch.rand(4, 6)
# fake embedding for second sentence: 2 words 6 dims
second = torch.rand(2, 6)
# -
linear_f = nn.Linear(in_features=6, out_features=7)
# transformed1 = linear_f(first)
# transformed2 = linear_f(second)
transformed1, transformed2 = [linear_f(x) for x in [first, second]]
transformed2
aligned1 = torch.mm(transformed1, torch.t(transformed2))  # mm, not bmm: these are 2-D tensors
aligned2 = torch.mm(transformed2, torch.t(transformed1))
torch.mm(transformed1[[1]], torch.t(transformed2))
first
# +
## w_hat_a = get_align(aligned2, first) <- gets concatenated with sentence_b's embeddings
get_align(aligned2, first)
## w_hat_b = get_align(aligned1, second) <- gets concatenated with sentence_a's embeddings.
# -
concatenated2 = torch.cat((second, get_align(aligned2, first)), 1)
concatenated1 = torch.cat((first, get_align(aligned1, second)),1)
## second layer
linear_pair = nn.Linear(in_features = 2*6, out_features = 7)
paired1 = linear_pair(concatenated1)
paired2 = linear_pair(concatenated2)
paired1
v1 = torch.sum(paired1, 0)
v2 = torch.sum(paired2, 0)
torch.cat((v1, v2))
relu = nn.ReLU()
softmax(relu(paired1))
# +
class MLP(nn.Module):
def __init__(self, **kwargs):
'''
PARAMS:
sent1 : tensor for sentence1
sent2 : tensor for sentence2
hidden_size : number of units in the hidden layer, 200 in the paper
output_size : number of units in the final layer (decision), 3 for entailment
emb_size : embedding size, 300 (might use glove instead to test)
LAYERS:
linear_t : transformation layer (first layer in the network)
linear_p : paired layer which uses original word embeddings and their concatenation with the attended vectors
        linear_d : decision layer which uses the final concatenation of the sentence representations and decides whether they
        entail or contradict each other or are neutral.
'''
super(MLP, self).__init__()
self.emb_size = kwargs["EMB_SIZE"]
self.hidden_size = kwargs["HIDDEN_SIZE"]
self.output_size = kwargs["OUTPUT_SIZE"]
self.vocab = kwargs["VOCAB"]
# Layers
self.embed = nn.Embedding(self.vocab, self.emb_size)
self.linear_transform = nn.Sequential(nn.Linear(self.emb_size, self.hidden_size), nn.ReLU(), nn.Linear(self.hidden_size, self.hidden_size))
self.linear_pair = nn.Sequential(nn.Linear(self.emb_size*2, self.hidden_size), nn.ReLU(), nn.Linear(self.hidden_size, self.hidden_size))
self.linear_decide = nn.Sequential(nn.Linear(self.hidden_size*2, self.hidden_size), nn.ReLU(), nn.Linear(self.hidden_size, self.output_size), nn.LogSoftmax(dim=0))
# def get_align(self, )
def forward(self, sentence1, sentence2):
sent1, sent2 = [self.embed(x) for x in [sentence1, sentence2]]
transformed1, transformed2 = [self.linear_transform(x) for x in [sent1, sent2]]
sim1 = torch.mm(transformed1, torch.t(transformed2))
sim2 = torch.mm(transformed2, torch.t(transformed1))
concatenated1 = torch.cat((sent1, get_align(sim1, sent2)), 1)
concatenated2 = torch.cat((sent2, get_align(sim2, sent1)), 1)
paired1, paired2 = [self.linear_pair(x) for x in [concatenated1, concatenated2]]
v1, v2 = [torch.sum(x, 0) for x in [paired1, paired2]]
# print("v1 and v2: \n{}".format(torch.cat((v1,v2), 0)))
pred = self.linear_decide(torch.cat((v1, v2), 0))
# pred = torch.cat((v1, v2), 0)
return(pred)
# -
model = MLP(EMB_SIZE = 50, HIDDEN_SIZE = 100, OUTPUT_SIZE = 3, VOCAB = 7)
model.embed.weight.data.copy_(torch.from_numpy(pretrained_glove))
model.embed.weight.requires_grad = False
test1 = torch.LongTensor([6, 6])
test2 = torch.LongTensor([1, 5])
torch.cat((test1, test2), dim=0)  # 1-D tensors must be concatenated along dim 0
linear_decide2 = nn.Sequential(nn.Linear(200, 200), nn.ReLU(), nn.Linear(200, 3))
score = model(torch.LongTensor([6,6]), torch.LongTensor([1,5]))
ld1 = nn.Linear(3, 200)  # `score` is 3-dimensional, so in_features must be 3
ldr = nn.ReLU()
ld2 = nn.Linear(200, 3)
ld2(ldr(ld1(score)))
score.argmax().item()
sftmax = nn.LogSoftmax(dim=0)
test3 = torch.rand(3)
sftmax(test3)
| Parikh et al., 2016 - A decomposable attention model for Natural Language Inference.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="ktFt6IzKJkSn"
import pandas as pd
# + colab={} colab_type="code" id="_JoUdRaiIwo4"
#Loading data from the Github repository to colab notebook
filename = 'https://raw.githubusercontent.com/PacktWorkshops/The-Data-Science-Workshop/master/Chapter15/Dataset/crx.data'
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="0ZmzTR-CJra-" outputId="9a8e417d-5645-4ed3-c784-3a2a30ea658c"
# Loading the data using pandas
credData = pd.read_csv(filename,sep=",",header = None,na_values = "?")
credData.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 204} colab_type="code" id="rXYA47JRKVz-" outputId="db6236d6-83bd-46db-9c75-01ca341ea2f1"
# Changing the Classes to 1 & 0
credData.loc[credData[15] == '+' , 15] = 1
credData.loc[credData[15] == '-' , 15] = 0
credData.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="R9-NFhigmokr" outputId="2c9c2b5a-b742-4499-e899-24ce4386aa6b"
# Dropping all the rows with na values
newcred = credData.dropna(axis = 0)
newcred.shape
# + colab={"base_uri": "https://localhost:8080/", "height": 51} colab_type="code" id="F3DiCvpm0Qgo" outputId="d434f1d7-77bc-4d69-ff6e-a53a71c9aaff"
# Separating X and y variables
X = newcred.loc[:,0:14]
print(X.shape)
y = newcred.loc[:,15]
print(y.shape)
# + colab={} colab_type="code" id="q93tXRtVzuvS"
from sklearn.model_selection import train_test_split
# Splitting the data into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=123)
# + [markdown] colab_type="text" id="NDhlhPQr1ts7"
# **Pipe line for Dummy creation**
# + colab={} colab_type="code" id="JhAB1VvZDbKR"
# Importing the necessary packages
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, OneHotEncoder
# + colab={} colab_type="code" id="QIkCAPmoDa8d"
# Pipeline for transforming categorical variables
catTransformer = Pipeline(steps=[('onehot', OneHotEncoder(handle_unknown='ignore'))])
# + colab={} colab_type="code" id="bMOThSF82_7W"
# Pipeline for scaling numerical variables
numTransformer = Pipeline(steps=[('scaler', StandardScaler())])
# + colab={"base_uri": "https://localhost:8080/", "height": 289} colab_type="code" id="JZrTj4415EOS" outputId="805c58b4-be4f-40ee-89ae-1464d8c2cb78"
# Printing dtypes for X
X.dtypes
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="ONz7zAvD2_qm" outputId="ccfa99d9-24cb-431e-b240-bb0278fb2cd6"
# Selecting numerical features
numFeatures = X.select_dtypes(include=['int64', 'float64']).columns
numFeatures
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" id="bp76LzPY46eZ" outputId="2b99d059-0d50-431b-ed13-dd6ebbc73fef"
# Selecting Categorical features
catFeatures = X.select_dtypes(include=['object']).columns
catFeatures
# + colab={} colab_type="code" id="Vf2spP0U37Nv"
# Creating the preprocessing engine
from sklearn.compose import ColumnTransformer
preprocessor = ColumnTransformer(
transformers=[
('numeric', numTransformer, numFeatures),
('categoric', catTransformer, catFeatures)])
# + colab={"base_uri": "https://localhost:8080/", "height": 241} colab_type="code" id="TiB_EpWADaqv" outputId="90794de3-f02c-4a51-f178-c61260b610fe"
# Transforming the Training data
Xtran_train = pd.DataFrame(preprocessor.fit_transform(X_train))
print(Xtran_train.shape)
Xtran_train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 241} colab_type="code" id="SYn3MApz4W9v" outputId="934d4cb2-8315-4c23-a286-64211934bedc"
# Transforming Test data
Xtran_test = pd.DataFrame(preprocessor.transform(X_test))
print(Xtran_test.shape)
Xtran_test.head()
# -
| Chapter16/Exercise16.02/Exercise_16_02_Preprocessing_using_ML_pipeline_v1_0.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Principles and Patterns for ML Practitioners
#
# ### S.O.L.I.D (and more) principles applied to an ML problem
#
# ##### By <NAME>, Zühlke Engineering AG
#
# 
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Principles and Practices in Code
#
# #### - Motivation: Typical Python ML code
# #### - SWE's S.O.L.I.D Principles
# #### - Background: Machine Learning with Tensorflow
# #### - Tutorial: Structured Experiments in Python
#
# # Principles and Practices in Collaboration
#
# #### - Explore - Experiment - Build - Infer
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Motivation
#
# [The official Tensorflow MNIST example](mnist_original.py)
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # There's More to Code Than Coding
#
# ## Minimize learning curve for those after you
# ## Code is written once, read and changed multiple times
# ## Dare touch a running system: make it easy-to-change
# ## Reduce efforts for testing
# ## Minimize dependency and reduce complexity
#
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## Exactly because data analytics and machine learning
#
# ## have rather *exploratory traits*
#
# ## practices should better support *code and config changes*
#
# ## *without endangering* the quality of the code.
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # The Anatomy of a Machine Learning Experiment
# 
#
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Principles to the Rescue: S.O.L.I.D
# The [S.O.L.I.D. Principles](http://www.cvc.uab.es/shared/teach/a21291/temes/object_oriented_design/materials_adicionals/principles_and_patterns.pdf)
# are commonly attributed to [<NAME> (Uncle Bob)](https://de.wikipedia.org/wiki/Robert_Cecil_Martin).
#
# ### SRP = Single Responsibility Principle
# ### OCP = Open-Close Principle
# ### LSP = Liskov Substitution Principle
# ### ISP = Interface Segregation Principle
# ### DIP = Dependency Inversion Principle
# #### ...and following those principles leads to patterns
# + [markdown] slideshow={"slide_type": "skip"}
# ---
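# + [markdown] slideshow={"slide_type": "slide"}
# As a concrete illustration of the DIP (a plain-Python sketch, not taken from the tutorial code), the training side can depend on a model *factory* rather than a concrete model class -- the pattern used later when a lambda is passed to `create_model_fn`:

```python
def create_model_fn(model_factory, loss_name):
    # The returned function depends only on the factory abstraction (DIP),
    # so swapping in a different model needs no change here.
    def model_fn(params):
        return model_factory(params), loss_name
    return model_fn

# Two interchangeable "models" -- only the factory argument differs.
linear_fn = create_model_fn(lambda p: ("linear", p), "mse")
conv_fn = create_model_fn(lambda p: ("conv", p), "xent")
print(linear_fn({"lr": 0.1}))  # (('linear', {'lr': 0.1}), 'mse')
```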
# + [markdown] slideshow={"slide_type": "slide"}
# # Background: Tensorflow
#
# ### Tensorflow already sports an extremely helpful design
#
# ### The actual processing is described by a computational graph
#
# ### ```Dataset```s, ```Estimator```s, and ```Tower```s manage the training for you
#
#
# The content here is heavily inspired by the
# [github tensorflow repo](https://github.com/tensorflow/models/tree/master/official/mnist) -
# indeed initially copied, and then significantly refactored to demonstrate how SWE patterns and principles make the code more readable, testable and reusable.
#
# We're using [Zalando Research's Fashion Dataset](https://github.com/zalandoresearch/fashion-mnist)
# in addition to the well-known [Handwritten Digits](http://yann.lecun.com/exdb/mnist/).
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# The pipeline | the neural network
# - | -
#  | 
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Tensorflow Building Blocks
# ##### I am using the most current TF API 1.8.0 with the following building blocks:
#
# - [Tensorflow Dataset API](https://www.tensorflow.org/programmers_guide/datasets)
# - Allows for pre-processing with a monadic API (map, flatmap, etc)
# - Preprocessing may even happen in parallel streaming fashion
#
# - [Estimator API](https://www.tensorflow.org/programmers_guide/estimators)
# - very convenient highlevel API
# - Checkpointing and recovery
# - Tensorboard summaries
# - much more...
#
# - [Multi-GPU Training of contrib.estimator package](https://www.tensorflow.org/api_docs/python/tf/contrib/estimator/)
# - convenient wrapper to distribute training on any number of GPUs on a single machine
# - works by means of synchronous gradient averaging over parallel mini-batches
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ### The ```Dataset``` API
#
# ``` python
# def train_input_fn():
# ds_tr = dataset.training_dataset(hparams.data_dir, DATA_SET)
# ds_tr_tr, _ = split_datasource(ds_tr, 60000, 0.95)
# ds1 = ds_tr_tr.cache().shuffle(buffer_size=57000).\
# repeat(hparams.train_epochs).\
# batch(hparams.batch_size)
# return ds1
#
# def eval_input_fn():
# ds_tr = dataset.training_dataset(hparams.data_dir, DATA_SET)
# _, ds_tr_ev = split_datasource(ds_tr, 60000, 0.95)
# ds2 = ds_tr_ev.batch(hparams.batch_size)
# return ds2
# ```
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ### The ```Estimator``` API
# Create an ```Estimator``` by passing a *model function* to the constructor
#
# ``` python
# mnist_classifier = tf.estimator.Estimator(
# model_fn=model_function,
# model_dir=hparams.model_dir,
# params={
# 'data_format': data_format,
# 'multi_gpu': hparams.multi_gpu
# })
# ```
#
# The model function must return appropriate ```EstimatorSpec```s for 'TRAIN', 'EVAL', or 'TEST'. We create it in its own module using a given ```Model```.
#
# A ```Model``` is the function that actually creates the graph. Two possible implementations can be found in their own modules in the ```models``` package
#
# ``` python
# model_function = create_model_fn(
# lambda params: Model(params),
# tf.train.AdamOptimizer(),
# tf.losses.sparse_softmax_cross_entropy,
# hparams)
# ```
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## Train
# ``` python
# start_time=time.time()
# mnist_classifier.train(input_fn=train_input_fn, hooks=[logging_hook])
# duration=time.time() - start_time
# ```
#
# ## Evaluate
# ``` python
# eval_results = mnist_classifier.evaluate(input_fn=eval_input_fn)
# accuracy = eval_results['accuracy']
# steps = eval_results['global_step']
# duration = int(duration)
# ```
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# -
# # Tutorial
#
# [run_experiment.ipynb](run_experiment.ipynb)
# + [markdown] slideshow={"slide_type": "skip"}
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# # Explore - Experiment - Train - Inference
# 
# + [markdown] slideshow={"slide_type": "slide"}
# # Supplementary material in this tutorial:
#
# - parallel training on two or more GPUs
# - The concept of Estimator and EstimatorSpec
# - Datasets and functional monadic interfaces
# - The concept of a computational graph -> tf.InteractiveSession() to the rescue
# - Neural networks: concepts
# -
| experiments/mnist_sota/presentation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_python3
# language: python
# name: conda_python3
# ---
# # How to use Amazon Forecast
#
# Helps advanced users get started with Amazon Forecast quickly. The demo notebook runs through a typical end-to-end use case for a simple time-series forecasting scenario.
#
# Prerequisites:
# [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/installing.html).
#
# For more information about the APIs, please check the [documentation](https://docs.aws.amazon.com/forecast/latest/dg/what-is-forecast.html)
#
# ## Table Of Contents
# * [Setting up](#setup)
# * [Test Setup - Running first API](#hello)
# * [Forecasting Example with Amazon Forecast](#forecastingExample)
#
# **Read Every Cell FULLY before executing it**
#
# ## Set up Preview SDK<a class="anchor" id="setup"></a>
# Configures your AWS CLI to understand the up-and-coming Amazon Forecast service
# !aws configure add-model --service-model file://../sdk/forecastquery-2018-06-26.normal.json --service-name forecastquery
# !aws configure add-model --service-model file://../sdk/forecast-2018-06-26.normal.json --service-name forecast
# +
# Prerequisites : 1 time install only, remove the comments to execute the lines.
# #!pip install boto3
# #!pip install pandas
# -
import boto3
from time import sleep
import subprocess
# +
session = boto3.Session(region_name='us-west-2') #us-east-1 is also supported
forecast = session.client(service_name='forecast')
forecastquery = session.client(service_name='forecastquery')
# -
# ## Test Setup <a class="anchor" id="hello"></a>
# Let's say hi to Amazon Forecast by calling the simple ListRecipes API. The API returns a list of the global recipes Forecast offers that you could potentially use as part of your forecasting solution.
forecast.list_recipes()
# If this ran successfully, kudos! If there are any errors at this point running `list_recipes`, please contact us at the [AWS support forum](https://forums.aws.amazon.com/forum.jspa?forumID=327)
# ## Forecasting with Amazon Forecast<a class="anchor" id="forecastingExample"></a>
# ### Preparing your Data
# In Amazon Forecast, a dataset is a collection of file(s) which contain data that is relevant for a forecasting task. A dataset must conform to a schema provided by Amazon Forecast.
# For this exercise, we use the individual household electric power consumption dataset. (<NAME>. and <NAME>. (2017). UCI Machine Learning Repository [http://archive.ics.uci.edu/ml]. Irvine, CA: University of California, School of Information and Computer Science.) We aggregate the usage data hourly.
# ### Data Type
# Amazon Forecast can import data from Amazon S3. We first explore the data locally to see the fields.
import pandas as pd
df = pd.read_csv("../data/item-demand-time.csv", dtype = object)
df.head(3)
# Now upload the data to S3. But before doing that, go into your AWS Console, select S3 for the service and create a new bucket inside the `Oregon` or `us-west-2` region. Use the bucket name convention `amazon-forecast-unique-value-data`. The name must be unique; if you get an error, adjust it until it works, then update the `bucketName` cell below.
s3 = session.client('s3')
accountId = boto3.client('sts').get_caller_identity().get('Account')
bucketName = 'amazon-forecast-chrisking-data'# Update the unique-value bit here.
key="elec_data/item-demand-time.csv"
s3.upload_file(Filename="../data/item-demand-time.csv", Bucket=bucketName, Key=key)
bucketName
# +
# One time setup only, uncomment the following command to create the role to provide to Amazon Forecast.
# Save the generated role for all future calls to use for importing or exporting data.
cmd = 'python ../setup_forecast_permissions.py '+bucketName
p = subprocess.Popen(cmd.split(' '), stdout=subprocess.PIPE, stderr=subprocess.PIPE)
# -
roleArn = 'arn:aws:iam::%s:role/amazonforecast'%accountId
# ### CreateDataset
# More details about `Domain` and dataset types can be found in the [documentation](https://docs.aws.amazon.com/forecast/latest/dg/howitworks-domains-ds-types.html). For this example, we are using the [CUSTOM](https://docs.aws.amazon.com/forecast/latest/dg/custom-domain.html) domain with 3 required attributes: `timestamp`, `target_value` and `item_id`. Also update the project name below to include your own name, in lowercase.
DATASET_FREQUENCY = "H"
TIMESTAMP_FORMAT = "yyyy-MM-dd hh:mm:ss"
project = 'workshop_forecastdemo' # Replace this with a unique name here, make sure the entire name is < 30 characters.
datasetName= project+'_ds'
datasetGroupName= project +'_gp'
s3DataPath = "s3://"+bucketName+"/"+key
datasetName
# +
# Specify the schema of your dataset here. Make sure the order of columns matches the raw data files.
schema ={
"Attributes":[
{
"AttributeName":"timestamp",
"AttributeType":"timestamp"
},
{
"AttributeName":"target_value",
"AttributeType":"float"
},
{
"AttributeName":"item_id",
"AttributeType":"string"
}
]
}
response=forecast.create_dataset(
Domain="CUSTOM",
DatasetType='TARGET_TIME_SERIES',
DataFormat='CSV',
DatasetName=datasetName,
DataFrequency=DATASET_FREQUENCY,
TimeStampFormat=TIMESTAMP_FORMAT,
Schema = schema
)
# -
forecast.describe_dataset(DatasetName=datasetName)
forecast.create_dataset_group(DatasetGroupName=datasetGroupName,RoleArn=roleArn,DatasetNames=[datasetName])
# If you have an existing dataset group, you can update it
forecast.describe_dataset_group(DatasetGroupName=datasetGroupName)
# ### Create Data Import Job
# The dataset import job brings your raw data from S3 into the Amazon Forecast system, ready for forecasting.
ds_import_job_response=forecast.create_dataset_import_job(DatasetName=datasetName,Delimiter=',', DatasetGroupName =datasetGroupName ,S3Uri= s3DataPath)
ds_versionId=ds_import_job_response['VersionId']
print(ds_versionId)
# Check the status of the dataset import. When the status changes from **CREATING** to **ACTIVE**, we can continue to the next steps. Depending on the data size, this process can take 5 to 10 minutes to reach **ACTIVE**.
while True:
dataImportStatus = forecast.describe_dataset_import_job(DatasetName=datasetName,VersionId=ds_versionId)['Status']
print(dataImportStatus)
if dataImportStatus != 'ACTIVE' and dataImportStatus != 'FAILED':
sleep(30)
else:
break
forecast.describe_dataset_import_job(DatasetName=datasetName,VersionId=ds_versionId)
# ### Recipe
recipesResponse=forecast.list_recipes()
recipesResponse
# Get details about each recipe.
forecast.describe_recipe(RecipeName='forecast_MQRNN')
# ### Create Solution with custom forecast horizon
# The forecast horizon is how far into the future the forecast should predict. For weekly data, a value of 12 means 12 weeks. Our example uses hourly data and we want to forecast the next day, so we set the horizon to 24.
predictorName= project+'_mqrnn'
forecastHorizon = 24
createPredictorResponse=forecast.create_predictor(RecipeName='forecast_MQRNN',DatasetGroupName= datasetGroupName ,PredictorName=predictorName,
ForecastHorizon = forecastHorizon)
predictorVerionId=createPredictorResponse['VersionId']
forecast.list_predictor_versions(PredictorName=predictorName)
# Check the status of the solution. When the status changes from **CREATING** to **ACTIVE**, we can continue to the next steps. Depending on data size, model selection and hyperparameters, it can take from 10 minutes to more than one hour to become **ACTIVE**.
while True:
predictorStatus = forecast.describe_predictor(PredictorName=predictorName,VersionId=predictorVerionId)['Status']
print(predictorStatus)
if predictorStatus != 'ACTIVE' and predictorStatus != 'FAILED':
sleep(30)
else:
break
# ### Get Error Metrics
forecastquery.get_accuracy_metrics(PredictorName=predictorName)
# ### Deploy Predictor
forecast.deploy_predictor(PredictorName=predictorName)
deployedPredictorsResponse=forecast.list_deployed_predictors()
print(deployedPredictorsResponse)
# Please note that the following cell can also take 10 minutes or more to complete. There is no output here, but that is fine as long as the `*` (running) indicator is shown next to the cell.
while True:
deployedPredictorStatus = forecast.describe_deployed_predictor(PredictorName=predictorName)['Status']
print(deployedPredictorStatus)
if deployedPredictorStatus != 'ACTIVE' and deployedPredictorStatus != 'FAILED':
sleep(30)
else:
break
print(deployedPredictorStatus)
# ### Get Forecast
# When the solution is deployed and forecast results are ready, you can view them.
forecastResponse = forecastquery.get_forecast(
PredictorName=predictorName,
Interval="hour",
Filters={"item_id":"client_12"}
)
print(forecastResponse)
# # Export Forecast
# You can batch-export the forecast to an S3 bucket. This requires a role with S3 put access, but it has already been created.
forecastInfoList= forecast.list_forecasts(PredictorName=predictorName)['ForecastInfoList']
forecastId= forecastInfoList[0]['ForecastId']
outputPath="s3://"+bucketName+"/output"
forecastExportResponse = forecast.create_forecast_export_job(ForecastId=forecastId, OutputPath={"S3Uri": outputPath,"RoleArn":roleArn})
forecastExportJobId = forecastExportResponse['ForecastExportJobId']
while True:
forecastExportStatus = forecast.describe_forecast_export_job(ForecastExportJobId=forecastExportJobId)['Status']
print(forecastExportStatus)
if forecastExportStatus != 'ACTIVE' and forecastExportStatus != 'FAILED':
sleep(30)
else:
break
# Check s3 bucket for results
s3.list_objects(Bucket=bucketName,Prefix="output")
# # Cleanup
#
# While Forecast is in preview there are no charges for using it, but to future-proof this work, below are the instructions to clean up your workspace.
# Delete Deployed Predictor
forecast.delete_deployed_predictor(PredictorName=predictorName)
# Delete the Predictor:
forecast.delete_predictor(PredictorName=predictorName)
# Delete Import
forecast.delete_dataset_import(DatasetName=datasetName)
# Delete Dataset Group
forecast.delete_dataset_group(DatasetGroupName=datasetGroupName)
| notebooks/Getting_started_with_Forecast.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
#
# Tutorial 1: Building your first gradient
# =================================================
# In this example, we will derive a gradient and do some basic inspections to
# determine which gradients may be of interest and what the multidimensional
# organization of the gradients looks like.
#
# We’ll first start by loading some sample data. Note that we’re using
# parcellated data for computational efficiency.
#
#
# +
from brainspace.datasets import load_group_fc, load_parcellation, load_conte69
# First load mean connectivity matrix and Schaefer parcellation
conn_matrix = load_group_fc('schaefer', scale=400)
labeling = load_parcellation('schaefer', scale=400, join=True)
# and load the conte69 surfaces
surf_lh, surf_rh = load_conte69()
# -
# Let’s first look at the parcellation scheme we’re using.
#
#
# +
from brainspace.plotting import plot_hemispheres
plot_hemispheres(surf_lh, surf_rh, array_name=labeling, size=(1200, 200),
cmap='tab20', zoom=1.85)
# -
# and let’s construct our gradients.
#
#
# +
from brainspace.gradient import GradientMaps
# Ask for 10 gradients (default)
gm = GradientMaps(n_components=10, random_state=0)
gm.fit(conn_matrix)
# -
# Note that the default parameters are diffusion embedding approach, 10
# components, and no kernel (use raw data). Once you have your gradients, a
# good first step is to simply inspect what they look like. Let’s have a look
# at the first two gradients.
#
#
# +
import numpy as np
from brainspace.utils.parcellation import map_to_labels
mask = labeling != 0
grad = [None] * 2
for i in range(2):
# map the gradient to the parcels
grad[i] = map_to_labels(gm.gradients_[:, i], labeling, mask=mask, fill=np.nan)
plot_hemispheres(surf_lh, surf_rh, array_name=grad, size=(1200, 400), cmap='viridis_r',
color_bar=True, label_text=['Grad1', 'Grad2'], zoom=1.55)
# -
# But which gradients should you keep for your analysis? In some cases you may
# have an a priori interest in some previously defined set of gradients. When
# you do not have a pre-defined set, you can instead look at the lambdas
# (eigenvalues) of each component in a scree plot. Higher eigenvalues (or lower
# in Laplacian eigenmaps) are more important, so one can choose a cut-off based
# on a scree plot.
#
#
# +
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1, figsize=(5, 4))
ax.scatter(range(gm.lambdas_.size), gm.lambdas_)
ax.set_xlabel('Component Nb')
ax.set_ylabel('Eigenvalue')
plt.show()
# -
# This concludes the first tutorial. In the next tutorial we will have a look
# at how to customize the methods of gradient estimation, as well as gradient
# alignments.
#
#
| docs/python_doc/auto_examples/plot_tutorial1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Signals and power spectral density
# The (windowed) Fourier transform of a signal $A(t)$ is defined as
# $$ A_T(f) = \frac{1}{\sqrt{T}}\int_0^T dt \, A(t) \, e^{-j2 \pi f t}.$$
# It has units of $[A]/\sqrt{\textrm{Hz}}$. This is a useful way to define the Fourier transform of a noise process, because the cumulative amplitude of a random walk grows $\propto \sqrt{T}$.
#
# With this definition, the _power spectral density_ of a signal $A(t)$ is defined as
# $$ S_{A}(f) = \lim_{T \to \infty} \langle A_T^\dagger(f) \, A_T(f) \rangle, $$
# where the average is over many realizations of the fluctuating signal.
#
# The _one-sided power spectral density_ is $W_{A}(f) = S_{A}(f) + S_{A}(-f)$. This is the quantity that is most relevant for signals in the lab. For example, $W_V(f) = 4 k_B T R$ (units of V$^2$/Hz) for thermal voltage noise, and $W_I = 2 eI$ (units of A$^2$/Hz) for current shot noise.
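# As a quick numeric sanity check of the $W_V = 4 k_B T R$ formula (the resistor value and temperature below are illustrative choices, not from any measurement): a 1 kΩ resistor at room temperature gives the classic ~4 nV/$\sqrt{\textrm{Hz}}$ rule of thumb.

```python
k_B = 1.380649e-23   # Boltzmann constant, J/K
T_room = 300.0       # temperature, K
R = 1e3              # resistance, ohm

# one-sided thermal (Johnson) voltage noise density, in V/sqrt(Hz)
sqrt_W_V = (4 * k_B * T_room * R) ** 0.5
print(f"{sqrt_W_V * 1e9:.2f} nV/sqrt(Hz)")  # → 4.07 nV/sqrt(Hz)
```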
#
# The root mean squared value of $A(t)$ can be calculated using both the time domain and the frequency domain, thanks to Parseval's theorem.
# $$ A_\textrm{rms}^2 = \lim_{T \to \infty} \frac{1}{T}\int_{-T/2}^{T/2} dt \, A(t)^2 = \int_{-\infty}^{\infty} df\, S_{A}(f) = \int_0^{\infty} df \, W_{A}(f). $$
#
# For simplicity, we will call $\sqrt{W_A(f)}$ (units of $[A]/\sqrt{\textrm{Hz}}$) the **spectrum** of the signal $A(t)$.
# Here is an example of a calculation for a voltage signal, which is what you are most likely to work with in the lab.
# +
import matplotlib.pyplot as plt
# %matplotlib inline
import numpy as np
t = np.arange(3,12,1e-5) # seconds
V = np.random.normal(0,1,t.shape) # volts
# t,V = np.loadtxt("input_file.txt",unpack=True) # use this if you are importing data from a file
# spectral density calculation
T = t[-1] - t[0]
dt = t[1] - t[0]
sampling_rate = 1/dt
V_T = V/np.sqrt(T)
V_f = np.fft.fft(V_T)*dt
S_V = np.abs(V_f)**2
f = np.fft.fftfreq(len(V_T),dt)
W_V = 2*S_V[f>0] # keep only positive frequencies
f = f[f>0] # keep only positive frequencies
df = f[1]-f[0]
# check that the fft function behaves properly, by verifying Parseval's theorem
V_rms_time = np.sqrt(np.trapz(V**2,x=t,dx=dt)/T)
print(f"Time domain root-mean-squared = {V_rms_time}")
V_rms_freq = np.sqrt(np.trapz(W_V,x=f,dx=df))
print(f"Frequency domain root-mean-squared = {V_rms_freq}")
# plot
fig, ax = plt.subplots()
ax.loglog(f,np.sqrt(W_V))
ax.set_ylabel("voltage spectrum, $\sqrt{W_V}$ [V/$\sqrt{\mathrm{Hz}}$]")
ax.set_xlabel("frequency, $f$ [Hz]")
ax.margins(0,0.1)
plt.show()
# -
# This trace looks noisy since we only used one realization of the noise process. In practice, you need to average multiple measurements. Remember to average the __power spectral density__ $W_V$, not the signal $V(t)$, the Fourier transform $V(f)$, or the spectrum $\sqrt{W_V}$.
# Note the limits on the x-axis, which are due to the properties of discrete fourier transforms:
#
# - The lowest frequency available in the spectrum is $f_\textrm{min} = 1/T$, because of the __[uncertainty principle](https://en.wikipedia.org/wiki/Fourier_uncertainty_principle#Uncertainty_principle)__. If you need lower frequencies, measure for a longer time.
#
# - The highest frequency available in the spectrum is $f_\textrm{max} = \frac{1}{2 \, \Delta t}$, because of the __[sampling theorem](https://en.wikipedia.org/wiki/Nyquist%E2%80%93Shannon_sampling_theorem)__. If you need higher frequencies, measure with a higher sampling rate.
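# Both limits can be read straight off the time array; as a minimal sketch using the same sampling parameters as the example above:

```python
import numpy as np

t = np.arange(3, 12, 1e-5)   # 9 s record sampled at 100 kHz, as above
T = t[-1] - t[0]             # total record length, s
dt = t[1] - t[0]             # sampling interval, s

f_min = 1 / T                # frequency resolution, ~0.11 Hz
f_max = 1 / (2 * dt)         # Nyquist frequency, 50 kHz
print(f_min, f_max)
```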
| spectral_density.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Part 1 - Dimensionality Reduction and Feature Selection
#
#
# Thinking in high dimensions is particularly hard. A dot living on a 1-D line could hardly imagine a 2D world. In the same way, the square below struggles to think of our 3D world. For us, it is very difficult to imagine dimensions above the 3rd. We have proxies for thinking of a 4th dimension (the passing of time, a 3D surface with an extra measure represented by color, etc.), but it becomes really difficult to work above the 4th dimension.
#
# 
#
# Thankfully, when dealing with ML problems, you can go above three features without trying to imagine what they would look like (lucky you!). However, you still need to understand how the number of dimensions might affect your models, either positively or negatively, and how to work with high-dimensional spaces. Don't worry, we will guide you through it.
# +
import string
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pylab import barh,plot,yticks,show,grid,xlabel,figure,cla,close
from nltk.tokenize import WordPunctTokenizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn import svm
from sklearn.feature_selection import SelectFromModel
from sklearn.feature_selection import SelectKBest, chi2
# -
# ## 1. The curse of dimensionality
#
# So far you have learned how to handle text data by transforming it into a vectorized feature space. Namely, we mostly used preprocessing tricks and some sort of count over words (remember the CountVectorizer and TF-IDF) to generate our feature space. However, as you might have realized, the number of features for these problems is huge, in particular if you want to cover the whole language. In the limit, your feature space will cover the entire vocabulary!
#
#
# The effect of high-dimensional features when modelling these problems is called the **curse of dimensionality**. You can probably already see some of the effects of this curse, right?
#
# * To start with, more features will obviously mean a longer training process
# * On the other hand, higher dimensionality can actually hurt the classifier's performance
#
#
#
# #### But I thought "more features" meant "more accuracy"...
#
# This is not always true. When you have high dimensionality problems, the use of all the features might actually hurt your model. The model becomes more prone to overfitting and might have worse accuracy in points outside your training data.
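# One concrete way to see the curse at work (a toy sketch of our own, not part of this BLU's dataset) is distance concentration: with a fixed number of random points, the contrast between the nearest and the farthest neighbour of a query point shrinks as the dimension grows, which degrades any method that relies on distances or local density.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(n_points, n_dims):
    """Relative spread (max - min) / min of Euclidean distances from a
    random query point to n_points uniform points in the unit hypercube."""
    X = rng.random((n_points, n_dims))
    query = rng.random(n_dims)
    d = np.linalg.norm(X - query, axis=1)
    return (d.max() - d.min()) / d.min()

for dims in (2, 10, 100, 1000):
    print(dims, round(distance_contrast(500, dims), 2))
# the contrast shrinks steadily: in high dimensions all points look almost equally far away
```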
#
# ## 2. Basic Feature Selection
#
# One of the ways you might think of to reduce your feature dimensionality is by performing feature selection. We are going to walk you through some methods with an actual example. Let's start by loading some data: we are going to use the Twitter dataset of Republican and Democrat tweets.
# ### 2.1 - Get your dataset
#
# Start by importing the dataset.
df = pd.read_csv('../datasets/twitter_rep_dem_data_small.csv')
# ### 2.2 - Get to know our problem
#
# We'll first learn our categories and see a few examples of how our training data looks like.
print('Categories:')
print(', '.join(set(df.Party)))
df.head(10)
# You'll notice that our tweets are just text with some particularities. For example, it is common to have Twitter handles in the text, marked by the "@" character. Our data also has three columns, but we are going to ignore the *Handle* column for now and just focus on classifying *Tweets* with *Party* labels.
# ### 2.3 - Feature Extraction
#
# Our data is simply raw text, each element a document to be classified, with a corresponding label, which is the Party of the tweet. Pretty simple, right?
#
# Since you are a great student, you went thoroughly through BLU07 and you already know how to prepare and handle text and extract some simple features. So let's process our data and use TfidfVectorizer with a range of 1-2 ngrams to get us some simple features.
#
# First let's apply simple tokenization and remove punctuation. To avoid overfitting to twitter related information, like the handles tagged in the messages, let's remove those, so our focus is only on the language.
# +
handle_remotion = lambda doc: re.subn(r'@\w+','', doc.lower())[0]
df['Tweet'] = df['Tweet'].map(handle_remotion)
simple_tokenizer = lambda doc: " ".join(WordPunctTokenizer().tokenize(doc))
df['Tweet'] = df['Tweet'].map(simple_tokenizer)
df.head(10)
# -
# Now let's split our data and apply some vectorization. Let's pick a random seed so the results are replicable.
seed = 42
# <img src="../media/random.jpg" width="400">
# +
train_data, test_data = train_test_split(df, test_size=0.3, random_state=seed)
print('Training examples: {}'.format(train_data.size))
print('Test examples: {}\n'.format(test_data.size))
vectorizer = TfidfVectorizer(ngram_range=(1,2))
# %timeit vectorizer.fit(train_data.Tweet)
X_train = vectorizer.transform(train_data.Tweet)
X_test = vectorizer.transform(test_data.Tweet)
# -
# ### 2.4 - Getting our baseline
#
# Let's get our baseline accuracy and measure the time it takes to fit a Naive Bayes Model, a model you should be familiar with and which in NLP comes hand in hand with the Bag Of Words representation (if you don't get the joke below, eventually you should go read about naive bayes).
#
# <img src="../media/frequentists_vs_bayesians_2x.png" width="400">
#
#
# +
clf = MultinomialNB()
# %timeit clf.fit(X_train, train_data.Party)
pred = clf.predict(X_test)
print('Accuracy: {}'.format(accuracy_score(pred, test_data.Party)))
# -
# ### 2.5 - Feature selection
#
# Now that we have our baseline, let's start by looking into the number of features used on our classifier:
X_train.shape
# So far so good, and our classifier even trained pretty fast, but a 140K-dimensional space is obviously very difficult to interpret (think back to the fact that even 4D is difficult for us to grasp). Let's instead try to extract our K most important words. One way that you might think to do this is actually to just get the features corresponding to the most frequent terms. In fact, our TfidfVectorizer already has an option for that. Let's see the impact on our training speed and accuracy.
for k in [10, 100, 1000, 5000, 10000, 50000, 100000]:
print('Using {} features'.format(k))
print('----'.format(k))
vectorizer_truncated = TfidfVectorizer(ngram_range=(1,2), max_features=k)
vectorizer_truncated.fit(train_data.Tweet)
X_train_truncated = vectorizer_truncated.transform(train_data.Tweet)
X_test_truncated = vectorizer_truncated.transform(test_data.Tweet)
clf = MultinomialNB()
# %timeit clf.fit(X_train_truncated, train_data.Party)
pred = clf.predict(X_test_truncated)
print('Accuracy: {}\n'.format(accuracy_score(pred, test_data.Party)))
# Ok, so no amazing effects. Let's look into the actual top K-features to see if they make sense.
K=10
vectorizer_truncated = TfidfVectorizer(ngram_range=(1,2), max_features=K)
vectorizer_truncated.fit(train_data.Tweet)
feature_names = vectorizer_truncated.get_feature_names()
for f in feature_names:
print(f)
# As you can see, the top 10 features are basically meaningless when thinking of our classes. In the next cell you will see that the counts of these words are balanced between classes.
for feature in feature_names:
print('Documents that contains the word %s' % feature)
print('----')
docs = train_data.Tweet.str.lower().str.contains(feature)
print(str(train_data.Party[docs].value_counts()) + '\n\n')
# Tip: Notice that most of these words are normally considered stopwords. Try to exclude stopwords from the TfidfVectorizer and see what new top features you obtain. You will probably see that there will be some common words and some "political discourse" words, but not really democrat/republican specific.
#
# But let's move on to more meaningful approaches.
# ## 3. Feature Selection through statistical analysis
#
# Basic feature selection methods might actually work sometimes, in particular if you pick a reasonable heuristic to decide which features to choose. In our case, obviously, just picking the highest counts is not that good, since it does not provide any useful information about the labels. But you could imagine, for example, picking as features words that appear only (or almost only) in one of our classes.
#
# Although this might seem a good idea, depending on your problem, you would probably not want to spend that much time thinking about heuristics, implementing them and comparing them. This is where statistical tests are useful. You don't have to reason about the features you are using; these tests use your data to provide insights about your features.
#
# ##### Chi-squared test
#
# The chi-squared test is one of these tests. The chi-squared formula measures how much the expected and observed counts of variables/distributions deviate from each other. It can be used to test for independence between two variables, as defined by the equation below, where $O_{x_1x_2}$ is the observed count of the conjunction of the variables and $E_{x_1x_2}$ the corresponding expected value, this is, the expected value given our hypothesis *$H_0$: the variables are independent*.
#
# $$\chi^2 = \sum{\frac{(O_{x_1x_2} - E_{x_1x_2})^2}{E_{x_1x_2}}}$$
#
# For feature selection we want to test the independence of our features from the class labels. In our particular case, we define $x_1=t$ as our term or word and $x_2=c$ as our class label. A small chi-squared value means that the term is close to independent of the class, and a large value means that it is strongly dependent on the class.
#
# Knowing the details of the chi-squared test can be useful for you, but in the context of this BLU it is not our primary goal, and there are more useful methods for text features that you will learn about in the following notebooks. However, at the end of this BLU we provide a more detailed explanation of this test, with some examples, if you wish to understand it better (see Annex A).
#
# We will lean on the previous example and show you how chi-squared would help us select more meaningful features.
# ## 3.1 - Setup problem
#
# Like before, let's fetch our data, extract its features, run a baseline, and move from there.
# +
stat_df = pd.read_csv('../datasets/twitter_rep_dem_data_small.csv')
stat_df['Tweet'] = stat_df['Tweet'].map(handle_remotion)
stat_df['Tweet'] = stat_df['Tweet'].map(simple_tokenizer)
stat_train_data, stat_test_data = train_test_split(stat_df, test_size=0.3, random_state=seed)  # split the freshly preprocessed stat_df, not the original df
stat_vectorizer = TfidfVectorizer(ngram_range=(1,2))
# %timeit stat_vectorizer.fit(stat_train_data.Tweet)
stat_X_train = stat_vectorizer.transform(stat_train_data.Tweet)
stat_X_test = stat_vectorizer.transform(stat_test_data.Tweet)
stat_clf = MultinomialNB()
# %timeit stat_clf.fit(stat_X_train, stat_train_data.Party)
stat_pred = stat_clf.predict(stat_X_test)
print('Accuracy: {}'.format(accuracy_score(stat_pred, stat_test_data.Party)))
# -
# ## 3.2 - Feature Selection from chi-squared
#
# We will now use the chi2 and obtain some chi-squared values for our features.
chi_values, p_values = chi2(stat_X_train, stat_train_data.Party)
# We can plot the most dependent features from the chi-squared values.
# +
feature_names = stat_vectorizer.get_feature_names()
cla() # Clear axis
close() # Close a figure window
figure(figsize=(12,10))
zipped_chi_squared = zip(feature_names, chi_values)
sorted_chi_values = sorted(zipped_chi_squared, key=lambda x:x[1])
top_chi_values = list(zip(*sorted_chi_values[-30:]))
x = range(len(top_chi_values[1]))
labels = top_chi_values[0]
barh(x, top_chi_values[1], align='center', alpha=.2, color='g')
yticks(x, labels)
xlabel('$\chi^2$')
show()
# -
# Actually, scikit-learn already provides a function to directly select the K-best features for our model, so we are going to use that to extract our most important features. We can confirm that the top features selected match the ones with the highest chi-values
# +
ch2 = SelectKBest(chi2, k=10)
ch2.fit(stat_X_train, stat_train_data.Party)
stat_X_train_chi = ch2.transform(stat_X_train)
most_important_features = [feature_names[i] for i in ch2.get_support(indices=True)]
for f in most_important_features:
print(f)
# -
# Now we're getting somewhere; these new features are starting to make sense. We can look into their distribution over our documents to get a sense of the relation they share with the labels of the dataset.
for feature in most_important_features:
print('Documents that contains the word %s' % feature)
print('----')
docs = stat_train_data.Tweet.str.lower().str.contains(feature)
print(str(stat_train_data.Party[docs].value_counts()) + '\n\n')
# As you see, some words are much more common in Republican tweets, and others in Democrat tweets. These features are thus much more interpretable and might have a better impact in training.
# +
for k in [10, 100, 1000, 5000, 10000, 50000, 100000, 'all']:
print('Using {} features'.format(k))
print('----'.format(k))
ch2_train = SelectKBest(chi2, k=k)
ch2_train.fit(stat_X_train, stat_train_data.Party)
X_train_chi = ch2_train.transform(stat_X_train)
X_test_chi = ch2_train.transform(stat_X_test)
clf = MultinomialNB()
# %timeit clf.fit(X_train_chi, stat_train_data.Party)
pred = clf.predict(X_test_chi)
print('Accuracy: {}\n'.format(accuracy_score(pred, stat_test_data.Party)))
# -
# Using 100000 features we get only slightly better results. However, looking at the features themselves, we can clearly see this is a better feature selection method than the previous one, but we could still use a bit more gain and speed improvements.
#
# One thing that feature selection does not take into consideration is feature interaction, which can limit the gains in performance a bit. In the remainder of this BLU you will learn about more elaborate methods to perform dimensionality reduction, in particular some very recent methods that are the standard in text-related tasks.
# ## 4. Final remarks
#
# After reading this notebook you should understand:
#
# - What is the curse of dimensionality
# - How can you perform simple feature selection by reasoning about your problem
# - How to apply statistical methods for feature selection by finding dependencies between features and labels
#
# Keep in mind that predicting in the real world is much less theoretical. The performance of these methods will depend a lot on your problem, the size of your dataset, your model choice, and your preprocessing. You can use feature selection to improve speed, avoid overfitting, or even just to interpret your features and how they interact with your classes.
#
# <br>
#
# -----
#
# **Suggestion**: try to vary the following options/parameters and analyze their impact:
#
# - Experiment a bit more with your preprocessing and see its impact
# - Try other feature extraction options such as the simple CountVectorizer
# - Use smaller slices of the dataset to see how the dataset size impacts both baseline and feature selected results
# - Experiment with other classifiers and the impact of the dimensionality reduction on these
# - Try to use the model to evaluate other text data, search for republican/democrat posts/news/blogs and see how well your model classifies each, for example political speeches.
#
# -----
#
# <br>
#
# Although we focused on simple heuristics and on the chi-squared method, there are other feature selection methods that you might find useful. Some examples are:
#
# - Feature selection through variance (sklearn.feature_selection.VarianceThreshold)
# - Feature selection through mutual information
# - Recursive feature elimination
# - Tree-based feature selection
#
# **And remember, these methods just tell us that there is a relation between labels and features, but not the nature of that relation.** Now go and apply these methods!
#
# 
#
#
#
# -------------
#
# ## Annex A. Details on chi-squared
#
#
# Let's do a really quick example for you to understand how this works. Let's say we are modelling how characteristics of Star Trek characters that appear in an episode are related to their death, and among our features we have one in particular called "has red t-shirt", which can take only two categorical values: Yes/No.
#
# Let's build a table representing this scenario:
#
# | Has red t-shirt | Dies | Does not die | Total |
# |----------------------|--------|--------------|-------|
# | Yes | 63 | 9 | 72 |
# | No | 13 | 40 | 53 |
# | total | 76 | 49 | 125 |
#
# This is what we call a contingency table, and it contains our observed values. Testing for independence of the variables, each expected value is computed from the probabilities as $N*P(x_1x_2) = N*P(x_1)P(x_2)$, and we get:
#
# | Has red t-shirt | Dies | Does not die |
# |----------------------|--------|--------------|
# | Yes | 43.78 ($125 * \frac{76}{125}\frac{72}{125}$) | 28.22 ($125 * \frac{49}{125}\frac{72}{125}$) |
# | No | 32.22 ($125 * \frac{76}{125}\frac{53}{125}$) | 20.78 ($125 * \frac{49}{125}\frac{53}{125}$) |
#
# So our chi-squared value is 50.79, which corresponds to a p-value < .00001, which means we are dealing with dependent variables.
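# You can verify the 50.79 directly; as a pure-Python sketch (scipy's `chi2_contingency` with `correction=False` should report the same Pearson statistic), we compute the expected counts and sum the formula term by term:

```python
# observed counts: rows = has red t-shirt (Yes/No), cols = (Dies, Does not die)
obs = [[63, 9], [13, 40]]

row_totals = [sum(row) for row in obs]
col_totals = [sum(col) for col in zip(*obs)]
n = sum(row_totals)

chi2_stat = 0.0
for i, row in enumerate(obs):
    for j, observed in enumerate(row):
        expected = row_totals[i] * col_totals[j] / n  # N * P(x1) * P(x2)
        chi2_stat += (observed - expected) ** 2 / expected

print(round(chi2_stat, 2))  # → 50.79
```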
#
# <img src="../media/dig3graves.jpeg" width="500">
#
# ### Chi-squared in BoW context
#
# But how about our context? Our variables are not categorical, so how do we compute this? Actually the chi-squared value can be extended to frequencies, building a contingency table from the feature values and class label. Starting from a table of word frequencies:
#
#
# | | $C_0$ | ... | $C_j $ | ... | Total |
# |----------------------|---------------------|-----|-------------------------|-----|----------------------------|
# | Word 0 | $$C_{t_0, c_0}$$ | ... | $$C_{t_0, c_1}$$ | ... | $$ \sum_j{C_{t_0, c_j}} $$ |
# | Word 1 | $$C_{t_1, c_0}$$ | ... | $$C_{t_1, c_1}$$ | ... | $$ \sum_j{C_{t_1, c_j}} $$ |
# | ... | ... | ... | ... | ... | ... |
# | Word i | $$C_{t_i, c_0}$$ | ... | $$C_{t_i, c_1}$$ | ... | $$ \sum_j{C_{t_i, c_j}} $$ |
# | ... | ... | ... | ... | ... | ... |
# | Total | $$ \sum_i{C_{t_i, c_0}} $$ | ... |$$\sum_i{C_{t_i, c_j}} $$| ... | $$ N = \sum_{i,j}{C_{t_i, c_j}} $$|
#
#
# We can take each feature ($t=t_i$) and class ($c=c_j$) and assume a table of the form:
#
# | | Class j | Not Class j | Total |
# |----------------------|-----------|-------------|--------|
# | Word i | $C_{tc}$ | $C_{tx}$ | $C_{tc}$ + $C_{tx}$ |
# | Not Word i | $C_{xc}$ | $C_{xx}$ | $C_{xc}$ + $C_{xx}$ |
# | Total | $C_{tc}$ + $C_{xc}$ | $C_{tx}$ + $C_{xx}$ | N = $C_{tc}$ + $C_{xc}$ + $C_{tx}$ + $C_{xx}$ |
#
# Where:
#
# - $C_{tc}$ : counts of co-occurrences of the term and class
# - $C_{tx}$ : counts of occurrences of the term but not the class
# - $C_{xc}$ : counts of occurrences of the class but not the term
# - $C_{xx}$ : counts of occurrences outside the class and without the term
#
# <br>
# Notice that you can compute your negative word counts ("Not Word i") by using the totals:
#
# $$C_{xc} = \sum_i{C_{t_i, c}} - C_{tc} \quad\quad C_{tx} = \sum_j{C_{t, c_j}} - C_{tc} \quad\quad C_{xx} = N - C_{tc} - C_{tx} - C_{xc} $$
#
# <br>
# The expression can be unrolled to the following, for each term $t$ and class $c$:
#
# $$\chi^2(t, c) = \frac{N(C_{tc}C_{xx}-C_{tx}C_{xc})^2}{(C_{tc}+C_{xc})(C_{tx}+C_{xx})(C_{tc}+C_{tx})(C_{xc}+C_{xx})}$$
#
# <br>
#
# -----
#
# **Suggestion**: If, like me, you can't move forward without understanding the origin of these expressions, try to get from the initial expression to this unrolled form. If you get stuck on the math, go to the notebook **Annex A - Chi-squared math** and follow the derivation.
#
# -----
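# As a quick sanity check on the unrolled expression, here is a minimal sketch that computes $\chi^2$ for a single (term, class) pair straight from a 2x2 contingency table. The counts below are made up for illustration, not taken from our data.

```python
def chi2_2x2(c_tc, c_tx, c_xc, c_xx):
    # Unrolled chi-squared for one 2x2 contingency table (same formula as above)
    n = c_tc + c_tx + c_xc + c_xx
    num = n * (c_tc * c_xx - c_tx * c_xc) ** 2
    den = (c_tc + c_xc) * (c_tx + c_xx) * (c_tc + c_tx) * (c_xc + c_xx)
    return float(num) / den

# Hypothetical counts: the term appears 30 times inside the class, 10 outside;
# documents without the term split 20/40 between the class and the rest.
print(chi2_2x2(30, 10, 20, 40))  # ~16.67, far from independence
print(chi2_2x2(10, 10, 10, 10))  # 0.0, perfectly independent
```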
#
#
# <br>
# And for all our terms and classes we obtain the chi-squared values indicating correlation:
#
# | | Republican ($C_0$) | Democrat ($C_1$) |
# |----------------------|-----------|-------------|
# | Word 0 | $$\chi^2(t_0, c_0)$$ | $$\chi^2(t_0, c_1)$$ |
# | Word 1 | $$\chi^2(t_1, c_0)$$ | $$\chi^2(t_1, c_1)$$ |
# | ... | ... | ... |
# | Word i | $$\chi^2(t_i, c_0)$$ | $$\chi^2(t_i, c_1)$$ |
# | ... | ... | ... |
#
#
#
# ### Chi-squared and TF-IDF
#
# You've seen how to apply chi-squared to categorical values and now to frequencies, more specifically word frequencies. But we were applying it to TF-IDF values, which seem to violate the chi-squared assumptions. The reason we can apply chi-squared to TF-IDF values is that these are just weighted/scaled frequencies, so the probabilities and totals should still add up.
#
#
# ### Implementation, finally!
#
# Let's apply this and write a function that receives a matrix with term counts for each label.
# +
def chi_squared(counts):
    """
    Non-vectorized version of the chi-squared function - the idea is that you see the relation with the formula above,
    but you should never use such an inefficient version when actually performing a chi-squared analysis
    """
    print("Applying chi-squared to {} features and {} classes".format(counts.shape[0], counts.shape[1]))
    chi_values = np.zeros(counts.shape)
    n = counts.sum()
    for i in range(counts.shape[0]):
        for j in range(counts.shape[1]):
            c_tc = counts[i, j]
            c_tx = counts.sum(axis=1)[i, 0] - c_tc
            c_xc = counts.sum(axis=0)[0, j] - c_tc
            c_xx = n - c_tc - c_tx - c_xc
            chi_values[i, j] = n * (((c_tc * c_xx) - (c_tx * c_xc)) ** 2) / ((c_tc + c_xc) * (c_tx + c_xx) * (c_tc + c_tx) * (c_xc + c_xx))
    return chi_values
def chi_squared_vect(counts):
    """
    Vectorized version of the chi-squared function - this is still a non-optimized version, but it should run faster
    than the previous function
    """
    print("Applying chi-squared to {} features and {} classes".format(counts.shape[0], counts.shape[1]))
    n = counts.sum()
    c_tc = counts
    c_tx = counts.sum(axis=1) - counts
    c_xc = counts.sum(axis=0) - counts
    c_xx = n * np.ones(counts.shape) - counts - c_tx - c_xc
    num = n * np.square(np.multiply(c_tc, c_xx) - np.multiply(c_tx, c_xc))
    den = np.multiply(np.multiply(np.multiply(c_tc + c_xc, c_tx + c_xx), c_tc + c_tx), c_xc + c_xx)
    chi_values = np.divide(num, den)
    return chi_values
# -
# ### Applying to our previous example
#
# Now we'll apply our functions to our data. Since our implementation is not optimized (in particular the non-vectorized version, which is left commented out below), this may take a moment. The features with the highest chi-squared values are the most important ones - the ones least independent of the class.
# +
small_vectorizer = TfidfVectorizer(ngram_range=(1,2))
small_vectorizer.fit(train_data.Tweet)
small_X_train = small_vectorizer.transform(train_data.Tweet)
small_y_train = train_data.Party
idx_rep = np.where(small_y_train=='Republican')
idx_dem = np.where(small_y_train=='Democrat')
counts_rep = small_X_train[idx_rep[0], :].sum(axis=0)
counts_dem = small_X_train[idx_dem[0], :].sum(axis=0)
counts = np.concatenate((counts_rep, counts_dem))
# chi_values = chi_squared(counts.transpose())
chi_values_vect = chi_squared_vect(counts.transpose())
feature_names = small_vectorizer.get_feature_names()
best_features = chi_values_vect.argsort(axis=0).tolist()
print("Most important features:\n")
for idx in sorted(best_features[-10:]):
    print(u"{}, value: {}".format(feature_names[idx[0]], chi_values_vect[idx[0], 0]))
# -
# Awesome, we got the same results as scikit-learn! Time to move on to the next notebook!
| Natural-Language-Processing/Modelling/BLU08 - Learning Notebook - Part 1 of 3 - Dimensionality Reduction and Feature Selection.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Data Science & Business Analytics Internship at The Sparks Foundation (Feb 2021)
# ## __Author: <NAME>__
# ## Task 1 : Prediction using Supervised Machine Learning
# #### _Objective: Predict the percentage score of a student based on the number of study hours using the Linear Regression supervised machine learning algorithm._
# ### 1. Importing the dataset and all required libraries
# Importing all libraries required in this notebook
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
# Reading data from remote link
url = "http://bit.ly/w-data"
dataset = pd.read_csv(url)
print("Data imported successfully")
# ### 2. Data Preprocessing
#printing head(first few rows) of the dataset
dataset.head()
#Now print the last 5 records
dataset.tail()
#Check if there are any null values in the dataset
dataset.isnull().sum()
# + **There are no null values in the dataset, so we can now visualize our data.**
#the datatype of the columns
dataset.dtypes
#here we use the describe() method so that we can see the percentiles, mean, std, max and count of the given dataset
dataset.describe()
# ### 3. Data Visualization
# + **Now we plot the dataset to check whether the variables have some relation or not.**
dataset.plot(x='Hours', y='Scores', style='o')
plt.title('Hours vs Score')
plt.xlabel('Hours Studied')
plt.ylabel('Score Percentage')
plt.show()
# + **Looking at the graph above, we can see that there is a linear relation between the number of hours studied and the percentage score obtained.**
# ### 4. Preparing the dataset
# + **Here we will divide the dataset into attributes (input) and labels (output). Then we will split the dataset into two parts - Testing data and Training data**
X = dataset.iloc[:,:-1].values
Y = dataset.iloc[:,1].values
print(X)
print(Y)
# ### 5. Splitting dataset into training and test set
#splitting the data into training and testing sets in the ratio 80% (train data) to 20% (test data)
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test = train_test_split(X,Y,random_state = 0,test_size=0.2)
# ### 6. Training the Algorithm
print("X train.shape =", X_train.shape)
print("Y train.shape =", Y_train.shape)
print("X test.shape =", X_test.shape)
print("Y test.shape =", Y_test.shape)
from sklearn.linear_model import LinearRegression
model=LinearRegression()
# Here we use fit function to tell the algorithm on which data to work
model.fit(X_train, Y_train)
#plotting the REGRESSION LINE (Y = MX + C)
Y0 = model.intercept_ + model.coef_*X_train
# Visualising the Training dataset
plt.scatter(X_train,Y_train,color='blue',marker='s')
plt.plot(X_train,Y0,color='black')
plt.xlabel("Hours",fontsize=10)
plt.ylabel("Scores",fontsize=10)
plt.title("Regression line(Training set)",fontsize=10)
plt.show()
# ### 7. Making Predictions
#predicting the Scores for test data
Y_predicted=model.predict(X_test)
print(Y_predicted)
#now print the Y_test (Actual Score)
Y_test
#plotting the line on test data
plt.scatter(X_test,Y_test,color='blue',marker='s')
plt.plot(X_test,Y_predicted,color='black')
plt.xlabel("Hours",fontsize=10)
plt.ylabel("Scores",fontsize=10)
plt.title("Regression line(Test set)",fontsize=10)
plt.show()
# ### 8. Comparing the Predicted Score with the Actual Score
df_compare = pd.DataFrame({'Actual score': Y_test, 'Predicted score': Y_predicted})
df_compare
# ### 9. Evaluating the Model
# Finding the accuracy of the model
from sklearn import metrics
print('Mean Absolute Error:', metrics.mean_absolute_error(Y_test, Y_predicted))
# + **The small value of the mean absolute error indicates that the model's predictions are close to the actual values, so the chance of a badly wrong forecast is low.**
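# To make the metric concrete: the mean absolute error is just the average of the absolute differences between actual and predicted values. A minimal sketch with made-up numbers (illustration only, not the actual test split):

```python
import numpy as np

# Made-up actual and predicted scores (hypothetical, not the real test split)
y_true = np.array([20.0, 27.0, 69.0, 30.0, 62.0])
y_hat = np.array([16.9, 33.7, 75.4, 26.8, 60.5])

# MAE = mean of |actual - predicted|
mae = np.mean(np.abs(y_true - y_hat))
print(mae)  # ~4.18
```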
# ## **Question : What will be predicted score if a student studies for 9.25 hrs/ day?**
#Testing the data with the model
hours = 9.25
predict_score = model.predict([[hours]])
print("The predicted score if a student studies for",hours, "hrs/day :", predict_score[0])
# ## **According to the regression model if a student studies for 9.25 hours a day then he/she is likely to score 93.69 marks.**
# # Thank You
| Task-1_Supervised ML.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy.optimize import minimize
from sklearn.model_selection import train_test_split
# %matplotlib inline
data = pd.read_csv('C:\\Users\\Owner\\Napa\\results_model_data.csv')
# -
def result_assign(win_margin):
    # This function converts the win_margin column into a binary win/loss result
    if win_margin > 0:
        return 1
    else:
        return 0
def sigmoid(z):
    # Computes the sigmoid function for logistic regression
    return 1 / (1 + np.exp(-z))

def sigmoid_gradient(z):
    # Computes the gradient of the sigmoid function, to be used in backpropagation
    return np.multiply(sigmoid(z), (1 - sigmoid(z)))
def forward_propagate(X, theta1, theta2):
    # Calculate the hypothesis using input values of theta for each stage of the network
    m = X.shape[0]
    # Insert bias unit for input layer
    a1 = np.insert(X, 0, values=np.ones(m), axis=1)
    z2 = a1 * theta1.T
    # Insert bias unit for hidden layer
    a2 = np.insert(sigmoid(z2), 0, values=np.ones(m), axis=1)
    z3 = a2 * theta2.T
    h = sigmoid(z3)
    return a1, z2, a2, z3, h
def backward_prop(params, input_layer_size, hidden_layer_size, num_labels, X, y):
    # Reshape the parameter array back into the respective matrices
    theta1 = np.matrix(np.reshape(params[:hidden_layer_size * (input_layer_size + 1)], (hidden_layer_size, (input_layer_size + 1))))
    theta2 = np.matrix(np.reshape(params[hidden_layer_size * (input_layer_size + 1):], (num_labels, (hidden_layer_size + 1))))
    # Number of training examples (use the local X, not the global dataset size)
    m = X.shape[0]
    # Forward propagate through the network
    a1, z2, a2, z3, h = forward_propagate(X, theta1, theta2)
    # Initialize cost and gradient accumulators
    J = 0
    delta1 = np.zeros(theta1.shape)
    delta2 = np.zeros(theta2.shape)
    # Compute the cross-entropy cost
    first = np.multiply(-y, np.log(h))
    second = np.multiply((1 - y), np.log(1 - h))
    J = np.sum(first - second) / m
    # Backpropagate to get gradients
    d3 = h - y
    d2 = np.multiply((d3 * theta2[:, 1:hidden_layer_size + 1]), sigmoid_gradient(z2))
    delta1 = (np.matmul(a1.T, d2)).T / m
    delta2 = (np.matmul(d3.T, a2)) / m
    # Reshape gradient matrices into a single array
    grad = np.concatenate((np.ravel(delta1), np.ravel(delta2)))
    return J, grad
# Add a new binary column to the data, which has value 1 where the result is positive, and 0 if negative
data['Result'] = data.apply(lambda x: result_assign(x['Win Margin']),axis=1)
# Select only quantitive paramaters to be used in the model
model_data = data[['Race Margin', 'Win % Margin', 'Skill Margin', 'Game Margin', 'AvgPPM Margin', 'Result']]
model_data.head()
# +
# Set X (training data) and y (target variable)
cols = model_data.shape[1]
X = model_data.iloc[:,0:cols-1]
y = model_data.iloc[:,cols-1:cols]
y0 = y
# Split the data into training and validation sets with 80/20 ratio
train_X, val_X, train_y, val_y = train_test_split(X, y, train_size=0.8, test_size=0.2, random_state = 0)
# Convert to numpy matrices
m = X.shape[0]
X_train = np.matrix(train_X)
y_train = np.matrix(train_y)
X_val = np.matrix(val_X)
y_val = np.matrix(val_y)
# Define architecture of neural network
input_layer_size = cols-1; # Each match has 5 features
hidden_layer_size = 50; # 50 hidden units
num_labels = 1; # Win/Loss parameter
# Randomly initialize the input parameter array, with values normalized by length
epsilon_1 = np.sqrt(6./(hidden_layer_size + input_layer_size))
epsilon_2 = np.sqrt(6./(hidden_layer_size + num_labels))
param1 = np.random.random(size=hidden_layer_size * (input_layer_size + 1))*2*epsilon_1 - epsilon_1
param2 = np.random.random(size=num_labels * (hidden_layer_size + 1))*2*epsilon_2 - epsilon_2
params = np.concatenate((param1,param2))
# +
# Minimize the backpropagation cost function
fmin = minimize(fun=backward_prop, x0=params, args=(input_layer_size, hidden_layer_size, num_labels, X_train, y_train),
method='TNC', jac=True, options={'maxiter': 250})
# Retrieve the corresponding theta parameters and reshape to matrices
theta1 = np.matrix(np.reshape(fmin.x[:hidden_layer_size * (input_layer_size + 1)], (hidden_layer_size, (input_layer_size + 1))))
theta2 = np.matrix(np.reshape(fmin.x[hidden_layer_size * (input_layer_size + 1):], (num_labels, (hidden_layer_size + 1))))
# Calculate predictions based on the model
a1_t, z2_t, a2_t, z3_t, h_t = forward_propagate(X_train, theta1, theta2)
a1_v, z2_v, a2_v, z3_v, h_v = forward_propagate(X_val, theta1, theta2)
y_pred_train = [1 if i>=0.5 else 0 for i in h_t]
y_pred_val = [1 if i>=0.5 else 0 for i in h_v]
# Compare predictions to actual data
correct_train = [1 if a == b else 0 for (a, b) in zip(y_pred_train, y_train)]
correct_val = [1 if a == b else 0 for (a, b) in zip(y_pred_val, y_val)]
accuracy_train = (sum(map(int, correct_train)) / float(len(correct_train)))
accuracy_val = (sum(map(int, correct_val)) / float(len(correct_val)))
print 'Train accuracy = {0}%'.format(accuracy_train * 100)
print 'Validation accuracy = {0}%'.format(accuracy_val * 100)
| models/old_models/neural_network_model.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Performing spectrum sensing on complex $\alpha-\mu$ fading channel
# %matplotlib inline
# %config IPython.matplotlib.backend = "retina"
import matplotlib.pyplot as plt
from matplotlib import rcParams
rcParams["figure.dpi"] = 150
rcParams["savefig.dpi"] = 150
rcParams["text.usetex"] = True
import tqdm
import numpy as np
import scipy.special as sps
import scipy.integrate as integrate
np.warnings.filterwarnings('ignore')
from maoud import ComplexAlphaMu, AlphaMu
from maoud import mpsk
from maoud import marcumq
K = int(1e6) # Number of Monte Carlo realizations
N = 25 # Number of transmitted samples
L = 15 # Number of pairs to simulate
M = 64. # Size of the constellation
alpha, mu = 2., 1.
alphamu = ComplexAlphaMu(alpha, mu)
x = np.linspace(1e-3, 3., 1000) # Support of the fading density
plt.plot(x, alphamu.envelope_pdf(x))
# ## Probabilistic Analysis
s = mpsk(M, (K, N))
# +
Es = 1.0/M
snr_db = 5
sigma2 = Es * (10 ** (-snr_db / 10.))
h = alphamu.rvs(x=x, y=x, size=K).reshape(-1, 1)
w = np.sqrt(sigma2/2)*np.random.randn(K, N) + 1j*np.sqrt(sigma2/2)*np.random.randn(K, N)
H0 = w
H1 = h*s + w
# energy statistic
EH0 = H0.real ** 2 + H0.imag ** 2
EH1 = H1.real ** 2 + H1.imag ** 2
EH0 = np.sum(EH0, 1)
EH1 = np.sum(EH1, 1)
# generate the thresholds
delta = np.linspace(np.min(EH0), np.max(EH0), L)
pf = np.zeros(L)
pd = np.zeros(L)
# computing probabilities of false alarm and detection
for l in tqdm.tqdm(range(L)):
    pf[l] = np.sum(EH0 > delta[l])
    pd[l] = np.sum(EH1 > delta[l])
pf = pf / K
pd = pd / K
# -
# ## Numerical/Theoretical Analysis
# +
T = 100
delta = np.linspace(np.min(EH0), np.max(EH0), T)
Pd = np.zeros(T)
Pf = 1.0 - sps.gammainc(N, delta / sigma2)
for l in tqdm.tqdm(range(T)):
    cdf = lambda x: marcumq(np.sqrt(2.0*delta[l]/sigma2), N, np.sqrt(2*x*x*N*Es/sigma2))*alphamu.envelope_pdf(x)
    Pd[l] = integrate.quad(cdf, 0.0, np.inf, epsrel=1e-9, epsabs=0)[0]
# -
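# Before plotting, a quick standalone sanity check of the closed-form false-alarm probability used above: under $H_0$ the energy statistic is a sum of $N$ squared complex Gaussian samples, i.e. Gamma-distributed, so $P_f = 1 - P(N, \delta/\sigma^2)$ with $P$ the regularized lower incomplete gamma function. The parameter values below are hypothetical, independent of the simulation above.

```python
import numpy as np
import scipy.special as sps

rng = np.random.default_rng(0)
N, sigma2, K = 25, 0.5, 200000  # hypothetical parameters for the check
# K realizations of N complex Gaussian noise samples with variance sigma2
w = np.sqrt(sigma2 / 2) * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N)))
E = np.sum(np.abs(w) ** 2, axis=1)  # energy statistic under H0

delta = 15.0
pf_mc = np.mean(E > delta)                     # Monte Carlo estimate
pf_th = 1.0 - sps.gammainc(N, delta / sigma2)  # closed form used above
print(pf_mc, pf_th)  # the two values should agree closely
```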
# ## Plot
fig, ax = plt.subplots(figsize=(3.2360679775, 2))
ax.loglog(Pf, 1-Pd, 'k-', linewidth=1, label=r"Theoretical")
ax.loglog(pf, 1-pd, 'o', color='red', markeredgecolor='k', mew=.6, markersize=3., label=r"Simulation")
ax.tick_params(axis='x', which='minor', bottom='on')
plt.xlabel(r'Probability of false alarm')
plt.ylabel(r'Probability of miss')
plt.legend(fancybox=False, numpoints=1, edgecolor='k')
plt.savefig('spectrum_sensing.ps', transparent=True, bbox_inches='tight', pad_inches=.1)
| docs/source/ipython_notebooks/spectrum_sensing.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + _uuid="47a1741a-97ee-4016-98d4-4837c0b3addb" _cell_guid="51237a0b-61f7-499b-a248-9b904fb6de06"
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
    for filename in filenames:
        print(os.path.join(dirname, filename))
# + _uuid="fad76e64-6d29-457d-93a6-14dd68e0d180" _cell_guid="6b6b1a32-22f9-4ab1-8404-6b2c4e79c40a"
import tensorflow as tf
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Normalizer
from sklearn.decomposition import PCA as sklearnPCA
# Suppress unnecessary warnings so that the presentation looks clean
import warnings
warnings.filterwarnings("ignore")
from sklearn.metrics import classification_report, confusion_matrix
# + _uuid="ac510ca9-a950-41ac-97f0-7bffcddec1dc" _cell_guid="682e5fe9-aa34-4401-9b8e-15c6f1c6ea96"
train_df = pd.read_csv("../input/pasc-data-quest-20-20/doctor_train.csv")
test_df = pd.read_csv("../input/pasc-data-quest-20-20/doctor_test.csv")
train_df.head()
# + _uuid="83c293db-a8de-488e-9c18-07a586bb4ec5" _cell_guid="2e9eeee0-f3f1-4ac0-b964-ab10eab6c0b5"
print("The training dataset consists of", train_df.shape)
print("The test dataset consists of", test_df.shape)
# + _uuid="df814e8c-8139-4540-b474-0964df982a7d" _cell_guid="ad55b5fe-a436-441c-92cd-2e84a52daf77"
s = pd.Series(train_df.Y)
train_df.Y = s.replace({'no': 0, 'yes': 1})
# + _uuid="5f7ea8ba-4f58-4395-b715-7010aa68f2e9" _cell_guid="209f1d48-9e07-4ff3-9c6b-c6118721cfaf"
y = train_df.Y
y.name = 'label'
y
# + _uuid="a89fb306-14f3-4f07-a594-dd8fbda9cba9" _cell_guid="4bf60d47-d7fd-4f86-8d70-f6ed46e37a8f"
train_df['Y'].value_counts()
# + _uuid="0e762d2c-177b-4c64-9ab6-8a0ebdad66ad" _cell_guid="02440c3e-0400-4db7-8bf0-8c661858549f"
train_df = train_df.iloc[:,:-1]
print("After dropping the label, the training dataset consists of", train_df.shape)
# + _uuid="df589506-0dc5-4c0d-bd75-590e28338160" _cell_guid="966e462a-1489-4f59-a7ce-d9d17105eb64"
p = sns.countplot(train_df['age'],label="Count")
sns.set(rc={'figure.figsize':(30,30)})
# + _uuid="c1835a7d-6b02-4bd3-9391-466a1742eb34" _cell_guid="f15be3a9-d03a-4020-9e7c-573c961bd928"
# sns.countplot(wbcd['Profession'],label="Count")
# # %% [code]
# sns.countplot(wbcd['Status'],label="Count")
# # %% [code]
# sns.countplot(wbcd['edu'],label="Count")
# # %% [code]
# # import seaborn as sns; sns.set(style="ticks", color_codes=True)
# # g = sns.pairplot(wbcd)
# # %% [code]
# corr = wbcd.iloc[:,2:].corr()
# colormap = sns.diverging_palette(220, 10, as_cmap = True)
# plt.figure(figsize=(14,14))
# sns.heatmap(corr, cbar = True, square = True, annot=True, fmt= '.2f',annot_kws={'size': 8},
# cmap = colormap, linewidths=0.1, linecolor='white')
# plt.title('Correlation of WBCD Features', y=1.05, size=15)
# -
X.shape, train_df.shape, test_df.shape
# + _uuid="43147626-d3c2-45a4-922d-a074f5492563" _cell_guid="4d8a9df2-1765-494f-9fbd-9b181f2abac7"
train_df = train_df.set_index('ID')
test_df = test_df.set_index('ID')
train_df, test_df
# + _uuid="f69d856e-1b2a-4214-b9d0-7182ebf79ae7" _cell_guid="3b30727a-ca75-4c9f-936f-2646acbf74b1"
features = train_df.columns.to_list()
X = train_df[features]
# train,test = train_test_split(X, y, test_size=0.2, random_state=1)
X_train, X_test, y_train, y_val = train_test_split(train_df, y, test_size=0.33, random_state=1)
print("Training Data :",X_train.shape)
print("Val Data :",X_test.shape)
print("Test Data :",test_df.shape)
# -
X_train.columns
# + _uuid="e8239b35-666d-4be6-8b1e-2e7b61e2dc1b" _cell_guid="982d603e-820e-4fec-acab-1c191a3bfd38"
X_train = pd.get_dummies(X_train)
X_val = pd.get_dummies(X_test)
X_test = pd.get_dummies(test_df)
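# Note: calling pd.get_dummies separately on the train, validation and test frames (as above) can produce mismatched column sets when a category shows up in only one split. A minimal sketch of guarding against that with DataFrame.align (toy frames, not the actual data):

```python
import pandas as pd

# Toy frames: 'secondary' only appears in the second split
a = pd.get_dummies(pd.DataFrame({"edu": ["primary", "tertiary"]}))
b = pd.get_dummies(pd.DataFrame({"edu": ["secondary"]}))
# Outer-align the columns, filling absent dummy columns with 0
a, b = a.align(b, join="outer", axis=1, fill_value=0)
print(list(a.columns) == list(b.columns))  # True
```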
# + _uuid="bbf8d6a9-32d6-48b9-b6cd-ca2942574e55" _cell_guid="61976bcb-dff8-41c3-8e2d-9d735bdadd12"
for i in X_train.Profession_unknown:
    if X_train.Profession_unknown.all() == 1:
        X_train.Profession_admin[i] = 1
        X_train.Profession_blue_collar[i] = 1
        X_train.Profession_entrepreneur[i] = 1
        X_train.Profession_housemaid[i] = 1
        X_train.Profession_management[i] = 1
        X_train.Profession_retired[i] = 1
        X_train.Profession_self_employed[i] = 1
        X_train.Profession_services[i] = 1
        X_train.Profession_student[i] = 1
        X_train.Profession_technician[i] = 1
        X_train.Profession_unemployed[i] = 1
for i in X_train.edu_unknown:
    if X_train.edu_unknown.all() == 1:
        X_train.edu_primary[i] = 1
        X_train.edu_secondary[i] = 1
        X_train.edu_tertiary[i] = 1
for i in X_train.communication_unknown:
    if X_train.communication_unknown.all() == 1:
        X_train.communication_cellular[i] = 1
        X_train.communication_telephone[i] = 1
for i in X_train.side_effects_unknown:
    if X_train.side_effects_unknown.all() == 1:
        X_train.side_effects_failure[i] = 1
        X_train.side_effects_other[i] = 1
        X_train.side_effects_success[i] = 1
# + _uuid="d3d5ff92-ffaa-46b1-a33f-28285f715b3b" _cell_guid="4d7632da-5808-4a3f-bb23-f1ef48890d06"
for col in X_train.columns:
    if 'unknown' in col:
        X_train = X_train.drop(col, axis=1)
        X_val = X_val.drop(col, axis=1)
        X_test = X_test.drop(col, axis=1)
# colname = col[:col.index('unknown')-1]
# print(colname)
# train_index = train_df[train_df.Profession == 'unknown'].index
# # val_index = X_val[val_df.Profession == 'unknown'].index
# for col2 in X_train.columns:
#     if colname in col2:
#         print(col2)
#         X_train.loc[col2,train_index] = 1
# -
X_train.columns
# + _uuid="b0727f67-794c-49e6-b687-6a9681107a81" _cell_guid="fb369c6d-c96f-411b-92de-ab4c962d446f"
# X_val = X_val.drop('Profession_unknown', axis = 1)
# + _uuid="5ea8820a-f6a1-4b4d-b6e5-c77b28d04ac4" _cell_guid="15abd421-0fc4-4dee-9351-3f0f75744400"
# X_train = X_train.drop('Profession_unknown', axis = 1)
# + _uuid="809b7460-298e-4195-9cc9-49b85bc81781" _cell_guid="3fb822e9-d2db-4bd7-b27c-3849cfb9b9ff"
# X_test = X_test.drop('Profession_unknown', axis = 1)
# -
X_train.columns
# + _uuid="9b3c611d-48db-4005-8d63-00aa9f7bb799" _cell_guid="71ec43a9-e1ff-41bb-ae7f-232a01288538"
X_train.columns, X_val.columns
# +
# prof = [st for st in X_train.columns if 'Profession' in st]
# X_train[prof] == False
# + _uuid="fdf33d3f-4e58-4f1f-966d-1825a86927ff" _cell_guid="820f5e06-8723-46ae-a326-a2db9c41c09f"
X_train.Money = X_train.Money.fillna(X_train.Money.mean())
X_val.Money = X_val.Money.fillna(X_train.Money.mean())
X_test.Money = X_test.Money.fillna(X_train.Money.mean())
# + _uuid="7686cfe8-4dc8-46bd-bac9-6b7a1930e101" _cell_guid="b80a3d38-466f-4044-9a47-aa7d2ef8dd95"
X_train.age = X_train.age.fillna(X_train.age.median())
X_val.age = X_val.age.fillna(X_train.age.median())
X_test.age = X_test.age.fillna(X_train.age.median())
# X_train.age = X_train.age.fillna(32)
# X_val.age = X_val.age.fillna(32)
# X_test.age = X_test.age.fillna(32)
# -
X_train.isnull().sum()
# + _uuid="7e89b718-538e-48ad-a0ad-a961d8503fbc" _cell_guid="bcf3c4ce-c018-4bd3-ac50-1f9e6c40760e"
X_val['side_effects_success']
# + _uuid="3d9d835d-08b6-4835-ad7b-d2ca2f25a230" _cell_guid="22378a9c-c040-477f-9e33-8e9d0da5edc0"
X_train.columns, X_val.columns
# + _uuid="8a13941a-399a-4416-b711-cc6e484cdcc1" _cell_guid="fca014fb-cd1f-4bca-9a94-a27e52ab9f08"
X_train.isnull().sum()
# # %% [code]
# from keras.models import Sequential, Model
# from keras.layers import Conv1D, MaxPool1D, Dense, Dropout, Flatten, \
# BatchNormalization, Input, concatenate, Activation
# from keras.optimizers import Adam
# # %% [code]
# model = Sequential()
# model.add(Conv1D(filters=8, kernel_size=3, activation='relu', input_shape=(51,1)))
# model.add(MaxPool1D(strides=2))
# model.add(BatchNormalization())
# model.add(Conv1D(filters=16, kernel_size=3, activation='relu'))
# model.add(MaxPool1D(strides=2))
# model.add(BatchNormalization())
# # model.add(Conv1D(filters=32, kernel_size=3, activation='relu'))
# # model.add(MaxPool1D(strides=2))
# # model.add(BatchNormalization())
# # model.add(Conv1D(filters=64, kernel_size=3, activation='relu'))
# # model.add(MaxPool1D(strides=2))
# model.add(Flatten())
# model.add(Dropout(0.5))
# model.add(Dense(64, activation='relu'))
# model.add(Dropout(0.25))
# model.add(Dense(64, activation='relu'))
# model.add(Dense(1, activation='sigmoid'))
# # %% [code]
# model = Sequential()
# model.add(Dense(16, activation='relu', input_shape=(1,51)))
# model.add(BatchNormalization())
# model.add(Dense(64, activation='sigmoid'))
# model.add(BatchNormalization())
# # model.add(Dropout(0.5))
# model.add(Dense(64, activation='tanh'))
# # model.add(Dropout(0.25))
# model.add(Flatten())
# model.add(Dense(32, activation='relu'))
# model.add(BatchNormalization())
# model.add(Dense(64, activation='relu'))
# model.add(Dense(1, activation='sigmoid'))
# # %% [code]
# model.compile(loss='binary_crossentropy',optimizer='adam')
# model.summary()
# # %% [code]
# X_train_dl = np.array(X_train).reshape(X_train.shape[0], 1, X_train.shape[1])
# # %% [code]
# X_test_dl = np.array(X_test).reshape(X_test.shape[0], 1, X_test.shape[1])
# X_val_dl = np.array(X_val).reshape(X_val.shape[0], 1, X_val.shape[1])
# # %% [code]
# # from keras.utils import to_categorical
# # y_train_dl2 = to_categorical(y_train)
# # %% [code]
# model.fit(X_train_dl,y_train, batch_size=128, epochs=10)
# # %% [code]
# y_pred = model.predict(X_val_dl)
# y_ans = y_pred.T[1]
# thresh = 0.5
# y_ans[y_ans > thresh] = 1
# y_ans[y_ans <= thresh] = -1
# Evaluation of the disabled Keras model above - commented out as well, since
# the model and its predictions are never built in this run
# thresh = 0.25
# y_pred[y_pred>thresh] = 1.0
# y_pred[y_pred<=thresh] = 0.0
# print(confusion_matrix(y_val, y_pred))
# print(classification_report(y_val, y_pred))
# y_test = model.predict(X_test_dl)
# y_test[y_test>thresh] = 1.0
# y_test[y_test<=thresh] = 0.0
# y_test = y_test.flatten()
# print(y_test)
# + _uuid="13a2401e-f24b-4562-948f-43238d979e42" _cell_guid="93cf44c8-da74-4490-b74b-4273e9519547"
from sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC, Ridge, LassoCV
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier, GradientBoostingRegressor, VotingRegressor, StackingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error as mse
import xgboost as xgb
import lightgbm as lgb
from sklearn.svm import SVR
# + _uuid="4d0fab24-8c39-4994-9628-9cf3c89e435b" _cell_guid="1e25d05b-f966-4c5c-a56c-634767d8c2c2"
y_train.shape, X_train.shape
# + _uuid="000139eb-cdc8-4900-b20e-28d0817b67d4" _cell_guid="74de8a6f-2620-4d2b-aaa1-944d725e77f3"
# Sanity check that the labels contain no missing values
if not y_train.isnull().any():
    print("Hello")
# + _uuid="4dbd6fa1-d564-457f-b03e-131cb0dbd90a" _cell_guid="51bbb298-f98a-4d85-8b05-a1d8610f6063"
X_train
# + _uuid="2dced181-0eee-4bf0-ac7f-f19b1d53ad35" _cell_guid="33171986-711d-4a98-97fe-339180b14513"
X_val.shape, X_train.shape
# + _uuid="152ed508-bd9f-4007-9568-2889106ad15f" _cell_guid="8c4ca402-11c0-415f-9d68-c6dfba8f19ac"
X_full = pd.concat([X_train, X_val])
y_full = pd.concat([y_train, y_val])
# + _uuid="b8fead66-16ff-453a-b38f-0038c8a96b52" _cell_guid="105a3b38-79f0-4dea-a382-080447a519ee"
y_train.describe()
# + _uuid="bf39f93e-f92e-4852-9987-96508787c0d9" _cell_guid="5452f9bc-aa9f-4b37-a976-b2eb403653a4"
# X_train = X_full
# y_train = y_full
# + _uuid="d9e0f42b-e3b9-481d-a48c-a058e8ef5ce2" _cell_guid="2338029a-f712-4bdb-8bea-c9d6faafb085"
reg2 = Lasso(alpha =0.005, random_state=42, max_iter=5000)
reg2.fit(X_train, y_train)
# print(reg2.feature_importances_)
y_pred = reg2.predict(X_val)
thresh = 0.15
y_pred[y_pred>thresh] = 1.0
y_pred[y_pred<=thresh] = 0.0
print(confusion_matrix(y_val, y_pred))
print(classification_report(y_val, y_pred))
# print(y_pred)
y_test = reg2.predict(X_test)
y_test[y_test>thresh] = 1.0
y_test[y_test<=thresh] = 0.0
print(y_test)
# + _uuid="342f1145-9238-4c4b-ad67-244562b76925" _cell_guid="9e544d86-b146-41b4-ac3d-f60a24d604be"
reg2 = RandomForestRegressor(n_estimators=20, criterion='mse', max_depth=None, min_samples_split=2)
# reg2 = Lasso(alpha =0.005, random_state=42, max_iter=5000)
reg2.fit(X_train, y_train)
# print(reg2.feature_importances_)
y_pred = reg2.predict(X_val)
thresh = 0.2
y_pred[y_pred>thresh] = 1.0
y_pred[y_pred<=thresh] = 0.0
print(confusion_matrix(y_val, y_pred))
print(classification_report(y_val, y_pred))
# print(y_pred)
y_test = reg2.predict(X_test)
y_test[y_test>thresh] = 1.0
y_test[y_test<=thresh] = 0.0
print(y_test)
# + _uuid="eafa6eff-aa40-490e-be27-7bac54778c05" _cell_guid="339b993a-e7d3-4dd9-a043-d3684365e590"
reg2 = RandomForestClassifier(n_estimators=20, max_depth=None, min_samples_split=2)
reg2.fit(X_train, y_train)
# print(reg2.feature_importances_)
y_pred = reg2.predict(X_val)
thresh = 0.05
y_pred[y_pred>thresh] = 1.0
y_pred[y_pred<=thresh] = 0.0
print(confusion_matrix(y_val, y_pred))
print(classification_report(y_val, y_pred))
# print(y_pred)
y_test = reg2.predict(X_test)
# y_test[y_test>thresh] = 1.0
# y_test[y_test<=thresh] = 0.0
print(y_test)
# + _uuid="fb5c1076-fc11-45b2-8176-0a2304f9613d" _cell_guid="4113b6d6-b208-445a-a04a-6525b1313c7a"
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X_train, y_train)
y_pred = clf.predict(X_val)
print(confusion_matrix(y_val, y_pred))
print(classification_report(y_val, y_pred))
# print(y_pred)
y_test = clf.predict(X_test)
# y_test[y_test>thresh] = 1.0
# y_test[y_test<=thresh] = 0.0
print(y_test)
# + _uuid="b1e34242-aca4-4995-ae23-a0dacccdc22b" _cell_guid="c83fd8e0-0830-4647-bc74-7f4b9461ad32"
from sklearn.ensemble import StackingClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
estimators = [
    ('rf', reg2),
    ('lgm', clf5)
]
clf0 = StackingClassifier(
    estimators=estimators, final_estimator=clf
)
clf0.fit(X_train, y_train)
y_pred = clf0.predict(X_val)
print(confusion_matrix(y_val, y_pred))
print(classification_report(y_val, y_pred))
# print(y_pred)
y_test = clf0.predict(X_test)
# y_test[y_test>thresh] = 1.0
# y_test[y_test<=thresh] = 0.0
print(y_test)
# +
from lightgbm import LGBMClassifier
clf5 = LGBMClassifier()
clf5.fit(X_train, y_train)
y_pred = clf5.predict(X_val)
print(confusion_matrix(y_val, y_pred))
print(classification_report(y_val, y_pred))
# print(y_pred)
y_test = clf5.predict(X_test)
# y_test[y_test>thresh] = 1.0
# y_test[y_test<=thresh] = 0.0
print(y_test)
# + _uuid="573723d6-082d-44b4-a626-3bf5ea65320a" _cell_guid="3a6d85c3-5df4-40c9-9019-93cc8c8757c7"
y_test
# + _uuid="16b3330f-4003-44ca-a4ea-b5774cd0a4b2" _cell_guid="2734dae9-d6c1-4462-bf5c-9296add9a244"
er = StackingClassifier(estimators = [])
# + _uuid="3b1f5229-bddc-4a98-b8e9-84ace498569d" _cell_guid="a446bf87-1e41-46df-8836-8d129ac3298b"
# from sklearn.datasets import load_iris
# from sklearn import tree
# X, y = load_iris(return_X_y=True)
# clf = tree.DecisionTreeClassifier()
# clf = clf.fit(X_train, y_train)
# y_pred = clf.predict(X_val)
# + _uuid="48cf3013-9df8-49a1-97fa-5a7cec3b89f6" _cell_guid="e4f10854-4060-4d61-a02f-96da355aae9f"
y_test
# + _uuid="2b9cb048-7f27-4db3-9651-278764473fa5" _cell_guid="dd936c38-6802-4ddb-b140-1d8e88feb272"
y_final = pd.Series(y_test).replace({1:'yes',0:'no'})
# + _uuid="5da91cef-078f-4710-baed-fa1c16cd3363" _cell_guid="aa76e413-fdbb-4b06-aaad-79541cb4b00e"
ans = pd.read_csv('../input/pasc-data-quest-20-20/sample_submission.csv') # load data from csv
# ans.drop(wbcd.columns.difference(['id','Y']), 1, inplace=True)
# ans.insert(1, "WattHour", y_test.astype(int), True)
# + _uuid="727929f6-1e43-4df5-8184-b7544a59f0a6" _cell_guid="320a8f6a-5d2c-4c27-bf9c-4f09676a301c"
ans['Y'] = y_final
# + _uuid="4ce0e08d-ea5f-4d00-8079-7f377367443e" _cell_guid="d57473b4-c10b-4cc6-8fdf-c8a1c50101ba"
ans
# + _uuid="d04f8b81-f870-4856-8124-81863e213ea9" _cell_guid="00c6a83c-0b25-41e9-a216-7da373d75ffa"
ans.to_csv('submission.csv',index=False)
# + _uuid="16a00854-0912-46aa-bb3c-24c569ce3144" _cell_guid="99be6725-bc26-4503-bc9f-1cc992f8db14"
pd.read_csv('../working/submission.csv')
# + _uuid="cc157a89-fd15-4e0e-81e5-85b9ea8bf17c" _cell_guid="88aee7a5-1338-4aa8-b9db-0253199b6691"
# + _uuid="69d24c8f-cb45-4445-990c-907aa8711bd3" _cell_guid="8d0c1301-5e3d-41ee-8a06-89672b708612"
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.datasets import make_moons, make_circles, make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
# + _uuid="9b05c583-6fa4-4eb8-8045-66001ee2a21a" _cell_guid="7600390f-2801-44a0-be2f-f97f243f0e0c"
h = .02 # step size in the mesh
names = ["Nearest Neighbors", "Linear SVM", "RBF SVM", "Gaussian Process",
"Decision Tree", "Random Forest", "Neural Net", "AdaBoost",
"Naive Bayes", "QDA"]
classifiers = [
KNeighborsClassifier(3),
SVC(kernel="linear", C=0.025),
SVC(gamma=2, C=1),
GaussianProcessClassifier(1.0 * RBF(1.0)),
DecisionTreeClassifier(max_depth=5),
RandomForestClassifier(max_depth=5, n_estimators=10, max_features=1),
MLPClassifier(alpha=1, max_iter=1000),
AdaBoostClassifier(),
GaussianNB(),
QuadraticDiscriminantAnalysis()]
# + _uuid="5fb32fd9-8631-465e-b362-970c6dcf5330" _cell_guid="6706c873-e24a-42a9-afd3-ebb58ffe8994"
for name, clf in zip(names, classifiers):
clf.fit(X_train, y_train)
score = clf.score(X_val, y_val)
print(name,score)
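To pick a winner from a comparison loop like the one above, one option is to collect the validation scores and take the argmax. A self-contained sketch on synthetic data (the models and data here are illustrative assumptions, not the notebook's `X_train`/`X_val`):

```python
# Rank classifiers by validation accuracy on synthetic data; the
# models and data are stand-ins for the loop above
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X_syn, y_syn = make_classification(n_samples=200, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X_syn, y_syn, random_state=0)

scores = {}
for name, clf_ in [("Nearest Neighbors", KNeighborsClassifier(3)),
                   ("Decision Tree", DecisionTreeClassifier(max_depth=5, random_state=0))]:
    clf_.fit(X_tr, y_tr)
    scores[name] = clf_.score(X_va, y_va)

best = max(scores, key=scores.get)
print(best, scores[best])
```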
# + _uuid="09ac2d9c-a201-4b50-a7d5-a47f232ff440" _cell_guid="857986c7-d92a-47cd-853d-47cdbf0cce59"
| do-you-exercise.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
print("Hello World!")
# + pycharm={"name": "#%%\n"}
import pandas as pd
data = {'Channel': ["ARD", "ZDF", "ZDF_neo", "arte"],
'Number': [1, 2, 3, 4],
'Language': ["de", "de", "de", "fr"]}
data_pandas = pd.DataFrame(data)
display(data_pandas)
# -
print("Me again!")
| backups/jupyter/my-jupyter/.ipynb_checkpoints/notebook01-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
# +
automobiles_data = pd.read_csv('datasets/CarPrice_Assignment.csv')
automobiles_data.head()
# -
automobiles_data.shape
automobiles_data.columns
automobiles_data.drop(['car_ID', 'symboling', 'CarName'], axis=1, inplace=True)
automobiles_data = pd.get_dummies(automobiles_data)
automobiles_data.head()
automobiles_data.shape
X = automobiles_data.drop('price', axis=1)
y = automobiles_data['price']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=.2)
X_train.shape, X_test.shape
y_train.shape, y_test.shape
model = LinearRegression().fit(X_train, y_train)
training_score = model.score(X_train,y_train)
training_score
y_pred = model.predict(X_test)
y_pred
test_score = r2_score(y_test, y_pred)
test_score
import json
model.coef_
model.intercept_
# +
model_param = {}
model_param['coef'] = list(model.coef_)
model_param['intercept'] = model.intercept_.tolist()
# -
json_txt = json.dumps(model_param, indent=4)
json_txt
with open('models/regressor_param.txt', 'w') as file:
file.write(json_txt)
with open('models/regressor_param.txt', 'r') as file:
json_text = json.load(file)
json_model = LinearRegression()
json_model.coef_ = np.array(json_text['coef'])
json_model.intercept_ = np.array(json_text['intercept'])
# +
y_pred = json_model.predict(X_test)
r2_score(y_test, y_pred)
# -
test_score
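The JSON approach above only captures `coef_` and `intercept_`. A common alternative is to serialize the whole fitted estimator with the standard-library `pickle` module. A sketch on toy data (not part of the original notebook):

```python
import io
import pickle

import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0]])
y = np.array([2.0, 4.0, 6.0])
model = LinearRegression().fit(X, y)

# round-trip the whole fitted estimator through pickle
buf = io.BytesIO()
pickle.dump(model, buf)
buf.seek(0)
restored = pickle.load(buf)
print(np.allclose(restored.predict(X), model.predict(X)))  # True
```

In practice the bytes would go to a file (or `joblib.dump`), with the usual caveat that pickles should only be loaded from trusted sources.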
| Skill_Paths/Machine_Learning_Literacy/Deploying_Machine_Learning_Solutions/SerializingSklearnModels.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # DateDiffLeapYearTransformer
# This notebook shows the functionality in the DateDiffLeapYearTransformer class. This transformer calculates the age gap between two datetime columns in a pandas DataFrame. The transformer doesn't use np.timedelta64 to avoid miscalculations due to leap years.<br>
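The leap-year-safe idea can be sketched in plain Python: count a full year only once the anniversary of the lower date has been reached. This `age_in_years` helper is hypothetical and only illustrates the principle; it is not the tubular implementation:

```python
import datetime

def age_in_years(lower, upper):
    """Whole-year gap between two dates, leap-year safe (hypothetical
    helper illustrating the idea, not the tubular implementation)."""
    years = upper.year - lower.year
    # subtract one year if the anniversary hasn't been reached yet
    if (upper.month, upper.day) < (lower.month, lower.day):
        years -= 1
    return years

print(age_in_years(datetime.date(2000, 2, 29), datetime.date(2021, 2, 28)))  # 20
```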
import datetime
import pandas as pd
import numpy as np
import tubular
from tubular.dates import DateDiffLeapYearTransformer
tubular.__version__
# ## Create and Load datetime data
def create_datetime_data():
days_1 = np.random.randint(1, 29, 10)
months_1 = np.random.randint(1, 13, 10)
years_1 = np.random.randint(1970, 2000, 10)
days_2 = np.random.randint(1, 29, 10)
months_2 = np.random.randint(1, 13, 10)
years_2 = np.random.randint(2010, 2020, 10)
date_1 = [datetime.date(x, y, z) for x, y, z in zip(years_1, months_1, days_1)]
date_2 = [datetime.date(x, y, z) for x, y, z in zip(years_2, months_2, days_2)]
data = pd.DataFrame({"date_of_birth": date_1, "sale_date": date_2})
return data
datetime_data = create_datetime_data()
datetime_data
datetime_data.dtypes
# ## Usage
# The transformer requires 4 arguments:
# - column_lower: the datetime column that is being subtracted.
# - column_upper: the datetime column that is subtracted from.
# - new_column_name: the name of the new age column.
# - drop_cols: boolean to determine whether column_lower and column_upper are dropped after the calculation.
#
# ### Keeping old columns
date_diff_leap_year_transformer = DateDiffLeapYearTransformer(
column_lower="date_of_birth",
column_upper="sale_date",
new_column_name="age",
drop_cols=False,
)
transformed_data = date_diff_leap_year_transformer.transform(datetime_data)
transformed_data
# ### Dropping old columns
date_diff_leap_year_transformer = DateDiffLeapYearTransformer(
column_lower="date_of_birth",
column_upper="sale_date",
new_column_name="age",
drop_cols=True,
)
transformed_data_2 = date_diff_leap_year_transformer.transform(datetime_data)
transformed_data_2
| examples/dates/DateDiffLeapYearTransformer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
import os
import tensorflow as tf
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx.components import CsvExampleGen, ImportExampleGen
from tfx.utils.dsl_utils import external_input
from tfx.proto import example_gen_pb2
context = InteractiveContext()
# +
base_dir = os.getcwd()
data_dir = os.path.join(os.pardir, os.pardir, "data")
output = example_gen_pb2.Output(
split_config=example_gen_pb2.SplitConfig(splits=[
example_gen_pb2.SplitConfig.Split(name='train', hash_buckets=6),
example_gen_pb2.SplitConfig.Split(name='eval', hash_buckets=2),
example_gen_pb2.SplitConfig.Split(name='test', hash_buckets=2)
]))
# +
examples = external_input(os.path.join(base_dir, data_dir, "csv"))
example_gen = CsvExampleGen(input=examples, output_config=output)
context.run(example_gen)
# -
examples = external_input(os.path.join(base_dir, data_dir, "tfrecords"))
example_gen = ImportExampleGen(input=examples)
context.run(example_gen)
help(CsvExampleGen)
| chapters/data_ingestion/me_coding_along.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Coursera ML Ex2 - Logistic Regression with Regularization
# ## 1. Import packages
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
from mpl_toolkits.mplot3d import Axes3D
# ## 2. Define constants
# training dataset
DATA_FILE_NAME = './ex2data2.csv'
# degree of features
DEGREE = 6
# gradient descent max step
INTERATIONS = 200000
# learning rate
ALPHA = 0.001
# regularization
LAMBDA = 1
# ## 3. Cost function
# +
def sigmoid(z):
return 1.0 / (1.0 + np.exp(-z))
def compute_cost(X, y, theta, lamda):
# number of training examples
m = y.size
# activation
h = sigmoid(np.dot(X, theta))
# cost
j = - np.sum(y * np.log(h) + (1 - y) * np.log(1 - h))
    # regularization term
j += lamda * np.dot(theta.T, theta) / 2
j /= m
return j
# -
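Note that `compute_cost` (and the gradient below) penalizes the full `theta` vector, including the bias `theta[0]`; the textbook regularized cost usually excludes the bias term. A hedged sketch of the bias-free penalty, shown as an alternative rather than what the notebook computes:

```python
import numpy as np

def l2_penalty(theta, lamda):
    # exclude the bias theta[0] from the penalty (the cells above
    # penalize the full vector instead)
    return lamda * np.dot(theta[1:], theta[1:]) / 2

print(l2_penalty(np.array([5.0, 1.0, 2.0]), 2.0))  # 5.0
```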
# ## 4. Gradient Descent
def gradient_descent(X, y, theta, alpha, lamda, num_inters):
# number of training examples
m = y.size
jHistory = np.empty(num_inters)
for i in range(num_inters):
delta = np.dot(X.T, sigmoid(np.dot(X, theta))- y) + lamda * theta
delta /= m
theta -= alpha * delta
jHistory[i] = compute_cost(X, y, theta, lamda)
return theta, jHistory
# ## 5. Load training dataset
# +
df = pd.read_csv(DATA_FILE_NAME)
df_0 = df[df.y == 0]
df_1 = df[df.y == 1]
# plot data
df_0.plot(x='x_1', y='x_2', legend=False, marker='o', style='o', mec='b', mfc='w')
plt.plot(df_1.x_1, df_1.x_2, marker='x', linestyle='None', mec='r', mfc='w')
plt.xlabel('x_1'); plt.ylabel('x_2'); plt.show()
# extract X,y
X = df.values[:, 0:2]
y = df.values[:,2]
m = y.size # number of training examples
# add X_0 to X
X = np.concatenate((np.ones((m,1)), X.reshape(-1,2)), axis=1)
# -
# ## 6. Learn parameters
theta, jHistory = gradient_descent(X, y, np.zeros(X.shape[1]), ALPHA, LAMBDA, INTERATIONS)
print(theta)
# plot J
plt.plot(range(jHistory.size), jHistory, color='g')
plt.xlabel('n'); plt.ylabel('J'); plt.show()
# ## 7. Plot result
# +
# training data
df_0.plot(x='x_1', y='x_2', legend=False, marker='o', style='o', mec='b', mfc='w')
plt.plot(df_1.x_1, df_1.x_2, marker='x', linestyle='None', mec='r', mfc='w')
# decision line
x = np.linspace(-1.0, 1.0, num=1000)
y = np.empty((1000, 1000))
for i in range(1000):
for j in range(1000):
y[i][j] = sigmoid(np.dot(np.array([1.0, x[i], x[j]]).T, theta))
plt.contour(x, x, y)
plt.xlabel('x_1'); plt.ylabel('x_2'); plt.show()
# -
| coursera-ml/ex2-reg.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.5 64-bit (''venv'': conda)'
# name: python385jvsc74a57bd0379f303adb6a381777e199a378df62c5bd2528f4cad60f82c70a3b682fb35459
# ---
# # Measuring qubits in qoqo
#
# This notebook is designed to demonstrate the use of measurements in qoqo. We will look at several examples of measuring qubits, from single and multi-qubit registers. To learn about the effect of measurement, we will look at the state vectors before and after measurement.
from qoqo_pyquest import PyQuestBackend
from qoqo import Circuit
from qoqo import operations as ops
# ## Measuring a single qubit
#
# Here we first prepare the qubit in a superposition state,
# \begin{equation}
# |+ \rangle = \frac{1}{\sqrt{2}} \big( |0 \rangle + |1 \rangle \big).
# \end{equation}
# We look at the state after preparation, then do a measurement in the Z basis, and finally look again at the state after measurement.
#
# We see that the state after measurement has been projected onto either $|0 \rangle$ or $|1 \rangle$, consistent with the measurement outcome. Running this code many times should result in a random distribution of 'True' and 'False' outcomes.
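The projection described above can be sketched in plain NumPy, independently of qoqo: sample an outcome from the Born probabilities, then project and renormalize. An illustrative toy, assuming a real-valued single-qubit state:

```python
import numpy as np

rng = np.random.default_rng(42)
psi = np.array([1.0, 1.0]) / np.sqrt(2)      # the |+> state

# Born rule: outcome k occurs with probability |psi[k]|^2
p = np.abs(psi) ** 2
k = rng.choice(2, p=p)

# project onto the measured basis state and renormalize
post = np.zeros(2)
post[k] = psi[k] / np.abs(psi[k])
print(k, post)
```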
# +
state_init = Circuit()
state_init += ops.Hadamard(qubit=0) # prepare |+> state
# write state before measuring to readout register 'psi_in'
read_input = Circuit()
read_input += ops.DefinitionComplex(name='psi_in', length=2, is_output=True)
read_input += ops.PragmaGetStateVector(readout='psi_in', circuit=Circuit())
# measure qubit in Z basis and write result to classical register 'M1'
meas_circ = Circuit()
meas_circ += ops.DefinitionBit(name='M1', length=1, is_output=True)
meas_circ += ops.MeasureQubit(qubit=0,readout='M1',readout_index=0)
# write state after measuring to readout register 'psi_out'
read_output = Circuit()
read_output += ops.DefinitionComplex(name='psi_out', length=2, is_output=True)
read_output += ops.PragmaGetStateVector(readout='psi_out', circuit=Circuit())
# put each step of the circuit together
circuit = state_init + read_input + meas_circ + read_output
# run the circuit and collect output
backend = PyQuestBackend(number_qubits=1)
(result_bit_registers, result_float_registers, result_complex_registers) \
= backend.run_circuit(circuit)
print('Input state: \n', result_complex_registers['psi_in'][0], '\n')
print('Measurement result: ', result_bit_registers['M1'][0][0], '\n')
print('State after measurement: \n', result_complex_registers['psi_out'][0])
# -
# ## Measuring a single qubit in the X basis
#
# Instead of measuring in the Z basis, we can measure the qubit in the X basis by performing a Hadamard operator before the measurement.
#
# This time we see that the measurement result is always 'False', since we are measuring the $|+ \rangle$ state in the X basis, and it is an eigenvector of the X operator.
# +
# add Hadamard operator to change from Z to X basis
meas_X_circ = Circuit()
meas_X_circ += ops.DefinitionBit(name='M1', length=1, is_output=True)
meas_X_circ += ops.Hadamard(qubit=0)
meas_X_circ += ops.MeasureQubit(qubit=0,readout='M1',readout_index=0)
# perform additional Hadamard after measurement to readout in Z basis
read_output = Circuit()
read_output += ops.DefinitionComplex(name='psi_out', length=2, is_output=True)
read_output += ops.Hadamard(qubit=0)
read_output += ops.PragmaGetStateVector(readout='psi_out', circuit=Circuit())
circuit = state_init + read_input + meas_X_circ + read_output
# run the circuit and collect output
backend = PyQuestBackend(number_qubits=1)
(result_bit_registers, result_float_registers, result_complex_registers) \
= backend.run_circuit(circuit)
print('Input state: \n', result_complex_registers['psi_in'][0], '\n')
print('Measurement result: ', result_bit_registers['M1'][0][0], '\n')
print('State after measurement: \n', result_complex_registers['psi_out'][0])
# -
# ## Measuring a multi-qubit register
#
# Here we first prepare a multi-qubit register and demonstrate how it is possible to measure the entire register. As an example we prepare the multi-qubit register in the state,
# \begin{equation}
# |\psi \rangle = \frac{1}{\sqrt{2}} |010 \rangle + \frac{i}{\sqrt{2}} |101 \rangle.
# \end{equation}
#
# After preparation we read out the simulated state, before measurement. Next we measure each qubit of the state, and finally we read out the post-measurement state.
# +
number_of_qubits = 3
state_init = Circuit()
state_init += ops.PauliX(qubit=1)
state_init += ops.Hadamard(qubit=0)
state_init += ops.CNOT(control=0, target=1)
state_init += ops.CNOT(control=0, target=2)
state_init += ops.SGate(qubit=0)
# write state before measuring to readout register 'psi_in'
read_input = Circuit()
read_input += ops.DefinitionComplex(name='psi_in', length=2**number_of_qubits,
is_output=True)
read_input += ops.PragmaGetStateVector(readout='psi_in', circuit=Circuit())
# measure qubits in Z basis and write result to classical register 'M1M2M3'
meas_circ = Circuit()
meas_circ += ops.DefinitionBit(name='M1M2M3', length=3, is_output=True)
meas_circ += ops.MeasureQubit(qubit=0,readout='M1M2M3',readout_index=0)
meas_circ += ops.MeasureQubit(qubit=1,readout='M1M2M3',readout_index=1)
meas_circ += ops.MeasureQubit(qubit=2,readout='M1M2M3',readout_index=2)
# write state after measuring to readout register 'psi_out'
read_output = Circuit()
read_output += ops.DefinitionComplex(name='psi_out', length=2**number_of_qubits,
is_output=True)
read_output += ops.PragmaGetStateVector(readout='psi_out', circuit=Circuit())
circuit = state_init + read_input + meas_circ + read_output
# run the circuit and collect output
backend = PyQuestBackend(number_qubits=number_of_qubits)
(result_bit_registers, result_float_registers, result_complex_registers) \
= backend.run_circuit(circuit)
print('Input state: \n', result_complex_registers['psi_in'][0], '\n')
print('Measurement results: ', result_bit_registers['M1M2M3'][0], '\n')
print('State after measurement: \n', result_complex_registers['psi_out'][0])
# -
# ## Measuring one qubit from a multi-qubit register
#
# Measuring only a single qubit from a multi-qubit register is an almost identical process to measuring the entire register, except we only add a single measurement in this case.
#
# Here we again prepare the input state,
# \begin{equation}
# |\psi \rangle = \frac{1}{\sqrt{2}} |010 \rangle + \frac{i}{\sqrt{2}} |101 \rangle.
# \end{equation}
#
# After preparation we read out the simulated state, before measurement. Next we measure the first qubit of the state, and finally we read out the post-measurement state.
# +
number_of_qubits = 3
state_init = Circuit()
state_init += ops.PauliX(qubit=1)
state_init += ops.Hadamard(qubit=0)
state_init += ops.CNOT(control=0, target=1)
state_init += ops.CNOT(control=0, target=2)
state_init += ops.SGate(qubit=0)
# write state before measuring to readout register 'psi_in'
read_input = Circuit()
read_input += ops.DefinitionComplex(name='psi_in', length=2**number_of_qubits,
is_output=True)
read_input += ops.PragmaGetStateVector(readout='psi_in', circuit=Circuit())
# measure qubit in Z basis and write result to classical register 'M1'
meas_circ = Circuit()
meas_circ += ops.DefinitionBit(name='M1', length=1, is_output=True)
meas_circ += ops.MeasureQubit(qubit=0,readout='M1',readout_index=0)
# write state after measuring to readout register 'psi_out'
read_output = Circuit()
read_output += ops.DefinitionComplex(name='psi_out', length=2**number_of_qubits,
is_output=True)
read_output += ops.PragmaGetStateVector(readout='psi_out', circuit=Circuit())
circuit = state_init + read_input + meas_circ + read_output
# run the circuit and collect output
backend = PyQuestBackend(number_qubits=number_of_qubits)
(result_bit_registers, result_float_registers, result_complex_registers) \
= backend.run_circuit(circuit)
print('Input state: \n', result_complex_registers['psi_in'][0], '\n')
print('Measurement results: ', result_bit_registers['M1'][0], '\n')
print('State after measurement: \n', result_complex_registers['psi_out'][0])
# -
| qoqo/examples/Measurement_Example.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Part 7 - Federated Learning with FederatedDataset
#
# Here we introduce a new tool for using federated datasets. We have created a `FederatedDataset` class which is intended to be used like the PyTorch Dataset class, and is given to a federated data loader `FederatedDataLoader` which will iterate on it in a federated fashion.
#
#
# Authors:
# - <NAME> - Twitter: [@iamtrask](https://twitter.com/iamtrask)
# - <NAME> - GitHub: [@LaRiffle](https://github.com/LaRiffle)
# We use the sandbox that we discovered last lesson
import torch as th
import syft as sy
sy.create_sandbox(globals(), verbose=False)
# Then search for a dataset
boston_data = grid.search("#boston", "#data", verbose=False, return_counter=False)
boston_target = grid.search("#boston", "#target", verbose=False, return_counter=False)
# We load a model and an optimizer
n_features = boston_data['alice'][0].shape[1]
n_targets = 1
model = th.nn.Linear(n_features, n_targets)
optimizer = th.optim.SGD(params=model.parameters(),lr=0.0000001)
# Here we cast the data fetched in a `FederatedDataset`. See the workers which hold part of the data.
# +
# Cast the result in BaseDatasets
datasets = []
for worker in boston_data.keys():
dataset = sy.BaseDataset(boston_data[worker][0], boston_target[worker][0])
datasets.append(dataset)
# Build the FederatedDataset object
dataset = sy.FederatedDataset(datasets)
print(dataset.workers)
# -
# We put it in a `FederatedDataLoader` and specify options
train_loader = sy.FederatedDataLoader(dataset, batch_size=4, shuffle=False, drop_last=False)
# And finally we iterate over epochs. You can see how similar this is compared to pure and local PyTorch training!
epochs = 10
for epoch in range(1, epochs + 1):
loss_accum = 0
for batch_idx, (data, target) in enumerate(train_loader):
model.send(data.location)
optimizer.zero_grad()
pred = model(data)
loss = ((pred - target)**2).sum()
loss.backward()
optimizer.step()
model.get()
loss = loss.get()
loss_accum += float(loss)
if batch_idx % 20 == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * data.shape[0], len(train_loader),
100. * batch_idx / len(train_loader), loss.item()))
print('Total loss', loss_accum)
# # Congratulations!!! - Time to Join the Community!
#
# Congratulations on completing this notebook tutorial! If you enjoyed this and would like to join the movement toward privacy preserving, decentralized ownership of AI and the AI supply chain (data), you can do so in the following ways!
#
# ### Star PySyft on GitHub
#
# The easiest way to help our community is just by starring the Repos! This helps raise awareness of the cool tools we're building.
#
# - [Star PySyft](https://github.com/OpenMined/PySyft)
#
# ### Join our Slack!
#
# The best way to keep up to date on the latest advancements is to join our community! You can do so by filling out the form at [http://slack.openmined.org](http://slack.openmined.org)
#
# ### Join a Code Project!
#
# The best way to contribute to our community is to become a code contributor! At any time you can go to PySyft GitHub Issues page and filter for "Projects". This will show you all the top level Tickets giving an overview of what projects you can join! If you don't want to join a project, but you would like to do a bit of coding, you can also look for more "one off" mini-projects by searching for GitHub issues marked "good first issue".
#
# - [PySyft Projects](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3AProject)
# - [Good First Issue Tickets](https://github.com/OpenMined/PySyft/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22)
#
# ### Donate
#
# If you don't have time to contribute to our codebase, but would still like to lend support, you can also become a Backer on our Open Collective. All donations go toward our web hosting and other community expenses such as hackathons and meetups!
#
# [OpenMined's Open Collective Page](https://opencollective.com/openmined)
| examples/tutorials/Part 7 - Federated Learning with Federated Dataset.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 10 - Predicting Continuous Target Variables with Regression Analysis
# ### Overview
# - [Introducing a simple linear regression model](#Introducing-a-simple-linear-regression-model)
# - [Exploring the Housing Dataset](#Exploring-the-Housing-Dataset)
# - [Visualizing the important characteristics of a dataset](#Visualizing-the-important-characteristics-of-a-dataset)
# - [Implementing an ordinary least squares linear regression model](#Implementing-an-ordinary-least-squares-linear-regression-model)
# - [Solving regression for regression parameters with gradient descent](#Solving-regression-for-regression-parameters-with-gradient-descent)
# - [Estimating the coefficient of a regression model via scikit-learn](#Estimating-the-coefficient-of-a-regression-model-via-scikit-learn)
# - [Fitting a robust regression model using RANSAC](#Fitting-a-robust-regression-model-using-RANSAC)
# - [Evaluating the performance of linear regression models](#Evaluating-the-performance-of-linear-regression-models)
# - [Using regularized methods for regression](#Using-regularized-methods-for-regression)
# - [Turning a linear regression model into a curve - polynomial regression](#Turning-a-linear-regression-model-into-a-curve---polynomial-regression)
# - [Modeling nonlinear relationships in the Housing Dataset](#Modeling-nonlinear-relationships-in-the-Housing-Dataset)
# - [Dealing with nonlinear relationships using random forests](#Dealing-with-nonlinear-relationships-using-random-forests)
# - [Decision tree regression](#Decision-tree-regression)
# - [Random forest regression](#Random-forest-regression)
# - [Summary](#Summary)
# <br>
# <br>
from IPython.display import Image
# %matplotlib inline
# # Introducing a simple linear regression model
# #### Univariate Model
#
# $$
# y = w_0 + w_1 x
# $$
#
# Relationship between
# - a single feature (**explanatory variable**) $x$
# - a continuous target (**response**) variable $y$
Image(filename='./images/10_01.png', width=500)
# - **regression line** : the best-fit line
# - **offsets** or **residuals**: the gap between the regression line and the sample points
# #### Multivariate Model
# $$
# y = w_0 + w_1 x_1 + \dots + w_m x_m
# $$
# <br>
# <br>
# # Exploring the Housing dataset
# - Information about houses in the suburbs of Boston
# - Collected by <NAME> and <NAME> in 1978
# - 506 samples
# Source: [https://archive.ics.uci.edu/ml/datasets/Housing](https://archive.ics.uci.edu/ml/datasets/Housing)
#
# Attributes:
#
# <pre>
# 1. CRIM per capita crime rate by town
# 2. ZN proportion of residential land zoned for lots over
# 25,000 sq.ft.
# 3. INDUS proportion of non-retail business acres per town
# 4. CHAS Charles River dummy variable (= 1 if tract bounds
# river; 0 otherwise)
# 5. NOX nitric oxides concentration (parts per 10 million)
# 6. RM average number of rooms per dwelling
# 7. AGE proportion of owner-occupied units built prior to 1940
# 8. DIS weighted distances to five Boston employment centres
# 9. RAD index of accessibility to radial highways
# 10. TAX full-value property-tax rate per $10,000
# 11. PTRATIO pupil-teacher ratio by town
# 12. B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks
# by town
# 13. LSTAT % lower status of the population
# 14. MEDV Median value of owner-occupied homes in $1000's
# </pre>
# We'll consider **MEDV** as our target variable.
# +
import pandas as pd
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/'
'housing/housing.data',
header=None,
sep='\s+')
df.columns = ['CRIM', 'ZN', 'INDUS', 'CHAS',
'NOX', 'RM', 'AGE', 'DIS', 'RAD',
'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
df.head()
# -
# <br>
# <br>
# ## Visualizing the important characteristics of a dataset
# #### Scatter plot matrix
# +
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='whitegrid', context='notebook')
cols = ['LSTAT', 'INDUS', 'NOX', 'RM', 'MEDV']
sns.pairplot(df[cols], height=2.5)  # 'size' was renamed to 'height' in newer seaborn
plt.tight_layout()
# plt.savefig('./figures/scatter.png', dpi=300)
plt.show()
# -
# #### Correlation Matrix
#
# - a scaled version of the covariance matrix
# - each entry contains the **Pearson product-moment correlation coefficients** (**Pearson's r**)
# - quantifies **linear** relationship between features
# - ranges in $[-1,1]$
# - $r=1$ perfect positive correlation
# - $r=0$ no correlation
# - $r=-1$ perfect negative correlation
#
# $$
# r = \frac{
# \sum_{i=1}^n [(x^{(i)}-\mu_x)(y^{(i)}-\mu_y)]
# }{
# \sqrt{\sum_{i=1}^n (x^{(i)}-\mu_x)^2}
# \sqrt{\sum_{i=1}^n (y^{(i)}-\mu_y)^2}
# } =
# \frac{\sigma_{xy}}{\sigma_x\sigma_y}
# $$
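The formula can be checked directly against `np.corrcoef` with a quick sanity check on toy data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.1, 6.0, 8.2, 10.0])

# Pearson's r computed from the definition above
r_manual = np.sum((x - x.mean()) * (y - y.mean())) / (
    np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2)))

# ...and from NumPy's correlation matrix
r_numpy = np.corrcoef(x, y)[0, 1]
print(r_manual, r_numpy)
```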
# +
import numpy as np
cm = np.corrcoef(df[cols].values.T)
sns.set(font_scale=1.5)
hm = sns.heatmap(cm,
cbar=True,
annot=True,
square=True,
fmt='.2f',
annot_kws={'size': 15},
yticklabels=cols,
xticklabels=cols)
# plt.tight_layout()
# plt.savefig('./figures/corr_mat.png', dpi=300)
plt.show()
# -
# - MEDV has large correlation with LSTAT and RM
# - The relation between MEDV ~ LSTAT may not be linear
# - The relation between MEDV ~ RM looks linear
sns.reset_orig()
# %matplotlib inline
# <br>
# <br>
# # Implementing an ordinary least squares (OLS) linear regression model
# ## Solving regression for regression parameters with gradient descent
# #### OLS Cost Function (Sum of Squared Errors, SSE)
# $$
# J(w) = \frac12 \sum_{i=1}^n (y^{(i)} - \hat y^{(i)})^2 = \frac12 \| y - Xw - \mathbb{1}w_0\|^2
# $$
# - $\hat y^{(i)} = w^T x^{(i)} $ is the predicted value
# - OLS linear regression can be understood as Adaline without the step function, which converts the linear response $w^T x$ into $\{-1,1\}$.
# #### Gradient Descent (refresh)
# $$
# w_{k+1} = w_k - \eta_k \nabla J(w_k), \;\; k=1,2,\dots
# $$
# - $\eta_k>0$ is the learning rate
# - $$
# \nabla J(w_k) =
# \begin{bmatrix} -X^T(y-Xw- \mathbb{1}w_0) \\
# -\mathbb{1}^T(y-Xw- \mathbb{1}w_0)
# \end{bmatrix}
# $$
class LinearRegressionGD(object):
def __init__(self, eta=0.001, n_iter=20):
self.eta = eta
self.n_iter = n_iter
def fit(self, X, y):
self.w_ = np.zeros(1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
output = self.net_input(X)
errors = (y - output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self, X):
return np.dot(X, self.w_[1:]) + self.w_[0]
def predict(self, X):
return self.net_input(X)
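Before running gradient descent, a useful cross-check is the closed-form least-squares solution on the same bias-augmented design matrix. A sketch on synthetic data, assuming the `[w0, w1]` layout used by `LinearRegressionGD`:

```python
import numpy as np

rng = np.random.RandomState(0)
X_toy = rng.rand(50, 1)
y_toy = 3.0 * X_toy[:, 0] + 1.0 + 0.01 * rng.randn(50)

# prepend a bias column, then solve the normal equations via lstsq
Xb = np.hstack([np.ones((50, 1)), X_toy])
w = np.linalg.lstsq(Xb, y_toy, rcond=None)[0]   # w = [w0, w1]
print(np.round(w, 2))
```

Gradient descent on the standardized data should converge toward the same fit, up to the scaling.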
X = df[['RM']].values
y = df[['MEDV']].values
y.shape
# +
from sklearn.preprocessing import StandardScaler
sc_x = StandardScaler()
sc_y = StandardScaler()
X_std = sc_x.fit_transform(X)
#y_std = sc_y.fit_transform(y[:, np.newaxis]).flatten()
y_std = sc_y.fit_transform(y).flatten()
# -
y_std.shape
lr = LinearRegressionGD()
lr.fit(X_std, y_std)
plt.plot(range(1, lr.n_iter+1), lr.cost_)
plt.ylabel('SSE')
plt.xlabel('Epoch')
plt.tight_layout()
# plt.savefig('./figures/cost.png', dpi=300)
plt.show()
def lin_regplot(X, y, model):
plt.scatter(X, y, c='lightblue')
plt.plot(X, model.predict(X), color='red', linewidth=2)
return
lin_regplot(X_std, y_std, lr)
plt.xlabel('Average number of rooms [RM] (standardized)')
plt.ylabel('Price in $1000\'s [MEDV] (standardized)')
plt.tight_layout()
# plt.savefig('./figures/gradient_fit.png', dpi=300)
plt.show()
print('Slope: %.3f' % lr.w_[1])
print('Intercept: %.3f' % lr.w_[0])
num_rooms_std = sc_x.transform(np.array([[5.0]]))
price_std = lr.predict(num_rooms_std)
print("Price in $1000's: %.3f" % sc_y.inverse_transform(price_std))
# <br>
# <br>
# ## Estimating the coefficient of a regression model via scikit-learn
from sklearn.linear_model import LinearRegression
slr = LinearRegression()
slr.fit(X, y)
y_pred = slr.predict(X)
print('Slope: %.3f' % slr.coef_[0])
print('Intercept: %.3f' % slr.intercept_)
# The solution is different from the previous result, since the data is **not** normalized here.
lin_regplot(X, y, slr)
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.tight_layout()
# plt.savefig('./figures/scikit_lr_fit.png', dpi=300)
plt.show()
# <br>
# <br>
# # Fitting a robust regression model using RANSAC (RANdom SAmple Consensus)
# - Linear regression models can be heavily affected by outliers
# - A very small subset of data can have a big impact on the estimated model coefficients
# - Removing outliers is not easy
# RANSAC algorithm:
#
# 1. Select a random subset of samples to be *inliers* and fit the model
# 2. Test all other data points against the fitted model, and add those points that fall within a user-defined tolerance to inliers
# 3. Refit the model using all inliers.
# 4. Estimate the error of the fitted model vs. the inliers
# 5. Terminate if the performance meets a user-defined threshold, or if a fixed number of iterations has been reached.
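# The five steps can be sketched directly in NumPy (a minimal, illustrative
# implementation for a 1-D line, not sklearn's; the trial count and tolerance
# below are arbitrary choices):

```python
import numpy as np

def simple_ransac(x, y, n_trials=100, min_samples=2, threshold=1.0, seed=0):
    """Minimal RANSAC for a line y = a*x + b (illustration only)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_trials):
        # 1. fit a candidate line to a random minimal subset
        idx = rng.choice(len(x), size=min_samples, replace=False)
        a, b = np.polyfit(x[idx], y[idx], deg=1)
        # 2. points within the tolerance become inliers
        inliers = np.abs(y - (a * x + b)) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # 3. refit using all inliers of the best candidate
    a, b = np.polyfit(x[best_inliers], y[best_inliers], deg=1)
    return a, b, best_inliers

# a clean line y = 2x + 1 with two gross outliers
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0
y[[5, 20]] += 50.0
a, b, inliers = simple_ransac(x, y)
```

With the outliers excluded by the tolerance test, the refit recovers the clean line almost exactly.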
# +
from sklearn.linear_model import RANSACRegressor
ransac = RANSACRegressor(LinearRegression(),
max_trials=100,
min_samples=50,
                         loss='absolute_loss',  # renamed to 'absolute_error' in scikit-learn >= 1.0
residual_threshold=5.0, # problem-specific
random_state=0)
ransac.fit(X, y)
inlier_mask = ransac.inlier_mask_
outlier_mask = np.logical_not(inlier_mask)
line_X = np.arange(3, 10, 1)
line_y_ransac = ransac.predict(line_X[:, np.newaxis])
plt.scatter(X[inlier_mask], y[inlier_mask],
c='blue', marker='o', label='Inliers')
plt.scatter(X[outlier_mask], y[outlier_mask],
c='lightgreen', marker='s', label='Outliers')
plt.plot(line_X, line_y_ransac, color='red')
plt.xlabel('Average number of rooms [RM]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/ransac_fit.png', dpi=300)
plt.show()
# -
print('Slope: %.3f' % ransac.estimator_.coef_[0])
print('Intercept: %.3f' % ransac.estimator_.intercept_)
# <br>
# <br>
# # Evaluating the performance of linear regression models
# +
from sklearn.model_selection import train_test_split
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.3, random_state=0)
# +
slr = LinearRegression()
slr.fit(X_train, y_train)
y_train_pred = slr.predict(X_train)
y_test_pred = slr.predict(X_test)
# -
# #### Residual Plot
# - It's not easy to plot a regression line in general, since the model uses multiple explanatory variables
# - Residual plots are used to:
#     - detect nonlinearity
#     - detect outliers
#     - check whether the errors are randomly distributed
# +
plt.scatter(y_train_pred, y_train_pred - y_train,
c='blue', marker='o', label='Training data')
plt.scatter(y_test_pred, y_test_pred - y_test,
c='lightgreen', marker='s', label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
# -
# If we see patterns in the residual plot, the model failed to capture some explanatory information, which has leaked into the residuals.
# #### MSE (Mean-Square Error)
# $$
# \text{MSE} = \frac{1}{n} \sum_{i=1}^n \left( y^{(i)} - \hat y^{(i)} \right)^2
# $$
# #### $R^2$ score
#
# - The fraction of variance captured by the model
# - $R^2=1$ : the model fits the data perfectly
#
# $$
# R^2 = 1 - \frac{SSE}{SST}, \;\; SST = \sum_{i=1}^n \left( y^{(i)}-\mu_y\right)^2
# $$
#
# $$
# R^2 = 1 - \frac{MSE}{Var(y)}
# $$
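# Both quantities are easy to check by hand on a toy example (made-up numbers);
# in particular, $1 - SSE/SST$ and $1 - MSE/\mathrm{Var}(y)$ agree:

```python
import numpy as np

# hypothetical targets and predictions
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5, 0.0, 2.0, 8.0])

mse = np.mean((y_true - y_pred) ** 2)          # MSE definition above
sse = np.sum((y_true - y_pred) ** 2)
sst = np.sum((y_true - y_true.mean()) ** 2)
r2_from_sums = 1.0 - sse / sst                 # R^2 = 1 - SSE/SST
r2_from_mse = 1.0 - mse / np.var(y_true)       # R^2 = 1 - MSE/Var(y)
```

The two $R^2$ forms coincide because both numerator and denominator are divided by the same $n$.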
# +
from sklearn.metrics import r2_score
from sklearn.metrics import mean_squared_error
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
# -
# The gap in MSE (between train and test) indicates overfitting
# <br>
# <br>
# # Using regularized methods for regression
# #### Ridge Regression
# $$
# J(w) = \frac12 \sum_{i=1}^n (y^{(i)}-\hat y^{(i)})^2 + \lambda \|w\|_2^2
# $$
#
# #### LASSO (Least Absolute Shrinkage and Selection Operator)
# $$
# J(w) = \frac12 \sum_{i=1}^n (y^{(i)}-\hat y^{(i)})^2 + \lambda \|w\|_1
# $$
#
# #### Elastic-Net
# $$
# J(w) = \frac12 \sum_{i=1}^n (y^{(i)}-\hat y^{(i)})^2 + \lambda_1 \|w\|_2^2 + \lambda_2 \|w\|_1
# $$
#
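# For ridge, the penalized cost above has a closed-form minimizer,
# $w = (X^T X + \lambda I)^{-1} X^T y$ (intercept omitted for brevity).
# A quick sketch with made-up data shows the penalty shrinking the weights
# relative to ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

lam = 1.0
# closed-form ridge solution: w = (X^T X + lambda * I)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
# ordinary least squares for comparison (lambda = 0)
w_ols = np.linalg.solve(X.T @ X, X.T @ y)
```

Each component of the ridge solution is scaled by $s_i^2 / (s_i^2 + \lambda) < 1$ in the singular-value basis, so its norm is always smaller than the OLS norm.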
# +
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
ridge = Ridge(alpha=1.0)
lasso = Lasso(alpha=1.0)
enet = ElasticNet(alpha=1.0, l1_ratio=0.5)
ridge.fit(X_train, y_train)
lasso.fit(X_train, y_train)
enet.fit(X_train, y_train)
#y_train_pred = lasso.predict(X_train)
y_test_pred_r = ridge.predict(X_test)
y_test_pred_l = lasso.predict(X_test)
y_test_pred_e = enet.predict(X_test)
print("Ridge = ", ridge.coef_)
print("LASSO = ", lasso.coef_)
print("ENET = ",enet.coef_)
# -
print('Test MSE  Ridge: %.3f, LASSO: %.3f, ElasticNet: %.3f' % (
        mean_squared_error(y_test, y_test_pred_r),
        mean_squared_error(y_test, y_test_pred_l),
        mean_squared_error(y_test, y_test_pred_e)))
print('Test R^2  Ridge: %.3f, LASSO: %.3f, ElasticNet: %.3f' % (
        r2_score(y_test, y_test_pred_r),
        r2_score(y_test, y_test_pred_l),
        r2_score(y_test, y_test_pred_e)))
# <br>
# <br>
# # Turning a linear regression model into a curve - polynomial regression
# $$
# y = w_0 + w_1 x + w_2 x^2 + \dots + w_d x^d
# $$
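# Expanding $x$ into its powers turns polynomial regression into ordinary
# linear regression on the new columns; a small sketch with a made-up
# quadratic (least squares recovers $w_0, w_1, w_2$ exactly on noise-free data):

```python
import numpy as np

# expand a single feature x into [1, x, x^2] (degree-2 design matrix)
x = np.array([1.0, 2.0, 3.0, 4.0])
X_poly = np.column_stack([x ** d for d in range(3)])
y = 2.0 + 3.0 * x + 0.5 * x ** 2        # made-up quadratic targets

# ordinary least squares on the expanded features
w, *_ = np.linalg.lstsq(X_poly, y, rcond=None)
```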
# +
X = np.array([258.0, 270.0, 294.0,
320.0, 342.0, 368.0,
396.0, 446.0, 480.0, 586.0])[:, np.newaxis]
y = np.array([236.4, 234.4, 252.8,
298.6, 314.2, 342.2,
360.8, 368.0, 391.2,
390.8])
# +
from sklearn.preprocessing import PolynomialFeatures
lr = LinearRegression()
pr = LinearRegression()
quadratic = PolynomialFeatures(degree=2)
X_quad = quadratic.fit_transform(X)
# +
# fit linear features
lr.fit(X, y)
X_fit = np.arange(250, 600, 10)[:, np.newaxis]
y_lin_fit = lr.predict(X_fit)
# fit quadratic features
pr.fit(X_quad, y)
y_quad_fit = pr.predict(quadratic.fit_transform(X_fit))
# plot results
plt.scatter(X, y, label='training points')
plt.plot(X_fit, y_lin_fit, label='linear fit', linestyle='--')
plt.plot(X_fit, y_quad_fit, label='quadratic fit')
plt.legend(loc='upper left')
plt.tight_layout()
# plt.savefig('./figures/poly_example.png', dpi=300)
plt.show()
# -
y_lin_pred = lr.predict(X)
y_quad_pred = pr.predict(X_quad)
print('Training MSE linear: %.3f, quadratic: %.3f' % (
mean_squared_error(y, y_lin_pred),
mean_squared_error(y, y_quad_pred)))
print('Training R^2 linear: %.3f, quadratic: %.3f' % (
r2_score(y, y_lin_pred),
r2_score(y, y_quad_pred)))
# <br>
# <br>
# ## Modeling nonlinear relationships in the Housing Dataset
# +
X = df[['LSTAT']].values
y = df['MEDV'].values
regr = LinearRegression()
# create quadratic features
quadratic = PolynomialFeatures(degree=2)
cubic = PolynomialFeatures(degree=3)
X_quad = quadratic.fit_transform(X)
X_cubic = cubic.fit_transform(X)
# fit features
X_fit = np.arange(X.min(), X.max(), 1)[:, np.newaxis]
regr = regr.fit(X, y)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y, regr.predict(X))
regr = regr.fit(X_quad, y)
y_quad_fit = regr.predict(quadratic.fit_transform(X_fit))
quadratic_r2 = r2_score(y, regr.predict(X_quad))
regr = regr.fit(X_cubic, y)
y_cubic_fit = regr.predict(cubic.fit_transform(X_fit))
cubic_r2 = r2_score(y, regr.predict(X_cubic))
# plot results
plt.scatter(X, y, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2,
linestyle=':')
plt.plot(X_fit, y_quad_fit,
label='quadratic (d=2), $R^2=%.2f$' % quadratic_r2,
color='red',
lw=2,
linestyle='-')
plt.plot(X_fit, y_cubic_fit,
label='cubic (d=3), $R^2=%.2f$' % cubic_r2,
color='green',
lw=2,
linestyle='--')
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
plt.legend(loc='upper right')
plt.tight_layout()
# plt.savefig('./figures/polyhouse_example.png', dpi=300)
plt.show()
# -
# As the model complexity increases, the chance of overfitting increases as well
#
# Transforming the dataset:
# +
X = df[['LSTAT']].values
y = df['MEDV'].values
# transform features
X_log = np.log(X)
y_sqrt = np.sqrt(y)
# fit features
X_fit = np.arange(X_log.min()-1, X_log.max()+1, 1)[:, np.newaxis]
regr = regr.fit(X_log, y_sqrt)
y_lin_fit = regr.predict(X_fit)
linear_r2 = r2_score(y_sqrt, regr.predict(X_log))
# plot results
plt.scatter(X_log, y_sqrt, label='training points', color='lightgray')
plt.plot(X_fit, y_lin_fit,
label='linear (d=1), $R^2=%.2f$' % linear_r2,
color='blue',
lw=2)
plt.xlabel('log(% lower status of the population [LSTAT])')
plt.ylabel(r"$\sqrt{Price \; in \; \$1000's \; [MEDV]}$")  # raw string avoids invalid-escape warnings
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('./figures/transform_example.png', dpi=300)
plt.show()
# -
# <br>
# <br>
# # Dealing with nonlinear relationships using random forests
# We use Information Gain (IG) to find the feature and split point that maximize IG:
#
# $$
# IG(D_p, x_i) = I(D_p) - \frac{N_{left}}{N_p} I(D_{left}) - \frac{N_{right}}{N_p} I(D_{right})
# $$
#
# where $I$ is the impurity measure.
#
# We've used e.g. entropy for discrete features. Here, we use MSE at node $t$ instead for continuous features:
#
# $$
# I(t) = MSE(t) = \frac{1}{N_t} \sum_{i \in D_t} (y^{(i)} - \bar y_t)^2
# $$
# where $\bar y_t$ is the sample mean,
# $$
# \bar y_t = \frac{1}{N_t} \sum_{i \in D_t} y^{(i)}
# $$
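# For a single feature, the split search reduces to picking the threshold that
# minimizes the weighted child MSE from the IG formula above; a brute-force
# sketch (illustrative only, not sklearn's implementation):

```python
import numpy as np

def best_split(x, y):
    """Brute-force search for the threshold minimizing weighted child MSE."""
    best_t, best_cost = None, np.inf
    for t in np.unique(x)[:-1]:          # candidate thresholds
        left, right = y[x <= t], y[x > t]
        # weighted impurity: (N_left/N) * MSE_left + (N_right/N) * MSE_right
        cost = (len(left) * left.var() + len(right) * right.var()) / len(y)
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost

# a step function: the obvious split is between x = 3 and x = 4
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([10.0, 10.0, 10.0, 20.0, 20.0, 20.0])
t, cost = best_split(x, y)
```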
# ## Decision tree regression
# +
from sklearn.tree import DecisionTreeRegressor
X = df[['LSTAT']].values
y = df['MEDV'].values
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(X, y)
sort_idx = X.flatten().argsort()
lin_regplot(X[sort_idx], y[sort_idx], tree)
plt.xlabel('% lower status of the population [LSTAT]')
plt.ylabel('Price in $1000\'s [MEDV]')
# plt.savefig('./figures/tree_regression.png', dpi=300)
plt.show()
r2 = r2_score(y, tree.predict(X))
print("R^2 = ", r2)
# -
# Disadvantage: it does not capture the continuity and differentiability of the desired prediction
# <br>
# <br>
# ## Random forest regression
# Advantages:
# - better generalization than individual trees
# - less sensitive to outliers in the dataset
# - don't require much parameter tuning
# +
X = df.iloc[:, :-1].values
y = df['MEDV'].values
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.4, random_state=1)
# +
from sklearn.ensemble import RandomForestRegressor
forest = RandomForestRegressor(n_estimators=1000,
                               criterion='mse',  # renamed to 'squared_error' in scikit-learn >= 1.0
random_state=1,
n_jobs=-1)
forest.fit(X_train, y_train)
y_train_pred = forest.predict(X_train)
y_test_pred = forest.predict(X_test)
print('MSE train: %.3f, test: %.3f' % (
mean_squared_error(y_train, y_train_pred),
mean_squared_error(y_test, y_test_pred)))
print('R^2 train: %.3f, test: %.3f' % (
r2_score(y_train, y_train_pred),
r2_score(y_test, y_test_pred)))
# +
plt.scatter(y_train_pred,
y_train_pred - y_train,
c='black',
marker='o',
s=35,
alpha=0.5,
label='Training data')
plt.scatter(y_test_pred,
y_test_pred - y_test,
c='lightgreen',
marker='s',
s=35,
alpha=0.7,
label='Test data')
plt.xlabel('Predicted values')
plt.ylabel('Residuals')
plt.legend(loc='upper left')
plt.hlines(y=0, xmin=-10, xmax=50, lw=2, color='red')
plt.xlim([-10, 50])
plt.tight_layout()
# plt.savefig('./figures/slr_residuals.png', dpi=300)
plt.show()
# -
# <br>
# <br>
# # Summary
# - Univariate and multivariate linear models
# - RANSAC to deal with outliers
# - Regularization: control model complexity to avoid overfitting
| chap12.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import fftconvolve
from librosa.core import stft
from librosa.core import istft
from librosa import amplitude_to_db, db_to_amplitude
from librosa.display import specshow
from librosa.output import write_wav  # librosa.output was removed in librosa >= 0.8; use soundfile.write instead
from scipy.signal import butter, lfilter, csd
from scipy.linalg import svd, pinv, inv
from utils import apply_reverb, read_wav
import corpus
import mir_eval
import pyroomacoustics as pra
# +
corners = np.array([[0,0], [0,8], [8,8], [8,0]]).T # [x,y]
room = pra.Room.from_corners(corners)
s1, s2 = map(read_wav, corpus.experiment_files_timit())
if len(s1) > len(s2):
pad_length = len(s1) - len(s2)
s2 = np.pad(s2, (0,pad_length), 'reflect')
else:
pad_length = len(s2) - len(s1)
s1 = np.pad(s1, (0,pad_length), 'reflect')
room.add_source([4.,4.], signal=s1)
room.add_source([2.,4.], signal=s2)
R = pra.linear_2D_array(center=[2.,1.], M=2, phi=0, d=0.75)
room.add_microphone_array(pra.MicrophoneArray(R, room.fs))
fig, ax = room.plot()
ax.set_xlim([-1, 9])
ax.set_ylim([-1, 9])
# -
# 3D case
# +
corners = np.array([[0,0], [0,8], [8,8], [8,0]]).T # [x,y]
room = pra.Room.from_corners(corners)
room.extrude(5.)
s1, s2 = map(read_wav, corpus.experiment_files_timit())
if len(s1) > len(s2):
pad_length = len(s1) - len(s2)
s2 = np.pad(s2, (0,pad_length), 'reflect')
else:
pad_length = len(s2) - len(s1)
s1 = np.pad(s1, (0,pad_length), 'reflect')
room.add_source([8.,4.,1.6], signal=s1)
# room.add_source([2.,4.,1.6], signal=s2)
#[[X],[Y],[Z]]
R = np.asarray([[4.75,5.5],[2.,2.],[1.,1]])
room.add_microphone_array(pra.MicrophoneArray(R, room.fs))
fig, ax = room.plot()
ax.set_xlim([-3, 9])
ax.set_ylim([-1, 9])
ax.set_zlim([0, 6])
# -
room.plot_rir()
fig = plt.gcf()
fig.set_size_inches(20, 10)
room.simulate()
print(room.mic_array.signals.shape)
# +
nfft=2048
win = 1024
hop = int(nfft/8)
Y1_o = stft(room.mic_array.signals[0,:len(s1)], n_fft=nfft, hop_length=hop, win_length=win)
Y2_o = stft(room.mic_array.signals[1,:len(s1)], n_fft=nfft, hop_length=hop, win_length=win)
X1_o = stft(s1, n_fft=nfft, hop_length=hop, win_length=win)
Gxx = np.abs(X1_o * np.conj(X1_o))
Gxy = np.abs(X1_o * np.conj(Y1_o))
Gyx = np.abs(Y1_o * np.conj(X1_o))
Gyy = np.abs(Y1_o * np.conj(Y1_o))
F,T = Gxx.shape
print(Gxx.shape)
print(Gxy.shape)
print(Gyx.shape)
print(Gyy.shape)
# +
from scipy.linalg import svd, pinv
temp = np.asarray([[Gxx, Gxy],[Gyx, Gyy]]).reshape(F*2,T*2)
print(temp.shape)
U, s, V = svd(temp)
plt.figure(figsize=(10,10))
plt.plot(s/sum(s))
tmpsum = 0
summed = []
for i in range(len(s)):
tmpsum += s[i]/sum(s)
summed.append(tmpsum)
summed = np.asarray(summed)
plt.figure(figsize=(10,10))
plt.plot(summed)
plt.axhline(y=0.95, color='g')
plt.axhline(y=0.9999, color='r')
plt.axvline(x=37, color='g')
plt.axvline(x=284, color='r')
plt.axvline(x=341, color='y')
smallUgt1 = U[:,np.where(s>1)].reshape(F*2,-1)
smallUgt10 = U[:,np.where(s>0.5)].reshape(F*2,-1)
smallVgt1 = V[np.where(s>1),:].reshape(-1, T*2)
smallVgt10 = V[np.where(s>0.5),:].reshape(-1, T*2)
Hsgt1 = np.matmul(smallUgt1[:F,:],pinv(smallVgt1[:,T:]).T)
Hsgt10 = np.matmul(smallUgt10[:F,:],pinv(smallVgt10[:,T:]).T)
smallU95p = U[:,:37].reshape(F*2,-1)
smallU9999p = U[:,:284].reshape(F*2,-1)
smallU999999p = U[:,:341].reshape(F*2,-1)
smallV95p = V[:37,:].reshape(-1, T*2)
smallV9999p = V[:284,:].reshape(-1, T*2)
smallV999999p = V[:341,:].reshape(-1, T*2)
Hs95p = np.matmul(smallU95p[:F,:],pinv(smallV95p[:,T:]).T)
Hs9999p = np.matmul(smallU9999p[:F,:],pinv(smallV9999p[:,T:]).T)
Hs999999p = np.matmul(smallU999999p[:F,:],pinv(smallV999999p[:,T:]).T)
# -
plt.figure(figsize=(10,10))
ax1 = plt.subplot(511)
specshow(amplitude_to_db(np.multiply(Hsgt1,Y1_o), ref=np.max), y_axis='log', x_axis='time')
plt.title('Reconstructed spectrogram Hsgt1')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
ax1 = plt.subplot(512)
specshow(amplitude_to_db(np.multiply(Hsgt10,Y1_o), ref=np.max), y_axis='log', x_axis='time')
plt.title('Reconstructed spectrogram Hsgt10')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
plt.subplot(513, sharex=ax1)
specshow(amplitude_to_db(np.multiply(Hs95p,Y1_o), ref=np.max), y_axis='log', x_axis='time')
plt.title('Reconstructed spectrogram Hs95p')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
plt.subplot(514, sharex=ax1)
specshow(amplitude_to_db(np.multiply(Hs9999p,Y1_o), ref=np.max), y_axis='log', x_axis='time')
plt.title('Reconstructed spectrogram Hs9999p')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
plt.subplot(515, sharex=ax1)
specshow(amplitude_to_db(np.multiply(Hs999999p,Y1_o), ref=np.max), y_axis='log', x_axis='time')
plt.title('Reconstructed spectrogram Hs999999p')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
filter_result = np.multiply(pinv(Hs999999p).T,Y1_o)
plt.figure(figsize=(10,10))
ax1 = plt.subplot(211)
specshow(amplitude_to_db(filter_result, ref=np.max), y_axis='log', x_axis='time')
plt.title('Filtered spectrogram: pinv(Hs999999p) applied to Y1')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
ax1 = plt.subplot(212)
specshow(amplitude_to_db(Y1_o, ref=np.max), y_axis='log', x_axis='time')
plt.title('Reverberant microphone spectrogram Y1')
plt.colorbar(format='%+2.0f dB')
plt.tight_layout()
# +
filter_result = np.multiply((Hs999999p),Y1_o)
recon_y1_Hs = istft(filter_result, hop_length=hop, win_length=win)
fig, ax = plt.subplots()
ax.plot(s1)
ax.plot(recon_y1_Hs)
ax.set(xlabel='time (samples)', ylabel='amplitude',
       title='original s1 vs. reconstructed y1')
ax.grid()
| Statistical/Dereverberation-Hs_magnitude.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
#Nearest Neighbors Classification in Scikit-learn
from sklearn.neighbors import NearestNeighbors
import numpy as np
X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
distances, indices = nbrs.kneighbors(X)
indices
distances
# -
nbrs.kneighbors_graph(X).toarray()
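# Under the hood (with `algorithm='brute'`), `kneighbors` simply sorts the
# pairwise distance matrix; a NumPy equivalent for the same `X` (each point's
# nearest neighbor is itself, at distance 0):

```python
import numpy as np

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]], dtype=float)
diff = X[:, None, :] - X[None, :, :]
dist = np.sqrt((diff ** 2).sum(-1))          # 6x6 pairwise Euclidean distances
indices = np.argsort(dist, axis=1)[:, :2]    # indices of the 2 nearest points
```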
# +
#Build a classifier using k-Nearest Neighbors Classifier with in-built Iris dataset and plot the decision boundaries of each class
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 15
# import some data to play with
iris = datasets.load_iris()
# we only take the first two features. We could avoid this ugly
# slicing by using a two-dim dataset
X = iris.data[:, :2]
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['orange', 'cyan', 'cornflowerblue'])
cmap_bold = ListedColormap(['darkorange', 'c', 'darkblue'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,
edgecolor='k', s=20)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
# +
#Change features from sepal length and width to petal length and width. Build the classifier again and discuss the output.
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 15
# import some data to play with
iris = datasets.load_iris()
# use the petal length and petal width features instead
# (columns 2 and 3 of the iris data)
X = iris.data[:, 2:4]
y = iris.target
h = .02 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['orange', 'cyan', 'cornflowerblue'])
cmap_bold = ListedColormap(['darkorange', 'c', 'darkblue'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 2
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 2
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,
edgecolor='k', s=30)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
# +
#Consider all the features and build the classifier again and discuss the output.
print(__doc__)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets
n_neighbors = 15
# import some data to play with
iris = datasets.load_iris()
# the 2-D decision-boundary mesh below can only be drawn for two features,
# so we keep the first two here; training on all four features works the
# same way (clf.fit(iris.data, y)) but cannot be visualized with this plot
X = iris.data[:, :2]
y = iris.target
h = .05 # step size in the mesh
# Create color maps
cmap_light = ListedColormap(['orange', 'cyan', 'cornflowerblue'])
cmap_bold = ListedColormap(['darkgreen', 'black', 'red'])
for weights in ['uniform', 'distance']:
# we create an instance of Neighbours Classifier and fit the data.
clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
clf.fit(X, y)
# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 5
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 5
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold,
edgecolor='k', s=30)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
plt.title("3-Class classification (k = %i, weights = '%s')"
% (n_neighbors, weights))
plt.show()
# +
#Demonstrate the resolution of a regression problem using a k-Nearest Neighbor and the interpolation of the target
print(__doc__)
# Generate sample data
import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors
np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
T = np.linspace(0, 5, 500)[:, np.newaxis]
y = np.sin(X).ravel()
# Add noise to targets
y[::5] += 1 * (0.5 - np.random.rand(8))
# #############################################################################
# Fit regression model
n_neighbors = 5
for i, weights in enumerate(['uniform', 'distance']):
knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
y_ = knn.fit(X, y).predict(T)
plt.subplot(2, 1, i + 1)
plt.scatter(X, y, color='darkorange', label='data')
plt.plot(T, y_, color='navy', label='prediction')
plt.axis('tight')
plt.legend()
plt.title("KNeighborsRegressor (k = %i, weights = '%s')" % (n_neighbors,
weights))
plt.tight_layout()
plt.show()
| Experiment Num 6- 18SCSE1010358.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Day Agenda
# - Linear regression with multiple variables
#     - Linear equation
#     - Preprocessing data
#     - Evaluation metrics for LR
# - Non-linear regression
#     - Polynomial regression
#         - formula: y = ax^2 + bx + c + e
#     - Transforming normal features into polynomial features with a given degree
#     - Finding the evaluation metrics of the model
# # Simple linear regression
# - y = mx + c + e
# - y - dependent feature
# - x - independent feature
# - m - slope
# - c - intercept
# - e - random error
# ## Multiple Linear Regression
# - Relationship between one dependent variable (y) and multiple independent variables (x1, x2, ..., xn)
# $$ y = m_1 x_1 + m_2 x_2 + m_3 x_3 + \dots + m_n x_n + c + e $$
# - y - dependent variable (target)
# - x1, x2, ..., xn - independent features (inputs)
# - m1, ..., mn - slopes
# - c - intercept
# - e - random error -- evaluated with MSE, RMSE, MAE
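# The slopes and intercept can also be obtained in one shot from the normal
# equation, $w = (X^T X)^{-1} X^T y$, after appending a column of ones for the
# intercept (a sketch with made-up, noise-free data):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))                 # two independent features
y = 3.0 * X[:, 0] - 1.5 * X[:, 1] + 4.0       # made-up targets

Xb = np.column_stack([X, np.ones(len(X))])    # append intercept column
w = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)      # normal equation
slopes, intercept = w[:2], w[2]
```

This is what `LinearRegression.fit` computes (via a least-squares solver rather than an explicit inverse).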
# ### After defining the business use case
# ## 1. Load the dataset
#
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# csv - comma-separated values
df=pd.read_csv("https://raw.githubusercontent.com/AP-State-Skill-Development-Corporation/Datasets/master/Regression/FuelConsumptionCo2.csv")
df.head()
df.columns
df.info()
df.describe()
# ## 2. Preprocessing
# - clean the dataset
# - handle missing values
# - fill in the missing values
df.isnull().sum()
df['CO2EMISSIONS'] ## accessing a particular column
df['CO2EMISSIONS'].value_counts()# count of values in a column
# ## 3. Define the input and output
# - input - independent variables (everything except CO2 emissions)
# - output - target/dependent variable (CO2 emissions)
df.columns
import seaborn as sns
sns.pairplot(df)
df.columns
x=df[['FUELCONSUMPTION_CITY','FUELCONSUMPTION_HWY','FUELCONSUMPTION_COMB']]
x.head()
y=df['CO2EMISSIONS']
y
# ## 4. Separate or split the data into training and testing sets
# - training data > testing data
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=2)
x_train.shape
x_test.shape
y_test.shape
y_train.shape
# ## 5. Train the model
# +
from sklearn.linear_model import LinearRegression
l=LinearRegression()
# -
l.fit(x_train,y_train)
# ## 6. Test or evaluate the model
#
# ### Predict the CO2 emission from manual data
#
l.predict([[9.9,6.7,8.5]])
l.predict([[11.2,7.7,9.6]])
# y = m1*x1 + m2*x2 + m3*x3 + c (manual check of the prediction)
20.19153263*11.2 + 1.70148849*7.7 + (-9.1409838)*9.6 + 77.08819317963537
l.coef_
l.intercept_
y_pred=l.predict(x_test)
y_pred
y_pred.shape
from sklearn.metrics import r2_score,mean_squared_error
r2_score(y_test,y_pred)*100
mean_squared_error(y_test,y_pred)**0.5
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_test,y_pred)  # MAE is not squared, so no square root is needed
# ## Polynomial Regression
# - simple polynomial regression
# - y = ax^2 + bx + c + e
import pandas as pd
import matplotlib.pyplot as plt
df1=pd.read_csv("https://raw.githubusercontent.com/AP-State-Skill-Development-Corporation/Datasets/master/Regression/china_gdp.csv")
df1.head()
x=df1['Year']
y=df1["Value"]
plt.scatter(x,y,label="china gdp")
plt.xlabel("year")
plt.ylabel("value of gdp")
plt.legend()
plt.show()
from sklearn.preprocessing import PolynomialFeatures
p=PolynomialFeatures(degree=3)
x=x.values.reshape(-1,1)
x_poly=p.fit_transform(x) # ax^3+bx^2+cx^1+dx^0
x_poly
from sklearn.linear_model import LinearRegression
m=LinearRegression()
m.fit(x_poly,y)
m.predict(p.fit_transform([[2014]]))
df1
y1_pred=m.predict(x_poly)
r2_score(y,y1_pred)*100
mean_absolute_error(y,y1_pred)  # MAE is not squared, so no square root is needed
mean_squared_error(y,y1_pred)**0.5
from sklearn import metrics
dir(metrics)
plt.scatter(x,y,label="actual output")
plt.plot(x,y1_pred,c='r',label="Polynomial Regression")
plt.xlabel("year")
plt.ylabel("GDP")
plt.legend()
plt.show()
m.coef_
m.intercept_
| Day7-LR with Multiple variables/Linear Regression with Multiple variables.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="GsjD1hXjw7SK"
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/deeplearning.ai/nlp/c4_w1_01_stack_semantics.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# </td>
# <td>
# <a href="https://github.com/martin-fabbri/colab-notebooks/blob/master/deeplearning.ai/nlp/c4_w1_01_stack_semantics.ipynb" target="_parent"><img src="https://raw.githubusercontent.com/martin-fabbri/colab-notebooks/master/assets/github.svg" alt="View On Github"/></a> </td>
# </table>
# + id="177MXDM7y1Pl" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1613377124312, "user_tz": 480, "elapsed": 1326, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="268b4c9f-40e3-4165-a6d6-25e4329b6560"
# %env PYTHONPATH=
# + id="f5ZqfCHM0YKx" executionInfo={"status": "ok", "timestamp": 1613377166218, "user_tz": 480, "elapsed": 43213, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
# %%capture
# %%bash
MINICONDA_INSTALLER_SCRIPT=Miniconda3-latest-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
conda install --channel defaults conda python=3.8 --yes
conda update --channel defaults --all --yes
# + id="CVnXYQyl3FQN" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1613377166407, "user_tz": 480, "elapsed": 43395, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="b48203fa-3c92-463a-f5c6-d3f03a37ab06"
# !conda --version
# + id="bJDeBRl13LiF" colab={"base_uri": "https://localhost:8080/"} executionInfo={"status": "ok", "timestamp": 1613377166573, "user_tz": 480, "elapsed": 43552, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="77980422-0531-4588-a55b-c8825fe4f217"
# !python --version
# + id="IUTJ7mjjxE9x" executionInfo={"status": "ok", "timestamp": 1613377203855, "user_tz": 480, "elapsed": 80827, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
# %%capture
# %%bash
pip install streamlit
pip install st-annotated-text
# + id="uSfnX4zJ5yqX" executionInfo={"status": "ok", "timestamp": 1613377203858, "user_tz": 480, "elapsed": 80824, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
# import os
# os.kill(os.getpid(), 9)
# + id="7sXVBnbaixuY" executionInfo={"status": "ok", "timestamp": 1613377215071, "user_tz": 480, "elapsed": 92031, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}}
# %%capture
# %%bash
npm install -g npm
npm install localtunnel
# + colab={"base_uri": "https://localhost:8080/"} id="n9oLn_pFyDec" executionInfo={"status": "ok", "timestamp": 1613377404378, "user_tz": 480, "elapsed": 301, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="2a8ef1d7-e271-48e0-aec9-9b3f42fd82e4"
# %%writefile requirements.txt
requests==2.25.1
tensorflow==2.4.1
streamlit==0.76.0
google_api_python_client==1.12.8
protobuf==3.14.0
# + colab={"base_uri": "https://localhost:8080/"} id="CSOD332bwIma" executionInfo={"status": "ok", "timestamp": 1613377412566, "user_tz": 480, "elapsed": 3291, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="ecdf4d52-3659-46cf-bdff-5b2e29c734e4"
# #%%capture
# %%bash
pip install -r requirements.txt
# + colab={"base_uri": "https://localhost:8080/"} id="8lBhYtDHvNvK" executionInfo={"status": "ok", "timestamp": 1613382275212, "user_tz": 480, "elapsed": 305, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14GimIntWE1SvxJeoWOj0PwvUuB_-simILm2JI8het08=s64", "userId": "04899005791450850059"}} outputId="70172e96-f87c-407f-94ab-71c47f472a0b"
# %%writefile SessionState.py
"""Hack to add per-session state to Streamlit.
Usage
-----
>>> import SessionState
>>>
>>> session_state = SessionState.get(user_name='', favorite_color='black')
>>> session_state.user_name
''
>>> session_state.user_name = 'Mary'
>>> session_state.favorite_color
'black'
Since you set user_name above, next time your script runs this will be the
result:
>>> session_state = get(user_name='', favorite_color='black')
>>> session_state.user_name
'Mary'
"""
try:
import streamlit.ReportThread as ReportThread
from streamlit.server.Server import Server
except Exception:
# Streamlit >= 0.65.0
import streamlit.report_thread as ReportThread
from streamlit.server.server import Server
class SessionState(object):
def __init__(self, **kwargs):
"""A new SessionState object.
Parameters
----------
**kwargs : any
Default values for the session state.
Example
-------
>>> session_state = SessionState(user_name='', favorite_color='black')
>>> session_state.user_name = 'Mary'
>>> session_state.favorite_color
'black'
"""
for key, val in kwargs.items():
setattr(self, key, val)
def get(**kwargs):
"""Gets a SessionState object for the current session.
Creates a new object if necessary.
Parameters
----------
**kwargs : any
Default values you want to add to the session state, if we're creating a
new one.
Example
-------
>>> session_state = get(user_name='', favorite_color='black')
>>> session_state.user_name
''
>>> session_state.user_name = 'Mary'
>>> session_state.favorite_color
'black'
Since you set user_name above, next time your script runs this will be the
result:
>>> session_state = get(user_name='', favorite_color='black')
>>> session_state.user_name
'Mary'
"""
# Hack to get the session object from Streamlit.
ctx = ReportThread.get_report_ctx()
this_session = None
current_server = Server.get_current()
if hasattr(current_server, "_session_infos"):
# Streamlit < 0.56
session_infos = Server.get_current()._session_infos.values()
else:
session_infos = Server.get_current()._session_info_by_id.values()
for session_info in session_infos:
s = session_info.session
if (
# Streamlit < 0.54.0
(hasattr(s, "_main_dg") and s._main_dg == ctx.main_dg)
or
# Streamlit >= 0.54.0
(not hasattr(s, "_main_dg") and s.enqueue == ctx.enqueue)
or
# Streamlit >= 0.65.2
(
not hasattr(s, "_main_dg")
and s._uploaded_file_mgr == ctx.uploaded_file_mgr
)
):
this_session = s
if this_session is None:
raise RuntimeError(
"Oh noes. Couldn't get your Streamlit Session object. "
"Are you doing something fancy with threads?"
)
# Got the session object! Now let's attach some state into it.
if not hasattr(this_session, "_custom_session_state"):
this_session._custom_session_state = SessionState(**kwargs)
return this_session._custom_session_state
# +
# %%writefile utils.py
# Utils for preprocessing data etc
import tensorflow as tf
import googleapiclient.discovery
from google.api_core.client_options import ClientOptions
base_classes = [
"chicken_curry",
"chicken_wings",
"fried_rice",
"grilled_salmon",
"hamburger",
"ice_cream",
"pizza",
"ramen",
"steak",
"sushi",
]
classes_and_models = {
"model_1": {
"classes": base_classes,
"model_name": "vision_food_delete_me", # change to be your model name
},
"model_2": {
"classes": sorted(base_classes + ["donut"]),
"model_name": "efficientnet_model_2_11_classes",
},
"model_3": {
"classes": sorted(base_classes + ["donut", "not_food"]),
"model_name": "efficientnet_model_3_12_classes",
},
}
def predict_json(project, region, model, instances, version=None):
"""Send json data to a deployed model for prediction.
Args:
project (str): project where the Cloud ML Engine Model is deployed.
model (str): model name.
instances ([Mapping[str: Any]]): Keys should be the names of Tensors
your deployed model expects as inputs. Values should be datatypes
convertible to Tensors, or (potentially nested) lists of datatypes
convertible to Tensors.
version (str): version of the model to target.
Returns:
Mapping[str: any]: dictionary of prediction results defined by the
model.
"""
# Create the ML Engine service object
prefix = "{}-ml".format(region) if region else "ml"
api_endpoint = "https://{}.googleapis.com".format(prefix)
client_options = ClientOptions(api_endpoint=api_endpoint)
# Setup model path
model_path = "projects/{}/models/{}".format(project, model)
if version is not None:
model_path += "/versions/{}".format(version)
# Create ML engine resource endpoint and input data
ml_resource = googleapiclient.discovery.build(
"ml", "v1", cache_discovery=False, client_options=client_options
).projects()
instances_list = (
instances.numpy().tolist()
) # turn input into list (ML Engine wants JSON)
input_data_json = {
"signature_name": "serving_default",
"instances": instances_list,
}
request = ml_resource.predict(name=model_path, body=input_data_json)
response = request.execute()
# # ALT: Create model api
# model_api = api_endpoint + model_path + ":predict"
# headers = {"Authorization": "Bearer " + token}
# response = requests.post(model_api, json=input_data_json, headers=headers)
if "error" in response:
raise RuntimeError(response["error"])
return response["predictions"]
# Create a function to import an image and resize it to be able to be used with our model
def load_and_prep_image(filename, img_shape=224, rescale=False):
"""
Reads in an image from filename, turns it into a tensor and reshapes into
(224, 224, 3).
"""
# Decode it into a tensor
# img = tf.io.decode_image(filename) # no channels=3 means model will break for some PNG's (4 channels)
img = tf.io.decode_image(
filename, channels=3
) # make sure there's 3 colour channels (for PNG's)
# Resize the image
img = tf.image.resize(img, [img_shape, img_shape])
# Rescale the image (get all values between 0 and 1)
if rescale:
return img / 255.0
else:
return img
def update_logger(
image, model_used, pred_class, pred_conf, correct=False, user_label=None
):
"""
Builds and returns the logger dictionary for feedback given in the app.
"""
logger = {
"image": image,
"model_used": model_used,
"pred_class": pred_class,
"pred_conf": pred_conf,
"correct": correct,
"user_label": user_label,
}
return logger
# +
# %%writefile app.py
import os
import json
import requests
import SessionState
import streamlit as st
import tensorflow as tf
from utils import (
load_and_prep_image,
classes_and_models,
update_logger,
predict_json,
)
# Setup environment credentials (you'll need to change these)
os.environ[
"GOOGLE_APPLICATION_CREDENTIALS"
] = "/content/deeplearning-300314-28fd9a664766.json" # change for your GCP key
PROJECT = "deeplearning-300314" # change for your GCP project
# https://gcping.com/
# Oregon, USA
# us-west1
REGION = "us-west1"
### Streamlit code (works as a straightforward script) ###
st.title("Welcome to Food Vision 🍔📸")
st.header("Identify what's in your food photos!")
@st.cache # cache the function so predictions aren't always redone (Streamlit refreshes every click)
def make_prediction(image, model, class_names):
"""
Takes an image and uses model (a trained TensorFlow model) to make a
prediction.
Returns:
image (preprocessed)
pred_class (prediction class from class_names)
pred_conf (model confidence)
"""
image = load_and_prep_image(image)
# Turn tensors into int16 (saves a lot of space, ML Engine has a limit of 1.5MB per request)
image = tf.cast(tf.expand_dims(image, axis=0), tf.int16)
# image = tf.expand_dims(image, axis=0)
preds = predict_json(
project=PROJECT, region=REGION, model=model, instances=image
)
pred_class = class_names[tf.argmax(preds[0])]
pred_conf = tf.reduce_max(preds[0])
return image, pred_class, pred_conf
# Pick the model version
choose_model = st.sidebar.selectbox(
"Pick model you'd like to use",
(
"Model 1 (10 food classes)", # original 10 classes
"Model 2 (11 food classes)", # original 10 classes + donuts
"Model 3 (11 food classes + non-food class)",
), # 11 classes (same as above) + not_food class
)
# Model choice logic
if choose_model == "Model 1 (10 food classes)":
CLASSES = classes_and_models["model_1"]["classes"]
MODEL = classes_and_models["model_1"]["model_name"]
elif choose_model == "Model 2 (11 food classes)":
CLASSES = classes_and_models["model_2"]["classes"]
MODEL = classes_and_models["model_2"]["model_name"]
else:
CLASSES = classes_and_models["model_3"]["classes"]
MODEL = classes_and_models["model_3"]["model_name"]
# Display info about model and classes
if st.checkbox("Show classes"):
st.write(
f"You chose {MODEL}, these are the classes of food it can identify:\n",
CLASSES,
)
# File uploader allows user to add their own image
uploaded_file = st.file_uploader(
label="Upload an image of food", type=["png", "jpeg", "jpg"]
)
# Setup session state to remember state of app so refresh isn't always needed
# See: https://discuss.streamlit.io/t/the-button-inside-a-button-seems-to-reset-the-whole-app-why/1051/11
session_state = SessionState.get(pred_button=False)
# Create logic for app flow
if not uploaded_file:
st.warning("Please upload an image.")
st.stop()
else:
session_state.uploaded_image = uploaded_file.read()
st.image(session_state.uploaded_image, use_column_width=True)
pred_button = st.button("Predict")
# Did the user press the predict button?
if pred_button:
session_state.pred_button = True
# And if they did...
if session_state.pred_button:
(
session_state.image,
session_state.pred_class,
session_state.pred_conf,
) = make_prediction(
session_state.uploaded_image, model=MODEL, class_names=CLASSES
)
st.write(
f"Prediction: {session_state.pred_class}, \
Confidence: {session_state.pred_conf:.3f}"
)
# Create feedback mechanism (building a data flywheel)
session_state.feedback = st.selectbox(
"Is this correct?", ("Select an option", "Yes", "No")
)
if session_state.feedback == "Select an option":
pass
elif session_state.feedback == "Yes":
st.write("Thank you for your feedback!")
# Log prediction information to terminal (this could be stored in Big Query or something...)
print(
update_logger(
image=session_state.image,
model_used=MODEL,
pred_class=session_state.pred_class,
pred_conf=session_state.pred_conf,
correct=True,
)
)
elif session_state.feedback == "No":
session_state.correct_class = st.text_input(
"What should the correct label be?"
)
if session_state.correct_class:
st.write(
"Thank you for that, we'll use your help to make our model better!"
)
# Log prediction information to terminal (this could be stored in Big Query or something...)
print(
update_logger(
image=session_state.image,
model_used=MODEL,
pred_class=session_state.pred_class,
pred_conf=session_state.pred_conf,
correct=False,
user_label=session_state.correct_class,
)
)
# +
# !streamlit run app.py &>/dev/null&
# +
# !curl http://localhost:8501
# +
# !npx localtunnel --port 8501
# +
# %%writefile app.yaml
runtime: custom
env: flex
# +
# !gcloud auth login
# +
# !gcloud config set project deeplearning-300314
# +
# %%writefile Dockerfile
FROM python:3.7
## App engine stuff
# Expose port you want your app on
EXPOSE 8080
# Upgrade pip
RUN pip install -U pip
COPY requirements.txt app/requirements.txt
RUN pip install -r app/requirements.txt
# Create a new directory for app (keep it in its own directory)
COPY . /app
WORKDIR /app
# Run
ENTRYPOINT ["streamlit", "run", "app.py", "--server.port=8080", "--server.address=0.0.0.0"]
# +
# %%writefile .dockerignore
# *.json
*.jpg
*.jpeg
*.git
*.key
env
images
keynote-images
# Python stuff
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
# +
# !gcloud app deploy app.yaml
# +
# !gcloud app logs tail -s default
# repo_path: e2e-systems/streamlit-app-nlp.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import json
from requests import get

url = 'http://musicbrainz.org/ws/2/work/5c0858a3-f0a3-43c9-8968-19b8d044c886?inc=artist-rels+recording-rels&fmt=json'
response = get(url)
work = response.json() # parse the JSON body straight into a dict
with open('obra_json.json', 'w', encoding='utf-8') as f:
    json.dump(work, f, ensure_ascii=False) # write real JSON, not a Python repr
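# A note on the round-trip used in this notebook: calling `json.dumps` on a string that already
# contains JSON only quotes it again, and `json.loads` then returns the original string rather than
# a parsed dict. A minimal stdlib illustration (sample data is hypothetical):

```python
import json

raw = '{"title": "Sonata", "id": 5}'  # a string that already contains JSON

# Pitfall: dumps() re-encodes the string as a JSON string literal,
# so loads() gives the original string back, not a dict.
round_tripped = json.loads(json.dumps(raw))
print(type(round_tripped).__name__)  # str

# Correct: parse the JSON text directly.
parsed = json.loads(raw)
print(type(parsed).__name__, parsed["title"])  # dict Sonata
```

With a `requests` response, `response.json()` does this parse in one step.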
# repo_path: musicbrainz/lab/get_obra_json.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv
# language: python
# name: venv
# ---
# # Sentiment Analysis - SQL Achemy
# ## A. Load Libraries
#
import pandas as pd
import numpy as np
import csv
from sqlalchemy import Column, String, Integer, ForeignKey, DateTime, func
from sqlalchemy.orm import relationship, backref
from sqlalchemy.ext.declarative import declarative_base
# ## B. Load 4 csv files
df_Conversation = pd.read_csv(r"C:\Programming\CustomerIntention\src\data\Conversation.csv", encoding='utf-8')  # raw strings avoid invalid backslash escapes in Windows paths
df_Conversation.head()
df_Conversation_Information = pd.read_csv(r"C:\Programming\CustomerIntention\src\data\Conversation_Information.csv", encoding='utf-8')
df_Conversation_Information.head()
df_Customer = pd.read_csv(r"C:\Programming\CustomerIntention\src\data\Customer.csv", encoding='utf-8')
df_Customer.head()
df_Fan_Page = pd.read_csv(r"C:\Programming\CustomerIntention\src\data\Fan_Page.csv", encoding='utf-8')
df_Fan_Page.head()
df_Conversation_Intention = pd.read_csv(r"C:\Programming\CustomerIntention\src\data\Conversation_Intention.csv", encoding='utf-8')
df_Conversation_Intention.head()
df_Conversation_Entities = pd.read_csv(r"C:\Programming\CustomerIntention\src\data\Conversation_Entities.csv", encoding='utf-8')
df_Conversation_Entities.head()
# ## Design tables with SQL Alchemy (demo only)
# ### Import SQL Alchemy
# import sqlalchemy
# sqlalchemy.__version__
# ## C. Create an engine to access the localhost created in the Command Prompt run as administrator
from sqlalchemy import create_engine
engine = create_engine('mysql+mysqldb://phuongdaingo:0505@localhost:3306/customerintention?charset=utf8mb4', echo=True)
# ## D. Design 6 tables 'Conversation', 'Conversation_Information', 'Customer', 'Fan_Page', 'Conversation_Intention', 'Conversation_Entities'
# For reference of other methods of DataTime only:
# https://stackoverflow.com/questions/13370317/sqlalchemy-default-datetime
# Now execute this cell to map the ORM classes onto the tables already created in TablePlus.
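# The server-side default discussed in the DateTime reference above can be illustrated at the SQL
# level with stdlib sqlite3 (used here only as an in-memory stand-in for the MySQL instance; the
# table name is illustrative). `DEFAULT CURRENT_TIMESTAMP` plays the role of SQLAlchemy's
# `server_default=func.now()`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE conversation_information (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        start_time TEXT DEFAULT CURRENT_TIMESTAMP  -- server-side default, like func.now()
    )"""
)
# Insert without supplying start_time; the database fills it in.
conn.execute("INSERT INTO conversation_information DEFAULT VALUES")
row = conn.execute("SELECT start_time FROM conversation_information").fetchone()
print(row[0])  # e.g. '2021-02-15 10:30:00'
conn.close()
```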
# +
from sqlalchemy import Column, String, Integer, ForeignKey, DateTime, func, Boolean, MetaData, Table, Float
from sqlalchemy.dialects.mysql import TINYINT
from sqlalchemy.orm import relationship, backref
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class Conversation(Base):
__tablename__ = 'conversation' # the table already exists in the database; declarative mapping only needs its name
# The database (TablePlus) enforces the auto-increment PK; primary_key=True below is only for the ORM mapping.
# If Python were used to create the tables, an ID PK would be required, so 'primary_key=True' stays included.
id = Column(Integer, primary_key=True)
conversation_id = Column(Integer)
message = Column(String())
order = Column(Integer) # must have data type, Integer doesn't need to have Integer(8)
sender = Column(Integer)
fan_page_id = Column(Integer)
cus_id = Column(Integer)
class Conversation_Information(Base):
__tablename__ = 'conversation_information'
id = Column(Integer, primary_key=True)
conversation_id = Column(Integer)
customer_count = Column(Integer)
sales_count = Column(Integer)
start_time = Column(
DateTime(timezone=True)) # https://stackoverflow.com/questions/13370317/sqlalchemy-default-datetime
end_time = Column(
DateTime(timezone=True))
class Customer(Base):
__tablename__ = 'customer'
id = Column(Integer, primary_key=True)
cus_name = Column(String)
cus_id = Column(Integer) # must not fix structure of the database here
class Fan_Page(Base):
__tablename__ = 'fan_page'
id = Column(Integer, primary_key=True)
fan_page_name = Column(String)
fan_page_id = Column(Integer)
class Conversation_Intention(Base):
__tablename__ = 'conversation_intention'
id = Column(Integer, primary_key=True)
conversation_id = Column(Integer)
intention_label = Column(String)
intention_score = Column(Float)
class Conversation_Entities(Base):
__tablename__ = 'conversation_entities'
id = Column(Integer, primary_key=True)
conversation_id = Column(Integer)
conversation_entity = Column(String)
conversation_entity_score = Column(Float)
conversation_entity_word = Column(String)
# Mapping classes with tables in TablePlus's databases
# Should not create tables by Python but TablePlus
from sqlalchemy.orm import sessionmaker
#Session = sessionmaker()
#Session.configure(bind=engine)
Session = sessionmaker(bind=engine) # writing queries requires session before executing queries
session = Session() # object
#Base.metadata.create_all(engine)
# -
# ### Print the current row of each table
conversation_results = session.query(Conversation)
conversation_results #list
for conversation in conversation_results: # each item of the object
print(conversation.message)
conversation_information_results = session.query(Conversation_Information)
conversation_information_results #list
for conversation_information in conversation_information_results:
print(conversation_information.customer_count)
customer_results = session.query(Customer)
customer_results #list
for customer in customer_results:
print(customer.cus_name)
fan_page_results = session.query(Fan_Page)
fan_page_results #list
for fan_page in fan_page_results:
print(fan_page.fan_page_name)
# ### Add new row(s) to the database
# https://docs.sqlalchemy.org/en/14/tutorial/data_insert.html
# ID is set as Primary Key which is auto-increment in TablePlus so I will not try to add a new row having ID in this code cell.
try:
newConversation = Conversation(
message = 'Could I ask you something?',
order = '0',
sender = '0'
)
session.add(newConversation)
newConversationInformation = Conversation_Information(
conversation_id = '1',
customer_count = '1',
sales_count = '0'
)
session.add(newConversationInformation)
newCustomer = Customer(
cus_name = 'Frank',
cus_id = '3'
)
session.add(newCustomer)
newFanPage = Fan_Page(
fan_page_name = 'DEF',
fan_page_id = '2'
)
session.add(newFanPage)
session.commit() # commit once only
except Exception: # roll back the whole transaction if any insert fails
session.rollback()
try:
newConversationInformation = Conversation_Information(
conversation_id = '2',
customer_count = '2',
sales_count = '2'
)
session.add(newConversationInformation)
session.commit() # commit once only
except Exception: # roll back the whole transaction if any insert fails
session.rollback()
df_Conversation.shape
# ### Print them out again to see the differences
customer_results = session.query(Customer)
customer_results #list
for conversation in conversation_results:
print(conversation.message)
conversation_information_results = session.query(Conversation_Information)
conversation_information_results #list
for conversation_information in conversation_information_results:
print(conversation_information.customer_count)
customer_results = session.query(Customer)
customer_results #list
for customer in customer_results:
print(customer.cus_name)
fan_page_results = session.query(Fan_Page)
fan_page_results #list
for fan_page in fan_page_results:
print(fan_page.fan_page_name)
# ## E. Insert all rows of each dataframe to database's tables in TablePlus
# New Method: inserting directly from data frames
#
# Slow inserts are the flip side of indexing: adding an index on one or more columns (depending on the access pattern) speeds up later selects and filters at the cost of insert time, so the choice depends on whether the data will mostly be written to the relational database or read back.
#
# We insert the dataframes into the session in batches and then commit to finalize the save. If flush() sits outside the 'for loop' and an error occurs, the failing batch stays stuck in the session and the remaining batches never get in. Placing flush() inside the 'for loop' pushes each batch into the session regardless of any errors elsewhere, but then rollback() must be called in the except branch to clear any failed batch from the session.
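# The batch/commit/rollback pattern described above can be sketched with stdlib sqlite3 (standing
# in for the MySQL session; table, batch size, and the injected bad row are illustrative): rows
# accumulate in the open transaction, a commit every N rows finalizes that batch, and a rollback
# on error discards only the failed batch so later ones still go in.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversation (id INTEGER PRIMARY KEY, message TEXT NOT NULL)")

rows = [(i, f"message {i}") for i in range(30)]
rows[13] = (13, None)  # one bad row: violates NOT NULL

BATCH = 10
for start in range(0, len(rows), BATCH):
    batch = rows[start:start + BATCH]
    try:
        conn.executemany("INSERT INTO conversation VALUES (?, ?)", batch)
        conn.commit()  # finalize this batch only
    except sqlite3.IntegrityError:
        conn.rollback()  # discard the failed batch; later batches are unaffected

count = conn.execute("SELECT COUNT(*) FROM conversation").fetchone()[0]
print(count)  # 20: two good batches of 10; the batch with the bad row was rolled back
conn.close()
```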
# ### Insert 'df_Conversation' dataframe into 'conversation' database
# +
import time
#import mysql.connector # as below mysql, not sqlite3 for this case
import traceback
from tqdm import tqdm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine # use sqlalchemy with connection string for mysql
from sqlalchemy.orm import scoped_session, sessionmaker
import unicodedata
Base = declarative_base()
DBSession = scoped_session(sessionmaker()) # the scoped_session() function is provided which produces a thread-managed registry of Session objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread.
engine = None
def init_sqlalchemy(dbname='mysql+mysqldb://phuongdaingo:0505@localhost:3306/customerintention?charset=utf8mb4'):
global engine
engine = create_engine(dbname, echo=False)
DBSession.remove()
DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
def conversation_sqlalchemy_orm(n=100000):
init_sqlalchemy()
t0 = time.time()
error_i_list = [] # a new list containing i(s) of batch(es) causing errors
# The dataframe index must line up with the table's ID column. Instead of looping with iterrows (time consuming), iterate over range(n) and pull each row positionally with .iloc.
for i in tqdm(range(n)): # use tqdm to track progress
try: # build a record, then add it to the session
conversation = Conversation()
conversation.order = df_Conversation['Order'].iloc[i]
conversation.sender = df_Conversation['Sender'].iloc[i]
conversation.message = unicodedata.normalize('NFC', str(df_Conversation['Message'].iloc[i]).encode("utf-8").decode("utf-8"))
conversation.fan_page_id = int(df_Conversation['Fanpage'].iloc[i]) # recreate the DB
conversation.cus_id = df_Conversation['PSID'].iloc[i]
DBSession.add(conversation) # error might happen here or below
DBSession.commit()
if i % 1000 == 0: # every 1,000 rows, flush the accumulated batch into the session
DBSession.flush() # errors can surface here, hence the per-iteration try/except
DBSession.commit() # 2nd attempt: commit per batch so one failing batch does not block the others
#text = unicodedata.normalize('NFC', text) # text: string type to fix error and replace all string texts into being wrapped by unicode
except Exception as er:
print('Error at index {}: '.format(i))
print(traceback.format_exc()) # print error(s)
print('-' * 20)
DBSession.rollback() # discard the failed batch so incoming batches are not blocked behind it
error_i_list.append(i) # append into array the index of batch(es) causing errors
# DBSession.commit() # 1st attempt: place commit() here, outside of 'for loop' # faster but will stop other batches coming in if errors happen
print(
"Conversation's SQLAlchemy ORM: Total time for " + str(n) +
" records " + str(time.time() - t0) + " secs")
# A new function to select rows from conversations with a condition filtering by cus_id, joining with table 'customer' to return the cus_name
#def join_tables():
if __name__ == '__main__':
conversation_sqlalchemy_orm(df_Conversation.shape[0]) # insert as many rows as the dataframe has
# -
# ### Insert 'df_Conversation_Information' dataframe into 'conversation_information' database
#
# An error occurs at index 260000 ("Duplicate entry '250001' for key 'IDX_conversation_id'"), so that unique index on Conversation_ID has to be removed by creating a new Conversation table in MySQL.
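# The duplicate-entry failure can be reproduced in miniature with stdlib sqlite3 (a stand-in for
# MySQL; table and index names mirror the ones in this notebook but are illustrative): a UNIQUE
# index on conversation_id rejects a repeated value, which surfaces as an IntegrityError to catch
# and roll back.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conversation_information (id INTEGER PRIMARY KEY, conversation_id INTEGER)")
conn.execute("CREATE UNIQUE INDEX IDX_conversation_id ON conversation_information (conversation_id)")

conn.execute("INSERT INTO conversation_information VALUES (1, 250001)")
conn.commit()
try:
    conn.execute("INSERT INTO conversation_information VALUES (2, 250001)")  # duplicate entry
    conn.commit()
except sqlite3.IntegrityError as err:
    conn.rollback()  # only the first row survives
    print(err)  # UNIQUE constraint failed: conversation_information.conversation_id
conn.close()
```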
# +
import time
#import mysql.connector # as below mysql, not sqlite3 for this case
import traceback
from tqdm import tqdm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine # use sqlalchemy with connection string for mysql
from sqlalchemy.orm import scoped_session, sessionmaker
Base = declarative_base()
DBSession = scoped_session(sessionmaker()) # thread-managed Session registry (see the note in the first insert cell)
engine = None
def init_sqlalchemy(dbname='mysql+mysqldb://phuongdaingo:0505@localhost:3306/customerintention?charset=utf8mb4'):
global engine
engine = create_engine(dbname, echo=False)
DBSession.remove()
DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
def conversation_information_sqlalchemy_orm(n=100000):
init_sqlalchemy()
t0 = time.time()
error_i_list = [] # a new list containing i(s) of batch(es) causing errors
# The dataframe index must line up with the table's ID column. Instead of looping with iterrows (time consuming), iterate over range(n) and pull each row positionally with .iloc.
for i in tqdm(range(n)): # use tqdm to track progress
try: # build a record, then add it to the session
conversation_information = Conversation_Information()
conversation_information.conversation_id = df_Conversation_Information['Conversation_ID'].iloc[i]
conversation_information.customer_count = df_Conversation_Information['CustomerCount'].iloc[i]
conversation_information.sales_count = df_Conversation_Information['SalesCount'].iloc[i]
conversation_information.start_time = df_Conversation_Information['StartTime'].iloc[i] # google, insert 1 row only for trial
conversation_information.end_time = df_Conversation_Information['EndTime'].iloc[i]
DBSession.add(conversation_information) # error might happen here or below
if i % 10000 == 0: # every 10,000 rows, flush the accumulated batch into the session
DBSession.flush() # errors can surface here, hence the per-iteration try/except
DBSession.commit() # 2nd attempt: commit per batch so one failing batch does not block the others
except Exception as er:
print('Error at index {}: '.format(i))
print(traceback.format_exc()) # print error(s)
print('-' * 20)
DBSession.rollback() # discard the failed batch so incoming batches are not blocked behind it
error_i_list.append(i) # append into array the index of batch(es) causing errors
# DBSession.commit() # 1st attempt: place commit() here, outside of 'for loop' # faster but will stop other batches coming in if errors happen
print(
"Conversation_Information's SQLAlchemy ORM: Total time for " + str(n) +
" records " + str(time.time() - t0) + " secs")
# A new function to select rows from conversations with a condition filtering by cus_id, joining with table 'customer' to return the cus_name
#def join_tables():
if __name__ == '__main__':
conversation_information_sqlalchemy_orm(df_Conversation_Information.shape[0]) # number of rows of df as I want --> customized function name
# -
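# If the per-row `add()` loop above becomes a bottleneck, the same batch can be sent as a single Core `executemany`. A minimal sketch, not this notebook's actual schema: the `Demo` table and the in-memory SQLite engine are illustrative stand-ins for the MySQL tables used here.

```python
from sqlalchemy import Column, Integer, String, create_engine, insert
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Demo(Base):
    # hypothetical table, standing in for conversation_information
    __tablename__ = "demo"
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

engine = create_engine("sqlite://")  # in-memory engine, just for the demo
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

rows = [{"name": "a"}, {"name": "b"}, {"name": "c"}]
session.execute(insert(Demo.__table__), rows)  # one executemany round trip
session.commit()

print(session.query(Demo).count())  # 3
```

# On MySQL the same pattern applies unchanged; only the `create_engine()` URL differs.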
# ### Insert the 'df_Fan_Page' dataframe into the 'fan_page' table
# +
import time
#import mysql.connector # as below mysql, not sqlite3 for this case
import traceback
from tqdm import tqdm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine # use sqlalchemy with connection string for mysql
from sqlalchemy.orm import scoped_session, sessionmaker
import unicodedata
Base = declarative_base()
DBSession = scoped_session(sessionmaker()) # the scoped_session() function is provided which produces a thread-managed registry of Session objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread.
engine = None
def init_sqlalchemy(dbname='mysql+mysqldb://phuongdaingo:0505@localhost:3306/customerintention?charset=utf8mb4'):
global engine
engine = create_engine(dbname, echo=False)
DBSession.remove()
DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
def fan_page_sqlalchemy_orm(n=100000):
init_sqlalchemy()
t0 = time.time()
error_i_list = [] # a new list containing i(s) of batch(es) causing errors
    # The dataframe index must match the table's ID column; use range() with .iloc instead of the much slower iterrows()
    for i in tqdm(range(n)):  # tqdm shows insert progress
        try:  # build the ORM object, then add it to the session
fan_page = Fan_Page()
fan_page.fan_page_name = unicodedata.normalize('NFC', str(df_Fan_Page['FanpageName'].iloc[i]).encode("utf-8").decode("utf-8"))
fan_page.fan_page_id = df_Fan_Page['Fanpage'].iloc[i]
#fan_page.start_time = df_Fan_Page['StartTime'].iloc[i] # google, insert 1 row only for trial
#fan_page.end_time = df_Fan_Page['EndTime'].iloc[i]
            DBSession.add(fan_page)
            if i % 10000 == 0:  # every 10,000 rows, flush the pending batch to the database
                DBSession.flush()  # constraint/encoding errors can surface here, inside the try block
                DBSession.commit()  # commit per batch so one failing batch does not block the others
        except Exception:
            print('Error at index {}:'.format(i))
            print(traceback.format_exc())
            print('-' * 20)
            DBSession.rollback()  # discard the failed batch so later batches can proceed
            error_i_list.append(i)  # record the index of the failing batch
    # A single commit after the loop would be faster, but one error would lose all uncommitted rows
print(
"Fan_Page's SQLAlchemy ORM: Total time for " + str(n) +
" records " + str(time.time() - t0) + " secs")
# A new function to select rows from conversations with a condition filtering by cus_id, joining with table 'customer' to return the cus_name
#def join_tables():
if __name__ == '__main__':
fan_page_sqlalchemy_orm(df_Fan_Page.shape[0]) # number of rows of df as I want --> customized function name
# -
# ### Insert the 'df_Customer' dataframe into the 'customer' table
# +
import time
#import mysql.connector # as below mysql, not sqlite3 for this case
import traceback
from tqdm import tqdm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine # use sqlalchemy with connection string for mysql
from sqlalchemy.orm import scoped_session, sessionmaker
import unicodedata
Base = declarative_base()
DBSession = scoped_session(sessionmaker()) # the scoped_session() function is provided which produces a thread-managed registry of Session objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread.
engine = None
def init_sqlalchemy(dbname='mysql+mysqldb://phuongdaingo:0505@localhost:3306/customerintention?charset=utf8mb4'):
global engine
engine = create_engine(dbname, echo=False)
DBSession.remove()
DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
def customer_sqlalchemy_orm(n=100000):
init_sqlalchemy()
t0 = time.time()
error_i_list = [] # a new list containing i(s) of batch(es) causing errors
    # The dataframe index must match the table's ID column; use range() with .iloc instead of the much slower iterrows()
    for i in tqdm(range(n)):  # tqdm shows insert progress
        try:  # build the ORM object, then add it to the session
customer = Customer()
            customer.cus_name = unicodedata.normalize('NFC', str(df_Customer['CusName'].iloc[i]).encode("utf-8").decode("utf-8"))  # NFC-normalize names from the dataframe so accented characters are stored consistently
            customer.cus_id = df_Customer['PSID'].iloc[i]
            DBSession.add(customer)
            if i % 10000 == 0:  # every 10,000 rows, flush the pending batch to the database
                DBSession.flush()  # constraint/encoding errors can surface here, inside the try block
                DBSession.commit()  # commit per batch so one failing batch does not block the others
        except Exception:
            print('Error at index {}:'.format(i))
            print(traceback.format_exc())
            print('-' * 20)
            DBSession.rollback()  # discard the failed batch so later batches can proceed
            error_i_list.append(i)  # record the index of the failing batch
    # A single commit after the loop would be faster, but one error would lose all uncommitted rows
print(
"Customer's SQLAlchemy ORM: Total time for " + str(n) +
" records " + str(time.time() - t0) + " secs")
# A new function to select rows from conversations with a condition filtering by cus_id, joining with table 'customer' to return the cus_name
#def join_tables():
if __name__ == '__main__':
    customer_sqlalchemy_orm(df_Customer.shape[0])  # insert every row of the dataframe
# -
# ### Insert the 'Conversation_Intention' dataframe into the 'conversation_intention' table
# +
import time
#import mysql.connector # as below mysql, not sqlite3 for this case
import traceback
from tqdm import tqdm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine # use sqlalchemy with connection string for mysql
from sqlalchemy.orm import scoped_session, sessionmaker
import unicodedata
Base = declarative_base()
DBSession = scoped_session(sessionmaker()) # the scoped_session() function is provided which produces a thread-managed registry of Session objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread.
engine = None
def init_sqlalchemy(dbname='mysql+mysqldb://phuongdaingo:0505@localhost:3306/customerintention?charset=utf8mb4'):
global engine
engine = create_engine(dbname, echo=False)
DBSession.remove()
DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
def conversation_intention_sqlalchemy_orm(n=100000):
init_sqlalchemy()
t0 = time.time()
error_i_list = [] # a new list containing i(s) of batch(es) causing errors
    # The dataframe index must match the table's ID column; use range() with .iloc instead of the much slower iterrows()
    for i in tqdm(range(n)):  # tqdm shows insert progress
        try:  # build the ORM object, then add it to the session
conversation_intention = Conversation_Intention()
conversation_intention.reference_id = df_Conversation_Intention['Conversation_ID'].iloc[i]
conversation_intention.intention_label = unicodedata.normalize('NFC', str(df_Conversation_Intention['Intention_Label'].iloc[i]).encode("utf-8").decode("utf-8"))
conversation_intention.intention_score = df_Conversation_Intention['Fanpage'].iloc[i]
            DBSession.add(conversation_intention)
            if i % 10000 == 0:  # every 10,000 rows, flush the pending batch to the database
                DBSession.flush()  # constraint/encoding errors can surface here, inside the try block
                DBSession.commit()  # commit per batch so one failing batch does not block the others
        except Exception:
            print('Error at index {}:'.format(i))
            print(traceback.format_exc())
            print('-' * 20)
            DBSession.rollback()  # discard the failed batch so later batches can proceed
            error_i_list.append(i)  # record the index of the failing batch
    # A single commit after the loop would be faster, but one error would lose all uncommitted rows
print(
"Conversation_Intention's SQLAlchemy ORM: Total time for " + str(n) +
" records " + str(time.time() - t0) + " secs")
# A new function to select rows from conversations with a condition filtering by cus_id, joining with table 'customer' to return the cus_name
#def join_tables():
if __name__ == '__main__':
conversation_intention_sqlalchemy_orm(df_Conversation_Intention.shape[0]) # number of rows of df as I want --> customized function name
# -
# ### Insert the 'Conversation_Entities' dataframe into the 'conversation_entities' table
# +
import time
#import mysql.connector # as below mysql, not sqlite3 for this case
import traceback
from tqdm import tqdm
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy import Column, Integer, String, create_engine # use sqlalchemy with connection string for mysql
from sqlalchemy.orm import scoped_session, sessionmaker
import unicodedata
Base = declarative_base()
DBSession = scoped_session(sessionmaker()) # the scoped_session() function is provided which produces a thread-managed registry of Session objects. It is commonly used in web applications so that a single global variable can be used to safely represent transactional sessions with sets of objects, localized to a single thread.
engine = None
def init_sqlalchemy(dbname='mysql+mysqldb://phuongdaingo:0505@localhost:3306/customerintention?charset=utf8mb4'):
global engine
engine = create_engine(dbname, echo=False)
DBSession.remove()
DBSession.configure(bind=engine, autoflush=False, expire_on_commit=False)
Base.metadata.drop_all(engine)
Base.metadata.create_all(engine)
def conversation_entities_sqlalchemy_orm(n=100000):
init_sqlalchemy()
t0 = time.time()
error_i_list = [] # a new list containing i(s) of batch(es) causing errors
    # The dataframe index must match the table's ID column; use range() with .iloc instead of the much slower iterrows()
    for i in tqdm(range(n)):  # tqdm shows insert progress
        try:  # build the ORM object, then add it to the session
conversation_entities = Conversation_Entities()
conversation_entities.reference_id = df_Conversation_Entities['Conversation_ID'].iloc[i]
conversation_entities.conversation_entity = unicodedata.normalize('NFC', str(df_Conversation_Entities['Conversation_Entity'].iloc[i]).encode("utf-8").decode("utf-8"))
conversation_entities.conversation_entity_score = df_Conversation_Entities['Conversation_Entity_Score'].iloc[i]
conversation_entities.conversation_entity_word = unicodedata.normalize('NFC', str(df_Conversation_Entities['Conversation_Entity_word'].iloc[i]).encode("utf-8").decode("utf-8"))
            DBSession.add(conversation_entities)
            if i % 10000 == 0:  # every 10,000 rows, flush the pending batch to the database
                DBSession.flush()  # constraint/encoding errors can surface here, inside the try block
                DBSession.commit()  # commit per batch so one failing batch does not block the others
        except Exception:
            print('Error at index {}:'.format(i))
            print(traceback.format_exc())
            print('-' * 20)
            DBSession.rollback()  # discard the failed batch so later batches can proceed
            error_i_list.append(i)  # record the index of the failing batch
    # A single commit after the loop would be faster, but one error would lose all uncommitted rows
print(
"Conversation_Entities's SQLAlchemy ORM: Total time for " + str(n) +
" records " + str(time.time() - t0) + " secs")
# A new function to select rows from conversations with a condition filtering by cus_id, joining with table 'customer' to return the cus_name
#def join_tables():
if __name__ == '__main__':
conversation_entities_sqlalchemy_orm(df_Conversation_Entities.shape[0]) # number of rows of df as I want --> customized function name
# -
# ## F. Select, Filter - Using query on session with joining method
# https://www.tutorialspoint.com/sqlalchemy/sqlalchemy_orm_working_with_joins.htm
#
# https://docs.sqlalchemy.org/en/14/orm/query.html
# ### Filter the Conversation
stmt = session.query(Conversation).filter(Conversation.order == 0).all() #first(): get the first or all(): get all
for val in stmt:
print(val.message)
print(val.order)
print(val.sender)
stmt_Conversation = session.query(Conversation).join(Fan_Page).filter(Conversation.order == 0).all()  # join() targets the entity, relying on the foreign key between the tables
for val in stmt_Conversation:
print(val.message)
print(val.order)
print(val.sender)
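# Note that `Query.join()` expects an entity or a configured relationship as its target, not a bare column like `Fan_Page.fan_page_id`. A self-contained sketch of a join with an explicit ON clause on a toy schema (the `Page`/`Msg` models and the in-memory SQLite engine are illustrative, not this notebook's tables):

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

Base = declarative_base()

class Page(Base):  # stands in for Fan_Page
    __tablename__ = "page"
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

class Msg(Base):  # stands in for Conversation
    __tablename__ = "msg"
    id = Column(Integer, primary_key=True)
    page_id = Column(Integer, ForeignKey("page.id"))
    text = Column(String(64))

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

session.add(Page(id=1, name="Hân Beauty"))
session.add(Msg(id=1, page_id=1, text="hello"))
session.commit()

# join with an explicit ON clause; select both entities to read columns of each
pairs = (session.query(Msg, Page)
         .join(Page, Msg.page_id == Page.id)
         .filter(Page.name == "Hân Beauty")
         .all())
for msg, page in pairs:
    print(msg.text, page.name)
```

# Selecting both entities is what makes attributes of each side (e.g. the message text and the page name) available on one result row.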
# ### Filter the Conversation_Information
stmt = session.query(Conversation_Information).filter(Conversation_Information.conversation_id == 0).first()  # .first() returns a single row or None
if stmt is not None:
    print(stmt.conversation_id)
    print(stmt.customer_count)
    print(stmt.sales_count)
# Reference: https://stackoverflow.com/questions/51451768/sqlalchemy-querying-with-datetime-columns-to-filter-by-month-day-year
stmt = session.query(Conversation_Information).filter(Conversation_Information.start_time == '2019-11-03').all()  # .all() returns a list, so iterate over it
for val in stmt:
    print(val.conversation_id)
    print(val.customer_count)
    print(val.sales_count)
stmt_Conversation_Info = session.query(Conversation_Information).join(Conversation).filter(Conversation_Information.conversation_id == 0).limit(5).all()  # filter() must come before limit(); join() targets the entity via its foreign key
for val in stmt_Conversation_Info:
print(val.conversation_id)
print(val.customer_count)
print(val.sales_count)
# ### Filter the Customer
stmt = session.query(Customer).filter(Customer.cus_name == 'Simon').first() #first(): get the first or all(): get all
print(stmt.cus_id)
print(stmt.cus_name)
print(stmt.id)
def filter_Customer_name(name):
    stmt = session.query(Customer).filter(Customer.cus_name == name).first()  # single matching row or None
    if stmt is not None:
        print(stmt.cus_id)
        print(stmt.cus_name)
        print(stmt.id)
filter_Customer_name('Tòng thị tươi thuý')
stmt_Customer = session.query(Customer).join(Conversation).filter(Customer.cus_name == 'Simon').first()  # join() targets the entity via its foreign key; .first() returns one row or None
if stmt_Customer is not None:
    print(stmt_Customer.cus_id)
    print(stmt_Customer.cus_name)  # .message lives on Conversation, not Customer; select both entities to read it as well
# ### Filter the Fan_Page
stmt = session.query(Fan_Page).filter(Fan_Page.fan_page_name == 'Simon').all() #first(): get the first or all(): get all
for val in stmt:
print(val.fan_page_name)
print(val.fan_page_id)
def filter_Fan_Page_name(name):
    stmt = session.query(Fan_Page).filter(Fan_Page.fan_page_name == name).all()
    for val in stmt:
        print(val.fan_page_name)
        print(val.fan_page_id)
filter_Fan_Page_name('Hân Beauty')
stmt_Fan_Page = session.query(Fan_Page).filter(Fan_Page.fan_page_name == 'Simon').all()
for val in stmt_Fan_Page:
    print(val.fan_page_name)
    print(val.fan_page_id)  # Fan_Page has no .message; join Conversation and select both entities to get it
# ## G. Select, Filter with data directly created in the MySQL Relational Database
# ### Filter by query Conversation
stmt = session.query(Conversation).filter(Conversation.order == 0).all() #first(): get the first or all(): get all
for val in stmt:
print(val.message)
print(val.order)
print(val.sender)
# ### Filter by query Conversation_Information
stmt = session.query(Conversation_Information).filter(Conversation_Information.conversation_id == 0).first()  # .first() returns a single row or None
if stmt is not None:
    print(stmt.conversation_id)
    print(stmt.customer_count)
    print(stmt.sales_count)
# ### Filter by query Customer
stmt = session.query(Customer).filter(Customer.cus_name == 'Frank').first() #first(): get the first or all(): get all
print(stmt.cus_id)
print(stmt.cus_name)
print(stmt.id)
stmt = session.query(Customer).filter(Customer.cus_id % 2 == 0).all()
for val in stmt:
print(val.cus_name)
# ### Filter by query Fan_Page
stmt = session.query(Fan_Page).filter(Fan_Page.fan_page_name == 'ABC').all() #first(): get the first or all(): get all
for val in stmt:
print(val.fan_page_name)
print(val.fan_page_id)
| Notebooks/SQL Alchemy - Map & Filter Tables with MySQL Database.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# inspired from: https://www.kaggle.com/aitude/ashrae-kfold-lightgbm-without-leak-1-08
# ## Import Packages
# +
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import lightgbm as lgb
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import KFold
import datetime
import gc
DATA_PATH = "./"
# -
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# ## Load Data
# ## Utility Functions
# +
# https://www.kaggle.com/nz0722/aligned-timestamp-lgbm-by-meter-type?scriptVersionId=22831732
weather_dtypes = {
'site_id': np.uint8,
'air_temperature': np.float32,
'cloud_coverage': np.float32,
'dew_temperature': np.float32,
'precip_depth_1_hr': np.float32,
'sea_level_pressure': np.float32,
'wind_direction': np.float32,
'wind_speed': np.float32,
}
RAW_DATA_DIR = './'
weather_train = pd.read_csv('weather_train.csv',dtype=weather_dtypes, parse_dates=['timestamp'])
weather_test = pd.read_csv('weather_test.csv',dtype=weather_dtypes, parse_dates=['timestamp'])
weather = pd.concat([weather_train,weather_test],ignore_index=True)
del weather_train, weather_test
weather_key = ['site_id', 'timestamp']
temp_skeleton = weather[weather_key + ['air_temperature']].drop_duplicates(subset=weather_key).sort_values(by=weather_key).copy()
del weather
data_to_plot = temp_skeleton.copy()
data_to_plot["hour"] = data_to_plot["timestamp"].dt.hour
# calculate ranks of hourly temperatures within date/site_id chunks
temp_skeleton['temp_rank'] = temp_skeleton.groupby(['site_id', temp_skeleton.timestamp.dt.date])['air_temperature'].rank('average')
# create a dataframe of site_ids (0-16) x mean hour rank of temperature within day (0-23)
df_2d = temp_skeleton.groupby(['site_id', temp_skeleton.timestamp.dt.hour])['temp_rank'].mean().unstack(level=1)
# Subtract 14 from the hour at which each site's temperature peaks; the difference is that site's timestamp alignment offset.
site_ids_offsets = pd.Series(df_2d.values.argmax(axis=1) - 14)
site_ids_offsets.index.name = 'site_id'
def timestamp_align(df):
df['offset'] = df.site_id.map(site_ids_offsets)
df['timestamp_aligned'] = (pd.to_datetime(df["timestamp"]) - pd.to_timedelta(df.offset, unit='H'))
df['timestamp'] = df['timestamp_aligned']
del df['timestamp_aligned']
return df
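# A small check of the offset logic above, on toy data: rank hourly temperatures within each day, average the ranks per hour, and subtract 14 from the argmax hour. One illustrative site whose temperature peaks at hour 16, so the offset should come out as 2:

```python
import numpy as np
import pandas as pd

hours = pd.date_range("2016-01-01", periods=24, freq="h")
toy = pd.DataFrame({
    "site_id": 0,
    "timestamp": hours,
    # temperature peaks at hour 16, i.e. two hours after the expected 14
    "air_temperature": -np.abs(hours.hour - 16).astype(float),
})
# same rank / unstack / argmax pipeline as above, on a single site and day
toy["temp_rank"] = toy.groupby(["site_id", toy.timestamp.dt.date])["air_temperature"].rank("average")
rank_2d = toy.groupby(["site_id", toy.timestamp.dt.hour])["temp_rank"].mean().unstack(level=1)
offset = int(rank_2d.values.argmax(axis=1)[0] - 14)
print(offset)  # 2
```

# The real `site_ids_offsets` above is exactly this computation, vectorized over all sites at once.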
# +
# Original code from https://www.kaggle.com/aitude/ashrae-missing-weather-data-handling by @aitude
from meteocalc import Temp, dew_point, heat_index, wind_chill, feels_like
def c2f(T):
return T * 9 / 5. + 32
def windchill(T, v):
return (10*v**.5 - v +10.5) * (33 - T)
def prepareweather(df):
df['RH'] = 100 - 5 * (df['air_temperature']-df['dew_temperature'])
# df['RH_above50'] = (df['RH'] > 50).astype(int)
df['heat'] = df.apply(lambda x: heat_index(c2f(x.air_temperature), x.RH).c, axis=1)
df['windchill'] = df.apply(lambda x: windchill(x.air_temperature, x.wind_speed), axis=1)
df['feellike'] = df.apply(lambda x: feels_like(c2f(x.air_temperature), x.RH, x.wind_speed*2.237).c, axis=1)
return df
def add_lag_feature(weather_df, window=3):
group_df = weather_df.groupby('site_id')
cols = ['air_temperature', 'dew_temperature', 'heat', 'windchill', 'feellike']
rolled = group_df[cols].rolling(window=window, min_periods=0)
lag_mean = rolled.mean().reset_index().astype(np.float16)
lag_max = rolled.max().reset_index().astype(np.float16)
lag_min = rolled.min().reset_index().astype(np.float16)
lag_std = rolled.std().reset_index().astype(np.float16)
for col in cols:
weather_df[f'{col}_mean_lag{window}'] = lag_mean[col]
# weather_df[f'{col}_max_lag{window}'] = lag_max[col]
# weather_df[f'{col}_min_lag{window}'] = lag_min[col]
# weather_df[f'{col}_std_lag{window}'] = lag_std[col]
def fill_weather_dataset(weather_df):
# Find Missing Dates
time_format = "%Y-%m-%d %H:%M:%S"
start_date = datetime.datetime.strptime(weather_df['timestamp'].min(),time_format)
end_date = datetime.datetime.strptime(weather_df['timestamp'].max(),time_format)
total_hours = int(((end_date - start_date).total_seconds() + 3600) / 3600)
hours_list = [(end_date - datetime.timedelta(hours=x)).strftime(time_format) for x in range(total_hours)]
missing_hours = []
for site_id in range(16):
site_hours = np.array(weather_df[weather_df['site_id'] == site_id]['timestamp'])
new_rows = pd.DataFrame(np.setdiff1d(hours_list,site_hours),columns=['timestamp'])
new_rows['site_id'] = site_id
weather_df = pd.concat([weather_df,new_rows])
weather_df = weather_df.reset_index(drop=True)
# for col in weather_df.columns:
# if col != 'timestamp':
# if weather_df[col].isna().sum():
# weather_df['na_'+col] = weather_df[col].isna().astype(int)
# weather_df['weath_na_total'] = weather_df.isna().sum(axis=1)
weather_df = timestamp_align(weather_df)
# Add new Features
weather_df["datetime"] = pd.to_datetime(weather_df["timestamp"])
weather_df["day"] = weather_df["datetime"].dt.day
weather_df["week"] = weather_df["datetime"].dt.week
weather_df["month"] = weather_df["datetime"].dt.month
# Reset Index for Fast Update
weather_df = weather_df.set_index(['site_id','day','month'])
air_temperature_filler = pd.DataFrame(weather_df.groupby(['site_id','day','month'])['air_temperature'].mean(),columns=["air_temperature"])
weather_df.update(air_temperature_filler,overwrite=False)
# Step 1
cloud_coverage_filler = weather_df.groupby(['site_id','day','month'])['cloud_coverage'].mean()
# Step 2
cloud_coverage_filler = pd.DataFrame(cloud_coverage_filler.fillna(method='ffill'),columns=["cloud_coverage"])
weather_df.update(cloud_coverage_filler,overwrite=False)
due_temperature_filler = pd.DataFrame(weather_df.groupby(['site_id','day','month'])['dew_temperature'].mean(),columns=["dew_temperature"])
weather_df.update(due_temperature_filler,overwrite=False)
# Step 1
sea_level_filler = weather_df.groupby(['site_id','day','month'])['sea_level_pressure'].mean()
# Step 2
sea_level_filler = pd.DataFrame(sea_level_filler.fillna(method='ffill'),columns=['sea_level_pressure'])
weather_df.update(sea_level_filler,overwrite=False)
wind_direction_filler = pd.DataFrame(weather_df.groupby(['site_id','day','month'])['wind_direction'].mean(),columns=['wind_direction'])
weather_df.update(wind_direction_filler,overwrite=False)
wind_speed_filler = pd.DataFrame(weather_df.groupby(['site_id','day','month'])['wind_speed'].mean(),columns=['wind_speed'])
weather_df.update(wind_speed_filler,overwrite=False)
# Step 1
precip_depth_filler = weather_df.groupby(['site_id','day','month'])['precip_depth_1_hr'].mean()
# Step 2
precip_depth_filler = pd.DataFrame(precip_depth_filler.fillna(method='ffill'),columns=['precip_depth_1_hr'])
weather_df.update(precip_depth_filler,overwrite=False)
weather_df = weather_df.reset_index()
weather_df = weather_df.drop(['datetime','day','week','month'],axis=1)
print('add heat, RH...')
weather_df = prepareweather(weather_df)
print('add lag features')
add_lag_feature(weather_df, window=3)
return weather_df
# Original code from https://www.kaggle.com/gemartin/load-data-reduce-memory-usage by @gemartin
from pandas.api.types import is_datetime64_any_dtype as is_datetime
from pandas.api.types import is_categorical_dtype
def reduce_mem_usage(df, use_float16=False):
"""
Iterate through all the columns of a dataframe and modify the data type to reduce memory usage.
"""
start_mem = df.memory_usage().sum() / 1024**2
print("Memory usage of dataframe is {:.2f} MB".format(start_mem))
for col in df.columns:
if is_datetime(df[col]) or is_categorical_dtype(df[col]):
continue
col_type = df[col].dtype
if col_type != object:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == "int":
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if use_float16 and c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
else:
df[col] = df[col].astype("category")
end_mem = df.memory_usage().sum() / 1024**2
print("Memory usage after optimization is: {:.2f} MB".format(end_mem))
print("Decreased by {:.1f}%".format(100 * (start_mem - end_mem) / start_mem))
return df
def features_engineering(df):
    # Sort by timestamp; sort_values/reset_index return copies, so assign the result back
    df = df.sort_values("timestamp").reset_index(drop=True)
# Add more features
df["timestamp"] = pd.to_datetime(df["timestamp"],format="%Y-%m-%d %H:%M:%S")
df["hour"] = df["timestamp"].dt.hour
df["weekday"] = df["timestamp"].dt.weekday
# Remove Unused Columns
drop = ["timestamp","sea_level_pressure", "wind_direction", "wind_speed", "precip_depth_1_hr"]
df = df.drop(drop, axis=1)
gc.collect()
return df
def building_features(building_meta_df):
building_addfeatures = pd.read_feather('building_all_meters.feather')
for col in building_meta_df.columns:
if col != 'timestamp':
if building_meta_df[col].isna().sum():
building_meta_df['na_'+col] = building_meta_df[col].isna().astype(int)
building_meta_df['build_na_total'] = building_meta_df.isna().sum(axis=1)
building_meta_df = pd.concat([building_meta_df,
building_addfeatures[['meter_reading_0', 'meter_reading_1',
'meter_reading_2', 'meter_reading_3']]], axis=1)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
building_meta_df.primary_use = le.fit_transform(building_meta_df.primary_use)
building_meta_df['cnt_building_per_site'] = building_meta_df.groupby(['site_id']).building_id.transform(lambda x: x.size)
building_meta_df['cnt_building_per_site_prim'] = building_meta_df.groupby(['site_id', 'primary_use']).building_id.transform(lambda x: x.size)
building_meta_df['sqr_mean_per_site'] = building_meta_df.groupby(['site_id', ]).square_feet.transform('median')
building_meta_df['sqr_mean_per_prim_site'] = building_meta_df.groupby(['site_id', 'primary_use']).square_feet.transform('median')
return building_meta_df
# -
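# The effect of `reduce_mem_usage` in miniature: columns whose value range fits a narrower dtype are downcast, using the same `np.iinfo`/`np.finfo` bounds checks as the function above. Toy frame for illustration:

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({"small_int": [1, 2, 3], "wide_float": [0.5, 1.5, 2.5]})
for col in toy.columns:
    c_min, c_max = toy[col].min(), toy[col].max()
    if str(toy[col].dtype).startswith("int"):
        if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
            toy[col] = toy[col].astype(np.int8)      # 1..3 fits comfortably in int8
    else:
        if c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
            toy[col] = toy[col].astype(np.float32)   # halves are exactly representable in float32
print(toy.dtypes)
```

# Downcasting floats trades precision for memory, which is why the full function gates float16 behind `use_float16`.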
# ## Fill Weather Information
#
# I'm using [this kernel](https://www.kaggle.com/aitude/ashrae-missing-weather-data-handling) to handle missing weather information.
# +
train_df = pd.read_feather("./train_cleanup_001.feather")
# train_df = pd.read_csv(DATA_PATH + 'train.csv')
# Remove outliers
train_df = train_df [ train_df['building_id'] != 1099 ]
train_df = train_df.query('not (building_id <= 104 & meter == 0 & timestamp <= "2016-05-20")')
building_df = pd.read_csv(DATA_PATH + 'building_metadata.csv')
weather_trn = pd.read_csv(DATA_PATH + 'weather_train.csv')
weather_tst = pd.read_csv(DATA_PATH + 'weather_test.csv')
weather_df = pd.concat([weather_trn, weather_tst], axis=0)
# -
train_df.columns
# %%time
weather_df = fill_weather_dataset(weather_df)
building_df = building_features(building_df)
# ## Merge Data
#
# We need to add building and weather information into training dataset.
train_df["timestamp"] = pd.to_datetime(train_df["timestamp"])
train_df.dtypes
train_df = train_df.merge(building_df, left_on='building_id',right_on='building_id',how='left')
# train_df["timestamp"] = pd.to_datetime(train_df["timestamp"])
train_df = train_df.merge(weather_df,how='left',left_on=['site_id','timestamp'],right_on=['site_id','timestamp'])
del weather_df
gc.collect()
# ## Features Engineering
train_df = features_engineering(train_df)
train_df = reduce_mem_usage(train_df,use_float16=False)
building_df = reduce_mem_usage(building_df,use_float16=False)
# ## KFOLD LIGHTGBM Model
categorical_features = ["building_id", "site_id", "meter", "primary_use", "weekday"]
features0 = ['building_id',
'square_feet', 'hour', 'primary_use',
'weekday', 'year_built', 'heat', 'floor_count', 'feellike',
'sqr_mean_per_prim_site', 'air_temperature',
#'heat_mean_lag3',
'sqr_mean_per_site', 'dew_temperature_mean_lag3',
'dew_temperature', 'cnt_building_per_site_prim',
'feellike_mean_lag3', 'air_temperature_mean_lag3',
'windchill_mean_lag3', 'cnt_building_per_site', 'windchill',
'offset',
]
categorical_features0 = [ 'building_id', 'weekday', 'primary_use',]
train_df.columns
train_df = train_df.reset_index(drop=True)
# +
params = {
"objective": "regression",
"boosting": "gbdt",
"num_leaves": 16,
"learning_rate": 0.05,
"feature_fraction": 0.85,
"reg_lambda": 2,
"metric": "rmse",
'min_data_in_leaf' : 6000,
}
X_trn = train_df[train_df.meter==0].reset_index(drop=True)
X_trn = X_trn[features0+["meter_reading"]]
target = np.log1p(X_trn["meter_reading"])
X_trn= X_trn.drop('meter_reading', axis = 1)
kf = KFold(n_splits=3)
models0 = []
for train_index,test_index in kf.split(X_trn):
train_features = X_trn.loc[train_index]
train_target = target.loc[train_index]
test_features = X_trn.loc[test_index]
test_target = target.loc[test_index]
d_training = lgb.Dataset(train_features, label=train_target,categorical_feature=categorical_features0, free_raw_data=False)
d_test = lgb.Dataset(test_features, label=test_target,categorical_feature=categorical_features0, free_raw_data=False)
model = lgb.train(params, train_set=d_training, num_boost_round=1000, valid_sets=[d_training,d_test], verbose_eval=25, early_stopping_rounds=50)
models0.append(model)
del train_features, train_target, test_features, test_target, d_training, d_test
gc.collect()
# -
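# The target above is trained in log space (`np.log1p`), so predictions must be mapped back with `np.expm1` before submission; in this space the RMSE metric used for early stopping corresponds to RMSLE on the raw readings. A quick sanity check of the round trip:

```python
import numpy as np

meter_reading = np.array([0.0, 10.0, 1000.0])
target = np.log1p(meter_reading)  # log(1 + y); safe for zero readings
recovered = np.expm1(target)      # inverse transform, applied to model predictions
print(np.allclose(recovered, meter_reading))  # True
```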
df_fimp_1 = pd.DataFrame()
df_fimp_1["feature"] = X_trn.columns.values
df_fimp_1["importance"] = models0[0].feature_importance()
df_fimp_1["half"] = 1
df_fimp_1.sort_values(by='importance', ascending=False)
df_fimp_1.sort_values(by='importance', ascending=False).feature.values
# # %matplotlib inline
for model in models0:
lgb.plot_importance(model)
plt.show()
# # meter = 1
features1 = ['building_id', 'hour', 'square_feet', 'dew_temperature_mean_lag3',
'dew_temperature', 'weekday', 'windchill_mean_lag3',
'cloud_coverage', 'heat', 'air_temperature_mean_lag3',
'year_built', 'RH', 'heat_mean_lag3', 'windchill',
'cnt_building_per_site_prim', 'air_temperature',
'cnt_building_per_site', 'feellike_mean_lag3', 'feellike',
'sqr_mean_per_prim_site', 'primary_use', 'meter_reading_2',
'sqr_mean_per_site'
]
categorical_features1 = ["building_id", "primary_use", "weekday"]
# +
params = {
"objective": "regression",
"boosting": "gbdt",
"num_leaves": 1280,
"learning_rate": 0.05,
"feature_fraction": 0.85,
"reg_lambda": 2,
"metric": "rmse",
}
X_trn = train_df[train_df.meter==1].reset_index(drop=True)
X_trn = X_trn[features1+["meter_reading"]]
target = np.log1p(X_trn["meter_reading"])
X_trn= X_trn.drop('meter_reading', axis = 1)
kf = KFold(n_splits=3)
models1 = []
for train_index,test_index in kf.split(X_trn):
train_features = X_trn.loc[train_index]
train_target = target.loc[train_index]
test_features = X_trn.loc[test_index]
test_target = target.loc[test_index]
d_training = lgb.Dataset(train_features, label=train_target,categorical_feature=categorical_features1, free_raw_data=False)
d_test = lgb.Dataset(test_features, label=test_target,categorical_feature=categorical_features1, free_raw_data=False)
model = lgb.train(params, train_set=d_training, num_boost_round=1000, valid_sets=[d_training,d_test], verbose_eval=25, early_stopping_rounds=50)
models1.append(model)
del train_features, train_target, test_features, test_target, d_training, d_test
gc.collect()
# -
df_fimp_1 = pd.DataFrame()
df_fimp_1["feature"] = X_trn.columns.values
df_fimp_1["importance"] = models1[0].feature_importance()
df_fimp_1["half"] = 1
df_fimp_1.sort_values(by='importance', ascending=False)
df_fimp_1.sort_values(by='importance', ascending=False).feature[:-10].values
# # %matplotlib inline
for model in models1:
lgb.plot_importance(model)
plt.show()
# # meter == 2
features2 = ['building_id', 'hour', 'square_feet', 'dew_temperature_mean_lag3',
'dew_temperature', 'heat', 'windchill_mean_lag3', 'cloud_coverage',
'RH', 'windchill', 'weekday', 'air_temperature_mean_lag3',
'heat_mean_lag3', 'air_temperature', 'cnt_building_per_site_prim',
'feellike', 'feellike_mean_lag3', 'cnt_building_per_site',
'year_built', 'sqr_mean_per_prim_site', 'meter_reading_1', 'site_id',
'primary_use',
]
categorical_features2 = ["building_id", "site_id", "primary_use", "weekday"]
# +
params = {
"objective": "regression",
"boosting": "gbdt",
"num_leaves": 1280,
"learning_rate": 0.05,
"feature_fraction": 0.85,
"reg_lambda": 2,
"metric": "rmse",
}
X_trn = train_df[train_df.meter==2].reset_index(drop=True)
X_trn = X_trn[features2+["meter_reading"]]
target = np.log1p(X_trn["meter_reading"])
X_trn= X_trn.drop('meter_reading', axis = 1)
kf = KFold(n_splits=3)
models2 = []
for train_index,test_index in kf.split(X_trn):
train_features = X_trn.loc[train_index]
train_target = target.loc[train_index]
test_features = X_trn.loc[test_index]
test_target = target.loc[test_index]
d_training = lgb.Dataset(train_features, label=train_target,categorical_feature=categorical_features2, free_raw_data=False)
d_test = lgb.Dataset(test_features, label=test_target,categorical_feature=categorical_features2, free_raw_data=False)
model = lgb.train(params, train_set=d_training, num_boost_round=1000, valid_sets=[d_training,d_test], verbose_eval=25, early_stopping_rounds=50)
models2.append(model)
del train_features, train_target, test_features, test_target, d_training, d_test
gc.collect()
# -
# # %matplotlib inline
for model in models2:
lgb.plot_importance(model)
plt.show()
df_fimp_1 = pd.DataFrame()
df_fimp_1["feature"] = X_trn.columns.values
df_fimp_1["importance"] = models2[0].feature_importance()
df_fimp_1["half"] = 1
df_fimp_1.sort_values(by='importance', ascending=False)
df_fimp_1.sort_values(by='importance', ascending=False).feature.values
# # meter == 3
features3 = ['dew_temperature_mean_lag3', 'hour', 'dew_temperature',
'building_id', 'square_feet', 'windchill_mean_lag3', 'RH',
'air_temperature_mean_lag3', 'heat', 'cloud_coverage', 'windchill',
'heat_mean_lag3', 'air_temperature', 'weekday',
'feellike_mean_lag3', 'feellike', 'year_built',
'cnt_building_per_site_prim', 'sqr_mean_per_prim_site',
'floor_count', 'cnt_building_per_site', 'build_na_total',
'meter_reading_2'
]
categorical_features3 = ["building_id", "weekday"]
# +
params = {
"objective": "regression",
"boosting": "gbdt",
"num_leaves": 1280,
"learning_rate": 0.05,
"feature_fraction": 0.85,
"reg_lambda": 2,
"metric": "rmse",
}
X_trn = train_df[train_df.meter==3].reset_index(drop=True)
X_trn = X_trn[features3+["meter_reading"]]
target = np.log1p(X_trn["meter_reading"])
X_trn= X_trn.drop('meter_reading', axis = 1)
kf = KFold(n_splits=3)
models3 = []
for train_index,test_index in kf.split(X_trn):
train_features = X_trn.loc[train_index]
train_target = target.loc[train_index]
test_features = X_trn.loc[test_index]
test_target = target.loc[test_index]
d_training = lgb.Dataset(train_features, label=train_target,categorical_feature=categorical_features3, free_raw_data=False)
d_test = lgb.Dataset(test_features, label=test_target,categorical_feature=categorical_features3, free_raw_data=False)
model = lgb.train(params, train_set=d_training, num_boost_round=1000, valid_sets=[d_training,d_test], verbose_eval=25, early_stopping_rounds=50)
models3.append(model)
del train_features, train_target, test_features, test_target, d_training, d_test
gc.collect()
# -
# # %matplotlib inline
for model in models3:
lgb.plot_importance(model)
plt.show()
df_fimp_1 = pd.DataFrame()
df_fimp_1["feature"] = X_trn.columns.values
df_fimp_1["importance"] = models3[0].feature_importance()
df_fimp_1["half"] = 1
df_fimp_1.sort_values(by='importance', ascending=False)
df_fimp_1.sort_values(by='importance', ascending=False).feature[:-10].values
# ## Load Test Data
test_df = pd.read_csv('test.csv')
row_ids = test_df["row_id"]
test_df.drop("row_id", axis=1, inplace=True)
#test_df = reduce_mem_usage(test_df)
# ## Merge Building Data
test_df = test_df.merge(building_df,left_on='building_id',right_on='building_id',how='left')
del building_df
gc.collect()
# ## Fill Weather Information
weather_df = pd.read_csv('weather_test.csv')
weather_df = fill_weather_dataset(weather_df)
# weather_df = reduce_mem_usage(weather_df)
# ## Merge Weather Data
# +
#test_df["timestamp"] = pd.to_datetime(test_df["timestamp"].astype('float'))
# -
test_df["timestamp"] = pd.to_datetime(test_df["timestamp"])
test_df = test_df.merge(weather_df,how='left',on=['timestamp','site_id'])
del weather_df
gc.collect()
# ## Features Engineering
test_df = features_engineering(test_df)
# ## Prediction
featuress = [features0, features1, features2, features3]
def create_X(test_df, target_meter):
target_test_df = test_df[test_df['meter'] == target_meter]
target_test_df = target_test_df[featuress[target_meter]]
return target_test_df
# +
from tqdm import tqdm_notebook as tqdm
def pred(X_test, models, batch_size=1000000):
iterations = (X_test.shape[0] + batch_size -1) // batch_size
print('iterations', iterations)
y_test_pred_total = np.zeros(X_test.shape[0])
for i, model in enumerate(models):
print(f'predicting {i}-th model')
for k in tqdm(range(iterations)):
y_pred_test = model.predict(X_test[k*batch_size:(k+1)*batch_size], num_iteration=model.best_iteration)
y_test_pred_total[k*batch_size:(k+1)*batch_size] += y_pred_test
y_test_pred_total /= len(models)
return y_test_pred_total
# -
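# A minimal, self-contained sketch of the batching logic used by `pred` above,
# with plain functions standing in for the trained LightGBM boosters (the helper
# name `batched_average_predict` is illustrative, not from this notebook):

```python
import numpy as np

def batched_average_predict(X, models, batch_size):
    # ceiling division: number of batches needed to cover all rows
    iterations = (X.shape[0] + batch_size - 1) // batch_size
    total = np.zeros(X.shape[0])
    for model in models:
        for k in range(iterations):
            sl = slice(k * batch_size, (k + 1) * batch_size)
            total[sl] += model(X[sl])
    # average the per-fold predictions, as `pred` does
    return total / len(models)

X = np.arange(10, dtype=float)
models = [lambda x: x + 1.0, lambda x: x + 3.0]  # dummy per-fold predictors
print(batched_average_predict(X, models, batch_size=4))  # x + 2.0 for every row
```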
# +
# %%time
X_test = create_X(test_df, target_meter=0)
gc.collect()
y_test0 = pred(X_test, models0)
sns.distplot(y_test0)
del X_test
gc.collect()
# +
# %%time
X_test = create_X(test_df, target_meter=1)
gc.collect()
y_test1 = pred(X_test, models1)
sns.distplot(y_test1)
del X_test
gc.collect()
# +
# %%time
X_test = create_X(test_df, target_meter=2)
gc.collect()
y_test2 = pred(X_test, models2)
sns.distplot(y_test2)
del X_test
gc.collect()
# +
# %%time
X_test = create_X(test_df, target_meter=3)
gc.collect()
y_test3 = pred(X_test, models3)
sns.distplot(y_test3)
del X_test
gc.collect()
# -
sample_submission = pd.read_csv('sample_submission.csv')
sample_submission.loc[test_df['meter'] == 0, 'meter_reading'] = np.expm1(y_test0)
sample_submission.loc[test_df['meter'] == 1, 'meter_reading'] = np.expm1(y_test1)
sample_submission.loc[test_df['meter'] == 2, 'meter_reading'] = np.expm1(y_test2)
sample_submission.loc[test_df['meter'] == 3, 'meter_reading'] = np.expm1(y_test3)
sample_submission.to_csv('submission_multimeter003.csv.gz',
index=False,compression='gzip',
float_format='%.4f',
chunksize=25000)
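# The submission inverts the `np.log1p` transform that was applied to the
# training target by mapping predictions back with `np.expm1`. A quick
# round-trip check on made-up readings:

```python
import numpy as np

readings = np.array([0.0, 1.0, 250.5, 1e6])
logged = np.log1p(readings)    # what the models see as the target
recovered = np.expm1(logged)   # what goes into 'meter_reading'
print(recovered)               # matches the original readings (up to float precision)
```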
# +
# #!kaggle competitions submit -c ashrae-energy-prediction -f submission_multimeter003.csv.gz -m "submission_multimeter003"
# -
| solutions/rank-3/level1--submission_multimeter003--lightgbm.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import matplotlib.pyplot as plt
import sys
import os
module_path = os.path.abspath(os.path.join('..'))
if module_path not in sys.path:
sys.path.append(module_path+"\\python")
import Analyze_Result
import Simulate
import Network
from Analyze_Result import aggregate_through_years_statusQuo
# -
### Calculating the net present value of infrastructure, environmental, safety and total cost for the statusQuo strategy
df_net_present_statusQuo=Analyze_Result.aggregate_through_years_statusQuo()
net_present_value_lifecycle_infrastructure_cost=[]
net_present_value_environmental_cost=[]
net_present_value_safety_cost=[]
net_present_value_total_under_after_lifespan_strategy_cost=[]
for index, row in df_net_present_statusQuo.iterrows():
net_present_value_lifecycle_infrastructure_cost.append(row['total infra']/(1+Network.r)**index)
net_present_value_environmental_cost.append(row['environmental restoration']/(1+Network.r)**index)
net_present_value_safety_cost.append(row['total safety']/(1+Network.r)**index)
net_present_value_total_under_after_lifespan_strategy_cost.append(row['total cost']/(1+Network.r)**index)
total_infrastructre=sum(net_present_value_lifecycle_infrastructure_cost)
total_environmental=sum(net_present_value_environmental_cost)
total_safety=sum(net_present_value_safety_cost)
total_total=sum(net_present_value_total_under_after_lifespan_strategy_cost)
print (total_infrastructre)
print(total_environmental)
print(total_safety)
print(total_total)
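# The loop above discounts each year's cost by (1 + r) ** year before summing.
# Self-contained sketch of that net-present-value calculation, assuming an
# illustrative 3% discount rate (`Network.r` is project-specific):

```python
r = 0.03
yearly_costs = [100.0, 100.0, 100.0]  # cost incurred in years 0, 1, 2
npv = sum(c / (1 + r) ** t for t, c in enumerate(yearly_costs))
print(round(npv, 2))  # 291.35: later costs count for less than earlier ones
```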
# +
plt.rcParams["figure.figsize"] = (12, 8)
plt.style.use('ggplot')
x = ['Lifecycle Infrastructure', 'Environmental Restoration', 'Safety', 'Total Cost']
cost = [total_infrastructre, total_environmental, total_safety, total_total]
x_pos = [i for i, _ in enumerate(x)]
plt.bar(x_pos, cost, color='green')
plt.xlabel("Cost Elements", fontsize=14)
plt.ylabel("Cost Output",fontsize=14)
plt.title("Net Present Cost Output from various cost elements", fontsize=20)
plt.xticks(x_pos, x)
# figure size is already set via plt.rcParams above
plt.show()
# -
| bin/jupyter/.ipynb_checkpoints/bikhodi-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [default]
# language: python
# name: python3
# ---
# 
#
# ## Introduction to Data-X
# Mostly basics about Anaconda, Git, Python, and Jupyter Notebooks
#
# ### Author: <NAME>
#
# ---
#
# # Useful Links
# 1. Managing conda environments:
# - https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html
# 2. Github:
# - https://readwrite.com/2013/09/30/understanding-github-a-journey-for-beginners-part-1/
# - https://readwrite.com/2013/10/02/github-for-beginners-part-2/
# 3. Learning Python (resources):
# - https://www.datacamp.com/
#    - [Python Bootcamp](https://bids.berkeley.edu/news/python-boot-camp-fall-2016-training-videos-available-online)
# 4. Datahub: http://datahub.berkeley.edu/ (to run notebooks in the cloud)
# 5. Google Colab: https://colab.research.google.com (also running notebooks in the cloud)
# 6. Data-X website resources: https://data-x.blog
# 7. Book: [Hands on Machine Learning with Scikit-Learn and Tensorflow](https://www.amazon.com/Hands-Machine-Learning-Scikit-Learn-TensorFlow/dp/1491962291/ref=sr_1_1?ie=UTF8&qid=1516300239&sr=8-1&keywords=hands+on+machine+learning+with+scikitlearn+and+tensorflow)
# # Introduction to Jupyter Notebooks
# From the [Project Jupyter Website](https://jupyter.org/):
#
# * *__Project Jupyter__ exists to develop open-source software, open-standards, and services for interactive computing across dozens of programming languages. Collaborative, Reproducible.*
#
# * *__The Jupyter Notebook__ is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and narrative text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and much more.*
# # Notebook contains 2 cell types Markdown & Code
#
# ## Markdown cells
#
# Where you write text.
#
# Or, equations in Latex: $erf(x) = \frac{1}{\sqrt\pi}\int_{-x}^x e^{-t^2} dt$
# Centered Latex Matrices:
#
# $$
# \begin{bmatrix}
# x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
# x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
# \vdots & \vdots & \vdots & \ddots & \vdots \\
# x_{d1} & x_{d2} & x_{d3} & \dots & x_{dn}
# \end{bmatrix}
# $$
# <div class='alert alert-warning'>Bootstrap CSS and `HTML`</div>
# Python (or any other programming language) Code
# ```python
# # simple adder function
# def adder(x,y):
# return x+y
# ```
# # Header 1
# ## Header 2
# ### Header 3...
#
# **bold**, *italic*
#
# Divider
#
# ---
# * Bullet
# * Lists
#
#
# 1. Enumerated
# 2. Lists
# Useful images:
# <img src='https://image.slidesharecdn.com/juan-rodriguez-ucberkeley-120331003737-phpapp02/95/juanrodriguezuc-berkeley-3-728.jpg?cb=1333154305' width='300px'>
#
# ---
# An internal (HTML) link to section in the notebook:
#
#
# ## <a href='#bottom'>Link: Take me to the bottom of the notebook</a>
# ___
#
# ## **Find a lot of useful Markdown commands here:**
# ### https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
#
# ___
# # Code Cells
# In them you can interactively run Python commands
print('hello world!')
print('2nd row')
# +
# Comment in a code cells
# -
# Lines evaluated sequentially
# A cell displays output of last line
2+2
3+3
5+5
# Stuck in an infinite loop
while True:
continue
# +
# Cells evaluated sequentially
# -
tmp_str = 'this is now stored in memory'
print(tmp_str)
print("Let's Start Over")
print(tmp_str)
# ## Jupyter / Ipython Magic
# Magic commands (only for Jupyter and IPython, won't work in script)
# %ls
# Time several runs of same operation
# %timeit [i for i in range(1000)];
# Time operation
# %time
[x for x in range(100000)];
# %ls resources/
# %load resources/print_hw3.py
# %lsmagic
?%alias
# ?str()
# ## Terminal / Command Prompt commands
# Shell commands
# !cat resources/random.txt
# !ls # in mac
# !dir #in windows
# show first lines of a data file
# !head -n 10 resources/sample_data.csv
# count rows of a data file
# !wc -l resources/sample_data.csv
# # Useful tips (Keyboard shortcuts etc):
# 1. Enter selection mode / Cell mode (Esc / Return)
# 2. Insert cells (press A or B in selection mode)
# 3. Delete / Cut cells (press X in selection mode)
# 4. Mark several cells (Shift in selection mode)
# 5. Merge cells (Select, then Shift+M)
# # Printing to pdf
# ### (USEFUL FOR HOMEWORKS)
# **Easiest**: File -> Print Preview.
# Then save that page as a PDF (Ctrl + P, Save as PDF usually works).
#
# **Pro:** Install a Latex compiler. Then: File -> Download As -> PDF.
# # Quick Review of Python Topics
# ### Why Python?
# Python has experienced incredible growth over the last couple of years, and many of the state of the art Machine Learning libraries being developed today have support for Python (scikit-learn, TensorFlow etc.)
#
# <img src='https://zgab33vy595fw5zq-zippykid.netdna-ssl.com/wp-content/uploads/2017/09/growth_major_languages-1-1400x1200.png' width=600></img>
#
# Source: https://stackoverflow.blog/2017/09/06/incredible-growth-python/
# ### Check what Python distribution you are running
# !which python #works on unix system, maybe not Windows
# Check that it is Python 3
import sys # import built in package
print(sys.version)
# ## Python as a calculator
# Addition
2.1 + 2
# Mult
10*10
# Floor division
7//3
# Floating point division, note py2 difference
7/3
type(2)
type(2.0)
a = 3
b = 5
print (b**a) # ** is exponentiation
print (b%a) # modulus operator = remainder
type(5) == type(5.0)
# boolean checks
a = True
b = False
print (a and b)
# conditional programming
if 5 == 5:
print('correct!')
else:
print('what??')
print (isinstance(1,int))
# ## String slicing and indices
# <img src="resources/spam.png" width="480">
# Strings and slicing
x = "abcdefghijklmnopqrstuvwxyz"
print(x)
print(x[1]) # zero indexed
print (type(x))
print (len(x))
print(x)
print (x[1:6:2]) # start:stop:step
print (x[::3])
print (x[::-1])
# ### Manipulating text
# Triple quotes are useful for multiple line strings
y = '''The quick brown
fox jumped over
the lazy dog.'''
print (y)
# ### String operators and methods
# tokenize by space
words = y.split(' ')
print (words)
# remove break line character
[w.replace('\n','') for w in words]
# <div class='alert alert-success'>TAB COMPLETION TIPS</div>
words.
y.
str()
# # Data Structures
# ## **Tuple:** Sequence of Python objects. Immutable.
t = ('a','b', 3)
print (t)
print (type (t))
isinstance(t,tuple)
t[1]
t[1] = 2 #error
# ## **List:** Sequence of Python objects. Mutable
y = list() # create empty list
type(y)
type([])
# Append to list
y.append('hello')
y.append('world')
print(y)
y.pop(1)
print(y)
# List addition (merge)
y + ['data-x']
# List multiplication
y*4
# list of numbers
even_nbrs = list(range(0,20,2)) # range has lazy evaluation
print (even_nbrs)
# supports objects of different data types
z = [1,4,'c',4, 2, 6]
print (z)
# list length (number of elements)
print(len(z))
# it's easy to know if an element is in a list
print ('c' in z)
print (z[2]) # print element at index 2
# traverse / loop over all elements in a list
for i in z:
print (i)
# lists can be sorted,
# but not with different data types
# z.sort() # doesn't work: sorting mixed int/str raises a TypeError in Python 3
z.pop(2)
z.sort() # now it works!
z
print (z.count(4)) # how many times is there a 4
# loop examples
for x in z:
print ("this item is ", x)
# print with index
for i,x in enumerate(z):
print ("item at index ", i," is ", x )
# print all even numbers up to an integer
for i in range(0,10,2):
print (i)
# list comprehension is like f(x) for x as an element of Set X
# S = {x² : x in {0 ... 9}}
S = [x**2 for x in range(10)]
print (S)
# All even elements from S
# M = {x | x in S and x even}
M = [x for x in S if x % 2 == 0]
print (M)
# Matrix representation with Lists
print([[1,2,3],[4,5,6]]) # 2 x 3 matrix
# # Sets (collection of unique elements)
# a set is not ordered
a = set([1, 2, 3, 3, 3, 4, 5,'a'])
print (a)
b = set('abaacdef')
print (b) # not ordered
print (a|b) # union of a and b
print(a&b) # intersection of a and b
a.remove(5)
print (a) # removes the '5'
# # Dictionaries: Key Value pairs
# Almost like JSON data
# Dictionaries, many ways to create them
# First way to create a dictionary is just to assign it
D1 = {'f1': 10, 'f2': 20, 'f3':25}
D1['f2']
# 2. creating a dictionary using the dict()
D2 = dict(f1=10, f2=20, f3 = 30)
print (D2['f3'])
# 3. Another way, start with empty dictionary
D3 = {}
D3['f1'] = 10
D3['f2'] = 20
print (D3['f1'])
# 4th way, start with list of key-value tuples
y = [('f1', 10), ('f2', 40),('f3',60)]
D4 = dict(y)
print (D4['f2'])
#5 From keys
keys = ('a', 'b', 'c')
D5 = dict.fromkeys(keys) # new dict with empty values
print (D5['c'])
# Dictionaries can be more complex, ie dictionary of dictionaries or of tuples, etc.
D5['a'] = D1
D5['b'] = D2
print (D5['a']['f3'])
# traversing by key
# keys are immutable; a key can be a number or a string
for k in D1.keys():
print (k)
# traversing by values
for v in D1.values():
print(v)
# traverse by key and value is called item
for k, v in D1.items(): # tuples with keys and values
print (k,v)
# # User input
# +
# input
# raw_input() was renamed to input() in Python v3.x
# The old input() is gone, but you can emulate it with eval(input())
print ("Input a number:")
s = input() # returns a string
a = int(s)
print ("The number is ", a)
# -
# # Import packages
import numpy as np
np.subtract(3,1)
# # Functions
def adder(x,y):
s = x+y
return(s)
adder(2,3)
# # Classes
# +
class Holiday():
def __init__(self,holiday):
self.base = 'Happy {}!'
self.greeting = self.base.format(holiday)
def greet(self):
print(self.greeting)
easter = Holiday('Easter')
hanukkah = Holiday('Hanukkah')
# -
easter.greeting
hanukkah.greet()
# +
# extend class
class Holiday_update(Holiday):
def update_greeting(self, new_holiday):
self.greeting = self.base.format(new_holiday)
# -
hhg = Holiday_update('July 4th')
hhg.greet()
hhg.update_greeting('Labor day / End of Burning Man')
hhg.greet()
# <div id='bottom'></div>
| 01-introduction/python-jupyter-basics_v3.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [supervision2]
# language: python
# name: Python [supervision2]
# ---
import pyspark
# PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ~/Projects/spark-projects/spark-2.2.1-bin-hadoop2.7/bin/pyspark
# sc = pyspark.SparkContext()
print(sc)
print(sc.appName)
print(sc.applicationId)
print(sc.sparkHome)
sc.sparkUser()
sc.uiWebUrl
from pyspark.sql import SparkSession, SQLContext
sqlContext = SQLContext(sc)
spark = sqlContext.sparkSession
print(spark)
comm_data = spark.read.csv("../data/mp_comm_volume.csv", header="true",inferSchema="true")
comm_data.columns
comm_data.count()
comm_data.describe(['ReceivedComms']).show()
comm_data.describe(['SentComms']).show()
import matplotlib.pyplot as plt
import pandas
comm_data.printSchema()
# +
# date_date = spark.sql("SELECT Comm_Date AS f1 from comm_data order by Comm_Date")
# date_date.show(4)
# comm_data.sort("Comm_Date").
# -
comm_data.createOrReplaceTempView("mpcomms")
whendf = spark.sql("SELECT Comm_Date as ft1,ReceivedComms as xrecv, SentComms as xsent from mpcomms order by Comm_Date desc")
whendf.take(3)
whendf2 = spark.sql("SELECT Comm_Date as ft1, ReceivedComms as xrecv, SentComms as xsent from mpcomms order by Comm_Date")
whendf2.take(10)
from pyspark.sql.functions import *
whendf3 = whendf2.filter(whendf2.ft1 > to_date(lit("2016-1-1")))
whendf3.take(6)
import matplotlib.pyplot as plt
import pandas as pad
import numpy as np
# %matplotlib inline
pdf1 = whendf3.toPandas()
pdf1.plot(x='ft1', figsize=(12,6))
pdf1.dtypes
pdf1.plot(x='ft1', figsize=(15,6), subplots=True)
pdf1.index = pdf1['ft1']
pdf1.plot()
pdf2 = pdf1['2016-06': '2017-06']
del pdf2['ft1']
pdf2.head()
pdf3 = pdf2.resample('D').mean()
pdf3.plot(figsize=(15,6))
# +
# pdf4 = pdf3.fillna(0, axis=1)
# pdf4.head()
# +
# pdf3.isnull()
# -
pad.__version__
pdf3.head()
pdf4 = pdf3.replace(np.nan, 0)
pdf4.plot(figsize=(15,6))
from statsmodels.tsa.seasonal import seasonal_decompose
result = seasonal_decompose(pdf4['xrecv'], model="additive")
result.plot()
# +
# result.seasonal
testar1 = np.ones(5)/float(5)
testar1
testar2 = np.array([1, 10, 100, 1000, 10000])
testar2
np.convolve(testar2, testar1, 'same')
# +
def moving_average(data, window_size):
window = np.ones(int(window_size))/float(window_size)
return np.convolve(data, window, 'same')
import collections
from itertools import count

def explain_anomalies_rolling_std(y, window_size, sigma=1.0):
    """ Helps in exploring the anomalies using rolling standard deviation
Args:
-----
        y (pandas.Series): dependent variable
window_size (int): rolling window size
sigma (int): value for standard deviation
Returns:
--------
a dict (dict of 'standard_deviation': int, 'anomalies_dict': (index: value))
        containing information about the points identified as anomalies
"""
avg = moving_average(y, window_size)
avg_list = avg.tolist()
residual = y - avg
# Calculate the variation in the distribution of the residual
    testing_std = residual.rolling(window=window_size).std()
    testing_std_as_df = pad.DataFrame(testing_std)
    rolling_std = testing_std_as_df.replace(np.nan,
                    testing_std_as_df.iloc[window_size - 1]).round(3).iloc[:, 0].tolist()
std = np.std(residual)
return {'stationary standard_deviation': round(std, 3),
            'anomalies_dict': collections.OrderedDict([(index, y_i)
                for index, y_i, avg_i, rs_i in zip(count(), y, avg_list, rolling_std)
                if (y_i > avg_i + (sigma * rs_i)) | (y_i < avg_i - (sigma * rs_i))])}
# -
window_size = 7
pdf4['recv_ma'] = moving_average(pdf4['xrecv'], window_size)
pdf4['recv_residual'] = pdf4['xrecv'] - pdf4['recv_ma']
# pdf4.plot(figsize=(15,6), y=['xrecv', 'recvd_ma', 'recv_residual'])
pdf4['recv_residualstd'] = pdf4['recv_residual'].rolling(window=window_size).std()
pdf4.plot(figsize=(15,6), y=['recv_residual','recv_residualstd'])
# ma = moving_average()
pad.__version__
np.__version__
pdf4.iloc[window_size - 1]
pdf4.head(-1)
pdf4['recv_residualstd'] = pdf4['recv_residualstd'].replace(np.nan, pdf4['recv_residualstd'].iloc[window_size - 1])
agg_residual_std = np.std(pdf4['recv_residual'])
agg_residual_mean = np.mean(pdf4['recv_residual'])
print(agg_residual_mean, agg_residual_std)
sigma = 2.0
# pdf4['recv_anamolous_1'] = pdf4[ ( pdf4['xrecv'] > (pdf4['recv_ma'] + (sigma * pdf4['recv_residualstd'])) ) | (pdf4['xrecv'] < (pdf4['recv_ma'] - (sigma * pdf4['recv_residualstd'])) ) ]
pdf4['recv_anamolous'] = (pdf4['xrecv'] > pdf4['recv_ma'] + sigma * pdf4['recv_residualstd'] ) | ( pdf4['xrecv'] < (pdf4['recv_ma'] - (sigma * pdf4['recv_residualstd'])))
pdf4.drop('recv_anamolous_1', axis=1, inplace=True, errors='ignore')  # column only exists if the commented-out line above was run
pdf4.head(10)
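# The anomaly flag above marks points outside the band ma ± sigma * rolling_std.
# Tiny self-contained check of that rule with made-up numbers:

```python
import numpy as np

y     = np.array([10.0, 11.0, 30.0, 9.0])
ma    = np.array([10.0, 10.0, 10.0, 10.0])  # moving average
rstd  = np.array([ 2.0,  2.0,  2.0,  2.0])  # rolling standard deviation
sigma = 2.0
anomalous = (y > ma + sigma * rstd) | (y < ma - sigma * rstd)
print(anomalous)  # only the 30.0 falls outside 10 +/- 4
```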
def plot_results(x, y, window_size, sigma_value=1,
text_xlabel="X Axis", text_ylabel="Y Axis", applying_rolling_std=False):
    """ Helps in generating the plot and flagging the anomalies.
Supports both moving and stationary standard deviation. Use the 'applying_rolling_std' to switch
between the two.
Args:
-----
        x (pandas.Series): independent variable
        y (pandas.Series): dependent variable
window_size (int): rolling window size
sigma_value (int): value for standard deviation
text_xlabel (str): label for annotating the X Axis
        text_ylabel (str): label for annotating the Y Axis
applying_rolling_std (boolean): True/False for using rolling vs stationary standard deviation
"""
plt.figure(figsize=(15, 8))
plt.plot(x, y, "k.")
y_av = moving_average(y, window_size)
plt.plot(x, y_av, color='green')
plt.xlim(0, 1000)
plt.xlabel(text_xlabel)
plt.ylabel(text_ylabel)
# Query for the anomalies and plot the same
events = {}
if applying_rolling_std:
events = explain_anomalies_rolling_std(y, window_size=window_size, sigma=sigma_value)
else:
events = explain_anomalies(y, window_size=window_size, sigma=sigma_value)
    x_anomaly = np.fromiter(events['anomalies_dict'].keys(), dtype=int, count=len(events['anomalies_dict']))
    y_anomaly = np.fromiter(events['anomalies_dict'].values(), dtype=float,
                            count=len(events['anomalies_dict']))
plt.plot(x_anomaly, y_anomaly, "r*", markersize=12)
# add grid and lines and enable the plot
plt.grid(True)
plt.show()
# +
# 4. Lets play with the functions
# x = data_as_frame['Months']
# Y = data_as_frame['SunSpots']
# plot the results
# plot_results(x, y=Y, window_size=10, text_xlabel="Months", sigma_value=3,
# text_ylabel="No. of Sun spots")
# events = explain_anomalies(y, window_size=5, sigma=3)
plt.figure(figsize=(15, 8))
x_axis_data = pdf4.index.values
y_xrecv_data = pdf4['xrecv']
y_xrecv_ma_data = pdf4['recv_ma']
plt.plot(x_axis_data, y_xrecv_data, "k.")
plt.plot(x_axis_data, y_xrecv_ma_data, color='green')
# col = np.where(pdf4['recv_anamolous'],'k',np.where(y<5,'b','r'))
anomalous_data = pdf4[pdf4['recv_anamolous']]
x_anomalous = anomalous_data.index.values
y_anomalous = anomalous_data['xrecv']
plt.plot(x_anomalous, y_anomalous, "r*", markersize=12)
# Display the anomaly dict
# print("Information about the anomalies model:{}".format(events))
# -
anomalous_data
| comm_anamolies.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # E 01 Read some data and look at it
# My approach to teaching you Python is to let you *do* things. In this example you are going to be able to do a lot of things, quite fast. That doesn't mean that you'll *understand* what you did, but with this very first crash course I hope to give you a bit of a taste of how Python works.
# ## Get the data
# The data files for our exercises are available on OLAT.
#
# Copy the file data_Zhadang.csv from the data folder on the course OLAT page to the same directory as your copy of this notebook.
# **Q: Open the file with a text editor. What kind of file is it? What does "csv" stand for?**
# ## Read the data
# To read and analyse the data, we are going to use a very powerful package called [pandas](http://pandas.pydata.org/). Pandas is one of the reasons why scientists are moving to Python. It is really cool, as I hope to be able to show you now.
import pandas as pd # pd is the short name for pandas. It's good to stick to it.
# While we are at it, let's import some other things we might need later
# %matplotlib inline
import matplotlib.pyplot as plt # plotting library
import numpy as np # numerical library
# We are now using pandas to read the data out of the csv file
# The first argument to the function is a path to a file, the other arguments
# are called "keywords". They tell to pandas to do certain things
df = pd.read_csv('data/data_Zhadang.csv', index_col=0, parse_dates=True)
# df is a new variable we just created. It is short for "dataframe". A dataframe is a kind of table, a little bit like in Excel. Let's simply print it:
df
# The dataframe has one "column" (TEMP_2M) and an "index" (a timestamp). A dataframe has many useful functions, for example you can make a plot out of it:
df.plot();
# ## Select parts of the data
# Pandas is really good at *indexing* data. This means that it should be as easy as possible to select, for example, a specific day and plot it on a graph:
df_sel = df.loc['2011-05-12']
df_sel.plot();
# **Q: Now select another day (for example July 1st 2011), and plot the data.**
# +
# your answer here
# -
# Note that you can also select specific range of time, for example the first week of July:
df.loc['2011-07-01':'2011-07-07'].plot();
# Or the month of August:
df.loc['2011-08'].plot();
# ## Computing averages
# Pandas comes with very handy operations like "[resample](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.resample.html)". Resample helps you to compute statistics over time. It's better explained with an example:
daily_mean = df.resample('D').mean()
# **Q: Print the daily_mean variable. Plot it.**
# +
# your answer here
# -
# **Q: Now try the functions df.resample('D').max() and df.resample('D').min(). What will they do? Plot them.**
# +
# your answer here
# -
# ## Adding and selecting columns to dataframes
# Columns in the dataframe can be created with the simple syntax:
daily_mean['TEMP_MAX'] = df.resample('D').max()
daily_mean['TEMP_MIN'] = df.resample('D').min()
# **Q: Print the daily_mean dataframe. How many columns does it have? Plot it.**
# +
# your answer here
# -
# It is easy to select a single column and, for example, plot it alone:
daily_mean['TEMP_MAX'].plot();
# ## Operations on columns
# Operations on columns are just like normal array operations:
temp_range = daily_mean['TEMP_MAX'] - daily_mean['TEMP_MIN']
# **Q: What is temp_range? Plot it. Add it to the daily_mean dataframe**
# +
# your answer here
# -
# ## Exercise: apply what you just learned
# In the example above, we used resample with the argument 'D', for "daily frequency". The equivalent for monthly would be 'MS' (the "S" is for "start"). Could you repeat the operation above, but with monthly averages instead of daily averages?
# +
# your answer here
# +
# and here if you want
# -
# How could you plot two daily temperature cycles in the same figure? Have a go!
# +
# your answer here
# -
# To read more about pandas indexing and data selection conventions look here: https://pandas.pydata.org/pandas-docs/stable/indexing.html
| exercises/E01_First_Datacrunching.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Example Usage of HDFWriter
#
# If the properties of a class need to be saved in an HDF file, the class should inherit from `HDFWriterMixin`, as demonstrated below.
#
# `hdf_properties (list)` : Contains the names of all the properties that need to be saved.<br>
# `hdf_name (str)` : Specifies the default name of the group under which the properties will be saved.
# +
from tardis.io.util import HDFWriterMixin
class ExampleClass(HDFWriterMixin):
hdf_properties = ['property1', 'property2']
hdf_name = 'mock_setup'
def __init__(self, property1, property2):
self.property1 = property1
self.property2 = property2
# +
import numpy as np
import pandas as pd
#Instantiating Object
property1 = np.array([4.0e14, 2, 2e14, 27.5])
property2 = pd.DataFrame({'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])})
obj = ExampleClass(property1, property2)
# -
# You can now save properties using the `to_hdf` method.
#
# #### Parameters
# `file_path` : Path where the HDF file will be saved<br>
# `path` : Path inside the HDF store under which to store the `elements`<br>
# `name` : Name of the group inside the HDF store under which the properties will be saved.<br>
# If not specified, the value of the `hdf_name` attribute is used.<br>
# If `hdf_name` is not defined either, the class name is converted to snake case and used instead.<br>
# For example, if `name` were not passed as an argument and `hdf_name` were not defined for `ExampleClass` above, the properties would be saved under an `example_class` group.
#
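# The snake-case fallback described above can be sketched with a small regex; `camel_to_snake` is a hypothetical helper for illustration, not part of the tardis API:

```python
import re

def camel_to_snake(name):
    # Insert an underscore before each capital letter (except the first),
    # then lower-case everything: "ExampleClass" -> "example_class".
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

print(camel_to_snake("ExampleClass"))  # example_class
```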
obj.to_hdf(file_path='test.hdf', path='test')
#obj.to_hdf(file_path='test.hdf', path='test', name='hdf')
# You can now read the HDF file using `pd.HDFStore` or `pd.read_hdf`.
#Read HDF file
with pd.HDFStore('test.hdf','r') as data:
    print(data)
    #print(data['/test/mock_setup/property1'])
# ## Saving nested class objects.
#
# Just extend the `hdf_properties` list to include the nested class object. <br>
class NestedExampleClass(HDFWriterMixin):
hdf_properties = ['property1', 'nested_object']
def __init__(self, property1, nested_obj):
self.property1 = property1
self.nested_object = nested_obj
obj2 = NestedExampleClass(property1, obj)
obj2.to_hdf(file_path='nested_test.hdf')
#Read HDF file
with pd.HDFStore('nested_test.hdf','r') as data:
    print(data)
# ## Modified Usage
#
# In the `BasePlasma` class, the properties of an object are collected differently: it does not use the `hdf_properties` attribute.<br>
# That's why `PlasmaWriterMixin` (which extends `HDFWriterMixin`) changes how the properties of the `BasePlasma` class are collected, by overriding the `get_properties` function.<br>
#
# Here is a quick demonstration of how the behaviour of the default `get_properties` function inside `HDFWriterMixin` can be changed by subclassing it to create a new mixin.
class ModifiedWriterMixin(HDFWriterMixin):
def get_properties(self):
#Change behaviour here, how properties will be collected from Class
data = {name: getattr(self, name) for name in self.outputs}
return data
# A demo class using this modified mixin.
class DemoClass(ModifiedWriterMixin):
outputs = ['property1']
hdf_name = 'demo'
def __init__(self, property1):
self.property1 = property1
obj3 = DemoClass('random_string')
obj3.to_hdf('demo_class.hdf')
with pd.HDFStore('demo_class.hdf','r') as data:
    print(data)
| docs/using/interaction/hdf_writer.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
t = (1,2,3)
mylist = [1,2,3]
type(t)
type(mylist)
t = ('one', 2)
t[0]
t[-1]
t = ('a','a','b')
t.count('a')
t.index('a')
t.index('b')
t
mylist
mylist[0] = 'NEW'
mylist
# ###### tuple is immutable: we cannot reassign a new value to any index position.
# +
# t[0] = 'NEW'
# -
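# Running the commented-out assignment raises a `TypeError`, which we can catch to show the message:

```python
t = ('a', 'a', 'b')
try:
    t[0] = 'NEW'  # tuples do not support item assignment
except TypeError as err:
    print('cannot modify a tuple:', err)
```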
| 00-Python Object/.ipynb_checkpoints/Tuples-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
a = np.arange(12).reshape((3, 4))
print(a)
print(a < 5)
print(a[a < 5])
print(a < 10)
print(a[a < 10])
b = a[a < 10]
print(b)
print(a)
print(a[a < 5].sum())
print(a[a < 5].mean())
print(a[a < 5].max())
print(a[a < 10].min())
print(a[a < 10].std())
print(a < 5)
print(np.all(a < 5))
print(np.all(a < 5, axis=0))
print(np.all(a < 5, axis=1))
print(a < 10)
print(np.all(a < 10, axis=0))
print(np.all(a < 10, axis=1))
print(a[:, np.all(a < 10, axis=0)])
print(a[np.all(a < 10, axis=1), :])
print(a[np.all(a < 10, axis=1)])
print(a[:, np.all(a < 5, axis=0)])
print(a[np.all(a < 5, axis=1)])
print(a[np.all(a < 5, axis=1)].ndim)
print(a[np.all(a < 5, axis=1)].shape)
print(a < 5)
print(np.any(a < 5))
print(np.any(a < 5, axis=0))
print(np.any(a < 5, axis=1))
print(a[:, np.any(a < 5, axis=0)])
print(a[np.any(a < 5, axis=1)])
print(a[~(a < 5)])
print(a[:, np.all(a < 10, axis=0)])
print(a[:, ~np.all(a < 10, axis=0)])
print(a[np.any(a < 5, axis=1)])
print(a[~np.any(a < 5, axis=1)])
print(a)
print(np.delete(a, [0, 2], axis=0))
print(np.delete(a, [0, 2], axis=1))
print(a < 2)
print(np.where(a < 2))
print(np.where(a < 2)[0])
print(np.where(a < 2)[1])
print(np.delete(a, np.where(a < 2)[0], axis=0))
print(np.delete(a, np.where(a < 2)[1], axis=1))
print(a == 6)
print(np.where(a == 6))
print(np.delete(a, np.where(a == 6)))
print(np.delete(a, np.where(a == 6)[0], axis=0))
print(np.delete(a, np.where(a == 6)[1], axis=1))
print(a[(a < 10) & (a % 2 == 1)])
print(a[np.any((a == 2) | (a == 10), axis=1)])
print(a[:, ~np.any((a == 2) | (a == 10), axis=0)])
| notebook/numpy_condition.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] tags=[]
# # STIRAP in a 3-level system
# STIRAP (STImulated Raman Adiabatic Passage, see e.g. [Shore1998](https://journals.aps.org/rmp/pdf/10.1103/RevModPhys.70.1003)) is a method for adiabatically transferring the population of a quantum system from one state to another by using two drive fields coupled to an intermediate state, without ever actually populating the intermediate state. The benefits over e.g. two Rabi pulses are that, since STIRAP is an adiabatic process, it is relatively easy (I've been told) to make it highly efficient. The other key benefit is that the intermediate state can be an unstable state, yet there is no population loss since it is never populated.
#
# This notebook sets up a 3-level system and relevant couplings using the `toy_models` package and then time evolves the system using `QuTiP` to simulate STIRAP. I'll be following the definitions of Shore1998 as best as I can. The level diagram from the paper is shown below.
#
# 
# ## Imports
# +
# %load_ext autoreload
# %autoreload 2
import matplotlib.pyplot as plt
plt.style.use("ggplot")
import numpy as np
import qutip
from sympy import Symbol
from toy_systems.couplings import ToyCoupling, ToyEnergy
from toy_systems.decays import ToyDecay
from toy_systems.hamiltonian import Hamiltonian
from toy_systems.quantum_system import QuantumSystem
from toy_systems.states import Basis, BasisState, ToyQuantumNumbers
from toy_systems.visualization import Visualizer
# -
# ## Set up states and basis
# We start by defining the three states of the system: we'll have two ground states (i.e. states that don't decay) $|1\rangle$ and $|3\rangle$, and one excited state $|2\rangle$, which we will later set to have a decay to an additional state $|4\rangle$ representing all decays out of the system:
# +
# Define states
s1 = BasisState(qn=ToyQuantumNumbers(label="1"))
s2 = BasisState(qn=ToyQuantumNumbers(label="2"))
s3 = BasisState(qn=ToyQuantumNumbers(label="3"))
s4 = BasisState(qn=ToyQuantumNumbers(label="4")) # A target state for decays from |2>
# Define basis
basis = Basis((s1, s2, s3, s4))
basis.print()
# -
# ## Define energies, couplings and decays
# I'm going to define the system in the rotating frame as given in [Shore1998](https://journals.aps.org/rmp/pdf/10.1103/RevModPhys.70.1003) so that the Hamiltonian doesn't have any quickly rotating terms of the form $e^{i\omega t}$.
#
# The Hamiltonian I'm trying to produce is shown below (with $\hbar = 1$):
#
# 
# ### Energies
# +
Δp = Symbol('Delta_p') # Detuning for pump beam
Δs = Symbol('Delta_s') # Detuning for Stokes beam
E1 = ToyEnergy([s1], 0)
E2 = ToyEnergy([s2], Δp)
# The energy for state |3> needs to be defined in two parts since it contains two sympy.Symbols
E3p = ToyEnergy([s3], Δp)
E3s = ToyEnergy([s3], -Δs)
# -
# ### Couplings
# +
Ωp = Symbol('Omega_p') # Drive field Rabi rate for pump beam
Ωs = Symbol('Omega_s') # Drive field Rabi rate for Stokes beam
coupling_p = ToyCoupling(s1,s2,Ωp/2, time_dep = "exp(-(t+t_p)**2/(2*sigma_p**2))", time_args= {"t_p":-1, "sigma_p":1})
coupling_s = ToyCoupling(s2,s3,Ωs/2, time_dep = "exp(-(t+t_s)**2/(2*sigma_s**2))", time_args= {"t_s":1, "sigma_s":1})
# -
# ### Decays
# Defining a decay from $|2\rangle$ to $|4\rangle$ :
decay = ToyDecay(s2, ground = s4, gamma = Symbol("Gamma"))
# ### Define a QuantumSystem
# The QuantumSystem object combines the basis, Hamiltonian and decays to make setting parameters for time evolution using QuTiP more convenient.
# +
# Define the system
system = QuantumSystem(
basis=basis,
couplings=[E1, E2, E3p, E3s, coupling_p, coupling_s],
decays=[decay],
)
# Get representations of the Hamiltonian and the decays that will be accepted by qutip
Hqobj, c_qobj = system.get_qobjs()
visualizer = Visualizer(system, vertical={"label":10}, horizontal={"label":50})
# -
# ## Time-evolution using `QuTiP`
# We can now see if time evolving the system results in something resembling STIRAP. The key to success is to choose the parameters well. Shore gives us the rule of thumb that we should have $\sqrt{\Omega_p^2 + \Omega_s^2}\tau > 10$ where $\tau$ is proportional to the time overlap of the Stokes and pump pulse. In practice it seems that taking the centers of the Gaussians to be separated by $2\sigma$ works pretty well. The broader the Gaussians are (i.e. larger $\sigma$), the more adiabatic the process, which results in less population in the intermediate state and therefore less loss. I'm taking both pulses to have the same parameters for simplicity (except they occur at different times of course).
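# As a sanity check, the rule of thumb above can be evaluated for the "good STIRAP" parameters used below, taking $\tau \approx \sigma$ (the pulse width) as a rough stand-in for the overlap time — an assumption on my part:

```python
import numpy as np

Omega_p = Omega_s = 10  # Rabi rates from the "good STIRAP" settings
sigma = 10              # pulse width, used here as a proxy for tau
adiabaticity = np.sqrt(Omega_p**2 + Omega_s**2) * sigma
print(adiabaticity)  # ~141, comfortably above the threshold of 10
```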
# Get a pointer to the time-evolution arguments
args = Hqobj.args
print("Keys for setting arguments:")
print(f"args = {args}")
# +
# Generate a Qobj representing the initial state
psi0 = (1*s1).qobj(basis)
# Make operators for getting the probability of being in each state
P_1_op = qutip.Qobj((1*s1).density_matrix(basis), type = "oper")
P_2_op = qutip.Qobj((1*s2).density_matrix(basis), type = "oper")
P_3_op = qutip.Qobj((1*s3).density_matrix(basis), type = "oper")
P_4_op = qutip.Qobj((1*s4).density_matrix(basis), type = "oper")
# Set the parameters for the system
# Good STIRAP
Omega = 10
t0 = 10
sigma = 10
Delta = 0
# Bad STIRAP
# Omega = 5
# t0 = 1
# sigma = 1
# Delta = 0
args["Delta_p"] = Delta
args["Omega_p"] = Omega
args["sigma_p"] = sigma
args["t_p"] = -t0
args["Delta_s"] = Delta
args["Omega_s"] = Omega
args["sigma_s"] = sigma
args["t_s"] = t0
# Times at which result is requested
times = np.linspace(-5*sigma,5*sigma,1001)
# Setting the max_step is sometimes necessary
options = qutip.solver.Options(method = 'adams', nsteps=10000, max_step=1e0)
# Setup a progress bar
pb = qutip.ui.progressbar.EnhancedTextProgressBar()
# Run the time-evolution
result = qutip.mesolve(Hqobj, psi0, times, c_ops = c_qobj, e_ops = [P_1_op, P_2_op, P_3_op, P_4_op],
progress_bar=pb, options = options)
# +
fig, ax = plt.subplots(figsize = (16,9))
ln = []
ln+=ax.plot(times, result.expect[0], label = "P_1")
ln+=ax.plot(times, result.expect[1], label = "P_2")
ln+=ax.plot(times, result.expect[2], label = "P_3")
ln+=ax.plot(times, result.expect[3], label = "P_4")
ax.set_title("STIRAP", fontsize = 18)
ax.set_xlabel("Time / (1/Γ)", fontsize = 16)
ax.set_ylabel("Population in each state", fontsize = 16)
axc = ax.twinx()
ln+=coupling_p.plot_time_dep(times, args, ax=axc, ls = '--', c = 'k', lw = 1, label = 'Pump')
ln+=coupling_s.plot_time_dep(times, args, ax=axc, ls = ':', c = 'k', lw = 1, label = 'Stokes')
ax.legend(ln, [l.get_label() for l in ln], fontsize = 16)
print(f"Transfer efficiency: {result.expect[2][-1]*100:.1f} %")
# -
| examples/STIRAP in a 3-level system.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# ## Topic Modeling: Latent Dirichlet Allocation with gensim
# + [markdown] slideshow={"slide_type": "slide"}
# ### Imports
# + slideshow={"slide_type": "fragment"}
import warnings
from collections import OrderedDict
from pathlib import Path
import numpy as np
import pandas as pd
# Visualization
from ipywidgets import interact, FloatSlider
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
import seaborn as sns
import pyLDAvis
from pyLDAvis.sklearn import prepare
from wordcloud import WordCloud
from termcolor import colored
# spacy for language processing
import spacy
# sklearn for feature extraction & modeling
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, TfidfTransformer
from sklearn.decomposition import LatentDirichletAllocation, TruncatedSVD, NMF
from sklearn.model_selection import train_test_split
import joblib  # sklearn.externals.joblib was removed in scikit-learn 0.23
# gensim for alternative models
from gensim.models import LdaModel, LdaMulticore
from gensim.corpora import Dictionary
from gensim.matutils import Sparse2Corpus
# -
# %matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (14.0, 8.7)
pyLDAvis.enable_notebook()
warnings.filterwarnings('ignore')
pd.options.display.float_format = '{:,.2f}'.format
# + [markdown] slideshow={"slide_type": "skip"}
# ## Load BBC data
# + slideshow={"slide_type": "skip"}
path = Path('bbc')
files = path.glob('**/*.txt')
doc_list = []
for i, file in enumerate(files):
with open(str(file), encoding='latin1') as f:
_, topic, file_name = file.parts
lines = f.readlines()
file_id = file_name.split('.')[0]
heading = lines[0].strip()
body = ' '.join([l.strip() for l in lines[1:]])
doc_list.append([topic, heading, body])
# + [markdown] slideshow={"slide_type": "skip"}
# ### Convert to DataFrame
# + slideshow={"slide_type": "skip"}
docs = pd.DataFrame(doc_list, columns=['topic', 'heading', 'article'])
docs.info()
# + [markdown] slideshow={"slide_type": "slide"}
# ## Create Train & Test Sets
# + slideshow={"slide_type": "fragment"}
train_docs, test_docs = train_test_split(docs,
stratify=docs.topic,
test_size=50,
random_state=42)
# + slideshow={"slide_type": "fragment"}
train_docs.shape, test_docs.shape
# + slideshow={"slide_type": "fragment"}
pd.Series(test_docs.topic).value_counts()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Vectorize train & test sets
# + slideshow={"slide_type": "fragment"}
vectorizer = CountVectorizer(max_df=.2,
min_df=3,
stop_words='english',
max_features=2000)
train_dtm = vectorizer.fit_transform(train_docs.article)
words = vectorizer.get_feature_names_out()  # get_feature_names() was removed in scikit-learn 1.2
train_dtm
# + slideshow={"slide_type": "fragment"}
test_dtm = vectorizer.transform(test_docs.article)
test_dtm
# + [markdown] slideshow={"slide_type": "slide"}
# ## LDA with gensim
# + [markdown] slideshow={"slide_type": "fragment"}
# ### Using `CountVectorizer` Input
# + slideshow={"slide_type": "fragment"}
max_df = .2
min_df = 3
max_features = 2000
# used by sklearn: https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/feature_extraction/stop_words.py
stop_words = pd.read_csv('http://ir.dcs.gla.ac.uk/resources/linguistic_utils/stop_words',
                         header=None).squeeze('columns').tolist()  # read_csv's squeeze kwarg was removed in pandas 2.0
# + slideshow={"slide_type": "fragment"}
vectorizer = CountVectorizer(max_df=max_df,
min_df=min_df,
stop_words='english',
max_features=max_features)
train_dtm = vectorizer.fit_transform(train_docs.article)
test_dtm = vectorizer.transform(test_docs.article)
# + slideshow={"slide_type": "slide"}
train_corpus = Sparse2Corpus(train_dtm, documents_columns=False)
test_corpus = Sparse2Corpus(test_dtm, documents_columns=False)
id2word = pd.Series(vectorizer.get_feature_names_out()).to_dict()
# + [markdown] slideshow={"slide_type": "slide"}
# ### Train Model & Review Results
# -
LdaModel(corpus=None,
num_topics=100,
id2word=None,
distributed=False,
chunksize=2000, # Number of documents to be used in each training chunk.
passes=1, # Number of passes through the corpus during training
update_every=1, # Number of docs to be iterated through for each update
alpha='symmetric',
eta=None, # a-priori belief on word probability
decay=0.5, # percentage of previous lambda forgotten when new document is examined
offset=1.0, # controls slow down of the first steps the first few iterations.
eval_every=10, # estimate log perplexity
iterations=50, # Maximum number of iterations through the corpus
gamma_threshold=0.001, # Minimum change in the value of the gamma parameters to continue iterating
minimum_probability=0.01, # Topics with a probability lower than this threshold will be filtered out
random_state=None,
ns_conf=None,
minimum_phi_value=0.01, # if `per_word_topics` is True, represents lower bound on term probabilities
per_word_topics=False, # If True, compute a list of most likely topics for each word with phi values multiplied by word count
callbacks=None)
num_topics = 5
topic_labels = ['Topic {}'.format(i) for i in range(1, num_topics+1)]
# + slideshow={"slide_type": "fragment"}
lda_gensim = LdaModel(corpus=train_corpus,
num_topics=num_topics,
id2word=id2word)
# + slideshow={"slide_type": "fragment"}
topics = lda_gensim.print_topics()
topics[0]
# + [markdown] slideshow={"slide_type": "slide"}
# ### Evaluate Topic Coherence
#
# Topic Coherence measures whether the words in a topic tend to co-occur together.
#
# - It adds up a score for each distinct pair of top ranked words.
# - The score is the log of the probability that a document containing at least one instance of the higher-ranked word also contains at least one instance of the lower-ranked word.
#
# Large negative values indicate words that don't co-occur often; values closer to zero indicate that words tend to co-occur more often.
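# The pairwise score described above can be sketched in a few lines of plain Python (illustrative only — gensim's `u_mass` implementation differs in details such as smoothing and word ordering):

```python
import itertools
import math

# Toy corpus: each document is the set of words it contains
docs = [{"cat", "dog"}, {"cat", "fish"}, {"dog", "cat"}, {"fish"}]
top_words = ["cat", "dog", "fish"]  # hypothetical top-ranked words of one topic

def u_mass(words, docs, eps=1.0):
    """Sum log P(both words co-occur) / P(higher-ranked word) over word pairs."""
    score = 0.0
    for higher, lower in itertools.combinations(words, 2):
        d_higher = sum(1 for d in docs if higher in d)
        d_both = sum(1 for d in docs if higher in d and lower in d)
        score += math.log((d_both + eps) / d_higher)
    return score

score = u_mass(top_words, docs)
print(score)  # negative: "fish" rarely co-occurs with "cat" or "dog"
```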
# + slideshow={"slide_type": "fragment"}
coherence = lda_gensim.top_topics(corpus=train_corpus, coherence='u_mass')
# + slideshow={"slide_type": "slide"}
topic_coherence = []
topic_words = pd.DataFrame()
for t in range(len(coherence)):
label = topic_labels[t]
topic_coherence.append(coherence[t][1])
df = pd.DataFrame(coherence[t][0], columns=[(label, 'prob'), (label, 'term')])
df[(label, 'prob')] = df[(label, 'prob')].apply(lambda x: '{:.2%}'.format(x))
topic_words = pd.concat([topic_words, df], axis=1)
topic_words.columns = pd.MultiIndex.from_tuples(topic_words.columns)
pd.set_option('expand_frame_repr', False)
topic_words.head().to_csv('topic_words.csv', index=False)
print(topic_words.head())
pd.Series(topic_coherence, index=topic_labels).plot.bar();
# + [markdown] slideshow={"slide_type": "slide"}
# ### Using `gensim` `Dictionary`
# + slideshow={"slide_type": "fragment"}
docs = [d.split() for d in train_docs.article.tolist()]
docs = [[t for t in doc if t not in stop_words] for doc in docs]
# + slideshow={"slide_type": "fragment"}
dictionary = Dictionary(docs)
dictionary.filter_extremes(no_below=min_df, no_above=max_df, keep_n=max_features)
# + slideshow={"slide_type": "fragment"}
corpus = [dictionary.doc2bow(doc) for doc in docs]
# + slideshow={"slide_type": "fragment"}
print('Number of unique tokens: %d' % len(dictionary))
print('Number of documents: %d' % len(corpus))
# + slideshow={"slide_type": "slide"}
num_topics = 5
chunksize = 500
passes = 20
iterations = 400
eval_every = None # Don't evaluate model perplexity, takes too much time.
temp = dictionary[0] # This is only to "load" the dictionary.
id2word = dictionary.id2token
# + slideshow={"slide_type": "fragment"}
# %%time
model = LdaModel(corpus=corpus,
id2word=id2word,
chunksize=chunksize,
alpha='auto',
eta='auto',
iterations=iterations,
num_topics=num_topics,
passes=passes,
eval_every=eval_every)
# + slideshow={"slide_type": "slide"}
model.show_topics()
# -
# ### Evaluating Topic Assignments on the Test Set
# + slideshow={"slide_type": "slide"}
docs_test = [d.split() for d in test_docs.article.tolist()]
docs_test = [[t for t in doc if t not in stop_words] for doc in docs_test]
test_dictionary = Dictionary(docs_test)
test_dictionary.filter_extremes(no_below=min_df, no_above=max_df, keep_n=max_features)
test_corpus = [dictionary.doc2bow(doc) for doc in docs_test]
# + slideshow={"slide_type": "slide"}
gamma, _ = model.inference(test_corpus)
topic_scores = pd.DataFrame(gamma)
topic_scores.head(10)
# + slideshow={"slide_type": "slide"}
topic_probabilities = topic_scores.div(topic_scores.sum(axis=1), axis=0)
topic_probabilities.head()
# + slideshow={"slide_type": "slide"}
topic_probabilities.idxmax(axis=1).head()
# + slideshow={"slide_type": "slide"}
predictions = test_docs.topic.to_frame('topic').assign(predicted=topic_probabilities.idxmax(axis=1).values)
heatmap_data = predictions.groupby('topic').predicted.value_counts().unstack()
sns.heatmap(heatmap_data, annot=True, cmap='Blues');
# -
# ## Resources
#
# - pyLDAvis:
# - [Talk by the Author](https://speakerdeck.com/bmabey/visualizing-topic-models) and [Paper by (original) Author](http://www.aclweb.org/anthology/W14-3110)
# - [Documentation](http://pyldavis.readthedocs.io/en/latest/index.html)
# - LDA:
# - [<NAME> Homepage @ Columbia](http://www.cs.columbia.edu/~blei/)
# - [Introductory Paper](http://www.cs.columbia.edu/~blei/papers/Blei2012.pdf) and [more technical review paper](http://www.cs.columbia.edu/~blei/papers/BleiLafferty2009.pdf)
# - [Blei Lab @ GitHub](https://github.com/Blei-Lab)
#
# - Topic Coherence:
# - [Exploring Topic Coherence over many models and many topics](https://www.aclweb.org/anthology/D/D12/D12-1087.pdf)
# - [Paper on various Methods](http://www.aclweb.org/anthology/N10-1012)
# - [Blog Post - Overview](http://qpleple.com/topic-coherence-to-evaluate-topic-models/)
#
| Chapter14/05_lda_with_gensim.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Workflow A: Advanced Chemical ---[]---- Gene (EGFR)
#
# <br>
# <br>
#Loading the functions from a notebook
# %run /Users/priyash/Documents/GitHub/robogallery/user_queries/proj_tools/template.ipynb
with open('/Users/priyash/Documents/GitHub/robogallery/user_queries/json_queries/EGFR_advanced.json', 'r') as myfile:
query = json.loads(myfile.read())
printjson(query)
# <br>
#
# ## Strider Direct
#
# <br>
#
#
start = dt.now()
strider_result = strider(query)
end = dt.now()
print(f"Strider produced {len(strider_result['message']['results'])} results in {end-start}.")
prov = get_provenance(strider_result)
display(prov)
| user_queries/notebooks/Mini-hackathon Workflow A Advanced.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: geo_dev
# language: python
# name: geo_dev
# ---
import geopandas as gpd
import seaborn as sns
import matplotlib.pyplot as plt
import pandas as pd
from sklearn import preprocessing
path = 'files/AMS/context_data.csv'
data = pd.read_csv(path, index_col=0)
data
# ## Standardize
x = data.values
scaler = preprocessing.StandardScaler()
cols = list(data.columns)
data[cols] = scaler.fit_transform(data[cols])
data
data.to_csv('files/AMS/context_data_norm.csv')
data.isna().any().any()
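# For reference, `StandardScaler` is equivalent to subtracting each column's mean and dividing by its standard deviation; a quick check on toy data (not the AMS file):

```python
import numpy as np

toy = np.array([[1.0, 10.0],
                [2.0, 20.0],
                [3.0, 30.0]])
scaled = (toy - toy.mean(axis=0)) / toy.std(axis=0)
print(scaled.mean(axis=0))  # ~[0. 0.]
print(scaled.std(axis=0))   # [1. 1.]
```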
| code_production/Amsterdam/200307_Normalize.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from sklearn.feature_extraction import DictVectorizer
onehot_encoder = DictVectorizer()
X = [
{'city': 'New York'},
{'city': 'San Francisco'},
{'city': 'Chapel Hill'}
]
print(onehot_encoder.fit_transform(X).toarray())
| chapter04/ed2-ch4-s1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv'
url
dframe_wine = pd.read_csv(url, sep=';')
dframe_wine.tail()
dframe_wine[['alcohol','density','pH','citric acid','quality']].mean()
dframe_wine
dframe_wine.groupby('quality')
def max_to_min(arr):
    return arr.max() - arr.min()
wino = dframe_wine.groupby('quality')
wino.describe()
wino.agg(max_to_min)
wino.mean()
dframe_wine.head()
dframe_wine['x/y ratio'] = dframe_wine['quality']/ dframe_wine['alcohol']
dframe_wine.head()
dframe_wine.pivot_table(index=['quality'])
# %matplotlib inline
dframe_wine.plot(kind='scatter',x='quality',y='alcohol',color= 'pink')
| Lecture 44 - Aggregation_Working with Data.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm  # `from tqdm import tqdm_notebook` is deprecated
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline
from sklearn.model_selection import StratifiedKFold
# -
sns.set_context('talk')
# # Load data
df = pd.read_csv('data/train.csv').set_index('customer')
del_columns = ['category', 'nationality', 'is_pep']
for var_ in del_columns:
df.drop(var_, axis=1, inplace=True)
df['suspicious'].astype(int).sum()
df.head()
# # Basic statistics
df.shape
df.info()
df.describe()
sns.boxplot(df['age'])
# # Create balanced dataset
cases_susp = df[df['suspicious']==1]
# DataFrame.append was removed in pandas 2.0; pd.concat does the same job
cases_susp = pd.concat([cases_susp,
                        cases_susp.loc[np.random.choice(cases_susp.index, 3*cases_susp.shape[0], replace=True)]])
cases_norm = df[df['suspicious']==0].sample(n=1*cases_susp.shape[0])
print(cases_susp.shape, cases_norm.shape)
df_bal = pd.concat([cases_norm, cases_susp])
df_bal.shape
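# The oversample-with-replacement trick used above, sketched on a toy frame (hypothetical `label` column, not the real data):

```python
import numpy as np
import pandas as pd

toy = pd.DataFrame({'label': [1, 1, 0, 0, 0, 0, 0, 0]})
minority = toy[toy['label'] == 1]
# Draw 3x extra minority rows with replacement (4x total), then sample the
# majority class to the same size
minority = pd.concat([minority,
                      minority.loc[np.random.choice(minority.index,
                                                    3 * minority.shape[0],
                                                    replace=True)]])
majority = toy[toy['label'] == 0].sample(n=minority.shape[0], replace=True)
balanced = pd.concat([majority, minority])
print(balanced['label'].value_counts())  # equal counts for both classes
```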
# # Model
test_features = pd.read_csv('data/test.csv').set_index('customer')
for var_ in del_columns:
test_features.drop(var_, axis=1, inplace=True)
test_features.shape
# ## Random Forest
from sklearn.ensemble import RandomForestClassifier
# ### Train
sub = df_bal #.sample(n=100)
X = sub.drop('suspicious', axis=1)
y = sub['suspicious']
X.shape
# #### Create pipeline
from sklearn.preprocessing import StandardScaler
clf = Pipeline([
('scaler', StandardScaler()),
('clf', RandomForestClassifier(n_estimators=1000, n_jobs=2, verbose=1))
])
# #### Splitting
skf = StratifiedKFold(n_splits=5)
# %%time
for train_index, test_index in tqdm(skf.split(X, y), total=skf.get_n_splits(X, y)):
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
report = classification_report(y_test, y_pred)
print(report)
# + active=""
# 1-1
# precision recall f1-score support
#
# 0 0.77 0.77 0.77 4544
# 1 0.77 0.76 0.77 4544
#
# avg / total 0.77 0.77 0.77 9088
# 1-2
#
# 0 0.82 0.90 0.86 9087
# 1 0.75 0.59 0.66 4544
#
# avg / total 0.79 0.80 0.79 13631
#
# 0 0.81 0.89 0.85 9087
# 1 0.73 0.58 0.65 4543
#
# avg / total 0.78 0.79 0.78 13630
# 1-3
# 0 0.85 0.94 0.89 13631
# 1 0.72 0.48 0.58 4544
#
# avg / total 0.81 0.82 0.81 18175
#
# 0 0.85 0.94 0.89 13630
# 1 0.72 0.49 0.58 4543
#
# avg / total 0.81 0.82 0.81 18173
# 1-4
# 0 0.87 0.96 0.91 18174
# 1 0.72 0.43 0.54 4544
#
# avg / total 0.84 0.85 0.84 22718
#
# 0 0.87 0.96 0.91 18174
# 1 0.71 0.42 0.53 4543
#
# avg / total 0.84 0.85 0.83 22717
# + active=""
# By sampling with replacement
# 1-1 18K each
#
# 0 0.87 0.84 0.86 9087
# 1 0.84 0.88 0.86 9087
#
# avg / total 0.86 0.86 0.86 18174
#
# 0 1.00 0.78 0.88 9087
# 1 0.82 1.00 0.90 9087
#
# avg / total 0.91 0.89 0.89 18174
#
# 27K each
#
# 0 0.92 0.86 0.89 13631
# 1 0.87 0.92 0.89 13631
#
# avg / total 0.89 0.89 0.89 27262
#
# 0 1.00 0.84 0.91 13630
# 1 0.86 1.00 0.92 13630
#
# avg / total 0.93 0.92 0.92 27260
#
# 36K each
# 0 0.95 0.88 0.92 18174
# 1 0.89 0.95 0.92 18174
#
# avg / total 0.92 0.92 0.92 36348
#
# 0 1.00 0.86 0.93 18174
# 1 0.88 1.00 0.93 18174
#
# avg / total 0.94 0.93 0.93 36348
#
# 90K each
# 0 1.00 0.95 0.97 45435
# 1 0.95 1.00 0.97 45435
#
# avg / total 0.97 0.97 0.97 90870
#
# 0 1.00 0.95 0.97 45435
# 1 0.95 1.00 0.97 45435
#
# avg / total 0.97 0.97 0.97 90870
#
# 180K each
#
# 0 1.00 0.98 0.99 90870
# 1 0.98 1.00 0.99 90870
#
# avg / total 0.99 0.99 0.99 181740
#
# 0 1.00 0.98 0.99 90870
# 1 0.98 1.00 0.99 90870
#
# avg / total 0.99 0.99 0.99 181740
#
# 18K-36K
# 0 0.91 0.93 0.92 18174
# 1 0.85 0.82 0.83 9087
#
# avg / total 0.89 0.89 0.89 27261
#
# 0 1.00 0.90 0.95 18174
# 1 0.83 1.00 0.91 9087
#
# avg / total 0.94 0.93 0.93 27261
# -
# #### Final
# %%time
clf.fit(X, y)
# ### Test
sub_test = test_features # [['nationality', 'atm_withdrawal']]
sub_test.shape
# %%time
predictions = clf.predict_proba(sub_test)
# sort by `suspicious` probability
df_pred = pd.DataFrame(
predictions,
columns=['normal_prob', 'suspicious_prob'],
index=sub_test.index
).sort_values(by='suspicious_prob', ascending=False)
df_pred.head()
N = 1000
idx = df_pred['suspicious_prob'].head(N).index
fraud_rows = test_features.loc[idx]
(pd.Series(fraud_rows.index)
.to_frame()
.to_csv('fraudulent_customers.txt', index=False))
sns.distplot(df_pred['suspicious_prob'], kde=False)
plt.ylabel('Count')
| CrowdAI_EDA_kpj-Copy1.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Conditional GAN Implementation
#
# Based on the papers <i>Conditional Generative Adversarial Nets</i> and <i>Least Squares Generative Adversarial Networks</i>. There is a good guide [here](https://machinelearningmastery.com/how-to-develop-a-conditional-generative-adversarial-network-from-scratch/).
# +
import tensorflow as tf
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Input,Dense,Reshape,Dropout,BatchNormalization,Activation,UpSampling2D,Embedding
from tensorflow.keras.layers import Conv2D,LeakyReLU,Flatten,Conv2DTranspose,Concatenate
from tensorflow.keras.optimizers import Adam,RMSprop
from tensorflow.keras.models import Model,Sequential
from tensorflow.keras.losses import BinaryCrossentropy
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
tf.keras.backend.set_floatx('float32')
import warnings
warnings.filterwarnings('ignore')
# -
# ### Data Cleaning
(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32')
y_train = y_train.astype('float32')
x_train = x_train[y_train <= 3] # using numbers 0,1,2,3
y_train = y_train[y_train <= 3]
y_train = y_train.astype("int32")
y_train = np.expand_dims(y_train,axis=-1)
#one_hot_y = np.zeros((len(y_train),4)) # getting one_hot encoding for labels
#one_hot_y[np.arange(y_train.size),y_train]=1
x_train = np.expand_dims(x_train,axis=-1)
x_train = x_train[:24750]
y_train = y_train[:24750]
print(x_train.shape,y_train.shape)
plt.subplot(1,4,1)
plt.imshow(np.squeeze(x_train[0]),cmap="gray")
plt.subplot(1,4,2)
plt.imshow(np.squeeze(x_train[1]),cmap="gray")
plt.subplot(1,4,3)
plt.imshow(np.squeeze(x_train[2]),cmap="gray")
plt.subplot(1,4,4)
plt.imshow(np.squeeze(x_train[4]),cmap="gray")
plt.show()
x_train = x_train/255 # scaling the images
# ### Model Implementation
def get_generator(noise_dim=100,class_dim=4):
""" conditional generator implementation
-class representation is added as an "extra" channel to the base of the generated image
"""
z = Input(shape=(noise_dim,))
label = Input(shape=(1,))
label_h = Embedding(input_dim=class_dim,output_dim=100)(label)
label_h = Dense(7*7)(label_h)
label_h = Reshape((7,7,1))(label_h)
h = Dense(7*7*256)(z)
h = BatchNormalization(momentum=0.9)(h)
h = Activation('relu')(h)
h = Reshape((7,7,256))(h)
h = Concatenate()([h,label_h]) # adding the class-representation as an extra channel
h = Dropout(0.4)(h)
h = Conv2DTranspose(filters=128,kernel_size=5,strides=2,padding='same',activation=None)(h)
h = BatchNormalization(momentum=0.9)(h)
h = Activation('relu')(h)
h = Conv2DTranspose(filters=64,kernel_size=5,strides=2,padding='same',activation=None)(h)
h = BatchNormalization(momentum=0.9)(h)
h = Activation('relu')(h)
h = Conv2DTranspose(filters=32,kernel_size=5,strides=1,padding='same',activation=None)(h)
h = BatchNormalization(momentum=0.9)(h)
h = Activation('relu')(h)
h = Conv2DTranspose(filters=1,kernel_size=5,strides=1,padding='same',activation=None)(h)
h = Activation('sigmoid')(h)
model = Model(inputs=[z,label],outputs=h)
return model
def get_discriminator(class_dim=4):
""" conditional discriminator implementation
-class representation is added as an "extra" channel to the input image
"""
x = Input(shape=(28,28,1))
label = Input(shape=(1,))
label_h = Embedding(input_dim=class_dim,output_dim=100)(label)
label_h = Dense(28*28)(label_h)
label_h = Reshape((28,28,1))(label_h)
h = Concatenate()([x,label_h]) # adding the class-representation as an extra channel
h = Conv2D(filters=64,kernel_size=5,strides=2,padding='same',activation=None)(h)
h = LeakyReLU(0.2)(h)
h = Dropout(0.4)(h)
h = Conv2D(filters=128,kernel_size=5,strides=2,padding='same',activation=None)(h)
h = LeakyReLU(0.2)(h)
h = Dropout(0.4)(h)
h = Conv2D(filters=256,kernel_size=5,strides=2,padding='same',activation=None)(h)
h = LeakyReLU(0.2)(h)
h = Dropout(0.4)(h)
h = Flatten()(h)
h = Dense(1,activation=None)(h)
model = Model(inputs=[x,label],outputs=h)
return model
def discriminator_model(discriminator,optimizer=Adam(lr=0.0002)):
""" compiling discriminator model
"""
x = Input(shape=(28,28,1))
label = Input(shape=(1,))
out = discriminator([x,label])
model = Model(inputs=[x,label],outputs=out)
model.compile(loss='mean_squared_error',optimizer=optimizer)
return model
def adversarial_model(generator,discriminator,noise_dim=100,optimizer=Adam(lr=0.0001)):
""" compiling adversarial model - used to train generator
"""
z = Input(shape=(noise_dim,))
label = Input(shape=(1,))
gen = generator([z,label])
out = discriminator([gen,label])
model = Model(inputs=[z,label],outputs=out)
model.compile(loss='mean_squared_error',optimizer=optimizer)
return model
# ### Model Training
#
# The noise prior is $z \sim N(0,1)$
d = get_discriminator()
generator = get_generator()
discriminator = discriminator_model(d)
adversarial = adversarial_model(generator,d)
# +
num_epochs=30
batch_size=50
for epoch_i in range(num_epochs): # number of epochs
all_a_losses = []
all_d_losses = []
print("Epoch {}:".format(epoch_i+1))
for i in range(0,len(x_train)-batch_size,batch_size): # looping through batches rather than sampling
x_subset = x_train[i:i+batch_size]
y_subset = y_train[i:i+batch_size]
# training the discriminator:
z = np.random.normal(0.0,1.0,size=(batch_size,100))
x_gen = generator([z,y_subset])
x = np.concatenate((x_subset,x_gen))
y = np.vstack([np.ones((batch_size,1)),np.zeros((batch_size,1))])
discriminator.trainable=True
d_loss = discriminator.train_on_batch([x,np.vstack([y_subset,y_subset])],y)
all_d_losses.append(float(d_loss))
# training the generator:
y = np.ones([batch_size,1]) # we switch the labels here to maximize the domain-confusion
z = np.random.normal(0.0,1.0,size=(batch_size,100))
discriminator.trainable=False # prevents discriminator from being updated when it should only be the generator
a_loss = adversarial.train_on_batch([z,y_subset],y)
all_a_losses.append(float(a_loss))
if i%2000 == 0:
for _ in range(4): # making 4 plots
this_noise = np.random.normal(0.0,1.0,size=(1,100))
for digit in range(4): # renamed from `i` so the batch index above is not clobbered
this_label = np.array([[digit]])
gen = generator([this_noise,this_label])
gen = gen.numpy()*255
gen.shape=(28,28)
plt.subplot(1,4,digit+1)
plt.imshow(gen,cmap="gray")
plt.show()
print("gen. loss:{}; disc. loss:{}".format(sum(all_a_losses)/len(all_a_losses),sum(all_d_losses)/len(all_d_losses)))
print("--------------------------------------------------------------------------------------------------")
# -
this_noise = np.random.normal(0.0,1.0,size=(1,100))
for i in range(4):
this_label = np.array([[i]])
gen = generator([this_noise,this_label])
gen = gen.numpy()*255
gen.shape=(28,28)
plt.subplot(1,4,i+1)
plt.imshow(gen,cmap="gray")
plt.show()
| generative_adversarial_nets/Conditional_LSGAN.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Quantum Computing
# <br>
# <img src=https://azurecomcdn.azureedge.net/cvt-77f94256f1c090f65c1a7a723791f2b6776f142767230e7a53a61deadacd009a/images/page/overview/what-is-quantum-computing/quantum-use-cases.jpg width="500" height="240" align="center"/>
# <br>
# Quantum computing is an area of computing where quantum physics is utilised to solve complex computational problems. In physics, the word quantum refers to the smallest discrete unit of a physical quantity.
# In computation, a quantum computer uses qubits (quantum bits), whereas a classical computer uses bits. A qubit can be 1, 0, or both at once, whereas a bit can only be 1 or 0 at a time. Classical computers used for day-to-day tasks may lack the computational power to solve complex problems, which is where quantum computers come to the rescue. Adding more transistors to a classical computer increases its computational power only linearly, whereas linking together more qubits increases it exponentially.
# A quantum computer works with the help of concepts of quantum physics such as,
# <b>1. Superposition:</b> Unlike a bit, a qubit can exist in a combination of all possible states of 1 and 0 at the same time until it is finally observed. This capability of a qubit helps a quantum computer increase its computational power because a qubit can take all computational paths simultaneously. Researchers use microwave beams or lasers to achieve this.
# <b>2. Entanglement:</b> When two qubits are entangled together, they form a single system and become correlated. Entanglement helps quantum computers by increasing computational power in solving complex problems since one qubit helps predict details of the entangled qubit.
# <b>3. Interference:</b> Interference provides the ability to bias the measurement of a qubit to the desired state. This is allowed due to the superposition of a qubit.
# ***
# <br>
# # Deutsch's Algorithm
# Deutsch's algorithm is considered the first to show that a quantum computer computes differently from a classical one.
# ***Problem Definition***
# $f$ is a function that takes 0 or 1 as input and returns 0 or 1 as output, and it can be either balanced or constant. A balanced function returns 0 for half of its inputs and 1 for the other half. A constant function returns the same output (either 0 or 1) for every input.
# $f : {0,1}\rightarrow {0,1} $
# - if $f(0)=1, f(1)=0$ it is a balanced function
# - if $f(0)=0, f(1)=1$ it is a balanced function
# - if $f(0)=1, f(1)=1$ it is a constant function
# - if $f(0)=0, f(1)=0$ it is a constant function
# The aim is to find whether a function is a constant or a balanced function.
# ***Classical Solution***
# In the single-bit case, we must check the output twice to decide the function type. When the number of input bits $n$ increases, the cost grows exponentially in the worst case: we have to check the output $2^{n-1}+1$ times to decide whether the function is balanced or constant.
# In the quantum world, Deutsch's algorithm can solve this problem with a single function evaluation.
# ***
# ## Implementation
# $f$ is a function which converts a single bit(0 or 1) to 0 or 1.
# |$$f(0)$$ |$$f(1)$$ |
# |----------|----------|
# | 0 | 0 |
# | 0 | 1 |
# | 1 | 0 |
# | 1 | 1 |
# We have to decide whether $f$ is a constant or a balanced function. If $f(0) = f(1)$ it is a constant function. Else it is a balanced function.
# If the function is a constant function the algorithm will always measure 0, and if it is a balanced function it will always measure 1. This is equivalent to XOR performed on $ f(0) $ and $ f(1) $.
# |$$f(0)$$ |$$f(1)$$ |$$f(0){\oplus}f(1)$$| |
# |----------|----------|--------------------|---------|
# | 0 | 1 | 1 |Balanced
# | 1 | 0 | 1 |Balanced
# | 1 | 1 | 0 |Constant
# | 0 | 0 | 0 |Constant
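# The XOR rule above can be checked with a short classical sketch (the helper `is_balanced` is illustrative, not part of the algorithm):

```python
def is_balanced(f):
    # classical check: two evaluations of f, then XOR
    return f(0) ^ f(1) == 1

print(is_balanced(lambda x: x))      # f(0)=0, f(1)=1 -> True (balanced)
print(is_balanced(lambda x: 1))      # f(0)=1, f(1)=1 -> False (constant)
```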
# If $U_f$ is the oracle it is as follows.
# $U_f :|x\rangle|y\rangle \rightarrow |x\rangle|y { \oplus} f(x)\rangle$
# ***
# <br>
# ## Function Oracles and the Use of Gates
# ### Balanced Function $f(0)=0$ and $f(1)=1$
# $|01\rangle = \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0 \end{bmatrix} \begin{matrix}|00\rangle\\|01\rangle\\|10\rangle\\|11\rangle \end{matrix}$
# $|x\rangle|y\rangle \rightarrow |x\rangle|y { \oplus} f(x)\rangle$
# $\begin{matrix}|00\rangle\\|01\rangle\\|10\rangle\\|11\rangle \end{matrix} \rightarrow \begin{matrix}|00\rangle\\|01\rangle\\|11\rangle\\|10\rangle \end{matrix} $
# The swapping of probability amplitudes in the last two basis states can be done using a CNOT gate.
# ### Constant Function $f(0)=1$ and $f(1)=1$
# $|x\rangle|y\rangle \rightarrow |x\rangle|y { \oplus} f(x)\rangle$
# $\begin{matrix}|00\rangle\\|01\rangle\\|10\rangle\\|11\rangle \end{matrix} \rightarrow \begin{matrix}|01\rangle\\|00\rangle\\|11\rangle\\|10\rangle \end{matrix} $
# The flip of the $y$ qubit can be done using an X gate.
# ### Constant Function $f(0)=0$ and $f(1)=0$
# $|x\rangle|y\rangle \rightarrow |x\rangle|y { \oplus} f(x)\rangle$
# $\begin{matrix}|00\rangle\\|01\rangle\\|10\rangle\\|11\rangle \end{matrix} \rightarrow \begin{matrix}|00\rangle\\|01\rangle\\|10\rangle\\|11\rangle \end{matrix} $
# Since the $y$ qubit remains the same, the identity gate can be used here.
# ### Balanced Function $f(0)=1$ and $f(1)=0$
# $|x\rangle|y\rangle \rightarrow |x\rangle|y { \oplus} f(x)\rangle$
# $\begin{matrix}|00\rangle\\|01\rangle\\|10\rangle\\|11\rangle \end{matrix} \rightarrow \begin{matrix}|01\rangle\\|00\rangle\\|10\rangle\\|11\rangle \end{matrix} $
# This can be achieved by applying a CNOT gate and then an X gate.
# ***
# <br>
# ## Gates
# ### Hadamard Gate
# When the Hadamard gate is applied to $|0\rangle$ and $|1\rangle$,
# $\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix} $
# $H|0\rangle = \frac{1}{\sqrt{2}}(|0\rangle + |1\rangle) $
# The chance of measuring 0 in the state $H|0\rangle$ is $\left(\frac{1}{\sqrt{2}}\right)^2 = \frac{1}{2}$ and the chance of measuring 1 is $\left(\frac{1}{\sqrt{2}}\right)^2 = \frac{1}{2}$
# $\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} 0 \\ 1\end{bmatrix} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ -1 \end{bmatrix} $
# $H|1\rangle = \frac{1}{\sqrt{2}}(|0\rangle - |1\rangle) $
# The chance of measuring 0 in the state $H|1\rangle$ is $\left(\frac{1}{\sqrt{2}}\right)^2 = \frac{1}{2}$ and the chance of measuring 1 is $\left(\frac{-1}{\sqrt{2}}\right)^2 = \frac{1}{2}$
# Therefore the Hadamard gate transforms $|0\rangle$ and $|1\rangle$ into equal superpositions.
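# A quick numerical check of the Hadamard amplitudes above (numpy is used here purely for illustration; it is not part of the qiskit circuits below):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
amps0 = H @ np.array([1.0, 0.0])   # H|0>
amps1 = H @ np.array([0.0, 1.0])   # H|1>
# measurement probabilities are the squared amplitudes: 1/2 each
print(amps0 ** 2, amps1 ** 2)
```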
# ### X Gate
# By applying an X gate to $|0\rangle$ and $|1\rangle$, the amplitudes of their states can be switched.
# $ X|0\rangle = \begin{bmatrix} 0 & 1 \\1 &0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0\\ 1 \end{bmatrix} =|1\rangle $
# Likewise, $ X|1\rangle =|0\rangle $
# ### CNOT Gate
# The CNOT gate flips the second qubit only if the first qubit is 1.
# $CNOT \times |01\rangle = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 0 \\ 1 \\ 0 \\ 0\end{bmatrix}=\begin{bmatrix} 0 \\ 1 \\ 0 \\ 0\end{bmatrix} = |01\rangle$
# $CNOT \times |00\rangle = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \\ 0 \\ 0\end{bmatrix}=\begin{bmatrix} 1 \\ 0 \\ 0 \\ 0\end{bmatrix} = |00\rangle$
# $CNOT \times |10\rangle = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix}0 \\ 0 \\ 1 \\ 0\end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ 0 \\ 1\end{bmatrix} = |11\rangle$
# $CNOT \times |11\rangle = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix}0 \\ 0 \\ 0 \\ 1\end{bmatrix}=\begin{bmatrix} 0 \\ 0 \\ 1 \\ 0\end{bmatrix} = |10\rangle$
# ### Hadamard Followed by CNOT
# $ H{\otimes}H =\frac{1}{2}\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1\\1& 1 & -1 & -1\\1 & -1 & -1 & 1 \end{bmatrix}$
# $CNOT \times H{\otimes}H \times |10\rangle = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & 1 & 0 \end{bmatrix} \frac{1}{2}\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1\\1& 1 & -1 & -1\\1 & -1 & -1 & 1 \end{bmatrix}\begin{bmatrix}0 \\ 0 \\ 1 \\ 0\end{bmatrix}=\begin{bmatrix} \frac{1}{2} \\ \frac{1}{2} \\ - \frac{1}{2} \\ - \frac{1}{2}\end{bmatrix}$
# $CNOT \times H{\otimes}H \times |01\rangle = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0\\0 & 0 & 0 & 1\\0 & 0 & 1 & 0 \end{bmatrix} \frac{1}{2}\begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1\\1& 1 & -1 & -1\\1 & -1 & -1 & 1 \end{bmatrix}\begin{bmatrix}0 \\1 \\ 0 \\ 0\end{bmatrix}=\begin{bmatrix} \frac{1}{2} \\ -\frac{1}{2} \\ - \frac{1}{2} \\ \frac{1}{2}\end{bmatrix}$
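# The matrix products above can be verified numerically with numpy (again only for illustration; the circuits below use qiskit):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
HH = np.kron(H, H)                      # the two-qubit Hadamard, H tensor H
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
ket10 = np.array([0.0, 0.0, 1.0, 0.0])  # |10>
state = CNOT @ HH @ ket10
print(state)                             # [ 0.5  0.5 -0.5 -0.5]
```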
# ***
# <br>
# +
#import Qiskit
import qiskit
# Aer simulator.
import qiskit.providers.aer as aer
# Diagrams.
import matplotlib.pyplot as plt
# Change pyplot style.
plt.style.use('ggplot')
# -
# # Constant Zero Function
# ### 1. Create a 2-qubit circuit
# +
circuit = qiskit.QuantumCircuit(2, 1)
# Initialise the first qubit to |0>.
circuit.initialize([1, 0], 0)
# Initialise the second qubit to |1>.
circuit.initialize([0, 1], 1)
# Draw the circuit.
circuit.draw(output='mpl', scale=1.8)
# -
# $q_0$ is set to $|0\rangle$ and $q_1$ is set to $|1\rangle$
# ### 2. Apply a Hadamard gate to all qubits
# +
circuit.h(0)
circuit.h(1)
# Draw the circuit.
circuit.draw(output='mpl', scale=1.8)
# -
# ### 3. Apply Identity gate to second qubit and Hadamard gate to the first qubit
# Applying the identity gate to the second qubit has no effect on its state. Applying the Hadamard gate again to the first qubit returns it to the $|0\rangle$ state.
# +
circuit.i(1)
circuit.h(0)
# Measure the first qubit.
circuit.measure(0, 0)
# Draw the circuit.
circuit.draw(output='mpl', scale=1.8)
# -
# Create a simulation instance.
simulator = aer.QasmSimulator()
# Transpile the circuit for the simulator.
compcircuit = qiskit.transpile(circuit, simulator)
# Simulate the circuit 1000 times.
job = simulator.run(compcircuit, shots=1000)
# Get the results.
results = job.result()
# Show the result counts.
counts = results.get_counts()
counts
# Display histogram
qiskit.visualization.plot_histogram(counts, figsize=(1, 4))
# Since this is a constant function it will always return 0.
# ***
# # Constant One Function
# +
# Create a quantum circuit.
circuit = qiskit.QuantumCircuit(2, 1)
# Initialise the first qubit to |0>.
circuit.initialize([1, 0], 0)
# Initialise the second qubit to |1>.
circuit.initialize([0, 1], 1)
# Apply a Hadamard gate to each qubit.
circuit.h((0, 1))
# Apply X gate to second qubit.
circuit.x(1)
# Apply another Hadamard gate to the first qubit.
circuit.h(0)
# Measure the first qubit.
circuit.measure(0, 0)
# Draw the circuit.
circuit.draw(output='mpl', scale=1.8)
# -
# +
# Create a simulation instance.
simulator = aer.QasmSimulator()
# Transpile the circuit for the simulator.
compcircuit = qiskit.transpile(circuit, simulator)
# Simulate the circuit 1000 times.
job = simulator.run(compcircuit, shots=1000)
# Get the results.
results = job.result()
# Show the result counts.
counts = results.get_counts()
# Display histogram
qiskit.visualization.plot_histogram(counts, figsize=(1, 4))
# -
# Since this is also a constant function it will always return 0.
# ***
# <br>
# # Balanced Function: *f*(0)=0, *f*(1)=1
# +
# Create a quantum circuit.
circuit = qiskit.QuantumCircuit(2, 1)
# Initialise the first qubit to |0>.
circuit.initialize([1, 0], 0)
# Initialise the second qubit to |1>.
circuit.initialize([0, 1], 1)
# Apply a Hadamard gate to each qubit.
circuit.h((0, 1))
# CNOT gate.
circuit.cnot(0, 1)
# Apply another Hadamard gate to the first qubit.
circuit.h(0)
# Measure the first qubit.
circuit.measure(0, 0)
# Draw the circuit.
circuit.draw(output='mpl', scale=1.8)
# +
# Create a simulation instance.
simulator = aer.QasmSimulator()
# Transpile the circuit for the simulator.
compcircuit = qiskit.transpile(circuit, simulator)
# Simulate the circuit 1000 times.
job = simulator.run(compcircuit, shots=1000)
# Get the results.
results = job.result()
# Show the result counts.
counts = results.get_counts()
# Display histogram
qiskit.visualization.plot_histogram(counts, figsize=(1, 4))
# -
# Since this is a balanced function it will always return 1.
# ***
# <br>
# # Balanced Function: *f*(0)=1, *f*(1)=0
# +
# Create a quantum circuit.
circuit = qiskit.QuantumCircuit(2, 1)
# Initialise the first qubit to |0>.
circuit.initialize([1, 0], 0)
# Initialise the second qubit to |1>.
circuit.initialize([0, 1], 1)
# Apply a Hadamard gate to each qubit.
circuit.h((0, 1))
# CNOT gate.
circuit.cnot(0, 1)
# Apply x to second qubit.
circuit.x(1)
# Apply another Hadamard gate to the first qubit.
circuit.h(0)
# Measure the first qubit.
circuit.measure(0, 0)
# Draw the circuit.
circuit.draw(output='mpl', scale=1.8)
# +
# Create a simulation instance.
simulator = aer.QasmSimulator()
# Transpile the circuit for the simulator.
compcircuit = qiskit.transpile(circuit, simulator)
# Simulate the circuit 1000 times.
job = simulator.run(compcircuit, shots=1000)
# Get the results.
results = job.result()
# Show the result counts.
counts = results.get_counts()
# Display histogram
qiskit.visualization.plot_histogram(counts, figsize=(1, 4))
# -
# Since this is a balanced function it will always return 1.
| quantum-deutsch.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.8.10 64-bit (''base'': conda)'
# name: python3
# ---
# # Pivot_Longer : One function to cover transformations from wide to long form.
import janitor
import pandas as pd
import numpy as np
# Unpivoting (reshaping data from wide to long form) in pandas is executed either through [pd.melt](https://pandas.pydata.org/docs/reference/api/pandas.melt.html), [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html), or [pd.DataFrame.stack](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.stack.html). However, there are scenarios where a few more steps are required to massage the data into the long form that we desire. Take the dataframe below, copied from [Stack Overflow](https://stackoverflow.com/questions/64061588/pandas-melt-multiple-columns-to-tabulate-a-dataset#64062002):
# + tags=[]
df = pd.DataFrame(
{
"id": [1, 2, 3],
"M_start_date_1": [201709, 201709, 201709],
"M_end_date_1": [201905, 201905, 201905],
"M_start_date_2": [202004, 202004, 202004],
"M_end_date_2": [202005, 202005, 202005],
"F_start_date_1": [201803, 201803, 201803],
"F_end_date_1": [201904, 201904, 201904],
"F_start_date_2": [201912, 201912, 201912],
"F_end_date_2": [202007, 202007, 202007],
}
)
df
# -
# Below is a [beautiful solution](https://stackoverflow.com/a/64062027/7175713), from Stack Overflow :
# +
df1 = df.set_index('id')
df1.columns = df1.columns.str.split('_', expand=True)
df1 = (df1.stack(level=[0,2,3])
.sort_index(level=[0,1], ascending=[True, False])
.reset_index(level=[2,3], drop=True)
.sort_index(axis=1, ascending=False)
.rename_axis(['id','cod'])
.reset_index())
df1
# -
# We propose an alternative, based on [pandas melt](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html) and [concat](https://pandas.pydata.org/docs/reference/api/pandas.concat.html), that abstracts the reshaping mechanism, allows the user to focus on the task, can be applied to other scenarios, and is chainable :
# +
result = (df.pivot_longer(index="id",
names_to=("cod", ".value", 'dates'),
names_pattern="(M|F)_(start|end)_(.+)",
sort_by_appearance=True)
.drop(columns='dates')
)
result
# -
df1.equals(result)
# [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html#janitor.pivot_longer) is not a new idea; it is a combination of ideas from R's [tidyr](https://tidyr.tidyverse.org/reference/pivot_longer.html) and [data.table](https://rdatatable.gitlab.io/data.table/) and is built on the powerful pandas' [melt](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html) and [concat](https://pandas.pydata.org/docs/reference/api/pandas.concat.html) functions.
# [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html#janitor.pivot_longer) can melt dataframes easily; It is just a wrapper around pandas' [melt](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html).
#
# [Source Data](https://pandas.pydata.org/pandas-docs/stable/user_guide/reshaping.html#reshaping-by-melt)
# +
index = pd.MultiIndex.from_tuples([('person', 'A'), ('person', 'B')])
df = pd.DataFrame({'first': ['John', 'Mary'],
'last': ['Doe', 'Bo'],
'height': [5.5, 6.0],
'weight': [130, 150]},
index=index)
df
# -
df.pivot_longer(index=['first','last'])
# If you want the data unpivoted in order of appearance, you can set `sort_by_appearance` to `True`:
df.pivot_longer(
index=['first','last'],
sort_by_appearance = True
)
# If you wish to reuse the original index, you can set `ignore_index` to `False`; note that the index labels will be repeated as necessary:
df.pivot_longer(
index=['first','last'],
ignore_index = False
)
# You can also unpivot MultiIndex columns, the same way you would with pandas' [melt](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html#pandas.melt):
#
# [Source Data](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html#pandas.melt)
# +
df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
'B': {0: 1, 1: 3, 2: 5},
'C': {0: 2, 1: 4, 2: 6}})
df.columns = [list('ABC'), list('DEF')]
df
# -
df.pivot_longer(
index = [("A", "D")],
values_to = "num"
)
df.pivot_longer(
index = [("A", "D")],
column_names = [("B", "E")]
)
# And just like [melt](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html#pandas.melt), you can unpivot on a specific level, with `column_level`:
df.pivot_longer(
index = "A",
column_names = "B",
column_level = 0
)
# Note that when unpivoting MultiIndex columns, you need to pass a list of tuples to the `index` or `column_names` parameters.
#
#
# Also, if `names_sep` or `names_pattern` is not None, then unpivoting on MultiIndex columns is not supported.
# You can dynamically select columns, using regular expressions with the `janitor.patterns` function (inspired by R's data.table's [patterns](https://rdatatable.gitlab.io/data.table/reference/patterns.html) function, and is really just a wrapper around `re.compile`), especially if it is a lot of column names, and you are *lazy* like me 😄
# +
url = 'https://github.com/tidyverse/tidyr/raw/master/data-raw/billboard.csv'
df = pd.read_csv(url)
df
# -
# unpivot all columns that start with 'wk'
df.pivot_longer(column_names = janitor.patterns("^(wk)"),
names_to='week')
# You can also use [pyjanitor's](https://pyjanitor-devs.github.io/pyjanitor/) [select_columns](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.select_columns.html#janitor.select_columns) syntax:
df.pivot_longer(column_names = "wk*",
names_to = 'week')
# [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html#janitor.pivot_longer) can also unpivot paired columns. In this regard, it is like pandas' [wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html), but with more flexibility and power. Let's look at an example from pandas' [wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html) docs :
# +
df = pd.DataFrame({
'famid': [1, 1, 1, 2, 2, 2, 3, 3, 3],
'birth': [1, 2, 3, 1, 2, 3, 1, 2, 3],
'ht1': [2.8, 2.9, 2.2, 2, 1.8, 1.9, 2.2, 2.3, 2.1],
'ht2': [3.4, 3.8, 2.9, 3.2, 2.8, 2.4, 3.3, 3.4, 2.9]
})
df
# -
# In the data above, the `height`(ht) is paired with `age`(numbers). [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html) can handle this easily:
pd.wide_to_long(df, stubnames='ht', i=['famid', 'birth'], j='age')
# Now let's see how [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) handles this:
df.pivot_longer(index=['famid','birth'],
names_to=('.value', 'age'),
names_pattern=r"(ht)(\d)")
# The first observable difference is that [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) is method chainable, while [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html) is not. Now, let's learn more about the `.value` variable.
#
#
# When `.value` is used in `names_to`, a pairing is created between `names_to` and `names_pattern`. For the example above, we get this pairing:
#
# {".value": ("ht"), "age": (\d)}
#
# This tells the [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) function to keep values associated with `.value` (`ht`) as the column name, while values not associated with `.value`, in this case the numbers, will be collated under a new column `age`. Internally, pandas' `str.extract` is used to get the capturing groups before reshaping. This level of abstraction, we believe, allows the user to focus on the task and get things done faster.
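# As a minimal illustration (assuming only pandas), the capturing groups that `str.extract` produces for the pattern above look like this:

```python
import pandas as pd

# the columns being unpivoted, and the pattern used above
cols = pd.Series(["ht1", "ht2"])
groups = cols.str.extract(r"(ht)(\d)")
print(groups)
# column 0 ("ht") stays as the column name via `.value`;
# column 1 ("1", "2") is collated into the new `age` column
```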
#
# Note that if you want the data returned in order of appearance you can set `sort_by_appearance` to `True`:
#
df.pivot_longer(
index = ['famid','birth'],
names_to = ('.value', 'age'),
names_pattern = r"(ht)(\d)",
sort_by_appearance = True,
)
# Note that you are likely to get more speed when `sort_by_appearance` is `False`.
#
# Note also that the values in the `age` column are of `object` dtype. You can change the dtype, using pandas' [astype](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.astype.html) method.
# We've already seen that [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html) handles this very well, so why bother? Let's look at another scenario where [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html) would need a few more steps. [Source Data](https://community.rstudio.com/t/pivot-longer-on-multiple-column-sets-pairs/43958):
# +
df = pd.DataFrame(
{
"off_loc": ["A", "B", "C", "D", "E", "F"],
"pt_loc": ["G", "H", "I", "J", "K", "L"],
"pt_lat": [
100.07548220000001,
75.191326,
122.65134479999999,
124.13553329999999,
124.13553329999999,
124.01028909999998,
],
"off_lat": [
121.271083,
75.93845266,
135.043791,
134.51128400000002,
134.484374,
137.962195,
],
"pt_long": [
4.472089953,
-144.387785,
-40.45611048,
-46.07156181,
-46.07156181,
-46.01594293,
],
"off_long": [
-7.188632000000001,
-143.2288569,
21.242563,
40.937416999999996,
40.78472,
22.905889000000002,
],
}
)
df
# -
# We can unpivot with [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html) by first reorganising the columns :
df1 = df.copy()
df1.columns = ["_".join(col.split("_")[::-1])
for col in df1.columns]
df1
# Now, we can unpivot :
# + tags=[]
pd.wide_to_long(
df1.reset_index(),
stubnames=["loc", "lat", "long"],
sep="_",
i="index",
j="set",
suffix=".+",
)
# -
# We can get the same transformed dataframe, with less lines, using [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html):
# + tags=[]
df.pivot_longer(
names_to = ["set", ".value"],
names_pattern = "(.+)_(.+)"
)
# +
# Another way to see the pairings,
# to see what is linked to `.value`,
# names_to = ["set", ".value"]
# names_pattern = "(.+)_(.+)"
# column_names = off_loc
# off_lat
# off_long
# -
# Again, the key here is the `.value` symbol. Pairing `names_to` with `names_pattern` and its results from [pd.str.extract](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.extract.html), we get :
#
# set--> (.+) --> [off, pt] and
# .value--> (.+) --> [loc, lat, long]
#
# All values associated with `.value` (loc, lat, long) remain as column names, while values not associated with `.value` (off, pt) are lumped into a new column `set`.
#
# Notice that we did not have to reset the index - [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) takes care of that internally; [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) allows you to focus on what you want, so you can get it and move on.
# Note that the unpivoting could also have been executed with `names_sep`:
df.pivot_longer(
names_to = ["set", ".value"],
names_sep = "_",
ignore_index = False,
sort_by_appearance = True
)
# Let's look at another example, from [Stack Overflow](https://stackoverflow.com/questions/45123924/convert-pandas-dataframe-from-wide-to-long/45124130) :
df = pd.DataFrame([{'a_1': 2, 'ab_1': 3,
'ac_1': 4, 'a_2': 5,
'ab_2': 6, 'ac_2': 7}])
df
# The data above requires extracting `a`, `ab` and `ac` from `1` and `2`. This is another example of paired columns. We could solve this using [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html); in fact there is a very good solution from [Stack Overflow](https://stackoverflow.com/a/45124775/7175713):
# + tags=[]
df1 = df.copy()
df1['id'] = df1.index
pd.wide_to_long(df1, ['a','ab','ac'],i='id',j='num',sep='_')
# -
# Or you could simply pass the buck to [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html):
# + tags=[]
df.pivot_longer(
names_to = ('.value', 'num'),
names_sep = '_'
)
# -
# In the solution above, we used the `names_sep` argument, as it is more convenient. A few more examples to get you familiar with the `.value` symbol.
#
# [Source Data](https://stackoverflow.com/questions/55403008/pandas-partial-melt-or-group-melt)
# +
df = pd.DataFrame([[1,1,2,3,4,5,6],
[2,7,8,9,10,11,12]],
columns=['id', 'ax','ay','az','bx','by','bz'])
df
# -
df.pivot_longer(
index = 'id',
names_to = ('name', '.value'),
names_pattern = '(.)(.)'
)
# For the code above `.value` is paired with `x`, `y`, `z`(which become the new column names), while `a`, `b` are unpivoted into the `name` column.
# In the dataframe below, we need to unpivot the data, keeping only the suffix `hi`, and pulling out the number between `A` and `g`. [Source Data](https://stackoverflow.com/questions/35929985/melt-a-data-table-with-a-column-pattern)
df = pd.DataFrame([{'id': 1, 'A1g_hi': 2,
'A2g_hi': 3, 'A3g_hi': 4,
'A4g_hi': 5}])
df
df.pivot_longer(
index = 'id',
names_to = ['time','.value'],
    names_pattern = r"A(\d)g_(hi)")
# Let's see an example where we have multiple values in a paired column, and we wish to split them into separate columns. [Source Data](https://stackoverflow.com/questions/64107566/how-to-pivot-longer-and-populate-with-fields-from-column-names-at-the-same-tim?noredirect=1#comment113369419_64107566) :
# +
df = pd.DataFrame(
{
"Sony | TV | Model | value": {0: "A222", 1: "A234", 2: "A4345"},
"Sony | TV | Quantity | value": {0: 5, 1: 5, 2: 4},
"Sony | TV | Max-quant | value": {0: 10, 1: 9, 2: 9},
"Panasonic | TV | Model | value": {0: "T232", 1: "S3424", 2: "X3421"},
"Panasonic | TV | Quantity | value": {0: 1, 1: 5, 2: 1},
"Panasonic | TV | Max-quant | value": {0: 10, 1: 12, 2: 11},
"Sanyo | Radio | Model | value": {0: "S111", 1: "S1s1", 2: "S1s2"},
"Sanyo | Radio | Quantity | value": {0: 4, 1: 2, 2: 4},
"Sanyo | Radio | Max-quant | value": {0: 9, 1: 9, 2: 10},
}
)
df
# -
# The goal is to reshape the data into long format, with separate columns for `Manufacturer` (Sony, ...), `Device` (TV, Radio), `Model` (S3424, ...), maximum quantity, and quantity.
#
# Below is the [accepted solution](https://stackoverflow.com/a/64107688/7175713) on Stack Overflow :
# +
df1 = df.copy()
# Create a multiIndex column header
df1.columns = pd.MultiIndex.from_arrays(
    zip(*df1.columns.str.split(r"\s?\|\s?"))
)
# Reshape the dataframe using
# `set_index`, `droplevel`, and `stack`
(df1.stack([0, 1])
.droplevel(1, axis=1)
.set_index("Model", append=True)
.rename_axis([None, "Manufacturer", "Device", "Model"])
.sort_index(level=[1, 2, 3])
.reset_index()
.drop("level_0", axis=1)
)
# -
# Or, we could use [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html), along with `.value` in `names_to` and a regular expression in `names_pattern` :
df.pivot_longer(
names_to = ("Manufacturer", "Device", ".value"),
names_pattern = r"(.+)\|(.+)\|(.+)\|.*",
)
# The cleanup (removal of whitespace in the column names) is left as an exercise for the reader.
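# One way to do that cleanup is with plain pandas string methods; the frame below is a hypothetical stand-in for the pivoted result, not the actual output above:

```python
import pandas as pd

# stand-in result whose headers and values carry stray spaces from the " | " separators
out = pd.DataFrame({"Manufacturer ": ["Sony "], " Device ": ["TV"], " Model": ["A222"]})
out.columns = out.columns.str.strip()                   # trim the column names
out["Manufacturer"] = out["Manufacturer"].str.strip()   # trim the extracted values
print(list(out.columns))
```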
# What if we are interested in unpivoting only a part of the entire dataframe? [Source Data](https://stackoverflow.com/questions/63044119/converting-wide-format-data-into-long-format-with-multiple-indices-and-grouped-d)
df = pd.DataFrame({'time': [1, 2, 3],
'factor': ['a','a','b'],
'variable1': [0,0,0],
'variable2': [0,0,1],
'variable3': [0,2,0],
'variable4': [2,0,1],
'variable5': [1,0,1],
'variable6': [0,1,1],
'O1V1': [0,0.2,-0.3],
'O1V2': [0,0.4,-0.9],
'O1V3': [0.5,0.2,-0.6],
'O1V4': [0.5,0.2,-0.6],
'O1V5': [0,0.2,-0.3],
'O1V6': [0,0.4,-0.9],
'O1V7': [0.5,0.2,-0.6],
'O1V8': [0.5,0.2,-0.6],
'O2V1': [0,0.5,0.3],
'O2V2': [0,0.2,0.9],
'O2V3': [0.6,0.1,-0.3],
'O2V4': [0.5,0.2,-0.6],
'O2V5': [0,0.5,0.3],
'O2V6': [0,0.2,0.9],
'O2V7': [0.6,0.1,-0.3],
'O2V8': [0.5,0.2,-0.6],
'O3V1': [0,0.7,0.4],
'O3V2': [0.9,0.2,-0.3],
'O3V3': [0.5,0.2,-0.7],
'O3V4': [0.5,0.2,-0.6],
'O3V5': [0,0.7,0.4],
'O3V6': [0.9,0.2,-0.3],
'O3V7': [0.5,0.2,-0.7],
'O3V8': [0.5,0.2,-0.6]})
df
# What is the task? This is copied verbatim from the source:
#
# <blockquote>Each row of the data frame represents a time period. There are multiple 'subjects' being monitored, namely O1, O2, and O3. Each subject has 8 variables being measured. I need to convert this data into long format where each row contains the information for one subject at a given time period, but with only the first 4 subject variables, as well as the extra information about this time period in columns 2-4, but not columns 5-8.</blockquote>
# Below is the accepted solution, using [wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html):
# +
df1 = df.rename(columns={x: x[2:]+x[1:2] for x in df.columns[df.columns.str.startswith('O')]})
df1 = pd.wide_to_long(df1, i=['time', 'factor']+[f'variable{i}' for i in range(1,7)],
j='id', stubnames=[f'V{i}' for i in range(1,9)], suffix='.*')
df1 = (df1.reset_index()
.drop(columns=[f'V{i}' for i in range(5,9)]
+[f'variable{i}' for i in range(3,7)]))
df1
# -
# We can abstract the details and focus on the task with [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html):
df.pivot_longer(
index = slice("time", "variable2"),
column_names = janitor.patterns(".+V[1-4]$"),
names_to = ("id", ".value"),
names_pattern = ".(.)(.+)$",
sort_by_appearance = True
)
# One more example on the `.value` symbol for paired columns [Source Data](https://stackoverflow.com/questions/59477686/python-pandas-melt-single-column-into-two-seperate) :
df = pd.DataFrame({'id': [1, 2],
'A_value': [50, 33],
'D_value': [60, 45]})
df
df.pivot_longer(
index = 'id',
names_to = ('value_type', '.value'),
names_sep = '_'
)
# There are scenarios where we need to unpivot the data, and group values within the column names under new columns. The values in the columns will not become new column names, so we do not need the `.value` symbol. Let's see an example below: [Source Data](https://stackoverflow.com/questions/59550804/melt-column-by-substring-of-the-columns-name-in-pandas-python)
# +
df = pd.DataFrame({'subject': [1, 2],
'A_target_word_gd': [1, 11],
'A_target_word_fd': [2, 12],
'B_target_word_gd': [3, 13],
'B_target_word_fd': [4, 14],
'subject_type': ['mild', 'moderate']})
df
# -
# In the dataframe above, `A` and `B` represent conditions, while the suffixes `gd` and `fd` represent value types. We are not interested in the words in the middle (`_target_word`). We could solve it this way (this is the chosen solution, copied from [Stack Overflow](https://stackoverflow.com/a/59550967/7175713)) :
new_df =(pd.melt(df,
id_vars=['subject_type','subject'],
var_name='abc')
.sort_values(by=['subject', 'subject_type'])
)
new_df['cond']=(new_df['abc']
.apply(lambda x: (x.split('_'))[0])
)
new_df['value_type']=(new_df
.pop('abc')
.apply(lambda x: (x.split('_'))[-1])
)
new_df
# Or, we could just pass the buck to [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html):
df.pivot_longer(
index = ["subject", "subject_type"],
names_to = ("cond", "value_type"),
names_pattern = "([A-Z]).*(gd|fd)",
)
# In the code above, we pass the new column names to `names_to` (`cond`, `value_type`), and pass the groups to be extracted as a regular expression to `names_pattern`.
# Here's another example where [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) abstracts the process and makes reshaping easy.
#
#
# In the dataframe below, we would like to unpivot the data and separate the column names into individual columns (`vault` should be in an `event` column, `2012` in a `year` column, and `f` in a `gender` column). [Source Data](https://dcl-wrangle.stanford.edu/pivot-advanced.html)
df = pd.DataFrame(
{
"country": ["United States", "Russia", "China"],
"vault_2012_f": [
48.132,
46.366,
44.266,
],
"vault_2012_m": [46.632, 46.866, 48.316],
"vault_2016_f": [
46.866,
45.733,
44.332,
],
"vault_2016_m": [45.865, 46.033, 45.0],
"floor_2012_f": [45.366, 41.599, 40.833],
"floor_2012_m": [45.266, 45.308, 45.133],
"floor_2016_f": [45.999, 42.032, 42.066],
"floor_2016_m": [43.757, 44.766, 43.799],
}
)
df
# We could achieve this with a combination of [pd.melt](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.melt.html) and pandas string methods (or janitor's [deconcatenate_columns](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.deconcatenate_column.html#janitor.deconcatenate_column) method); or we could, again, pass the buck to [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html):
df.pivot_longer(
index = "country",
names_to = ["event", "year", "gender"],
names_sep = "_",
values_to = "score",
)
# Again, if you want the data returned in order of appearance, you can turn on the `sort_by_appearance` parameter:
df.pivot_longer(
index = "country",
names_to = ["event", "year", "gender"],
names_sep = "_",
values_to = "score",
sort_by_appearance = True
)
# One more feature that [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) offers is to pass a list of regular expressions to `names_pattern`. This comes in handy when one single regex cannot encapsulate similar columns for reshaping to long form. This idea is inspired by the [melt](https://rdatatable.gitlab.io/data.table/reference/melt.data.table.html) function in R's [data.table](https://rdatatable.gitlab.io/data.table/). A couple of examples should make this clear.
#
# [Source Data](https://stackoverflow.com/questions/61138600/tidy-dataset-with-pivot-longer-multiple-columns-into-two-columns)
# +
df = pd.DataFrame(
[{'title': 'Avatar',
'actor_1': 'CCH_Pound…',
'actor_2': 'Joel_Davi…',
'actor_3': 'Wes_Studi',
'actor_1_FB_likes': 1000,
'actor_2_FB_likes': 936,
'actor_3_FB_likes': 855},
{'title': 'Pirates_of_the_Car…',
'actor_1': 'Johnny_De…',
'actor_2': 'Orlando_B…',
'actor_3': 'Jack_Daven…',
'actor_1_FB_likes': 40000,
'actor_2_FB_likes': 5000,
'actor_3_FB_likes': 1000},
{'title': 'The_Dark_Knight_Ri…',
'actor_1': 'Tom_Hardy',
'actor_2': 'Christian…',
'actor_3': 'Joseph_Gor…',
'actor_1_FB_likes': 27000,
'actor_2_FB_likes': 23000,
'actor_3_FB_likes': 23000},
{'title': 'John_Carter',
'actor_1': 'Daryl_Sab…',
'actor_2': 'Samantha_…',
'actor_3': 'Polly_Walk…',
'actor_1_FB_likes': 640,
'actor_2_FB_likes': 632,
'actor_3_FB_likes': 530},
{'title': 'Spider-Man_3',
'actor_1': 'J.K._Simm…',
'actor_2': 'James_Fra…',
'actor_3': 'Kirsten_Du…',
'actor_1_FB_likes': 24000,
'actor_2_FB_likes': 11000,
'actor_3_FB_likes': 4000},
{'title': 'Tangled',
'actor_1': 'Brad_Garr…',
'actor_2': 'Donna_Mur…',
'actor_3': 'M.C._Gainey',
'actor_1_FB_likes': 799,
'actor_2_FB_likes': 553,
'actor_3_FB_likes': 284}]
)
df
# -
# Above, we have a dataframe of movie titles, actors, and their facebook likes. It would be great if we could transform this into a long form, with just the title, the actor names, and the number of likes. Let's look at a possible solution :
#
# First, we reshape the columns, so that the numbers appear at the end.
df1 = df.copy()
pat = r"(?P<actor>.+)_(?P<num>\d)_(?P<likes>.+)"
repl = lambda m: f"""{m.group('actor')}_{m.group('likes')}_{m.group('num')}"""
df1.columns = df1.columns.str.replace(pat, repl, regex=True)
df1
# Now, we can reshape, using [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html) :
pd.wide_to_long(df1,
stubnames = ['actor', 'actor_FB_likes'],
i = 'title',
j = 'group',
sep = '_')
# We could attempt to solve it with [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html), using the `.value` symbol :
df1.pivot_longer(
index = 'title',
names_to = (".value", "group"),
    names_pattern = r"(.+)_(\d)$"
)
# What if we could just get our data in long form without the massaging? We know our data has a pattern to it --> it either ends in a number or in *likes*. Can't we take advantage of that? Yes, we can (I know, I know; it sounds like a campaign slogan 🤪).
df.pivot_longer(
index = 'title',
names_to = ("actor", "num_likes"),
names_pattern = ('\d$', 'likes$'),
)
# A pairing of `names_to` and `names_pattern` results in:
#
# {"actor": '\d$', "num_likes": 'likes$'}
#
# The first regex looks for columns that end with a number, while the other looks for columns that end with *likes*. [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) will then look for columns that end with a number and lump all the values in those columns under the `actor` column, and also look for columns that end with *likes* and combine all the values in those columns into a new column -> `num_likes`. Under the hood, [numpy select](https://numpy.org/doc/stable/reference/generated/numpy.select.html) and [pd.Series.str.contains](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html) are used to pull the columns apart into the new columns.
#
# Again, it is about the goal; we are not interested in the numbers (1, 2, 3), we only need the names of the actors and their facebook likes. [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) aims to give as much flexibility as possible, in addition to ease of use, to allow the end user to focus on the task.
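# The mechanics described above can be sketched with plain numpy and pandas (an illustration of the idea only, not pyjanitor's actual implementation): `np.select` routes each original column to a destination based on which regex it matches first:

```python
import numpy as np
import pandas as pd

cols = pd.Series(["actor_1", "actor_2", "actor_1_FB_likes", "actor_2_FB_likes"])
conditions = [cols.str.contains(r"\d$"),      # ends with a number -> "actor"
              cols.str.contains(r"likes$")]   # ends with "likes"  -> "num_likes"
destinations = np.select(conditions, ["actor", "num_likes"], default="")
print(list(destinations))
```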
#
# Let's take a look at another example. [Source Data](https://stackoverflow.com/questions/60439749/pair-wise-melt-in-pandas-dataframe) :
# +
df = pd.DataFrame({'id': [0, 1],
'Name': ['ABC', 'XYZ'],
'code': [1, 2],
'code1': [4, np.nan],
'code2': ['8', 5],
'type': ['S', 'R'],
'type1': ['E', np.nan],
'type2': ['T', 'U']})
df
# -
# We cannot directly use [pd.wide_to_long](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.wide_to_long.html) here without some massaging, as there is no definite suffix (the first `code` column has no suffix at all); nor can we use `.value`, again because there is no suffix. However, we can see a pattern in which some columns start with `code` and others start with `type`. Let's see how [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) solves this, using a sequence of regular expressions in the `names_pattern` argument:
df.pivot_longer(
index = ["id", "Name"],
names_to = ("code_all", "type_all"),
names_pattern = ("^code", "^type")
)
# The key here is passing the right regular expressions, and ensuring the names in `names_to` are paired with the right regexes in `names_pattern`; as such, every column that starts with `code` will be included in the new `code_all` column, and the same happens to the `type_all` column. Easy and flexible, right?
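# As a cross-check, plain pandas can approximate the same lumping with `pd.lreshape`, which also groups lists of columns under new names (a sketch only; `dropna=False` keeps the rows containing missing values):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'id': [0, 1], 'Name': ['ABC', 'XYZ'],
                   'code': [1, 2], 'code1': [4, np.nan], 'code2': ['8', 5],
                   'type': ['S', 'R'], 'type1': ['E', np.nan], 'type2': ['T', 'U']})

# map each new column name to the list of old columns it should absorb
groups = {"code_all": [c for c in df.columns if c.startswith("code")],
          "type_all": [c for c in df.columns if c.startswith("type")]}
out = pd.lreshape(df, groups, dropna=False)
print(out.shape)
```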
#
# Let's explore another example, from [Stack Overflow](https://stackoverflow.com/questions/12466493/reshaping-multiple-sets-of-measurement-columns-wide-format-into-single-columns) :
# +
df = pd.DataFrame(
[
{
"ID": 1,
"DateRange1Start": "1/1/90",
"DateRange1End": "3/1/90",
"Value1": 4.4,
"DateRange2Start": "4/5/91",
"DateRange2End": "6/7/91",
"Value2": 6.2,
"DateRange3Start": "5/5/95",
"DateRange3End": "6/6/96",
"Value3": 3.3,
}
])
df
# -
# In the dataframe above, we need to reshape the data to have a start date, end date and value. For the `DateRange` columns, the numbers are embedded within the string, while for `value` it is appended at the end. One possible solution is to reshape the columns so that the numbers are at the end :
df1 = df.copy()
pat = r"(?P<head>.+)(?P<num>\d)(?P<tail>.+)"
repl = lambda m: f"""{m.group('head')}{m.group('tail')}{m.group('num')}"""
df1.columns = df1.columns.str.replace(pat, repl, regex=True)
df1
# Now, we can unpivot:
pd.wide_to_long(df1,
stubnames = ['DateRangeStart',
'DateRangeEnd',
'Value'],
i = 'ID',
j = 'num')
# Using the `.value` symbol in pivot_longer:
df1.pivot_longer(
index = 'ID',
names_to = [".value",'num'],
    names_pattern = r"(.+)(\d)$"
)
# Or, we could let pivot_longer worry about the massaging; simply pass `names_pattern` a list of regular expressions that match what we are after:
df.pivot_longer(
index = 'ID',
names_to = ("DateRangeStart", "DateRangeEnd", "Value"),
names_pattern = ("Start$", "End$", "^Value")
)
# The code above looks for columns that end with *Start* (`Start$`) and aggregates all the values in those columns into the `DateRangeStart` column, looks for columns that end with *End* (`End$`) and aggregates their values into the `DateRangeEnd` column, and finally looks for columns that start with *Value* (`^Value`) and aggregates their values into the `Value` column. Just know the patterns, and pair them accordingly. Again, the goal is a focus on the task, to make it simple for the end user.
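# To make "aggregates all the values in those columns" concrete, here is a rough manual equivalent in plain pandas: select each group of columns by regex with `filter`, then stack each group into a single column (an illustration only, on a trimmed copy of the data):

```python
import pandas as pd

df = pd.DataFrame([{
    "ID": 1,
    "DateRange1Start": "1/1/90", "DateRange1End": "3/1/90", "Value1": 4.4,
    "DateRange2Start": "4/5/91", "DateRange2End": "6/7/91", "Value2": 6.2,
}])

# each filter picks one family of columns; stack turns it into one long column
starts = df.filter(regex="Start$").stack().reset_index(drop=True)
ends = df.filter(regex="End$").stack().reset_index(drop=True)
values = df.filter(regex="^Value").stack().reset_index(drop=True)
long_df = pd.DataFrame({"DateRangeStart": starts,
                        "DateRangeEnd": ends,
                        "Value": values})
print(long_df.shape)
```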
# Let's look at another example [Source Data](https://stackoverflow.com/questions/64316129/how-to-efficiently-melt-multiple-columns-using-the-module-melt-in-pandas/64316306#64316306) :
# +
df = pd.DataFrame({'Activity': ['P1', 'P2'],
'General': ['AA', 'BB'],
'm1': ['A1', 'B1'],
't1': ['TA1', 'TB1'],
'm2': ['A2', 'B2'],
't2': ['TA2', 'TB2'],
'm3': ['A3', 'B3'],
't3': ['TA3', 'TB3']})
df
# -
# This is a [solution](https://stackoverflow.com/a/64316306/7175713) provided by yours truly :
(pd.wide_to_long(df,
i = ["Activity", "General"],
stubnames = ["t", "m"],
j = "number")
.set_axis(["Task", "M"],
axis = "columns")
.droplevel(-1)
.reset_index()
)
# Or, we could use [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html), abstract the details, and focus on the task :
df.pivot_longer(
index = ['Activity','General'],
names_pattern = ['^m','^t'],
names_to = ['M','Task']
)
# Alright, let's look at another example:
#
#
# [Source Data](https://stackoverflow.com/questions/64159054/how-do-you-pivot-longer-columns-in-groups)
# +
df = pd.DataFrame({'Name': ['John', 'Chris', 'Alex'],
'activity1': ['Birthday', 'Sleep Over', 'Track Race'],
'number_activity_1': [1, 2, 4],
'attendees1': [14, 18, 100],
'activity2': ['Sleep Over', 'Painting', 'Birthday'],
'number_activity_2': [4, 5, 1],
'attendees2': [10, 8, 5]})
df
# -
# The task here is to unpivot the data, and group the data under three new columns ("activity", "number_activity", and "attendees").
#
# We can see that there is a pattern to the data; let's create a list of regular expressions that match the patterns and pass it to `names_pattern`:
df.pivot_longer(
index = 'Name',
names_to = ('activity','number_activity','attendees'),
names_pattern = ("^activity","^number_activity","^attendees")
)
# Alright, let's look at one final example:
#
#
# [Source Data](https://stackoverflow.com/questions/60387077/reshaping-and-melting-dataframe-whilst-picking-up-certain-regex)
# +
df = pd.DataFrame({'Location': ['Madrid', 'Madrid', 'Rome', 'Rome'],
'Account': ['ABC', 'XYX', 'ABC', 'XYX'],
'Y2019:MTD:January:Expense': [4354, 769867, 434654, 632556456],
'Y2019:MTD:January:Income': [56456, 32556456, 5214, 46724423],
'Y2019:MTD:February:Expense': [235423, 6785423, 235423, 46588]})
df
# +
df.pivot_longer(index = ['Location','Account'],
names_to=("year", "month", ".value"),
names_pattern=r"Y(.+):MTD:(.{3}).+(Income|Expense)",
sort_by_appearance=True)
# -
# [pivot_longer](https://pyjanitor-devs.github.io/pyjanitor/reference/janitor.functions/janitor.pivot_longer.html) does not solve all problems; no function does. Its aim is to make it easy to unpivot dataframes from wide to long form, while offering a lot of flexibility and power.
| examples/notebooks/Pivoting Data from Wide to Long.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# +
# # %load ../../_data/standard_import.txt
# %matplotlib ipympl
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# +
# %run "sandpitmodule.ipynb"
def sandpit_intro():
sp = Sandpit(
lambda x,y: -((x-4.1)**2 + (y-0.9)**2) - 30*np.exp(-((x-1.5)**2 + (y-3.7)**2)/(2*1.4)),
)
sp.draw()
def sandpit_depth_only():
sp = Sandpit(
lambda x,y: -((x-4.1)**2 + (y-0.9)**2) - 30*np.exp(-((x-1.5)**2 + (y-3.7)**2)/(2*1.4)),
)
sp.game_mode = 1
sp.win_text = """
### Congratulations!
Well done, you found the phone.
As you can see, it's much more difficult to find the bottom of the sandpit when you can only sample the depth.
Knowing the Jacobian makes it much easier to decide where to try next.
"""
sp.draw()
def sandpit_multiple_minima():
θ = 2 * np.pi * np.random.random()
u0 = np.random.choice((1,3))
v0 = np.random.choice((1,3))
u1 = (np.random.random() - 0.5)*2/3
v1 = (np.random.random() - 0.5)*2/3
u = lambda x,y : (x - 3)*np.cos(θ) + (y - 3)*np.sin(θ) + 2 + u1
v = lambda x,y : -(x - 3)*np.sin(θ) + (y - 3)*np.cos(θ) + 3 + v1
sp = Sandpit(
lambda x,y:
np.sinc(u(x, y) - 0) * np.sinc(v(x, y) - 0) +
np.sinc(u(x, y) - 2) * np.sinc(v(x, y) - 0) +
np.sinc(u(x, y) - 4) * np.sinc(v(x, y) - 0) +
np.sinc(u(x, y) - 0) * np.sinc(v(x, y) - 2) +
np.sinc(u(x, y) - 2) * np.sinc(v(x, y) - 2) +
np.sinc(u(x, y) - 4) * np.sinc(v(x, y) - 2) +
np.sinc(u(x, y) - 0) * np.sinc(v(x, y) - 4) +
np.sinc(u(x, y) - 2) * np.sinc(v(x, y) - 4) +
np.sinc(u(x, y) - 4) * np.sinc(v(x, y) - 4) +
np.sinc(u(x, y) - 0) * np.sinc(v(x, y) - 6) +
np.sinc(u(x, y) - 2) * np.sinc(v(x, y) - 6) +
-np.sinc(u(x, y) - 1) * np.sinc(v(x, y) - 1) +
-np.sinc(u(x, y) - 3) * np.sinc(v(x, y) - 1) +
-np.sinc(u(x, y) - 1) * np.sinc(v(x, y) - 3) +
-np.sinc(u(x, y) - 3) * np.sinc(v(x, y) - 3) +
-np.sinc(u(x, y) - 1) * np.sinc(v(x, y) - 5) +
-np.sinc(u(x, y) - 3) * np.sinc(v(x, y) - 5) +
-np.sinc(u(x, y) - u0) * np.sinc(v(x, y) - v0)
)
sp.grad_length /= 2
sp.win_text = """
### Congratulations!
Well done, you found the phone.
You may run this example again to find the phone in a different landscape.
Try to think of methods to avoid getting stuck in the local minima when trying to find the global minimum.
"""
sp.draw()
def sandpit_random():
a = np.random.rand(4, 4) * np.outer(np.arange(4) + 1., np.arange(4) + 1.)**-2
φx = 2 * np.pi * np.random.rand(4, 4)
φy = 2 * np.pi * np.random.rand(4, 4)
fn = lambda n,m,x,y: a[n,m] * np.cos(np.pi*n*x/6 + φx[n,m]) * np.cos(np.pi*m*y/6 + φy[n,m])
ff = lambda x,y: (fn(0,0,x,y)+fn(0,1,x,y)+fn(0,2,x,y)+fn(0,3,x,y)+
fn(1,0,x,y)+fn(1,1,x,y)+fn(1,2,x,y)+fn(1,3,x,y)+
fn(2,0,x,y)+fn(2,1,x,y)+fn(2,2,x,y)+fn(2,3,x,y)+
fn(3,0,x,y)+fn(3,1,x,y)+fn(3,2,x,y)+fn(3,3,x,y)+
(1 - (x*(x - 6)*y*(y - 6))/(81))**7 / 9
)
sp = Sandpit(ff)
sp.grad_length *= 1.5
sp.win_text = """
### Congratulations!
Well done, you found the phone.
You may run this example again to find the phone in a different landscape.
"""
sp.draw()
def sandpit_gradient(next_step) :
a = np.random.rand(4, 4) * np.outer(np.arange(4) + 1., np.arange(4) + 1.)**-2
φx = 2 * np.pi * np.random.rand(4, 4)
φy = 2 * np.pi * np.random.rand(4, 4)
fn = lambda n,m,x,y: a[n,m] * np.cos(np.pi*n*x/6 + φx[n,m]) * np.cos(np.pi*m*y/6 + φy[n,m])
ff = lambda x,y: (fn(0,0,x,y)+fn(0,1,x,y)+fn(0,2,x,y)+fn(0,3,x,y)+
fn(1,0,x,y)+fn(1,1,x,y)+fn(1,2,x,y)+fn(1,3,x,y)+
fn(2,0,x,y)+fn(2,1,x,y)+fn(2,2,x,y)+fn(2,3,x,y)+
fn(3,0,x,y)+fn(3,1,x,y)+fn(3,2,x,y)+fn(3,3,x,y)+
(1 - (x*(x - 6)*y*(y - 6))/(81))**7 / 9
)
sp = Sandpit(ff)
sp.game_mode = 2
sp.next_step = next_step
sp.win_text = """
### Congratulations!
Well done, you found the phone.
You may run this example again to find the phone in a different landscape.
"""
sp.draw()
def sandpit_rocks():
a = np.random.rand(4, 4) * np.outer(np.arange(4) + 1., np.arange(4) + 1.)**-2
φx = 2 * np.pi * np.random.rand(4, 4)
φy = 2 * np.pi * np.random.rand(4, 4)
b = np.random.rand(4, 4) * np.outer(np.arange(4) + 1., np.arange(4) + 1.)**-2
θx = 2 * np.pi * np.random.rand(4, 4)
θy = 2 * np.pi * np.random.rand(4, 4)
fn = lambda n,m,x,y: a[n,m] * np.cos(np.pi*n*x/6 + φx[n,m]) * np.cos(np.pi*m*y/6 + φy[n,m])
gn = lambda n,m,x,y: b[n,m]/25 * np.cos(10*np.pi*n*x/6 + θx[n,m]) * np.cos(10*np.pi*m*y/6 + θy[n,m])
ff = lambda x,y: (fn(0,0,x,y)+fn(0,1,x,y)+fn(0,2,x,y)+fn(0,3,x,y)+
fn(1,0,x,y)+fn(1,1,x,y)+fn(1,2,x,y)+fn(1,3,x,y)+
fn(2,0,x,y)+fn(2,1,x,y)+fn(2,2,x,y)+fn(2,3,x,y)+
fn(3,0,x,y)+fn(3,1,x,y)+fn(3,2,x,y)+fn(3,3,x,y)+
(1 - (x*(x - 6)*y*(y - 6))/(81))**7 / 9 +
gn(0,0,x,y)+gn(0,1,x,y)+gn(0,2,x,y)+gn(0,3,x,y)+
gn(1,0,x,y)+gn(1,1,x,y)+gn(1,2,x,y)+gn(1,3,x,y)+
gn(2,0,x,y)+gn(2,1,x,y)+gn(2,2,x,y)+gn(2,3,x,y)+
gn(3,0,x,y)+gn(3,1,x,y)+gn(3,2,x,y)+gn(3,3,x,y)
)
sp = Sandpit(ff)
sp.grad_length *= 1.5
sp.win_text = """
### Congratulations!
Well done, you found the phone.
You may run this example again to find the phone in a different landscape.
"""
sp.draw()
def sandpit_well():
a = np.random.rand(4, 4) * np.outer(np.arange(4) + 1., np.arange(4) + 1.)**-2
φx = 2 * np.pi * np.random.rand(4, 4)
φy = 2 * np.pi * np.random.rand(4, 4)
x0 = 6 * np.random.rand()
y0 = 6 * np.random.rand()
fn = lambda n,m,x,y: a[n,m] * np.cos(np.pi*n*x/6 + φx[n,m]) * np.cos(np.pi*m*y/6 + φy[n,m])
ff = lambda x,y: (fn(0,0,x,y)+fn(0,1,x,y)+fn(0,2,x,y)+fn(0,3,x,y)+
fn(1,0,x,y)+fn(1,1,x,y)+fn(1,2,x,y)+fn(1,3,x,y)+
fn(2,0,x,y)+fn(2,1,x,y)+fn(2,2,x,y)+fn(2,3,x,y)+
fn(3,0,x,y)+fn(3,1,x,y)+fn(3,2,x,y)+fn(3,3,x,y)+
(1 - (x*(x - 6)*y*(y - 6))/(81))**7 / 9 -
0.6*np.exp(-((x-x0)**2+(y-y0)**2)/(2*0.2**2))
)
sp = Sandpit(ff)
sp.grad_length *= 1.5
sp.win_text = """
### Congratulations!
Well done, you found the phone.
You may run this example again to find the phone in a different landscape.
"""
sp.draw()
def sandpit_random_test():
a = np.random.rand(4, 4) * np.outer(np.arange(4) + 1., np.arange(4) + 1.)**-2
φx = 2 * np.pi * np.random.rand(4, 4)
φy = 2 * np.pi * np.random.rand(4, 4)
fn = lambda n,m,x,y: a[n,m] * np.cos(np.pi*n*x/6 + φx[n,m]) * np.cos(np.pi*m*y/6 + φy[n,m])
ff = lambda x,y: (fn(0,0,x,y)+fn(0,1,x,y)+fn(0,2,x,y)+fn(0,3,x,y)+
fn(1,0,x,y)+fn(1,1,x,y)+fn(1,2,x,y)+fn(1,3,x,y)+
fn(2,0,x,y)+fn(2,1,x,y)+fn(2,2,x,y)+fn(2,3,x,y)+
fn(3,0,x,y)+fn(3,1,x,y)+fn(3,2,x,y)+fn(3,3,x,y)+
(1 - (x*(x - 6)*y*(y - 6))/(81))**7 / 9
)
sp = Sandpit(ff)
sp.grad_length *= 1.5
sp.win_text = """
### Congratulations!
Well done, you found the phone.
You may run this example again to find the phone in a different landscape.
"""
sp.game_mode = 1
return sp
# + language="html"
# <style>
# .output_wrapper button.btn.btn-default,
# .output_wrapper .ui-dialog-titlebar,
# span.mpl-message {
# display: none;
# }
# .widget-area {
# display: table-footer-group !important;
# position: relative;
# top: -48px;
# }
# .output_subarea.output_markdown.rendered_html {
# position: relative;
# left: 8em
# }
# div.cell.code_cell.rendered {
# display: table;
# }
# .widget-area button.close {
# display: none
# }
# </style>
# -
| _math/Imperial College London - Math for ML/sandpit-exercises.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
import pandas as pd
# %pylab inline
plt.style.use('ggplot')
#Some default stuff for my plotting
aspect_mult = 0.9
figsize(aspect_mult*16,aspect_mult*9)
linewidth = 3
df = pd.read_csv("data/universities_2016.csv")
df.head()
result = df.groupby('University').size()
result = result.sort_index()
result
result.keys()
plt.barh(range(result.shape[0]),result.values)
plt.yticks(np.arange(result.shape[0])+0.4,result.keys(), rotation=0,fontsize=14)
plt.xlim(0,max(result)+1)
plt.title("DSIDE 2016/2017 University Representation",fontsize=18, color = 'k')
plt.ylabel("Universities",fontsize=16, color = 'k')
plt.xlabel("Number of Students",fontsize=16, color = 'k')
plt.savefig('../images/2016-universities.png',bbox_inches='tight')
| notebooks/.ipynb_checkpoints/universities-2016-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Creating Charts with Matplotlib
# +
import matplotlib.pyplot as plt
fig=plt.figure()
ax1=fig.add_subplot(221)
ax2=fig.add_subplot(222)
ax3=fig.add_subplot(223)
# -
# ### Arranging subplots with the subplots() function
# +
fig,axes = plt.subplots(2,2)
print(type(axes))
print(type(fig))
plt.show()
# -
# ### Applying a style: the ggplot style
# +
plt.style.use('ggplot')
fig = plt.figure()
ax = fig.add_subplot(111)
dat=[0,1]
ax.plot(dat)
plt.show()
# -
# # 1. Creating Line Charts
# #### The plot method accepts lists, Series data, and array data as arguments
# +
import matplotlib.pyplot as plt
plt.style.use('ggplot')
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot([1,3])
plt.show()
# +
import pandas as pd
ser=pd.Series([0,1])
print(ser)
fig=plt.figure()
ax=fig.add_subplot(111)
ax.plot(ser)
plt.show()
# +
fig=plt.figure()
ax=fig.add_subplot(111)
x=[0,2,4]
y=[0,4,2]
ax.plot(x,y)
plt.show()
# -
# #### The plot method can take two arguments, an x variable and a y variable; Series data can be passed as arguments
#
#
#
#
# +
import pandas as pd
fig=plt.figure()
ax=fig.add_subplot(111)
x=pd.Series([0,2,4])
y=pd.Series([0,4,2])
ax.plot(x,y)
plt.show()
# -
# #### Drawing multiple line charts
# +
import pandas as pd
fig=plt.figure()
ax=fig.add_subplot(111)
x=pd.Series([0,2,4])
y1=pd.Series([0,4,2.5])
y2=pd.Series([4,0,1.5])
# call the two methods plot(x,y1) and plot(x,y2) on the ax object
ax.plot(x,y1)
ax.plot(x,y2)
plt.show()
# -
# #### Load the data into a DataFrame and plot it
# +
import os
import pandas as pd
base_url='C:/Users/<NAME>/TextBook-Jupyter-practical-introduction/anime'
anime_stock_returns=os.path.join(base_url,'anime_stock_returns.csv')
print()
df=pd.read_csv(anime_stock_returns,index_col=0,parse_dates=['Date'])
df.head()
# -
print(type(df['IG Port']))
print(type(df['TOEI ANIMATION']))
print(df.index)
# #### df.index holds Datetime-typed data
# +
import matplotlib.pyplot as plt
fig=plt.figure(figsize=(10,4))
ax=fig.add_subplot(111)
ax.plot(df.index,df['TOEI ANIMATION'],label='TOEI ANIMATION')
ax.plot(df.index,df['IG Port'],label='IG Port')
ax.set_title('stock chart')
ax.set_ylabel('percentage')
ax.set_xlabel('month')
ax.legend()
plt.show()
# -
# # 2. Creating Scatter Plots
# +
import numpy as np
np.random.seed(2)
x=np.arange(1,101)
y=4*x*np.random.rand(100)
print(type(x))
print(type(y))
fig=plt.figure()
ax=fig.add_subplot(111)
# pass array data as the arguments
ax.scatter(x,y)
plt.show()
# -
# #### Creating a scatter plot with Series data as arguments
# ##### Reading the CSV file
# +
import os
import pandas as pd
base_url='C:/Users/<NAME>/TextBook-Jupyter-practical-introduction/anime'
anime_master=os.path.join(base_url,'anime_master.csv')
df=pd.read_csv(anime_master,index_col='anime_id')
df.head()
# +
fig=plt.figure(figsize=(10,4))
ax=fig.add_subplot(111)
# pass Series data as the arguments
ax.scatter(df['members'],df['rating'],alpha=0.5)
plt.show()
# -
# ## 2-1. Creating scatter plots by group
# #### Extracting a subset of the data from the DataFrame
# +
df.loc[df['members']>=800000,['name','members']]
# -
df.loc[(df['members']>=600000)&(df['rating']>=8.5),['name','rating']]
types =df['type'].unique()
types
# +
df=pd.read_csv(anime_master,index_col='anime_id')
fig = plt.figure(figsize=(10,5))
ax=fig.add_subplot(111)
types =df['type'].unique()
print(types)
print(df['type']=='OVA')
for t in types:
x=df.loc[df['type']==t, 'members']
y=df.loc[df['type']==t, 'rating']
ax.scatter(x,y,alpha=0.5, label=t)
ax.set_title('grouped plot')
ax.set_xlabel('members')
ax.set_ylabel('Rating')
ax.legend(loc='lower right', fontsize=12)
plt.show()
# -
# # 3. Bar Charts
# +
fig=plt.figure()
ax=fig.add_subplot(111)
x=[1,2]
y=[2,3]
ax.bar(x,y)
plt.show()
# +
fig=plt.figure()
ax=fig.add_subplot(111)
x=[1,2]
y=[2,3]
labels=['apple','orange']
ax.bar(x,y,tick_label=labels)
plt.show()
# -
# ## 3-2. Horizontal bar charts
# +
fig=plt.figure()
ax=fig.add_subplot(111)
ax.barh(x,y,tick_label=labels)
plt.show()
# -
# ## 3-3. Bar charts by group
# #### Creating a bar chart with Series data and array data as arguments
# +
import os
import pandas as pd
base_url='C:/Users/kohsuke maeda/TextBook-Jupyter-practical-introduction/anime'
anime_master=os.path.join(base_url,'anime_master.csv')
df=pd.read_csv(anime_master)
fig=plt.figure()
ax=fig.add_subplot(111)
y=df.groupby('type').sum()['members']
x=range(len(y))
# x: array data
# y: Series data
xlabels=y.index
print(type(x))
print(type(y))
ax.bar(x,y,tick_label=xlabels)
ax.set_ylabel('total')
plt.show()
# -
# ## 3-4. Labeled bar charts with multiple groups
# +
import numpy as np
x=[1,2]
y1,y2,y3 = [1,2],[2,4],[3,6]
fig=plt.figure()
ax=fig.add_subplot(111)
w=0.2
ax.bar(x,y1,width=w, label='y1')
ax.bar(np.array(x)+w, y2, width=w, label='y2')
ax.bar(np.array(x)+w*2,y3,width=w,label='y3')
ax.legend()
plt.show()
# -
# #### A practical example of a labeled multi-group bar chart
# +
import os
import pandas as pd
base_url='C:/Users/<NAME>/TextBook-Jupyter-practical-introduction/anime'
anime_genre_top10=os.path.join(base_url,'anime_genre_top10_pivoted.csv')
df=pd.read_csv(anime_genre_top10,index_col='genre')
print(df.head())
print(df.shape)
# -
# #### The grouped totals, then replotted with a logarithmic y-axis
# +
import matplotlib.pyplot as plt
import numpy as np
fig = plt.figure(figsize=(18,3))
ax=fig.add_subplot(111)
print(len(df))
print(df.columns)
wt=np.array(range(len(df)))
w=0.1
print(wt)
for i in df.columns:
    ax.bar(wt, df[i], width=w, label=i)
    wt = wt + w
ax.set_xticks(np.array(range(len(df))))
ax.set_xticklabels(df.index,ha='left')
ax.set_ylabel('total')
ax.legend()
plt.show()
# +
fig = plt.figure(figsize=(18,3))
ax=fig.add_subplot(111)
print(len(df))
print(df.columns)
wt=np.array(range(len(df)))
w=0.1
print(wt)
for i in df.columns:
    ax.bar(wt, df[i], width=w, label=i)
    wt = wt + w
ax.set_xticks(np.array(range(len(df))))
ax.set_xticklabels(df.index, ha='left')
ax.set_ylabel('log-total')
ax.set_yscale('log')
ax.legend()
plt.show()
# -
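pandas can draw the same grouped, log-scaled bar chart in a single call, handling the bar offsets and tick labels itself. A sketch with hypothetical genre totals standing in for `anime_genre_top10_pivoted.csv`:

```python
import pandas as pd

# Hypothetical totals per genre and type (the real data comes from the CSV).
df = pd.DataFrame({'TV': [120, 80, 60], 'Movie': [40, 25, 10]},
                  index=pd.Index(['Action', 'Comedy', 'Drama'], name='genre'))
ax = df.plot.bar(figsize=(8, 3), logy=True)  # grouped bars, log y-axis
ax.set_ylabel('log-total')
```

One column per bar series, one row per tick group; no manual `wt + w` bookkeeping.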
# ## 3-5. Stacked bar charts
# +
import matplotlib.pyplot as plt
import numpy as np
np.random.seed(0)
x=np.arange(5)
y=np.random.rand(15).reshape((3,5))
# y is a (3, 5) matrix
print(y)
# y1, y2, y3 are length-5 vectors
y1,y2,y3 = y
print(np.array(y1))
print(np.array(y2))
print(np.array(y3))
y1b = np.array(y1)
y2b = y1b+np.array(y2)
y3b = y2b+np.array(y3)
fig=plt.figure(figsize=(10,3))
ax=fig.add_subplot(111)
ax.bar(x,y3b,label='y3')
ax.bar(x,y2b,label='y2')
ax.bar(x,y1b,label='y1')
ax.legend()
plt.show()
# -
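Instead of plotting cumulative totals back-to-front as above, `ax.bar` accepts a `bottom=` argument that stacks each series directly on top of the previous ones:

```python
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
x = np.arange(5)
y1, y2, y3 = np.random.rand(15).reshape((3, 5))
fig, ax = plt.subplots(figsize=(10, 3))
ax.bar(x, y1, label='y1')
ax.bar(x, y2, bottom=y1, label='y2')       # start y2 on top of y1
ax.bar(x, y3, bottom=y1 + y2, label='y3')  # start y3 on top of y1 + y2
ax.legend()
```

This keeps the legend order and draw order the same, with no cumulative sums to maintain.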
# # 4. Histograms
# +
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
mu=100
sigma=10
np.random.seed(0)
x=np.random.normal(mu,sigma,10000)
fig=plt.figure()
ax=fig.add_subplot(111)
ax.hist(x)
plt.show()
# -
# #### Changing the number of bins
# +
fig=plt.figure()
ax=fig.add_subplot(111)
ax.hist(x,rwidth=0.9,bins=10)
plt.show()
# -
# #### Increasing bins from 10 to 20 gives a finer-grained histogram
# +
fig=plt.figure()
ax=fig.add_subplot(111)
ax.hist(x,rwidth=0.9,bins=20)
plt.show()
# -
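To inspect the exact bin edges and counts behind `ax.hist`, `np.histogram` computes the same binning without drawing anything:

```python
import numpy as np

np.random.seed(0)
x = np.random.normal(100, 10, 10000)
counts, edges = np.histogram(x, bins=20)  # the binning ax.hist(x, bins=20) uses
print(counts.sum())                       # every sample falls into some bin
print(edges[0], edges[-1])                # the outermost bin edges
```

There is always one more edge than bins, and the counts sum to the sample size.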
# ## 4-1. A practical histogram example
# +
import os
import pandas as pd
base_url='C:/Users/<NAME>/TextBook-Jupyter-practical-introduction/anime'
anime_master=os.path.join(base_url,'anime_master.csv')
df=pd.read_csv(anime_master)
df.head()
# -
# #### Creating a histogram from Series data
# +
fig=plt.figure()
ax=fig.add_subplot(111)
ax.hist(df['rating'],rwidth=1, range=(0,10))
ax.set_title('Rating')
plt.show()
# -
# #### Reducing rwidth from 1 to 0.5 narrows the drawn bars (rwidth is the bar width relative to the bin width)
# +
fig=plt.figure()
ax=fig.add_subplot(111)
ax.hist(df['rating'],rwidth=0.5, range=(0,10))
ax.set_title('Rating')
plt.show()
# -
# #### Widening range from (0, 10) to (0, 20) extends the x-axis range
# +
fig=plt.figure()
ax=fig.add_subplot(111)
ax.hist(df['rating'],rwidth=0.5, range=(0,20))
ax.set_title('Rating')
plt.show()
# -
# # 5. Box plots
# #### Passing a list of lists
# +
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
# a list of lists
x=[[1,2,3,3,11,20],[1,2,9,10,15,16]]
fig=plt.figure()
ax=fig.add_subplot(111)
ax.boxplot(x)
plt.show()
# -
# #### Passing a DataFrame
# +
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
# a list of lists
x=[[1,2,3,3,11,20],[1,2,9,10,15,16]]
df=pd.DataFrame(x)
print(df.head())
print(df.shape)
# +
fig=plt.figure()
ax=fig.add_subplot(111)
ax.boxplot(df)
plt.show()
# -
# ## 5-1. A practical box plot example
# +
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
plt.style.use('ggplot')
base_url='C:/Users/<NAME>/TextBook-Jupyter-practical-introduction/anime'
anime_master=os.path.join(base_url,'anime_master.csv')
df=pd.read_csv(anime_master)
df.head()
# -
# #### Creating a box plot from a list of lists
# +
labels=[]
types_list=[]
for label, df_per_type in df.groupby('type'):
    labels.append(label)
    types_list.append(df_per_type['episodes'].tolist())
type(df_per_type['episodes'])  # each group's 'episodes' column is a pandas Series
fig=plt.figure()
ax=fig.add_subplot(111)
ax.boxplot(types_list,labels=labels)
plt.show()
# -
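The box drawn by `ax.boxplot` summarizes the quartiles of each dataset; `np.percentile` reproduces those summary values numerically:

```python
import numpy as np

episodes = [1, 2, 3, 3, 11, 20]  # one of the lists plotted earlier
q1, median, q3 = np.percentile(episodes, [25, 50, 75])
print(q1, median, q3)            # box bottom, middle line, box top
```

With the default linear interpolation this gives 2.25, 3.0, and 9.0, matching the box edges in the plot.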
type(types_list)
| Matplotlib.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: TensorFlow 2.4 on Python 3.8 & CUDA 11.1
# language: python
# name: python3
# ---
# + [markdown] id="Qofg_NrDnFSH"
# **Chapter 12 – Custom Models and Training with TensorFlow**
# + [markdown] id="oAgZdgLUnFSJ"
# _This notebook contains all the sample code and exercise solutions for Chapter 12._
# + [markdown] id="i8anu_aEnFSJ"
# <table align="left">
# <td>
#     <a target="_blank" href="https://colab.research.google.com/github/rickiepark/handson-ml2/blob/master/12_custom_models_and_training_with_tensorflow.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# </table>
# + [markdown] id="_tnhYgkcnFSK"
# # Setup
# + [markdown] id="Z_MO6EFinFSL"
# First, let's import a few common modules. We make matplotlib plot figures inline and prepare a function to save the figures. We also check that Python 3.5 or later is installed (Python 2.x may work, but it is deprecated, so we strongly recommend Python 3), along with Scikit-Learn ≥0.20 and TensorFlow ≥2.0.
# + id="VBj7sKkwnFSM"
# Python ≥3.5 is required
import sys
assert sys.version_info >= (3, 5)
# Scikit-Learn ≥0.20 is required
import sklearn
assert sklearn.__version__ >= "0.20"
try:
    # # %tensorflow_version is a magic command that only works in Colab.
    # %tensorflow_version 2.x
    pass  # no-op outside Colab, where the magic above stays commented out
except Exception:
    pass
# This notebook requires TensorFlow ≥2.4.
# Most 2.x versions produce much the same results, but with a few bugs.
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.4"
# Common imports
import numpy as np
import os
# To make this notebook's output stable across runs
np.random.seed(42)
tf.random.set_seed(42)
# To plot pretty figures
# %matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "deep"
IMAGES_PATH = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID)
os.makedirs(IMAGES_PATH, exist_ok=True)
def save_fig(fig_id, tight_layout=True, fig_extension="png", resolution=300):
    path = os.path.join(IMAGES_PATH, fig_id + "." + fig_extension)
    print("Saving figure:", fig_id)
    if tight_layout:
        plt.tight_layout()
    plt.savefig(path, format=fig_extension, dpi=resolution)
# + [markdown] id="iHim5E7UnFSN"
# ## Tensors and Operations
# + [markdown] id="RkZT7K1pnFSN"
# ### Tensors
# + id="Jqh1vBv7nFSO" outputId="89bdeae2-2212-4bed-e339-2b79322b966d" colab={"base_uri": "https://localhost:8080/"}
tf.constant([[1., 2., 3.], [4., 5., 6.]]) # a matrix
# + id="zz58NEmvnFSP" outputId="1eb3be74-ddf3-4e97-d19e-047220ddc7ac" colab={"base_uri": "https://localhost:8080/"}
tf.constant(42) # a scalar
# + id="cISdcS3mnFSP" outputId="bdee6051-bb43-4b7c-dbee-7aa379848add" colab={"base_uri": "https://localhost:8080/"}
t = tf.constant([[1., 2., 3.], [4., 5., 6.]])
t
# + id="sCixixr5nFSP" outputId="a5ea39a6-034c-4328-c9ac-e7da005f800a" colab={"base_uri": "https://localhost:8080/"}
t.shape
# + id="uuds8EZnnFSQ" outputId="391b6994-18ac-41c3-8728-71af3352e8e0" colab={"base_uri": "https://localhost:8080/"}
t.dtype
# + [markdown] id="5LcmbcOznFSQ"
# ### Indexing
# + id="rQt1EHV-nFSQ" outputId="f06c31e5-c53f-4318-e86a-fff585ef9335" colab={"base_uri": "https://localhost:8080/"}
t[:, 1:]
# + id="-8jI6UyZnFSR" outputId="57edf7a3-84dc-46e2-9ca4-63e2feb26c0d" colab={"base_uri": "https://localhost:8080/"}
t[..., 1, tf.newaxis]
# + [markdown] id="7Ny0ZhPFnFSR"
# ### Operations
# + id="k_S2puMnnFSR" outputId="fd744bb0-e740-4762-80ec-9e22b4559357" colab={"base_uri": "https://localhost:8080/"}
t + 10
# + id="FQkXtyMFnFSR" outputId="e331d0ac-2858-4e70-8e85-97ca9683c772" colab={"base_uri": "https://localhost:8080/"}
tf.square(t)
# + id="9RgHRMwpnFSR" outputId="646a5b4c-c04c-4f68-b513-e7be218c5b52" colab={"base_uri": "https://localhost:8080/"}
t @ tf.transpose(t)
# + [markdown] id="FepPlGrBnFSS"
# ### Using `keras.backend`
# + id="ll9jO3G6nFSS" outputId="4e09e7a9-a30b-4672-da11-685109b18d81" colab={"base_uri": "https://localhost:8080/"}
from tensorflow import keras
K = keras.backend
K.square(K.transpose(t)) + 10
# + [markdown] id="QrqIuM0NnFSS"
# ### NumPy Conversions
# + id="_J3NWmTVnFSS" outputId="4d10c3d1-a661-46d6-e4e2-9de4083741ef" colab={"base_uri": "https://localhost:8080/"}
a = np.array([2., 4., 5.])
tf.constant(a)
# + id="LSnlGqM4nFSS" outputId="7d120ee6-d494-4e76-925f-13f64b1d0dd2" colab={"base_uri": "https://localhost:8080/"}
t.numpy()
# + id="IpmcaxRRnFSS" outputId="ab99b995-ddd7-454b-ad25-25ed3a2aa8c1" colab={"base_uri": "https://localhost:8080/"}
np.array(t)
# + id="xME7iW_VnFST" outputId="076e8535-e490-4e25-b713-019f5d0e0134" colab={"base_uri": "https://localhost:8080/"}
tf.square(a)
# + id="GaHgPvyNnFST" outputId="4ea536f5-09a1-400a-e90b-a3e84bd1cbbc" colab={"base_uri": "https://localhost:8080/"}
np.square(t)
# + [markdown] id="qEscuNeMnFST"
# ### Type Conversions
# + id="zmBGBSFxnFSU" outputId="c560f0a8-e5c3-40ad-aedc-e7ca447c85ec" colab={"base_uri": "https://localhost:8080/"}
try:
    tf.constant(2.0) + tf.constant(40)
except tf.errors.InvalidArgumentError as ex:
    print(ex)
# + id="1dztSrtpnFSU" outputId="24d5960f-d0b8-4be0-bac7-3615e1a73f87" colab={"base_uri": "https://localhost:8080/"}
try:
    tf.constant(2.0) + tf.constant(40., dtype=tf.float64)
except tf.errors.InvalidArgumentError as ex:
    print(ex)
# + id="bml4toHdnFSU" outputId="4fbd9268-815d-4c81-913a-591a112a16a1" colab={"base_uri": "https://localhost:8080/"}
t2 = tf.constant(40., dtype=tf.float64)
tf.constant(2.0) + tf.cast(t2, tf.float32)
# + [markdown] id="rUX0FqA2nFSU"
# ### Strings
# + id="kmfyUUWDnFSU" outputId="ab1a8bc7-fc20-4b11-8889-e6c8e962f9cf" colab={"base_uri": "https://localhost:8080/"}
tf.constant(b"hello world")
# + id="S4eTX-ZvnFSU" outputId="fe1b8126-82ec-4085-c591-0d60c9f374f1" colab={"base_uri": "https://localhost:8080/"}
tf.constant("café")
# + id="M3FuAE4gnFSV" outputId="84215369-038c-41b3-a35b-fb29fadc32fc" colab={"base_uri": "https://localhost:8080/"}
u = tf.constant([ord(c) for c in "café"])
u
# + id="tg50H0XNnFSV" outputId="4ed9cbb9-a09c-4168-ee44-901c1c2a1832" colab={"base_uri": "https://localhost:8080/"}
b = tf.strings.unicode_encode(u, "UTF-8")
tf.strings.length(b, unit="UTF8_CHAR")
# + id="0gjS_MnenFSV" outputId="24b37c8f-6b91-4796-b68e-0723c6517e58" colab={"base_uri": "https://localhost:8080/"}
tf.strings.unicode_decode(b, "UTF-8")
# + [markdown] id="xIunxUy1nFSV"
# ### String Arrays
# + id="huIxMcD8nFSV"
p = tf.constant(["Café", "Coffee", "caffè", "咖啡"])
# + id="79sKVmlsnFSV" outputId="f630892f-5661-4cf9-e20d-e3df377296bb" colab={"base_uri": "https://localhost:8080/"}
tf.strings.length(p, unit="UTF8_CHAR")
# + id="iZn8rFS3nFSV" outputId="d9c297eb-6a3f-4b2b-c7a3-ccc4dee2aea1" colab={"base_uri": "https://localhost:8080/"}
r = tf.strings.unicode_decode(p, "UTF8")
r
# + id="w_zPFyFCnFSV" outputId="475fcba7-c220-4c49-e49e-dedf29b7418e" colab={"base_uri": "https://localhost:8080/"}
print(r)
# + [markdown] id="Vk6tGFd4nFSW"
# ### Ragged Tensors
# + id="kc0CcQ9-nFSW" outputId="96652dc2-c317-46f8-99dd-e24087c6bf51" colab={"base_uri": "https://localhost:8080/"}
print(r[1])
# + id="RC13J_djnFSW" outputId="fb0c7250-9608-416d-b969-6ca40cf2244f" colab={"base_uri": "https://localhost:8080/"}
print(r[1:3])
# + id="hGTY7YJunFSW" outputId="87121768-2448-4592-f557-04eedfe1e523" colab={"base_uri": "https://localhost:8080/"}
r2 = tf.ragged.constant([[65, 66], [], [67]])
print(tf.concat([r, r2], axis=0))
# + id="jK9hMKkmnFSW" outputId="2b0e09d2-25cd-4481-8cf3-fe03e59c3818" colab={"base_uri": "https://localhost:8080/"}
r3 = tf.ragged.constant([[68, 69, 70], [71], [], [72, 73]])
print(tf.concat([r, r3], axis=1))
# + id="twTle-bpnFSW" outputId="a845e545-1959-4241-c533-96c0c6ae4c11" colab={"base_uri": "https://localhost:8080/"}
tf.strings.unicode_encode(r3, "UTF-8")
# + id="qGBmIeoLnFSW" outputId="ec66f750-2fc3-4fcc-b320-3ffb2b35270c" colab={"base_uri": "https://localhost:8080/"}
r.to_tensor()
# + [markdown] id="tMc8-5sjnFSW"
# ### Sparse Tensors
# + id="71zD-IL-nFSW"
s = tf.SparseTensor(indices=[[0, 1], [1, 0], [2, 3]],
values=[1., 2., 3.],
dense_shape=[3, 4])
# + id="gDXfjz0onFSW" outputId="286ee4b1-a165-459c-8c08-328abb208230" colab={"base_uri": "https://localhost:8080/"}
print(s)
# + id="6EECeDVVnFSX" outputId="bedaca50-177f-40f4-9c41-b21f2f13f222" colab={"base_uri": "https://localhost:8080/"}
tf.sparse.to_dense(s)
# + id="R2glxW27nFSX"
s2 = s * 2.0
# + id="NAYv7KbanFSX" outputId="f15fdcb7-5d3d-418c-e188-058565b8eced" colab={"base_uri": "https://localhost:8080/"}
try:
    s3 = s + 1.
except TypeError as ex:
    print(ex)
# + id="KBvWo_EnnFSX" outputId="62ea5d33-8add-4ce1-ab4b-68dc66af270b" colab={"base_uri": "https://localhost:8080/"}
s4 = tf.constant([[10., 20.], [30., 40.], [50., 60.], [70., 80.]])
tf.sparse.sparse_dense_matmul(s, s4)
# + id="iVef5eaJnFSX" outputId="ecfb8acb-6767-49c2-d254-afe7d8b5886a" colab={"base_uri": "https://localhost:8080/"}
s5 = tf.SparseTensor(indices=[[0, 2], [0, 1]],
values=[1., 2.],
dense_shape=[3, 4])
print(s5)
# + id="RvZ3zednnFSX" outputId="605e4642-1285-4eef-cca8-1f6a5120008f" colab={"base_uri": "https://localhost:8080/"}
try:
    tf.sparse.to_dense(s5)
except tf.errors.InvalidArgumentError as ex:
    print(ex)
# + id="049PReJ_nFSX" outputId="5ae7117f-5231-4421-a8f6-d42b6486e290" colab={"base_uri": "https://localhost:8080/"}
s6 = tf.sparse.reorder(s5)
tf.sparse.to_dense(s6)
# + [markdown] id="kwMI9FoNnFSX"
# ### Sets
# + id="gaYypzW3nFSX" outputId="a8a50dff-d462-409f-8d25-4f965d5a7725" colab={"base_uri": "https://localhost:8080/"}
set1 = tf.constant([[2, 3, 5, 7], [7, 9, 0, 0]])
set2 = tf.constant([[4, 5, 6], [9, 10, 0]])
tf.sparse.to_dense(tf.sets.union(set1, set2))
# + id="ywZU2OS-nFSY" outputId="5ff120ad-06c6-40cc-deaf-96f073507f5e" colab={"base_uri": "https://localhost:8080/"}
tf.sparse.to_dense(tf.sets.difference(set1, set2))
# + id="1C8SMyv-nFSY" outputId="1d1e6ed1-3275-446b-83f9-ef168d10aa16" colab={"base_uri": "https://localhost:8080/"}
tf.sparse.to_dense(tf.sets.intersection(set1, set2))
# + [markdown] id="aHM1uuinnFSY"
# ### Variables
# + id="qSn5cvBrnFSY"
v = tf.Variable([[1., 2., 3.], [4., 5., 6.]])
# + id="OU0pq8XQnFSY" outputId="42d986af-88e0-49b3-993e-e023b65d4f24" colab={"base_uri": "https://localhost:8080/"}
v.assign(2 * v)
# + id="sdy-xfQLnFSY" outputId="e102663e-464b-4b80-f751-f1735c851d30" colab={"base_uri": "https://localhost:8080/"}
v[0, 1].assign(42)
# + id="p-o6a231nFSY" outputId="b57aa092-a87d-45bd-89a9-0d5711b2b206" colab={"base_uri": "https://localhost:8080/"}
v[:, 2].assign([0., 1.])
# + id="qdNmt8zPnFSY" outputId="858b660a-ebe7-4879-d896-5ff1527c0a22" colab={"base_uri": "https://localhost:8080/"}
try:
    v[1] = [7., 8., 9.]
except TypeError as ex:
    print(ex)
# + id="-Vne6RM8nFSZ" outputId="4c4f03f1-f7c9-45e3-8bb5-351f5eabd281" colab={"base_uri": "https://localhost:8080/"}
v.scatter_nd_update(indices=[[0, 0], [1, 2]],
updates=[100., 200.])
# + id="IO7z_1ZxnFSZ" outputId="3db41b44-6b3c-4fde-bc22-b70ed1a744f0" colab={"base_uri": "https://localhost:8080/"}
sparse_delta = tf.IndexedSlices(values=[[1., 2., 3.], [4., 5., 6.]],
indices=[1, 0])
v.scatter_update(sparse_delta)
# + [markdown] id="IG6tPHOxnFSZ"
# ### Tensor Arrays
# + id="4ePYHLefnFSZ"
array = tf.TensorArray(dtype=tf.float32, size=3)
array = array.write(0, tf.constant([1., 2.]))
array = array.write(1, tf.constant([3., 10.]))
array = array.write(2, tf.constant([5., 7.]))
# + id="ZWrJYjaNnFSZ" outputId="5718f4cb-5884-4e08-989e-2ef6d72622d5" colab={"base_uri": "https://localhost:8080/"}
array.read(1)
# + id="qqJJof_tnFSa" outputId="f9fb09d2-c693-4fc2-9bb6-e4e01b0f9c45" colab={"base_uri": "https://localhost:8080/"}
array.stack()
# + id="Af5sF8MhnFSa" outputId="ae026565-749d-4100-bfe0-b32f97b9bba9" colab={"base_uri": "https://localhost:8080/"}
mean, variance = tf.nn.moments(array.stack(), axes=0)
mean
# + id="32DE6kdEnFSa" outputId="bc3ba2d5-c6ff-480f-9554-0c2939f5a223" colab={"base_uri": "https://localhost:8080/"}
variance
# + [markdown] id="XPbiThrvnFSa"
# ## Custom Loss Functions
# + [markdown] id="g6PwQCm3nFSa"
# Let's load and prepare the California housing dataset. We first load it, then split it into a training set, a validation set, and a test set. Finally, we scale it:
# + id="4Ze9UWvrnFSa" outputId="853c189b-f078-47dc-d0bf-92fde2030287" colab={"base_uri": "https://localhost:8080/"}
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
housing = fetch_california_housing()
X_train_full, X_test, y_train_full, y_test = train_test_split(
housing.data, housing.target.reshape(-1, 1), random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
X_train_full, y_train_full, random_state=42)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
X_test_scaled = scaler.transform(X_test)
# + id="SGVUuLCGnFSb"
def huber_fn(y_true, y_pred):
    error = y_true - y_pred
    is_small_error = tf.abs(error) < 1
    squared_loss = tf.square(error) / 2
    linear_loss = tf.abs(error) - 0.5
    return tf.where(is_small_error, squared_loss, linear_loss)
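Before wiring `huber_fn` into a Keras model, its piecewise definition can be sanity-checked with a small NumPy mirror (the helper `huber_np` below is ours, not part of the notebook):

```python
import numpy as np

def huber_np(error, threshold=1.0):
    """NumPy mirror of huber_fn, for a quick sanity check."""
    error = np.asarray(error, dtype=float)
    small = np.abs(error) < threshold
    return np.where(small, error**2 / 2,
                    threshold * np.abs(error) - threshold**2 / 2)

# quadratic inside the threshold: 0.5**2 / 2 = 0.125
# linear outside it:              |2.0| - 0.5  = 1.5
print(huber_np([0.5, 2.0]))
```

The two branches meet smoothly at |error| = threshold, which is the point of the Huber loss.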
# + id="pr1qnz3_nFSb" outputId="b1e8262a-bf23-47df-8770-3641de052d55" colab={"base_uri": "https://localhost:8080/", "height": 276}
plt.figure(figsize=(8, 3.5))
z = np.linspace(-4, 4, 200)
plt.plot(z, huber_fn(0, z), "b-", linewidth=2, label="huber($z$)")
plt.plot(z, z**2 / 2, "b:", linewidth=1, label=r"$\frac{1}{2}z^2$")
plt.plot([-1, -1], [0, huber_fn(0., -1.)], "r--")
plt.plot([1, 1], [0, huber_fn(0., 1.)], "r--")
plt.gca().axhline(y=0, color='k')
plt.gca().axvline(x=0, color='k')
plt.axis([-4, 4, 0, 4])
plt.grid(True)
plt.xlabel("$z$")
plt.legend(fontsize=14)
plt.title("Huber loss", fontsize=14)
plt.show()
# + id="jaC8PoiYnFSb"
input_shape = X_train.shape[1:]
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
# + id="SiLyH6ncnFSc"
model.compile(loss=huber_fn, optimizer="nadam", metrics=["mae"])
# + id="tMH11jIEnFSc" outputId="b1b9aa46-2efe-4bd4-eabf-a23b88cd869d" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
# + [markdown] id="GiDKxlo0nFSc"
# ## Saving/Loading Models with Custom Objects
# + id="4vsC0bFSnFSc"
model.save("my_model_with_a_custom_loss.h5")
# + id="DU0qVKUQnFSc"
model = keras.models.load_model("my_model_with_a_custom_loss.h5",
custom_objects={"huber_fn": huber_fn})
# + id="3D4RbreznFSc" outputId="790f5d75-8dde-43b5-b9dc-9f6571c53a4c" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
# + id="XU-KvpzJnFSc"
def create_huber(threshold=1.0):
    def huber_fn(y_true, y_pred):
        error = y_true - y_pred
        is_small_error = tf.abs(error) < threshold
        squared_loss = tf.square(error) / 2
        linear_loss = threshold * tf.abs(error) - threshold**2 / 2
        return tf.where(is_small_error, squared_loss, linear_loss)
    return huber_fn
# + id="pagOIVIvnFSd"
model.compile(loss=create_huber(2.0), optimizer="nadam", metrics=["mae"])
# + id="VpXt3HOpnFSd" outputId="2c90da98-3ac4-4afc-aab9-3e0b375ff03e" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
# + id="or-QRU9DnFSd"
model.save("my_model_with_a_custom_loss_threshold_2.h5")
# + id="VRAUNTJxnFSd"
model = keras.models.load_model("my_model_with_a_custom_loss_threshold_2.h5",
custom_objects={"huber_fn": create_huber(2.0)})
# + id="ZQIXRq1snFSd" outputId="768a1bf8-afc2-4a26-d918-9cb0b6baef1e" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
# + id="3u4WR3alnFSd"
class HuberLoss(keras.losses.Loss):
    def __init__(self, threshold=1.0, **kwargs):
        self.threshold = threshold
        super().__init__(**kwargs)
    def call(self, y_true, y_pred):
        error = y_true - y_pred
        is_small_error = tf.abs(error) < self.threshold
        squared_loss = tf.square(error) / 2
        linear_loss = self.threshold * tf.abs(error) - self.threshold**2 / 2
        return tf.where(is_small_error, squared_loss, linear_loss)
    def get_config(self):
        base_config = super().get_config()
        return {**base_config, "threshold": self.threshold}
# + id="y1JdYpsKnFSd"
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
# + id="MSvlez6KnFSd"
model.compile(loss=HuberLoss(2.), optimizer="nadam", metrics=["mae"])
# + id="AgKhF3hKnFSd" outputId="84fb0610-d355-4fc0-a289-2de847fc33be" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
# + id="uyTT256tnFSe"
model.save("my_model_with_a_custom_loss_class.h5")
# + id="Ot6cK1oYnFSe"
model = keras.models.load_model("my_model_with_a_custom_loss_class.h5",
custom_objects={"HuberLoss": HuberLoss})
# + id="M9OIPCgNnFSe" outputId="922f5b7f-623c-4dca-9680-fd293812be3b" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
# + id="jpZgLrW-nFSe" outputId="1b906c14-5f76-4c05-edbc-3dc56130f2e4" colab={"base_uri": "https://localhost:8080/"}
model.loss.threshold
# + [markdown] id="truxJ-5enFSe"
# ## Other Custom Functions
# + id="o1ZGp6XNnFSe"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="yB3e_4JbnFSe"
def my_softplus(z):  # returns the same value as tf.nn.softplus(z)
    return tf.math.log(tf.exp(z) + 1.0)
def my_glorot_initializer(shape, dtype=tf.float32):
    stddev = tf.sqrt(2. / (shape[0] + shape[1]))
    return tf.random.normal(shape, stddev=stddev, dtype=dtype)
def my_l1_regularizer(weights):
    return tf.reduce_sum(tf.abs(0.01 * weights))
def my_positive_weights(weights):  # returns the same value as tf.nn.relu(weights)
    return tf.where(weights < 0., tf.zeros_like(weights), weights)
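The custom activation above is the softplus function, log(exp(z) + 1). Its formula can be checked quickly in plain NumPy, with no TensorFlow needed:

```python
import numpy as np

z = np.array([-1.0, 0.0, 1.0])
softplus = np.log(np.exp(z) + 1.0)  # same formula as my_softplus
# softplus(0) = log(2) ≈ 0.6931, and softplus is strictly positive
print(softplus)
```

The strictly positive output is why softplus works as an activation for models that must predict positive values.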
# + id="KoYamcfmnFSf"
layer = keras.layers.Dense(1, activation=my_softplus,
kernel_initializer=my_glorot_initializer,
kernel_regularizer=my_l1_regularizer,
kernel_constraint=my_positive_weights)
# + id="OjTWCzNknFSf"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="eOl39GkynFSf"
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1, activation=my_softplus,
kernel_regularizer=my_l1_regularizer,
kernel_constraint=my_positive_weights,
kernel_initializer=my_glorot_initializer),
])
# + id="Pj0gQWgnnFSf"
model.compile(loss="mse", optimizer="nadam", metrics=["mae"])
# + id="c1nX7r3RnFSf" outputId="06dffd5a-07a7-4d09-ea34-acdd8a34a1e8" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
# + id="--fx4XjEnFSf"
model.save("my_model_with_many_custom_parts.h5")
# + id="mzao-J6hnFSf"
model = keras.models.load_model(
"my_model_with_many_custom_parts.h5",
custom_objects={
"my_l1_regularizer": my_l1_regularizer,
"my_positive_weights": my_positive_weights,
"my_glorot_initializer": my_glorot_initializer,
"my_softplus": my_softplus,
})
# + id="jCVaCcxwnFSf"
class MyL1Regularizer(keras.regularizers.Regularizer):
    def __init__(self, factor):
        self.factor = factor
    def __call__(self, weights):
        return tf.reduce_sum(tf.abs(self.factor * weights))
    def get_config(self):
        return {"factor": self.factor}
# + id="8B62Te2ZnFSf"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="D5zjm10KnFSg"
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1, activation=my_softplus,
kernel_regularizer=MyL1Regularizer(0.01),
kernel_constraint=my_positive_weights,
kernel_initializer=my_glorot_initializer),
])
# + id="YffhdIG-nFSg"
model.compile(loss="mse", optimizer="nadam", metrics=["mae"])
# + id="KgQEiQR6nFSg" outputId="4b4043b0-0f69-4636-8544-5cc37ae97ef5" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
# + id="RfI--9P8nFSg"
model.save("my_model_with_many_custom_parts.h5")
# + id="bKcPzkuGnFSh"
model = keras.models.load_model(
"my_model_with_many_custom_parts.h5",
custom_objects={
"MyL1Regularizer": MyL1Regularizer,
"my_positive_weights": my_positive_weights,
"my_glorot_initializer": my_glorot_initializer,
"my_softplus": my_softplus,
})
# + [markdown] id="8i816JgTnFSh"
# ## Custom Metrics
# + id="EZWWawkinFSh"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="K1fe2oELnFSh"
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
# + id="SIHTKaKNnFSh"
model.compile(loss="mse", optimizer="nadam", metrics=[create_huber(2.0)])
# + id="g4l-_an2nFSh" outputId="0b42b348-4068-4e70-f444-1b5883d8607d" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2)
# + [markdown] id="ZqeCDPtvnFSh"
# **Note**: if you use the same function as the loss and as a metric, you may be surprised to see different results. This is generally just due to floating point precision errors: even though the mathematical equations are equivalent, the operations are not run in the same order, which can lead to small differences. Moreover, when using sample weights, there is more than just precision errors:
#
# * the loss since the start of the epoch is the mean of all batch losses seen so far. Each batch loss is the sum of the weighted instance losses divided by the _batch size_ (not the sum of the weights, so the batch loss is not the weighted mean of the losses).
# * the metric since the start of the epoch is the sum of weighted instance losses divided by the sum of all weights seen so far. In other words, it is the weighted mean of all the instance losses. Not the same thing.
#
# If you do the math, you will find that loss = metric * mean of sample weights (plus some floating point precision error).
# + id="9DGQB6V3nFSh"
model.compile(loss=create_huber(2.0), optimizer="nadam", metrics=[create_huber(2.0)])
# + id="EjeISBIOnFSh" outputId="3484be7d-b172-4be9-c8f1-8758a24467bc" colab={"base_uri": "https://localhost:8080/"}
sample_weight = np.random.rand(len(y_train))
history = model.fit(X_train_scaled, y_train, epochs=2, sample_weight=sample_weight)
# + id="eKbXc3FInFSi" outputId="e199de4c-bb02-41d8-8c04-7d88d1bb1988" colab={"base_uri": "https://localhost:8080/"}
history.history["loss"][0], history.history["huber_fn"][0] * sample_weight.mean()
# + [markdown] id="8_Df63lFnFSi"
# ### Streaming metrics
# + id="7wW4uLd9nFSi" outputId="9a8d3c26-bc07-4bf1-d104-83474e997416" colab={"base_uri": "https://localhost:8080/"}
precision = keras.metrics.Precision()
precision([0, 1, 1, 1, 0, 1, 0, 1], [1, 1, 0, 1, 0, 1, 0, 1])
# + id="zed4nGD3nFSi" outputId="4461d5d8-ce7f-4589-b3b5-f134b9102915" colab={"base_uri": "https://localhost:8080/"}
precision([0, 1, 0, 0, 1, 0, 1, 1], [1, 0, 1, 1, 0, 0, 0, 0])
# + id="RqFgNHLPnFSi" outputId="52e2add8-3f95-4498-e6c8-c971d3e637ec" colab={"base_uri": "https://localhost:8080/"}
precision.result()
# + id="DoCvoOt5nFSi" outputId="c847c2d0-e801-4dfe-fb5d-615ee2c0fe21" colab={"base_uri": "https://localhost:8080/"}
precision.variables
# + id="Bb3EbnzGnFSi"
precision.reset_states()
# + [markdown] id="WuhZp4TEnFSi"
# Creating a streaming metric:
# + id="PseG5pPUnFSi"
class HuberMetric(keras.metrics.Metric):
    def __init__(self, threshold=1.0, **kwargs):
        super().__init__(**kwargs)  # handles base args (e.g., dtype)
        self.threshold = threshold
        self.huber_fn = create_huber(threshold)
        self.total = self.add_weight("total", initializer="zeros")
        self.count = self.add_weight("count", initializer="zeros")
    def update_state(self, y_true, y_pred, sample_weight=None):
        metric = self.huber_fn(y_true, y_pred)
        self.total.assign_add(tf.reduce_sum(metric))
        self.count.assign_add(tf.cast(tf.size(y_true), tf.float32))
    def result(self):
        return self.total / self.count
    def get_config(self):
        base_config = super().get_config()
        return {**base_config, "threshold": self.threshold}
# + id="N3oSW-yJnFSi" outputId="c6168f06-a1a0-4a27-c304-b863fc4a0f13" colab={"base_uri": "https://localhost:8080/"}
m = HuberMetric(2.)
# total = 2 * |10 - 2| - 2²/2 = 14
# count = 1
# result = 14 / 1 = 14
m(tf.constant([[2.]]), tf.constant([[10.]]))
# + id="fZxdkhK_nFSi" outputId="096a373e-d98a-4fff-8320-7e78788b6d32" colab={"base_uri": "https://localhost:8080/"}
# total = total + (|1 - 0|² / 2) + (2 * |9.25 - 5| - 2² / 2) = 14 + 7 = 21
# count = count + 2 = 3
# result = total / count = 21 / 3 = 7
m(tf.constant([[0.], [5.]]), tf.constant([[1.], [9.25]]))
m.result()
# + id="V4Yj14hCnFSj" outputId="1e5d3580-85db-4d3e-87d6-6147b9ab4139" colab={"base_uri": "https://localhost:8080/"}
m.variables
# + id="AR4lyDg4nFSj" outputId="e4cc7879-23d8-4a53-dd57-95400cb52a3c" colab={"base_uri": "https://localhost:8080/"}
m.reset_states()
m.variables
# + [markdown] id="NWT6cVPunFSj"
# Let's check that the `HuberMetric` class works well:
# + id="o_JZayvonFSj"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="JI1L36IJnFSj"
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
# + id="6sJSLY0HnFSj"
model.compile(loss=create_huber(2.0), optimizer="nadam", metrics=[HuberMetric(2.0)])
# + id="68uiyAcUnFSj" outputId="fb70d71b-7ded-43dd-c69c-7b0d93e4c53b" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32), epochs=2)
# + id="bxInm5qnnFSj"
model.save("my_model_with_a_custom_metric.h5")
# + id="7JTorko_nFSj"
model = keras.models.load_model("my_model_with_a_custom_metric.h5",
custom_objects={"huber_fn": create_huber(2.0),
"HuberMetric": HuberMetric})
# + id="EO4pBgVxnFSj" outputId="89d66c65-882d-4ac5-f9dd-12527a04f0f8" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32), epochs=2)
# + [markdown] id="mH6-Xw2HnFSj"
# **Warning**: In TensorFlow 2.2, tf.keras adds an extra metric at position 0 of `model.metrics` (see [TF issue #38150](https://github.com/tensorflow/tensorflow/issues/38150)). To access the `HuberMetric`, you must therefore use `model.metrics[-1]` instead of `model.metrics[0]`.
# + id="4KnXWivunFSk" outputId="0404c5ad-3def-49a5-bd71-c5c1469612f4" colab={"base_uri": "https://localhost:8080/"}
model.metrics[-1].threshold
# + [markdown] id="ViTVxzoBnFSk"
# Looks like it works fine! More simply, we could have created the class like this:
# + id="f43iC6UWnFSk"
class HuberMetric(keras.metrics.Mean):
    def __init__(self, threshold=1.0, name='HuberMetric', dtype=None):
        self.threshold = threshold
        self.huber_fn = create_huber(threshold)
        super().__init__(name=name, dtype=dtype)
    def update_state(self, y_true, y_pred, sample_weight=None):
        metric = self.huber_fn(y_true, y_pred)
        super(HuberMetric, self).update_state(metric, sample_weight)
    def get_config(self):
        base_config = super().get_config()
        return {**base_config, "threshold": self.threshold}
# + [markdown] id="FtBpUQKWnFSk"
# This class handles shapes properly, and it also supports sample weights.
# + id="CFi9R0PsnFSk"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="YNuyOkfjnFSk"
model = keras.models.Sequential([
keras.layers.Dense(30, activation="selu", kernel_initializer="lecun_normal",
input_shape=input_shape),
keras.layers.Dense(1),
])
# + id="uLjhx8PCnFSk"
model.compile(loss=keras.losses.Huber(2.0), optimizer="nadam", weighted_metrics=[HuberMetric(2.0)])
# + id="09rxtKpanFSk" outputId="98c06747-17aa-4c68-f947-37b5da6ffe8d" colab={"base_uri": "https://localhost:8080/"}
sample_weight = np.random.rand(len(y_train))
history = model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32),
epochs=2, sample_weight=sample_weight)
# + id="mJ4-Y0E2nFSk" outputId="a0b07151-f308-4339-d169-ca57abf441cf" colab={"base_uri": "https://localhost:8080/"}
history.history["loss"][0], history.history["HuberMetric"][0] * sample_weight.mean()
# + id="adIdqEocnFSk"
model.save("my_model_with_a_custom_metric_v2.h5")
# + id="F_niUjqDnFSk"
model = keras.models.load_model("my_model_with_a_custom_metric_v2.h5",
custom_objects={"HuberMetric": HuberMetric})
# + id="7V2PkjHknFSl" outputId="ceb9ae9b-5821-47b5-f803-50b6820bdd7b" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled.astype(np.float32), y_train.astype(np.float32), epochs=2)
# + id="LiY6tzrrnFSl" outputId="bba96dba-f439-4c16-fee6-88e7166fd4b4" colab={"base_uri": "https://localhost:8080/"}
model.metrics[-1].threshold
# + [markdown] id="wHvCg6OCnFSl"
# ## Custom Layers
# + id="yaYQR8DxnFSm"
exponential_layer = keras.layers.Lambda(lambda x: tf.exp(x))
# + id="JbXxzU34nFSm" outputId="a703b63f-c919-4574-8917-31a565c05890" colab={"base_uri": "https://localhost:8080/"}
exponential_layer([-1., 0., 1.])
# + [markdown] id="mM-jjL37nFSm"
# If a regression model must predict positive values with very different scales (e.g., 0.001, 10., 10000.), it can be useful to add an exponential function at the output layer:
# + id="H_fZsjonnFSm"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="GVDT8nyxnFSm" outputId="576b6915-f500-4db9-b9c5-58107c0b26fa" colab={"base_uri": "https://localhost:8080/"}
model = keras.models.Sequential([
keras.layers.Dense(30, activation="relu", input_shape=input_shape),
keras.layers.Dense(1),
exponential_layer
])
model.compile(loss="mse", optimizer="sgd")
model.fit(X_train_scaled, y_train, epochs=5,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
# + id="BD9ygbq5nFSm"
class MyDense(keras.layers.Layer):
def __init__(self, units, activation=None, **kwargs):
super().__init__(**kwargs)
self.units = units
self.activation = keras.activations.get(activation)
def build(self, batch_input_shape):
self.kernel = self.add_weight(
name="kernel", shape=[batch_input_shape[-1], self.units],
initializer="glorot_normal")
self.bias = self.add_weight(
name="bias", shape=[self.units], initializer="zeros")
super().build(batch_input_shape) # must be at the end
def call(self, X):
return self.activation(X @ self.kernel + self.bias)
def compute_output_shape(self, batch_input_shape):
return tf.TensorShape(batch_input_shape.as_list()[:-1] + [self.units])
def get_config(self):
base_config = super().get_config()
return {**base_config, "units": self.units,
"activation": keras.activations.serialize(self.activation)}
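# + [markdown]
# The point of `get_config()` is serialization: Keras saves the returned dict and later rebuilds the layer by passing it back to the class (via `from_config()`, which by default calls `cls(**config)`). A minimal plain-Python model of that round trip, independent of Keras (`BaseLayer` and `ScaledLayer` are illustrative names, not real Keras classes):

```python
class BaseLayer:
    # Minimal stand-in for the Keras base class: the name is part of the base config.
    def __init__(self, name="layer"):
        self.name = name

    def get_config(self):
        return {"name": self.name}

    @classmethod
    def from_config(cls, config):
        return cls(**config)

class ScaledLayer(BaseLayer):
    def __init__(self, factor=1.0, **kwargs):
        super().__init__(**kwargs)
        self.factor = factor

    def get_config(self):
        # Merge the parent config with this layer's own hyperparameters.
        base_config = super().get_config()
        return {**base_config, "factor": self.factor}

layer = ScaledLayer(factor=2.5, name="scale")
clone = ScaledLayer.from_config(layer.get_config())
assert clone.factor == 2.5 and clone.name == "scale"
```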
# + id="PhAOl_-xnFSm"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="Ux2CIm0QnFSm"
model = keras.models.Sequential([
MyDense(30, activation="relu", input_shape=input_shape),
MyDense(1)
])
# + id="ZzYKMKhfnFSm" outputId="1e267b1f-84f6-466e-d1ac-3a1c17ba70e0" colab={"base_uri": "https://localhost:8080/"}
model.compile(loss="mse", optimizer="nadam")
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
# + id="D3dQrUgwnFSm"
model.save("my_model_with_a_custom_layer.h5")
# + id="vqqfaYhpnFSn"
model = keras.models.load_model("my_model_with_a_custom_layer.h5",
custom_objects={"MyDense": MyDense})
# + id="XE8kj1ZCnFSn"
class MyMultiLayer(keras.layers.Layer):
def call(self, X):
X1, X2 = X
        print("X1.shape: ", X1.shape, " X2.shape: ", X2.shape)  # for debugging custom layers
return X1 + X2, X1 * X2
def compute_output_shape(self, batch_input_shape):
batch_input_shape1, batch_input_shape2 = batch_input_shape
return [batch_input_shape1, batch_input_shape2]
# + [markdown] id="ZRR1NXSpnFSn"
# A custom layer can be called using the functional API like this:
# + id="qlHFFWy8nFSn" outputId="e2b1de3b-ec1a-4fcc-a80b-1a24cc9780ee" colab={"base_uri": "https://localhost:8080/"}
inputs1 = keras.layers.Input(shape=[2])
inputs2 = keras.layers.Input(shape=[2])
outputs1, outputs2 = MyMultiLayer()((inputs1, inputs2))
# + [markdown] id="rnu1SI3pnFSn"
# The `call()` method receives symbolic inputs, whose shape is only partially specified (at this stage, we don't know the batch size, which is why the first dimension is None):
#
# We can also pass actual data to the custom layer. To test this, let's split each dataset's inputs into two parts, with four features each:
# + id="5eIOchaEnFSn" outputId="5e6292de-75f8-41b1-fb1d-a09e0c08694c" colab={"base_uri": "https://localhost:8080/"}
def split_data(data):
columns_count = data.shape[-1]
half = columns_count // 2
return data[:, :half], data[:, half:]
X_train_scaled_A, X_train_scaled_B = split_data(X_train_scaled)
X_valid_scaled_A, X_valid_scaled_B = split_data(X_valid_scaled)
X_test_scaled_A, X_test_scaled_B = split_data(X_test_scaled)
# Print the shapes of the split data
X_train_scaled_A.shape, X_train_scaled_B.shape
# + [markdown] id="2Kjpmi_lnFSn"
# Notice that the shapes are now fully specified:
# + id="QUFlvjdNnFSn" outputId="1482c85a-5fe0-40f1-ccc3-0e5c3e2a0819" colab={"base_uri": "https://localhost:8080/"}
outputs1, outputs2 = MyMultiLayer()((X_train_scaled_A, X_train_scaled_B))
# + [markdown] id="Gj0EYDB9nFSn"
# Let's build a more complete model using the functional API (this is just a toy example, so don't expect amazing performance):
# + id="ITI2PrCbnFSn" outputId="7953ca01-ec92-4a0c-c10b-6fd3733cbdd8" colab={"base_uri": "https://localhost:8080/"}
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
input_A = keras.layers.Input(shape=X_train_scaled_A.shape[-1])
input_B = keras.layers.Input(shape=X_train_scaled_B.shape[-1])
hidden_A, hidden_B = MyMultiLayer()((input_A, input_B))
hidden_A = keras.layers.Dense(30, activation='selu')(hidden_A)
hidden_B = keras.layers.Dense(30, activation='selu')(hidden_B)
concat = keras.layers.Concatenate()((hidden_A, hidden_B))
output = keras.layers.Dense(1)(concat)
model = keras.models.Model(inputs=[input_A, input_B], outputs=[output])
# + id="Hf5QYYp6nFSn"
model.compile(loss='mse', optimizer='nadam')
# + id="TlnQRPmenFSo" outputId="448960f2-5023-4334-f24d-92870bc1b192" colab={"base_uri": "https://localhost:8080/"}
model.fit((X_train_scaled_A, X_train_scaled_B), y_train, epochs=2,
validation_data=((X_valid_scaled_A, X_valid_scaled_B), y_valid))
# + [markdown] id="7wMgYJv8nFSo"
# Now let's create a layer with a different behavior during training and testing:
# + id="NeKKkO32nFSo"
class AddGaussianNoise(keras.layers.Layer):
def __init__(self, stddev, **kwargs):
super().__init__(**kwargs)
self.stddev = stddev
def call(self, X, training=None):
if training:
noise = tf.random.normal(tf.shape(X), stddev=self.stddev)
return X + noise
else:
return X
def compute_output_shape(self, batch_input_shape):
return batch_input_shape
# + [markdown] id="Jh5WfjzunFSo"
# Here's a simple model that uses this custom layer:
# + id="Ln23QcsdnFSo"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
model = keras.models.Sequential([
AddGaussianNoise(stddev=1.0),
keras.layers.Dense(30, activation="selu"),
keras.layers.Dense(1)
])
# + id="Gj0MuOcKnFSo" outputId="61067f1c-8caa-47e8-8853-334f42336ea4" colab={"base_uri": "https://localhost:8080/"}
model.compile(loss="mse", optimizer="nadam")
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
# + [markdown] id="Bo5Aksa0nFSo"
# ## Custom Models
# + id="qZ9CBGORnFSo"
X_new_scaled = X_test_scaled
# + id="ScNXzm1GnFSo"
class ResidualBlock(keras.layers.Layer):
def __init__(self, n_layers, n_neurons, **kwargs):
super().__init__(**kwargs)
self.hidden = [keras.layers.Dense(n_neurons, activation="elu",
kernel_initializer="he_normal")
for _ in range(n_layers)]
def call(self, inputs):
Z = inputs
for layer in self.hidden:
Z = layer(Z)
return inputs + Z
# + id="LjmHGRPOnFSo"
class ResidualRegressor(keras.models.Model):
def __init__(self, output_dim, **kwargs):
super().__init__(**kwargs)
self.hidden1 = keras.layers.Dense(30, activation="elu",
kernel_initializer="he_normal")
self.block1 = ResidualBlock(2, 30)
self.block2 = ResidualBlock(2, 30)
self.out = keras.layers.Dense(output_dim)
def call(self, inputs):
Z = self.hidden1(inputs)
for _ in range(1 + 3):
Z = self.block1(Z)
Z = self.block2(Z)
return self.out(Z)
# + id="oP_n1Vf1nFSo"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="FTnG6qh4nFSp" outputId="22a7e069-f609-4f0b-bdaf-ace1c3411bfa" colab={"base_uri": "https://localhost:8080/"}
model = ResidualRegressor(1)
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X_train_scaled, y_train, epochs=5)
score = model.evaluate(X_test_scaled, y_test)
y_pred = model.predict(X_new_scaled)
# + id="aqkYIQvOnFSp" outputId="3ecc3fbc-7b8a-4b76-d73d-75234cc81740" colab={"base_uri": "https://localhost:8080/"}
model.save("my_custom_model.ckpt")
# + id="j2fgl5UknFSp"
model = keras.models.load_model("my_custom_model.ckpt")
# + id="yaRZkdOCnFSp" outputId="5f0fbc3b-92c5-4a16-d7bf-d7b6ea2327ba" colab={"base_uri": "https://localhost:8080/"}
history = model.fit(X_train_scaled, y_train, epochs=5)
# + [markdown] id="VoxJM__-nFSp"
# Alternatively, the same model can be defined using the Sequential API:
# + id="SNXYstMCnFSp"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="j0lwOBjenFSp"
block1 = ResidualBlock(2, 30)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="elu", kernel_initializer="he_normal"),
block1, block1, block1, block1,
ResidualBlock(2, 30),
keras.layers.Dense(1)
])
# + id="6Ol9WZK-nFSp" outputId="fee72b7b-9161-408a-e1e7-053bd689e584" colab={"base_uri": "https://localhost:8080/"}
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X_train_scaled, y_train, epochs=5)
score = model.evaluate(X_test_scaled, y_test)
y_pred = model.predict(X_new_scaled)
# + [markdown] id="0ivfG84mnFSp"
# ## Losses and Metrics Based on Model Internals
# + [markdown] id="Jbr8J6n1nFSp"
# **Note**: due to an issue in TF 2.2 ([#46858](https://github.com/tensorflow/tensorflow/issues/46858)), `add_loss()` cannot be used together with the `build()` method, so the code below differs from the book: the `reconstruct` layer is created in the constructor instead of in `build()`. As a result, the number of units for this layer must be hard-coded (or passed as a constructor argument).
# + id="YHTjLpT3nFSp"
class ReconstructingRegressor(keras.models.Model):
def __init__(self, output_dim, **kwargs):
super().__init__(**kwargs)
self.hidden = [keras.layers.Dense(30, activation="selu",
kernel_initializer="lecun_normal")
for _ in range(5)]
self.out = keras.layers.Dense(output_dim)
        self.reconstruct = keras.layers.Dense(8)  # workaround for TF issue #46858
self.reconstruction_mean = keras.metrics.Mean(name="reconstruction_error")
    # commented out due to TF issue #46858
# def build(self, batch_input_shape):
# n_inputs = batch_input_shape[-1]
# self.reconstruct = keras.layers.Dense(n_inputs, name='recon')
# super().build(batch_input_shape)
def call(self, inputs, training=None):
Z = inputs
for layer in self.hidden:
Z = layer(Z)
reconstruction = self.reconstruct(Z)
self.recon_loss = 0.05 * tf.reduce_mean(tf.square(reconstruction - inputs))
if training:
            result = self.reconstruction_mean(self.recon_loss)
self.add_metric(result)
return self.out(Z)
def train_step(self, data):
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x)
loss = self.compiled_loss(y, y_pred, regularization_losses=[self.recon_loss])
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
return {m.name: m.result() for m in self.metrics}
# + id="XQah3_DFnFSq"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="fIBmQGYanFSq" outputId="057adc0b-9382-4ae4-ed3a-288d19ae9491" colab={"base_uri": "https://localhost:8080/"}
model = ReconstructingRegressor(1)
model.compile(loss="mse", optimizer="nadam")
history = model.fit(X_train_scaled, y_train, epochs=2)
y_pred = model.predict(X_test_scaled)
# + [markdown] id="rGDLWUXhnFSq"
# ## Computing Gradients Using Autodiff
# + id="NUl_neZpnFSr"
def f(w1, w2):
return 3 * w1 ** 2 + 2 * w1 * w2
# + id="X1cstf8lnFSr" outputId="895a06e6-b96e-47f1-befd-d976f361c2a4" colab={"base_uri": "https://localhost:8080/"}
w1, w2 = 5, 3
eps = 1e-6
(f(w1 + eps, w2) - f(w1, w2)) / eps
# + id="bLBVUzu9nFSs" outputId="2f0d4b05-69df-4623-f8c4-530484a94004" colab={"base_uri": "https://localhost:8080/"}
(f(w1, w2 + eps) - f(w1, w2)) / eps
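# + [markdown]
# As a sanity check, the analytic gradients of `f(w1, w2) = 3*w1**2 + 2*w1*w2` are `df/dw1 = 6*w1 + 2*w2` and `df/dw2 = 2*w1`, i.e. 36 and 10 at (5, 3), matching the finite-difference approximations above. This check runs in plain Python, without TensorFlow:

```python
def f(w1, w2):
    return 3 * w1 ** 2 + 2 * w1 * w2

w1, w2 = 5.0, 3.0
eps = 1e-6
df_dw1 = (f(w1 + eps, w2) - f(w1, w2)) / eps  # ≈ 6*w1 + 2*w2 = 36
df_dw2 = (f(w1, w2 + eps) - f(w1, w2)) / eps  # ≈ 2*w1 = 10
assert abs(df_dw1 - 36.0) < 1e-3
assert abs(df_dw2 - 10.0) < 1e-3
```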
# + id="wGGD9esynFSs"
w1, w2 = tf.Variable(5.), tf.Variable(3.)
with tf.GradientTape() as tape:
z = f(w1, w2)
gradients = tape.gradient(z, [w1, w2])
# + id="WHexGsK7nFSs" outputId="bd147c62-d029-4ca2-a886-52e0680bc8e7" colab={"base_uri": "https://localhost:8080/"}
gradients
# + id="jDJYd-manFSs" outputId="606d4971-cefd-4c1d-e5b8-76c618438f49" colab={"base_uri": "https://localhost:8080/"}
with tf.GradientTape() as tape:
z = f(w1, w2)
dz_dw1 = tape.gradient(z, w1)
try:
dz_dw2 = tape.gradient(z, w2)
except RuntimeError as ex:
print(ex)
# + id="DXw2nnLcnFSs"
with tf.GradientTape(persistent=True) as tape:
z = f(w1, w2)
dz_dw1 = tape.gradient(z, w1)
dz_dw2 = tape.gradient(z, w2) # works now!
del tape
# + id="gJidVNB7nFSs" outputId="d55f4da2-5005-40c3-d52e-488a7f8d561b" colab={"base_uri": "https://localhost:8080/"}
dz_dw1, dz_dw2
# + id="tyXHSb1SnFSs"
c1, c2 = tf.constant(5.), tf.constant(3.)
with tf.GradientTape() as tape:
z = f(c1, c2)
gradients = tape.gradient(z, [c1, c2])
# + id="YrBr2oY_nFSs" outputId="fc41953f-590d-41ba-a3ae-5e38ce635dc7" colab={"base_uri": "https://localhost:8080/"}
gradients
# + id="PxGzWYUbnFSt"
with tf.GradientTape() as tape:
tape.watch(c1)
tape.watch(c2)
z = f(c1, c2)
gradients = tape.gradient(z, [c1, c2])
# + id="sMfZRrDZnFSt" outputId="c80cc715-c429-4011-d89d-9cb91a5b6306" colab={"base_uri": "https://localhost:8080/"}
gradients
# + id="ICrkGqt9nFSt" outputId="8ac4df3e-6f58-4389-b750-a28184ca9dee" colab={"base_uri": "https://localhost:8080/"}
with tf.GradientTape() as tape:
z1 = f(w1, w2 + 2.)
z2 = f(w1, w2 + 5.)
z3 = f(w1, w2 + 7.)
tape.gradient([z1, z2, z3], [w1, w2])
# + id="ZrZ77owOnFSt"
with tf.GradientTape(persistent=True) as tape:
z1 = f(w1, w2 + 2.)
z2 = f(w1, w2 + 5.)
z3 = f(w1, w2 + 7.)
tf.reduce_sum(tf.stack([tape.gradient(z, [w1, w2]) for z in (z1, z2, z3)]), axis=0)
del tape
# + id="5VDDlIkVnFSt"
with tf.GradientTape(persistent=True) as hessian_tape:
with tf.GradientTape() as jacobian_tape:
z = f(w1, w2)
jacobians = jacobian_tape.gradient(z, [w1, w2])
hessians = [hessian_tape.gradient(jacobian, [w1, w2])
for jacobian in jacobians]
del hessian_tape
# + id="HFk0AReSnFSt" outputId="2f375eca-0ff7-4f21-d86a-9d30d1cf12a9" colab={"base_uri": "https://localhost:8080/"}
jacobians
# + id="yZVuaFulnFSt" outputId="7974c698-77d9-4186-9860-ac1cc9111e37" colab={"base_uri": "https://localhost:8080/"}
hessians
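# + [markdown]
# Since `f(w1, w2) = 3*w1**2 + 2*w1*w2` is quadratic, its Hessian is the constant matrix `[[6, 2], [2, 0]]` (the tape returns `None` for the last entry, since the gradient with respect to `w2` does not depend on `w2`). A plain-Python finite-difference check of the first row:

```python
def f(w1, w2):
    return 3 * w1 ** 2 + 2 * w1 * w2

def grad(w1, w2, eps=1e-5):
    # Central-difference gradient of f.
    return ((f(w1 + eps, w2) - f(w1 - eps, w2)) / (2 * eps),
            (f(w1, w2 + eps) - f(w1, w2 - eps)) / (2 * eps))

w1, w2, eps = 5.0, 3.0, 1e-4
g_plus = grad(w1 + eps, w2)
g_minus = grad(w1 - eps, w2)
d2f_dw1dw1 = (g_plus[0] - g_minus[0]) / (2 * eps)  # ≈ 6
d2f_dw1dw2 = (g_plus[1] - g_minus[1]) / (2 * eps)  # ≈ 2
assert abs(d2f_dw1dw1 - 6.0) < 1e-2
assert abs(d2f_dw1dw2 - 2.0) < 1e-2
```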
# + id="Z29IDrk0nFSu" outputId="9ce03f6b-7847-4c46-beb4-eb2a023f1582" colab={"base_uri": "https://localhost:8080/"}
def f(w1, w2):
return 3 * w1 ** 2 + tf.stop_gradient(2 * w1 * w2)
with tf.GradientTape() as tape:
z = f(w1, w2)
tape.gradient(z, [w1, w2])
# + id="O5-EE34anFSu" outputId="b9f03eba-7437-4593-bf6c-3670c8003a00" colab={"base_uri": "https://localhost:8080/"}
x = tf.Variable(100.)
with tf.GradientTape() as tape:
z = my_softplus(x)
tape.gradient(z, [x])
# + id="Da4WXjSvnFSu" outputId="d5c30d95-563a-42a2-b39d-8a7c506f6f87" colab={"base_uri": "https://localhost:8080/"}
tf.math.log(tf.exp(tf.constant(30., dtype=tf.float32)) + 1.)
# + id="isJYdf_DnFSu" outputId="e93f7627-b12c-4d5e-fa0e-3282b37f6160" colab={"base_uri": "https://localhost:8080/"}
x = tf.Variable([100.])
with tf.GradientTape() as tape:
z = my_softplus(x)
tape.gradient(z, [x])
# + id="v4CtFIKOnFSu"
@tf.custom_gradient
def my_better_softplus(z):
exp = tf.exp(z)
def my_softplus_gradients(grad):
return grad / (1 + 1 / exp)
return tf.math.log(exp + 1), my_softplus_gradients
# + id="tBHtwlqMnFSu"
def my_better_softplus(z):
return tf.where(z > 30., z, tf.math.log(tf.exp(z) + 1.))
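# + [markdown]
# The same numerical trick can be checked with the plain `math` module: for large z, `exp(z)` overflows, but `softplus(z) = log(exp(z) + 1)` is indistinguishable from z there, so we can simply switch branches (30 is a safe threshold for float32, where `exp(30) + 1` already rounds to `exp(30)`):

```python
import math

def softplus_stable(z):
    # For large z, math.exp(z) would overflow, but softplus(z) ≈ z there.
    return z if z > 30.0 else math.log(math.exp(z) + 1.0)

assert softplus_stable(1000.0) == 1000.0  # naive math.exp(1000.) raises OverflowError
assert abs(softplus_stable(0.0) - math.log(2.0)) < 1e-12
```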
# + id="mkpIj-COnFSu" outputId="e7e0cfa5-cbe6-43a0-f9e9-5687ed90477d" colab={"base_uri": "https://localhost:8080/"}
x = tf.Variable([1000.])
with tf.GradientTape() as tape:
z = my_better_softplus(x)
z, tape.gradient(z, [x])
# + [markdown] id="XI8B76oxnFSu"
# # Custom Training Loops
# + id="ANdypoJYnFSu"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="CnTqizVwnFSu"
l2_reg = keras.regularizers.l2(0.05)
model = keras.models.Sequential([
keras.layers.Dense(30, activation="elu", kernel_initializer="he_normal",
kernel_regularizer=l2_reg),
keras.layers.Dense(1, kernel_regularizer=l2_reg)
])
# + id="wFdg6eRenFSv"
def random_batch(X, y, batch_size=32):
idx = np.random.randint(len(X), size=batch_size)
return X[idx], y[idx]
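# + [markdown]
# A quick shape check of `random_batch()` on toy NumPy arrays. Note that sampling is with replacement, since `np.random.randint` may repeat indices; the toy data below is purely illustrative:

```python
import numpy as np

def random_batch(X, y, batch_size=32):
    idx = np.random.randint(len(X), size=batch_size)
    return X[idx], y[idx]

X = np.arange(100).reshape(50, 2)  # row i is [2*i, 2*i + 1]
y = np.arange(50)                  # label i for row i
X_batch, y_batch = random_batch(X, y, batch_size=8)
assert X_batch.shape == (8, 2) and y_batch.shape == (8,)
# Each sampled row stays paired with its matching label:
assert (X_batch[:, 0] == y_batch * 2).all()
```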
# + id="LSqc9FVNnFSv"
def print_status_bar(iteration, total, loss, metrics=None):
metrics = " - ".join(["{}: {:.4f}".format(m.name, m.result())
for m in [loss] + (metrics or [])])
end = "" if iteration < total else "\n"
print("\r{}/{} - ".format(iteration, total) + metrics,
end=end)
# + id="I-sdMEKxnFSv" outputId="9d98b783-ffa7-4d12-df12-d961f7a501a0" colab={"base_uri": "https://localhost:8080/"}
import time
mean_loss = keras.metrics.Mean(name="loss")
mean_square = keras.metrics.Mean(name="mean_square")
for i in range(1, 50 + 1):
loss = 1 / i
mean_loss(loss)
mean_square(i ** 2)
print_status_bar(i, 50, mean_loss, [mean_square])
time.sleep(0.05)
# + [markdown] id="T74QlMxbnFSv"
# A fancier version with a progress bar:
# + id="JCoqZPL1nFSv"
def progress_bar(iteration, total, size=30):
running = iteration < total
c = ">" if running else "="
p = (size - 1) * iteration // total
fmt = "{{:-{}d}}/{{}} [{{}}]".format(len(str(total)))
params = [iteration, total, "=" * p + c + "." * (size - p - 1)]
return fmt.format(*params)
# + id="palDPRM_nFSv" outputId="2038faef-dd24-49ab-d42e-72169e9f492c" colab={"base_uri": "https://localhost:8080/", "height": 35}
progress_bar(3500, 10000, size=6)
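# + [markdown]
# The formatting logic can be verified directly: the generated `{{:-{}d}}` spec right-aligns the iteration count to the width of `total`, and the bar fills with `=` characters, with a `>` head while still running. The function is restated here so the cell is self-contained:

```python
def progress_bar(iteration, total, size=30):
    running = iteration < total
    c = ">" if running else "="
    p = (size - 1) * iteration // total
    fmt = "{{:-{}d}}/{{}} [{{}}]".format(len(str(total)))
    params = [iteration, total, "=" * p + c + "." * (size - p - 1)]
    return fmt.format(*params)

assert progress_bar(3500, 10000, size=6) == " 3500/10000 [=>....]"
assert progress_bar(10000, 10000, size=6) == "10000/10000 [======]"
```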
# + id="_INP4LY-nFSv"
def print_status_bar(iteration, total, loss, metrics=None, size=30):
metrics = " - ".join(["{}: {:.4f}".format(m.name, m.result())
for m in [loss] + (metrics or [])])
end = "" if iteration < total else "\n"
print("\r{} - {}".format(progress_bar(iteration, total), metrics), end=end)
# + id="_wv5QF3bnFSv" outputId="91d9da4b-8e24-41c5-e4ac-903f69a0c524" colab={"base_uri": "https://localhost:8080/"}
mean_loss = keras.metrics.Mean(name="loss")
mean_square = keras.metrics.Mean(name="mean_square")
for i in range(1, 50 + 1):
loss = 1 / i
mean_loss(loss)
mean_square(i ** 2)
print_status_bar(i, 50, mean_loss, [mean_square])
time.sleep(0.05)
# + id="GY73UpFLnFSv"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="PG-ZQpginFSv" outputId="6d9d9693-adcc-4dac-aec0-199d095b4e8d" colab={"base_uri": "https://localhost:8080/"}
n_epochs = 5
batch_size = 32
n_steps = len(X_train) // batch_size
optimizer = keras.optimizers.Nadam(lr=0.01)
loss_fn = keras.losses.mean_squared_error
mean_loss = keras.metrics.Mean()
metrics = [keras.metrics.MeanAbsoluteError()]
# + id="ITOnR9uLnFSv" outputId="b0a13b06-c286-46c6-f832-791df6a47ef5" colab={"base_uri": "https://localhost:8080/"}
for epoch in range(1, n_epochs + 1):
print("Epoch {}/{}".format(epoch, n_epochs))
for step in range(1, n_steps + 1):
X_batch, y_batch = random_batch(X_train_scaled, y_train)
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
for variable in model.variables:
if variable.constraint is not None:
variable.assign(variable.constraint(variable))
mean_loss(loss)
for metric in metrics:
metric(y_batch, y_pred)
print_status_bar(step * batch_size, len(y_train), mean_loss, metrics)
print_status_bar(len(y_train), len(y_train), mean_loss, metrics)
for metric in [mean_loss] + metrics:
metric.reset_states()
# + id="_VOoVK34nFSw" outputId="7bcb9f91-cced-4c12-e04c-59faa1778280" colab={"base_uri": "https://localhost:8080/", "height": 209, "referenced_widgets": ["<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "b442291bc2fd4afeb00345c601401a48", "d00ac2178fda4662bac2b5894225e695", "d40d0e4d6dbc4fe7837af600a714baf5", "15750aba26bc47b1b03a133a4cadad3a", "844cea147d9d46c081ade1ef0e117c72", "d0295bc6f1bb4ee38d7ac9244ee54d8e", "b531e026313f4933943884f6e1e795f5", "<KEY>", "e09a357ca8bf49d1b9da060f8eb29224", "5f29d13ac54e4d968770d64d1fad99aa", "306c5a5eb7dd4afdaab6886e31e99a02", "b6a618847516464680c2a16aef583371", "75d93ec1d61648679862d7ddee5547b9", "4abc8ad70e6d4112a18aa58394942015", "4bf89d5e43274887bd1724a2d305e0a4", "<KEY>", "<KEY>", "c715ff27d24e45f9b4aacede06814708", "<KEY>", "d0e360ece5814c02ac20a2fe42e5c66f", "51a6f9642c7e45408f3d984b6a25344a", "be5e3d53ea4144539386c29ffef07a3d", "f1e80d88677d40f1bdf9ffcf4fe649d8", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "d4e99b5a737f4a93a8ad75d870dc485a", "d6384c2670af484a8d0ce6019929b947", "<KEY>", "<KEY>", "df7ed54f99b54d2abff6b5bdde6db72e", "645ace54d7b9427aabe2b2dfa6dc6dea", "<KEY>", "<KEY>", "<KEY>", "0797d34c1de449309ddbaebb42e85c07", "74bea35b82934882b9098623f450fab7", "<KEY>", "252a5e08473f4751a6b63bda7ce5055d", "0b1a5bde488248e483f211a75e50b9a7", "<KEY>", "eb43a6e971134e57b35e636ac843b377", "40e7a51a0db544ea80d4f5e3805a2a0d", "<KEY>", "004faff048744caf8c45ec588d15c83c", "434a0e19af3d45cab0e00916a6f1d504", "<KEY>", "<KEY>", "<KEY>", "f087a4deddfb4b13b39d053bc39ed254", "<KEY>", "2d1ae6fa9fe146a9a288fd5ea41667a2", "4fb6a7ada2cf4ab9812a62110edd7169", "19386467e8344d93b420b0f8c2483400", "<KEY>", "b20bc4ddb0004894b3fb69a7dc5a6b37", "6157ade5095b4bda96075e2b5f51df95", "e3c0df173966467db5fb15ded865d25b"]}
try:
from tqdm.notebook import trange
from collections import OrderedDict
with trange(1, n_epochs + 1, desc="All epochs") as epochs:
for epoch in epochs:
with trange(1, n_steps + 1, desc="Epoch {}/{}".format(epoch, n_epochs)) as steps:
for step in steps:
X_batch, y_batch = random_batch(X_train_scaled, y_train)
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
for variable in model.variables:
if variable.constraint is not None:
variable.assign(variable.constraint(variable))
status = OrderedDict()
mean_loss(loss)
status["loss"] = mean_loss.result().numpy()
for metric in metrics:
metric(y_batch, y_pred)
status[metric.name] = metric.result().numpy()
steps.set_postfix(status)
for metric in [mean_loss] + metrics:
metric.reset_states()
except ImportError as ex:
print("To run this cell, please install tqdm, ipywidgets and restart Jupyter")
# + [markdown] id="CnRi0O4YnFSw"
# ## TensorFlow Functions
# + id="Z4ZR1OPsnFSx"
def cube(x):
return x ** 3
# + id="8l01IQPXnFSx" outputId="1ed93f76-dc77-44d6-e02b-0d2f4dd0eeca" colab={"base_uri": "https://localhost:8080/"}
cube(2)
# + id="i7FxLva0nFSx" outputId="c9f65ec5-995d-464a-8835-e1cf7a6dbd21" colab={"base_uri": "https://localhost:8080/"}
cube(tf.constant(2.0))
# + id="T03G4Mt8nFSx" outputId="eb48f1b7-91a0-4558-b7fd-87a72160a584" colab={"base_uri": "https://localhost:8080/"}
tf_cube = tf.function(cube)
tf_cube
# + id="1FjYnl2snFSx" outputId="5ce6e3b9-c3dc-4687-f810-b2003cbd1555" colab={"base_uri": "https://localhost:8080/"}
tf_cube(2)
# + id="CJ98sHrKnFSx" outputId="f6cf2a50-4345-4c52-caef-169b2f08d19c" colab={"base_uri": "https://localhost:8080/"}
tf_cube(tf.constant(2.0))
# + [markdown] id="69nqOhgSnFSx"
# ### TF Functions and Concrete Functions
# + id="TKtHslXenFSx" outputId="ba36d1c1-5305-486a-a8ab-69699c0b3fe5" colab={"base_uri": "https://localhost:8080/"}
concrete_function = tf_cube.get_concrete_function(tf.constant(2.0))
concrete_function.graph
# + id="_rPVLwFKnFSx" outputId="02127582-2606-4521-f4ed-36384f0f33fd" colab={"base_uri": "https://localhost:8080/"}
concrete_function(tf.constant(2.0))
# + id="Vvpikc9TnFSx" outputId="f26069b6-22c4-4351-d186-c758751df25d" colab={"base_uri": "https://localhost:8080/"}
concrete_function is tf_cube.get_concrete_function(tf.constant(2.0))
# + [markdown] id="3D-erxOBnFSx"
# ### Exploring Function Definitions and Graphs
# + id="_IvdEuL2nFSy" outputId="e7f2706a-82e1-44c1-f951-596f220b86ec" colab={"base_uri": "https://localhost:8080/"}
concrete_function.graph
# + id="G6izh2CRnFSy" outputId="749caa01-b752-4265-a212-372fc096e25d" colab={"base_uri": "https://localhost:8080/"}
ops = concrete_function.graph.get_operations()
ops
# + id="AoSFwR_ynFSy" outputId="fdd94e28-e51a-4c02-c32a-9b6c659135e1" colab={"base_uri": "https://localhost:8080/"}
pow_op = ops[2]
list(pow_op.inputs)
# + id="9Fzk_02ynFSy" outputId="7d7f5203-6fe3-42d2-9467-632c972e8b15" colab={"base_uri": "https://localhost:8080/"}
pow_op.outputs
# + id="aGRFv-VxnFSy" outputId="380b405d-5e63-4615-a49a-279199d6522e" colab={"base_uri": "https://localhost:8080/"}
concrete_function.graph.get_operation_by_name('x')
# + id="COu4KwZSnFSy" outputId="2e1730e8-18ab-4ce0-fc8c-2d302ea248aa" colab={"base_uri": "https://localhost:8080/"}
concrete_function.graph.get_tensor_by_name('Identity:0')
# + id="eJy4X6jbnFSy" outputId="428aa840-dcdc-4e48-fc74-97e458373e99" colab={"base_uri": "https://localhost:8080/"}
concrete_function.function_def.signature
# + [markdown] id="9Nh2laLOnFSy"
# ### How TF Functions Trace Python Functions to Extract Their Computation Graphs
# + id="SZK5xGMGnFSy"
@tf.function
def tf_cube(x):
print("print:", x)
return x ** 3
# + id="kZa67mMWnFSy" outputId="da0c33bf-94b3-4889-82a8-ad182f5f7326" colab={"base_uri": "https://localhost:8080/"}
result = tf_cube(tf.constant(2.0))
# + id="0MW_WncUnFSy" outputId="eee3f3d4-a620-4359-f7b3-da5c47d8752b" colab={"base_uri": "https://localhost:8080/"}
result
# + id="N4ejN4cKnFSz" outputId="432b2eab-eeef-4894-ba6e-a7c32e5ff29c" colab={"base_uri": "https://localhost:8080/"}
result = tf_cube(2)
result = tf_cube(3)
result = tf_cube(tf.constant([[1., 2.]])) # New shape: trace!
result = tf_cube(tf.constant([[3., 4.], [5., 6.]])) # New shape: trace!
result = tf_cube(tf.constant([[7., 8.], [9., 10.], [11., 12.]])) # New shape: trace!
# + [markdown] id="OtxmwBibnFSz"
# It is also possible to specify a particular input signature:
# + id="ilXpmEtbnFSz"
@tf.function(input_signature=[tf.TensorSpec([None, 28, 28], tf.float32)])
def shrink(images):
    print("Tracing", images)
    return images[:, ::2, ::2]  # drop half the rows and columns
# + id="_SaQ3wV8nFSz"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="rnZJvhf3nFSz" outputId="26fca1a2-c0cd-45ea-a39e-6f8b57a519b6" colab={"base_uri": "https://localhost:8080/"}
img_batch_1 = tf.random.uniform(shape=[100, 28, 28])
img_batch_2 = tf.random.uniform(shape=[50, 28, 28])
preprocessed_images = shrink(img_batch_1)  # traces the function
preprocessed_images = shrink(img_batch_2)  # reuses the same concrete function
# + id="g0a7ieyFnFSz" outputId="18258769-b77b-4d48-ce1e-7457a7a90821" colab={"base_uri": "https://localhost:8080/"}
img_batch_3 = tf.random.uniform(shape=[2, 2, 2])
try:
    preprocessed_images = shrink(img_batch_3)  # rejects tensors of other shapes or dtypes
except ValueError as ex:
print(ex)
# + [markdown] id="iGEDLTpVnFSz"
# ### Capturing Control Flow with Autograph
# + [markdown] id="tEAizr5BnFSz"
# A "static" `for` loop using `range()`:
# + id="XF0hg7XcnFSz"
@tf.function
def add_10(x):
for i in range(10):
x += 1
return x
# + id="_C_KpXXAnFSz" outputId="ffe73dbb-0a5e-4eb6-f2c0-731a140b6061" colab={"base_uri": "https://localhost:8080/"}
add_10(tf.constant(5))
# + id="ytBDurUinFSz" outputId="2a1da7fe-cfca-4344-b09c-1b9ac44df20e" colab={"base_uri": "https://localhost:8080/"}
add_10.get_concrete_function(tf.constant(5)).graph.get_operations()
# + [markdown] id="cNEX0KW4nFSz"
# A "dynamic" loop using `tf.while_loop()`:
# + id="WmjQ3HJWnFSz"
@tf.function
def add_10(x):
condition = lambda i, x: tf.less(i, 10)
body = lambda i, x: (tf.add(i, 1), tf.add(x, 1))
final_i, final_x = tf.while_loop(condition, body, [tf.constant(0), x])
return final_x
# + id="MTYxWdZanFS0" outputId="3884b63f-6298-4d96-b5c9-8cf13ee507cf" colab={"base_uri": "https://localhost:8080/"}
add_10(tf.constant(5))
# + id="7qYJYD_cnFS0" outputId="3dc3c05c-4f46-48fb-fdee-df40a8299933" colab={"base_uri": "https://localhost:8080/"}
add_10.get_concrete_function(tf.constant(5)).graph.get_operations()
# + [markdown] id="oEcvIYTrnFS0"
# A "dynamic" `for` loop using `tf.range()` (captured by autograph):
# + id="s5E-YpJonFS0"
@tf.function
def add_10(x):
for i in tf.range(10):
x = x + 1
return x
# + id="kxac-wYhnFS0" outputId="2733f1fa-b174-430c-9e49-75b8e85d555a" colab={"base_uri": "https://localhost:8080/"}
add_10.get_concrete_function(tf.constant(0)).graph.get_operations()
# + [markdown] id="jKv6EsoMnFS0"
# ### Handling Variables and Other Resources in TF Functions
# + id="8lALRLVnnFS0"
counter = tf.Variable(0)
@tf.function
def increment(counter, c=1):
return counter.assign_add(c)
# + id="Rb1pt6VInFS0" outputId="26bddd04-3f11-4f64-e48e-d2e3e669ac74" colab={"base_uri": "https://localhost:8080/"}
increment(counter)
increment(counter)
# + id="L2hj3cM9nFS1" outputId="2f6d7af9-b74c-4430-9aab-0923829a23e7" colab={"base_uri": "https://localhost:8080/"}
function_def = increment.get_concrete_function(counter).function_def
function_def.signature.input_arg[0]
# + id="6BQEr96bnFS1"
counter = tf.Variable(0)
@tf.function
def increment(c=1):
return counter.assign_add(c)
# + id="Iq0eZORHnFS1" outputId="ccdc723f-af45-4319-97fc-628a6ef2f273" colab={"base_uri": "https://localhost:8080/"}
increment()
increment()
# + id="m3AP7PgGnFS1" outputId="5d54dc3a-6b79-4528-8084-d96a8e59bfa9" colab={"base_uri": "https://localhost:8080/"}
function_def = increment.get_concrete_function().function_def
function_def.signature.input_arg[0]
# + id="ixmvKjGtnFS1"
class Counter:
def __init__(self):
self.counter = tf.Variable(0)
@tf.function
def increment(self, c=1):
return self.counter.assign_add(c)
# + id="iE07HYZPnFS1" outputId="f6a42171-4846-4d73-b1b7-34206ff69773" colab={"base_uri": "https://localhost:8080/"}
c = Counter()
c.increment()
c.increment()
# + id="rDYlFn4xnFS1" outputId="90c634c6-64ae-4ce6-de19-9e4f7bb8e262" colab={"base_uri": "https://localhost:8080/"}
@tf.function
def add_10(x):
for i in tf.range(10):
x += 1
return x
print(tf.autograph.to_code(add_10.python_function))
# + id="q5-0PLgtnFS2"
def display_tf_code(func):
from IPython.display import display, Markdown
if hasattr(func, "python_function"):
func = func.python_function
code = tf.autograph.to_code(func)
display(Markdown('```python\n{}\n```'.format(code)))
# + id="QLMou_bhnFS2" outputId="14751d22-48d1-4fcc-b61d-54558a752b81" colab={"base_uri": "https://localhost:8080/", "height": 486}
display_tf_code(add_10)
# + [markdown] id="1gFgVxYPnFS2"
# ## Using TF Functions with tf.keras (or Not)
# + [markdown] id="W1lPoDfknFS2"
# By default, tf.keras automatically converts your custom code into TF functions, so you don't need to use `tf.function()`:
# + id="W2OWiyhpnFS2"
# Custom loss function
def my_mse(y_true, y_pred):
    print("Tracing loss my_mse()")
return tf.reduce_mean(tf.square(y_pred - y_true))
# + id="-Vw6sZmSnFS2"
# Custom metric function
def my_mae(y_true, y_pred):
    print("Tracing metric my_mae()")
return tf.reduce_mean(tf.abs(y_pred - y_true))
# + id="i31eB7X0nFS2"
# Custom layer
class MyDense(keras.layers.Layer):
def __init__(self, units, activation=None, **kwargs):
super().__init__(**kwargs)
self.units = units
self.activation = keras.activations.get(activation)
def build(self, input_shape):
self.kernel = self.add_weight(name='kernel',
shape=(input_shape[1], self.units),
initializer='uniform',
trainable=True)
self.biases = self.add_weight(name='bias',
shape=(self.units,),
initializer='zeros',
trainable=True)
super().build(input_shape)
def call(self, X):
        print("Tracing MyDense.call()")
return self.activation(X @ self.kernel + self.biases)
# + id="guEjwBD3nFS2"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="GsUahIc4nFS2"
# Custom model
class MyModel(keras.models.Model):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.hidden1 = MyDense(30, activation="relu")
self.hidden2 = MyDense(30, activation="relu")
self.output_ = MyDense(1)
def call(self, input):
        print("Tracing MyModel.call()")
hidden1 = self.hidden1(input)
hidden2 = self.hidden2(hidden1)
concat = keras.layers.concatenate([input, hidden2])
output = self.output_(concat)
return output
model = MyModel()
# + id="cXNdmo_ZnFS2"
model.compile(loss=my_mse, optimizer="nadam", metrics=[my_mae])
# + id="ZiUY_qRjnFS2" outputId="adb3c763-db1c-4a28-cab4-dc3e6984ae4e" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled, y_train, epochs=2,
validation_data=(X_valid_scaled, y_valid))
model.evaluate(X_test_scaled, y_test)
# + [markdown] id="DqKwtqeZnFS2"
# You can turn this off by creating the model with `dynamic=True` (or by calling `super().__init__(dynamic=True, **kwargs)` in the model's constructor):
# + id="RIT3wTOenFS2"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="NXF_pU75nFS3"
model = MyModel(dynamic=True)
# + id="W1Ijplz0nFS3"
model.compile(loss=my_mse, optimizer="nadam", metrics=[my_mae])
# + [markdown] id="PpNoK8_WnFS3"
# Now the custom code will be called at each iteration. Let's fit, validate and evaluate with tiny datasets to avoid getting too much output:
# + id="clBvdSLJnFS3" outputId="7e56c770-cca3-4017-855b-06a8113c64d9" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled[:64], y_train[:64], epochs=1,
validation_data=(X_valid_scaled[:64], y_valid[:64]), verbose=0)
model.evaluate(X_test_scaled[:64], y_test[:64], verbose=0)
# + [markdown] id="NgR_CLQHnFS3"
# Alternatively, you can specify `run_eagerly=True` when compiling the model:
# + id="dXpvcUGfnFS3"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="I6k23xDJnFS3"
model = MyModel()
# + id="3L_mdyoknFS3"
model.compile(loss=my_mse, optimizer="nadam", metrics=[my_mae], run_eagerly=True)
# + id="Zne2esoMnFS3" outputId="6f5cd521-64d4-4d2e-dae0-c614fd0c2f46" colab={"base_uri": "https://localhost:8080/"}
model.fit(X_train_scaled[:64], y_train[:64], epochs=1,
validation_data=(X_valid_scaled[:64], y_valid[:64]), verbose=0)
model.evaluate(X_test_scaled[:64], y_test[:64], verbose=0)
# + [markdown] id="w8yHh6uVnFS3"
# ## Custom Optimizers
# + [markdown] id="Kb-o4KXMnFS3"
# Defining custom optimizers is not very common, but if you ever find yourself needing to write one, here is an example to refer to:
# + id="cu5eHM8NnFS3"
class MyMomentumOptimizer(keras.optimizers.Optimizer):
def __init__(self, learning_rate=0.001, momentum=0.9, name="MyMomentumOptimizer", **kwargs):
"""super().__init__()를 호출하고 _set_hyper()를 사용해 하이퍼파라미터를 저장합니다"""
super().__init__(name, **kwargs)
self._set_hyper("learning_rate", kwargs.get("lr", learning_rate)) # lr=learning_rate을 처리
self._set_hyper("decay", self._initial_decay) #
self._set_hyper("momentum", momentum)
def _create_slots(self, var_list):
"""모델 파라미터마다 연관된 옵티마이저 변수를 만듭니다.
텐서플로는 이런 옵티마이저 변수를 '슬롯'이라고 부릅니다.
모멘텀 옵티마이저에서는 모델 파라미터마다 하나의 모멘텀 슬롯이 필요합니다.
"""
for var in var_list:
self.add_slot(var, "momentum")
@tf.function
def _resource_apply_dense(self, grad, var):
"""슬롯을 업데이트하고 모델 파라미터에 대한 옵티마이저 스텝을 수행합니다.
"""
var_dtype = var.dtype.base_dtype
lr_t = self._decayed_lr(var_dtype) # handle learning rate decay
momentum_var = self.get_slot(var, "momentum")
momentum_hyper = self._get_hyper("momentum", var_dtype)
momentum_var.assign(momentum_var * momentum_hyper - (1. - momentum_hyper)* grad)
var.assign_add(momentum_var * lr_t)
def _resource_apply_sparse(self, grad, var):
raise NotImplementedError
def get_config(self):
base_config = super().get_config()
return {
**base_config,
"learning_rate": self._serialize_hyperparameter("learning_rate"),
"decay": self._serialize_hyperparameter("decay"),
"momentum": self._serialize_hyperparameter("momentum"),
}
# + id="9a-d3UwinFS3"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="JPvzY12KnFS3" outputId="4ba0634f-8c25-4bce-9771-0ee48530b439" colab={"base_uri": "https://localhost:8080/"}
model = keras.models.Sequential([keras.layers.Dense(1, input_shape=[8])])
model.compile(loss="mse", optimizer=MyMomentumOptimizer())
model.fit(X_train_scaled, y_train, epochs=5)
# + [markdown] id="goUFeTllnFS4"
# # Exercises
# + [markdown] id="FN1NBFYMnFS4"
# ## 1. to 11.
# See Appendix A.
# + [markdown] id="inl1mvaknFS4"
# # 12. Implement a custom layer that performs _layer normalization_.
#
# _We will use this type of layer in Chapter 15 when working with recurrent neural networks._
# + [markdown] id="RVTVssxNnFS4"
# ### a.
# _Exercise: The `build()` method should define two trainable weights *α* and *β*, both of shape `input_shape[-1:]` and data type `tf.float32`. *α* should be initialized with 1s, and *β* with 0s._
# + [markdown] id="ArXtJ4jznFS4"
# Solution: see below.
# + [markdown] id="OtZf7k4bnFS4"
# ### b.
# _Exercise: The `call()` method should compute the mean μ and standard deviation σ of each instance's features. For this, you can use `tf.nn.moments(inputs, axes=-1, keepdims=True)`, which returns the mean μ and the variance σ<sup>2</sup> of all instances (compute the standard deviation as the square root of the variance). Then the function should compute and return *α*⊗(*X* - μ)/(σ + ε) + *β*, where ⊗ represents element-wise
# multiplication (`*`) and ε is a smoothing term (a small constant to avoid division by zero, e.g., 0.001)._
# + id="zaJez0D7nFS4"
class LayerNormalization(keras.layers.Layer):
def __init__(self, eps=0.001, **kwargs):
super().__init__(**kwargs)
self.eps = eps
def build(self, batch_input_shape):
self.alpha = self.add_weight(
name="alpha", shape=batch_input_shape[-1:],
initializer="ones")
self.beta = self.add_weight(
name="beta", shape=batch_input_shape[-1:],
initializer="zeros")
super().build(batch_input_shape) # must come at the end
def call(self, X):
mean, variance = tf.nn.moments(X, axes=-1, keepdims=True)
return self.alpha * (X - mean) / (tf.sqrt(variance + self.eps)) + self.beta
def compute_output_shape(self, batch_input_shape):
return batch_input_shape
def get_config(self):
base_config = super().get_config()
return {**base_config, "eps": self.eps}
# + [markdown] id="ZmgaOCRenFS4"
# Note that making the _ε_ hyperparameter (`eps`) configurable was not compulsory. Also note that it's preferable to compute `tf.sqrt(variance + self.eps)` rather than `tf.sqrt(variance) + self.eps`. Indeed, the derivative of sqrt(z) is undefined at z=0, so training will bounce around whenever an element of the variance vector is close to zero. Putting _ε_ inside the square root guards against this.
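As a concrete illustration of this point (a pure-Python sketch, not part of the original notebook), compare finite-difference slopes of the two epsilon placements near zero variance:

```python
import math

eps = 0.001
h = 1e-9  # small step for a finite-difference slope estimate

# epsilon added *after* the square root: the sqrt argument still reaches 0
slope_outside = ((math.sqrt(h) + eps) - (math.sqrt(0.0) + eps)) / h

# epsilon added *inside* the square root: the argument never reaches 0
slope_inside = (math.sqrt(h + eps) - math.sqrt(0.0 + eps)) / h

print(slope_outside)  # huge: the slope of sqrt blows up at 0
print(slope_inside)   # modest: bounded by 1 / (2 * sqrt(eps))
```

The first slope grows without bound as `h` shrinks, while the second stays near `1 / (2 * sqrt(eps))`, which is why gradients stay well-behaved with epsilon under the root.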
# + [markdown] id="-txGpum2nFS4"
# ### c.
# _Exercise: Ensure that your custom layer produces the same (or very nearly the same) output as the `keras.layers.LayerNormalization` layer._
# + [markdown] id="XayuPDh8nFS4"
# Let's create one instance of each class, apply them to some data (e.g., the training set), and check that the difference is negligible.
# + id="G-JdGdrpnFS4" outputId="2b92e9eb-5b06-4f0d-d830-c64ee33e18df" colab={"base_uri": "https://localhost:8080/"}
X = X_train.astype(np.float32)
custom_layer_norm = LayerNormalization()
keras_layer_norm = keras.layers.LayerNormalization()
tf.reduce_mean(keras.losses.mean_absolute_error(
keras_layer_norm(X), custom_layer_norm(X)))
# + [markdown] id="ZhYijbpJnFS5"
# Yes, that's close enough. To be extra sure, let's set alpha and beta to completely random values and compare again:
# + id="LmRALOLQnFS5" outputId="256086be-d91d-494a-99f5-bcbae13bdfa3" colab={"base_uri": "https://localhost:8080/"}
random_alpha = np.random.rand(X.shape[-1])
random_beta = np.random.rand(X.shape[-1])
custom_layer_norm.set_weights([random_alpha, random_beta])
keras_layer_norm.set_weights([random_alpha, random_beta])
tf.reduce_mean(keras.losses.mean_absolute_error(
keras_layer_norm(X), custom_layer_norm(X)))
# + [markdown] id="y7nGTB_bnFS6"
# Still a negligible difference! Our custom layer works fine.
# + [markdown] id="8OldjLnsnFS6"
# ## 13. Train a model using a custom training loop to tackle the Fashion MNIST dataset.
#
# _The Fashion MNIST dataset was introduced in Chapter 10._
# + [markdown] id="3Sc-mwzpnFS6"
# ### a.
# _Exercise: Display the epoch, iteration, mean training loss, and mean accuracy over each epoch (updated at each iteration), as well as the validation loss and accuracy at the end of each epoch._
# + id="4csPXhr8nFS6" outputId="4de09f00-b38c-42cd-8a53-b1ed1de8d6bf" colab={"base_uri": "https://localhost:8080/"}
(X_train_full, y_train_full), (X_test, y_test) = keras.datasets.fashion_mnist.load_data()
X_train_full = X_train_full.astype(np.float32) / 255.
X_valid, X_train = X_train_full[:5000], X_train_full[5000:]
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]
X_test = X_test.astype(np.float32) / 255.
# + id="pU9Tyc3LnFS6"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="1-7YuvHCnFS6"
model = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="relu"),
keras.layers.Dense(10, activation="softmax"),
])
# + id="FDtSEXLanFS6" outputId="1474eae4-4734-42c6-c961-0bea5d8777f2" colab={"base_uri": "https://localhost:8080/"}
n_epochs = 5
batch_size = 32
n_steps = len(X_train) // batch_size
optimizer = keras.optimizers.Nadam(lr=0.01)
loss_fn = keras.losses.sparse_categorical_crossentropy
mean_loss = keras.metrics.Mean()
metrics = [keras.metrics.SparseCategoricalAccuracy()]
# + id="rrbSZk-rnFS6" outputId="62c98fbb-39ee-48cd-e2af-3fd14fbbb5ef" colab={"base_uri": "https://localhost:8080/", "height": 209, "referenced_widgets": ["954df2e8c6f94ad784aaabfbd4d4ab85", "dba56a9e52cb428f968f0821f3e7c417", "cd13ef0f97764193aedd194cbe18d00c", "fbfb5813c7c54c47a7f4715c215de32b", "01101b8e43d845e98557ae7d17a4aedd", "077ffc2e3d8d420bbfaf84eb275ce5fb", "<KEY>", "c77cec643eab4e4c9c963d7704008d2b", "f7b7b3259e1f4ea299a3d3c5c0abaf9d", "fa6c3a02385541c584bcf9c90cebbca0", "7a9231435c1c429e89e0eab0a539353e", "<KEY>", "<KEY>", "<KEY>", "f1bc96fa597c483787ec73c6074360c9", "<KEY>", "<KEY>", "<KEY>", "fffcbe1917fe4729a2e4a5fcabbe2199", "d86e2684ae994c91a2bab1a48d52f303", "78fc4e9aa67b451ca54d192050b8e6a1", "<KEY>", "<KEY>", "<KEY>", "c596cd26628e4c23a49b12fc8610ceaf", "e447fde6569049eca870b7414ad01b1c", "2a0f4067caf94a99a21ca5091812b47f", "4e2faca0583c4a66b2ded1cad440319f", "ff90e29f3e024f4d9348372aca149fef", "<KEY>", "ed486814f2a8437983e66318b028dc1e", "<KEY>", "<KEY>", "<KEY>", "d7e0647152754ed499fb50725f1366eb", "<KEY>", "<KEY>", "<KEY>", "5e06565a48bf460bb9bee34f0adf503a", "9011c19bf8b04e239a12c3a62b221e1e", "1666384c6cc9435a8ad3e4769c7b8913", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "e9a2f5cc32c041a49016ce0dcda1aaa9", "<KEY>", "<KEY>", "29eb106ec39e48f68cd9d351cf6ceac8", "<KEY>", "d646b7a00e8e4e95b360a496d8c87935", "ca2720f9581241b9a4115740bad32d4a", "<KEY>", "<KEY>", "<KEY>", "b39754b6bd1b425cad2defffd36347ac", "<KEY>", "4e38d1bfa23b42158f4b3e6fe455e598", "618e97706ef24d1f86cac54878f553f2", "51b5d2a15fa74330b16ba332a02ab0de", "<KEY>", "1c7a6f444e3b46f49242009f3648e98c", "bd38c3d7fc5d47cfb6eb91958a5b8be1", "4494340079f74e7c9e337ea2c5d89233", "ef4cd6ea805c430d8086f989bbaf05cb"]}
with trange(1, n_epochs + 1, desc="All epochs") as epochs:
for epoch in epochs:
with trange(1, n_steps + 1, desc="Epoch {}/{}".format(epoch, n_epochs)) as steps:
for step in steps:
X_batch, y_batch = random_batch(X_train, y_train)
with tf.GradientTape() as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
for variable in model.variables:
if variable.constraint is not None:
variable.assign(variable.constraint(variable))
status = OrderedDict()
mean_loss(loss)
status["loss"] = mean_loss.result().numpy()
for metric in metrics:
metric(y_batch, y_pred)
status[metric.name] = metric.result().numpy()
steps.set_postfix(status)
y_pred = model(X_valid)
status["val_loss"] = np.mean(loss_fn(y_valid, y_pred))
status["val_accuracy"] = np.mean(keras.metrics.sparse_categorical_accuracy(
tf.constant(y_valid, dtype=np.float32), y_pred))
steps.set_postfix(status)
for metric in [mean_loss] + metrics:
metric.reset_states()
# + [markdown] id="VXaxk5WwnFS6"
# ### b.
# _Exercise: Try using a different optimizer with a different learning rate for the upper layers and the lower layers._
# + id="W_NUUQJLnFS6"
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
# + id="jMk78kcFnFS6"
lower_layers = keras.models.Sequential([
keras.layers.Flatten(input_shape=[28, 28]),
keras.layers.Dense(100, activation="relu"),
])
upper_layers = keras.models.Sequential([
keras.layers.Dense(10, activation="softmax"),
])
model = keras.models.Sequential([
lower_layers, upper_layers
])
# + id="_vvluGEWnFS7" outputId="3282292b-a4ae-4feb-9d8b-cdcee47d3a81" colab={"base_uri": "https://localhost:8080/"}
lower_optimizer = keras.optimizers.SGD(lr=1e-4)
upper_optimizer = keras.optimizers.Nadam(lr=1e-3)
# + id="Y1Ah5AEMnFS7"
n_epochs = 5
batch_size = 32
n_steps = len(X_train) // batch_size
loss_fn = keras.losses.sparse_categorical_crossentropy
mean_loss = keras.metrics.Mean()
metrics = [keras.metrics.SparseCategoricalAccuracy()]
# + id="E2BbBpTMnFS7" outputId="09a19422-66c9-4d2c-e676-87e650d43e4a" colab={"base_uri": "https://localhost:8080/", "height": 209, "referenced_widgets": ["b64df0792ca744b0956faca854697247", "240dd8d4d60c4592a6e6d97f459e4ed3", "4a762cb8d0884184ba530d0cf66808c7", "<KEY>", "702a63248dbd41c0ab76eade65b4e982", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "ce807f932d1f437096dfac5d75a2a19f", "<KEY>", "<KEY>", "69bee84df81845378fd777fe2cb80e92", "<KEY>", "cf9c56ae1e7f4b4eb3c8081d0edea083", "<KEY>", "26ab338c09f944e1bb4132c9d699b068", "32343adfca8c4590a2389d2a34dc2985", "2cfd5ac794f74386a5d491877ecef946", "6db5e6485eda4dd8b6a8817bd4041f9c", "<KEY>", "982450d3a33144baae5a36bad4e84473", "cec12d5e2bce479fa6ef9441420ee512", "bba4a2ffffa643fdb726b64292b7d75e", "<KEY>", "41fca9356d384fa58c4cd2dcf5ba860f", "c6648f3fe191404f97adfa8e849f9538", "ea23077adb7c4a28bf6e2fe66f4aafff", "0dfd299aaff848fba46d52e485d98177", "<KEY>", "<KEY>", "c06811c055164e80ada214e67f974892", "<KEY>", "<KEY>", "2421b8871d6540fe8a1737bdca64ce00", "f8b2396a2aba44f7897137d1f14ba1da", "<KEY>", "<KEY>", "444fa7320f2749e1849109a7f195d04a", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "8861a8a339b24927a70c7a4d34f661aa", "<KEY>", "d2d60853305f45878e742cfab60b3eaa", "<KEY>", "542b77db259740e9be24d4554e9e5cbf", "<KEY>", "997db3cd02f24749b8130ed33ba68ef0", "<KEY>", "80cfa25092094cd5a00cf0ff3ac0e6e9", "<KEY>", "397a32ea60e04325a591bb8a5084db14", "<KEY>", "<KEY>", "<KEY>", "<KEY>", "942d2aefa11e4075951993c98e872d5e", "<KEY>", "<KEY>", "6aaa7c3b09904b189efe4a4a634d3e95", "<KEY>", "<KEY>", "9ff2ccdefc944a83b69345b3ae6df4c7"]}
with trange(1, n_epochs + 1, desc="All epochs") as epochs:
for epoch in epochs:
with trange(1, n_steps + 1, desc="Epoch {}/{}".format(epoch, n_epochs)) as steps:
for step in steps:
X_batch, y_batch = random_batch(X_train, y_train)
with tf.GradientTape(persistent=True) as tape:
y_pred = model(X_batch)
main_loss = tf.reduce_mean(loss_fn(y_batch, y_pred))
loss = tf.add_n([main_loss] + model.losses)
for layers, optimizer in ((lower_layers, lower_optimizer),
(upper_layers, upper_optimizer)):
gradients = tape.gradient(loss, layers.trainable_variables)
optimizer.apply_gradients(zip(gradients, layers.trainable_variables))
del tape
for variable in model.variables:
if variable.constraint is not None:
variable.assign(variable.constraint(variable))
status = OrderedDict()
mean_loss(loss)
status["loss"] = mean_loss.result().numpy()
for metric in metrics:
metric(y_batch, y_pred)
status[metric.name] = metric.result().numpy()
steps.set_postfix(status)
y_pred = model(X_valid)
status["val_loss"] = np.mean(loss_fn(y_valid, y_pred))
status["val_accuracy"] = np.mean(keras.metrics.sparse_categorical_accuracy(
tf.constant(y_valid, dtype=np.float32), y_pred))
steps.set_postfix(status)
for metric in [mean_loss] + metrics:
metric.reset_states()
# Source: 12_custom_models_and_training_with_tensorflow.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + id="nLZATWJP_ptK"
# !git clone https://github.com/nab170130/vaal.git
# !git clone https://github.com/circulosmeos/gdown.pl.git
# %cd /content/vaal/
# + id="I2DfGsOj_yXA"
import main as main_vaal
import arguments
args = arguments.get_args()
print(args)
args.dataset = "mnist"
main_vaal.main(args)
# Source: benchmark_notebooks/baseline/AL Baseline MNIST Low Data VAAL.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] slideshow={"slide_type": "slide"}
# 
#
# # Functional Programming in Python
#
# <p>
# <p>
# <NAME>
# + [markdown] slideshow={"slide_type": "slide"}
# # Functions as first class citizens
# + slideshow={"slide_type": "subslide"}
def mul(a, b):
return a*b
mul(2, 3)
# + slideshow={"slide_type": "fragment"}
mul = lambda a, b: a*b
mul(2, 3)
# + [markdown] slideshow={"slide_type": "subslide"}
# # Lambda is another way of defining a function
# + [markdown] slideshow={"slide_type": "slide"}
# # Higher Order Functions
# + slideshow={"slide_type": "subslide"} active=""
# # Functions as arguments to other functions
# + slideshow={"slide_type": "subslide"}
mul(mul(2, 3), 3)
# + slideshow={"slide_type": "subslide"}
def transform_and_add(func, a, b):
return func(a) + func(b)
transform_and_add(lambda x: x**2, 1, 2)
# + [markdown] slideshow={"slide_type": "fragment"}
# # Why would I want something like this?
# + [markdown] slideshow={"slide_type": "subslide"}
# # A Familiar Pattern
# + slideshow={"slide_type": "fragment"}
def square_and_add(a, b):
return (a**2 + b**2)
def cube_and_add(a, b):
return (a**3 + b**3)
def quad_and_add(a, b):
return (a**4 + b**4)
print(square_and_add(1, 2))
print(cube_and_add(1, 2))
print(quad_and_add(1, 2))
# + slideshow={"slide_type": "subslide"}
square = lambda x: x**2
cube = lambda x: x**3
quad = lambda x: x**4
print(square_and_add(1, 2) == transform_and_add(square, 1, 2))
print(cube_and_add(1, 2) == transform_and_add(cube, 1, 2))
print(quad_and_add(1, 2) == transform_and_add(quad, 1, 2))
# + slideshow={"slide_type": "subslide"}
def square_and_add(a, b):
return (a**2 + b**2)
def cube_and_mul(a, b):
return ((a**3) * (b**3))
def quad_and_div(a, b):
return ((a**4) / (b**4))
print(square_and_add(1, 2))
print(cube_and_mul(1, 2))
print(quad_and_div(1, 2))
# + slideshow={"slide_type": "subslide"}
def transform_and_reduce(func_transform, func_reduce, a, b):
return func_reduce(func_transform(a), func_transform(b))
print(square_and_add(1, 2) == transform_and_reduce(square, lambda x, y: x+y, 1, 2))
print(cube_and_mul(1, 2) == transform_and_reduce(cube, lambda x, y: x*y, 1, 2))
print(quad_and_div(1, 2) == transform_and_reduce(quad, lambda x, y: x/y, 1, 2))
# + [markdown] slideshow={"slide_type": "subslide"}
# # Operators to the rescue
# + slideshow={"slide_type": "fragment"}
import operator
print(square_and_add(1, 2) == transform_and_reduce(square, operator.add, 1, 2))
print(cube_and_mul(1, 2) == transform_and_reduce(cube, operator.mul, 1, 2))
print(quad_and_div(1, 2) == transform_and_reduce(quad, operator.truediv, 1, 2))
# + [markdown] slideshow={"slide_type": "slide"}
# # Let's do some maths
#
# # Number of transform functions = m
# # Number of reduce functions = n
#
# + [markdown] slideshow={"slide_type": "fragment"}
# # Number of functions in the first workflow = m\*n
# + [markdown] slideshow={"slide_type": "fragment"}
# # Number of functions in the second workflow = m + n
# + [markdown] slideshow={"slide_type": "subslide"}
# # Write small, reusable functions
# + slideshow={"slide_type": "fragment"}
print(square_and_add(1, 2) == transform_and_reduce(lambda x: x**2, lambda x, y: x+y, 1, 2))
print(cube_and_mul(1, 2) == transform_and_reduce(lambda x: x**3, lambda x, y: x*y, 1, 2))
print(quad_and_div(1, 2) == transform_and_reduce(lambda x: x**4, lambda x, y: x/y, 1, 2))
# + [markdown] slideshow={"slide_type": "slide"}
# # Function returns Function
# + slideshow={"slide_type": "subslide"}
from time import time
def timer(func):
def inner(*args, **kwargs):
t = time()
func(*args, **kwargs)
print("Time take = {time}".format(time = time() - t))
return inner
def echo_func(input):
print(input)
timed_echo = timer(echo_func)
timed_echo(1000000)
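The slide applies `timer` by hand; in practice the same higher-order function is attached with `@` decorator syntax. A variation below (standard library only; it also forwards the wrapped function's return value and metadata via `functools.wraps`, two details the slide's version drops):

```python
import functools
from time import time

def timer(func):
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def inner(*args, **kwargs):
        t = time()
        result = func(*args, **kwargs)
        print("Time taken = {}".format(time() - t))
        return result  # pass the wrapped function's result through
    return inner

@timer  # equivalent to: echo_func = timer(echo_func)
def echo_func(value):
    return value

print(echo_func(1000000))      # prints the timing line, then 1000000
print(echo_func.__name__)      # 'echo_func', thanks to functools.wraps
```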
# + [markdown] slideshow={"slide_type": "slide"}
# # Partial Functions
# + slideshow={"slide_type": "subslide"}
def logger(level, message):
print("{level}: {message}".format(level = level, message = message))
def debug(message):
return logger("debug", message)
def info(message):
return logger("info", message)
debug("Error 404")
# + slideshow={"slide_type": "subslide"}
from functools import partial
debug = partial(logger, "debug")
info = partial(logger, "info")
debug("Error 404")
# + [markdown] slideshow={"slide_type": "fragment"}
# # debug("Error 404") = partial(logger, "debug")("Error 404")
# + slideshow={"slide_type": "fragment"}
partial(logger, "debug")("Error 404")
# + [markdown] slideshow={"slide_type": "slide"}
# # Currying
# + [markdown] slideshow={"slide_type": "fragment"}
# # f(a, b, c) => g(a)(b)(c)
# + slideshow={"slide_type": "subslide"}
def transform_and_add(func_transform, a, b):
return func_transform(a) + func_transform(b)
def curry_transform_and_add(func_transform):
def apply(a, b):
return func_transform(a) + func_transform(b)
return apply
# + slideshow={"slide_type": "subslide"}
print(transform_and_add(cube, 1, 2) == curry_transform_and_add(cube)(1, 2))
# + [markdown] slideshow={"slide_type": "subslide"}
# # Currying gets you specialized functions from more general functions
# + [markdown] slideshow={"slide_type": "slide"}
# # Map, Reduce, Filter
#
# ## An alternative view of iteration
# + slideshow={"slide_type": "subslide"}
input_list = [1, 2, 3, 4]
squared_list = map(lambda x: x**2, input_list)
print(type(squared_list))
print(next(squared_list))
print(next(squared_list))
# + slideshow={"slide_type": "subslide"}
from functools import reduce
sum_list = reduce(operator.add, input_list)
print(sum_list)
# + slideshow={"slide_type": "subslide"}
sum_squared_list = reduce(operator.add,
map(lambda x: x**2, input_list))
print(sum_squared_list)
# + slideshow={"slide_type": "subslide"}
even_list = list(
filter(lambda x: x%2==0, input_list))
sum_even_list = reduce(operator.add, even_list)
print(sum_even_list)
# + slideshow={"slide_type": "subslide"}
print(reduce(operator.add,
(map(lambda x: x**2,
filter(lambda x: x%2==0, input_list)))))
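For comparison (an aside, not from the slides), the same filter-map-reduce chain collapses into a single generator expression fed to the built-in `sum`, which is arguably the most idiomatic Python spelling:

```python
input_list = [1, 2, 3, 4]

# filter -> `if x % 2 == 0`, map -> `x**2`, reduce -> `sum`, in one expression
total = sum(x**2 for x in input_list if x % 2 == 0)
print(total)  # 20  (2**2 + 4**2)
```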
# + [markdown] slideshow={"slide_type": "subslide"}
# # Benefits
#
# * Functional
# * One-liner
# * Elemental operations
# + [markdown] slideshow={"slide_type": "slide"}
# # itertools — Functions creating iterators for efficient looping
# + slideshow={"slide_type": "subslide"}
from itertools import accumulate
acc = accumulate(input_list, operator.add)
print(input_list)
print(type(acc))
print(next(acc))
print(next(acc))
print(next(acc))
# + [markdown] slideshow={"slide_type": "slide"}
# # Recursion
# + slideshow={"slide_type": "subslide"}
def factorial(n):
if n == 0:
return 1
else:
return n * factorial(n - 1)
# + [markdown] slideshow={"slide_type": "fragment"}
# # Sadly, no tail recursion
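Because CPython performs no tail-call optimization and caps recursion depth (about 1000 frames by default), the recursive `factorial` above raises `RecursionError` for large `n`. A standard-library workaround is to fold with `reduce` instead (a sketch; `factorial_iter` is an illustrative name):

```python
from functools import reduce
import operator

def factorial_iter(n):
    # reduce folds operator.mul over 1..n; no recursion, so no depth limit
    return reduce(operator.mul, range(1, n + 1), 1)

print(factorial_iter(5))   # 120
print(factorial_iter(0))   # 1 (the initializer handles the empty range)
```

`factorial_iter(2000)` works fine, whereas the recursive version would blow the stack.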
# + [markdown] slideshow={"slide_type": "slide"}
# # Comprehension
# + slideshow={"slide_type": "subslide"}
print(input_list)
collection = list()
is_even = lambda x: x%2==0
for data in input_list:
if(is_even(data)):
collection.append(data)
else:
collection.append(data*2)
print(collection)
# + slideshow={"slide_type": "subslide"}
collection = [data if is_even(data) else data*2
for data in input_list]
print(collection)
# + [markdown] slideshow={"slide_type": "slide"}
# # Generators
# + slideshow={"slide_type": "subslide"}
collection = (data if is_even(data) else data*2
for data in input_list)
print(collection)
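Unlike the list comprehension, the generator expression is lazy: no element is computed until the generator is consumed. A small sketch of driving it with `next` and `list`:

```python
input_list = [1, 2, 3, 4]
is_even = lambda x: x % 2 == 0

gen = (data if is_even(data) else data * 2 for data in input_list)

print(next(gen))   # 2  (1 is odd, so doubled)
print(next(gen))   # 2  (2 is even, kept as-is)
print(list(gen))   # [6, 4]  (the remaining items)
```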
# + [markdown] slideshow={"slide_type": "slide"}
# # Pipelines
#
# ## Sequence of Operations
# + slideshow={"slide_type": "fragment"}
def pipeline_each(data, fns):
return reduce(lambda a, x: map(x, a),
fns,
data)
# + slideshow={"slide_type": "subslide"}
import re
strings_to_clean = ["apple https://www.apple.com/",
"google https://www.google.com/",
"facebook https://www.facebook.com/"]
def format_string(input_string):
return re.sub(r"http\S+", "", input_string).strip().title()
for _str in map(format_string, strings_to_clean):
print(_str)
# + [markdown] slideshow={"slide_type": "fragment"}
# ## No Modularity
# + slideshow={"slide_type": "subslide"}
import re
def remove_url(input_string):
return re.sub(r"http\S+", "", input_string).strip()
def title_case(input_string):
return input_string.title()
def format_string(input_string):
return title_case(remove_url(input_string))
for _str in map(format_string, strings_to_clean):
print(_str)
# + [markdown] slideshow={"slide_type": "fragment"}
# # f(g(h(i(...x))))
# + [markdown] slideshow={"slide_type": "fragment"}
# # Modular but Ugly
# + slideshow={"slide_type": "subslide"}
import re
for _str in pipeline_each(strings_to_clean, [remove_url,
title_case]):
print(_str)
# + [markdown] slideshow={"slide_type": "fragment"}
# # [f, g, h, i, ...]
# + [markdown] slideshow={"slide_type": "fragment"}
# # Modular and Concise
# + [markdown] slideshow={"slide_type": "slide"}
# # Thank You
#
# # <NAME>
#
# # @shagunsodhani
# Source: notebook/Demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: conda_mxnet_p36
# language: python
# name: conda_mxnet_p36
# ---
# ## Exporting ONNX Models with MXNet
#
# The [Open Neural Network Exchange](https://onnx.ai/) (ONNX) is an open format for representing deep learning models with an extensible computation graph model, definitions of built-in operators, and standard data types. Starting with MXNet 1.3, models trained using MXNet can now be saved as ONNX models.
#
# In this example, we show how to train a model on Amazon SageMaker and save it as an ONNX model. This notebook is based on the [MXNet MNIST notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_mnist/mxnet_mnist.ipynb) and the [MXNet example for exporting to ONNX](https://mxnet.incubator.apache.org/tutorials/onnx/export_mxnet_to_onnx.html).
# ### Setup
#
# First we need to define a few variables that we'll need later in the example.
# +
import boto3
from sagemaker import get_execution_role
from sagemaker.session import Session
# AWS region
region = boto3.Session().region_name
# S3 bucket for saving code and model artifacts.
# Feel free to specify a different bucket here if you wish.
bucket = Session().default_bucket()
# Location to save your custom code in tar.gz format.
custom_code_upload_location = 's3://{}/customcode/mxnet'.format(bucket)
# Location where results of model training are saved.
model_artifacts_location = 's3://{}/artifacts'.format(bucket)
# IAM execution role that gives SageMaker access to resources in your AWS account.
# We can use the SageMaker Python SDK to get the role from our notebook environment.
role = get_execution_role()
# -
# ### The training script
#
# The ``mnist.py`` script provides all the code we need for training and hosting a SageMaker model. The script we will use is adapted from the Apache MXNet [MNIST tutorial](https://mxnet.incubator.apache.org/tutorials/python/mnist.html).
# !pygmentize mnist.py
# ### Exporting to ONNX
#
# The important part of this script can be found in the `save` method. This is where the ONNX model is exported:
#
# ```python
# import os
#
# from mxnet.contrib import onnx as onnx_mxnet
# import numpy as np
#
# def save(model_dir, model):
# symbol_file = os.path.join(model_dir, 'model-symbol.json')
# params_file = os.path.join(model_dir, 'model-0000.params')
#
# model.symbol.save(symbol_file)
# model.save_params(params_file)
#
# data_shapes = [[dim for dim in data_desc.shape] for data_desc in model.data_shapes]
# output_path = os.path.join(model_dir, 'model.onnx')
#
# onnx_mxnet.export_model(symbol_file, params_file, data_shapes, np.float32, output_path)
# ```
#
# The last line in that method, `onnx_mxnet.export_model`, saves the model in the ONNX format. We pass the following arguments:
#
# * `symbol_file`: path to the saved input symbol file
# * `params_file`: path to the saved input params file
# * `data_shapes`: list of the input shapes
# * `np.float32`: input data type
# * `output_path`: path to save the generated ONNX file
#
# For more information, see the [MXNet Documentation](https://mxnet.incubator.apache.org/api/python/contrib/onnx.html#mxnet.contrib.onnx.mx2onnx.export_model.export_model).
# ### Training the model
#
# With the training script written to export an ONNX model, the rest of the training process looks like any other Amazon SageMaker training job using MXNet. For a more in-depth explanation of these steps, see the [MXNet MNIST notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_mnist/mxnet_mnist.ipynb).
# +
from sagemaker.mxnet import MXNet
mnist_estimator = MXNet(entry_point='mnist.py',
role=role,
output_path=model_artifacts_location,
code_location=custom_code_upload_location,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
framework_version='1.3.0',
hyperparameters={'learning-rate': 0.1})
train_data_location = 's3://sagemaker-sample-data-{}/mxnet/mnist/train'.format(region)
test_data_location = 's3://sagemaker-sample-data-{}/mxnet/mnist/test'.format(region)
mnist_estimator.fit({'train': train_data_location, 'test': test_data_location})
# -
# ### Next steps
#
# Now that we have an ONNX model, we can deploy it to an endpoint in the same way we do in the [MXNet MNIST notebook](https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_mnist/mxnet_mnist.ipynb).
#
# For examples on how to write a `model_fn` to load the ONNX model, please refer to:
# * the [MXNet ONNX Super Resolution notebook](https://github.com/awslabs/amazon-sagemaker-examples/tree/master/sagemaker-python-sdk/mxnet_onnx_superresolution)
# * the [MXNet documentation](https://mxnet.incubator.apache.org/api/python/contrib/onnx.html#mxnet.contrib.onnx.onnx2mx.import_model.import_model)
# Source: sagemaker-python-sdk/mxnet_onnx_export/mxnet_onnx_export.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import pandas
from os import walk
class Node:
def __init__(self, node_dir, idStr):
self.nodeSelf = node_dir
self.result_dir = os.path.join(node_dir, "result")
self.idStr = idStr
# Preparation
def init(data_path):
if not os.path.exists(data_path):
raise Exception("找不到要可视化数据的文件夹")
# Set up the folder where the visualization results will be saved
visual_path = data_path + "-visualization"
visual_path_data = os.path.join(visual_path, 'data')
if not os.path.exists(visual_path):
os.makedirs(visual_path)
if not os.path.exists(visual_path_data):
os.makedirs(visual_path_data)
return visual_path, visual_path_data
def prepare(poi_path):
# Read the initial POI (point-of-interest) file to get each POI's latitude/longitude coordinates
data = pandas.read_csv(poi_path, sep='\t')
data.drop(['type_id', 'id', 'page', 'father_name',
'father_id', 'sub_page'], axis=1, inplace=True)
data.rename(columns={'name': 'poi_name'}, inplace=True)
data.head() # show the first few rows of the data
return data
def main(node, visual_path_data, data):
dataframe_list = []
# walk yields 3 values per directory: the path, a list of subdirectories, and a list of files; adjust as needed
for root, dirs, files in walk(node.nodeSelf):
for file in files:
if '.csv' in file:
file_path = root + "\\" + file
dataframe = pandas.read_csv(file_path) # read the file
dataframe = dataframe.loc[:, ~dataframe.columns.str.contains(
'^Unnamed')] # drop the unnamed index column from the file
dataframe = pandas.merge(dataframe, data) # merge the latitude/longitude coordinates into the table
file_name_list = file_path.split("\\")
file_name = ''.join(filter(lambda s: isinstance(
s, str) and len(s) <= 5 and s != ".", file_name_list))
print(file_name_list)
print(file_name)
dataframe.to_csv(os.path.join(
visual_path_data, file_name), encoding="utf-8-sig")
# +
# Set the folder to visualize
data_path = ".\\2019-04-14-17-01-26"
poi_path = '.\\data\\list_all_sub.txt'
visual_path, visual_path_data = init(data_path)
data = prepare(poi_path)
# Create the root node object
root_node = Node(data_path, '')
main(root_node, visual_path_data, data)
# -
# Source: preprocess.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python (oled)
# language: python
# name: oled
# ---
import numpy as np
import matplotlib.pyplot as plt
import os
# %matplotlib inline
import win32com.client
import py2origin as py2o
# +
x = np.arange(-10,10.1,0.1)
y = np.cos(x)
y2 = np.sin(x)
y3 = np.tan(x)
data = np.array((x,y,y2))
header = ['x','cos(x)','sin(x)']
origin,wb,ws = py2o.numpy_to_origin(
data,column_axis=0,
long_names=header,
origin_version=2018,
worksheet_name='Trig functions',
workbook_name='data')
# -
py2o.createGraph_multiwks(origin,
'Trig','Spectra_Wide.otp',
os.path.abspath('OriginTemplates'),
[ws],[0,0],[1,2],['Sym','Line'])
origin.Exit()
# Source: Numpy_to_Originlab_example.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: milo_py37
# language: python
# name: milo_py37
# ---
# # Multiplicity figure
# +
from collections import defaultdict, Counter
import pandas as pd
from scipy import stats as sci_stats
from matplotlib import pyplot as pl
from matplotlib import cm
import numpy as np
import seaborn as sns
from glob import glob
import matplotlib.gridspec as gridspec
from matplotlib.colors import ListedColormap, to_rgba
from statsmodels.stats.multitest import fdrcorrection as benjamini_hochberg
from matplotlib.patches import Rectangle
# %matplotlib inline
import warnings
warnings.filterwarnings("ignore")
# +
plates = ['P1', 'P2', 'P3']
plate2env = {'P1': r'YPD 30$\degree$C', 'P2': r'SC 30$\degree$C', 'P3': r'SC 37$\degree$C'}
strains = ['diploid', 'alpha', 'a']
strains_for_print = {'a': '$MATa$', 'diploid': 'Diploid', 'alpha': r'$MAT\alpha$'}
color_by_strain = {'diploid': '#555555', 'alpha': '#FFB000', 'a': '#648FFF'}
fa_gens = [70, 550, 1410, 2640, 3630, 5150, 7530, 10150]
seq_gens = [70, 1410, 2640, 5150, 7530, 10150]
all_wells = sorted([i.split('/')[-1].split('_')[0] for i in glob('../../Output/WGS/combined_option/processed_well_output/*_processed.tsv')])
wells = [w for w in all_wells if w!='P1B03'] #P1B03 excluded because it is a haploid population that diploidized
gene_info = pd.read_csv('../accessory_files/yeast_gene_annotations.tsv', delimiter='\t')
gene_info = gene_info[gene_info['featureType']=='ORF'].loc[gene_info['briefDescription'].apply(lambda bd: ('Putative protein' not in bd) and ('Dubious open reading frame' not in bd))]
gene_to_start_end = {i[0]: i[1:] for i in gene_info.as_matrix(['Gene_ORF', 'start', 'end'])}
orf_sizes = list(gene_info['end']-gene_info['start'])
essential_orfs_by_Liu = list(gene_info[gene_info['Essential_by_Liu2015']]['ORF'])
essential_orfs_by_Gaiever_not_Liu = [i for i in gene_info[gene_info['Essential_by_Giaever2002']]['ORF'] if i not in essential_orfs_by_Liu]
o2g = {i[0]:i[1] for i in gene_info.as_matrix(['ORF', 'Gene_ORF']) if pd.notnull(i[1])}
o2g.update({i[0]:i[0] for i in gene_info.as_matrix(['ORF', 'Gene_ORF']) if pd.isnull(i[1])})
g2o = {o2g[o]:o for o in o2g}
wellinfo = pd.read_csv('../accessory_files/VLTE_by_well_info.csv')[['plate.well', 'contam', 'strain']]
wellinfo['plate_well'] = wellinfo['plate.well'].apply(lambda p: p[:2]+p[3:]) #reformatting to match for merge
well_to_strain = {i[0]:i[1] for i in wellinfo.as_matrix(['plate_well', 'strain'])}
wells_w_ade2_stop_lost = ['P2F07', 'P1C09', 'P1E11', 'P3B10', 'P2B09']
cb_pal = sns.color_palette('colorblind')
# -
# ## Loading mutation data for next figures
# ## Some code for calculating mutational opportunities:
# +
nt2codon = {
'TTT': 'F', 'TTC': 'F',
'TTA': 'L', 'TTG': 'L', 'CTT': 'L', 'CTC': 'L', 'CTA': 'L', 'CTG': 'L',
'TCT': 'S', 'TCC': 'S', 'TCA': 'S', 'TCG': 'S', 'AGT': 'S', 'AGC': 'S',
'TAT': 'Y', 'TAC': 'Y',
'TAA': '*', 'TAG': '*', 'TGA': '*',
'TGT': 'C', 'TGC': 'C',
'TGG': 'W',
'CCT': 'P', 'CCC': 'P', 'CCA': 'P', 'CCG': 'P',
'CAT': 'H', 'CAC': 'H',
'CAA': 'Q', 'CAG': 'Q',
'CGT': 'R', 'CGC': 'R', 'CGA': 'R', 'CGG': 'R', 'AGA': 'R', 'AGG': 'R',
'ATT': 'I', 'ATC': 'I', 'ATA': 'I',
'ATG': 'M',
'ACT': 'T', 'ACC': 'T', 'ACA': 'T', 'ACG': 'T',
'AAT': 'N', 'AAC': 'N',
'AAA': 'K', 'AAG': 'K',
'GTT': 'V', 'GTC': 'V', 'GTA': 'V', 'GTG': 'V',
'GCT': 'A', 'GCC': 'A', 'GCA': 'A', 'GCG': 'A',
'GAT': 'D', 'GAC': 'D',
'GAA': 'E', 'GAG': 'E',
'GGT': 'G', 'GGC': 'G', 'GGA': 'G', 'GGG': 'G'
}
def get_attrib(row, attrib):
if row['type']=='gene':
if attrib+'=' in row['attributes']:
return row['attributes'].split(attrib+'=')[1].split(';')[0]
return ''
def read_fasta(fasta_file):
"""
Reads a fasta file and returns a dictionary with seqid keys and sequence values
"""
fd = dict()
with open(fasta_file, 'r') as infile:
for line in infile:
if '>' in line:
current_key = line[1:].strip()
fd[current_key] = ''
else:
fd[current_key] += line.strip()
return fd
def reverse_transcribe(seq):
"""reverse transcribes a dna sequence (does not convert any non-atcg/ATCG characters)"""
watson_crick = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G', 'a': 't', 't': 'a', 'g': 'c', 'c': 'g'}
return ''.join([watson_crick.setdefault(c, c) for c in seq[::-1]])
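As a quick sanity check of the helper above, here is a standalone sketch that re-implements the same complement map (copied so the snippet runs on its own) and shows the expected behaviour on a couple of inputs:

```python
# Standalone copy of the reverse-complement logic used by reverse_transcribe;
# non-ATCG characters pass through unchanged.
watson_crick = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G',
                'a': 't', 't': 'a', 'g': 'c', 'c': 'g'}

def revcomp(seq):
    # Reverse the sequence, then complement each base
    return ''.join([watson_crick.get(c, c) for c in seq[::-1]])

print(revcomp('ATCG'))    # reverse gives 'GCTA', complementing gives 'CGAT'
print(revcomp('ATGNNN'))  # 'N' characters are left as-is
```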
class SeqInfoGetter:
def __init__(self, gff_file, fasta_file):
gff_cols = ['seqid', 'source', 'type', 'start', 'end', 'score', 'strand', 'phase', 'attributes']
self.gff = pd.read_csv(gff_file, delimiter='\t', skiprows=1, header=None, names=gff_cols)
self.gff['ORF'] = self.gff.apply(lambda row: get_attrib(row, "ID"), axis=1)
self.genes = self.gff[self.gff['ORF']!='']
self.genes['Gene'] = self.genes.apply(lambda row: get_attrib(row, "gene"), axis=1)
self.chromo_seqs = read_fasta(fasta_file)
def get_nt_seq(self, element_name, element_type):
td = self.genes[self.genes[element_type]==element_name]
if len(td) != 1:
print(len(td), 'hits, aborting.')
return None
else:
row = td.iloc[0]
cs = self.chromo_seqs[row['seqid']]
if row['strand'] == '+':
return cs[row['start']-1:row['end']]
else:
return reverse_transcribe(cs[row['start']-1:row['end']])
def get_aa_seq(self, element_name, element_type):
nt_s = self.get_nt_seq(element_name, element_type)
if nt_s:
aas = ''
for i in range(len(nt_s)//3):
aas += nt2codon[nt_s[i*3:(i+1)*3]]
if len(nt_s) % 3 != 0:
aas += '-leftover->' + nt_s[-1*(len(nt_s) % 3):]
return aas
def get_mutational_opps(self, element_name, element_type, verbose=False, return_nonsyn_over_all=False):
nt_s = self.get_nt_seq(element_name, element_type)
if nt_s:
if len(nt_s) % 3 != 0:
if verbose:
print('Warning: seq len not a multiple of 3', element_name)
print(self.genes[self.genes[element_type]==element_name].iloc[0]['Gene'])
print(self.get_aa_seq(element_name, element_type))
syn, nonsyn = 0, 0
for i in range(len(nt_s)//3):
codon_seq = nt_s[i*3:(i+1)*3]
codes_for = nt2codon[codon_seq]
for j in range(3):
for nt in 'ATCG':
if nt != codon_seq[j]:
if nt2codon[codon_seq[:j]+nt+codon_seq[j+1:]] == codes_for:
syn += 1
else:
nonsyn += 1
if return_nonsyn_over_all:
return nonsyn/(syn+nonsyn)
else:
return nonsyn / syn
# -
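To make the opportunity counting in `get_mutational_opps` concrete, the sketch below tallies synonymous vs. nonsynonymous single-nucleotide changes for one codon. It uses a small slice of the codon table (just 'GGG' and its nine single-substitution neighbours) so it runs standalone:

```python
# Minimal codon -> amino-acid table covering 'GGG' and all nine of its
# single-nucleotide neighbours (a slice of the full nt2codon table above).
table = {'GGG': 'G', 'GGA': 'G', 'GGC': 'G', 'GGT': 'G',
         'AGG': 'R', 'CGG': 'R', 'TGG': 'W',
         'GAG': 'E', 'GCG': 'A', 'GTG': 'V'}

def codon_opps(codon):
    """Count (synonymous, nonsynonymous) single-nt mutational opportunities."""
    aa = table[codon]
    syn = nonsyn = 0
    for j in range(3):
        for nt in 'ATCG':
            if nt != codon[j]:
                if table[codon[:j] + nt + codon[j+1:]] == aa:
                    syn += 1
                else:
                    nonsyn += 1
    return syn, nonsyn

print(codon_opps('GGG'))  # (3, 6): third-position changes are silent for glycine
```

This is the per-codon inner loop of `get_mutational_opps`; summing these counts over all codons of an ORF gives the nonsyn/syn ratio used above.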
seqI = SeqInfoGetter('../../Output/WGS/reference/w303_vlte.gff', '../../Output/WGS/reference/w303_vlte.fasta')
orf_lens = {o: len(seqI.get_nt_seq(o, "ORF")) for o in seqI.genes['ORF']}
orf_mutational_opp_ratios = {o: seqI.get_mutational_opps(o, "ORF") for o in seqI.genes['ORF']} # Yields the nonsyn/syn ratio of random mutations in each ORF
orf_mutational_nonsyn_opps = {o: seqI.get_mutational_opps(o, "ORF", return_nonsyn_over_all=True)*orf_lens[o] for o in seqI.genes['ORF']} # Yields the expected number of nonsynonymous mutational opportunities in each ORF
total_len = np.sum(list(orf_lens.values()))
total_nonsyn_ratio = np.sum([orf_mutational_opp_ratios[o]*orf_lens[o]/total_len for o in orf_lens])
total_nonsyn_ratio
# +
def is_snp(row):
if row['mutation_type'] != 'Indel':
# '*' is given if there is a spanning deletion at this site, so there are no counts for ref or alt (not a SNP)
if len(row['REF']) == 1 and len(row['ALT'])==1 and row['ALT'] != '*':
return True
return False
def hit_orfs(orf_list, search_list):
for o in str(orf_list).split(';'):
if o in search_list:
return True
return False
# by well dataframes with mutations
well_dats = dict()
for well in wells:
well_dats[well] = pd.read_csv('../../Output/WGS/combined_option/processed_well_output/' + well + '_processed.tsv', delimiter='\t')
# Exclude from analysis mutations in the 2-micron plasmid and telomeres, and SVs
well_dats[well] = well_dats[well][pd.isnull(well_dats[well]['SVTYPE']) & (well_dats[well]['CHROM']!='2-micron') & (~well_dats[well]['in_telomere'])]
well_dats[well]['is_snp'] = well_dats[well].apply(lambda r: is_snp(r), axis=1)
# a dataframe with hits and multiplicity for each ORF in the yeast genome
orf_hit_df = pd.read_csv('../../Output/WGS/combined_option/gene_hit_data.tsv', delimiter='\t')
orf_hit_df = orf_hit_df.merge(gene_info[['ORF', 'briefDescription', 'Essential_by_Liu2015', 'Essential_by_Giaever2002', 'start', 'end']], on='ORF', how='left')
# -
# # Multiplicity fig
# getting how many times each amino acid position is hit
# for now just taking the first annotation from the ANN column:
# when split by |, the 14th column is the codon position like 54/109
aa_hits = defaultdict(set)
for well in wells:
td = well_dats[well]
mgs_seen = set()
for entry in td[td['fixed_by_10150'] & pd.notnull(td['ORF_hit'])].as_matrix(['ANN', 'ORF_hit', 'mutation_type', 'CHROM', 'POS', 'REF', 'ALT', 'mutation_group']):
if entry[7] not in mgs_seen:
mgs_seen.add(entry[7])
aa_pos_split = str(entry[0]).split('|')
if len(aa_pos_split) > 13:
if aa_pos_split[13] != '':
aa_hits[entry[1]+'_'+aa_pos_split[13]].add(well+' '+str(entry[2])+' '+str(entry[3])+' '+str(entry[4])+' ' + '->'.join(entry[5:7]) + ' '+ aa_pos_split[10])
# +
# Simulating multiplicity by drawing genes to hit for each population,
# taking into account the number of hit mutations in each population,
# and the lengths of all ORFs in the yeast genome
def simulate_gene_hits(well_num_hits, nsamps=1000):
all_m_opps = list(orf_mutational_nonsyn_opps.values())
orf_hits = [0 for o in all_m_opps]
mean_opps = np.mean(all_m_opps)
orf_hit_probs = np.array(all_m_opps)/np.sum(all_m_opps)
multiplicities = []
pops_hit = []
for n in range(nsamps):
hit_table = []
for num_hits in well_num_hits:
hit_table.append(np.random.multinomial(num_hits, orf_hit_probs))
hit_table = np.array(hit_table)
orf_hits = np.sum(hit_table, axis=0)
multiplicities += [mult for mult in list(mean_opps * (orf_hits / np.array(all_m_opps))) if mult != 0] # we do not include orfs with zero hits
pops_hit += list(np.sum(np.clip(hit_table, 0, 1), axis=0))
return multiplicities, pops_hit
pop_hits = np.sum(orf_hit_df[wells], axis=0)
sim_mult, sim_hits = simulate_gene_hits(pop_hits)
# +
def simulate_aa_pos_hits(nsamps=100):
## Look at the ORFs that are actually hit and randomize which codon each hit lands on
pops_hit = []
for n in range(nsamps):
aa_sim_hits_dict = defaultdict(set)
for well in wells:
for entry in np.array(orf_hit_df[['ORF', 'size', well]]):
for i in range(entry[2]):
aa_sim_hits_dict[entry[0]+'_'+str(np.random.randint(entry[1]))].add(well)
pops_hit += [len(i) for i in aa_sim_hits_dict.values()]
return pops_hit
sim_aa_hits = simulate_aa_pos_hits()
sim_aa_hits += [0]*(np.sum(list(orf_lens.values()))//3-len(sim_aa_hits))
# +
f, subs = pl.subplots(1, 3, figsize=(7.5, 1.5), dpi=300)
pl.subplots_adjust(wspace=0.7)
actual_mult = list(orf_hit_df['multiplicity'])
actual_mult += [0]*(len(orf_lens)-len(orf_hit_df))
max_m = int(np.ceil(max(actual_mult)))
subs[0].hist(sim_mult, histtype='step', log=True, bins=[i for i in range(max_m)], cumulative=-1, edgecolor='k', alpha=0.5, label='Null',
weights=np.ones_like(sim_mult)/float(len(sim_mult)))
subs[0].hist(actual_mult, histtype='step', log=True, bins=[i for i in range(max_m)], cumulative=-1, lw=1, label='Actual',
weights=np.ones_like(actual_mult)/float(len(actual_mult)))
subs[0].set_xlabel('Multiplicity ($m$)', fontsize=9)
subs[0].set_ylabel('Fraction of\nGenes ' + r'$\geq m$', fontsize=9)
subs[0].set_ylim([0.5/len(orf_lens), 1.1])
actual_hits = list(orf_hit_df['pops_hit'])
actual_hits += [0]*(len(orf_lens)-len(orf_hit_df))
max_m = int(np.ceil(max(actual_hits)))
subs[1].hist(sim_hits, histtype='step', log=True, bins=[i for i in range(max_m)], cumulative=-1, edgecolor='k', alpha=0.5, label='Null',
weights=np.ones_like(sim_hits)/float(len(sim_hits)))
for i in range(5,8):
print('Prob of getting', i, 'pop hits or more:', len([j for j in sim_hits if j>=i])/len(sim_hits))
subs[1].hist(actual_hits, histtype='step', log=True, bins=[i for i in range(max_m)], cumulative=-1, lw=1, label='Actual',
weights=np.ones_like(actual_hits)/float(len(actual_hits)))
subs[1].set_xlabel('Populations hit ($PH$)', fontsize=9)
subs[1].set_ylabel('Fraction of\nGenes ' + r'$\geq PH$', fontsize=9)
subs[1].set_ylim([0.5/len(orf_lens), 1.1])
actual_aa_hits = [len(aa_hits[a]) for a in aa_hits]
actual_aa_hits += [0]*(np.sum(list(orf_lens.values()))//3-len(actual_aa_hits))
max_m = int(np.ceil(max(actual_aa_hits)))
subs[2].hist(sim_aa_hits, histtype='step', log=True, bins=[i for i in range(max_m)], cumulative=-1, edgecolor='k', alpha=0.5, label='Null',
weights=np.ones_like(sim_aa_hits)/float(len(sim_aa_hits)))
subs[2].hist(actual_aa_hits, histtype='step', log=True, bins=[i for i in range(max_m)], cumulative=-1, lw=1, label='Actual',
weights=np.ones_like(actual_aa_hits)/float(len(actual_aa_hits)))
subs[2].set_xlabel('Populations hit ($PH$)', fontsize=9)
subs[2].set_ylabel('Fraction of\nAA sites ' + r'$\geq PH$', fontsize=9)
subs[2].legend(frameon=False, fontsize=7)
lets = 'ABC'
for i in range(3):
subs[i].annotate(lets[i], fontsize=12, xy=(-0.65, 1.1), xycoords="axes fraction", horizontalalignment="center")
sns.despine()
f.savefig('../../Output/Figs/Figure6_multiplicity.png', background='transparent', bbox_inches='tight', pad_inches=0.1)
f.savefig('../../Output/Figs/Figure6_multiplicity.svg', background='transparent', bbox_inches='tight', pad_inches=0.1)
# -
# ## Note that a lot of these are indels that may be hypermutable due to repetitive regions:
# +
orf_hit_nums = {i[0]:i[1] for i in np.array(orf_hit_df[['ORF', 'num_hits']])}
orf_codon_nums = {i[0]:i[1]//3 for i in np.array(orf_hit_df[['ORF', 'size']])}
for aa in aa_hits:
if len(aa_hits[aa])>2:
print(aa, o2g.get(aa.split('_')[0], 'NA'), len(aa_hits[aa]), len(set([a.split(' ')[0] for a in aa_hits[aa]])))
print(orf_hit_nums[aa.split('_')[0]], orf_codon_nums[aa.split('_')[0]])
print('P value:', (1-sci_stats.binom.cdf(len(aa_hits[aa])-1, orf_hit_nums[aa.split('_')[0]], (1/orf_codon_nums[aa.split('_')[0]])))*orf_codon_nums[aa.split('_')[0]])
for h in aa_hits[aa]:
print(h)
# -
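The corrected p-value printed above multiplies the one-sided binomial tail by the number of codons in the ORF (a Bonferroni-style correction for testing every codon). A dependency-free sketch of that calculation, with made-up numbers (n hits, k at one codon, m codons are all hypothetical):

```python
from math import comb

def corrected_codon_pvalue(k, n, m):
    """One-sided binomial tail P(X >= k) with X ~ Binom(n, 1/m),
    Bonferroni-corrected for the m codons tested (capped at 1)."""
    p = 1.0 / m
    tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
    return min(tail * m, 1.0)

# Hypothetical ORF: 10 hits total, 3 of them at the same codon, 100 codons
print(corrected_codon_pvalue(3, 10, 100))
```

This mirrors the `(1 - sci_stats.binom.cdf(k - 1, n, 1/m)) * m` expression used in the loop above.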
| Other_and_plotting/.ipynb_checkpoints/PLOT_multiplicity-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Introduction
# <div style="width:100%;height:6px;background-color:Black;"></div>
# ### Pressure Swing Distillation
#
# The minimum-boiling azeotrope of methanol and acetone is separated by exploiting the pressure sensitivity of its azeotropic composition, operating two columns at different pressures; adapted from [Luyben et al. Ind.Eng.Chem.Res. (2008) 47 pp. 2696-2707.](http://pubs.acs.org/doi/pdf/10.1021/ie701695u).
#
# The number of stages in this flowsheet differs from the specifications used in the ChemSep example.
#
# [Link to the flowsheet drawing](http://www.chemsep.org/downloads/data/Pressure_Swing_MA_iecr47p2696.png)
# # .NET Initialization
# <div style="width:100%;height:6px;background-color:Black;"></div>
# +
import clr
clr.AddReference(r"..\bin\MiniSim.Core")
import MiniSim.Core.Expressions as expr
from MiniSim.Core.Flowsheeting import MaterialStream, Flowsheet
import MiniSim.Core.Numerics as num
from MiniSim.Core.UnitsOfMeasure import Unit, SI, METRIC, PhysicalDimension
from MiniSim.Core.ModelLibrary import Flash, Heater, Mixer, Splitter, EquilibriumStageSection
import MiniSim.Core.PropertyDatabase as chemsep
from MiniSim.Core.Reporting import Generator, StringBuilderLogger
from MiniSim.Core.Thermodynamics import ThermodynamicSystem
# -
# %matplotlib inline
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (16,8)
plt.rcParams['grid.color'] = 'k'
# -
# # General Objects Instantiation
# <div style="width:100%;height:6px;background-color:Black;"></div>
Database = chemsep.ChemSepAdapter()
logger = StringBuilderLogger();
reporter = Generator(logger)
# # Set up Thermodynamics
# +
sys= ThermodynamicSystem("Test2","NRTL", "default")
sys.AddComponent(Database.FindComponent("Acetone"))
sys.AddComponent(Database.FindComponent("Methanol"))
Database.FillBIPs(sys)
kmolh=Unit.Make([SI.kmol],[SI.h])
tonh=Unit.Make([METRIC.ton],[SI.h])
sys.VariableFactory.SetOutputDimension(PhysicalDimension.HeatFlow, SI.MW)
sys.VariableFactory.SetOutputDimension(PhysicalDimension.Pressure, METRIC.bar)
sys.VariableFactory.SetOutputDimension(PhysicalDimension.MassFlow, tonh)
# -
# # Analysis of the Thermo System
def thermoAnalysis(psys):
numComps=len(sys.Components)
names=sys.GetComponentIds()
numSteps=20
mixture= MaterialStream("Mix", sys)
mixture.Specify("VF",0.0)
mixture.Specify("P",psys, METRIC.mbar)
for c in names:
mixture.Specify("n["+c+"]",1.0)
mixture.InitializeFromMolarFlows()
mixture.FlashPZ()
test= Flowsheet("test")
test.AddMaterialStream(mixture)
solver= num.DecompositionSolver(logger)
result=solver.Solve(test)
fig,axs=plt.subplots(numComps,numComps,figsize=(8,8))
for i in range(numComps):
for j in range(numComps):
if j!=i:
xvec=[]
yvec=[]
for c in range(numSteps):
for k in range(numComps):
mixture.Specify("n["+names[k]+"]",0.0)
mixture.Specify("n["+names[i]+"]",c/(numSteps-1))
mixture.Specify("n["+names[j]+"]",1.0-c/(numSteps-1))
mixture.InitializeFromMolarFlows()
mixture.FlashPZ()
solver.Solve(test)
xvec.append(mixture.GetVariable('xL['+names[j]+']').Val())
yvec.append(mixture.GetVariable('xV['+names[j]+']').Val())
axs[i,j].plot(xvec, yvec)
axs[i,j].plot(xvec, xvec)
axs[i,j].set_title(names[j] +' in '+names[i])
axs[i,j].set_xlabel('$x_{'+names[j]+'}$')
axs[i,j].set_ylabel('$y_{'+names[j]+'}$')
axs[i, j].set_aspect('equal', 'box')
else:
axs[i, j].axis('off')
plt.tight_layout()
logger.Flush()
plt.suptitle('(x,y)-Diagram at '+str(round(psys,2))+' mbar', y=1.05);
thermoAnalysis(1000)
thermoAnalysis(10000)
# # Low pressure column
# +
Feed = (MaterialStream("Feed", sys)
.Specify("T",43, METRIC.C)
.Specify("P",1, METRIC.bar)
.Specify("n[Acetone]", 270, kmolh)
.Specify("n[Methanol]", 270, kmolh)
.InitializeFromMolarFlows()
.FlashPT())
Recycle = (MaterialStream("Recycle", sys)
.Specify("T",54, METRIC.C)
.Specify("P",10, METRIC.bar)
.Specify("n[Acetone]",10, kmolh)
.Specify("n[Methanol]", 170, kmolh)
.InitializeFromMolarFlows()
.FlashPZ())
S01 = MaterialStream("S01", sys)
S02 = MaterialStream("S02", sys)
S03 =(MaterialStream("S03", sys)
.Init("T", 51, METRIC.C)
.Init("P", 1, METRIC.bar)
.Init("n[Acetone]",50, kmolh)
.Init("n[Methanol]", 50, kmolh))
S04 = MaterialStream("S04", sys)
S05 = (MaterialStream("S05", sys)
.Init("T", 61, METRIC.C)
.Init("P", 1, METRIC.bar)
.Init("n[Acetone]",2, kmolh)
.Init("n[Methanol]", 100, kmolh))
Methanol = MaterialStream("Methanol", sys)
D1 = MaterialStream("D1", sys)
# +
C1 = (EquilibriumStageSection("C1",sys,52)
.Connect("VIn", S05)
.Connect("LIn", S03)
.Connect("VOut", S01)
.Connect("LOut", S04)
.ConnectFeed(Feed,37)
.ConnectFeed(Recycle,41)
.MakeAdiabatic()
.MakeIsobaric()
.FixStageEfficiency(1.0)
.Initialize(2.3,0.25,logger))
REB1 =(Flash("REB1",sys)
.Connect("In", S04)
.Connect("Vap", S05)
.Connect("Liq", Methanol)
.Specify("P", 1, METRIC.bar)
.Specify("VF",0.7)
.Initialize())
COND1 = (Heater("COND1",sys)
.Connect("In", S01)
.Connect("Out", S02)
.Specify("P",1, METRIC.bar)
.Specify("VF",0)
.Initialize())
RefluxRatio1=2.36
REFSPL1 = (Splitter("REFSPL1",sys)
.Connect("In", S02)
.Connect("Out1", S03)
.Connect("Out2", D1)
.Specify("DP",0, METRIC.bar)
.Specify("K",RefluxRatio1/(1.0+RefluxRatio1))
.Initialize())
C1.Initialize(1.0,0.25,logger)
logger.Flush();
# -
flowsheet= (Flowsheet("Flow")
.AddMaterialStreams(Feed, Recycle, S01,S02,S03,D1,S04,S05, Methanol)
.AddUnits(C1, REB1, COND1, REFSPL1))
# +
solver= num.DecompositionSolver(logger)
solver.Solve(flowsheet)
print (logger.Flush())
# -
reporter.Report(flowsheet, 5, False)
print (logger.Flush())
# # Add High pressure column
# +
S06 = MaterialStream("S06", sys)
S07 = MaterialStream("S07", sys)
S08 = (MaterialStream("S08", sys)
.Init("T", 54, METRIC.C)
.Init("P", 10, METRIC.bar)
.Init("n[Acetone]",50, kmolh)
.Init("n[Methanol]", 50, kmolh))
S09 = MaterialStream("S09", sys)
S10 = (MaterialStream("S10", sys)
.Init("T", 140, METRIC.C)
.Init("P", 10, METRIC.bar)
.Init("n[Acetone]",100, kmolh)
.Init("n[Methanol]", 2, kmolh))
Acetone = MaterialStream("Acetone", sys)
D2 = MaterialStream("D2", sys)
C2 = (EquilibriumStageSection("C2",sys,61)
.Connect("VIn", S10)
.Connect("LIn", S08)
.Connect("VOut", S06)
.Connect("LOut", S09)
.ConnectFeed(D1,41)
.MakeAdiabatic()
.MakeIsobaric()
.FixStageEfficiency(1.0)
.Initialize(3.11,0.1,logger))
REB2 = (Flash("REB2",sys)
.Connect("In", S09)
.Connect("Vap", S10)
.Connect("Liq", Acetone)
.Specify("P",10, METRIC.bar)
.Specify("VF",0.7)
.Initialize())
COND2 = (Heater("COND2",sys)
.Connect("In", S06)
.Connect("Out", S07)
.Specify("P",10, METRIC.bar)
.Specify("VF",0)
.Initialize())
RefluxRatio2=3.11
REFSPL2 = (Splitter("REFSPL2",sys)
.Connect("In", S07)
.Connect("Out1", D2)
.Connect("Out2", S08)
.Specify("DP",0, METRIC.bar)
.Specify("K",1-RefluxRatio2/(1.0+RefluxRatio2))
.Initialize())
C2.Solve()
REB2.Solve()
COND2.Solve()
REFSPL2.Solve()
C2.Initialize(3.11,0.1,logger)
logger.Flush();
# -
flowsheet.AddMaterialStreams(S06,S07,S08,D2,S09,S10,Acetone)
flowsheet.AddUnits(C2, REB2, COND2, REFSPL2);
solver.Solve(flowsheet)
print (logger.Flush())
# # Close Recycle
# The recycle stream was held fixed while we performed the startup calculations. Now we connect it to the distillate stream of C2 to close the loop, using a simple adiabatic heater to close the mass and energy balances.
# +
Recycle.Unfix()
RECY01 = (Heater("RECY01",sys)
.Connect("In", D2)
.Connect("Out", Recycle)
.Specify("DP",9, METRIC.bar)
.Specify("Q",0, SI.kW)
.Initialize()
.Solve())
flowsheet.AddUnit(RECY01);
# -
solver.Solve(flowsheet)
print (logger.Flush())
# # Reach Specifications
REB1.Unspecify("VF")
REB2.Unspecify("VF")
Methanol.GetVariable("x[Methanol]").Fix(0.99)
Acetone.GetVariable("x[Acetone]").Fix(0.99)
solver.Solve(flowsheet)
print (logger.Flush())
reporter.Report(flowsheet, 6, False)
print (logger.Flush())
# # Overall Mass Balance
# For reporting purposes, we create a temporary flowsheet that collects the main process streams. This flowsheet will not be solved, but is used by the reporter object.
# +
summary= Flowsheet("Summary").AddMaterialStreams(Feed, Recycle, Acetone, Methanol)
reporter.Report(summary, 4, False)
print (logger.Flush())
# -
# # Temperature Profiles
tprof=C1.GetProfile("T")
stages= range(1, C1.NumberOfTrays+1)
df_temp= pd.DataFrame(tprof, index=stages, columns=["T"])
plt.plot( df_temp['T'], stages, linestyle='-', marker='o')
plt.gca().invert_yaxis()
plt.xlabel("Temperature ["+str(sys.VariableFactory.Output.UnitDictionary[PhysicalDimension.Temperature])+"]");
plt.ylabel("Stage");
plt.title("Temperature Profile C1");
reporter.Report(C1,True)
print(logger.Flush())
tprof=C2.GetProfile("T")
stages= range(1, C2.NumberOfTrays+1)
df_temp= pd.DataFrame(tprof, index=stages, columns=["T"])
plt.plot( df_temp['T'], stages, linestyle='-', marker='o')
plt.gca().invert_yaxis()
plt.xlabel("Temperature ["+str(sys.VariableFactory.Output.UnitDictionary[PhysicalDimension.Temperature])+"]");
plt.ylabel("Stage");
plt.title("Temperature Profile C2");
reporter.Report(C2,True)
print(logger.Flush())
| doc/PressureSwingDistillation.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + pycharm={"name": "#%%\n"}
from PIL import Image
import imagehash
import tarfile
import io
import matplotlib.pyplot as plt
# + pycharm={"name": "#%%\n"}
# Insert your own file path below
path = 'pref_000.tar.gz'
tf = tarfile.open(path)
img_names = tf.getnames()
num_files = len(img_names)
# + pycharm={"name": "#%%\n"}
dist = 100  # running minimum distance; 64-bit phash distances are at most 64, so 100 acts as infinity
D = 12  # threshold at or below which two images are considered near-duplicates
total_close = 0
min_coords_list = []
img_p = []
for i in range(num_files):
img = Image.open(io.BytesIO(tf.extractfile(tf.getmember(img_names[i])).read()))
phash_val = imagehash.phash(img)
img_p.append(phash_val)
# + pycharm={"name": "#%%\n"}
# For each image, find its nearest neighbour by phash distance and save side-by-side pairs within D
for image in range(num_files):
for images in range(num_files):
if images != image:
d = img_p[image]-img_p[images]
if d <= dist:
dist = d
coord = (image, images, dist)
if dist <= D:
total_close += 1
min_coords_list.append(coord)
plt.clf()
plt.subplot(121)
plt.imshow(Image.open(io.BytesIO(tf.extractfile(tf.getmember(img_names[coord[0]])).read())))
plt.axis('off')
plt.subplot(122)
plt.imshow(Image.open(io.BytesIO(tf.extractfile(tf.getmember(img_names[coord[1]])).read())))
plt.axis('off')
plt.savefig(f'Pair {coord[0]}-{coord[1]}.png')
dist = 100
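The `img_p[image] - img_p[images]` difference above is imagehash's Hamming distance between the two 64-bit perceptual hashes. A dependency-free sketch of that distance on hex-encoded hash strings (the hash values here are made up for illustration):

```python
def hamming_hex(h1, h2):
    """Number of differing bits between two equal-length hex strings."""
    assert len(h1) == len(h2)
    x = int(h1, 16) ^ int(h2, 16)  # XOR leaves a 1 bit wherever the hashes differ
    return bin(x).count('1')

# Two hypothetical 64-bit phash values differing in a single bit
print(hamming_hex('ffee0000aa55aa55', 'ffee0000aa55aa54'))  # 1
```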
| image_clustering_and_metric_inference/pHash_mem.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Deep Reinforcement Learning: Teaching an AI to Play Flappy Bird with a Convolutional Neural Network
#
# In this lesson we use the Flappy Bird game to explain the principles of deep reinforcement learning in detail, and show how to train a neural network to play the game.
#
# The code includes a Flappy Bird game implemented with the PyGame package, the definition and implementation of a convolutional neural network, and the deep reinforcement learning algorithm.
#
# This program is adapted from the TensorFlow version of an AI playing Flappy Bird: https://github.com/yenchenlin/DeepLearningFlappyBird
#
# This file is the companion source code for lesson X of "Deep Learning on the Torch", produced by Swarma Campus (http://campus.swarma.org)
# ## Part 1: Implementing Flappy Bird with PyGame
# In this part we implement a Flappy Bird game using the PyGame package, which makes it very easy to load images and audio and quickly build small games
# ### 1. Loading the resources the game needs
# +
# Load all of the game's resources, including images and audio
# Uses the PyGame package; for installation see: http://www.pygame.org/wiki/GettingStarted
import pygame
# sys is needed to detect the operating system type
import sys
def load():
# Function that loads the various resources
# Sprite images for the bird in its different states
PLAYER_PATH = (
'assets/sprites/redbird-upflap.png',
'assets/sprites/redbird-midflap.png',
'assets/sprites/redbird-downflap.png'
)
# Path of the background image
BACKGROUND_PATH = 'assets/sprites/background-black.png'
# Path of the pipe image
PIPE_PATH = 'assets/sprites/pipe-green.png'
IMAGES, SOUNDS, HITMASKS = {}, {}, {}
# Load the images needed to display the score digits
IMAGES['numbers'] = (
pygame.image.load('assets/sprites/0.png').convert_alpha(),
pygame.image.load('assets/sprites/1.png').convert_alpha(),
pygame.image.load('assets/sprites/2.png').convert_alpha(),
pygame.image.load('assets/sprites/3.png').convert_alpha(),
pygame.image.load('assets/sprites/4.png').convert_alpha(),
pygame.image.load('assets/sprites/5.png').convert_alpha(),
pygame.image.load('assets/sprites/6.png').convert_alpha(),
pygame.image.load('assets/sprites/7.png').convert_alpha(),
pygame.image.load('assets/sprites/8.png').convert_alpha(),
pygame.image.load('assets/sprites/9.png').convert_alpha()
)
# Load the ground image
IMAGES['base'] = pygame.image.load('assets/sprites/base.png').convert_alpha()
# Load the sound files (the file extension differs between operating systems)
if 'win' in sys.platform:
soundExt = '.wav'
else:
soundExt = '.ogg'
SOUNDS['die'] = pygame.mixer.Sound('assets/audio/die' + soundExt)
SOUNDS['hit'] = pygame.mixer.Sound('assets/audio/hit' + soundExt)
SOUNDS['point'] = pygame.mixer.Sound('assets/audio/point' + soundExt)
SOUNDS['swoosh'] = pygame.mixer.Sound('assets/audio/swoosh' + soundExt)
SOUNDS['wing'] = pygame.mixer.Sound('assets/audio/wing' + soundExt)
# Load the background image
IMAGES['background'] = pygame.image.load(BACKGROUND_PATH).convert()
# Load the bird (player) sprites
IMAGES['player'] = (
pygame.image.load(PLAYER_PATH[0]).convert_alpha(),
pygame.image.load(PLAYER_PATH[1]).convert_alpha(),
pygame.image.load(PLAYER_PATH[2]).convert_alpha(),
)
# Load the pipes
IMAGES['pipe'] = (
pygame.transform.rotate(
pygame.image.load(PIPE_PATH).convert_alpha(), 180),
pygame.image.load(PIPE_PATH).convert_alpha(),
)
# Get the hitmasks for the pipes
HITMASKS['pipe'] = (
getHitmask(IMAGES['pipe'][0]),
getHitmask(IMAGES['pipe'][1]),
)
# Hitmasks for the player
HITMASKS['player'] = (
getHitmask(IMAGES['player'][0]),
getHitmask(IMAGES['player'][1]),
getHitmask(IMAGES['player'][2]),
)
# Return three dictionaries whose values store the images, sounds and hitmasks respectively
return IMAGES, SOUNDS, HITMASKS
def getHitmask(image):
"""根据图像的alpha,获得蒙板"""
#所谓蒙板就是指将图像中的主体从整个图像中抠出来的技术,从而方便与其它的对象合成到一起
#蒙板用一个boolean类型的列表来存储
mask = []
for x in range(image.get_width()):
mask.append([])
for y in range(image.get_height()):
mask[x].append(bool(image.get_at((x,y))[3]))
return mask
# -
# ### 2. Implementing the Flappy Bird game logic
# +
# Load the packages the program needs
import numpy as np
import sys
import random
import pygame
import pygame.surfarray as surfarray
from pygame.locals import *
from itertools import cycle
FPS = 30 # frame rate
SCREENWIDTH = 288 # screen width
SCREENHEIGHT = 512 # screen height
pygame.init() # initialize the game
FPSCLOCK = pygame.time.Clock() # define the program clock
SCREEN = pygame.display.set_mode((SCREENWIDTH, SCREENHEIGHT)) # define the screen object
pygame.display.set_caption('Flappy Bird') # set the window title
IMAGES, SOUNDS, HITMASKS = load() # load the game resources
PIPEGAPSIZE = 100 # vertical gap between a pair of pipes
BASEY = SCREENHEIGHT * 0.79 # height of the ground
# Bird attributes: width, height, etc.
PLAYER_WIDTH = IMAGES['player'][0].get_width()
PLAYER_HEIGHT = IMAGES['player'][0].get_height()
# Pipe attributes: height, width
PIPE_WIDTH = IMAGES['pipe'][0].get_width()
PIPE_HEIGHT = IMAGES['pipe'][0].get_height()
# Background width
BACKGROUND_WIDTH = IMAGES['background'].get_width()
PLAYER_INDEX_GEN = cycle([0, 1, 2, 1])
# Game model class
class GameState:
def __init__(self):
# Initialization
# Initial score, player index and loop iteration are all 0
self.score = self.playerIndex = self.loopIter = 0
# Set the player's initial position
self.playerx = int(SCREENWIDTH * 0.2)
self.playery = int((SCREENHEIGHT - PLAYER_HEIGHT) / 2)
self.basex = 0
# Initial shift of the ground
self.baseShift = IMAGES['base'].get_width() - BACKGROUND_WIDTH
# Generate two random pipes
newPipe1 = getRandomPipe()
newPipe2 = getRandomPipe()
# Set the x, y coordinates of the initial pipes
self.upperPipes = [
{'x': SCREENWIDTH, 'y': newPipe1[0]['y']},
{'x': SCREENWIDTH + (SCREENWIDTH / 2), 'y': newPipe2[0]['y']},
]
self.lowerPipes = [
{'x': SCREENWIDTH, 'y': newPipe1[1]['y']},
{'x': SCREENWIDTH + (SCREENWIDTH / 2), 'y': newPipe2[1]['y']},
]
# Define the player's attributes
self.pipeVelX = -4
self.playerVelY = 0 # bird's velocity along the y axis, initially 0
self.playerMaxVelY = 10 # maximum velocity along y, i.e. the maximum falling speed
self.playerMinVelY = -8 # maximum upward velocity along y
self.playerAccY = 1 # the bird's downward acceleration
self.playerFlapAcc = -9 # acceleration when flapping the wings
self.playerFlapped = False # whether the player has flapped
def frame_step(self, input_actions):
# input_actions is an action array storing the activation (0 or 1) of the two actions
# One iteration per game frame
pygame.event.pump()
# Default reward for each step
reward = 0.1
terminal = False
# Only one action is allowed per frame
if sum(input_actions) != 1:
raise ValueError('Multiple input actions!')
# input_actions[0] == 1: do nothing
# input_actions[1] == 1: the bird flaps its wings
if input_actions[1] == 1:
# The bird flaps upward
if self.playery > -2 * PLAYER_HEIGHT:
self.playerVelY = self.playerFlapAcc
self.playerFlapped = True
#SOUNDS['wing'].play()
# Check whether a pipe was passed; if so, increase the score
playerMidPos = self.playerx + PLAYER_WIDTH / 2
for pipe in self.upperPipes:
pipeMidPos = pipe['x'] + PIPE_WIDTH / 2
if pipeMidPos <= playerMidPos < pipeMidPos + 4:
self.score += 1
#SOUNDS['point'].play()
reward = 1
# Rotate playerIndex
if (self.loopIter + 1) % 3 == 0:
self.playerIndex = next(PLAYER_INDEX_GEN)
self.loopIter = (self.loopIter + 1) % 30
self.basex = -((-self.basex + 100) % self.baseShift)
# Bird movement
if self.playerVelY < self.playerMaxVelY and not self.playerFlapped:
self.playerVelY += self.playerAccY
if self.playerFlapped:
self.playerFlapped = False
self.playery += min(self.playerVelY, BASEY - self.playery - PLAYER_HEIGHT)
if self.playery < 0:
self.playery = 0
# Pipe movement
for uPipe, lPipe in zip(self.upperPipes, self.lowerPipes):
uPipe['x'] += self.pipeVelX
lPipe['x'] += self.pipeVelX
# Spawn a new pipe when the leading pipe approaches the left edge
if 0 < self.upperPipes[0]['x'] < 5:
newPipe = getRandomPipe()
self.upperPipes.append(newPipe[0])
self.lowerPipes.append(newPipe[1])
# Remove the first pipe once it moves off screen
if self.upperPipes[0]['x'] < -PIPE_WIDTH:
self.upperPipes.pop(0)
self.lowerPipes.pop(0)
# Check for collisions
isCrash= checkCrash({'x': self.playerx, 'y': self.playery,
'index': self.playerIndex},
self.upperPipes, self.lowerPipes)
# If a collision occurred, the game is over and terminal=True
if isCrash:
#SOUNDS['hit'].play()
#SOUNDS['die'].play()
terminal = True
self.__init__()
reward = -1
# Draw every sprite onto the screen at its coordinates
SCREEN.blit(IMAGES['background'], (0,0))
for uPipe, lPipe in zip(self.upperPipes, self.lowerPipes):
SCREEN.blit(IMAGES['pipe'][0], (uPipe['x'], uPipe['y']))
SCREEN.blit(IMAGES['pipe'][1], (lPipe['x'], lPipe['y']))
SCREEN.blit(IMAGES['base'], (self.basex, BASEY))
# print score so player overlaps the score
# showScore(self.score)
SCREEN.blit(IMAGES['player'][self.playerIndex],
(self.playerx, self.playery))
# Capture the current game screen as an image array to return
image_data = pygame.surfarray.array3d(pygame.display.get_surface())
pygame.display.update()
FPSCLOCK.tick(FPS)
#print self.upperPipes[0]['y'] + PIPE_HEIGHT - int(BASEY * 0.2)
# This function returns three values: the current frame's image, the reward obtained, and whether the game has ended
return image_data, reward, terminal
def getRandomPipe():
# Function that generates a random pipe
"""returns a randomly generated pipe"""
# The vertical gap position between the two pipes is drawn directly from the values below
gapYs = [20, 30, 40, 50, 60, 70, 80, 90]
index = random.randint(0, len(gapYs)-1)
gapY = gapYs[index]
# Set the position of the newly generated pipes
gapY += int(BASEY * 0.2)
pipeX = SCREENWIDTH + 10
# Return the pipe coordinates
return [
{'x': pipeX, 'y': gapY - PIPE_HEIGHT}, # upper pipe
{'x': pipeX, 'y': gapY + PIPEGAPSIZE}, # lower pipe
]
def showScore(score):
# Function that draws the score directly on the screen
"""displays score in center of screen"""
scoreDigits = [int(x) for x in list(str(score))]
totalWidth = 0 # total width of all numbers to be printed
for digit in scoreDigits:
totalWidth += IMAGES['numbers'][digit].get_width()
Xoffset = (SCREENWIDTH - totalWidth) / 2
for digit in scoreDigits:
SCREEN.blit(IMAGES['numbers'][digit], (Xoffset, SCREENHEIGHT * 0.1))
Xoffset += IMAGES['numbers'][digit].get_width()
def checkCrash(player, upperPipes, lowerPipes):
# Collision-detection function. Basic idea: treat each object as a rectangular region, then check whether two rectangles collide
# The check goes down to each object's image hitmask, not merely rectangle overlap
"""returns True if player collders with base or pipes."""
pi = player['index']
player['w'] = IMAGES['player'][0].get_width()
player['h'] = IMAGES['player'][0].get_height()
# Check whether the bird hit the ground
if player['y'] + player['h'] >= BASEY - 1:
return True
else:
# Check whether the bird collided with a pipe
playerRect = pygame.Rect(player['x'], player['y'],
player['w'], player['h'])
for uPipe, lPipe in zip(upperPipes, lowerPipes):
# Rectangles of the upper and lower pipes
uPipeRect = pygame.Rect(uPipe['x'], uPipe['y'], PIPE_WIDTH, PIPE_HEIGHT)
lPipeRect = pygame.Rect(lPipe['x'], lPipe['y'], PIPE_WIDTH, PIPE_HEIGHT)
# Get the mask of each element
pHitMask = HITMASKS['player'][pi]
uHitmask = HITMASKS['pipe'][0]
lHitmask = HITMASKS['pipe'][1]
# Check for collision with the upper and lower pipes
uCollide = pixelCollision(playerRect, uPipeRect, pHitMask, uHitmask)
lCollide = pixelCollision(playerRect, lPipeRect, pHitMask, lHitmask)
if uCollide or lCollide:
return True
return False
def pixelCollision(rect1, rect2, hitmask1, hitmask2):
"""在像素级别检查两个物体是否发生碰撞"""
rect = rect1.clip(rect2)
if rect.width == 0 or rect.height == 0:
return False
# Determine the clipped rectangle and loop over each pixel inside it to see whether the two objects collide
x1, y1 = rect.x - rect1.x, rect.y - rect1.y
x2, y2 = rect.x - rect2.x, rect.y - rect2.y
for x in range(rect.width):
for y in range(rect.height):
if hitmask1[x1+x][y1+y] and hitmask2[x2+x][y2+y]:
return True
return False
# -
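The mask-overlap logic of `pixelCollision` can be reproduced without pygame. In this standalone sketch the helper `masks_collide` is hypothetical: plain NumPy boolean arrays stand in for `HITMASKS` and `(x, y)` tuples for `pygame.Rect`; it clips the two bounding boxes and then ANDs the masks over the overlap region, exactly the structure of the function above.

```python
import numpy as np

def masks_collide(pos1, mask1, pos2, mask2):
    """pos = (x, y) top-left corner; mask = 2D boolean array indexed [x][y].
    True if an opaque pixel of mask1 overlaps an opaque pixel of mask2."""
    x1, y1 = pos1
    x2, y2 = pos2
    # Clip the two bounding boxes, as rect1.clip(rect2) does in pygame
    left = max(x1, x2)
    top = max(y1, y2)
    right = min(x1 + mask1.shape[0], x2 + mask2.shape[0])
    bottom = min(y1 + mask1.shape[1], y2 + mask2.shape[1])
    if right <= left or bottom <= top:
        return False  # the bounding boxes do not even touch
    # AND the two masks over the overlap region
    a = mask1[left - x1:right - x1, top - y1:bottom - y1]
    b = mask2[left - x2:right - x2, top - y2:bottom - y2]
    return bool(np.any(a & b))

solid = np.ones((4, 4), dtype=bool)    # fully opaque sprite
hollow = np.zeros((4, 4), dtype=bool)  # fully transparent sprite
print(masks_collide((0, 0), solid, (2, 2), solid))   # rectangles and pixels overlap
print(masks_collide((0, 0), solid, (2, 2), hollow))  # rectangles overlap, pixels do not
```

The second call is the point of pixel-level checking: the bounding boxes intersect, but no opaque pixels coincide, so there is no collision.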
# ### 3. A quick test of the game
# +
import matplotlib.pyplot as plt
from IPython.display import display, clear_output
# Create a new game
game = GameState()
fig = plt.figure()
axe = fig.add_subplot(111)
dat = np.zeros((10, 10))
img = axe.imshow(dat)
# Run a loop of 100 steps and display every frame
for i in range(100):
clear_output(wait = True)
image_data, reward, terminal = game.frame_step([0,1])
image = np.transpose(image_data, (1, 0, 2))
img.set_data(image)
img.autoscale()
display(fig)
# -
# ## Part 2: Training a neural network to play the game
# ### 1. Defining the network
# +
# Import the required packages
from __future__ import print_function
import torch
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
import cv2 # requires the OpenCV package
import sys
sys.path.append("game/")
import random
import numpy as np
from collections import deque
# Define a set of constants; epsilon is the per-step probability of emitting a random action
GAME = 'bird' # name of the game
ACTIONS = 2 # number of valid output actions
GAMMA = 0.99 # decay rate of future rewards in reinforcement learning
OBSERVE = 10000. # time steps before training starts: observe 10000 frames first
EXPLORE = 3000000. # time steps over which to anneal, i.e. gradually shrink the random-action rate epsilon
FINAL_EPSILON = 0.0001 # final value of epsilon
INITIAL_EPSILON = 0.1 # initial value of epsilon
REPLAY_MEMORY = 50000 # maximum number of training frames to remember
BATCH = 32 # number of records per batch
FRAME_PER_ACTION = 1 # number of time steps between valid action outputs
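The annealing schedule encoded by these constants — epsilon held at INITIAL_EPSILON for the first OBSERVE steps, then decreased linearly down to FINAL_EPSILON over EXPLORE steps — can be sketched on its own. The function `epsilon_at` is illustrative, not part of the original code:

```python
# Constants mirror the ones defined in this cell.
OBSERVE = 10000.0
EXPLORE = 3000000.0
INITIAL_EPSILON = 0.1
FINAL_EPSILON = 0.0001

def epsilon_at(t):
    """Epsilon after t time steps under the linear annealing rule."""
    if t <= OBSERVE:
        return INITIAL_EPSILON          # pure observation phase, no annealing yet
    steps = min(t - OBSERVE, EXPLORE)   # annealing stops after EXPLORE steps
    return INITIAL_EPSILON - (INITIAL_EPSILON - FINAL_EPSILON) * steps / EXPLORE

print(epsilon_at(0))                    # 0.1
print(epsilon_at(OBSERVE + EXPLORE))    # 0.0001
```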
# +
# Build a multi-layer CNN whose input is 4 stacked frames and whose output is the Q value of each possible action
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
# First convolution: 4 channels to 32, kernel size 8, stride 4, padding 2
self.conv1 = nn.Conv2d(4, 32, 8, 4, padding = 2)
# Pooling layer with a 2*2 window
self.pool = nn.MaxPool2d(2, 2)
# Second convolution: 32 channels to 64, kernel size 4, stride 2, padding 1
self.conv2 = nn.Conv2d(32, 64, 4, 2, padding = 1)
# Second pooling layer, 2*2 window, padding 1
self.pool2 = nn.MaxPool2d(2, 2, padding = 1)
# Third convolution: 64 channels in and out, padding 1
self.conv3 = nn.Conv2d(64, 64, 3, 1, padding = 1)
# Finally, two fully connected layers
self.fc_sz = 1600
self.fc1 = nn.Linear(self.fc_sz, 256)
self.fc2 = nn.Linear(256, ACTIONS)
def forward(self, x):
# The input is one batch of data, each sample being 4 consecutive 80*80 frames
# x shape: batch_size, 4, 80, 80
x = self.conv1(x)
# x shape: batch_size, 32, 20, 20
x = F.relu(x)
x = self.pool(x)
# x shape: batch_size, 32, 10, 10
x = F.relu(self.conv2(x))
# x shape: batch_size, 64, 5, 5
#x = self.pool2(x)
x = F.relu(self.conv3(x))
# x shape: batch_size, 64, 5, 5
#x = self.pool2(x)
# Flatten x into a 1600-dimensional vector: batch_size, 1600
x = x.view(-1, self.fc_sz)
x = F.relu(self.fc1(x))
readout = self.fc2(x)
return readout, x
def init(self):
# Initialize all network weights
self.conv1.weight.data = torch.abs(0.01 * torch.randn(self.conv1.weight.size()))
self.conv2.weight.data = torch.abs(0.01 * torch.randn(self.conv2.weight.size()))
self.conv3.weight.data = torch.abs(0.01 * torch.randn(self.conv3.weight.size()))
self.fc1.weight.data = torch.abs(0.01 * torch.randn(self.fc1.weight.size()))
self.fc2.weight.data = torch.abs(0.01 * torch.randn(self.fc2.weight.size()))
self.conv1.bias.data = torch.ones(self.conv1.bias.size()) * 0.01
self.conv2.bias.data = torch.ones(self.conv2.bias.size()) * 0.01
self.conv3.bias.data = torch.ones(self.conv3.bias.size()) * 0.01
self.fc1.bias.data = torch.ones(self.fc1.bias.size()) * 0.01
self.fc2.bias.data = torch.ones(self.fc2.bias.size()) * 0.01
# +
# Set up the network in CPU/GPU memory
use_cuda = torch.cuda.is_available() # check whether this machine has a GPU
# Create the neural network
net = Net()
# Initialize the network weights; a custom initialization is used to increase their diversity
net.init()
# If a GPU is available, move the whole network into GPU memory
net = net.cuda() if use_cuda else net
# Use MSE as the loss function
criterion = nn.MSELoss().cuda() if use_cuda else nn.MSELoss()
# Define the optimizer with an initial learning rate of 1e-6
optimizer = torch.optim.Adam(net.parameters(), lr=1e-6 )
# Start a game process and begin talking to the game engine
game_state = GameState()
# D is the storage area for training samples; deque is a list-like container
D = deque()
# Location of the status log files
#a_file = open("logs_" + GAME + "/readout.txt", 'w')
#h_file = open("logs_" + GAME + "/hidden.txt", 'w')
# Reset the game to its initial state and obtain an 80*80 game frame
do_nothing = np.zeros(ACTIONS)
do_nothing[0] = 1
x_t, r_0, terminal = game_state.frame_step(do_nothing)
x_t = cv2.cvtColor(cv2.resize(x_t, (80, 80)), cv2.COLOR_BGR2GRAY)
ret, x_t = cv2.threshold(x_t,1,255,cv2.THRESH_BINARY)
# Stack four copies of the initial frame as the network's initial input state s_t
s_t = np.stack((x_t, x_t, x_t, x_t), axis=0)
# Set the initial epsilon (the probability of taking a random action) and prepare for training
epsilon = INITIAL_EPSILON
t = 0
# -
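The state fed to the network is a stack of four consecutive 80*80 frames; each step pushes the newest frame in front and drops the oldest. A small demonstration with dummy arrays:

```python
import numpy as np

# Initial state: the same first frame repeated four times, shape (4, 80, 80)
x_t = np.zeros((80, 80))
s_t = np.stack((x_t, x_t, x_t, x_t), axis=0)
print(s_t.shape)  # (4, 80, 80)

# Each step the new frame becomes channel 0 and the oldest frame is dropped
x_t1 = np.ones((1, 80, 80))                  # a dummy "new" frame
s_t1 = np.append(x_t1, s_t[:3, :, :], axis=0)
print(s_t1.shape)                            # still (4, 80, 80)
print(s_t1[0].max(), s_t1[1].max())          # newest frame sits in front
```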
# ### 2. The core learn-as-you-go algorithm
# The algorithm has three phases:
#
# 1. Pick an action according to the epsilon-greedy policy;
# 2. Feed the chosen action to the game engine, obtain the next frame's state, and build this frame's training sample;
# 3. Train the network:
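Phase 3 trains toward the standard Q-learning (Bellman) target: on terminal frames the target is the reward alone, otherwise the reward plus GAMMA times the best next-state Q value. A minimal recomputation with made-up numbers:

```python
import numpy as np

GAMMA = 0.99
r_batch = [1.0, -1.0, 0.1]                                      # rewards
readout_j1_batch = np.array([[0.2, 0.5], [0.0, 0.0], [0.3, 0.1]])  # next-state Q values
terminals = [False, True, False]

y_batch = []
for r, q_next, terminal in zip(r_batch, readout_j1_batch, terminals):
    if terminal:
        y_batch.append(r)                        # episode over: target is the reward itself
    else:
        y_batch.append(r + GAMMA * np.max(q_next))  # Bellman backup
print(y_batch)  # [1.495, -1.0, 0.397]
```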
# Containers for recording the average score per round
scores = []
all_turn_scores = []
while "flappy bird" != "angry bird":
# Game loop
######################################################
########## First, pick an action greedily ############
s = Variable(torch.from_numpy(s_t).type(torch.FloatTensor))
s = s.cuda() if use_cuda else s
s = s.view(-1, s.size()[0], s.size()[1], s.size()[2])
# Feed the current game frame into the neural network
readout, h_fc1 = net(s)
# The network output readout holds the expected Q value of choosing each action
readout = readout.cpu() if use_cuda else readout
# readout is a 2-vector, one expected Q value per action
readout_t = readout.data.numpy()[0]
# Generate the bird's action with the epsilon-greedy policy: with probability epsilon output
# a random action, otherwise output the action with the largest expected Q value
a_t = np.zeros([ACTIONS])
action_index = 0
if t % FRAME_PER_ACTION == 0:
# If an action may be taken on this frame
if random.random() <= epsilon:
# Take a random action
#print("----------Random Action----------")
action_index = random.randrange(ACTIONS)
else:
# Take the action the network predicts to have the largest Q value
action_index = np.argmax(readout_t)
a_t[action_index] = 1
else:
a_t[0] = 1 # do nothing
# Simulated annealing: start lowering epsilon
if epsilon > FINAL_EPSILON and t > OBSERVE:
epsilon -= (INITIAL_EPSILON - FINAL_EPSILON) / EXPLORE
#########################################################################
########## Next, feed the chosen action to the game engine ##############
x_t1_colored, r_t, terminal = game_state.frame_step(a_t)
# x_t1_colored is the game frame, r_t the score obtained this step, and terminal whether the game ended this step
# Record the score of each step
scores.append(r_t)
if terminal:
# When the game ends, compute this round's total score and store it in all_turn_scores
all_turn_scores.append(sum(scores))
scores = []
# Process the raw frame into a plain (background-free) 80*80 image
x_t1 = cv2.cvtColor(cv2.resize(x_t1_colored, (80, 80)), cv2.COLOR_BGR2GRAY)
ret, x_t1 = cv2.threshold(x_t1, 1, 255, cv2.THRESH_BINARY)
x_t1 = np.reshape(x_t1, (1, 80, 80))
# Merge the current frame with the previous three as the environment feedback the agent receives
s_t1 = np.append(x_t1, s_t[:3, :, :], axis=0)
# Build one training sample: store this frame's input s_t, action a_t, reward r_t, and the resulting new state s_t1 in D
D.append((s_t, a_t, r_t, s_t1, terminal))
if len(D) > REPLAY_MEMORY:
# If D is full, drop the oldest training sample
D.popleft()
#########################################################################
########## Finally, train the network once enough steps have passed #####
if t > OBSERVE:
# Randomly sample one batch of training data from D
minibatch = random.sample(D, BATCH)
optimizer.zero_grad()
# Unpack the s variables of this batch into separate lists
s_j_batch = [d[0] for d in minibatch]
a_batch = [d[1] for d in minibatch]
r_batch = [d[2] for d in minibatch]
s_j1_batch = [d[3] for d in minibatch]
# Next, the network estimates the future Q values from s_j1_batch
s = Variable(torch.FloatTensor(np.array(s_j1_batch, dtype=float)))
s = s.cuda() if use_cuda else s
readout, h_fc1 = net(s)
readout = readout.cpu() if use_cuda else readout
readout_j1_batch = readout.data.numpy()
# readout_j1_batch holds all the one-step-ahead Q estimates for the minibatch
# Update the training targets from the Q estimates, the current reward r, and whether the game ended
y_batch = []
for i in range(0, len(minibatch)):
terminal = minibatch[i][4]
# On terminal frames the target is the environment's reward alone; otherwise it is the next state's Q value plus this step's reward
if terminal:
y_batch.append(r_batch[i])
else:
y_batch.append(r_batch[i] + GAMMA * np.max(readout_j1_batch[i]))
# Start the gradient update
y = Variable(torch.FloatTensor(y_batch))
a = Variable(torch.FloatTensor(a_batch))
s = Variable(torch.FloatTensor(np.array(s_j_batch, dtype=float)))
if use_cuda:
y = y.cuda()
a = a.cuda()
s = s.cuda()
# Compute the Q values of s_j_batch
readout, h_fc1 = net(s)
readout_action = readout.mul(a).sum(1)
# Train the network on the loss between the chosen-action Q estimate for s_j_batch and the target y
loss = criterion(readout_action, y)
loss.backward()
optimizer.step()
if t % 1000 == 0:
print('Loss:', loss)
# Update the state; advance the time step by 1
s_t = s_t1
t += 1
# Save the network every 10000 iterations
if t % 10000 == 0:
torch.save(net, 'saving_nets/' + GAME + '-dqn' + str(t) + '.txt')
# Phase bookkeeping: the run goes through the observe, explore, and train phases
# observe: no training yet; explore: training with annealing; train: annealing finished
state = ""
if t <= OBSERVE:
state = "observe"
elif t > OBSERVE and t <= OBSERVE + EXPLORE:
state = "explore"
else:
state = "train"
# Print some basic run statistics, both to the screen and to the log file
if t % 1000 == 0:
sss = "时间步 {}/ 状态 {}/ Epsilon {:.2f}/ 行动 {}/ 奖励 {}/ Q_MAX {:e}/ 轮得分 {:.2f}".format(
t, state, epsilon, action_index, r_t, np.max(readout_t), np.mean(all_turn_scores[-1000:]))
print(sss)
f = open('log_file.txt', 'a')
f.write(sss + '\n')
f.close()
# write info to files
f = open('final_log_file.txt', 'r')
line = f.read().strip().split('\n')
values = []
for ln in line:
segs = ln.split('/')
values.append(float(segs[-1].split(' ')[-1]))
plt.figure()
plt.plot(np.arange(len(values))*1000, values)
plt.xlabel('Frames')
plt.ylabel('Average Score')
plt.show()
#net = torch.load('saving_nets/' + GAME + '-dqn' + str(2876000) + '.txt')
net = torch.load('final_model.mdl')
FINAL_EPSILON = 0.0001 # final value of epsilon
BATCH = 32 # number of records per batch
FRAME_PER_ACTION = 1 # number of time steps between valid action outputs
# +
# Set up the network in CPU/GPU memory
use_cuda = torch.cuda.is_available() # check whether this machine has a GPU
# If a GPU is available, move the whole network into GPU memory
net = net.cuda() if use_cuda else net
# Start a game process and begin talking to the game engine
game_state = GameState()
# Location of the status log files
#a_file = open("logs_" + GAME + "/readout.txt", 'w')
#h_file = open("logs_" + GAME + "/hidden.txt", 'w')
# Reset the game to its initial state and obtain an 80*80 game frame
do_nothing = np.zeros(ACTIONS)
do_nothing[0] = 1
x_t, r_0, terminal = game_state.frame_step(do_nothing)
x_t = cv2.cvtColor(cv2.resize(x_t, (80, 80)), cv2.COLOR_BGR2GRAY)
ret, x_t = cv2.threshold(x_t,1,255,cv2.THRESH_BINARY)
# Stack four copies of the initial frame as the network's initial input state s_t
s_t = np.stack((x_t, x_t, x_t, x_t), axis=0)
# Set the initial epsilon (the probability of taking a random action) and prepare to run
epsilon = FINAL_EPSILON
t = 0 # containers for recording the average score per round
scores = []
all_turn_scores = []
fig = plt.figure()
axe = fig.add_subplot(111)
dat = np.zeros((10, 10))
img = axe.imshow(dat)
while "flappy bird" != "angry bird":
# Game loop
######################################################
########## First, pick an action greedily ############
s = Variable(torch.from_numpy(s_t).type(torch.FloatTensor))
s = s.cuda() if use_cuda else s
s = s.view(-1, s.size()[0], s.size()[1], s.size()[2])
# Feed the current game frame into the neural network
readout, h_fc1 = net(s)
# The network output readout holds the expected Q value of choosing each action
readout = readout.cpu() if use_cuda else readout
# readout is a 2-vector, one expected Q value per action
readout_t = readout.data.numpy()[0]
# Generate the bird's action with the epsilon-greedy policy: with probability epsilon output
# a random action, otherwise output the action with the largest expected Q value
a_t = np.zeros([ACTIONS])
action_index = 0
if t % FRAME_PER_ACTION == 0:
# If an action may be taken on this frame
if random.random() <= epsilon:
# Take a random action
#print("----------Random Action----------")
action_index = random.randrange(ACTIONS)
else:
# Take the action the network predicts to have the largest Q value
action_index = np.argmax(readout_t)
a_t[action_index] = 1
else:
a_t[0] = 1 # do nothing
#########################################################################
########## Next, feed the chosen action to the game engine ##############
x_t1_colored, r_t, terminal = game_state.frame_step(a_t)
# x_t1_colored is the game frame, r_t the score obtained this step, and terminal whether the game ended this step
# Record the score of each step
scores.append(r_t)
if terminal:
# When the game ends, compute this round's total score and store it in all_turn_scores
all_turn_scores.append(sum(scores))
scores = []
# Process the raw frame into a plain (background-free) 80*80 image
x_t1 = cv2.cvtColor(cv2.resize(x_t1_colored, (80, 80)), cv2.COLOR_BGR2GRAY)
ret, x_t1 = cv2.threshold(x_t1, 1, 255, cv2.THRESH_BINARY)
x_t1 = np.reshape(x_t1, (1, 80, 80))
# Merge the current frame with the previous three as the environment feedback the agent receives
s_t1 = np.append(x_t1, s_t[:3, :, :], axis=0)
s_t = s_t1
t += 1
clear_output(wait = True)
image = np.transpose(x_t1_colored, (1, 0, 2))
img.set_data(image)
img.autoscale()
display(fig)
# -
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/MilanCugur/Genetic_Evolution_For_CNN/blob/master/src/gea_cnn.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="FfJ-fn_XLnLU" colab_type="text"
# # Data Loading
# + id="8DoSLvx2LdSl" colab_type="code" outputId="d34e3ebe-8036-4db5-9317-adfb57965d42" colab={"base_uri": "https://localhost:8080/", "height": 122}
from google.colab import drive
drive.mount('/content/drive')
# + id="cfoTzVEoMfV6" colab_type="code" outputId="b9645f02-4c9c-4f92-d13a-e523b716bd5a" colab={"base_uri": "https://localhost:8080/", "height": 80}
import numpy as np
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPool2D, Add, Activation, Flatten, Concatenate, Dense, Dropout, BatchNormalization
from keras.optimizers import Adam
from keras.losses import categorical_crossentropy
from keras.layers import LeakyReLU, concatenate
from keras.layers.advanced_activations import ReLU
from keras.initializers import glorot_normal
import keras.backend as K
from keras.models import load_model # load saved model parameters
def extract_dataset(path):
"""
extract DoubledMNIST dataset
Argument: path to .zip file with the dataset
Return value: x_train, y_train, x_test, y_test lists of numpy arrays
(DoubledMNIST dataset: train size 120k images 56x56, test size 20k images 56x56)
"""
# import libraries
import os # for basic os operations
from zipfile import ZipFile
from skimage import io
import shutil
if not path.endswith('.zip'):
raise ValueError("Error: path is not '.zip' file")
archive = ZipFile(path, 'r') # extract
archive.extractall('./DoubledMNIST')
archive.close()
del archive
x_train = []
y_train = []
x_test = []
y_test = []
for file in os.listdir('./DoubledMNIST/train'):
img = io.imread(os.path.join('./DoubledMNIST/train', file))
x_train.append(np.array(img))
y_train.append(int(file.split('_')[1]))
for file in os.listdir('./DoubledMNIST/test'):
img = io.imread(os.path.join('./DoubledMNIST/test', file))
x_test.append(np.array(img))
y_test.append(int(file.split('_')[1]))
shutil.rmtree('./DoubledMNIST')
return np.array(x_train), np.array(y_train), np.array(x_test), np.array(y_test)
# + id="2bKQ505PMxBn" colab_type="code" colab={}
def load_mnist(doubled=0, ntrain=None, ntest=None):
"""
doubled==0 -> load MNIST; doubled==1-> load DoubledMNIST
ntrain - number of train samples
ntest - number of test samples
"""
from keras.utils import to_categorical
import numpy as np
if doubled==0:
# load mnist
from keras.datasets import mnist
(_x_train, _y_train), (_x_test, _y_test) = mnist.load_data()
if ntrain==None:
ntrain = _x_train.shape[0]
if ntest==None:
ntest = _x_test.shape[0]
assert ntrain<=_x_train.shape[0] and ntest<=_x_test.shape[0]
else:
# load doubled mnist
_x_train, _y_train, _x_test, _y_test = extract_dataset('./drive/My Drive/ni_sem/DoubledMNIST.zip')
# Prepare images
box_size = _x_train.shape[1]
y_train = to_categorical(_y_train)[:ntrain]
y_test = to_categorical(_y_test)[:ntest]
x_train = np.array(_x_train).astype('float32')[:ntrain]
x_train /= 255
x_train = np.reshape(x_train,[-1, box_size, box_size, 1])
x_test = np.array(_x_test).astype('float32')[:ntest]
x_test /= 255
x_test = np.reshape(x_test, [-1, box_size, box_size, 1])
return x_train, y_train, x_test, y_test, box_size
# + id="P5K7U7oOg4_4" colab_type="code" outputId="91c73b29-f908-430b-c4da-19d868b881b0" colab={"base_uri": "https://localhost:8080/", "height": 85}
# %%time
x_train, y_train, x_test, y_test, box_size = load_mnist(doubled=0) #, ntrain=10000, ntest=1000)
# + id="V8PNO4_WwR-E" colab_type="code" outputId="a776b62a-c286-4072-af57-1e0bfe651911" colab={"base_uri": "https://localhost:8080/", "height": 34}
x_train.shape, y_train.shape, x_test.shape, y_test.shape
# + id="DA_twWHcP-71" colab_type="code" outputId="79d188a4-ae55-46fa-e6b6-cea85bcd5a2b" colab={"base_uri": "https://localhost:8080/", "height": 282}
from matplotlib import pyplot as plt # small demonstration
plt.imshow(x_test[19].reshape((x_test.shape[1], x_test.shape[2])))
plt.show()
print(y_test[19])
# + [markdown] id="8uungJ3ZiczL" colab_type="text"
# # CNN tools
# + id="1HDESDezifHo" colab_type="code" colab={}
STAGES = np.array(["s1","s2","s3"]) # S
NUM_NODES = np.array([3,4,5]) # K
FILTERS = np.array([32, 48, 64])
sampleIndividual = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]
# sampleIndividual = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]; works; a classic CNN
# stage1 examples
# sampleIndividual = [1, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]; works; node three eliminated
# sampleIndividual = [0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]; works; node two eliminated
# sampleIndividual = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]; works; node one eliminated
# sampleIndividual = [0, 0, 0, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]; works; only one convolution
# stage2 examples
# sampleIndividual = [1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]; works; 0->3->4->5
# sampleIndividual = [1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]; works; 0->1->3->5
# sampleIndividual = [1, 0, 1, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1]; works; 0->1->2,3,4->5
# + id="Mbp0WjNmifCP" colab_type="code" colab={}
def __create_indices(num_nodes):
"""
num_nodes - number of nodes per each stage
Calculate bits indices (startindex, length) for each stage
"""
l = 0 # genome length
bits_indices, i = np.empty((0,2),dtype = np.int32), 0
for Ks in num_nodes:
length = Ks * (Ks - 1)
bits_indices = np.vstack([bits_indices,[i, i + int(0.5 * length)]])
i += int(0.5 * length)
l += length
l = int(0.5 * l)
return bits_indices, l
def CNN_build(stages, num_nodes, n_filters, individual, box_size, n_classes, verbose=0):
"""
stages - array of stage names
num_nodes - number of conv nodes per each stage
n_filters - number of filters per stage
individual - binary list representing individual architecture
box_size - expect input images like (box_size, box_size)
n_classes - number of output classes
Build CNN architecture from the given list
"""
L = len(individual)
bits_indices, _L= __create_indices(num_nodes)
assert(L==_L) # small check of the input individual connections info
if(verbose):
print('Starting network building..')
image_shape = (box_size, box_size, 1)
x_input = Input(shape=image_shape)
previous = None # output from previous stage (initially input of CNN)
# Build stage by stage
for i, (s, Ks, n_filter) in enumerate(zip(stages, num_nodes, n_filters)):
if i==0:
previous = x_input
if(verbose):
print('\nBuild layer', s, ':', Ks, 'nodes,', n_filter, 'filters.')
stage_indices = individual[bits_indices[i][0]:bits_indices[i][1]] # connection indices for current stage nodes; ex. [1, 0, 1, 0, 0, 1, 0, 0, 0, 1]
stage_indexes = np.split(range(int(Ks*(Ks-1)/2)),np.cumsum(range(Ks - 1)))[1:] # connection indexes for current stage nodes; ex. [array([0]), array([1, 2]), array([3, 4, 5]), array([6, 7, 8, 9])]
stage_nodes = [] # nodes in a stage; ex. [vs1_1, vs1_2, vs1_3] (0, 4 are dummy)
to_him = list(np.zeros(Ks)) # number of nodes to which i-th node points to
from_him = list(np.zeros(Ks))
if(verbose): # number of nodes from i-th node to others
print('Stage indices:', stage_indices)
print('Stage indexes:', stage_indexes)
# default stage input node
if(verbose):
print('Building '+'v'+str(s)+'_0')
vs0 = Conv2D(filters=n_filter, kernel_size=(3,3), strides=(1,1), activation='relu', padding='same', name='v'+str(s)+'_0')(previous) # TODO
if(verbose):
print('Built '+'v'+str(s)+'_0')
# first node and trivial vs0->vs1
if(verbose):
print('Building '+'v'+str(s)+'_1')
vs1 = Conv2D(filters=n_filter, kernel_size=(3,3), strides=(1,1), activation='relu', padding='same', name='v'+str(s)+'_1')(vs0)
stage_nodes += [vs1]
if(verbose):
print('Built '+'v'+str(s)+'_1')
for j in range(2, Ks+1):
name = 'v'+str(s)+'_'+str(j) # name of the current node
if(verbose):
print('Building '+name)
tonode = stage_indices[stage_indexes[j-2][0]:stage_indexes[j-2][-1]+1] # slice from stage_indices
input = None # Input to current node
if sum(tonode)==0: # empty input, connect to vs0
input = vs0
else: # have some input
for k, connection in enumerate(tonode):
if connection==1:
from_him[k] += 1
to_him[j-1] += 1
if input is None:
input = stage_nodes[k]
else:
input = Add()([input, stage_nodes[k]])
v = Conv2D(filters=n_filter, kernel_size=(3,3), strides=(1,1), activation='relu', padding='same', name='v'+str(s)+'_'+str(j))(input)
stage_nodes += [v]
if(verbose):
print('Built node '+name)
if(verbose):
print('from_him: ', from_him)
print('to_him: ', to_him)
print('stage_nodes: ', stage_nodes)
if sum(from_him)==sum(to_him)==0: # only one convolution vs0
previous = MaxPool2D(pool_size=(2,2), padding='same')(vs0)
else: # have some of the ordinary nodes
if(verbose):
print('Building '+'v'+str(s)+'_'+str(Ks+1))
input = None # the last node definitely has no outgoing edges
for k in range(len(stage_nodes)):
if from_him[k]==0 and to_him[k]!=0: # no connections from that node
if(verbose):
print('Connect to last node: node', k, ' ', stage_nodes[k])
if input is None:
input = stage_nodes[k]
else:
input = Add()([input, stage_nodes[k]])
vsKs = Conv2D(filters=n_filter, kernel_size=(3,3), strides=(1,1), activation='relu', padding='same', name='v'+str(s)+'_'+str(Ks+1))(input) # default stage output node
if(verbose):
print('Built '+'v'+str(s)+'_'+str(Ks+1))
previous = MaxPool2D(pool_size=(2,2), padding='same')(vsKs)
# Adding FC part of NN
x = Flatten(name='flatten')(previous)
x = Dense(units=32, activation='relu', name='next_to_last')(x)
x = Dense(units=n_classes, activation='softmax', name='last')(x)
# Create the Model
model = Model(inputs=x_input, outputs=x, name='individual')
if(verbose):
print('Network built.')
return model
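As a sanity check on the encoding, a stage with K nodes needs K(K-1)/2 connection bits, so `NUM_NODES = [3, 4, 5]` yields 3 + 6 + 10 = 19 bits — exactly the length of `sampleIndividual` above. A standalone recomputation of what `__create_indices` produces (plain tuples here instead of its numpy matrix):

```python
# Recompute per-stage bit ranges and total gene length for NUM_NODES = [3, 4, 5]
num_nodes = [3, 4, 5]
indices, start, total = [], 0, 0
for K in num_nodes:
    bits = K * (K - 1) // 2       # one bit per (earlier node -> later node) pair
    indices.append((start, start + bits))
    start += bits
    total += bits
print(indices)  # [(0, 3), (3, 9), (9, 19)]
print(total)    # 19
```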
# + id="X3hhN7e_SZwZ" colab_type="code" colab={}
def compile_model(model):
"""
model - created Keras model
Compile forwarded model, and return it compiled
"""
model.compile(optimizer=Adam(lr=1e-3), loss=categorical_crossentropy, metrics=['accuracy'])
return model
# + id="8_TzENOzS2vu" colab_type="code" colab={}
def visualize_model(model):
"""
model - created Keras model
plot forwarded model architecture
"""
from keras.utils import plot_model
print('Model summary: ')
model.summary()
plot_model(model, to_file='model.png')
return
# + id="FAH-z7QeUFqt" colab_type="code" colab={}
def train_model(model, x_train, y_train, x_test, y_test, epochs, batch_size, verbose=0, validation_split=0.0, callbacks=[]):
"""
model - compiled CNN model
x_train - input images
y_train - input labels (one hot encoded)
x_test - test images
y_test - test labels (one hot encoded)
epochs - number of epochs
batch_size - mini batch size of training
verbose - verbose of training
validation_split - data split used for validation
Train the forwarded model. Returns (train history, model's test accuracy)
"""
if (epochs == 0):
# for faster testing
# print('only eval, without training')
return None, model.evaluate(x_test, y_test)
# print('training and eval')
history = model.fit(x_train, y_train, epochs=epochs, batch_size=batch_size, verbose=verbose, validation_split=validation_split, callbacks=callbacks)
return history, model.evaluate(x_test, y_test)
# + id="Vv7hRnZHWK9J" colab_type="code" colab={}
#model = CNN_build(STAGES, NUM_NODES, FILTERS, sampleIndividual, box_size, 10, 0)
#model = compile_model(model)
#visualize_model(model)
#history, result = train_model(model, x_train, y_train, x_test, y_test, 1, 1024, 1)
#result
# + id="qKOiF1-tx74y" colab_type="code" colab={}
def toPseudo(model):
"""
model - input CNN model
return - structure that describe model weights for later from that model loading
"""
return [(layer.get_config()['name'], layer.get_weights()) for layer in model.layers]
# + id="OlLUG0GedB1l" colab_type="code" colab={}
def loadWeights(toModel, fromPseudoModel, numSameStages, numNodesPerStage):
'''
toModel: keras model for which to load weights
fromPseudoModel: list of (layer name, layer weights) from which to load weights
numSameStages: number of first same stages; can be 0, 1, 2, 3; trivial cases 0 and 3
numNodesPerStage: number of nodes per stage; ex. [3,4,5]
You need to call model.compile. This can be done either before or after the model.load_weights
call but must be after the model architecture is specified and before the model.predict call.
returns the model with loaded weights from file
IMPORTANT: toModel and fromModel MUST HAVE exactly the same architecture on the first numSameStages stages! (equivalently, the same indices)
TODO: add critical pool if want more pooling operations in architecture
'''
assert numSameStages<=len(numNodesPerStage)
allflag = (numSameStages==len(numNodesPerStage)) # to load all weights
for i, (name, weights) in enumerate(fromPseudoModel):
#print(name, weights)
if numSameStages==0:
if not allflag:
break
toModel.layers[i].set_weights(weights)
if 'max_pooling' in name:
numSameStages-=1
# + [markdown] id="_kdgjBhwRq2c" colab_type="text"
# # Genetic Algorithm
# + id="iEWPhebDtHe5" colab_type="code" colab={}
import numpy as np
from random import random, seed
# + id="zFU56u2otadr" colab_type="code" outputId="14eeb3dc-5872-4825-db80-340e819c4638" colab={"base_uri": "https://localhost:8080/", "height": 51}
np.random.seed(42) # reproducible
class Genetic:
def __init__(self, pc, qc, pm, qm, numGen, numInd, geneLength, bitIndices):
'''
pc: probability of crossover - whether crossover process begins
qc: probability of stages being exchanged - while in crossover process
pm: probability of mutation - whether mutation process begins
qm: probability of a per bit mutation - while in mutation process
numGen: number of generations
numInd: number of individuals
bitIndices: 2d matrix where each row has two columns - first is the index, and second is the length of bits in gene that code each segment
'''
self.pc = pc
self.qc = qc
self.pm = pm
self.qm = qm
self.numGen = numGen
self.currNumGen = 0
self.numInd = numInd
self.geneLength = geneLength
self.bitIndices = bitIndices
self.oldGen = None
self.initFirstGeneration()
def initFirstGeneration(self):
'''
initializes the first generation
'''
self.currNumGen = 1
self.currGen = np.random.randint(0, 2, (self.numInd, self.geneLength))
def getCurrentGeneration(self):
return self.currGen
def selection(self, fitness):
'''
returns indices of individuals that survived the selection
'''
npfit = np.array(fitness)
proba = npfit - np.min(npfit) # removes the worst one
proba = proba / np.sum(proba)
return np.random.choice(self.numInd, replace=True, size=self.numInd, p=proba)
def mutate(self, newGen, indices):
'''
mutates individuals in newGen on positions where indices are 0 (because those individuals didn't mate)
'''
for i, had in enumerate(indices):
if had == 0 and np.random.random() <= self.pm:
newGen[i] = self.mutateIndividual(self.currGen[i])
else:
newGen[i] = np.copy(self.currGen[i])
def mutateIndividual(self, individual):
'''
returns a new individual by mutating the given one
'''
mut = np.copy(individual)
for i, val in enumerate(mut):
if np.random.random() <= self.qm:
mut[i] = 1 - mut[i]
return mut
def crossover(self, individualA, individualB):
'''
returns two new individuals by performing crossover on two given individuals.
it takes care to only swap the whole segments, and not bits within segments
'''
a = np.copy(individualA)
b = np.copy(individualB)
for segment in self.bitIndices:
if np.random.random() <= self.qc:
start = segment[0]
end = segment[1]
tmpa = np.copy(a[start:end])
a[start:end] = b[start:end]
b[start:end] = tmpa
return a, b
def newGeneration(self, fitness, verbose=False):
'''
creates a new generation of individuals by selection, crossover, and mutation
of the previous generation. Selection is based on the roulette-wheel method
fitness - np array of fitness metrics for all individuals, from which the roulette wheel is constructed
'''
self.currNumGen += 1
if self.currNumGen > self.numGen:
raise Exception(f"currNumGen > numGen, {self.currNumGen} > {self.numGen}")
newGenIdx = self.selection(fitness)
if verbose:
print(f'survived selection: {newGenIdx}')
newGen = np.zeros((self.numInd, self.geneLength), dtype='int32') # np matrix of new generation
hadCrossoverIdx = np.zeros(self.numInd) # tracks if an individual had a crossover
assert(len(newGen)%2 == 0)
# for each pair of neighbours, try crossover
for i in range(0, len(newGen), 2):
if np.random.random() <= self.pc:
newGen[i], newGen[i+1] = self.crossover(self.currGen[newGenIdx[i]], self.currGen[newGenIdx[i+1]])
hadCrossoverIdx[i] = 1
hadCrossoverIdx[i+1] = 1
self.mutate(newGen, hadCrossoverIdx)
self.oldGen = self.currGen
self.currGen = newGen
def findIndividualsWithSameRoots(self, verbose=False):
'''
for each individual in a new generation finds the indices of individuals in the old generation
which had the same first n segments
returns a list, where the i-th element is a tuple (listOfParentsWithSameSegment, numberOfSameSegments)
'''
parentsAndNumSegments = []
for indiv in self.currGen:
parents, numSameSegments = self.hasSameRoots(indiv)
parentsAndNumSegments.append((parents, numSameSegments))
if numSameSegments > 0 and verbose:
print('individual:',indiv)
print(f'has the same {numSameSegments} first segments as:')
print(parents)
print(f'e.g: {self.oldGen[parents[0]]}')
return parentsAndNumSegments
def hasSameRoots(self, individual):
'''
returns indices of individuals from the last generation which share the largest common root with the
given individual, and returns the number of segments which are the same (starting from the first)
'''
for i, segment in reversed(list(enumerate(self.bitIndices))):
nColumns = segment[1]
# print('bools',(self.oldGen[:,:nColumns] == individual[:nColumns]))
# print('oldgen:',self.oldGen)
# print('ind:', individual)
# find rows which have the individual (only look at the part of the colums)
matchedRows = (self.oldGen[:,:nColumns] == individual[:nColumns]).all(axis=1)
sameRootIndividuals = np.where(matchedRows)[0]
if sameRootIndividuals.size > 0:
return sameRootIndividuals, i+1
return np.empty(0), 0
STAGES = np.array(["s1","s2","s3"]) # S
NUM_NODES = np.array([3,4,5]) # K
BITS_INDICES, geneLength = __create_indices(NUM_NODES)
gen = Genetic(0.2, 0.3, 0.8, 0.1, 10, 10, geneLength, BITS_INDICES)
print('mean1', np.mean(gen.getCurrentGeneration()))
gen.newGeneration(np.random.random(10))
print('mean2', np.mean(gen.getCurrentGeneration()))
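`Genetic.selection` implements roulette-wheel selection with the minimum fitness subtracted first, so the worst individual gets probability 0 and the rest a share proportional to their margin over the worst. A standalone illustration of the probability computation (the fitness numbers are made up):

```python
import numpy as np

fitness = np.array([0.9, 0.5, 0.7, 0.5])
proba = fitness - fitness.min()   # the worst individual(s) drop to probability 0
proba = proba / proba.sum()       # normalize into a distribution
print(proba)                      # [2/3, 0, 1/3, 0]

np.random.seed(0)
survivors = np.random.choice(len(fitness), size=len(fitness), replace=True, p=proba)
print(set(survivors) <= {0, 2})   # only non-worst individuals can be drawn
```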
# + id="R5onkJV5pZ4e" colab_type="code" colab={}
import types
STAGES = np.array(["s1","s2","s3"]) # S
NUM_NODES = np.array([3,4,5]) # K
BIT_INDICES, L = __create_indices(NUM_NODES)
# params is used as a function parameter throughout the core algorithm
params = types.SimpleNamespace()
params.pc = 0.2 # pc: probability of crossover - whether crossover process begins
params.pm = 0.8 # pm: probability of mutation - whether mutation process begins
params.qc = 0.3 # qc: probability of stages being exchanged - while in crossover process
params.qm = 0.1 # qm: probability of a per bit mutation - while in mutation process
params.geneLength = L # number of bits needed to encode the gene
params.numGenerations = 10 # 10 # number of generations
params.numIndividuals = 10 # 10 # number of individuals
params.bitIndices = BITS_INDICES # 2d matrix where each row has two columns - first is the index, and second is the length of bits in gene that code each segment
params.boxSize = box_size # width and height of the input
params.numClasses = 10 # number of output classes
params.stageNames = STAGES # list containing names of stages
params.numFilters = FILTERS # list containing number of filters per stage
params.numNodes = NUM_NODES # number of nodes within each stage
params.xTrain = x_train # training set data
params.yTrain = y_train # training set labels
params.xTest = x_test # test set data
params.yTest = y_test # test set labels
params.epochs = 10 # default number of epochs to train in the first generation
params.batchSize = 256
params.verbose = True
params.numInheritedStagesToEpochs = { # maps the number of inherited stages to
    # the number of epochs needed to finish training the individual
    # (all 3 stages inherited -> 0 epochs, i.e. training is skipped)
0: params.epochs,
1: params.epochs - 2,
2: params.epochs - 4,
3: 0
}
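The mapping above acts as a training-budget schedule: the more stages an individual inherits from a parent, the fewer epochs it needs. A minimal sketch of the lookup logic that `inheritWeightsFromParents` applies below (the standalone function here is illustrative, not part of the notebook's API):

```python
# Illustrative epoch-budget lookup; `epochs` mirrors params.epochs above.
epochs = 10
num_inherited_stages_to_epochs = {0: epochs, 1: epochs - 2, 2: epochs - 4, 3: 0}

def epochs_to_train(num_inherited_stages, is_modification):
    # Without the modification, every individual trains from scratch.
    if not is_modification:
        return epochs
    return num_inherited_stages_to_epochs[num_inherited_stages]

print(epochs_to_train(2, True))   # 6: two stages inherited, shorter training
print(epochs_to_train(3, True))   # 0: identical to a parent, training skipped
print(epochs_to_train(2, False))  # 10: the baseline ignores inheritance
```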
params.isModification = False
assert(params.numIndividuals%2 == 0)
# + id="Wx7G8lmn0Tso" colab_type="code" colab={}
def inheritWeightsFromParents(model, params, parentSegmentTuple, lastGenWeights):
    '''
    Returns (model, howManyEpochsToTrain, parentIndex).
    The returned model inherits weights from the last generation when possible
    (and when params.isModification is True).
    '''
parents = parentSegmentTuple[0]
numSegments = parentSegmentTuple[1]
epochsToTrain = params.numInheritedStagesToEpochs[numSegments] if params.isModification else params.epochs
trainForEpochs = epochsToTrain
parentIndex = None
if params.isModification and numSegments > 0:
parentIndex = parents[0] # TODO this is a list, might take the parent with the best fitness
loadWeights(model, lastGenWeights[parentIndex], numSegments, params.numNodes)
if model is None:
        print('\t\t\tMODEL IS NONE!')
return model, trainForEpochs, parentIndex
def createAndEvaluateModel(params, individual, parentSegmentTuple, oldNetworkWeights, lastGenFitness, verbose):
    '''
    Builds, optionally warm-starts, trains and evaluates the model for one individual;
    returns its pseudo-weights and fitness. Clears the Keras session afterwards
    to avoid slowdown after training many instances.
    '''
# build model
model = CNN_build(params.stageNames, params.numNodes, params.numFilters, individual, params.boxSize, params.numClasses, verbose=0)
model = compile_model(model)
# inherit weights
if oldNetworkWeights is None:
assert(lastGenFitness is None)
assert(parentSegmentTuple is None)
trainForEpochs = params.epochs
else:
assert(lastGenFitness is not None)
assert(parentSegmentTuple is not None)
model, trainForEpochs, parentIndex = inheritWeightsFromParents(model, params, parentSegmentTuple, oldNetworkWeights)
# train or copy from last gen
if trainForEpochs == 0:
assert(lastGenFitness is not None)
assert(oldNetworkWeights is not None)
        print('\t\t\tSkipping training because model is the same as last gen')
fitness = lastGenFitness[parentIndex]
pseudoWeights = oldNetworkWeights[parentIndex]
else:
history, lossAndAcc = train_model(model, params.xTrain, params.yTrain, params.xTest,
params.yTest, trainForEpochs,
params.batchSize, verbose=params.verbose, validation_split=0.0)
fitness = lossAndAcc[1]
pseudoWeights = toPseudo(model)
K.clear_session()
return pseudoWeights, fitness
def executeSelectionWithGeneticAlgorithm(params):
'''
args: params object defined above
returns individuals in the last generation, index of the best individual, and their fitnesses, and np matrix of all fitnesses
'''
genetic = Genetic(params.pc, params.qc, params.pm, params.qm, params.numGenerations, params.numIndividuals, params.geneLength, params.bitIndices)
oldNetworksWeights = None
allFitnesses = np.zeros((params.numGenerations, params.numIndividuals))
for i in range(params.numGenerations):
nthGen = i+1
print(f'\t\t\tStarting generation {nthGen}...')
print(f'Creating models from individuals...')
individuals = genetic.getCurrentGeneration()
# print("current generation:", individuals)
newNetworksWeights = []
if i > 0:
print(f'findIndividualsWithSameRoots...')
parentSegmentTuples = genetic.findIndividualsWithSameRoots()
lastGenFitness = allFitnesses[i-1]
else:
parentSegmentTuples = None
lastGenFitness = None
currGenFitness = []
for j, individual in enumerate(individuals):
print(f"Creating and evaluating indiv #{j}")
parentSegmentTuple = None if parentSegmentTuples is None else parentSegmentTuples[j]
newNetWeight, fitness = createAndEvaluateModel(params, individual, parentSegmentTuple,
oldNetworksWeights, lastGenFitness, params.verbose)
newNetworksWeights.append(newNetWeight)
currGenFitness.append(fitness)
currGenFitness = np.array(currGenFitness)
allFitnesses[i] = currGenFitness
print(f'this gen fitnesses: {currGenFitness}')
if i < params.numGenerations - 1:
genetic.newGeneration(fitness=currGenFitness)
oldNetworksWeights = newNetworksWeights
bestIdx = np.argmax(currGenFitness)
print(f'The best individual {individuals[bestIdx]} had fitness (accuracy): {currGenFitness[bestIdx]}')
return individuals, bestIdx, currGenFitness, allFitnesses
# + id="YhAAN4N9PnlE" colab_type="code" colab={}
def plotEvolutionProgress(allFit, takeBestN):
topn = np.zeros((params.numGenerations, takeBestN))
for i, row in enumerate(allFit):
row.sort()
topn[i] = row[-takeBestN:]
for i, col in reversed(list(enumerate(topn.T))):
plt.plot(range(1, params.numGenerations+1), col, label=f'#{takeBestN - i}')
plt.legend()
plt.show()
# plotEvolutionProgress(np.random.randn(params.numGenerations, params.numIndividuals), takeBestN = 2)
#plotEvolutionProgress(allFitnesses, takeBestN = 1)
#allFitnesses.shape
# + [markdown] id="QdSVY0xOnvKS" colab_type="text"
# ## Baseline: 4 gen x 4 individuals
# + id="THqXfftoHnfM" colab_type="code" outputId="8de7de3c-f1f4-422c-a820-2f9add5cee70" colab={"base_uri": "https://localhost:8080/"}
# %%time
np.random.seed(43) # reproducible
params.isModification = False
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + id="RIPoJK_R0H_O" colab_type="code" outputId="a03a809e-6c8a-41ab-b8d7-ac681805d585" colab={"base_uri": "https://localhost:8080/"}
plotEvolutionProgress(allFitnesses, takeBestN = 4)
allFitnesses.shape
# + [markdown] id="bs1lDrRlnywQ" colab_type="text"
# ## Modification: 4 gen x 4 individuals
# + id="LVHdNEUGn1HP" colab_type="code" outputId="c7e00a99-759b-4d76-b81f-af6f7c9113c0" colab={"base_uri": "https://localhost:8080/"}
# %%time
np.random.seed(43) # reproducible
params.isModification = True
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + id="XNd7V9ajqPkw" colab_type="code" outputId="66a235cb-bfb6-4a0a-8787-ab8cb81eb4e9" colab={"base_uri": "https://localhost:8080/"}
plotEvolutionProgress(allFitnesses, takeBestN = 4)
allFitnesses.shape
# + [markdown] id="TfbLSwCp032T" colab_type="text"
# ## Modification: 20 gen x 2 individuals
# + id="gLTxO6ll09ah" colab_type="code" outputId="b60a1a80-5ba8-473a-d18c-834b0fd387f5" colab={"base_uri": "https://localhost:8080/"}
# %%time
np.random.seed(43) # reproducible
params.isModification = True
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + id="3B3dQCPt9gQN" colab_type="code" outputId="16aa4e74-1304-4aad-aa31-d643e819c4a4" colab={"base_uri": "https://localhost:8080/"}
plotEvolutionProgress(allFitnesses, takeBestN = 2)
allFitnesses.shape
# + [markdown] id="2FqoY7lN9hEN" colab_type="text"
# ## Modification: 2 gen x 20 individuals
# + id="hf1-r-G7AFOs" colab_type="code" outputId="56f178fb-9ff9-4ce5-b883-f1ee6f93a0dc" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# %%time
np.random.seed(43) # reproducible
params.isModification = True
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + id="ZsYjksWrAGES" colab_type="code" outputId="01190080-2414-4a75-8ff1-a4a8629db11a" colab={"base_uri": "https://localhost:8080/"}
plotEvolutionProgress(allFitnesses, takeBestN = 2)
allFitnesses.shape
# + [markdown] id="z4qWi23vA1Aj" colab_type="text"
# ## Modification: 20 gen x 20 individuals, (MNIST/6)
# + id="fOWUaOvEM9QT" colab_type="code" outputId="1e82538a-8647-4139-ad49-baf72e4ad6ea" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# %%time
np.random.seed(43) # reproducible
params.isModification = True
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + [markdown] id="aHqh5JrxgTDq" colab_type="text"
# ## Baseline 20 gen x 20 individuals
# + id="OOPhKQ-JgR3Q" colab_type="code" colab={}
# %%time
np.random.seed(43) # reproducible
params.isModification = False
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + id="6CrCDRDMA5fc" colab_type="code" colab={}
plotEvolutionProgress(allFitnesses, takeBestN = 5)
allFitnesses.shape
# + [markdown] id="vstiUgUrivpq" colab_type="text"
# ## Baseline: 20 gen x 2 individuals
# + id="m8KjjexlizkL" colab_type="code" outputId="ec6714c7-4c47-4bb8-8b6e-6dffa99234d7" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# %%time
np.random.seed(43) # reproducible
params.isModification = False
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + id="W5Ry72Ayizbv" colab_type="code" outputId="39d53602-c4a2-4394-abd7-d928f51e063c" colab={"base_uri": "https://localhost:8080/", "height": 282}
plotEvolutionProgress(allFitnesses, takeBestN = 2)
allFitnesses.shape
# + [markdown] id="3xJLUTkD3Vd0" colab_type="text"
# ## Baseline: 2 gen x 20 individuals
# + id="rvf9mjis3a8S" colab_type="code" outputId="d2a97031-5531-4186-fd3d-07899710a648" colab={"base_uri": "https://localhost:8080/", "height": 1000}
# %%time
np.random.seed(43) # reproducible
params.isModification = False
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + id="EC6-6PJ13azj" colab_type="code" outputId="539352d7-cada-4b92-8446-6d67912d224e" colab={"base_uri": "https://localhost:8080/", "height": 282}
plotEvolutionProgress(allFitnesses, takeBestN = 5)
allFitnesses.shape
# + [markdown] id="C64C734M9H5u" colab_type="text"
# # New try 10 epoch per train
# + id="LEjyQoZB9Lrh" colab_type="code" colab={}
# %%time
np.random.seed(43) # reproducible
params.isModification = True
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses = executeSelectionWithGeneticAlgorithm(params)
# + id="dWfu4E129Ll3" colab_type="code" outputId="b955e4b3-849d-4ba8-93ad-48158afff3ce" colab={"base_uri": "https://localhost:8080/", "height": 282}
plotEvolutionProgress(allFitnesses, takeBestN = 2)
allFitnesses.shape
# + id="eNk4q44pJ5tl" colab_type="code" colab={"base_uri": "https://localhost:8080/", "height": 595} outputId="fe0377ac-27a2-4386-da85-1827f994cf31"
lastGenIndividuals, bestIdx, lastGenFitness, allFitnesses
# + id="0IM7qy-HkurB" colab_type="code" colab={}
STAGES = np.array(["s1","s2","s3"]) # S
NUM_NODES = np.array([3,4,5]) # K
FILTERS = np.array([32, 48, 64])
sampleIndividual = [1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0]
# + id="o9T9FF95lC6c" colab_type="code" colab={}
model = CNN_build(STAGES, NUM_NODES, FILTERS, sampleIndividual, box_size, 10, 0)
model = compile_model(model)
visualize_model(model)
# + [markdown] id="CuXJhy_ieyQ0" colab_type="text"
# # Manually
# + id="FaIZak9OZnqx" colab_type="code" outputId="1703400d-603d-4371-b083-ebfdaeae9c73" colab={"base_uri": "https://localhost:8080/", "height": 51}
# %%time
x_train, y_train, x_test, y_test, box_size = load_mnist(doubled=1)
# + id="UwTVIW1He7kO" colab_type="code" colab={}
STAGES = np.array(["s1","s2","s3"]) # S
NUM_NODES = np.array([3,4,5]) # K
FILTERS = np.array([32, 48, 64])
sampleIndividual = [1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
# + id="BqSY8vpXfM8B" colab_type="code" outputId="21ba34ef-54a2-4c71-fe2d-221ae5ac9a7d" colab={"base_uri": "https://localhost:8080/", "height": 1000}
model = CNN_build(STAGES, NUM_NODES, FILTERS, sampleIndividual, 56, 10, 1)
# + id="nAOeVitcfOyI" colab_type="code" colab={}
model = compile_model(model)
# + id="C_90wxfdfRXF" colab_type="code" outputId="c7e4595d-fd7e-4c79-9c0a-69035287737a" colab={"base_uri": "https://localhost:8080/", "height": 1000}
visualize_model(model)
# + id="zGteF_JxgBak" colab_type="code" colab={}
from keras.callbacks import ReduceLROnPlateau, EarlyStopping
reducelr = ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=2, verbose=1, mode='min', cooldown=1)
estop = EarlyStopping(monitor='val_loss', min_delta=0, patience=5, verbose=1, mode='min')
# + id="LRmqrhD9fbnX" colab_type="code" outputId="16049449-de70-49f5-da7c-025ba4ac9c1a" colab={"base_uri": "https://localhost:8080/", "height": 680}
# %%time
history, result = train_model(model, x_train, y_train, x_test, y_test, 30, 512, 1, 0.2, [reducelr, estop])
# + id="YMYjil9fjP13" colab_type="code" outputId="2e259362-e372-43fa-cc10-3c9b25a3ef02" colab={"base_uri": "https://localhost:8080/", "height": 34}
result
# + id="Dq26Lc0ujT2Y" colab_type="code" outputId="dd631450-8098-4b7f-96b4-7beaed43d8b9" colab={"base_uri": "https://localhost:8080/", "height": 295}
from matplotlib import pyplot as plt
epochs = history.epoch
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.title('Loss/Val loss curve')
plt.xlabel('epochs')
plt.ylabel('loss')
plt.plot(epochs, loss, color='red', label='training')
plt.plot(epochs, val_loss, color='orange', label='validation')
plt.legend()
plt.show()
# + id="3K8N0nlVjZYp" colab_type="code" outputId="bd568d14-514d-411a-9ab3-997471262ffc" colab={"base_uri": "https://localhost:8080/", "height": 295}
acc = history.history['acc']
val_acc = history.history['val_acc']
plt.title('Acc/Val acc curve')
plt.xlabel('epochs')
plt.ylabel('accuracy')
plt.plot(epochs, acc, color='red', label='training')
plt.plot(epochs, val_acc, color='orange', label='validation')
plt.legend()
plt.show()
# + [markdown] id="hVp4A8xQU-fb" colab_type="text"
# # Resources
# + [markdown] id="MZObn4NLVDK0" colab_type="text"
# * Google Scholar searches: [link](https://scholar.google.com/scholar?hl=sr&as_sdt=0%2C5&q=genetic+cnn+handwritting&btnG=)
#
# * Main paper:
#     * .pdf: [link](https://arxiv.org/abs/1703.01513)
#     * github: [link](https://arxiv.org/abs/1703.01513)
# * Additional paper:
#     * .pdf: [link](https://arxiv.org/pdf/1710.10741.pdf)
# * Online article: [link](https://blog.coast.ai/lets-evolve-a-neural-network-with-a-genetic-algorithm-code-included-8809bece164)
# * If we implement it ourselves: [link](https://github.com/joeddav/devol/blob/master/devol/devol.py)
#
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# **Correctness verified on Python 3.6:**
# + numpy 1.15.4
# + pandas 0.23.4
# # Linear regression and stochastic gradient descent
# The assignment is based on the lecture materials on linear regression and gradient descent. You will predict a company's revenue as a function of its advertising investments in TV, newspapers and radio.
# ## You will learn to:
# - solve the linear regression estimation problem
# - implement stochastic gradient descent to fit it
# - solve the linear regression problem analytically
# ## Introduction
# Linear regression is one of the most thoroughly studied machine learning methods. It predicts the value of a quantitative target as a linear combination of the other features, with the model weights as parameters. The optimal parameters (in the sense of minimizing some error functional) can be found analytically via the normal equation, or numerically via optimization methods.
# Linear regression uses a simple quality functional: the mean squared error. We will work with a dataset containing 3 features. To fit the model parameters (weights), the following problem is solved:
# $$\Large \frac{1}{\ell}\sum_{i=1}^\ell{{((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}^2} \rightarrow \min_{w_0, w_1, w_2, w_3},$$
# where $x_{i1}, x_{i2}, x_{i3}$ are the feature values of the $i$-th object, $y_i$ is the target value of the $i$-th object, and $\ell$ is the number of objects in the training set.
# ## Gradient descent
# The parameters $w_0, w_1, w_2, w_3$ that minimize the mean squared error can be found numerically with gradient descent.
# The gradient step for the weights looks as follows:
# $$\Large w_0 \leftarrow w_0 - \frac{2\eta}{\ell} \sum_{i=1}^\ell{{((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}}$$
# $$\Large w_j \leftarrow w_j - \frac{2\eta}{\ell} \sum_{i=1}^\ell{{x_{ij}((w_0 + w_1x_{i1} + w_2x_{i2} + w_3x_{i3}) - y_i)}},\ j \in \{1,2,3\}$$
# Here $\eta$ is the gradient descent step size.
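The batch update above can be sketched in NumPy on synthetic data (the dataset, step size and iteration count here are illustrative, not part of the assignment):

```python
import numpy as np

rng = np.random.RandomState(0)
n = 200
X = np.hstack([np.ones((n, 1)), rng.randn(n, 3)])  # intercept + 3 features
true_w = np.array([2.0, 1.0, -3.0, 0.5])
y = X @ true_w + 0.1 * rng.randn(n)

w = np.zeros(4)
eta = 0.05
for _ in range(500):
    residual = X @ w - y                   # (w0 + w1*x_i1 + ...) - y_i
    w = w - 2 * eta / n * (X.T @ residual)  # simultaneous update of all w_j

print(np.round(w, 1))  # close to true_w
```

Note that `X.T @ residual` computes all four sums $\sum_i x_{ij}(\hat y_i - y_i)$ at once, so the loop body is exactly the displayed update for every weight simultaneously.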
# ## Stochastic gradient descent
# The problem with the gradient descent described above is that, on large datasets, computing the gradient over all available data at every step can be computationally very expensive.
# In the stochastic variant of gradient descent, the weight updates are computed using only one randomly chosen object from the training set:
# $$\Large w_0 \leftarrow w_0 - \frac{2\eta}{\ell} {((w_0 + w_1x_{k1} + w_2x_{k2} + w_3x_{k3}) - y_k)}$$
# $$\Large w_j \leftarrow w_j - \frac{2\eta}{\ell} {x_{kj}((w_0 + w_1x_{k1} + w_2x_{k2} + w_3x_{k3}) - y_k)},\ j \in \{1,2,3\},$$
# where $k$ is a random index, $k \in \{1, \ldots, \ell\}$.
# ## Normal equation
# The vector of optimal weights $w$ can also be found analytically.
# We want a weight vector $w$ such that the vector $y$ approximating the target is obtained by multiplying the matrix $X$ (consisting of all features of the training objects except the target) by the weight vector $w$. That is, the matrix equation should hold:
# $$\Large y = Xw$$
# Multiplying on the left by $X^T$, we get:
# $$\Large X^Ty = X^TXw$$
# This is convenient because the matrix $X^TX$ is square, so the solution (the vector $w$) can be found as:
# $$\Large w = {(X^TX)}^{-1}X^Ty$$
# The matrix ${(X^TX)}^{-1}X^T$ is the [*pseudo-inverse*](https://ru.wikipedia.org/wiki/Псевдообратная_матрица) of the matrix $X$. In NumPy it can be computed with [numpy.linalg.pinv](http://docs.scipy.org/doc/numpy-1.10.0/reference/generated/numpy.linalg.pinv.html).
#
# However, computing the pseudo-inverse is computationally expensive and unstable when the determinant of the matrix $X$ is small (the multicollinearity problem).
# In practice it is better to find the weight vector $w$ by solving the matrix equation
# $$\Large X^TXw = X^Ty$$
# This can be done with [numpy.linalg.solve](http://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.linalg.solve.html).
#
# Still, for large matrices $X$ gradient descent works faster in practice, especially its stochastic version.
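Both routes give the same least-squares solution. A small NumPy sketch on synthetic data (values are illustrative) comparing the pseudo-inverse against the linear-system solve:

```python
import numpy as np

rng = np.random.RandomState(1)
X = np.hstack([np.ones((50, 1)), rng.randn(50, 3)])
y = X @ np.array([1.0, 2.0, 3.0, 4.0]) + 0.01 * rng.randn(50)

# Via the pseudo-inverse (X^T X)^{-1} X^T:
w_pinv = np.linalg.pinv(X) @ y
# Via the linear system X^T X w = X^T y (preferred in practice):
w_solve = np.linalg.solve(X.T @ X, X.T @ y)

print(np.allclose(w_pinv, w_solve))  # True
```

On a well-conditioned matrix the two agree to machine precision; the `solve` route simply avoids forming the pseudo-inverse explicitly.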
# ## Instructions
# **1. Load the data from the file *advertising.csv* into a pandas DataFrame. [Data source](http://www-bcf.usc.edu/~gareth/ISL/data.html).**
# +
import pandas as pd
import seaborn as sns
import numpy as np
adver_data = pd.read_csv('advertising.csv')
# -
def write_to_file(answer, filename):
with open(filename, 'w') as f_out:
f_out.write(str(round(answer, 3)))
# **Look at the first 5 records and at the feature statistics in this dataset.**
adver_data.head()
sns.pairplot(adver_data)
# **Create NumPy arrays *X* from the columns TV, Radio and Newspaper, and *y* from the column Sales. Use the *values* attribute of the pandas DataFrame.**
X = adver_data[['TV', 'Radio', 'Newspaper']]
y = adver_data.Sales
# **Scale the columns of the matrix *X* by subtracting the column mean from every value and dividing the result by the standard deviation. For definiteness, use the mean and std methods of NumPy vectors (the std implementation in pandas may differ). Note that calling .mean() without arguments in numpy returns the mean over all elements of the array, not per column as in pandas. To compute per column, pass the axis parameter.**
means, stds = X.apply(np.mean), X.apply(np.std)
X = (X - means) / stds
# **Add a column of ones to the matrix *X*, using the NumPy methods *hstack*, *ones* and *reshape*. The column of ones is needed so that the intercept $w_0$ of the linear regression does not have to be handled separately.**
# +
# import numpy as np
# X = np.hstack # Ваш код здесь
X['x0'] = 1
X = X[['x0', 'TV', 'Radio', 'Newspaper']]
X.head()
# -
# **2. Implement the function *mserror* - the mean squared prediction error. It takes two arguments - the Series objects *y* (target values) and *y\_pred* (predicted values). Do not use loops in this function - otherwise it will be computationally inefficient.**
def mserror(y, y_pred):
    # Vectorized MSE; ravel guards against column-vector predictions.
    y, y_pred = np.asarray(y).ravel(), np.asarray(y_pred).ravel()
    return np.mean((y - y_pred) ** 2)
# **What is the mean squared error of predicting Sales if we always predict the median Sales value over the whole sample? The result, rounded to 3 decimal places, is the answer to *task 1*.**
y_median_sales = [np.median(y)] * len(y)
answer1 = mserror(y, y_median_sales)
print(answer1)
write_to_file(answer1, '1.txt')
# **3. Implement the function *normal_equation* which, given matrices (NumPy arrays) *X* and *y*, computes the weight vector $w$ according to the normal equation of linear regression.**
#
# $$X^TXw = X^Ty$$
def normal_equation(X, y):
A = np.dot(np.transpose(X), X)
b = np.dot(np.transpose(X), y)
w = np.linalg.solve(A, b)
return w
norm_eq_weights = normal_equation(X, y)
print(norm_eq_weights)
# **What sales does the linear model with weights found via the normal equation predict in the case of average advertising investments in TV, radio and newspapers (that is, with zero values of the scaled features TV, Radio and Newspaper)? The result, rounded to 3 decimal places, is the answer to *task 2*.**
# +
def predict(data, weights):
return np.dot([1] + data, weights)
answer2 = predict([0, 0, 0], norm_eq_weights)
print(answer2)
write_to_file(answer2, '2.txt')
# -
# **4. Write the function *linear_prediction* which takes a matrix *X* and a vector of linear model weights *w*, and returns the vector of predictions as a linear combination of the columns of *X* with the weights *w*.**
def linear_prediction(X, w):
    # np.dot handles any number of features; reshaping w to a column
    # would hard-code 4 features and return a 2-D column vector.
    return np.dot(X, w)
# **What is the mean squared error of the Sales predictions of the linear model with the weights found via the normal equation?
# The result, rounded to 3 decimal places, is the answer to *task 3*.**
answer3 = mserror(y, linear_prediction(X, norm_eq_weights))
print(answer3)
write_to_file(answer3, '3.txt')
# **5. Write the function *stochastic_gradient_step* implementing one step of stochastic gradient descent for linear regression. The function takes the matrix *X*, the vectors *y* and *w*, the number *train_ind* - the index of the training object (row of *X*) used to update the weights - and the number $\eta$ (eta), the gradient descent step size (eta=0.01 by default). The result is the vector of updated weights. Our implementation is written explicitly for data with 3 features, but it is easy to modify it for any number of features; feel free to do so.**
#
# $$w_0 \leftarrow w_0 + \frac{2\eta}{\ell} {(y_k - (w_0 + w_1x_{k1} + w_2x_{k2} + w_3x_{k3}))}$$
# $$w_j \leftarrow w_j + \frac{2\eta}{\ell} {x_{kj}(y_k - (w_0 + w_1x_{k1} + w_2x_{k2} + w_3x_{k3}))},\ j \in \{1,2,3\},$$
def stochastic_gradient_step(X, y, w, train_ind, eta=0.01):
# grad0 = # Ваш код здесь
# grad1 = # Ваш код здесь
# grad2 = # Ваш код здесь
# grad3 = # Ваш код здесь
# return w - eta * np.array([grad0, grad1, grad2, grad3])
l = len(y)
x_k = X.values[train_ind]
y_k = y.values[train_ind]
return w + 2*eta/l*x_k*(y_k - np.dot(w, x_k))
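As a sanity check, the step can be applied repeatedly on a small synthetic dataset. The sketch below mirrors the update formula but works on plain NumPy arrays and uses a different name (`sgd_step`) so it does not interfere with the notebook's pandas-based version; the data and iteration count are illustrative:

```python
import numpy as np

def sgd_step(X, y, w, train_ind, eta=0.01):
    # One SGD update using only the object at train_ind, per the formula above.
    l = len(y)
    x_k, y_k = X[train_ind], y[train_ind]
    return w + 2 * eta / l * x_k * (y_k - x_k @ w)

rng = np.random.RandomState(42)
X = np.hstack([np.ones((20, 1)), rng.randn(20, 3)])
true_w = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ true_w  # noiseless, so SGD can approach true_w arbitrarily closely

w = np.zeros(4)
for _ in range(50000):
    w = sgd_step(X, y, w, rng.randint(20))

print(np.round(w, 2))  # approaches true_w
```

Because each step uses a single random object, the error fluctuates from step to step, but on this consistent system the weights converge to the exact solution.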
# **6. Write the function *stochastic_gradient_descent* implementing stochastic gradient descent for linear regression. The function takes the following arguments:**
# - X - matrix corresponding to the training set
# - y - vector of target values
# - w_init - vector of initial model weights
# - eta - gradient descent step size (default 0.01)
# - max_iter - maximum number of gradient descent iterations (default 10000)
# - min_weight_dist - the Euclidean distance between the weight vectors of consecutive iterations
#   below which the algorithm stops (default 1e-8)
# - seed - number used for reproducibility of the generated pseudo-random numbers (default 42)
# - verbose - flag for printing information (e.g. for debugging, default False)
#
# **On each iteration, the current value of the mean squared error must be appended to a vector (list). The function must return the weight vector $w$ and the vector (list) of errors.**
def stochastic_gradient_descent(X, y, w_init, eta=1e-2, max_iter=1e4,
min_weight_dist=1e-8, seed=42, verbose=False):
    # Initialize the distance between the weight vectors of consecutive
    # iterations with a large number.
    weight_dist = np.inf
    # Initialize the weight vector
    w = w_init
    # Errors on each iteration will be collected here
    errors = []
    # Iteration counter
    iter_num = 0
    # We will generate pseudo-random numbers (the index of the object
    # used to update the weights); for reproducibility of this sequence
    # we fix the seed.
    np.random.seed(seed)
    # Main loop
    while weight_dist > min_weight_dist and iter_num < max_iter:
        # generate a pseudo-random
        # index of a training object
random_ind = np.random.randint(X.shape[0])
w_next = stochastic_gradient_step(X, y, w, random_ind, eta)
y_pred = linear_prediction(X, w_next)
errors.append(mserror(y, y_pred))
weight_dist = np.linalg.norm(w - w_next)
iter_num += 1
w = w_next
return w, errors
# **Run $10^5$ iterations of stochastic gradient descent. Use a vector of zeros as the initial weights *w_init*. Leave the parameters *eta* and *seed* at their default values (*eta*=0.01, *seed*=42 - this matters for checking the answers).**
stoch_grad_desc_weights, stoch_errors_by_iter = stochastic_gradient_descent(X, y, np.zeros(4), max_iter = 1e5)
# **Let us look at the error over the first 50 iterations of stochastic gradient descent. We can see that the error does not necessarily decrease on every iteration.**
# %pylab inline
plot(range(50), stoch_errors_by_iter[:50])
xlabel('Iteration number')
ylabel('MSE')
# **Now let us look at the error as a function of the iteration number over $10^5$ iterations of stochastic gradient descent. We can see that the algorithm converges.**
# %pylab inline
plot(range(len(stoch_errors_by_iter)), stoch_errors_by_iter)
xlabel('Iteration number')
ylabel('MSE')
# **Let us look at the weight vector the method converged to.**
stoch_grad_desc_weights
# **Let us look at the mean squared error on the last iteration.**
stoch_errors_by_iter[-1]
# **What is the mean squared error of the Sales predictions of the linear model with the weights found via gradient descent? The result, rounded to 3 decimal places, is the answer to *task 4*.**
answer4 = stoch_errors_by_iter[-1]
print(answer4)
write_to_file(answer4, '4.txt')
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Copyright 2015 The TensorFlow Authors and <NAME>. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Custom image operations.
Most of the following methods extend TensorFlow image library, and part of
the code is shameless copy-paste of the former!
"""
import tensorflow as tf
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import tensor_shape
from tensorflow.python.framework import tensor_util
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import check_ops
from tensorflow.python.ops import clip_ops
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import gen_image_ops
from tensorflow.python.ops import gen_nn_ops
from tensorflow.python.ops import string_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.ops import random_ops
from tensorflow.python.ops import variables
# =========================================================================== #
# Modification of TensorFlow image routines.
# =========================================================================== #
def _assert(cond, ex_type, msg):
"""A polymorphic assert, works with tensors and boolean expressions.
If `cond` is not a tensor, behave like an ordinary assert statement, except
    that an empty list is returned. If `cond` is a tensor, return a list
containing a single TensorFlow assert op.
Args:
cond: Something evaluates to a boolean value. May be a tensor.
ex_type: The exception class to use.
msg: The error message.
Returns:
A list, containing at most one assert op.
"""
if _is_tensor(cond):
return [control_flow_ops.Assert(cond, [msg])]
else:
if not cond:
raise ex_type(msg)
else:
return []
def _is_tensor(x):
"""Returns `True` if `x` is a symbolic tensor-like object.
Args:
x: A python object to check.
Returns:
`True` if `x` is a `tf.Tensor` or `tf.Variable`, otherwise `False`.
"""
return isinstance(x, (ops.Tensor, variables.Variable))
def _ImageDimensions(image):
"""Returns the dimensions of an image tensor.
Args:
image: A 3-D Tensor of shape `[height, width, channels]`.
Returns:
A list of `[height, width, channels]` corresponding to the dimensions of the
input image. Dimensions that are statically known are python integers,
otherwise they are integer scalar tensors.
"""
if image.get_shape().is_fully_defined():
return image.get_shape().as_list()
else:
static_shape = image.get_shape().with_rank(3).as_list()
dynamic_shape = array_ops.unstack(array_ops.shape(image), 3)
return [s if s is not None else d
for s, d in zip(static_shape, dynamic_shape)]
def _Check3DImage(image, require_static=True):
"""Assert that we are working with properly shaped image.
Args:
image: 3-D Tensor of shape [height, width, channels]
require_static: If `True`, requires that all dimensions of `image` are
known and non-zero.
Raises:
ValueError: if `image.shape` is not a 3-vector.
Returns:
An empty list, if `image` has fully defined dimensions. Otherwise, a list
containing an assert op is returned.
"""
try:
image_shape = image.get_shape().with_rank(3)
except ValueError:
raise ValueError("'image' must be three-dimensional.")
if require_static and not image_shape.is_fully_defined():
raise ValueError("'image' must be fully defined.")
if any(x == 0 for x in image_shape):
raise ValueError("all dims of 'image.shape' must be > 0: %s" %
image_shape)
if not image_shape.is_fully_defined():
return [check_ops.assert_positive(array_ops.shape(image),
["all dims of 'image.shape' "
"must be > 0."])]
else:
return []
def fix_image_flip_shape(image, result):
"""Set the shape to 3 dimensional if we don't know anything else.
Args:
image: original image size
result: flipped or transformed image
Returns:
An image whose shape is at least None,None,None.
"""
image_shape = image.get_shape()
if image_shape == tensor_shape.unknown_shape():
result.set_shape([None, None, None])
else:
result.set_shape(image_shape)
return result
# =========================================================================== #
# Image + BBoxes methods: cropping, resizing, flipping, ...
# =========================================================================== #
def bboxes_crop_or_pad(bboxes,
height, width,
offset_y, offset_x,
target_height, target_width):
"""Adapt bounding boxes to crop or pad operations.
Coordinates are assumed to be relative to the image.
Arguments:
bboxes: Tensor Nx4 with bboxes coordinates [y_min, x_min, y_max, x_max];
height, width: Original image dimension;
offset_y, offset_x: Offset to apply,
negative if cropping, positive if padding;
target_height, target_width: Target dimension after cropping / padding.
"""
with tf.name_scope('bboxes_crop_or_pad'):
# Rescale bounding boxes in pixels.
scale = tf.cast(tf.stack([height, width, height, width]), bboxes.dtype)
bboxes = bboxes * scale
# Add offset.
offset = tf.cast(tf.stack([offset_y, offset_x, offset_y, offset_x]), bboxes.dtype)
bboxes = bboxes + offset
# Rescale to target dimension.
scale = tf.cast(tf.stack([target_height, target_width,
target_height, target_width]), bboxes.dtype)
bboxes = bboxes / scale
return bboxes
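# A NumPy sketch of the rescale -> offset -> rescale arithmetic above
# (illustrative only; not part of the TF graph):

```python
import numpy as np

def bboxes_crop_or_pad_np(bboxes, height, width,
                          offset_y, offset_x,
                          target_height, target_width):
    """NumPy version of the rescale / offset / rescale steps."""
    bboxes = np.asarray(bboxes, dtype=np.float64)
    # To pixel coordinates in the original image.
    bboxes = bboxes * np.array([height, width, height, width], dtype=np.float64)
    # Shift by the crop (negative) or pad (positive) offset.
    bboxes = bboxes + np.array([offset_y, offset_x, offset_y, offset_x],
                               dtype=np.float64)
    # Back to coordinates relative to the target image.
    return bboxes / np.array([target_height, target_width,
                              target_height, target_width], dtype=np.float64)

# A box covering the central half of a 100x100 image, padded to 200x200 with a
# (50, 50) offset, becomes the central quarter of the new image:
box = bboxes_crop_or_pad_np([[0.25, 0.25, 0.75, 0.75]],
                            100, 100, 50, 50, 200, 200)
# box -> [[0.375, 0.375, 0.625, 0.625]]
```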
def resize_image_bboxes_with_crop_or_pad(image, bboxes,
target_height, target_width):
"""Crops and/or pads an image to a target width and height.
Resizes an image to a target width and height by either centrally
cropping the image or padding it evenly with zeros.
If `width` or `height` is greater than the specified `target_width` or
`target_height` respectively, this op centrally crops along that dimension.
If `width` or `height` is smaller than the specified `target_width` or
`target_height` respectively, this op centrally pads with 0 along that
dimension.
Args:
image: 3-D tensor of shape `[height, width, channels]`
bboxes: Tensor Nx4 with relative bbox coordinates [y_min, x_min, y_max, x_max].
target_height: Target height.
target_width: Target width.
Raises:
ValueError: if `target_height` or `target_width` are zero or negative.
Returns:
Cropped and/or padded image of shape
`[target_height, target_width, channels]`
"""
with tf.name_scope('resize_with_crop_or_pad'):
image = ops.convert_to_tensor(image, name='image')
assert_ops = []
assert_ops += _Check3DImage(image, require_static=False)
assert_ops += _assert(target_width > 0, ValueError,
'target_width must be > 0.')
assert_ops += _assert(target_height > 0, ValueError,
'target_height must be > 0.')
image = control_flow_ops.with_dependencies(assert_ops, image)
# `crop_to_bounding_box` and `pad_to_bounding_box` have their own checks.
# Make sure our checks come first, so that error messages are clearer.
if _is_tensor(target_height):
target_height = control_flow_ops.with_dependencies(
assert_ops, target_height)
if _is_tensor(target_width):
target_width = control_flow_ops.with_dependencies(assert_ops, target_width)
def max_(x, y):
if _is_tensor(x) or _is_tensor(y):
return math_ops.maximum(x, y)
else:
return max(x, y)
def min_(x, y):
if _is_tensor(x) or _is_tensor(y):
return math_ops.minimum(x, y)
else:
return min(x, y)
def equal_(x, y):
if _is_tensor(x) or _is_tensor(y):
return math_ops.equal(x, y)
else:
return x == y
height, width, _ = _ImageDimensions(image)
width_diff = target_width - width
offset_crop_width = max_(-width_diff // 2, 0)
offset_pad_width = max_(width_diff // 2, 0)
height_diff = target_height - height
offset_crop_height = max_(-height_diff // 2, 0)
offset_pad_height = max_(height_diff // 2, 0)
# Maybe crop if needed.
height_crop = min_(target_height, height)
width_crop = min_(target_width, width)
cropped = tf.image.crop_to_bounding_box(image, offset_crop_height, offset_crop_width,
height_crop, width_crop)
bboxes = bboxes_crop_or_pad(bboxes,
height, width,
-offset_crop_height, -offset_crop_width,
height_crop, width_crop)
# Maybe pad if needed.
resized = tf.image.pad_to_bounding_box(cropped, offset_pad_height, offset_pad_width,
target_height, target_width)
bboxes = bboxes_crop_or_pad(bboxes,
height_crop, width_crop,
offset_pad_height, offset_pad_width,
target_height, target_width)
# In theory all the checks below are redundant.
if resized.get_shape().ndims is None:
raise ValueError('resized contains no shape.')
resized_height, resized_width, _ = _ImageDimensions(resized)
assert_ops = []
assert_ops += _assert(equal_(resized_height, target_height), ValueError,
'resized height is not correct.')
assert_ops += _assert(equal_(resized_width, target_width), ValueError,
'resized width is not correct.')
resized = control_flow_ops.with_dependencies(assert_ops, resized)
return resized, bboxes
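# A plain-Python sketch of the per-dimension offset arithmetic used above
# (assuming static integer shapes):

```python
def crop_or_pad_offsets(size, target):
    """Return (offset_crop, offset_pad, kept) for one spatial dimension."""
    diff = target - size
    offset_crop = max(-diff // 2, 0)  # pixels cut from the near edge when cropping
    offset_pad = max(diff // 2, 0)    # zero padding added before the image
    kept = min(target, size)          # extent that survives the crop step
    return offset_crop, offset_pad, kept

crop_or_pad_offsets(100, 60)   # -> (20, 0, 60): central crop
crop_or_pad_offsets(100, 140)  # -> (0, 20, 100): central pad
```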
def resize_image(image, size,
method=tf.image.ResizeMethod.BILINEAR,
align_corners=False):
"""Resize an image and bounding boxes.
"""
# Resize image.
with tf.name_scope('resize_image'):
height, width, channels = _ImageDimensions(image)
image = tf.expand_dims(image, 0)
image = tf.image.resize_images(image, size,
method, align_corners)
image = tf.reshape(image, tf.stack([size[0], size[1], channels]))
return image
def random_flip_left_right(image, bboxes, seed=None):
"""Random flip left-right of an image and its bounding boxes.
"""
def flip_bboxes(bboxes):
"""Flip bounding boxes coordinates.
"""
bboxes = tf.stack([bboxes[:, 0], 1 - bboxes[:, 3],
bboxes[:, 2], 1 - bboxes[:, 1]], axis=-1)
return bboxes
# Random flip. Tensorflow implementation.
with tf.name_scope('random_flip_left_right'):
image = ops.convert_to_tensor(image, name='image')
_Check3DImage(image, require_static=False)
uniform_random = random_ops.random_uniform([], 0, 1.0, seed=seed)
mirror_cond = math_ops.less(uniform_random, .5)
# Flip image.
result = control_flow_ops.cond(mirror_cond,
lambda: array_ops.reverse_v2(image, [1]),
lambda: image)
# Flip bboxes.
bboxes = control_flow_ops.cond(mirror_cond,
lambda: flip_bboxes(bboxes),
lambda: bboxes)
return fix_image_flip_shape(image, result), bboxes
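# A NumPy sketch of the bbox mirroring performed by flip_bboxes
# (illustrative only):

```python
import numpy as np

def flip_bboxes_np(bboxes):
    """Mirror [y_min, x_min, y_max, x_max] boxes (relative coords) left-right."""
    b = np.asarray(bboxes, dtype=np.float64)
    # y is unchanged; the new x_min is 1 - old x_max, and vice versa.
    return np.stack([b[:, 0], 1 - b[:, 3], b[:, 2], 1 - b[:, 1]], axis=-1)

flip_bboxes_np([[0.1, 0.2, 0.5, 0.4]])  # -> [[0.1, 0.6, 0.5, 0.8]]
```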
| preprocessing/tf_image.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### <NAME>
# ## Roll # : BAI09056
# ### IIMB - BAI09 - Assignment 2
#
# +
from IPython.display import HTML
HTML('''<script>
code_show=true;
function code_toggle() {
if (code_show){
$('div.input').hide();
} else {
$('div.input').show();
}
code_show = !code_show
}
$( document ).ready(code_toggle);
</script>
<form action="javascript:code_toggle()"><input type="submit" value="Toggle on/off Code"></form>''')
# +
import scipy.stats as stats
import numpy as np
import pandas as pd
import warnings
warnings.filterwarnings('ignore')
# %matplotlib inline
# -
# # Q 1.1
#
# We will use the following formula to calculate the coefficient of CRIM.
#
# \begin{equation*} \beta = r * \frac{SD_Y} {SD_X}\end{equation*}
#
# \begin{equation*}\text {where r = Correlation of X (CRIM) and Y (PRICE)} \end{equation*}
# \begin{equation*}SD_x \text{= Standard deviation of X}\end{equation*}
# \begin{equation*}SD_y \text{= Standard deviation of Y}\end{equation*}
#
#
# From table 1.1 we can find SDx = 8.60154511 & SDy = 9.197
# From table 1.2 we can find r = -.388
#
# Using the above we can find:
# +
sd_crim = 8.60154511
sd_price = 9.197
r = -.388
B1 = r * sd_price / sd_crim
print("B1 {}, implies as crime rate increases by 1 unit, unit price reduces by {} units (Lac INR)".format(B1, abs(B1)))
# -
# # Q 1.2
#
# The range of coefficients is given by:
# \begin{equation*} \beta \pm \text{t-crit} * SE_{\beta}\end{equation*}
#
# where t-critical is the critical value of T for significance alpha.
#
# Interpretation: \begin{equation*} \beta =\text {Increase in Y as X changes by 1 Unit} \end{equation*}
# +
n = 506
seb1 = 0.044
tcrit = abs(stats.t.ppf(0.025, df = 505))
print("T-critical at alpha {} and df {} is {}".format(0.05, 505, tcrit))
print("Min B1 {}".format(B1 + tcrit * seb1))
print("Max B1 {}".format(B1 - tcrit * seb1))
print("Price will reduce between 32K to 50K with 95% CI, hence his assumption that it reduces by at least 30K is correct")
# -
# # Q 1.3
#
# Regression is valid only for the observed range of X (Predictor). The min value of crime rate = .0068 > 0. Hence it is incorrect to draw any conclusion about the predicted values of Y for CRIM == 0, as that value is unobserved.
#
# We cannot claim the value will be 24.03
# # Q 1.4
#
# Here Y predicted can be calculated from the regression equation:
# 24.033 - 0.414 * 1 (Value of CRIM)
#
# For large values of n the range of Y-predicted is given by:
# \begin{equation*} \hat Y \pm \text{t-crit *} SE_{Y}\end{equation*}
#
# where t-critical is the critical value of T for significance alpha (0.05).
#
#
# +
se = 8.484 #seb1 * sd_crim * (n - 1) ** 0.5
#print(se)
yhat = 24.033 - 0.414 * 1
yhat_max = (yhat + tcrit * se)
print("Max Value of Price for CRIM ==1 is {}".format(yhat_max))
# -
# # Q 1.5
#
# Here Y predicted (mean value of regression) can be calculated from the regression equation:
# 24.033 + 6.346 * 1 (Value of SEZ)
#
# The t-value is computed as:
# \begin{equation*} t = \frac {(Y_0 - \hat Y)} {SE_{estimate}} \end{equation*}
#
# We can calculate the probability using the CDF of a normal distribution. Since we want the probability that the price is at least 40 lac, we consider the right tail at this t-value.
# +
yhat = 22.094 + 6.346
print("Mean Regression value {}".format(yhat))
t = (40 - yhat) / 9.064
print("t-crit at alpha 0.05 is {}".format(t))
print("Y-pred follows a normal distribution. Probability of Price being at least 40 lac is {} percent".format(round((1 - stats.norm.cdf(t))* 100, 2)))
# -
# # Q 1.6 - a
#
# From the residual plot, by visual inspection we can see that the spread of standardised errors is higher for lower values of the standardised prediction than for higher values.
#
# Hence the variance of the residuals is not equal, which demonstrates heteroscedasticity.
#
# # Q 1.6 - b
#
# 1. It is a right-skewed distribution
# 2. The left tail has a smaller proportion of data than that of a normal distribution
# 3. In the 40-80% range the distribution has a much smaller proportion of data than a normal distribution
#
# From the P-P plot we conclude there is considerable difference between this distribution and a normal distribution.
#
# # Q 1.6 - c
#
# Based on the above we can conclude that this regression equation may not be functionally correct. It may not be correct to rely on predictions using this model.
# # Q 1.7
#
# The increase in R-squared when a new variable is added to a model is given by the **Square of the Semi-Partial (PART) Correlation**.
#
# - From Table 1.7: R-squared @ Step 2 = 0.542
# - From Table 1.8: PART Correlation for adding RES = -.153
print("R-squared in Step 3 is {}".format(0.542 + (-.153) ** 2))
# # Q 1.8
#
# It reduces because there is correlation between RM and CRIM. Part of what was explained by RM in Model 1 is now being explained by CRIM in Model 2.
#
# Technically this is called Omitted Variable Bias. The reduction can be explained by the following equation:
#
# \begin{equation*} \alpha_{RM_{Model1}} = \beta_{RM_{Model2}} + \frac{\beta_{CRIM_{Model2}} * Cov(RM, CRIM)} {Var(RM)} \end{equation*}
#
#
# From the correlation table we see that RM and CRIM have negative correlation, hence the overall value of the coefficient for RM reduces.
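# A small simulation (synthetic data, with x1 standing in for RM and x2 for CRIM; not the housing dataset) illustrating this omitted-variable-bias identity: the short-regression slope equals the long-regression slope plus the omitted coefficient times Cov(x1, x2) / Var(x1).

```python
import numpy as np

rng = np.random.RandomState(1)
n = 10000
x1 = rng.randn(n)              # stands in for RM
x2 = -0.6 * x1 + rng.randn(n)  # stands in for CRIM (negatively correlated)
y = 3.0 * x1 - 2.0 * x2 + rng.randn(n)

def ols(xs, y):
    X = np.column_stack([np.ones(len(y))] + list(xs))
    return np.linalg.lstsq(X, y, rcond=None)[0]

alpha1 = ols([x1], y)[1]        # short regression: y ~ x1
_, b1, b2 = ols([x1, x2], y)    # long regression: y ~ x1 + x2

delta = np.cov(x1, x2)[0, 1] / np.var(x1, ddof=1)
print(alpha1, b1 + b2 * delta)  # the two values agree
```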
# +
# Import the library
import matplotlib.pyplot as plt
from matplotlib_venn import venn3
# Make the diagram
v = venn3(subsets = (1, 1, 1, 1, 1, 1, 1), set_labels= ('PRICE', 'RM', 'CRIM'))
v.get_label_by_id('101').set_text('Y_CRIM')
v.get_label_by_id('110').set_text('Y_RM')
v.get_label_by_id('111').set_text('Y_RM_CRIM')
v.get_label_by_id('011').set_text('RM_CRIM')
plt.show()
# -
# # Q 1.9
#
# We will use the model in step - 6 for answering this question.
#
# - Since the variables are not standardised we cannot use the magnitude of the coefficients as a measure of impact on dependent variable (Price)
# - We will use the notion of the Standardised Coefficients to measure how much 1 SD change in the variable X (Predictor) changes Y (dependent)
#
# - From Tables 1.1 and 1.8 we can easily obtain the Standardised Coefficients for the regression model for all variables except for RM as the SD of RM is not provided in table 1.1 and the Standardised coefficient of RM is not provided in table 1.8. Standardised Coefficient is calculated using:
#
# \begin{equation*} \beta_{STANDARDISED} = \hat\beta * \frac {S_X} {S_Y} \end{equation*}
#
# where \begin{equation*} \text{Standard Deviation X} = S_X \end{equation*}
# & \begin{equation*} \text{Standard Deviation Y} = S_Y \end{equation*}
#
#
# - To calculate the variance of RM we will use the Model 1
# - In Model 1 the coefficient of RM is 9.102
# - Standardized Coefficient of RM = .695, SD of PRICE (Y) = 9.197
# - Using these values and rearranging the equation discussed above, we get SD of RM = .7022
#
# - From the below table we can see that **RM** has the highest impact on PRICE.
# +
data = pd.DataFrame({"_": ["INTERCEPT","RM","CRIM","RES","SEZ","Highway", "AGE"]})
data["Coefficients"] = [-8.993, 7.182, -.194, -.318, 4.499, -1.154, -.077]
data["Standardized Coefficients"] = ['', (7.182 * .7022) / 9.197, -.194 * 8.60154511 / 9.197,
-.238, .124, .264,
-.077 * 28.1489 / 9.197]
data
# -
# # Q 2.1
#
# Correct:
#
# ***1. The model explains 42.25% of variation in box office collection.***
#
# ***2. There are outliers in the model.***
#
# ***3. The residuals do not follow a normal distribution.***
#
# Incorrect:
#
# 4. The model cannot be used since R-square is low.
#
# 5. Box office collection increases as the budget increases.
#
#
#
# # Q 2.2
#
# Here Budget (X) can never be 0, as it may not be possible to produce a movie without money; X = 0 is unobserved, i.e. it falls outside the domain of the observed values of the variable X. The relationship between the variables can change as we move outside the observed region: the model explains the relationship between Y and X within the range of observed values only, so we cannot predict for a point outside that range using the regression model.
#
# Hence Mr Chellapa's observation is incorrect.
#
# # Q 2.3
#
# Since the variable is insignificant at alpha = 0.05, the coefficient may not be different from zero. There is no statistical evidence that the collection of movies released in Releasing_Time Normal_Season differs from Releasing_Time Holiday_Season (which is factored into the intercept / constant).
#
# Since we do not have the data, we cannot rerun the model without the insignificant variable. We will assume that the coefficient is 0 and its removal does not have any effect on the overall equation (other significant variables).
#
# Hence the difference is **Zero**.
y = 2.685 + .147
#print("With beta = .147 y = {}".format(y))
#print("With beta = 0 y = {}".format(2.685))
# # Q 2.4
#
# The beta for Releasing_Time Normal_Season is considered 0 as it is statistically insignificant at alpha = 0.05, and hence is factored into the intercept term. Releasing_Time Long_Weekend is statistically significant with coefficient = 1.247.
#
# A range of values is considered because of the variability of the coefficient estimate.
#
# SE = 0.588, t-crit @ 0.05 = 1.964
# Max Value = exp(Constant + beta + tcrit * SE)
# Min Value = exp(Constant + beta - tcrit * SE)
# +
Bmax = np.exp(2.685 + 1.247 + 1.964 *.588)# - np.exp(2.685)
print("Max earning from Long weekend movie releases can be {}".format(Bmax))
Bmin = np.exp(2.685+1.247 - 1.964 *.588)
print("Min earning from Long weekend movie releases can be {}".format(Bmin))
print("Movies released in normal Weekends may earn on Average {}".format(np.exp(2.685)))
#print("Movies released in normal Weekends may earn on Average {}".format(np.exp(2.685 + .147)))
print("Movies released in Long Weekends may or may not earn at least 5 Cr more than movies released in normal season as the min difference is around 2 Cr")
print("Mr. Chellapa's statement is incorrect.")
# -
# # Q 2.5
#
# The increase in R-squared when a new variable is added to a model is given by the **Square of the Semi-Partial (PART) Correlation**.
#
# The assumption here is the variable "Director_CAT C" was the last variable added to model at Step 6. We have to make this assumption as variables added in prior stages are not available.
#
# - From Table 2.5 : R @ Step 5 = 0.810, so R-squared = 0.810 ** 2 = .6561
# - From Table 2.6: PART Correlation for adding Director_CAT C = -.104
print("R-squared in Step 3 is {}".format(0.6561 + (-.104) ** 2))
# # Q2.6
#
# - Budget_35_Cr has the highest impact on the performance of the movie. On average a movie with budget exceeding 35 Cr earns 1.53 Cr more than a movie with a smaller budget.
#
# - Recommendation:
# Use high enough budget to:
# - Hire Category A Production House
# - Do not hire Category C Director
# - Do not hire Category C Music Director
# - Produce a Comedy movie
# # Q 2.7
#
# - We cannot say that the variables have no relationship to Y (BOX Office Collection)
# - We can conclude that in presence of the other variables the variables in Model 2 are not explaining additional information about Y
# Make the diagram
v = venn3(subsets = (1, 1, 1, 1, 1, 1, 1), set_labels= ('Y', 'A', 'B'))
v.get_label_by_id('101').set_text('Y_B')
v.get_label_by_id('110').set_text('Y_A')
v.get_label_by_id('111').set_text('Y_A_B')
v.get_label_by_id('011').set_text('A_B')
plt.show()
# From chart above we can see that as we add new variables (A, B) it explains variations in Y. The explained variation in Y due to addition of a new variable should be significant enough. This is measured by:
# 1. t-test for individual variable
# 2. Partial F-test for the models generated consecutively
#
# We may conclude that the variables of Model 2 may not be explaining significant variations in Y in presence of the additional variables added later on and hence was dropped.
#
#
#
# # Q 2.8
#
# We are making the assumption that the variable Youtube_Views refers to views of the actual movie and not of the trailers released before the movie; if it refers to trailer views, the following explanation will not be valid. We also assume that revenue collected from advertisements during Youtube views does not fall under the Box Office Collection.
#
# Youtube_Views will not contribute anything meaningful functionally to the Box Office Collection, as the movie has already been created and released in theaters and all possible collection is complete. The main essence of the prediction here is to understand, before making a movie, which factors may lead to better revenue collection.
# # Q 3.1
# ### Table 3.1
#
# - **Observations** (N) = 543
# - **Standard Error**
# - \begin{equation*} SE = \sqrt {\frac{ \sum_{k=1}^N {(Y_k - \hat{Y_k})^2}} {N - 2}} \end{equation*}
#
# \begin{equation*} (Y_k - \hat{Y_k})^2 = \epsilon_k^2 = \text{Residual SS (SSE)} = \text{17104.06 (Table 3.2)}\end{equation*}
#
#
# - **R-Squared** = 1 - SSE / SST
# - SSE = 17104.06 (Table 3.2)
# - SST = 36481.89 (Table 3.2)
#
#
#
# - **Adjusted R-Squared** = 1 - (SSE / N-k-1) / (SST/N-1)
# - N = 543
# - K = 3
#
#
#
# - **Multiple R** = \begin{equation*} \sqrt R_{Squared}\end{equation*}
#
x = ["Multiple R", "R Square", "Adjusted R Squared", "Standard Error", "Observations"]
data = pd.DataFrame({"Regression Statistics": x})
data["_"] = [(1 - 17104.06/36481.89) ** 0.5,1 - 17104.06/36481.89, 1 - (17104.06/(543 - 3 -1))/(36481.89/542),((17104.06)/541) ** 0.5,543]
data
# ### Table 3.2
#
# - **DF Calculation**
# - DF for Regression (K) = Number of variables = 3
# - DF for Residual = N - K - 1 = 539
#
#
# - **SS Calculation**
# - Residual SS (SSE) = 17104.06 (given)
# - Total SS (TSS)= 36481.89 (given)
# - Regression SS (SSR) = TSS - SSE = 19377.83
#
#
# - **MS Calculation**
# - MSR (Regression) = SSR / DF for SSR (=3)
# - MSE (Error) = SSE / DF for SSE (= 539)
#
#
# - **F Calculation**
# - F = MSR / MSE
# +
x = ["Regression", "Residual", "Total"]
ss = [36481.89 - 17104.06, 17104.06,36481.89]
df = [3, 539,542]
ms = [19377.83 / 3, 17104.06 / 539, '']
f = [(19377.83 / 3) / (17104.06 / 539),'','']
sf = [1 - stats.f.cdf((19377.83 / 3) / (17104.06 / 539), 3, 539),'','']
data = pd.DataFrame({"_": x})
data["DF"] = df
data["SS"] = ss
data["MS"] = ms
data["F"] = f
data["SignificanceF"] = sf
data
# -
# ### Table 3.3 - Coefficients
#
# - MLR T-Test
# - \begin{equation*} t_i = \frac {\beta_i - 0} {Se(\beta_i)}\end{equation*}
# where i indexes the variables (here there are 3 variables plus the intercept)
# +
data = pd.DataFrame({"_":["Intercept", "Margin", "Gender", "College"]})
data["Coefficeints"] = [38.59235, 5.32e-05, 1.551306, -1.47506]
data["Standard Error"] = [0.937225, 2.18e-06, 0.777806, 0.586995]
data["t Stat"] = [(38.59235 / 0.937225),5.32e-05 / 2.18e-06, 1.551306/0.777806, -1.47506/ 0.586995]
data["P-Value"] = ['','','','']
data["Lower 95%"] = [36.75129, 4.89E-05, 0.023404, -2.62814]
data["Upper 95%"] = [40.4334106,5.7463E-05,3.07920835,-0.3219783]
data
# -
# # Q 3.2
#
# From the table above we see that for all the variables |t-value| > 1.964 (the critical value of t at significance 0.05), hence all the variables are significant.
# # Q 3.3
#
# The critical value of the F-distribution with DF = (3, 539) at significance 0.05 is 2.621. The model's F statistic is far larger, hence the model is significant.
1 - stats.f.cdf(2.621, 3, 539)
stats.f.ppf(0.95, 3, 539)
# # Q 3.4
#
# The increase in R-squared when a new variable is added to a model is given by the **Square of the Semi-Partial (PART) Correlation**.
#
# - R-squared for Model 2 = 0.52567 (R1)
# - R-squared for Model 3 = 0.531163 (R2)
#
# Part Correlation of College & % Votes = \begin{equation*}\sqrt{R_2 - R_1} \end{equation*}
#
print("Increase in R-Squared due to adding College = {}".format(0.531163 - 0.52567))
print("Part Correlation of College & % Votes = {}".format((0.531163 - 0.52567)**0.5))
# # Q 3.5
#
# We will conduct Partial F-test between models to test for significance of each model. We make the assumption that the variables added are significant at each step (model) at alpha 0.05
#
# \begin{equation*}F_{PARTIAL} = \frac{\frac{R_{FULL}^2 - R_{PARTIAL}^2} {k - r}} {\frac{1 - R_{FULL}^2} {N - k - 1}}\end{equation*}
#
# where k = variables in full model,
# r = variables in reduced model,
# N = Total number of records
#
# +
def f_partial(rf, rp, n, k, r):
return ((rf **2 - rp ** 2)/(k-r))/((1 - rf ** 2)/ (n - k - 1))
print("Model 3 Partial F {}".format(f_partial(0.531163, 0.52567, 543, 3, 2)))
print("Model 3 Critical F at Df = (1, 539) {}".format(1 - stats.f.cdf(4.36, 1, 539)))
print("Model 4 Partial F {}".format(f_partial(0.56051, 0.531163, 543, 4, 3)))
print("Model 4 Critical F at Df = (1, 539) {}".format(1 - stats.f.cdf(25.13, 1, 539)))
print("Model 5 Partial F {}".format(f_partial(0.581339, 0.56051, 543, 5, 4)))
print("Model 5 Critical F at Df = (1, 539) {}".format(1 - stats.f.cdf(19.29, 1, 539)))
print("\nHence we can see that all the models are significant. The number of features (5) are not very high, hence we conclude it's justified to add the additional variables")
# -
# # Q 3.6
#
# - Since the variables are not standardised we cannot use the magnitude of the coefficients as a measure of impact on the dependent variable (Vote %)
# - We will use the notion of the Standardised Coefficients to measure how much 1 SD change in the variable X (Predictor) changes Y (dependent)
#
# - Using Table 3.5 and equations below we will compute Standardised Coefficient:
#
# \begin{equation*} \beta_{STANDARDISED} = \hat\beta * \frac {S_X} {S_Y} \end{equation*}
#
# where \begin{equation*} \text{Standard Deviation X} = S_X \end{equation*}
# & \begin{equation*} \text{Standard Deviation Y} = S_Y \end{equation*}
#
# - From the below table we can see that **MARGIN** has the highest impact on Vote %. 1 SD change in Margin changes .75 SD in Vote %
data = pd.DataFrame({"_": ["INTERCEPT","MARGIN","Gender","College","UP","AP"]})
data["Coefficients"] = [38.56993, 5.58E-05, 1.498308, -1.53774, -3.71439, 5.715821]
data["Standard deviation"] = ['', 111365.7, 0.311494, 0.412796, 0.354761, 0.209766]
data["Standardized Coefficients"] = ['', 5.58E-05 * 111365.7 / 8.204253, 1.498308 * 0.311494 / 8.204253,
-1.53774 * 0.412796 / 8.204253, -3.71439 * 0.354761 / 8.204253,
5.715821 * 0.209766 / 8.204253]
data
# # Q 4.1
# +
positives = 353+692
negatives = 751+204
N = positives + negatives
print("Total Positives: {} :: Total Negatives: {} :: Total Records: {}".format(positives, negatives, N))
pi1 = positives / N
pi2 = negatives / N
print("P(Y=1) = positives / N = {} :: P(Y=0) = negatives /N = {}".format(pi1, pi2))
_2LL0 = -2* (negatives * np.log(pi2) + positives * np.log(pi1))
print("-2LL0 = {}".format(_2LL0))
# -
# - -2LL0 is called the "Null Deviance" of a model. It is -2 Log Likelihood of a model which has no predictor variables. For such a model we obtain the probabilities of positive and negative in the dataset from the frequencies.
#
# - After adding "Premium", -2LL reduces to 2629.318 (Table 4.2). Hence the reduction equals (-2LL0 - (-2LLm)):
print(2768.537 - 2629.318)
# # Q 4.2
# +
print("True Positive :Actually Positive and Predicted Positive = {}".format(692))
print("False Positive :Actually Negative and Predicted Positive = {}".format(204))
print("Precision = True Positive / (True Positive + False Positive) = {}".format(692.0 / (692 + 204)))
# -
# # Q 4.3
#
# exp(B) is the change in the odds ratio: it is the multiplicative adjustment to the odds of the outcome for a **unit** change in the independent variable. Here the unit of measurement for Premium (1 INR) is very small compared to the actual premiums (thousands of INR), so a unit change does not lead to a meaningful change in the odds, and exp(B) is very close to one.
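# For illustration, with a hypothetical per-rupee coefficient (not taken from the tables), the same effect looks negligible per 1 INR but material per 1000 INR:

```python
import numpy as np

beta_per_inr = 3e-4                 # hypothetical coefficient per 1 INR of premium
print(np.exp(beta_per_inr))         # odds ratio per 1 INR, very close to one
print(np.exp(beta_per_inr * 1000))  # odds ratio per 1000 INR, clearly above one
```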
# # Q 4.4
#
# Assumptions: Actual Data was not available. Decision would be made based on outcome of Model results
print("The model predicts 751 + 353 = {} customers have a probability less than 0.5 of paying premium".format(
751+353))
print("They will call 1104 customers through Call Center")
# # Q 4.5
#
# Total points we are getting is 1960.
#
# total = tp + fp + fn + tn
#
# **Formula** :
#
# sensitivity = tp/ (tp + fn)
#
# specificity = tn / (tn + fp)
#
# recall = sensitivity
# precision = tp / (tp + fp)
#
# f-score = 2 \* precision * recall / (precision + recall)
# +
tp = 60.0
fp = 20.0
fn = 51*20
tn = 43 * 20
total = tp + fp + fn + tn
print("Number of records ::".format(total))
sensitivity = tp/ (tp + fn)
specificity = tn / (tn + fp)
recall = sensitivity
precision = tp / (tp + fp)
fsc = 2 * precision * recall / (precision + recall)
print("Precision {} :: \nRecall {} :: \nsensitivity {} :: \nspecificity {} :: \nf-score {}".format(precision, recall, sensitivity, specificity, fsc))
# -
# # Q 4.6
#
# Probability of Y==1 can be calculated using the following formula:
#
# \begin{equation*} P(Y=1) = \frac{\exp^z} {1 + \exp^z}
# \end{equation*}
#
# \begin{equation*} \text{where z} = \beta_0 + \beta_1 * Salaried + \beta_2 * HouseWife +\beta_3 * others\end{equation*}
#
# However in this case the variable Housewife is not a significant variable, hence using this equation to calculate the probability for the variable Housewife may not be appropriate. We will proceed to compute the probability using the equation but will consider the coefficient of Housewife as 0 (B is not significantly different from 0 for insignificant variables). Ideally we would rerun the model removing the insignificant variable, but since we do not have the data we will use the same equation and assume the coefficients for the other variables would not change if we had removed Housewife.
# +
#print("Probability of House wife paying the Premium is (beta ==22.061): {}".format(np.exp(-.858 + 22.061)
# / (1 + np.exp(-.858 + 22.061))))
print("Probability of House wife paying the Premium is (beta = 0): {}".format(np.exp(-.858 + 0)
/ (1 + np.exp(-.858 + 0))))
print("Since Beta is insignificant B == 0, hence .298 is the probability for housewife paying renewal")
# -
# # Q 4.7
#
# The Constant / Intercept captures people with the occupations **Professionals, Business and Agriculture**; since the coefficient of the intercept in Model 3 is negative, they have a lower probability of renewal payment.
# # Q 4.8
#
# Probability can be calculated using the following formula:
#
# \begin{equation*} P(Y=1) = \frac{\exp^z} {1 + \exp^z}
# \end{equation*}
#
# \begin{equation*} \text{where z} = constant + \beta_1 * Policy Term\end{equation*}
#
# The regression equation reduces to the simple form above because SSC Education, Agriculturist Profession & Marital Status Single are factored into the constant term of the given equation, and the remaining variables are zero.
#
print("Probability : {}".format(np.exp(3.105 + 60 * -0.026)/ (1 + np.exp(3.105 + 60 * -0.026))))
# # Q 4.9
#
# The coefficients describe the relationship between the independent variables and the dependent variable, where the dependent variable is on the logit scale. Each estimate gives the increase in the predicted log odds for a 1 unit increase in that predictor, holding all other predictors constant.
#
# **Findings**:
#
# - Married People have higher possibility of renewals (log odds ratio increases)
# - As payment term increases it leads to slightly reduced log odds of renewals
# - Professionals, Business men have much higher chance of defaulting on log odds of renewals
# - Being a graduate does increase the chance of payment of renewals (log odds)
# - Annual / Half yearly / Quarterly policy renewal schemes see reduced payment of renewals (log odds)
# - Model Change - Premium : the variable scale should be changed for a better understanding of Premium's contribution to the propensity to renew the policy (e.g. reduce the unit to 1000s)
#
# **Recommendations :**
#
# - For new customers target Married people and graduates
# - For existing customers send more reminders (via call centers / messages etc.) to Businessmen and Professionals for renewal
# - For people paying premiums in yearly / quarterly / half-yearly terms, send reminders before renewal dates
# - For people with long payment terms keep sending them payment reminders as the tenure of their engagement advances
#
# # Q 4.10
#
# The bins are computed by arranging the records in descending order of predicted probability:
# - Decile = .1 contains the .90001 to 1 probability values (both inclusive)
# - Decile = .2 contains the .80001 to .9 probability values, and so on
# - down to Decile = 1
#
#
# Gain is calculated as:
#
# \begin{equation*} gain = \frac {\text{cumulative number of positive obs upto decile i}}
# {\text {Total number of positive observations}} \end{equation*}
#
# Lift is calculated as:
#
# \begin{equation*} lift = \frac {\text{cumulative number of positive obs upto decile i}}
# {\text {Total number of positive observations upto decile i from random model}} \end{equation*}
#
# +
data = pd.DataFrame({'Decile': [.1, .2, .3, .4, .5, .6, .7, .8, .9, 1]})
data['posunits'] = [31, 0, 0, 0, 3, 5, 5, 4, 2, 1]
data['negunits'] = [0, 0, 0, 0, 0, 5, 11, 17, 12, 2]
data['posCountunits'] = data['posunits'] * 20
data['negCountunits'] = data['negunits'] * 20
avgPerDec = np.sum(data['posCountunits']) / 10
data['avgCountunits'] = avgPerDec
data['cumPosCountunits'] = data['posCountunits'].cumsum()
data['cumAvgCountunits'] = data['avgCountunits'].cumsum()
data['lift'] = data['cumPosCountunits'] / data['cumAvgCountunits']
data['gain'] = data['cumPosCountunits'] / data['posCountunits'].sum()
data['avgLift'] = 1
#print(df)
#### Plots
plt.figure(figsize=(15, 5))
plt.subplot(1,2,1)
plt.plot(data.avgLift, 'r-', label='Average Model Performance')
plt.plot(data.lift, 'g-', label='Predict Model Performance')
plt.title('Cumulative Lift Chart')
plt.xlabel('Deciles')
plt.ylabel('Normalised Model')
plt.legend()
plt.xlim(0, 10)
plt.subplot(1,2,2)
plt.plot(data.Decile, 'r-', label='Average Model Performance')
plt.plot(data.gain, 'g-', label='Predict Model Performance')
plt.title('Cumulative Gain Chart')
plt.xlabel('Deciles')
plt.ylabel('Gain')
plt.legend()
plt.xlim(0, 10)
data
# -
# **Observations**
#
# - From gain we see that the model captures 76% of positives by the fifth decile
# - From lift we see that in the 1st decile the model captures 6 times more positives than a random model, 3 times for the 2nd decile, 2 times for the 3rd decile, 1.5 times for the 4th decile and 1.27 times for the 5th decile
# # Q 5
# +
import statsmodels.api as sm
import statsmodels.formula.api as smf
from IPython.display import display
pd.options.display.max_columns = None
# %load_ext rpy2.ipython
# -
oakland = pd.read_excel("./Oakland A Data 1.xlsx", sheet_name='Attendance Data')
#oakland.info()
print("There are no Missing Values in Data")
oakland.describe()
# +
import seaborn as sns
fig = plt.figure(figsize=(15,5))
ax = plt.subplot(1, 2, 1)
ax.set_title("Distribution plot for TIX")
sns.distplot(oakland.TIX)
ax = plt.subplot(1, 2, 2)
ax.set_title("Distribution plot for LOG(TIX)")
sns.distplot(np.log(oakland.TIX))
plt.show()
print("TIX has a right-skewed distribution. The log-transformed TIX is closer to an approximate normal distribution.")
# -
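# The same check can be made numerically. A minimal sketch, assuming a generic right-skewed sample rather than the actual TIX column: the skewness statistic should drop towards 0 after the log transform.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
sample = rng.lognormal(mean=9, sigma=0.8, size=500)  # stand-in for a right-skewed, TIX-like variable

print("skew before log:", stats.skew(sample))          # strongly positive
print("skew after log: ", stats.skew(np.log(sample)))  # near 0: approximately symmetric
```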
# - Mark Nobel played in 21.33% of the games for Oakland A during the period when the data was captured
# - We will perform a two-sample t-test between the mean of TIX when Nobel played vs. when Nobel did not play, to check whether there was any significant difference in means between the two categories
sns.boxplot(x='NOBEL', y='TIX', data=oakland)
plt.show()
# +
x1, S1, n1 = oakland.loc[oakland.NOBEL==1, "TIX"].mean(), oakland.loc[oakland.NOBEL==1, "TIX"].std(), oakland.loc[oakland.NOBEL==1, "TIX"].shape[0]
x2, S2, n2 = oakland.loc[oakland.NOBEL==0, "TIX"].mean(), oakland.loc[oakland.NOBEL==0, "TIX"].std(), oakland.loc[oakland.NOBEL==0, "TIX"].shape[0]
#x1, S1, n1 = np.mean(np.log(oakland.loc[oakland.NOBEL==1, "TIX"])), np.std(np.log(oakland.loc[oakland.NOBEL==1, "TIX"])), oakland.loc[oakland.NOBEL==1, "TIX"].shape[0]
#x2, S2, n2 = np.mean(np.log(oakland.loc[oakland.NOBEL==0, "TIX"])), np.std(np.log(oakland.loc[oakland.NOBEL==0, "TIX"])), oakland.loc[oakland.NOBEL==0, "TIX"].shape[0]
alpha = 0.05
adjustedAlpha = alpha
print("Alpha: {}".format(adjustedAlpha))
print("Mean TIX (x1) = {}, STD TIX = {} and number of games = {} with Nobel".format(x1, S1, n1))
print("Mean TIX (x1) = {}, STD TIX = {} and number of games = {} without Nobel".format(x2, S2, n2))
ho = "x1 - x2 <= 0"
ha = "x1 - x2 >0"
def pairwise_t_test(S1, S2, n1, n2, x1, x2, adjustedAlpha):
    print("Null Hypothesis: {}".format(ho))
print("Alternate Hypothesis: {}".format(ha))
    print("This is a 2-sample t-test with unknown population SDs, assumed unequal (Welch's test)")
Su = ((S1 ** 2) / n1 + (S2 ** 2) / n2) ** 0.5
print("SE {}".format(Su))
    df = int(Su ** 4 / ((((S1 ** 2) / n1) ** 2) / (n1 - 1) + (((S2 ** 2) / n2) ** 2) / (n2 - 1)))  # Welch-Satterthwaite df, floored (np.math is removed in recent NumPy)
print("DF {}".format(df))
tstat = ((x1 - x2) - 0) /(Su)
print("T-stat {}".format(tstat))
    print("This is a one-sided (right-tailed) t-test, matching ha: x1 - x2 > 0")
#print("alpha/ Significance: {}".format(adjustedAlpha / 2))
print("Significant t-value at alpha - {} is : {}".format(adjustedAlpha , -1*stats.t.ppf(adjustedAlpha,
df = df)))
    pval = 1 - stats.t.cdf(tstat, df = df)
    if pval > adjustedAlpha:
        print("p-value: {} is greater than alpha ({})".format(pval, adjustedAlpha))
        print("Hence we retain the null hypothesis (ho)")
    else:
        print("p-value: {} is less than alpha ({})".format(pval, adjustedAlpha))
        print("Hence we reject the null hypothesis (ho)")
pairwise_t_test(S1, S2, n1, n2, x1, x2, adjustedAlpha)
# -
# - In general we see that there is no statistical evidence that the single factor, the presence of Nobel, has any effect on increasing ticket sales
# - We will check whether this factor becomes important in the presence of other factors before drawing any final conclusions
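# The manual Welch computation in `pairwise_t_test` can be cross-checked against `scipy.stats.ttest_ind` with `equal_var=False`. A sketch on synthetic stand-in samples (the real TIX splits would be substituted in practice); halving the two-sided p-value gives the one-sided result when the t-statistic is positive.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
with_nobel = rng.normal(12000, 4000, 20)     # hypothetical TIX when Nobel played
without_nobel = rng.normal(11500, 5000, 75)  # hypothetical TIX when he did not

tstat, p_two_sided = stats.ttest_ind(with_nobel, without_nobel, equal_var=False)  # Welch's t-test

# Same statistic computed manually, as in pairwise_t_test above
se = (with_nobel.var(ddof=1) / len(with_nobel) + without_nobel.var(ddof=1) / len(without_nobel)) ** 0.5
manual_t = (with_nobel.mean() - without_nobel.mean()) / se

p_one_sided = p_two_sided / 2 if tstat > 0 else 1 - p_two_sided / 2
print(tstat, manual_t, p_one_sided)  # tstat and manual_t agree
```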
# +
corr = oakland[["TIX","OPP","POS","GB","DOW","TEMP","PREC","TOG","TV","PROMO","NOBEL","YANKS","WKEND","OD","DH"]].corr(method='pearson')
# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=bool)  # np.bool is removed in NumPy >= 1.24
mask[np.triu_indices_from(mask)] = True
# Set up the matplotlib figure
f, ax = plt.subplots(figsize=(12, 12))
# Generate a custom diverging colormap
cmap = sns.diverging_palette(255, 150, as_cmap=True)
# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.3, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
# -
# - From the correlation plot above we see that "Game with YANKS" and PROMO, along with whether the match is a "DOUBLE HEADER", have high correlation with TIX sales
#
# **We will now create a series of Regression Models to check the validity of the claim that MARK NOBEL's presence increase the TIX and revenue generation for OAKLAND A**
#
# - From the plots of TIX we noticed that TIX is not normally distributed. The Regression Model developed with TIX may end up with Error terms which are not Normally distributed
# - To address this issue we will build the models using the Log transformed values of TIX, as from the plot it is clear that the log transformed variable is closer to a Normal Distribution.
# +
y = np.log(oakland.TIX.values)
cols2use = "NOBEL"
x = oakland[cols2use]
#lg_model_1 = sm.OLS(y, sm.add_constant(x)).fit()
#lg_model_1.summary()
#lg_model_1.params
#lg_model_1.summary2()
# + magic_args="-i x -i y -w 800 -h 400" language="R"
# library(caret)
# x = data.frame(x)
# x$x = as.factor(x$x)
# y = data.frame(y)
# y$y = as.numeric(y$y)
#
# #print(str(y$y))
# #print(str(x))
# objControl <- trainControl(method = "none", returnResamp = 'final',
# summaryFunction = defaultSummary,
# #summaryFunction = twoClassSummary, defaultSummary
# classProbs = FALSE,
# savePredictions = TRUE)
# set.seed(766)
# reg_caret_model <- train(x,
# y$y,
# method = 'lm',
# trControl = objControl,
# metric = "Rsquared",
# tuneGrid = NULL,
# verbose = FALSE)
#
# #print(plot(varImp(reg_caret_model, scale = TRUE)))
#
# print(summary(reg_caret_model))
# par(mfrow = c(2, 2))
# print(plot(reg_caret_model$finalModel))
# -
# - As noticed with the Hypothesis test, from the model above we can see that on its own the variable checking for the presence of Nobel is not Significant in predicting for TIX
#
# **We will build a Model with NOBEL, YANKS, DH and PROMO**
y = np.log(oakland.TIX.values)
cols2use = ["NOBEL", "YANKS", "DH", "PROMO" ]
x = oakland[cols2use]
# + magic_args="-i x -i y -w 800 -h 400" language="R"
# library(caret)
# x = data.frame(x)
# x$NOBEL = factor(x$NOBEL)
# x$YANKS = factor(x$YANKS)
# x$DH = factor(x$DH)
# x$PROMO = factor(x$PROMO)
# y = data.frame(y)
# y$y = as.numeric(y$y)
#
# #print(str(y$y))
# #print(str(x))
# objControl <- trainControl(method = "none", returnResamp = 'final',
# summaryFunction = defaultSummary,
# #summaryFunction = twoClassSummary, defaultSummary
# classProbs = FALSE,
# savePredictions = TRUE)
# set.seed(766)
# reg_caret_model <- train(x,
# y$y,
# method = 'lm',
# trControl = objControl,
# metric = "Rsquared",
# tuneGrid = NULL,
# verbose = FALSE)
#
# #print(plot(varImp(reg_caret_model, scale = TRUE)))
#
# print(summary(reg_caret_model))
# par(mfrow = c(2, 2))
# print(plot(reg_caret_model$finalModel))
# -
# - As noticed with the Hypothesis test, from the model above we can see that the variable checking for the presence of Nobel is not Significant in predicting for TIX
#
# **We will build a Stepwise Model with all variables and select the best model. If the variable NOBEL is significant it will be added by the STEPWISE Selection Algorithm**
y = np.log(oakland.TIX.values)
cols2use = ["OPP","POS","GB","DOW","TEMP","PREC","TOG","TV","PROMO","NOBEL","YANKS","WKEND","OD","DH"]
x = oakland[cols2use]
# + magic_args="-i x -i y -w 800 -h 400" language="R"
# library(caret)
# x = data.frame(x)
# x$NOBEL = factor(x$NOBEL)
# x$YANKS = factor(x$YANKS)
# x$DH = factor(x$DH)
# x$PROMO = factor(x$PROMO)
# x$OPP = factor(x$OPP)
# x$POS = factor(x$POS)
# x$GB = factor(x$GB)
# x$DOW = factor(x$DOW)
# x$PREC = factor(x$PREC)
# x$TOG = factor(x$TOG)
# x$TV = factor(x$TV)
# x$WKEND = factor(x$WKEND)
# x$OD = factor(x$OD)
#
# y = data.frame(y)
# y$y = as.numeric(y$y)
#
# #print(str(y$y))
# #print(str(x))
# objControl <- trainControl(method = "none", returnResamp = 'final',
# summaryFunction = defaultSummary,
# #summaryFunction = twoClassSummary, defaultSummary
# classProbs = FALSE,
# savePredictions = TRUE)
# set.seed(766)
# reg_caret_model <- train(x,
# y$y,
# method = 'lmStepAIC',
# trControl = objControl,
# metric = "Rsquared",
# tuneGrid = NULL,
# verbose = FALSE)
#
# print(plot(varImp(reg_caret_model, scale = TRUE)))
#
# print(summary(reg_caret_model))
# par(mfrow = c(2, 2))
# print(plot(reg_caret_model$finalModel))
# -
# **From the models created above, including the stepwise regression model, and the analysis done earlier, we can see that the presence of Nobel is not significant in increasing ticket sales or the revenue collected from ticket sales.
# He does not make any contribution to increased revenue collection from ticket sales.**
#
# # Q6
#
# ## Q6-1
#
# - NPS is a KPI which is used by many organizations to understand and measure customer satisfaction
# - Organizations also believe that it is important for every organization to know what their customers tell their friends about the organization. NPS is considered by many organizations as a measurement of whether a customer will recommend the company or product/service to a friend or colleague
#
#
# **Business Problem**
#
# - Management at Manipal Hospitals believed that loyalty in healthcare depends on technical and emotional aspects
# - Happy customers may lead to new business; unhappy customers may lead to a lack of new business and erosion of existing business
# - Through NPS forms they wanted to collect customer feedback and sentiments
# - By analysing the NPS data they also wanted to understand the reasons that led customers to give such NPS scores
# - They wanted to analyse those reasons to help resolve the issues and then keep customers informed about the corrective actions; they believed they could improve customer satisfaction, and hence the NPS, through such action
#
# **How Analytics can help with the Problem**
#
# - The historical paper-based feedback, once converted into digital data, together with the digital data captured after March 2014, can be analysed to derive insights
# - By analysing past data, analytics can help unearth patterns that may be related to high or low customer satisfaction and NPS
# - These patterns can be formulated into prescriptive actions which can help improve the process in the future, thereby improving overall customer satisfaction and NPS
# - If analytics can link customer demographics / behaviour to NPS, the hospital can devise different strategies for different customer profiles, which can also lead to a better NPS and more satisfied customers
#
#
#
# ## Q6-2
#
# Sensitivity, Specificity for a multinomial / 3-class problem can be calculated in the following manner. We will elaborate the method using the following tables and derive the formula for the metrics.
#
# total records = tp + fp + fn + tn
#
#
# For 2-class the following are the definition for sensitivity and specificity:
#
# sensitivity = tp/ (tp + fn)
#
# specificity = tn / (tn + fp)
#
#
# where tp = True positive
# fp = False Postive
# tn = True Negative
# fn = False Negative
#
#
# The definition for Specificity / sensitivity does not change from the above in 3-class scenario. The way we compute the tp, tn, fp and fn changes. We will demonstrate the same below.
#
# Lets say we have 3 classes A, B, C.
#
# Step 1: We will construct the Confusion Matrix for "A". The table below shows where FP1, FP2, etc. sit.
# Here :
#
# fp = FP1 + FP2
#
# fn = FN1 + FN2
#
# tn = Sum(X)
#
# The formula for the metrics changes to:
#
# sensitivity = tp/ (tp + fn1 + fn2)
#
# specificity = tn / (tn + fp1 + fp2)
# +
array1 = pd.MultiIndex.from_arrays(np.array([['Predicted', '', ''],['A', 'B', 'C']]))
array2 = pd.MultiIndex.from_arrays(np.array([['Actual', '', ''],['A', 'B', 'C']]))
array1
data = pd.DataFrame(np.array([['TP', 'FN1', 'FN2'], ['FP1', 'X', 'X'], ['FP2', 'X', 'X']]),
                    columns=array1, index=array2)
data
# -
# Step 2: We will construct the Confusion Matrix for "B". The table below shows where FP1, FP2, etc. sit.
# Here:
#
# fp = FP1 + FP2
#
# fn = FN1 + FN2
#
# tn = sum(X)
#
# The formula for the metrics changes to:
#
# sensitivity = tp/ (tp + fn1 + fn2)
#
# specificity = tn / (tn + fp1 + fp2)
# +
array1 = pd.MultiIndex.from_arrays(np.array([['Predicted', '', ''],['A', 'B', 'C']]))
array2 = pd.MultiIndex.from_arrays(np.array([['Actual', '', ''],['A', 'B', 'C']]))
array1
data = pd.DataFrame(np.array([['X', 'FP1', 'X'], ['FN1', 'TP', 'FN2'], ['X', 'FP2', 'X']]),
                    columns=array1, index=array2)
data
# -
# Step 3: We will construct the Confusion Matrix for "C". The table below shows where FP1, FP2, etc. sit.
# Here :
#
# fp = FP1 + FP2
#
# fn = FN1 + FN2
#
# tn = sum(X)
#
# The formula for the metrics changes to:
#
# sensitivity = tp/ (tp + fn1 + fn2)
#
# specificity = tn / (tn + fp1 + fp2)
# +
array1 = pd.MultiIndex.from_arrays(np.array([['Predicted', '', ''],['A', 'B', 'C']]))
array2 = pd.MultiIndex.from_arrays(np.array([['Actual', '', ''],['A', 'B', 'C']]))
array1
data = pd.DataFrame(np.array([['X', 'X', 'FP1'], ['X', 'X', 'FP2'], ['FN1', 'FN2', 'TP']]),
                    columns=array1, index=array2)
data
# -
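# The one-vs-rest scheme described above can be written as a short function. A sketch with a made-up 3-class confusion matrix (rows = actual, columns = predicted, matching the tables above):

```python
import numpy as np

def per_class_sens_spec(cm):
    # cm[i, j] = count of records with actual class i predicted as class j
    cm = np.asarray(cm)
    total = cm.sum()
    out = {}
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp   # FN1 + FN2: actual k, predicted otherwise
        fp = cm[:, k].sum() - tp   # FP1 + FP2: predicted k, actual otherwise
        tn = total - tp - fn - fp  # sum(X): everything outside row k and column k
        out[k] = (tp / (tp + fn), tn / (tn + fp))
    return out

cm = [[50, 3, 2],
      [5, 40, 5],
      [2, 8, 35]]
print(per_class_sens_spec(cm))  # class 0: sensitivity 50/55, specificity 88/95
```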
# ## Q6-3
#
# #### Binary Classification Model
# ##### Train Data Source: Training Data or Binary Class - tab
# ##### Test Data Source: Test Data for Binary Class - tab
train_df = pd.read_excel("./IMB NPS 651.xlsx", sheet_name='Training Data or Binary Class')
test_df = pd.read_excel("./IMB NPS 651.xlsx", sheet_name='Test Data for Binary Class')
#train_df.info()
print("There are no Nulls in data, hence missing value treatment is not required.")
columns2Drop=["CE_NPS", "AdmissionDate", "DischargeDate", "HospitalNo2", "SN"]
train_df.drop(columns2Drop, inplace = True, axis = 'columns')
test_df.drop(columns2Drop, inplace = True, axis = 'columns')
pd.options.display.max_columns = None
#train_df.describe()
train_df['NPS_bin'] = 0
train_df.loc[train_df.NPS_Status != "Promotor", 'NPS_bin'] = 1
#train_df.describe()
test_df['NPS_bin'] = 0
test_df.loc[test_df.NPS_Status != "Promotor", 'NPS_bin'] = 1
train_df.drop(['NPS_Status'], axis = 'columns', inplace = True)
test_df.drop(['NPS_Status'], axis = 'columns', inplace = True)
catCols = train_df.select_dtypes(exclude=["number","bool_"]).columns
#
#for c in catCols:
# print(train_df[["NPS_bin"] + [c]].groupby([c]).agg([np.mean, np.std, len]))
# +
#catCols = train_df.select_dtypes(exclude=["number","bool_"]).columns
#for c in catCols:
# print(test_df[["NPS_bin"] + [c]].groupby([c]).agg([np.mean, np.std, len]))
# -
# - There are approximately 5000 records
# - To reduce initial complexity and to improve the model's ability to generalise, we will not encode any category with fewer than 100 rows as a separate dummy variable, but will merge all such categories into one bucket (constant / others / intercept)
# - Please note 100 is not a magic number, and it is not derived in any statistical / mathematical way; more complex testing could be performed to find an optimal value, but we will keep things simple for now
# - Also, the counts are based on the training set, not the test set
# - For Dep column: "GEN" is the base category
# - Estimated cost is at a whole different range, hence we will take a Log transform of estimated cost
# - Promoter is encoded as 0 and Passive & Detractors are encoded as 1
train_df["marital_status"]= 0
train_df.loc[train_df.MaritalStatus == "Married", 'marital_status'] = 1
test_df["marital_status"]= 0
test_df.loc[test_df.MaritalStatus == "Married", 'marital_status'] = 1
train_df.drop('MaritalStatus', axis = 'columns', inplace=True)
test_df.drop('MaritalStatus', axis = 'columns', inplace=True)
train_df["gender"]= 0
train_df.loc[train_df.Sex == "M", 'gender'] = 1
test_df["gender"]= 0
test_df.loc[test_df.Sex == "M", 'gender'] = 1
train_df.drop('Sex', axis = 'columns', inplace=True)
test_df.drop('Sex', axis = 'columns', inplace=True)
# +
trainrows = train_df.shape[0]
train_test = pd.concat([train_df, test_df], axis='rows')
cols2use = ['BedCategory', 'Department', 'InsPayorcategory', 'State', 'Country', 'STATEZONE']
for c in cols2use:
xx = pd.get_dummies(train_test[c])
interim = train_df[["NPS_bin"] + [c]].groupby([c], as_index = False).agg([len]).reset_index()
interim.columns = [''.join(x) for x in interim.columns]
interim.columns = ['x', 'y']
cols = interim.loc[interim.y >= 100, 'x']
xx = xx[cols]
train_test.drop(c, axis='columns', inplace = True)
train_test = pd.concat([train_test, xx], axis = 'columns')
# +
train_test.drop('GEN', axis = 'columns', inplace = True)
train_test['Estimatedcost'] = np.log1p(train_test['Estimatedcost'] )
# +
train_df = train_test.iloc[:trainrows, :]
test_df = train_test.iloc[trainrows:, :]
import gc
del(xx, interim, cols, cols2use, columns2Drop, train_test)
gc.collect()
# + magic_args="-i train_df" language="R"
# library(caret)
#
# for (f in colnames(train_df))
# {
# if (class(train_df[[f]])=="character")
# {
# train_df[[f]] <- as.integer(train_df[[f]])
# }
# }
#
# y = as.factor(train_df$NPS_bin)
# train_df$NPS_bin = NULL
# levels(y) <- make.names(levels(factor(y)))
# print(levels(y))
#
# objControl <- trainControl(method = "none", returnResamp = 'final',
# summaryFunction = twoClassSummary,
# #summaryFunction = twoClassSummary, defaultSummary
# classProbs = TRUE,
# savePredictions = TRUE)
#
# lgCaretModel <- train(train_df,
# y,
# method = 'glmStepAIC',
# trControl = objControl,
# metric = "ROC",
# verbose = TRUE)
#
#
# plot(varImp(lgCaretModel, scale = TRUE))
#
# print(summary(lgCaretModel))
# par(mfrow = c(2, 2))
# print(plot(lgCaretModel$finalModel))
#
# caretPredictedClass = predict(object = lgCaretModel, train_df, type = 'raw')
# confusionMatrix(caretPredictedClass,y)
#
# -
# **We run a stepwise model and select important variables at a significance of 0.1**
#
# **We rebuild the model with just the significant factors**
#
# - Details of the model are below
cols4logit = ['CE_CSAT', 'CE_VALUEFORMONEY', 'EM_NURSING', 'AD_TARRIFFPACKAGESEXPLAINATION',
'AD_STAFFATTITUDE', 'INR_ROOMCLEANLINESS', 'INR_ROOMAMBIENCE', 'FNB_FOODQUALITY', 'FNB_FOODDELIVERYTIME',
'FNB_STAFFATTITUDE', 'AE_PATIENTSTATUSINFO', 'AE_ATTENDEEFOOD', 'DOC_TREATMENTEXPLAINATION',
'DOC_VISITS', 'NS_NURSESATTITUDE', 'OVS_OVERALLSTAFFPROMPTNESS', 'OVS_SECURITYATTITUDE',
'DP_DISCHARGEQUERIES', 'PEDIATRIC','GENERAL', 'ULTRA SPL', 'RENAL', 'CORPORATE',
'Karnataka', 'EXEMPTION']
#,'EXEMPTION','EM_IMMEDIATEATTENTION', 'LengthofStay', 'ORTHO', "INDIA", "EAST", 'Estimatedcost', ]
# +
import statsmodels.api as sm
lg_model_1 = sm.GLM(train_df['NPS_bin'], sm.add_constant(train_df[cols4logit]),family=sm.families.Binomial()).fit()
lg_model_1.summary()
# -
train_df_predict_1 = lg_model_1.predict(sm.add_constant(train_df[cols4logit]))
test_df_predict_1 = lg_model_1.predict(sm.add_constant(test_df[cols4logit]))
# +
from sklearn import metrics
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
#confusion_matrix(test_df.NPS_bin, test_df_predict_1.values >= 0.5)
def draw_cm( actual, predicted ):
plt.figure(figsize=(9,9))
cm = metrics.confusion_matrix( actual, predicted )
sns.heatmap(cm, annot=True, fmt='.0f', cmap = 'Blues_r')
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.title('Classification Matrix Plot', size = 15);
plt.show()
draw_cm(test_df.NPS_bin, test_df_predict_1 >=0.5)
# +
def draw_roc( actual, probs ):
fpr, tpr, thresholds = metrics.roc_curve( actual, probs, drop_intermediate = False )
auc_score = metrics.roc_auc_score( actual, probs )
plt.figure(figsize=(10, 10))
plt.plot( fpr, tpr, label='ROC curve (area = %0.2f)' % auc_score )
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate or [1 - True Negative Rate]')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
plt.legend(loc="lower right")
plt.show()
return fpr, tpr, thresholds
fpr, tpr, thresholds = draw_roc(test_df.NPS_bin, test_df_predict_1)  # pass probabilities, not thresholded labels, so the ROC curve is meaningful
# -
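# Rather than fixing the cutoff at 0.5, a data-driven threshold can be read off the ROC arrays, e.g. by maximising Youden's J statistic (tpr - fpr). A sketch on synthetic scores (not the hospital data):

```python
import numpy as np
from sklearn import metrics

rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(100), np.zeros(100)])
scores = np.concatenate([rng.beta(4, 2, 100),   # positives tend to score higher
                         rng.beta(2, 4, 100)])  # negatives tend to score lower

fpr, tpr, thr = metrics.roc_curve(y_true, scores)
best = int(np.argmax(tpr - fpr))  # Youden's J = sensitivity + specificity - 1
print("chosen threshold:", thr[best])
```

# Such a threshold trades sensitivity against specificity explicitly, instead of assuming the default 0.5 cutoff suits the business problem.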
# - The regression has been set up to identify detractors and to understand the reasons that may lead to a poor score
# - This is not a model for predicting, on day 1 when a customer comes in, whether they will turn out to be a detractor. Such a model would be based on customer demographics and other customer attributes vs. the NPS status. This model includes the scores customers gave to individual departments, which will not be available for a new customer, so using it for that kind of analysis would not be prudent
#
# **Observations**
#
# - Areas to improve, as these emerge as key features associated with more Detractor / Passive responses:
# - Admission Staff Attitude
# - Cleanliness and Hygiene of the Room and Bath Room
# - Karnataka residents are more dissatisfied
# - Helpfulness or lack of it of security staff
# - Nursing Attitude
# - Food and Beverage Staff Attitude
#
# - Some areas that are working well for them:
# - Prompt response to concerns or complaints made
# - Regular process updates and visits by Doctors
# - Emergency Nursing
# - Explanation of tariff & packages available
# - Guidance and Information on Patient Health Status
#
# **Recommendations**
#
# - Focus on Staff and Nurse behavioural training
# - Improve room and bathroom hygiene
# - Given a large number of patients are from Karnataka, and given these people have a higher chance of giving poor NPS_Scores, it is advisable to understand the need of patients from this geographic region and if possible cater to those needs. A follow up study can be conducted to understand the need of people from these regions to further improve their scores.
# ## Q6-4
#
# #### Multinomial Logistic Classification Model
# ##### Train data source : Training Data for Multi-Class M - tab
# ##### Test data source : Test Data for Multi-Class Model
#
# +
train_df = pd.read_excel("./IMB NPS 651.xlsx", sheet_name='Training Data for Multi-Class M')
test_df = pd.read_excel("./IMB NPS 651.xlsx", sheet_name='Test Data for Multi-Class Model')
#train_df.info()
print("There are no Nulls in data, hence missing value treatment is not required.")
columns2Drop=["CE_NPS", "AdmissionDate", "DischargeDate", "HospitalNo2", "SN"]
train_df.drop(columns2Drop, inplace = True, axis = 'columns')
test_df.drop(columns2Drop, inplace = True, axis = 'columns')
train_df["marital_status"]= 0
train_df.loc[train_df.MaritalStatus == "Married", 'marital_status'] = 1
test_df["marital_status"]= 0
test_df.loc[test_df.MaritalStatus == "Married", 'marital_status'] = 1
train_df.drop('MaritalStatus', axis = 'columns', inplace=True)
test_df.drop('MaritalStatus', axis = 'columns', inplace=True)
train_df["gender"]= 0
train_df.loc[train_df.Sex == "M", 'gender'] = 1
test_df["gender"]= 0
test_df.loc[test_df.Sex == "M", 'gender'] = 1
train_df.drop('Sex', axis = 'columns', inplace=True)
test_df.drop('Sex', axis = 'columns', inplace=True)
trainrows = train_df.shape[0]
train_test = pd.concat([train_df, test_df], axis='rows')
cols2use = ['BedCategory', 'Department', 'InsPayorcategory', 'State', 'Country', 'STATEZONE']
train_test.loc[train_test.BedCategory == "SPECIAL", "BedCategory"] = "BedCategory_SPECIAL"
train_df.loc[train_df.BedCategory == "SPECIAL", "BedCategory"] = "BedCategory_SPECIAL"
test_df.loc[test_df.BedCategory == "SPECIAL", "BedCategory"] = "BedCategory_SPECIAL"
for c in cols2use:
xx = pd.get_dummies(train_test[c])
interim = train_df[["NPS_Status"] + [c]].groupby([c], as_index = False).agg([len]).reset_index()
interim.columns = [''.join(x) for x in interim.columns]
interim.columns = ['x', 'y']
cols = interim.loc[interim.y >= 150, 'x']
xx = xx[cols]
train_test.drop(c, axis='columns', inplace = True)
train_test = pd.concat([train_test, xx], axis = 'columns')
train_test.drop('GEN', axis = 'columns', inplace = True)
train_test.loc[train_test.NPS_Status == "Passive", "NPS_Status"] = "BasePassive"
train_test['Estimatedcost'] = np.log1p(train_test['Estimatedcost'] )
train_df = train_test.iloc[:trainrows, :]
test_df = train_test.iloc[trainrows:, :]
import gc
del(xx, interim, cols, cols2use, columns2Drop, train_test)
gc.collect()
# +
cols4logit = list(set(train_df.columns)-set(['NPS_Status']))
import statsmodels.api as sm
import statsmodels.formula.api as smf
stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df)
lg_model_1 = sm.MNLogit(train_df['NPS_Status'], sm.add_constant(train_df[cols4logit])).fit()
#lg_model_1.summary()
# +
# Get significant variable
def get_significant_vars (modelobject):
var_p_vals_df = pd.DataFrame(modelobject.pvalues)
var_p_vals_df['vars'] = var_p_vals_df.index
var_p_vals_df.columns = ['pvals0', 'pvals1','vars']
return list(var_p_vals_df[(var_p_vals_df.pvals0 <= 0.05)|(var_p_vals_df.pvals1 <= 0.05) ]['vars'])
significant_vars_1 = get_significant_vars(lg_model_1)
#significant_vars_1
# +
# build proper model
cols4logit = significant_vars_1[1:]
import statsmodels.api as sm
import statsmodels.formula.api as smf
stats.chisqprob = lambda chisq, df: stats.chi2.sf(chisq, df)
lg_model_1 = sm.MNLogit(train_df['NPS_Status'], sm.add_constant(train_df[cols4logit])).fit()
lg_model_1.summary()
# +
# Predictions and Confusion Matrix
train_df_predict_1 = lg_model_1.predict(sm.add_constant(train_df[cols4logit]))
test_df_predict_1 = lg_model_1.predict(sm.add_constant(test_df[cols4logit]))
test_df_predict_1
values = np.argmax(test_df_predict_1.values, axis=1)
finPred = pd.DataFrame({"NPS_Status": test_df.NPS_Status})
finPred['predVal'] = values
finPred['pred'] = 'X'
finPred.loc[finPred.predVal==0, 'pred'] = 'BasePassive'
finPred.loc[finPred.predVal==1, 'pred'] = 'Detractor'
finPred.loc[finPred.predVal==2, 'pred'] = 'Promotor'
pd.crosstab(finPred.NPS_Status, finPred.pred)
#print(test_df_predict_1.head())
#np.sum(test_df.NPS_Status=="Promotor")
# -
# ### Compare with Binary Model
#
# - In the binary model the focus was on identifying the non-Promoters and finding the reasons why they gave a non-positive rating. In the multinomial model, the base class is the group that gave passive scores; we look for the reasons that led to negative scores and, by studying the positive scores, identify the areas that are working well, so that better practices can be applied in the areas that are not
#
# - What is working well / contributing to good NPS score
# - Attendee Food
# - Food Delivery time
# - Age (Increase in age of Patients leads to improved NPS score)
# - Discharge Queries
# - Overall Staff Promptness
# - AE_PATIENTSTATUSINFO
# - AD_TARRIFFPACKAGESEXPLAINATION
# - CE_VALUEFORMONEY
#
# - What is contributing to Detractors
# - OVS_SECURITYATTITUDE
# - Admission Time
#
# - What is needed to push Passive customers to Promoters
# - Improve Room cleanliness
# - Better Explanation of Doctor Treatment
# - Improvement of Security Attitude
# - Improvement of Staff Attitude
# - Value for Money - Improve people's perception of the treatment value / may be better explained with explanation of Doctor Treatment
#
#
# **The results are in line with the findings from the binary classification model. However this model is more powerful, as it provides complete segregation of the Passive and Detractor groups, making it easier to identify the reasons for strong dissatisfaction among some patients.**
#
# **Passive responders are at times more difficult to understand and react to, as they are not completely open with their observations, whereas Promoters and Detractors (though the latter are not desirable) voice clear-cut opinions about what is working well and what is not. This clear feedback helps in taking corrective action and in continuing with what is working well**
# # Q6-5
# ### Conclusions
#
# - Better explanation of doctor treatment is needed; this may also improve people's perception of the treatment's value for money
# - Improvement of security attitude via training
# - Improvement of staff attitude via training
# - Focus on Staff and Nurse behavioural training
# - Improve room and bathroom hygiene
# - Given a large number of patients are from Karnataka, and given these people have a higher chance of giving poor NPS_Scores, it is advisable to understand the need of patients from this geographic region and if possible cater to those needs. A follow up study can be conducted to understand the need of people from these regions to further improve their scores.
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="I1JiGtmRbLVp"
# ##### Copyright 2021 The TF-Agents Authors.
# + cellView="form" id="nQnmcm0oI1Q-"
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] id="xCnjvyteX4in"
# # Introduction to Multi-Armed Bandits
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://www.tensorflow.org/agents/tutorials/intro_bandit">
# <img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
# View on TensorFlow.org</a>
# </td>
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/intro_bandit.ipynb">
# <img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
# Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/agents/blob/master/docs/tutorials/intro_bandit.ipynb">
# <img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
# View source on GitHub</a>
# </td>
# <td>
# <a href="https://storage.googleapis.com/tensorflow_docs/agents/docs/tutorials/intro_bandit.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
# </td>
# </table>
# + [markdown] id="b5tItHFpLyXG"
# ## Introduction
#
# Multi-Armed Bandit (MAB) is a Machine Learning framework in which an agent has to select actions (arms) in order to maximize its cumulative reward in the long term. In each round, the agent receives some information about the current state (context), then it chooses an action based on this information and the experience gathered in previous rounds. At the end of each round, the agent receives the reward associated with the chosen action.
#
# Perhaps the purest example is the problem that lent its name to MAB: imagine that we are faced with `k` slot machines (one-armed bandits), and we need to figure out which one has the best payout, while not losing too much money.
#
# 
#
# Trying each machine once and then choosing the one that paid the most would not be a good strategy: The agent could fall into choosing a machine that had a lucky outcome in the beginning but is suboptimal in general. Instead, the agent should repeatedly come back to choosing machines that do not look so good, in order to collect more information about them. This is the main challenge in Multi-Armed Bandits: the agent has to find the right mixture between exploiting prior knowledge and exploring so as to avoid overlooking the optimal actions.
#
# More practical instances of MAB involve a piece of side information every time the learner makes a decision. We call this side information "context" or "observation".
#
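# The trade-off can be made concrete with a minimal epsilon-greedy agent on Bernoulli arms. This is a self-contained sketch (plain NumPy, not the TF-Agents API): with probability epsilon the agent explores a random arm, otherwise it exploits the arm with the best reward estimate so far.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.3, 0.5, 0.7])  # payout probabilities, unknown to the agent
k, epsilon, steps = len(true_means), 0.1, 5000

counts = np.zeros(k)
estimates = np.zeros(k)
for _ in range(steps):
    if rng.random() < epsilon:
        arm = int(rng.integers(k))       # explore: pick a random arm
    else:
        arm = int(np.argmax(estimates))  # exploit: pick the best-looking arm
    reward = float(rng.random() < true_means[arm])             # Bernoulli reward
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # incremental mean update

print("estimates:", estimates)  # should approach true_means for well-sampled arms
print("pulls:    ", counts)
```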
# + [markdown] id="Y2gzFh2YwJAj"
# ## Multi-Armed Bandits and Reinforcement Learning
#
# Why is there a MAB Suite in the TF-Agents library? What is the connection between RL and MAB? Multi-Armed Bandits can be thought of as a special case of Reinforcement Learning. To quote [Intro to RL](https://www.tensorflow.org/agents/tutorials/0_intro_rl):
#
# *At each time step, the agent takes an action on the environment based on its policy $\pi(a_t|s_t)$, where $s_t$ is the current observation from the environment, and receives a reward $r_{t+1}$ and the next observation $s_{t+1}$ from the environment. The goal is to improve the policy so as to maximize the sum of rewards (return).*
#
# In the general RL case, the next observation $s_{t+1}$ depends on the previous state $s_t$ and the action $a_t$ taken by the policy. This last part is what separates MAB from RL: in MAB, the next state, which is the observation, does not depend on the action chosen by the agent.
#
# This similarity allows us to reuse all the concepts that exist in TF-Agents.
#
#
# * An **environment** outputs observations, and responds to actions with rewards.
# * A **policy** outputs an action based on an observation, and
# * An **agent** repeatedly updates the policy based on previous observation-action-reward tuples.
#
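# The interaction between these three components can be sketched in a few lines of plain Python. This is an illustrative toy (an epsilon-greedy agent on a Gaussian bandit), not the TF-Agents API; all class and method names below are made up for this sketch:

```python
import random

class GaussianBanditEnv:
    """Environment: each arm pays its fixed mean reward plus Gaussian noise."""
    def __init__(self, means, seed=0):
        self.means = means
        self.rng = random.Random(seed)

    def step(self, action):
        return self.means[action] + self.rng.gauss(0, 0.1)

class EpsilonGreedyAgent:
    """Agent: tracks per-arm average rewards; its policy explores with probability epsilon."""
    def __init__(self, k, epsilon=0.1, seed=1):
        self.counts = [0] * k
        self.values = [0.0] * k
        self.epsilon = epsilon
        self.rng = random.Random(seed)

    def policy(self):
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.values))               # explore
        return max(range(len(self.values)), key=self.values.__getitem__)  # exploit

    def update(self, action, reward):
        self.counts[action] += 1
        # incremental mean update of the chosen arm's estimate
        self.values[action] += (reward - self.values[action]) / self.counts[action]

env = GaussianBanditEnv(means=[0.1, 0.5, 0.9])
agent = EpsilonGreedyAgent(k=3)
for _ in range(2000):
    action = agent.policy()
    reward = env.step(action)
    agent.update(action, reward)

print(agent.values)  # the estimate for arm 2 should be the highest, near 0.9
```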
# + [markdown] id="KA1ELdJrfJaV"
# ## The Mushroom Environment
#
# For illustrative purposes, we use a toy example called the "Mushroom Environment". The mushroom dataset ([Schlimmer, 1981](https://archive.ics.uci.edu/ml/datasets/Mushroom)) consists of labeled examples of edible and poisonous mushrooms. Features include shapes, colors, sizes of different parts of the mushroom, as well as odor and many more.
#
# 
#
# The mushroom dataset, just like all supervised learning datasets, can be turned into a contextual MAB problem. We use the method also used by [Riquelme et al. (2018)](https://arxiv.org/pdf/1802.09127.pdf). In this conversion, the agent receives the features of a mushroom and decides whether or not to eat it. Eating an edible mushroom results in a reward of +5, while eating a poisonous mushroom results in either +5 or -35 with equal probability. Not eating the mushroom results in a reward of 0, independently of the type of the mushroom. The following table summarizes the reward assignments:
#
# >```
#            | edible | poisonous
# -----------|--------|----------
# eating it  |   +5   | -35 / +5
# leaving it |    0   |    0
# ```
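# The reward table above can be written as a small function. This is a sketch only; the action encoding (0 = leave, 1 = eat) is an assumption of this example, not the environment's actual API:

```python
import random

def mushroom_reward(action, is_edible, rng=random):
    """Reward table: action 0 = leave the mushroom, 1 = eat it."""
    if action == 0:
        return 0.0                                   # leaving always pays 0
    if is_edible:
        return 5.0                                   # eating an edible mushroom pays +5
    return 5.0 if rng.random() < 0.5 else -35.0      # poisonous: +5 or -35, 50/50

# the expected reward of eating a poisonous mushroom is 0.5*5 + 0.5*(-35) = -15
samples = [mushroom_reward(1, False, random.Random(i)) for i in range(10000)]
print(sum(samples) / len(samples))   # close to -15
```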
# + [markdown] id="VXdlbTmc8yMt"
# ## The LinUCB Agent
# Performing well in a contextual bandit environment requires a good estimate of the reward function of each action, given the observation. One possibility is to estimate the reward function with linear functions. That is, for every action $i$, we are trying to find the parameter $\theta_i\in\mathbb R^d$ for which the estimates
#
# $r_{t, i} \sim \langle v_t, \theta_i\rangle$
#
# are as close to the reality as possible. Here $v_t\in\mathbb R^d$ is the context received at time step $t$. Then, if the agent is very confident in its estimates, it can choose $\arg\max_{i = 1, ..., K}\langle v_t, \theta_i\rangle$ to get the highest expected reward.
#
# As explained above, simply choosing the arm with the best estimated reward does not lead to a good strategy. There are many different ways to mix exploitation and exploration in linear estimator agents, and one of the most famous is the Linear Upper Confidence Bound (LinUCB) algorithm (see e.g. [Li et al. 2010](https://arxiv.org/abs/1003.0146)). LinUCB has two main building blocks (with some details omitted):
#
# 1. It maintains estimates for the parameters of every arm with Linear Least Squares: $\hat\theta_i\sim X^+_i r_i$, where $X_i$ and $r_i$ are the stacked contexts and rewards of rounds where arm $i$ was chosen, and $()^+$ is the pseudo inverse.
# 2. It maintains *confidence ellipsoids* defined by the inverse covariance $X_i^\top X_i$ for the above estimates.
#
#
#
#
# The main idea of LinUCB is that of "Optimism in the Face of Uncertainty". The agent incorporates exploration via boosting the estimates by an amount that corresponds to the variance of those estimates. That is where the confidence ellipsoids come into the picture: for every arm, the optimistic estimate is $\hat r_i = \max_{\theta\in E_i}\langle v_t, \theta\rangle$, where $E_i$ is the ellipsoid around $\hat\theta_i$. The agent then chooses the best-looking arm $\arg\max_i\hat r_i$.
#
# Of course the above description is just an intuitive but superficial summary of what LinUCB does. An implementation can be found in our codebase [here](https://github.com/tensorflow/agents/blob/master/tf_agents/bandits/agents/lin_ucb_agent.py).
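# To make the two building blocks concrete, here is a minimal NumPy sketch. It is illustrative only — the TF-Agents implementation linked above differs in many details, and the exact form of the confidence width varies between published variants:

```python
import numpy as np

class LinUCB:
    """Sketch of LinUCB with a ridge term; `alpha` scales the exploration bonus."""
    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        self.A = [np.eye(dim) for _ in range(n_arms)]    # I + X_i^T X_i
        self.b = [np.zeros(dim) for _ in range(n_arms)]  # X_i^T r_i

    def choose(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                                   # least-squares estimate
            width = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(context @ theta + width)              # optimistic estimate
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# two arms whose true reward is linear in a 2-d context
rng = np.random.default_rng(0)
true_theta = np.array([[1.0, 0.0], [0.0, 1.0]])
agent = LinUCB(n_arms=2, dim=2)
for _ in range(500):
    v = rng.normal(size=2)
    arm = agent.choose(v)
    agent.update(arm, v, true_theta[arm] @ v + rng.normal(scale=0.1))

theta_0 = np.linalg.inv(agent.A[0]) @ agent.b[0]
print(theta_0)   # should approach [1, 0]
```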
# + [markdown] id="r-Fc1dYdD1YM"
# ## What's Next?
# If you want to have a more detailed tutorial on our Bandits library take a look at our [tutorial for Bandits](https://colab.research.google.com/github/tensorflow/agents/blob/master/docs/tutorials/bandits_tutorial.ipynb). If, instead, you would like to start exploring our library right away, you can find it [here](https://github.com/tensorflow/agents/tree/master/tf_agents/bandits). If you are even more eager to start training, look at some of our end-to-end examples [here](https://github.com/tensorflow/agents/tree/master/tf_agents/bandits/agents/examples/v2), including the above described mushroom environment with LinUCB [here](https://github.com/tensorflow/agents/tree/master/tf_agents/bandits/agents/examples/v2/train_eval_mushroom.py).
| site/en-snapshot/agents/tutorials/intro_bandit.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import pandas as pd
import numpy as np
columns_to_use = ['Type', 'Method', 'Regionname', 'Rooms', 'Distance',
'Postcode', 'Bedroom2', 'Bathroom', 'Landsize', 'Lattitude',
'Longtitude', 'Propertycount','Price']
df_melbourne = pd.read_csv("../Kaggle/Melbourne-House-Snapshot/melb_data.csv", usecols=columns_to_use)
df_melbourne.head(n=10)
# -
df_melbourne.info()
# +
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
# Function for comparing different approaches
def score_dataset(X_train, X_valid, y_train, y_valid):
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
preds = model.predict(X_valid)
    return mean_absolute_error(y_valid, preds)  # lower is better!
# +
from sklearn.model_selection import train_test_split
y = df_melbourne.Price
X = df_melbourne.drop(['Price'], axis=1)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, train_size=0.8, test_size=0.2,
random_state=0)
# -
# ## Retrieving the List of Categorical Variables
# +
lista = (df_melbourne.dtypes == 'object')
object_cols = list(lista[lista].index)
print("Categorical variables:")
print(object_cols)
# -
# ### (1) Dropping Categorical Attributes
# +
drop_X_train = X_train.select_dtypes(exclude=['object'])
drop_X_valid = X_valid.select_dtypes(exclude=['object'])
print("MAE from Approach 1 (Drop categorical variables):")
print(score_dataset(drop_X_train, drop_X_valid, y_train, y_valid))
# -
# ### (2) Label Encoding
# +
from sklearn.preprocessing import LabelEncoder
# Make copy to avoid changing original data
label_X_train = X_train.copy()
label_X_valid = X_valid.copy()
# Apply label encoder to each column with categorical data
label_encoder = LabelEncoder()
for col in object_cols:
label_X_train[col] = label_encoder.fit_transform(X_train[col])
label_X_valid[col] = label_encoder.transform(X_valid[col])
print("MAE from Approach 2 (Label Encoding):")
print(score_dataset(label_X_train, label_X_valid, y_train, y_valid))
# -
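# One caveat with the label-encoding loop above: `label_encoder.transform` raises a `ValueError` whenever the validation split contains a category that never appeared in the training split. A hedged alternative, shown here on synthetic data (the column name `Type` is just for illustration), is `OrdinalEncoder`, which can map unseen categories to a sentinel value:

```python
import pandas as pd
from sklearn.preprocessing import OrdinalEncoder

X_tr = pd.DataFrame({"Type": ["h", "u", "h"]})
X_va = pd.DataFrame({"Type": ["t", "u"]})   # "t" never appears in training

enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
X_tr_enc = enc.fit_transform(X_tr)          # h -> 0, u -> 1
X_va_enc = enc.transform(X_va)
print(X_va_enc)                             # unseen "t" becomes -1
```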
# ### (3) One-Hot Encoder
# +
from sklearn.preprocessing import OneHotEncoder
# Apply one-hot encoder to each column with categorical data
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_cols_train = pd.DataFrame(OH_encoder.fit_transform(X_train[object_cols]))
OH_cols_valid = pd.DataFrame(OH_encoder.transform(X_valid[object_cols]))
# One-hot encoding removed index; put it back
OH_cols_train.index = X_train.index
OH_cols_valid.index = X_valid.index
# Remove categorical columns (will replace with one-hot encoding)
num_X_train = X_train.drop(object_cols, axis=1)
num_X_valid = X_valid.drop(object_cols, axis=1)
# Add one-hot encoded columns to numerical features
OH_X_train = pd.concat([num_X_train, OH_cols_train], axis=1)
OH_X_valid = pd.concat([num_X_valid, OH_cols_valid], axis=1)
# Newer scikit-learn versions require all feature names to be strings
OH_X_train.columns = OH_X_train.columns.astype(str)
OH_X_valid.columns = OH_X_valid.columns.astype(str)
print("MAE from Approach 3 (One-Hot Encoding):")
print(score_dataset(OH_X_train, OH_X_valid, y_train, y_valid))
| 001 - Feature Engineering/Exercicios_Resolvidos/.ipynb_checkpoints/Categorical_Encoding-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/podyssea/RecommenderSystems/blob/main/RecSys_coursework_2021_2210049p.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="THQFNe3zdt1f"
# # Assessed Coursework Template Notebook
#
# This is the template notebook for the RecSys(H) 2021 coursework. It deals with data preparation and evaluation only.
#
# Please note:
# - use H1 text headings for grouping together blocks of cells. You can then hide these while working on other blocks
# - leave the cell output visible when you submit the notebook
#
#
# + [markdown] id="Ww--_kl9-ndn"
# ## Setup block
#
# Setup the data files, Python etc.
# + id="iFgYpbhh0tkX" colab={"base_uri": "https://localhost:8080/"} outputId="47dd1258-778c-4266-f5f4-ed8f16445df7"
# !rm -rf ratings* books* to_read* test*
# !curl -o ratings.csv "http://www.dcs.gla.ac.uk/~craigm/recsysH/coursework/final-ratings.csv"
# !curl -o books.csv "http://www.dcs.gla.ac.uk/~craigm/recsysH/coursework/final-books.csv"
# !curl -o to_read.csv "http://www.dcs.gla.ac.uk/~craigm/recsysH/coursework/final-to_read.csv"
# !curl -o test.csv "http://www.dcs.gla.ac.uk/~craigm/recsysH/coursework/final-test.csv"
# + id="1VpVnNrZ1EiX" colab={"base_uri": "https://localhost:8080/"} outputId="2ded0fe3-d28c-45f1-8684-ff0217af65fb"
#Standard setup
import pandas as pd
import numpy as np
import torch
# !pip install git+https://github.com/cmacdonald/spotlight.git@master#egg=spotlight
from spotlight.interactions import Interactions
SEED=42
# + [markdown] id="RtJO0e0m-hun"
# # Data preparation
# + id="qKAb25iw1MYw"
#load in the csv files
ratings_df = pd.read_csv("ratings.csv")
books_df = pd.read_csv("books.csv")
to_read_df = pd.read_csv("to_read.csv")
test = pd.read_csv("test.csv")
# + id="W6rqfn53OhDC"
#cut down the number of items and users
counts=ratings_df[ratings_df["book_id"] < 2000].groupby(["book_id"]).count().reset_index()
valid_books=counts[counts["user_id"] >= 10][["book_id"]]
books_df = books_df.merge(valid_books, on="book_id")
ratings_df = ratings_df[ratings_df["user_id"] < 2000].merge(valid_books, on="book_id")
to_read_df = to_read_df[to_read_df["user_id"] < 2000].merge(valid_books, on="book_id")
test = test[test["user_id"] < 2000].merge(valid_books, on="book_id")
#stringify the id columns
def str_col(df):
if "user_id" in df.columns:
df["user_id"] = "u" + df.user_id.astype(str)
if "book_id" in df.columns:
df["book_id"] = "b" + df.book_id.astype(str)
str_col(books_df)
str_col(ratings_df)
str_col(to_read_df)
str_col(test)
# + [markdown] id="5Rqh9hFM6k20"
# # Implicit
# + colab={"base_uri": "https://localhost:8080/", "height": 399} id="jujqfHPH56tB" outputId="dd357568-e528-4817-ef7c-b8cd11728310"
to_read_df
# + [markdown] id="q8K38Kb86sZ9"
# # Explicit
# + colab={"base_uri": "https://localhost:8080/", "height": 399} id="FNVhxoEg6vX7" outputId="13944b94-737d-4375-fc4f-b338ba973a32"
ratings_df
# + [markdown] id="sHx_Q7Sz61Tj"
# # Test
# + colab={"base_uri": "https://localhost:8080/", "height": 399} id="GDWNC4ko62tO" outputId="b2745e2c-f108-4892-b00f-0218ac165eae"
test
# + [markdown] id="uitG5dl069Dm"
# # Books
# + colab={"base_uri": "https://localhost:8080/", "height": 943} id="eke1nbYq6-in" outputId="53481f81-292b-4885-800b-540570667a4a"
books_df
# + colab={"base_uri": "https://localhost:8080/", "height": 670} id="g1ld1dBdoyLT" outputId="877e1daf-288e-4e97-f81e-927ff57bafdb"
books_df.sort_values('average_rating', ascending=False).head(5)
# + [markdown] id="C7cgXhmYUXIn"
# Here we construct the Interactions objects from `ratings.csv`, `to_read.csv` and `test.csv`. We manually specify the `num_users` and `num_items` parameters for all Interactions objects, in case the set of users or items in the test set differs from your training sets.
# + id="15ClgJOdTTt1" colab={"base_uri": "https://localhost:8080/"} outputId="5d5d8c3e-d86c-4db6-e68a-e520aeff2789"
from collections import defaultdict
from itertools import count, combinations
from spotlight.cross_validation import random_train_test_split
iid_map = defaultdict(count().__next__)
rating_iids = np.array([iid_map[iid] for iid in ratings_df["book_id"].values], dtype = np.int32)
test_iids = np.array([iid_map[iid] for iid in test["book_id"].values], dtype = np.int32)
toread_iids = np.array([iid_map[iid] for iid in to_read_df["book_id"].values], dtype = np.int32)
uid_map = defaultdict(count().__next__)
test_uids = np.array([uid_map[uid] for uid in test["user_id"].values], dtype = np.int32)
rating_uids = np.array([uid_map[uid] for uid in ratings_df["user_id"].values], dtype = np.int32)
toread_uids = np.array([uid_map[uid] for uid in to_read_df["user_id"].values], dtype = np.int32)
uid_rev_map = {v: k for k, v in uid_map.items()}
iid_rev_map = {v: k for k, v in iid_map.items()}
rating_dataset = Interactions(user_ids=rating_uids,
item_ids=rating_iids,
ratings=ratings_df["rating"].values,
num_users=len(uid_rev_map),
num_items=len(iid_rev_map))
toread_dataset = Interactions(user_ids=toread_uids,
item_ids=toread_iids,
num_users=len(uid_rev_map),
num_items=len(iid_rev_map))
test_dataset = Interactions(user_ids=test_uids,
item_ids=test_iids,
num_users=len(uid_rev_map),
num_items=len(iid_rev_map))
print(rating_dataset)
print(toread_dataset)
print(test_dataset)
#here we define the validation set
toread_dataset_train, validation = random_train_test_split(toread_dataset, random_state=np.random.RandomState(SEED))
print(validation)
num_items = test_dataset.num_items
num_users = test_dataset.num_users
# + colab={"base_uri": "https://localhost:8080/"} id="cO3KrKeICGC8" outputId="2242bf4a-453f-48f1-f413-612611ba31ce"
print(toread_dataset_train)
# + id="1mJr6xPE7rgj" colab={"base_uri": "https://localhost:8080/"} outputId="67509be9-d0c8-42fd-9092-a388e750aa45"
print(num_items)
# + [markdown] id="Kt4I2C5DTUL5"
# # Example code
#
# To evaluate some of your hand-implemented recommender systems (e.g. Q1, Q4), you will need to instantiate objects that match the specification of a Spotlight model, which `mrr_score()` expects.
#
#
# Here is an example recommender object that returns 0 for each item, regardless of user.
# + id="s2eaxy_hakbC" colab={"base_uri": "https://localhost:8080/"} outputId="65b3c935-dfc8-4c53-c8e9-9ef55fd0aef1"
from spotlight.evaluation import mrr_score, rmse_score
class dummymodel:
def __init__(self, numitems):
self.predictions=np.zeros(numitems)
#uid is the user we are requesting recommendations for;
#returns an array of scores, one for each item
def predict(self, uid):
#this model returns all zeros, regardless of userid
return( self.predictions )
#let's evaluate the effectiveness of dummymodel
dummymodel(num_items)
# print(mrr_score(dummymodel(num_items), test_dataset, train=rating_dataset, k=100).mean())
#as expected, a recommendation model that gives 0 scores for all items obtains an MRR score of 0
# + id="ZQTJOmS5dB3i" colab={"base_uri": "https://localhost:8080/"} outputId="ec023716-005c-4a15-af7b-df00319f1a00"
#note that the latest copy of Craig's Spotlight displays a progress bar if you set verbose=True
print(mrr_score(dummymodel(num_items), test_dataset, train=rating_dataset, k=100, verbose=True).mean())
# + [markdown] id="SyvGgW_3ZjLV"
# # Question 1
#
# Non personalised baselines for ranking books based on statistics
# + id="q0aSv5xLy1Rj" colab={"base_uri": "https://localhost:8080/"} outputId="bed7dbe6-2399-4709-becb-4e563b9a2877"
#group the ratings by book id, keep only book id and rating, then take the average rating
# for each book and pass the rating column into a list
average_rating = ratings_df[["book_id", "rating"]].groupby(["book_id"]).mean()
non_personalised_ar = average_rating['rating'].tolist()
#pass the averages into a model similar to dummymodel and take predictions
# this process will remain the same throughout the non-personalised models
class average_rating_model:
    def __init__(self, numitems):
        self.predictions = np.ones(numitems) * non_personalised_ar
    #uid is the user we are requesting recommendations for;
    #returns an array of scores, one for each item
    def predict(self, uid):
        #this model returns each item's average rating, regardless of userid
        return self.predictions
#take the mrr score
print(mrr_score(average_rating_model(num_items), test_dataset, train=rating_dataset, k=100, verbose=True).mean())
# + id="08rDcHdZzP1f" colab={"base_uri": "https://localhost:8080/"} outputId="b9f8e488-8a88-4438-f6dc-74596ddaaf81"
#group by book id, keep book id and ratings_count from books_df, and take the total
# ratings count for each book, passing the counts into a list
number_of_ratings = books_df[["book_id", "ratings_count"]].groupby(["book_id"]).sum()
non_personalised_nor = number_of_ratings['ratings_count'].tolist()
class number_of_ratings_model:
    def __init__(self, numitems):
        self.predictions = np.ones(numitems) * non_personalised_nor
    #uid is the user we are requesting recommendations for;
    #returns an array of scores, one for each item
    def predict(self, uid):
        #this model returns each item's ratings count, regardless of userid
        return self.predictions
print(mrr_score(number_of_ratings_model(num_items), test_dataset, train=rating_dataset, k=100, verbose=True).mean())
# + id="pjKvLxfJ1nvw" colab={"base_uri": "https://localhost:8080/"} outputId="d99f2747-7a76-49e4-f27d-a775063c49b9"
#take the number of 5 star ratings from the books df and pass them into the model
star5_ratings = books_df['ratings_5'].tolist()
class number_of_5_star_ratings:
    def __init__(self, numitems):
        self.predictions = np.ones(numitems) * star5_ratings
    #uid is the user we are requesting recommendations for;
    #returns an array of scores, one for each item
    def predict(self, uid):
        #this model returns each item's number of 5-star ratings, regardless of userid
        return self.predictions
print(mrr_score(number_of_5_star_ratings(num_items), test_dataset, train=rating_dataset, k=100, verbose=True).mean())
# + id="dJAPojDD3ZvM" colab={"base_uri": "https://localhost:8080/"} outputId="65475efa-fa15-47ee-b644-3d6340324e51"
#divide the number of 5 star ratings by the number of ratings for a specific item
fractions_of_ratings = np.asarray(star5_ratings) / np.asarray(non_personalised_nor)
class fractions_of_5_star:
    def __init__(self, numitems):
        self.predictions = np.ones(numitems) * fractions_of_ratings
    #uid is the user we are requesting recommendations for;
    #returns an array of scores, one for each item
    def predict(self, uid):
        #this model returns each item's fraction of 5-star ratings, regardless of userid
        return self.predictions
print(mrr_score(fractions_of_5_star(num_items), test_dataset, train=rating_dataset, k=100, verbose=True).mean())
# + [markdown] id="b6LgG2lCVtz0"
# # Question 2
# + id="fhskxrunFmXg"
#import necessary modules
from spotlight.interactions import Interactions
from spotlight.cross_validation import random_train_test_split
from spotlight.factorization.explicit import ExplicitFactorizationModel
from spotlight.factorization.implicit import ImplicitFactorizationModel
from collections import defaultdict
from itertools import count
import itertools
import time
from scipy.stats import rankdata
import random
# + id="H9GIhUi9Ep5h"
#define the latent factors
latent_factors = [8,16,32,64]
# + colab={"base_uri": "https://localhost:8080/"} id="J0ubnRoHNy-G" outputId="8a91c2d3-9846-47a4-cbc5-1a1c945dc84e"
#train both the Explicit and Implicit model on the explicit dataset
##Explicit
emodel = ExplicitFactorizationModel(n_iter=5,
                                    use_cuda=False,
                                    random_state=np.random.RandomState(SEED) # ensure results are repeatable
                                    )
emodel.fit(rating_dataset)
print("======== MRR For Explicit Model on Explicit Data ========")
print(mrr_score(emodel, test_dataset).mean())
print("=========================================================")
##Implicit
imodel = ImplicitFactorizationModel(loss="bpr", n_iter=5,
                                    use_cuda=False,
                                    random_state=np.random.RandomState(SEED) # ensure results are repeatable
                                    )
imodel.fit(rating_dataset)
print("======== MRR For Implicit Model on Explicit Data ========")
print(mrr_score(imodel, test_dataset).mean())
print("=========================================================")
# + id="zofngZHIY61K" colab={"base_uri": "https://localhost:8080/"} outputId="1e88e8e7-aa86-4913-9f17-c3107109d839"
#for every latent factor in the latent factor set, train an implicit model on the
# explicit (ratings) data
#then print the MRR score on the validation set
for factor in latent_factors:
imodel = ImplicitFactorizationModel(loss="bpr",n_iter=5,
embedding_dim = factor,
use_cuda=False,
random_state=np.random.RandomState(SEED) # ensure results are repeatable
)
imodel.fit(rating_dataset)
print("Implicit Factorization Model with", factor, "latent factor")
print("MRR Score:", mrr_score(imodel, validation).mean())
print()
# + id="lvp5-9rqjuM6" colab={"base_uri": "https://localhost:8080/"} outputId="860196cc-87de-4680-eea3-d5bb4976ca9f"
#retrain with the latent factor that performed best on validation and evaluate on the test set
imodel_closer_to_validation = ImplicitFactorizationModel(loss="bpr",n_iter=5,
embedding_dim = 32,
use_cuda=False,
random_state=np.random.RandomState(SEED) # ensure results are repeatable
)
imodel_closer_to_validation.fit(rating_dataset)
print("MRR Score:", mrr_score(imodel_closer_to_validation, test_dataset).mean())
# + [markdown] id="GdumyHtgZnMH"
# # Question 3 (a)
# + colab={"base_uri": "https://localhost:8080/"} id="xYguO9opBn6G" outputId="9f22dfb6-a6c4-423a-8680-9d260318ada3"
#instantiate an implicit model for every latent factor in the latent factor set
# train and fit it on the implicit (to-read) train set which is already defined
#then print the MRR score on the validation set to pick the best factor
for factor in latent_factors:
implicit_model = ImplicitFactorizationModel(loss="bpr",n_iter=5,
embedding_dim=factor,
use_cuda=False,
random_state=np.random.RandomState(SEED))
implicit_model.fit(toread_dataset_train)
print("Implicit Factorization Model with", factor, "latent factor")
print("MRR Score:", mrr_score(implicit_model, validation).mean())
print()
# + id="VnBbymkFZmBQ" colab={"base_uri": "https://localhost:8080/"} outputId="09137af6-2b11-4bf0-a6a1-3324e742cc4f"
#find the best implicit model, using the validation data
implicit_model_closer_to_validation = ImplicitFactorizationModel(loss="bpr",n_iter=5,
embedding_dim = 16,
use_cuda=False,
random_state=np.random.RandomState(SEED) # ensure results are repeatable
)
implicit_model_closer_to_validation.fit(toread_dataset_train)
print("MRR Score:", mrr_score(implicit_model_closer_to_validation, test_dataset).mean())
# + [markdown] id="_AGrtKZeILSa"
# # Question 3 (b)
# + id="mgEX1t4Bbq5I"
# here we are creating a replication of the books-df to use for this question
# we do this because we need a column renamed to item_id
books_df_replicate = books_df.copy()
books_df_replicate.rename(columns = {"Unnamed: 0" : "item_id"}, inplace = True)
# + id="oYVzjYnP959r"
#define a function which takes in an item id and looks in the above created df to return the title of that item
def item_to_titles(item_ids):
return books_df_replicate.loc[books_df_replicate["item_id"].isin(item_ids)]["title"]
#define a function which takes 3 sets of item item ids, finds the titles and returns which of them
# are common between the first and predictions, and second and predictions
def find_common_titles(a,b, predictions):
previously_vs_predicted = item_to_titles(np.intersect1d(a, predictions))
print("These titles were predicted to be previously shelved correctly")
print(previously_vs_predicted)
currently_vs_predicted = item_to_titles(np.intersect1d(b, predictions))
print("\n\nThese titles were predicted to be currently shelved correctly")
print(currently_vs_predicted)
#define a function to get the predictions given a user id
# the function looks into the toread dataset (previously shelved) and finds the indexes of that user
# it stores the items into a list by accessing the toread dataset using those indexes - same applies for currently shelved using the test dataset and predictions
# then the function uses the find_common_titles function to return the common titles between the previously shelved, currently shelved and predicted items
def get_predictions_for_highest_rated_user(user_id):
    item_ids_indexes_prev = np.where(toread_dataset.user_ids == user_id)
    previously_shelved = toread_dataset.item_ids[item_ids_indexes_prev]
    item_ids_indexes_curr = np.where(test_dataset.user_ids == user_id)
    currently_shelved = test_dataset.item_ids[item_ids_indexes_curr]
    #use the best Q3 model (16 latent factors) for the predictions
    predictions = implicit_model_closer_to_validation.predict(user_id)
    predicted_shelved = np.where(predictions > 0)[0]
    return find_common_titles(previously_shelved, currently_shelved, predicted_shelved)
# #Train the best model in terms of MRR from Q3
# best_implicit_model = ImplicitFactorizationModel(loss="bpr",n_iter=5,
# embedding_dim=16,
# use_cuda=False,
# random_state=np.random.RandomState(SEED))
# best_implicit_model.fit(toread_dataset_train, verbose=False)
# + colab={"base_uri": "https://localhost:8080/"} id="4MXHsKGK957W" outputId="e8e6aead-6b21-4f27-f421-b61519179a03"
#get the mrr scores using the implicit model created above on the test dataset
mrr_scores = mrr_score(implicit_model_closer_to_validation, test_dataset)
#find the maximum of the mrr scores and the indexes at which this highest occurs in the mrr scores
m = max(mrr_scores)
indexes_of_highest = [i for i, j in enumerate(mrr_scores) if j == m]
#from the test dataset find the uids of the highest rated users
uids = test_dataset.user_ids[indexes_of_highest]
#for each uid in uids found above convert the uid to user_id using the reverse mapping
#appending to an empty list to get a list of user ids with the highest RR
index_to_user_id = []
for uid in uids:
user_id_convert = uid_rev_map.get(uid)
index_to_user_id.append(user_id_convert)
#print the top 5 highest rated user ids
print("Top 5 highest rated users are: ", index_to_user_id[:5], "with uids ", uids[:5])
# + id="2r2Y9LePDCTI" colab={"base_uri": "https://localhost:8080/"} outputId="663c387e-0edd-4583-aeb5-0d5e6b472d18"
#call the above created function to get the common titles predicted and actually shelved for each uid found
for uid in uids[:5]:
print("Results for", uid_rev_map.get(uid))
get_predictions_for_highest_rated_user(uid)
print("============================================================\n\n\n")
# + [markdown] id="ngwO6En5ltlM"
# # Question 3c
# + id="pkCDiDO6Vi7J" colab={"base_uri": "https://localhost:8080/", "height": 943} outputId="d0db1073-aa7d-47d4-d44f-21016b808355"
books_df
# + id="PTHRRiKoM0wA"
from scipy import spatial
from scipy.stats import rankdata
#define a function which, given the embeddings of a 5-item list, averages the pairwise
# cosine similarities over all 10 pairs (1 - cosine distance = cosine similarity)
# and returns the measure directly
def ild(emb):
    commons = []
    for combination in combinations(range(len(emb)), 2):
        i = emb[combination[0]].detach().numpy()
        j = emb[combination[1]].detach().numpy()
        commons.append(1 - spatial.distance.cosine(i, j))
    intra_list = 2/(5*4) * sum(commons)
    return intra_list
#Function to return books based on predictions for a given book list
# return specific fields for that book
pd.set_option('display.max_columns', None)
def return_books(blist):
for id in blist:
bookids = [iid_rev_map[bid] for bid in pred[id]]
print(books_df.loc[books_df['book_id'].isin(bookids)][['title','authors', 'average_rating', 'ratings_5']]) # <---- change the visible columns from here
print()
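# As a sanity check on the measure, the same computation on plain NumPy vectors (no torch tensors, so `detach()` is dropped) gives 1.0 for five identical embeddings and 0.0 for five mutually orthogonal ones — i.e. the quantity is an average pairwise cosine similarity, so lower values correspond to more diverse lists:

```python
import numpy as np
from itertools import combinations
from scipy import spatial

def ild_np(emb):
    """Average pairwise cosine similarity over the 10 pairs of a 5-item list."""
    commons = [1 - spatial.distance.cosine(emb[i], emb[j])
               for i, j in combinations(range(len(emb)), 2)]
    return 2 / (5 * 4) * sum(commons)

identical = np.tile([1.0, 0.0, 0.0, 0.0, 0.0], (5, 1))  # five copies of one vector
orthogonal = np.eye(5)                                   # five orthogonal vectors

print(ild_np(identical))   # 1.0  (least diverse)
print(ild_np(orthogonal))  # 0.0  (most diverse)
```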
# + id="JSvpGFv7M0fO"
#set a list with 0s to append the ild later
arrange_user_ids = np.arange(num_users)
ILD_list = np.zeros(num_users)
#define an empty prediction list
pred = []
#for each user, take the indices of the 5 highest-scoring items and compute the
# ILD of their embeddings, appending the value to the ild list
for each in arrange_user_ids:
    pred.append(np.argsort(-implicit_model_closer_to_validation.predict(each))[:5])
    best_5 = pred[-1]
    best_5_embeddings = implicit_model_closer_to_validation._net.item_embeddings.weight[best_5]
    ILD_list[each] = ild(best_5_embeddings)
# + colab={"base_uri": "https://localhost:8080/"} id="R_oBojfGMrFU" outputId="96566944-0fa7-4dd2-f869-f09ca6fd0c6c"
#calculate the maximum and minimum ilds
maximum_ILD = ILD_list.max()
print(maximum_ILD)
minimum_ILD = ILD_list.min()
print(minimum_ILD)
#keep users whose ILD is within 0.04 of the maximum or minimum
highest_ild = np.where(ILD_list >= maximum_ILD - 0.04)
lowest_ild = np.where(ILD_list <= minimum_ILD + 0.04)
# + id="TY4nh8PTagKe" outputId="489451cb-f5f5-4fde-9351-1506344575c7" colab={"base_uri": "https://localhost:8080/"}
return_books(lowest_ild[0])
# + colab={"base_uri": "https://localhost:8080/"} id="Yh9AV8JgMq6q" outputId="3b23b91c-c6dc-4884-c7e2-e6df7b92ac4d"
return_books(highest_ild[0])
# + [markdown] id="C0M8fE2p4InF"
# # Question 4
# + id="yTJuO5tZ4KgJ"
#create a class which trains two implicit-factorization models, one on the explicit
# (ratings) data and one on the implicit (to-read) data, using the best latent
# factors recorded from the models trained above
#predictions give equal weight to both models (unweighted CombSUM)
class unweighted_combsum:
    def __init__(self):
        self.explicit = self.create_train_model(rating_dataset, 32)
        self.implicit = self.create_train_model(toread_dataset_train, 16)
    def create_train_model(self, train_dataset, latent):
        model = ImplicitFactorizationModel(n_iter=5, loss="bpr", random_state=np.random.RandomState(SEED), embedding_dim=latent)
        model.fit(train_dataset)
        return model
    def predict(self, uid):
        # returns the equally-weighted combined score
        return 0.5 * self.explicit.predict(uid) + 0.5 * self.implicit.predict(uid)
# + id="R0VnLg6pIwLb"
#call the models in a variable
q4 = unweighted_combsum()
# + id="Q4pfxaYdIwJC"
#get the MRR scores of the model
q4_mrr_scores = mrr_score(q4, test_dataset)
# + id="3vrK5r-2sPXC" colab={"base_uri": "https://localhost:8080/"} outputId="e9ad301a-5514-4faa-a301-c2bf1ef00684"
q4_mrr_scores
# + id="sAsXRJJZAnpV" colab={"base_uri": "https://localhost:8080/"} outputId="2e1aa811-6df9-416c-f5d9-6cca26363dd6"
#best model from q2
best_implicit_q2 = ImplicitFactorizationModel(loss="bpr",n_iter=5,
embedding_dim=32, #this is Spotlight default
use_cuda=False,
random_state=np.random.RandomState(SEED) # ensure results are repeatable
)
best_implicit_q2.fit(rating_dataset)
q2_mrr_scores = mrr_score(best_implicit_q2, test_dataset)
print("======== MRR ======== for latent factor 32")
print("=====================================================")
print(mrr_score(best_implicit_q2, test_dataset).mean())
print("=====================================================")
# + id="nDbRJFTPBchx" colab={"base_uri": "https://localhost:8080/"} outputId="1a572a19-bcca-4733-a509-b11ceb451b42"
#best model from q3
best_implicit_q3 = ImplicitFactorizationModel(loss="bpr",n_iter=5,
embedding_dim=16,
use_cuda=False,
random_state=np.random.RandomState(SEED))
best_implicit_q3.fit(toread_dataset_train)
q3_mrr_scores = mrr_score(best_implicit_q3, test_dataset)
print("======== MRR ======== for latent factor 16")
print("=====================================================")
print(mrr_score(best_implicit_q3, test_dataset).mean())
print("=====================================================")
# + id="C2Gf07w4B4mB" colab={"base_uri": "https://localhost:8080/"} outputId="f1b5e956-d52d-44b2-c326-224cf7592642"
import matplotlib.pyplot as plt
#calculate the differences between the Q4 scores and those calculated in Q2 and Q3 individually to find out how many RR scores changed
diff_q2_q4 = q4_mrr_scores - q2_mrr_scores # <------best
diff_q3_q4 = q4_mrr_scores - q3_mrr_scores
print("\n=========== FROM Q2 ==========")
print(sum(i > 0 for i in diff_q2_q4), "are better")
print(sum(i < 0 for i in diff_q2_q4), "are worse")
print(sum(i == 0 for i in diff_q2_q4), "have not changed")
print("\n=========== FROM Q3 ==========")
print(sum(i > 0 for i in diff_q3_q4), "are better")
print(sum(i < 0 for i in diff_q3_q4), "are worse")
print(sum(i == 0 for i in diff_q3_q4), "have not changed")
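# The better/worse/unchanged tallies can also be done vectorised. A standalone sketch with made-up reciprocal-rank scores (not the real q2/q4 arrays):

```python
import numpy as np

# Made-up reciprocal-rank scores for four users
old = np.array([0.50, 0.20, 0.10, 0.05])
new = np.array([0.60, 0.20, 0.05, 0.30])

diff = new - old
better, worse, same = (diff > 0).sum(), (diff < 0).sum(), (diff == 0).sum()
print(better, worse, same)  # 2 1 1
```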
# + id="92VrdbzeXQ3V"
#Create a dataframe that has the user ids, the previous RR and the RR obtained from the CombSum model we created
## we create another column named diff to hold the difference
data = {'user_ids' : uid_map.keys(), 'Previous RR': q3_mrr_scores, 'New RR': q4_mrr_scores}
RR_df = pd.DataFrame(data)
RR_df["diff"] = RR_df['New RR'] - RR_df['Previous RR']
RR_df = RR_df[RR_df['diff'] != 0]
RR_to_plot = RR_df[['user_ids', 'Previous RR', 'New RR']]
# + id="kW8dEROCeH7b" colab={"base_uri": "https://localhost:8080/", "height": 326} outputId="2836ab50-fe09-42c2-8b74-ee7106e321b6"
#we plot a bar chart of the old RR and the new RR for each user
## for the first 80 users
RR_to_plot.head(80).plot.bar(x='user_ids',rot=90, figsize = (50,20))
plt.xlabel('user_ids', fontsize = 20)
plt.ylabel('MRR Score', fontsize = 20)
plt.title('First 80 users MRR scores Previous and New', fontsize = 40)
plt.show()
# + [markdown] id="Oje13m_ezswO"
# # Question 5
# + id="qwTFiVHoz3rj"
## Referenced from RecSys - Lab 1 Solution
def calculate_lift():
positives=ratings_df[ratings_df["rating"]>=4]
positives
# #join positives with itself on user_id to get all pairs of books positively rated by the same user.
pairs=pd.merge(positives, positives, on=["user_id"])
pairs
# #we don't care about direction: A->B and B->A count as the same pair
sequences=pairs[pairs['Unnamed: 0_x'] < pairs['Unnamed: 0_y']]
sequences
# #lets count the frequency of each pair of books
paircounts=sequences[["book_id_x", "book_id_y", "user_id"]].groupby(["book_id_x", "book_id_y"]).count()
paircounts
#sort by the most popular pairs.
pairswithcounts_reset = paircounts.reset_index()
pairswithcounts = pairswithcounts_reset.rename(columns={'user_id' : 'count'}).sort_values(['count'], ascending=False)
pairswithcounts.head()
pairswithcounts.merge(books_df, left_on=["book_id_x"], right_on="book_id").merge(books_df, left_on=["book_id_y"], right_on="book_id")[["title_x", "title_y"]]
# # pairswithcounts gives us the frequency of (X AND Y).
# #We therefore also need the individual book counts
bookCounts = positives.groupby(['book_id']).count()[['user_id']].reset_index().rename(columns={'user_id' : 'count'})
bookCounts
# #let's put all the information in the same dataframe.
allstats = pairswithcounts.merge(bookCounts, left_on='book_id_x', right_on='book_id').merge(bookCounts, left_on='book_id_y', right_on='book_id')
allstats
# #and drop out some unused columns
allstats = allstats[['book_id_x', 'book_id_y', 'count', 'count_x', 'count_y']]
allstats
allstats = allstats.loc[:,~allstats.columns.duplicated()]
allstats
# #to calculate probabilities we need a denominator. I used the total number of ratings
num=float(ratings_df.count()["rating"])
# #we can then perform arithmetic on the columns
allstats["lift"] = (allstats["count"] / num ) / ( (allstats["count_x"] / num) * (allstats["count_y"] / num))
allstats["loglift"] = np.log(allstats["lift"])
withtitles = allstats.merge(books_df, left_on=['book_id_x'], right_on="book_id").merge(books_df, left_on=["book_id_y"], right_on="book_id")
withtitles
#we add the support column
withtitles["support"] = withtitles["count"] / sequences["book_id_x"].count()
#select the columns we want to see
withtitles[["title_x", "book_id_x", "book_id_y", "lift", "support"]]
final = withtitles[["title_y", "title_x", "book_id_x", "book_id_y", "lift", "support", "count", "loglift"]]
#remove the duplicates from the dataframe
final = final.loc[:,~final.columns.duplicated()]
return final
# + id="dToMipXRd61v" colab={"base_uri": "https://localhost:8080/", "height": 1000} outputId="1d7c1b0b-c9c2-457a-c8bc-699a0c785133"
#display the dataframe
calculate_lift()
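# As a toy illustration of the lift formula used above, lift = P(X and Y) / (P(X) P(Y)), here is a standalone sketch on a tiny made-up positive-rating table (illustrative ids, not the real data), using the total number of ratings as the denominator:

```python
import pandas as pd

# Users 1 and 2 positively rated both A and B; user 3 rated only A
positives = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "book_id": ["A", "B", "A", "B", "A"],
})

n = len(positives)                          # denominator for all probabilities
p_a = (positives["book_id"] == "A").mean()  # P(A) = 3/5
p_b = (positives["book_id"] == "B").mean()  # P(B) = 2/5
users_a = set(positives.loc[positives["book_id"] == "A", "user_id"])
users_b = set(positives.loc[positives["book_id"] == "B", "user_id"])
p_ab = len(users_a & users_b) / n           # P(A and B) = 2/5
lift = p_ab / (p_a * p_b)
print(round(lift, 3))  # (2/5) / (3/5 * 2/5) = 5/3 ≈ 1.667
```

A lift above 1 means the pair co-occurs more often than independence would predict.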
# + id="zvFs1TGkFMtA"
#create the loglift class where we set a minimum support which we will use to evaluate
#this minimum support will filter the dataframe and calculate the according lift scores
class loglift:
def __init__(self, min_sup = 0):
self.minimum_support = min_sup
self.df = lifts.loc[(lifts["support"] > self.minimum_support)]
self.book_ids = books_df["book_id"].values
self.book_lift_scores = self.calculate_book_lift_scores()
#use this function to calculate the book lift for the given books
def calculate_book_lift_scores(self):
return np.array([self.df.loc[(self.df["book_id_x"] == bookid) | (self.df["book_id_y"] == bookid)]["loglift"].values.sum() for bookid in self.book_ids])
#call the predictions
def predict(self, uid):
userid = uid_rev_map[uid]
scores = np.zeros(1826) #one score slot per book in the catalogue
#for books that are not already rated
already_rated_books = ratings_df.loc[ratings_df["user_id"] == userid]["book_id"].values
#get the indices of the books that are not already rated
lift_indices = np.array([i for i in range(len(self.book_ids)) if self.book_ids[i] not in already_rated_books])
#calculate the scores for these books
scores[lift_indices] = self.book_lift_scores[lift_indices]
return scores
# + id="7tj4daukIqcC"
#pass the lift dataframe in a variable
lifts = calculate_lift()
# + id="fCzvNtOoEVMt"
#initialize an empty list
q5_mrr_scores = []
#get 10 values of minimum support between minimum and maximum to experiment
min_supports = np.linspace(lifts["support"].min(), lifts["support"].max(), 10)
#using the minimum support on the loglift recommender, append the MRR score calculated
for min_support in min_supports:
q5_mrr_scores.append(mrr_score(loglift(min_support), validation))
# + id="uIuWC3YdIKnm" colab={"base_uri": "https://localhost:8080/"} outputId="0f9a1974-c553-4e0c-dbfa-28402c16f2ee"
#get the mean MRR for each minimum support calculated from above
## this will be used for plotting
mean_RR = []
for each in q5_mrr_scores:
mean_RR.append(each.mean())
print(mean_RR)
print(q5_mrr_scores)
print(min_supports)
# + id="jatkQGfMjQCC" colab={"base_uri": "https://localhost:8080/", "height": 323} outputId="cbe99f26-6454-4a69-b4e7-a291e495e4b3"
#plot the minimum support against the mean RR of each support to see how MRR behaves
plt.plot(min_supports, mean_RR, marker = 'D')
plt.xticks(min_supports, rotation = 90)
plt.xlabel('Supports')
plt.ylabel('MRR')
## store the best minimum support
best_min_support = min_supports[np.argmax(mean_RR)]
# + id="SbpZ8zIek6Q5" colab={"base_uri": "https://localhost:8080/"} outputId="c0742441-1b2c-4c99-8d30-46590f52c67b"
print(mrr_score(loglift(best_min_support), test_dataset).mean())
# + [markdown] id="EXWyeWxAzsex"
# # Question 6
# + id="y6vL3qNlz1de"
## initialise and train the recommenders
class initiliase_recommenders:
def __init__(self):
## ===========================================Low Group========================================
self.average_rating = average_rating(num_items)
self.number_of_ratings = number_of_ratings(num_items)
self.emodel = ExplicitFactorizationModel(n_iter=5,random_state=np.random.RandomState(SEED))
## =========================================================================================
## ========================================== High Group ======================================
self.number_of_5_star_ratings = number_of_5_star_ratings(num_items)
self.loglift = loglift(best_min_support)
self.fractions_of_5_star = fractions_of_5_star(num_items)
## ===========================================Two Best======================================
self.imodel = ImplicitFactorizationModel(n_iter = 5, loss = "bpr", random_state=np.random.RandomState(SEED), embedding_dim = 32)
self.best_implicit_model = ImplicitFactorizationModel(n_iter = 5, loss = "bpr", random_state=np.random.RandomState(SEED), embedding_dim = 16)
## =========================================================================================
# call train function
self.train_models()
#iterate over the recommender list
self.recommender_list = self.iterate_over()
#train them
def train_models(self):
self.emodel.fit(rating_dataset)
self.imodel.fit(rating_dataset)
self.best_implicit_model.fit(toread_dataset_train)
#iterate over the recommender attributes and collect each model into a list
def iterate_over(self):
recommenders = []
for attr, value in self.__dict__.items():
recommenders.append(value)
return recommenders
# + id="2OCvRcddVspN"
#this class combines all the recommenders and takes in a list of weights for bias
class combine_recommenders:
def __init__(self, recommenders, weights = []):
self.recommenders = recommenders
self.number_of_recommenders = len(recommenders)
#if the given weights list matches the number of recommenders, use it
## otherwise give every recommender an equal weight of 1/number_of_recommenders so the weights sum to 1
if len(weights) == self.number_of_recommenders:
self.weights = weights
else:
self.weights = np.ones(self.number_of_recommenders) * 1/self.number_of_recommenders
#call predictions
def predict(self, uid):
predictions = 0
for rec in range(self.number_of_recommenders):
predictions += self.recommenders[rec].predict(uid) * self.weights[rec]
return predictions
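# The CombSum logic above is just a weighted sum of score vectors. A standalone sketch with made-up stand-in recommenders (not the trained models):

```python
import numpy as np

# Two stand-in "recommenders" exposing the same predict(uid) interface
class Constant:
    def __init__(self, scores):
        self.scores = np.asarray(scores, dtype=float)
    def predict(self, uid):
        return self.scores

a, b = Constant([1.0, 0.0, 3.0]), Constant([0.0, 2.0, 1.0])
weights = [0.75, 0.25]

# CombSum: weighted sum of each recommender's per-item scores
combined = sum(w * r.predict(0) for w, r in zip(weights, [a, b]))
print(combined.tolist())  # [0.75, 0.5, 2.5]
```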
# + id="rbURVrQJHYDK"
#define a function that, given a bias and the indices of the recommenders to bias, returns the adjusted weight list
def calculate_bias_weight(bias,indexes):
weights = np.ones(8) * (1-bias*len(indexes))/(8-len(indexes))
for i in indexes:
weights[i] = bias
return weights
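# A quick sanity check of the weighting rule (a standalone restatement of the helper so it runs on its own): the biased recommenders get `bias` each, the rest split the remainder equally, so the weights always sum to 1.

```python
import numpy as np

def biased_weights(bias, indexes, n=8):
    # non-biased entries share (1 - bias * k) equally, biased entries get bias
    weights = np.ones(n) * (1 - bias * len(indexes)) / (n - len(indexes))
    weights[list(indexes)] = bias
    return weights

w = biased_weights(0.3, [6, 7])
print(w[6], w[7], round(w.sum(), 10))  # 0.3 0.3 1.0
```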
# + id="dh-vUsnaXUNX"
#initialise the recommenders
trained_recommenders = initiliase_recommenders()
# + [markdown] id="1x-9d1Sys2gO"
# #Get Weights for different Bias Values
# + id="_yqgqXsGXYfF"
#assign equal weights to all recommenders of 1/number of recommenders
no_of_recommenders = 8
weights_list_without_bias = []
recommender_indices = list(range(no_of_recommenders))
weights_list_without_bias.append(np.ones(no_of_recommenders) * 1/no_of_recommenders)
#assign a bias of 0.3 to one recommender at a time while experimenting
bias_test = 0.3
weights_list_with_bias = []
for i in range(no_of_recommenders):
weights_list_with_bias.append(calculate_bias_weight(bias_test, [i]))
#assign bias to the highest performing group of models
highest_group_equal_weights = []
for bias in [0.15, 0.18]:
highest_group_equal_weights.append(calculate_bias_weight(bias, recommender_indices[3:]))
#assign bias to the two best performing models
two_highest_rec_equal_weights = []
for bias in [0.25,0.35,0.45]:
two_highest_rec_equal_weights.append(calculate_bias_weight(bias, recommender_indices[6:]))
# + [markdown] id="JKlfCM02ss56"
# #Get MRR Scores for those Bias Values, in the order set above
# + id="XZbUo1cojAc0" colab={"base_uri": "https://localhost:8080/"} outputId="8d6401c4-f99b-4b65-b383-f701c1672e5d"
#get MRR without bias
q6_mrr_scores_without_bias = mrr_score(combine_recommenders(trained_recommenders.recommender_list, weights_list_without_bias[0]), validation).mean()
print(q6_mrr_scores_without_bias)
#get MRR for weight with bias
q6_mmr_scores_with_bias = []
for each in weights_list_with_bias:
q6_mmr_scores_with_bias.append(mrr_score(combine_recommenders(trained_recommenders.recommender_list, each), validation).mean())
print(q6_mmr_scores_with_bias)
#get MRR with equal bias on the best performing group
q6_mmr_scores_highest_group_equal_weights = []
for each in highest_group_equal_weights:
q6_mmr_scores_highest_group_equal_weights.append(mrr_score(combine_recommenders(trained_recommenders.recommender_list, each), validation).mean())
print(q6_mmr_scores_highest_group_equal_weights)
#get MRR with bias on top two models
q6_mmr_scores_two_highest_rec_equal_weights = []
for each in two_highest_rec_equal_weights:
q6_mmr_scores_two_highest_rec_equal_weights.append(mrr_score(combine_recommenders(trained_recommenders.recommender_list, each), validation).mean())
print(q6_mmr_scores_two_highest_rec_equal_weights)
# + [markdown] id="ERT-dofgHrDA"
# #Get Graph
# + id="zFih4RDEI55Q"
#flatten the list of MRR scores to prepare data for plotting
flattened_list = []
flattened_list.append(q6_mrr_scores_without_bias)
for each in q6_mmr_scores_with_bias:
flattened_list.append(each)
for each in q6_mmr_scores_highest_group_equal_weights:
flattened_list.append(each)
for each in q6_mmr_scores_two_highest_rec_equal_weights:
flattened_list.append(each)
# + id="ri_bVJYQiV5M" colab={"base_uri": "https://localhost:8080/", "height": 313} outputId="3c2c82ef-8cb7-4804-a57a-84a5595768e2"
#Plot the graph of the MRR against experiment
plt.plot(range(1,15), flattened_list)
plt.ylabel('MRR Score')
plt.xlabel('Bias Experiment')
plt.title('MRR/Bias Experiment')
# + id="clUydOZ0oBOB"
| RecSys_coursework_2021_2210049p.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# HIDDEN
from datascience import *
from prob140 import *
import numpy as np
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
# %matplotlib inline
from scipy import stats
# ### Least Squares ###
# We now turn to the conditional expectation $E(Y \mid X)$ viewed as an estimate or predictor of $Y$ given the value of $X$. As you saw in Data 8, the *mean squared error* of prediction can be used to compare predictors: those with small mean squared errors are better.
#
# In this section we will identify *least squares predictors*, that is, predictors that minimize mean squared error among all predictors in a specified class.
# ### Minimizing the MSE ###
# Suppose you are trying to estimate or predict the value of $Y$ based on the value of $X$. The predictor $E(Y \mid X) = b(X)$ seems to be a good one to use, based on the scatter plots we examined in the previous section.
#
# It turns out that $b(X)$ is the *best* predictor of $Y$ based on $X$, according to the least squares criterion.
#
# Let $h(X)$ be any function of $X$, and consider using $h(X)$ to predict $Y$. Define the *mean squared error of the predictor $h(X)$* to be
#
# $$
# MSE(h) ~ = ~ E\Big{(}\big{(}Y - h(X)\big{)}^2\Big{)}
# $$
#
# We will now show that $b(X)$ is the best predictor of $Y$ based on $X$, in the sense that it minimizes this mean squared error over all functions $h(X)$.
#
# To do so, we will use a fact we proved in the previous section:
#
# - If $g(X)$ is any function of $X$ then $E\big{(}(Y - b(X))g(X)\big{)} = 0$.
#
# Write $Y - h(X) = \big{(}Y - b(X)\big{)} + \big{(}b(X) - h(X)\big{)}$ and expand the square; the cross term below vanishes by this fact, applied with $g(X) = b(X) - h(X)$:
# \begin{align*}
# MSE(h) ~ &= ~ E\Big{(}\big{(}Y - h(X)\big{)}^2\Big{)} \\
# &= ~ E\Big{(}\big{(}Y - b(X)\big{)}^2\Big{)} + E\Big{(}\big{(}b(X) - h(X)\big{)}^2\Big{)} + 2E\Big{(}\big{(}Y - b(X)\big{)}\big{(}b(X) - h(X)\big{)}\Big{)} \\
# &= ~ E\Big{(}\big{(}Y - b(X)\big{)}^2\Big{)} + E\Big{(}\big{(}b(X) - h(X)\big{)}^2\Big{)} \\
# &\ge ~ E\Big{(}\big{(}Y - b(X)\big{)}^2\Big{)} \\
# &= ~ MSE(b)
# \end{align*}
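# The inequality can be spot-checked by simulation. A sketch with a made-up joint distribution in which $b(X) = E(Y \mid X)$ is known exactly by construction:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.integers(0, 3, size=200_000)   # X uniform on {0, 1, 2}
y = x ** 2 + rng.normal(size=x.size)   # so E(Y | X) = X**2 exactly

b = x ** 2          # the conditional mean b(X)
h = 3 * x - 1       # an arbitrary competing predictor h(X)

mse_b = np.mean((y - b) ** 2)
mse_h = np.mean((y - h) ** 2)
assert mse_b < mse_h                   # b(X) wins, as the calculation shows
print(round(mse_b, 1))                 # about 1.0, the noise variance
```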
# ### Least Squares Predictor ###
# The calculations in this section include much of the theory behind *least squares prediction* familiar to you from Data 8. The result above shows that the least squares predictor of $Y$ based on $X$ is the conditional expectation $b(X) = E(Y \mid X)$.
#
# In terms of the scatter diagram of observed values of $X$ and $Y$, the result is saying that the best predictor of $Y$ given $X$, by the criterion of smallest mean squared error, is the average of the vertical strip at the given value of $X$.
#
# Given $X$, the root mean squared error of this estimate is the *SD of the strip*, that is, the conditional SD of $Y$ given $X$:
#
# $$
# SD(Y \mid X) ~ = ~ \sqrt{Var(Y \mid X)}
# $$
#
# This is a random variable; its value is determined by the variation within the strip at the given value of $X$.
#
# Overall across the entire scatter diagram, the root mean squared error of the estimate $E(Y \mid X)$ is
#
# $$
# RMSE(b) ~ = ~ \sqrt{E\Big{(}\big{(}Y - b(X)\big{)}^2\Big{)}} ~ = ~ \sqrt{E\big{(} Var(Y \mid X) \big{)}}
# $$
# Notice that the result makes no assumption about the joint distribution of $X$ and $Y$. The scatter diagram of the generated $(X, Y)$ points can have any arbitrary shape. So the result can be impractical, as there isn't always a recognizable functional form for $E(Y \mid X)$.
#
# Sometimes we want to restrict our attention to a class of predictor functions of a specified type, and find the best one among those. The most important example of such a class is the set of all linear functions $aX + b$.
# ### Least Squares Linear Predictor ###
# Let $h(X) = aX + b$ for constants $a$ and $b$, and let $MSE(a, b)$ denote $MSE(h)$.
#
# $$
# MSE(a, b) ~ = ~ E\big{(} (Y - (aX + b))^2 \big{)}
# ~ = ~ E(Y^2) + a^2E(X^2) + b^2 -2aE(XY) - 2bE(Y) + 2abE(X)
# $$
#
#
#
# To find the *least squares linear predictor*, we have to minimize this MSE over all $a$ and $b$. We will do this using calculus, in two steps:
# - Fix $a$ and find the value $b_a^*$ that minimizes $MSE(a, b)$ for that fixed value of $a$.
# - Then plug in the minimizing value $b_a^*$ in place of $b$ and minimize $MSE(a, b_a^*)$ with respect to $a$.
#
# #### Step 1. ####
# Fix $a$ and minimize $MSE(a, b)$ with respect to $b$.
#
# $$
# \frac{d}{db} MSE(a, b) ~ = ~ 2b - 2E(Y) + 2aE(X)
# $$
#
# Set this equal to 0 and solve to see that the minimizing value of $b$ for the fixed value of $a$ is
#
# $$
# b_a^* ~ = ~ E(Y) - aE(X)
# $$
#
# #### Step 2. ####
# Now we have to minimize the following function with respect to $a$:
#
# \begin{align*}
# E\big{(} (Y - (aX + b_a^*))^2 \big{)} ~ &= ~
# E\big{(} (Y - (aX + E(Y) - aE(X)))^2 \big{)} \\
# &= ~ E\Big{(} \big{(} (Y - E(Y)) - a(X - E(X))\big{)}^2 \Big{)} \\
# &= ~ E\big{(} (Y - E(Y))^2 \big{)} - 2aE\big{(} (Y - E(Y))(X - E(X)) \big{)} + a^2E\big{(} (X - E(X))^2 \big{)} \\
# &= ~ Var(Y) - 2aCov(X, Y) + a^2Var(X)
# \end{align*}
#
# The derivative with respect to $a$ is $-2Cov(X, Y) + 2aVar(X)$. Setting this equal to 0, the minimizing value of $a$ is
#
# $$
# a^* ~ = ~ \frac{Cov(X, Y)}{Var(X)}
# $$
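# The two steps can be checked numerically on simulated linear-plus-noise data (a sketch; true slope 2, intercept 0). The formula values should not be beatable by nearby coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 2 * x + rng.normal(size=100_000)   # true slope 2, intercept 0

a_star = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # Cov(X, Y) / Var(X)
b_star = y.mean() - a_star * x.mean()                # E(Y) - a* E(X)

# nudging either coefficient away from the formula values raises the MSE
mse = lambda a, b: np.mean((y - (a * x + b)) ** 2)
assert mse(a_star, b_star) <= mse(a_star + 0.05, b_star)
assert mse(a_star, b_star) <= mse(a_star, b_star + 0.05)
```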
# ### Slope and Intercept of the Regression Line ###
# The least squares straight line is called the *regression line*. You now have a proof of its equation, familiar to you from Data 8. The slope and intercept are given by
#
# \begin{align*}
# \text{slope of regression line} ~ &= ~ \frac{Cov(X,Y)}{Var(X)} ~ = ~ r(X, Y) \frac{SD(Y)}{SD(X)} \\ \\
# \text{intercept of regression line} ~ &= ~ E(Y) - \text{slope} \cdot E(X)
# \end{align*}
#
# To derive the second expression for the slope, recall that in exercises you defined the *correlation* between $X$ and $Y$ to be
#
# $$
# r(X, Y) ~ = ~ \frac{Cov(X, Y)}{SD(X)SD(Y)}
# $$
# #### Regression in Standard Units ####
# If both $X$ and $Y$ are measured in standard units, then the slope of the regression line is the correlation $r(X, Y)$ and the intercept is 0.
#
# In other words, given that $X = x$ standard units, the predicted value of $Y$ is $r(X, Y)x$ standard units. When $r(X, Y)$ is positive but not 1, this result is called the *regression effect*: the predicted value of $Y$ is closer to 0 than the given value of $X$.
# It is important to note that the equation of the regression line holds regardless of the shape of the joint distribution of $X$ and $Y$. Also note that there is always a best straight line predictor among all straight lines, regardless of the relation between $X$ and $Y$. If the relation isn't roughly linear you won't want to use the best straight line for predictions, because the best straight line is best among a bad class of predictors. But it exists.
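# The standard-units statement is easy to verify by simulation: regressing standardized $Y$ on standardized $X$ should give slope $r(X, Y)$ and intercept 0 (a sketch on simulated data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50_000)
y = 0.6 * x + 0.8 * rng.normal(size=50_000)   # r(X, Y) close to 0.6

xs = (x - x.mean()) / x.std()   # both variables in standard units
ys = (y - y.mean()) / y.std()

r = np.corrcoef(x, y)[0, 1]
slope, intercept = np.polyfit(xs, ys, 1)      # least squares line
print(abs(slope - r) < 1e-8, abs(intercept) < 1e-8)  # True True
```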
| miscellaneous_notebooks/OLD/Prediction/Least_Squares.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python
# vim: tabstop=8 expandtab shiftwidth=4 softtabstop=4:
import cv2
import numpy as np
import tensorflow as tf
import csv
import os
import sklearn
from keras.models import Sequential
from keras.layers import Flatten, Dense, Lambda
from keras.layers.convolutional import Convolution2D, Cropping2D
from keras.layers.pooling import MaxPooling2D
from sklearn.model_selection import train_test_split
from sklearn.utils import shuffle
import matplotlib.pyplot as plt
lines = []
with open('../data/driving_log.csv') as csvfile:
reader = csv.reader(csvfile)
for line in reader:
lines.append(line)
#returns a numpy array that is a copy of the input array with one extra column of empty strings at its end
def appendEmptyRow(array):
__array = np.empty((array.shape[0], array.shape[1]+1),dtype=np.dtype('<U128')) #empty array of unicode 128 character strings, little endian
__array[:,0:array.shape[1]] = array
return __array
#augments and pre-processes (shuffles) the data
#reformats the data : [img_path angle]
def preprocess(samples):
#TODO calculate a more accurate correction factor
correction_factor = 0.2
samples=np.array(samples)
#augment with left and right images
samples_center = appendEmptyRow(samples[:,np.array([0,3])]) #the appended column marks flipped images; empty string means not flipped
samples_left = appendEmptyRow(samples[:,np.array([1,3])])
samples_left[:,1] = [str(float(sample_left) + correction_factor) for sample_left in samples_left[:,1]] #slightly shift the steering angle label for the left camera image
samples_right = appendEmptyRow(samples[:,np.array([2,3])])
samples_right[:,1] = [str(float(sample_right) - correction_factor) for sample_right in samples_right[:,1]]
samples_mirror = samples_center.copy() #copy, otherwise the centre samples' labels would be flipped too
samples_mirror[:,1] = [str(float(sample_mirror) *-1) for sample_mirror in samples_mirror[:,1]] #flip the angle
samples_mirror[:,-1] = 'f' #this one is flipped
#appending in a loop was the only way that worked
preprocessed_samples = samples_center
arrays_to_append = [samples_left,samples_right,samples_mirror]
for array in arrays_to_append :
preprocessed_samples = np.append(preprocessed_samples, array, axis=0)
print(preprocessed_samples.shape)
return shuffle(preprocessed_samples)
train_samples, validation_samples = train_test_split(lines, test_size= 0.2)
#TODO :make sure samples are shuffled somewhere in the process
def generator(samples, batch_size=32):
num_samples = len(samples)
while 1: # Loop forever so the generator never terminates
for offset in range(0, num_samples, batch_size):
batch_samples = samples[offset:offset+batch_size]
images = []
angles = []
for batch_sample in batch_samples:
name = '../data/IMG/'+batch_sample[0].split('/')[-1]
image = cv2.imread(name)
angle = float(batch_sample[1])
if bool(batch_sample[2]):
image = np.fliplr(image) # if this is one of the inverted samples, we must now invert the image
images.append(image)
angles.append(angle)
X_train = np.array(images)
y_train = np.array(angles)
yield sklearn.utils.shuffle(X_train, y_train)
# compile and train the model using the generator function
preprocessed_train_samples = preprocess(train_samples)
preprocessed_validation_samples = preprocess(validation_samples)
train_generator = generator(preprocessed_train_samples , batch_size=24)
validation_generator = generator(preprocessed_validation_samples , batch_size=24)
ch, row, col = 3, 160, 320
#model
model = Sequential()
model.add(Cropping2D(cropping = ((70,25),(0,0)),input_shape=(row,col,ch)))
model.add(Lambda(lambda x: (x-128)/128))
model.add(Convolution2D(16,3,3,
activation='relu',
border_mode= 'same'))
model.add(Convolution2D(16,3,3,
activation='relu',
border_mode='same'))
model.add(MaxPooling2D())
model.add(Convolution2D(24,3,3,
activation='relu',
border_mode='same'))
model.add(Convolution2D(24,3,3,
activation='relu',
border_mode='same'))
model.add(MaxPooling2D())
model.add(Convolution2D(32,3,3,
activation='relu',
border_mode='same'))
model.add(Convolution2D(32,3,3,
activation='relu',
border_mode='same'))
model.add(MaxPooling2D())
model.add(Flatten())
model.add(Dense(120))
model.add(Dense(84))
model.add(Dense(1))
#train
model.compile(loss='mse', optimizer='adam')
#If the above code throws exceptions, try:
history_object = model.fit_generator(train_generator, steps_per_epoch= len(preprocessed_train_samples)//24, # steps count batches (batch_size=24), not samples
validation_data=validation_generator, validation_steps=len(preprocessed_validation_samples)//24, epochs=5, verbose = 1)
model.save('../model.h5')
# +
#This is to display training / validation losses
### print the keys contained in the history object
print(history_object.history.keys())
### plot the training and validation loss for each epoch
plt.plot(history_object.history['loss'])
plt.plot(history_object.history['val_loss'])
plt.title('model mean squared error loss')
plt.ylabel('mean squared error loss')
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
plt.show()
| .ipynb_checkpoints/train-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# + [markdown] deletable=true editable=true
# <img src="Snapshot.png" width="320">
#
# # Snapshots
#
# The `dual_canvas.SnapshotCanvas` class wraps a dual canvas in
# a composite widget which creates and displays a snapshot of the
# drawn canvas. The snapshot is stored to a specified file path.
#
# If you save the widget state for the canvas the snapshot will
# appear when you reopen the notebook or "offline" when viewed
# using `nbviewer`.
#
# + deletable=true editable=true
from jp_doodle import dual_canvas
from IPython.display import display
# + deletable=true editable=true
# In this demonstration we do most of the work in Javascript.
demo = dual_canvas.SnapshotCanvas("Snapshot_example.png", width=320, height=220)
demo.display_all()
demo.js_init("""
//element.circle({name: "full filled", x:0, y:0, r:20, color:"green"});
element.circle({name: "half circle", x:50, y:0, r:20, color:"red",
arc: 2*Math.PI, start: Math.PI});
element.circle({name: "full stroked", x:0, y:-20, r:20, color:"cyan",
fill: false, lineWidth: 5});
element.circle({name: "half stroked transparent", x:50, y:30,
r:20, color:"rgba(2,22,222,0.5)",
fill: false, lineWidth: 10,
arc: Math.PI, start: 3*Math.PI/2});
// Fit the figure into the available space
element.fit(null, 10);
""")
# + [markdown] deletable=true editable=true
# It is possible to display the snapshot, snapshot button, and
# canvas separately or arranged differently in other composite
# widgets.
# + deletable=true editable=true
demo2 = dual_canvas.SnapshotCanvas("Snapshot_example2.png", width=320, height=220)
demo2.js_init("""
element.text({name: "full", x:-20, y:20,
text:"Cookie", font: "40pt Arial", color:"#ee3", background:"#faf"});
element.text({name: "rotated", x:-30, y:-40, text: "SNAP IT\u2192", color:"#7DD",
font: "40pt Times", degrees: 92});
element.circle({name: "alignment reference", x:20, y:-10, r:3, color:"red"});
element.text({name: "transparent", x:20, y:-10, text: "EYES ONLY",
color:"rgba(222,111,111,0.4)", degrees:45, align:"center",
valign: "center", font: "30pt Courier", background: "rgba(0,0,0,0.05)"});
element.text({name: "left aligned", text: "\u2190EXIT", x:20, y:-10,
degrees: -10, align:"left", color:"rgba(200,150,50,0.8)",
font: "20pt Arial"});
// Fit the figure into the available space
element.fit(null, 10);
""")
# + deletable=true editable=true
# the snapshot
display(demo2.snapshot_widget)
# + deletable=true editable=true
# the snapshot button
display(demo2.snapshot_button())
# + deletable=true editable=true
# the canvas
display(demo2)
# + deletable=true editable=true
| notebooks/Feature demonstrations/Snapshot.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from unityagents import UnityEnvironment
import numpy as np
import time
import matplotlib.pyplot as plt
from agent import Agent
from collections import deque
import torch
# -
#Multi-agent Reacher environment
env = UnityEnvironment(file_name='../../unity/Reacher_multi.app')
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
# ## Examine State and Action Space
# <p>
# In this environment, a double-jointed arm can move to target locations. A reward of +0.1 is provided for each step that the agent's hand is in the goal location. Thus, the goal of your agent is to maintain its position at the target location for as many time steps as possible.
# </p>
# <p>
# The observation space consists of 33 variables corresponding to position, rotation, velocity, and angular velocities of the arm. Each action is a vector with four numbers, corresponding to torque applicable to two joints. Every entry in the action vector must be a number between -1 and 1.
# </p>
# +
# reset the environment
env_info = env.reset(train_mode=True)[brain_name]
# number of agents
num_agents = len(env_info.agents)
print('Number of agents:', num_agents)
# size of each action
action_size = brain.vector_action_space_size
print('Size of each action:', action_size)
# examine the state space
states = env_info.vector_observations
state_size = states.shape[1]
print('There are {} agents. Each observes a state with length: {}'.format(states.shape[0], state_size))
print('The state for the first agent looks like:', states[0])
# -
# ## Random Walk
states = env_info.vector_observations # get the current state (for each agent)
scores = np.zeros(num_agents) # initialize the score (for each agent)
i = 0
while True:
i+=1
actions = np.random.randn(num_agents, action_size) # select an action (for each agent)
actions = np.clip(actions, -1, 1) # all actions between -1 and 1
env_info = env.step(actions)[brain_name] # send all actions to the environment
next_states = env_info.vector_observations # get next state (for each agent)
rewards = env_info.rewards # get reward (for each agent)
dones = env_info.local_done # see if episode finished
scores += env_info.rewards # update the score (for each agent)
states = next_states # roll over states to next time step
if np.any(dones): # exit loop if episode finished
print(i)
break
print('Total score (averaged over agents) this episode: {}'.format(np.mean(scores)))
# # DDPG Implementation
state_dim = int(env_info.vector_observations.shape[1])
action_dim = int(brain.vector_action_space_size)
def ddpg(n_episodes=1000, max_t=1000):
""" Deep Deterministic Policy Gradients
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
"""
scores_window = deque(maxlen=100)
scores = np.zeros(num_agents)
scores_episode = []
agents =[]
for i in range(num_agents):
#Declare Agent class and append to memory
agents.append(Agent(state_dim, action_dim, random_seed=0))
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name]
states = env_info.vector_observations
for agent in agents:
agent.reset()
scores = np.zeros(num_agents)
for t in range(max_t):
actions = np.array([agents[i].act(states[i]) for i in range(num_agents)]).squeeze(1)
env_info = env.step(actions)[brain_name] # send the action to the environment
next_states = env_info.vector_observations # get the next state
rewards = env_info.rewards # get the reward
dones = env_info.local_done
for i in range(num_agents):
agents[i].step(t, states[i], actions[i], rewards[i], next_states[i], dones[i])
states = next_states
scores += rewards
if t % 20 == 0:
print('\rTimestep {}\tScore: {:.2f}\tmin: {:.2f}\tmax: {:.2f}'
.format(t, np.mean(scores), np.min(scores), np.max(scores)), end="")
if np.any(dones):
break
score = np.mean(scores)
scores_window.append(score) # save most recent score
scores_episode.append(score)
print('\rEpisode {}\tScore: {:.2f}\tAverage Score: {:.2f}'.format(i_episode, score, np.mean(scores_window)), end="\n")
if i_episode % 100 == 0:
print('\rEpisode {}\tAverage Score: {:.2f}'.format(i_episode, np.mean(scores_window)))
if np.mean(scores_window)>=30.0:
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:.2f}'.format(i_episode-100, np.mean(scores_window)))
torch.save(Agent.actor_local.state_dict(), '../models/checkpoint_actor.pth')
torch.save(Agent.critic_local.state_dict(), '../models/checkpoint_critic.pth')
break
return scores_episode
scores = ddpg()
# # Analysis
#Display Scores
fig = plt.figure()
plt.plot(np.arange(1, len(scores) + 1), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.title("Score vs Episode #")
plt.show()
# ### Test the model
from actor import Actor
from critic import Critic
# +
#define seed
random_seed = 9
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") #Enable cuda if available
#Load actor weights
Agent.actor_local = Actor(state_dim, action_dim, random_seed).to(device)
Agent.actor_local.load_state_dict(torch.load('../models/checkpoint_actor.pth', map_location=device))
#Load critic weights
Agent.critic_local = Critic(state_dim, action_dim, random_seed).to(device)
Agent.critic_local.load_state_dict(torch.load('../models/checkpoint_critic.pth', map_location=device))
# -
#load 20 parallel agents
agents = []
for i in range(num_agents):
agents.append(Agent(state_dim, action_dim, random_seed=random_seed))
# test model
env_info = env.reset(train_mode=True)[brain_name] #reset env
states = env_info.vector_observations #get the initial states
scores = np.zeros(num_agents)
while True:
actions = np.array([agents[i].act(states[i]) for i in range(num_agents)]).squeeze(1)
env_info = env.step(actions)[brain_name] # send the action to the environment
next_states = env_info.vector_observations # get the next state
rewards = env_info.rewards # get the reward
dones = env_info.local_done
states = next_states
scores += rewards
print('\rScore: {:.2f}\tmin: {:.2f}\tmax: {:.2f}'
.format(np.mean(scores), np.min(scores), np.max(scores)), end="")
if np.any(dones):
break
# +
###The end!
| DDPG/src/code/Continuous_Control.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: firstEnv
# language: python
# name: firstenv
# ---
import deepfakebody.digestive.gitract as gi
help(gi.generate)
help(gi.generate_interpolation)
gi.generate("test_1k", "/work/vajira/DL/checkpoints/results", "/work/vajira/DL/checkpoints",
num_img_per_tile = 1,
num_of_outputs= 1000, trunc_psi=0.75)
gi.generate_interpolation("test_4", "/work/vajira/DL/checkpoints/results", "/work/vajira/DL/checkpoints",
num_img_per_tile=1,
num_of_outputs=1,
save_frames=True,
num_of_steps_to_interpolate=100,seed=100)
| tutorials/deepfake_gitract.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Several Tips for Improving Neural Network
# > In this post, we will look at how to improve the performance of a neural network. In particular, we will talk about the ReLU activation function, weight initialization, Dropout, and Batch Normalization
#
# - toc: true
# - badges: true
# - comments: true
# - author: <NAME>
# - categories: [Python, Deep_Learning, Tensorflow-Keras]
# - image: images/gradient_descent.gif
# +
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
plt.rcParams['figure.figsize'] = (16, 10)
plt.rcParams['text.usetex'] = True
plt.rc('font', size=15)
# -
# ## ReLU Activation Function
# ### Problem of Sigmoid
# Previously, we talked about the process that happens in a neural network. When the input passes through the network and generates the output, we call it **forward propagation**. From this, we can measure the error between the predicted output and the actual output. Of course, we want to train the neural network to minimize this error, so we differentiate the error and update the weights based on the gradient. This is called **backpropagation**.
#
# 
#
# $$g(z) = \frac{1}{1 + e^{-z}} $$
#
# This is the **sigmoid** function. We used it to measure the probability in binary classification, and its range is from 0 to 1. When we apply the sigmoid function at a node, it also affects backpropagation. Differentiating the sigmoid around its middle (inputs near 0) is no problem. The problem appears when the input goes to $\infty$ or $-\infty$: in both tails the gradient of the sigmoid goes to 0. Recall from the chain rule covered in the previous post that the gradient at each step is multiplied into the overall gradient, so if the gradient is near 0 at some nodes, the overall gradient is driven towards 0. This is called the **Vanishing Gradient** problem: the gradient becomes too small to update the weights.
# ### ReLU
# Here, we introduce the new activation function, **Rectified Linear Unit** (ReLU for short). Originally, simple linear unit is like this,
#
# $$ f(x) = x $$
#
# But we just consider the range of over 0, and ignore the value less than 0. We can express the form like this,
#
# $$ f(x) = \max(0, x) $$
#
# In other words, when the input is less than 0, the output is 0; when the input is larger than 0, the output is the input itself.
#
# 
#
# So how can we analyze its gradient? If $x$ is larger than 0, the gradient is 1. Unlike the sigmoid, no matter how many layers are stacked, as long as the input is larger than 0 the gradient is maintained and transferred to the next step of the chain rule. There is still a small problem when the input is less than 0: in that range the gradient is 0, so it is omitted, which resembles the sigmoid case. But at least we keep the gradient intact whenever the input is larger than 0.
#
# There are other variations for handling the vanishing gradient problem, such as the Exponential Linear Unit (ELU), Scaled Exponential Linear Unit (SELU), Leaky ReLU, and so on.
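# The contrast between the sigmoid and ReLU gradients can be checked numerically. A minimal NumPy sketch (an illustration, not part of the original post): evaluate both derivatives at a large input and see which one vanishes.

```python
import numpy as np

def sigmoid_grad(z):
    # derivative of the sigmoid: g'(z) = g(z) * (1 - g(z))
    g = 1.0 / (1.0 + np.exp(-z))
    return g * (1.0 - g)

def relu_grad(z):
    # derivative of ReLU: 1 for z > 0, 0 otherwise
    return (z > 0).astype(float)

z = np.array([-10.0, 0.0, 10.0])
print(sigmoid_grad(z))  # tiny at both extremes, 0.25 at the center
print(relu_grad(z))     # stays exactly 1 for positive inputs
```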
# ### Comparing the performance of each activation function
# In this example, we will use the MNIST dataset to compare the performance of each activation function.
# +
from tensorflow.keras.utils import to_categorical
from tensorflow.keras.datasets import mnist
# Load dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape, X_test.shape)
# Expand the dimension from 2D to 3D
X_train = tf.expand_dims(X_train, axis=-1)
X_test = tf.expand_dims(X_test, axis=-1)
print(X_train.shape, X_test.shape)
# -
# Maybe someone will be confused by expanding the dimension. That's because TensorFlow expects image inputs shaped like `[batch_size, height, width, channel]`, but the MNIST dataset included in Keras doesn't carry a channel axis. So we expand the dimension at the end of the dataset to express its channel (MNIST is grayscale, so the channel size is 1).
#
# And since the images are grayscale, the range of the data is from 0 to 255. It is helpful for training when the dataset is normalized, so we apply the normalization.
X_train = tf.cast(X_train, tf.float32) / 255.0
X_test = tf.cast(X_test, tf.float32) / 255.0
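# Both preprocessing steps can be mimicked with plain NumPy (a standalone sketch; a small fake batch stands in for MNIST's 60000 images):

```python
import numpy as np

# fake grayscale images with pixel values in 0..255
batch = np.random.default_rng(0).integers(0, 256, size=(8, 28, 28))
batch = np.expand_dims(batch, axis=-1)    # add the trailing channel axis
batch = batch.astype(np.float32) / 255.0  # scale pixel values into [0, 1]

print(batch.shape)  # (8, 28, 28, 1)
```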
# And the labels range from 0 to 9, and their type is categorical, so we need to convert them with one-hot encoding. Keras offers the `to_categorical` API to do this. (There are many approaches to one-hot encoding; feel free to try your own.)
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
# At last, we are going to implement the network. In this case, we will build it as a class. Note that, to implement a model as a class, we need to inherit from `tf.keras.Model` as the parent class.
#
# > Note: We add the `training` argument while implementing the `call` function. Its purpose is to separate behavior between training and test (or inference). It'll be used in the Dropout section, later in the post.
class Model(tf.keras.Model):
def __init__(self, label_dim):
super(Model, self).__init__()
# Weight initialization (Normal Initializer)
weight_init = tf.keras.initializers.RandomNormal()
# Sequential Model
self.model = tf.keras.Sequential()
self.model.add(tf.keras.layers.Flatten()) # [N, 28, 28, 1] -> [N, 784]
for _ in range(2):
# [N, 784] -> [N, 256] -> [N, 256]
self.model.add(tf.keras.layers.Dense(256, use_bias=True, kernel_initializer=weight_init))
self.model.add(tf.keras.layers.Activation(tf.keras.activations.relu))
self.model.add(tf.keras.layers.Dense(label_dim, use_bias=True, kernel_initializer=weight_init))
def call(self, x, training=None, mask=None):
x = self.model(x)
return x
# Next, we need to define the loss function. Here, we will use the softmax cross-entropy loss since our task is multi-class classification. Of course, TensorFlow offers a simple API to calculate it easily: just pass in the logits (the output generated by your model) and the labels.
# +
# Loss function: Softmax Cross Entropy
def loss_fn(model, images, labels):
logits = model(images, training=True)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
return loss
# Accuracy function for inference
def accuracy_fn(model, images, labels):
logits = model(images, training=False)
predict = tf.equal(tf.argmax(logits, -1), tf.argmax(labels, -1))
accuracy = tf.reduce_mean(tf.cast(predict, tf.float32))
return accuracy
# Gradient function
def grad(model, images, labels):
with tf.GradientTape() as tape:
loss = loss_fn(model, images, labels)
return tape.gradient(loss, model.variables)
# -
# Then, we can set model hyperparameters such as learning rate, epochs, batch sizes and so on.
# +
# Parameters
learning_rate = 0.001
batch_size = 128
training_epochs = 1
training_iter = len(X_train) // batch_size
label_dim=10
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
# -
# We can build the graph input from the original dataset; we already saw this in previous examples. Since memory usage would be very large if we loaded the whole dataset at once, we slice each dataset into batches.
# +
# Graph input using Dataset API
train_ds = tf.data.Dataset.from_tensor_slices((X_train, y_train)).\
shuffle(buffer_size=100000).\
prefetch(buffer_size=batch_size).\
batch(batch_size)
test_ds = tf.data.Dataset.from_tensor_slices((X_test, y_test)).\
prefetch(buffer_size=len(X_test)).\
batch(len(X_test))
# -
# In the training step, we instantiate the model and set up the checkpoint. A checkpoint saves the model during training, so if training fails due to an unexpected external problem, we can reload the model from the last saved point instead of starting over.
# +
import os
from time import time
def load(model, checkpoint_dir):
print(" [*] Reading checkpoints...")
ckpt = tf.train.get_checkpoint_state(checkpoint_dir)
if ckpt :
ckpt_name = os.path.basename(ckpt.model_checkpoint_path)
checkpoint = tf.train.Checkpoint(dnn=model)
checkpoint.restore(save_path=os.path.join(checkpoint_dir, ckpt_name))
counter = int(ckpt_name.split('-')[1])
print(" [*] Success to read {}".format(ckpt_name))
return True, counter
else:
print(" [*] Failed to find a checkpoint")
return False, 0
def check_folder(dir):
if not os.path.exists(dir):
os.makedirs(dir)
return dir
""" Writer """
checkpoint_dir = 'checkpoints'
logs_dir = 'logs'
model_dir = 'nn_softmax'
checkpoint_dir = os.path.join(checkpoint_dir, model_dir)
check_folder(checkpoint_dir)
checkpoint_prefix = os.path.join(checkpoint_dir, model_dir)
logs_dir = os.path.join(logs_dir, model_dir)
# +
model = Model(label_dim)
start_time =time()
# Set checkpoint
checkpoint = tf.train.Checkpoint(dnn=model)
# Restore checkpoint if it exists
could_load, checkpoint_counter = load(model, checkpoint_dir)
if could_load:
start_epoch = (int)(checkpoint_counter / training_iter)
counter = checkpoint_counter
print(" [*] Load SUCCESS")
else:
start_epoch = 0
start_iteration = 0
counter = 0
print(" [!] Load failed...")
# train phase
for epoch in range(start_epoch, training_epochs):
for idx, (train_input, train_label) in enumerate(train_ds):
grads = grad(model, train_input, train_label)
optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
train_loss = loss_fn(model, train_input, train_label)
train_accuracy = accuracy_fn(model, train_input, train_label)
for test_input, test_label in test_ds:
test_accuracy = accuracy_fn(model, test_input, test_label)
print(
"Epoch: [%2d] [%5d/%5d] time: %4.4f, train_loss: %.8f, train_accuracy: %.4f, test_Accuracy: %.4f" \
% (epoch, idx, training_iter, time() - start_time, train_loss, train_accuracy,
test_accuracy))
counter += 1
checkpoint.save(file_prefix=checkpoint_prefix + '-{}'.format(counter))
# -
# After training, we obtain a model with a training accuracy of 98.9% and a test accuracy of 97.1%. Also, the checkpoint was generated, so we don't need to train from the beginning of the process; we can just load the model.
# +
# Restore checkpoint if it exists
could_load, checkpoint_counter = load(model, checkpoint_dir)
if could_load:
start_epoch = (int)(checkpoint_counter / training_iter)
counter = checkpoint_counter
print(" [*] Load SUCCESS")
else:
start_epoch = 0
start_iteration = 0
counter = 0
print(" [!] Load failed...")
# train phase
for epoch in range(start_epoch, training_epochs):
for idx, (train_input, train_label) in enumerate(train_ds):
grads = grad(model, train_input, train_label)
optimizer.apply_gradients(grads_and_vars=zip(grads, model.variables))
train_loss = loss_fn(model, train_input, train_label)
train_accuracy = accuracy_fn(model, train_input, train_label)
for test_input, test_label in test_ds:
test_accuracy = accuracy_fn(model, test_input, test_label)
print(
"Epoch: [%2d] [%5d/%5d] time: %4.4f, train_loss: %.8f, train_accuracy: %.4f, test_Accuracy: %.4f" \
% (epoch, idx, training_iter, time() - start_time, train_loss, train_accuracy,
test_accuracy))
counter += 1
checkpoint.save(file_prefix=checkpoint_prefix + '-{}'.format(counter))
# -
# ## Weight Initialization
# The purpose of Gradient Descent is to find the point that minimize the loss.
#
# 
#
# So in this example, whatever the loss looks like with respect to x, y, and z, when we apply gradient descent we can find the minimum point. But what if the loss surface looks like this? How can we find the minimum point with gradient descent?
#
# 
#
# Previously, we initialized our weights by sampling randomly from a normal distribution. But if our weights happen to be initialized at $A$, we cannot reach the global minimum, only a local minimum; or we may get stuck in a saddle point.
#
# There are many approaches to avoid getting stuck in a local minimum or saddle point. One of them is initializing the weights according to some rule, and **Xavier initialization** is that kind of approach. Instead of sampling from a plain normal distribution, Xavier initialization samples the weights from a distribution with variance
#
# $$ Var_{Xe}(W) = \frac{2}{\text{Channel_in} + \text{Channel_out}} $$
#
# Because the numbers of input and output channels are taken into account in the weight sampling, there is a better chance of finding the global minimum. For the details, please check this [paper](http://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf).
#
# > Note: The TensorFlow layer API has a weight initialization argument (`kernel_initializer`), and its default value is `glorot_uniform`. Xavier initialization is also called Glorot initialization, after the first author of the paper that introduced it.
#
# **He initialization** is another way to initialize weights, designed especially for the ReLU activation function. Similar to Xavier initialization, He initialization samples the weights from a distribution with variance
#
# $$ Var_{He}(W) = \frac{4}{\text{Channel_in} + \text{Channel_out}} $$
#
# (When the numbers of input and output channels are equal, this reduces to the more common form $\frac{2}{\text{Channel_in}}$.)
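# As a quick sanity check (not in the original post), we can sample weights with the Xavier variance and confirm the empirical variance matches; the layer sizes are the ones used in the model above.

```python
import numpy as np

fan_in, fan_out = 784, 256
target_var = 2.0 / (fan_in + fan_out)  # Xavier variance

rng = np.random.default_rng(0)
W = rng.normal(loc=0.0, scale=np.sqrt(target_var), size=(fan_in, fan_out))

print(target_var)  # target variance of the weights
print(W.var())     # empirical variance, close to the target
```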
# ### Code
#
# In the previous example, we initialized the weights from a normal distribution. If we want to change this to Xavier or He initialization, we can define `weight_init` like this,
#
# ```python
# # Xavier Initializer
# weight_init = tf.keras.initializers.glorot_uniform()
#
# # He Initializer
# weight_init = tf.keras.initializers.he_uniform()
# ```
# ## Dropout
# Suppose we have following three cases,
#
# 
#
# **Under-fitting** means the trained model doesn't predict well even on the training dataset; of course, it doesn't work well on the test dataset either, which was unseen during training. We know this is a problem we need to care about. But there is also the opposite problem, **over-fitting**: the trained model works well on the training dataset but not on the test dataset, because the model has not been trained in terms of generalization. Many approaches can handle the overfitting problem, such as training the model with a larger dataset; the Dropout method is introduced here.
#
# 
#
# Previously, we just defined the layers while building the model and used every node in each layer. With Dropout, we instead disable some nodes with some probability. For example, if we define a drop rate of 50%, then only about 50% of the nodes in each layer are used at each training step.
#
# Thanks to Dropout, we can improve model performance in terms of generalization.
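# The mechanism can be sketched directly in NumPy (an illustration of "inverted dropout", not the Keras implementation): each node is kept with probability 1 - rate, and the survivors are scaled up so the expected activation is unchanged.

```python
import numpy as np

def dropout(x, rate, rng):
    # zero out nodes with probability `rate`, scale survivors by 1 / (1 - rate)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

rng = np.random.default_rng(42)
x = np.ones(10000)
y = dropout(x, rate=0.5, rng=rng)

print((y == 0).mean())  # roughly half of the nodes are disabled
print(y.mean())         # expected activation stays close to 1
```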
#
# ### Code
# TensorFlow provides a [Dropout layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout) as an API. If you want to use it, you can add it after each hidden layer like this,
#
# ```python
# for _ in range(2):
# # [N, 784] -> [N, 256] -> [N, 256]
# self.model.add(tf.keras.layers.Dense(256, use_bias=True, kernel_initializer=weight_init))
# self.model.add(tf.keras.layers.Activation(tf.keras.activations.relu))
# self.model.add(tf.keras.layers.Dropout(rate=0.5))
# ```
# ## Batch Normalization
# This section is related to the distribution of the information flowing through the network. If the distributions of the inputs and outputs are normal, the trained model may work well. But what if the distribution gets distorted while the information passes through the hidden layers?
#
# 
#
# Even if the information in the input layer is normally distributed, its mean and variance may be shifted and changed by the hidden layers. This is called **Internal Covariate Shift**. To avoid this, what can we do?
#
# If we recall some statistics, there is a way to convert a distribution to the unit normal distribution: **standardization**. We can apply it and then regenerate the distribution we want,
#
# $$ \bar{x} = \frac{x - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} \qquad \hat{x} = \gamma \bar{x} + \beta $$
#
# The term $\epsilon$ is a small constant for numerical stability; the standardization makes $\bar{x}$ (approximately) unit normal (0 mean and 1 variance). After scaling by $\gamma$ and shifting by $\beta$, we can produce the distribution we want.
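# The standardization step can be written out in a few lines of NumPy (a sketch of the formula above, not TensorFlow's implementation; in practice $\gamma$ and $\beta$ are learned parameters):

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    # standardize over the batch dimension, then scale and shift
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_bar = (x - mu) / np.sqrt(var + eps)
    return gamma * x_bar + beta

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(256, 4))  # shifted, scaled batch
out = batch_norm(x)

print(out.mean(axis=0))  # per-feature mean is (approximately) 0
print(out.std(axis=0))   # per-feature std is (approximately) 1
```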
#
# ### Code
# TensorFlow also provides a [BatchNormalization layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization) as an API. If you want to use it, you can add it after each hidden layer like this,
#
# ```python
# for _ in range(2):
# # [N, 784] -> [N, 256] -> [N, 256]
# self.model.add(tf.keras.layers.Dense(256, use_bias=True, kernel_initializer=weight_init))
# self.model.add(tf.keras.layers.BatchNormalization())
# self.model.add(tf.keras.layers.Activation(tf.keras.activations.relu))
# ```
# ## Summary
# In this post, we covered some techniques for improving a neural network model: the ReLU activation function, weight initialization, Dropout, and Batch Normalization.
| _notebooks/2020-09-18-01-Several-Tips-for-Improving-Neural-Network.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'age': [42, 52, 36, 24, 73],
'preTestScore': [4, 24, 31, 2, 3],
'postTestScore': [25, 94, 57, 62, 70]}
df = pd.DataFrame(data, columns = ['name', 'age', 'preTestScore', 'postTestScore'])
df
df['age'].sum()
df['preTestScore'].mean()
df['preTestScore'].cumsum()
df['preTestScore'].describe()
df['preTestScore'].count()
df['preTestScore'].min()
df['preTestScore'].max()
df['preTestScore'].median()
df['preTestScore'].var()
df['preTestScore'].std()
df['preTestScore'].skew()
df.corr(numeric_only=True)
df.cov(numeric_only=True)
# A nice video about descriptive statistics with Python here:
#
# https://www.youtube.com/watch?v=mWIwXqtZmd8
| Descriptive statistics with Pandas.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Chapter 6 projects
# pyperclip
# +
''' pw.py
An insecure password locker program.'''
PASSWORDS = {'email': '<PASSWORD>',
'blog': 'VmALvQyKAxiVH5G8v01if1MLZF3sdt',
'luggage': '12345'}
import sys,pyperclip
if len(sys.argv) < 2:
print('Usage: python pw.py [account] - copy account password')
sys.exit()
account = sys.argv[1] # first command line arg is the account name
if account in PASSWORDS.keys():
pyperclip.copy(PASSWORDS[account])
print('Password for ' + account + ' copied to clipboard.')
else:
print('There is no account named ' + account)
# +
''' bulletPointAdder.py
Adds * to every line of text in the clipboard '''
import pyperclip
text = pyperclip.paste()
text = text.split('\n')
new_text = ''
for i in range(len(text)):
text[i] = '* ' + text[i]
text = '\n'.join(text)
pyperclip.copy(text)
# test:
# ctrl+c a list (e.g. the one below), run the code, and paste it somewhere else (e.g. Word)
# Lists of animals
# Lists of aquarium life
# Lists of biologists by author abbreviation
# Lists of cultivars
# -
# # Chapter 12: Working with Excel Spreadsheets
# Useful OpenPyxl methods
# +
import openpyxl
import os
path = r'C:\Users\lvspi\Documents\Programing\Python\Automate the boring stuff\automate_online-materials\example.xlsx'
wb = openpyxl.load_workbook(path)
wb.sheetnames
# -
sheet = wb['Sheet1']
sheet['A1'].value
c = sheet['B1']
'Row ' + str(c.row) + ', Column ' + str(c.column ) + ' is ' + c.value
sheet.cell(1,1).value
# +
# slicing:
t = tuple(sheet['A1':'C3'])
t
# -
for row in sheet['A1':'C3']:
for cell in row:
print(cell.coordinate, cell.value)
print('--- end of row ---')
for cell in sheet.columns:
print(cell)
# +
from openpyxl.styles import Font
fontObj = Font(name='Times New Roman', bold=True, color='FFFF0000')  # aRGB red
sheet['A1'].font = fontObj
# +
# open the file and read either the formulas or their computed values
import openpyxl
wbFormulas = openpyxl.load_workbook('writeFormula.xlsx')
sheet = wbFormulas.active
sheet['A3'].value # '=SUM(A1:A2)'
wbDataOnly = openpyxl.load_workbook('writeFormula.xlsx', data_only=True)
sheet = wbDataOnly.active
sheet['A3'].value # 500
# +
# to write formulas, just pass them as strings:
import openpyxl
wb = openpyxl.Workbook()
sheet = wb.active
sheet['A1'] = 200
sheet['A2'] = 300
sheet['A3'] = '=SUM(A1:A2)'
wb.save('writeFormula.xlsx')
# -
# Project: Updating Spreadsheet
# +
'''updateProduce.py: Corrects costs in produce sales spreadsheet.'''
import openpyxl
path = r'C:\Users\lvspi\Documents\Programing\Python\Automate the boring stuff\automate_online-materials\produceSales.xlsx'
wb = openpyxl.load_workbook(path)
ws = wb['Sheet']
itens = {'Celery':1.19, 'Garlic':3.07,'Lemon':1.27}
for i in range(2, ws.max_row + 1):
if ws.cell(i,1).value in itens.keys():
ws.cell(i,2).value = itens[ws.cell(i,1).value]
wb.save(r'C:\Users\lvspi\Documents\Programing\Python\Automate the boring stuff\automate_online-materials\updatedProduceSales.xlsx')
| Automate the boring stuff.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#-Probability-Mass-Function" data-toc-modified-id="-Probability-Mass-Function-1"><span class="toc-item-num">1 </span><font face="gotham" color="purple"> Probability Mass Function</font></a></span><ul class="toc-item"><li><span><a href="#-Cumulative-Distribution-Function" data-toc-modified-id="-Cumulative-Distribution-Function-1.1"><span class="toc-item-num">1.1 </span><font face="gotham" color="purple"> Cumulative Distribution Function</font></a></span></li></ul></li><li><span><a href="#-Probability-Density-Function" data-toc-modified-id="-Probability-Density-Function-2"><span class="toc-item-num">2 </span><font face="gotham" color="purple"> Probability Density Function</font></a></span></li><li><span><a href="#Independent-Random-Variables" data-toc-modified-id="Independent-Random-Variables-3"><span class="toc-item-num">3 </span><font face="gotham" color="purple">Independent Random Variables</font></a></span></li><li><span><a href="#-Expected-Value-and-Variances" data-toc-modified-id="-Expected-Value-and-Variances-4"><span class="toc-item-num">4 </span><font face="gotham" color="purple"> Expected Value and Variances</font></a></span></li></ul></div>
# -
import numpy as np
import matplotlib.pyplot as plt
# # <font face="gotham" color="purple"> Probability Mass Function
# If $X$ is a discrete variable with a finite range $R_X = \{x_1,x_2,x_3...\}$, the probability mass function of $X$ is
#
# $$
# P_X(x_k)=P(X=x_k),\ \text{for } k=1,2,3\ldots
# $$
#
# which is a function which maps the possible values to the corresponding probabilities.
# For instance, a discrete uniform distribution of a fair dice looks like,
pmf = 1/6*np.ones(6)
diceN = np.arange(1,7)
plt.stem(diceN, pmf,use_line_collection = True)
plt.axis([0, 7, 0, .3])
plt.title('PMF of A Fair Dice')
plt.show()
# ## <font face="gotham" color="purple"> Cumulative Distribution Function
# The cumulative distribution function (CDF) of random variable $X$ is defined as
# $$
# F_{X}(x)=P(X \leq x), \text { for all } x \in \mathbb{R}
# $$
# The CDF of dice example is
pmf = 1/6*np.ones(6)
cdf = np.cumsum(pmf)
plt.stem(diceN, cdf ,use_line_collection = True)
plt.axis([0, 7, 0, 1.1])
plt.title('CDF of A Fair Dice')
plt.show()
# One particular useful formula for calculating probability in an interval is
#
# $$
# P(a<X\leq b) = F_X(b)-F_X(a)
# $$
#
# It is commonly used when we would like to know the probability of a range. For instance, what is the probability $P(2<X\leq5)$?
# Using the formula,
#
# $$
# P(2<X\leq5) = F_X(5) - F_X(2) = 5/6 - 2/6 = 1/2
# $$
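# We can verify this with the cumulative sum from the dice example (a quick check, not part of the original notebook):

```python
import numpy as np

pmf = 1/6 * np.ones(6)  # fair-dice PMF over the values 1..6
cdf = np.cumsum(pmf)

# P(2 < X <= 5) = F_X(5) - F_X(2); the value k sits at index k - 1
p = cdf[4] - cdf[1]
print(p)  # 0.5 (up to float rounding)
```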
# # <font face="gotham" color="purple"> Probability Density Function
# The probability density function (PDF) is used for continuous distributions.
#
# We denote the PDF as $f_X(x)$; however, any single realization of $x$ has probability $0$, for instance $P(X = 1)= 0$. Therefore, to obtain a positive probability, we consider a small interval $(x,\ x+\Delta]$
#
# $$
# f_{X}(x)=\lim _{\Delta \rightarrow 0^{+}} \frac{P(x<X \leq x+\Delta)}{\Delta}
# $$
# Furthermore, using the CDF formula and the definition of the derivative, the relation between the CDF and the PDF is
#
# $$
# f_{X}(x)=\lim _{\Delta \rightarrow 0} \frac{F_{X}(x+\Delta)-F_{X}(x)}{\Delta}=\frac{d F_{X}(x)}{d x}=F_{X}^{\prime}(x)
# $$
#
# We will see examples in the next chapter.
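# As a concrete check (an aside using the standard normal distribution, which is not an example from the text), a finite difference of the CDF approaches the PDF:

```python
import math

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def normal_pdf(x):
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

x, delta = 0.7, 1e-6
approx = (normal_cdf(x + delta) - normal_cdf(x)) / delta
print(approx, normal_pdf(x))  # the two values agree closely
```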
# # <font face="gotham" color="purple">Independent Random Variables
# Independent random variables are similar to independent events; recall that independent events have the property
#
# $$
# p(A\cap B) =p (A)p(B)
# $$
# Now consider two random variables $X$ and $Y$; they are independent as long as
#
# $$
# p(X=x,Y=y)=p(X=x)p(Y=y)
# $$
# In general, independent variables have the property:
#
# $$
# p(X_1=x_1,X_2=x_2, ..., X_n = x_n)=p(X_1=x_1)p(X_2=x_2)...p(X_n=x_n)=\prod_{i=1}^np(X_i=x_i)
# $$
# # <font face="gotham" color="purple"> Expected Value and Variances
# The expected value of discrete and continuous random variables are
#
# $$
# \text{Discrete:}\qquad E(X)=\sum_{i=1}^k x_ip_X(x_i)=\sum_{i=1}^k x_i p(X = x_i)
# $$
#
# $$\text{Continuous:}\qquad E(X) = \int_{-\infty}^{\infty}xf_X(x)dx$$
# They express the same idea: weight each possible value by its probability, then sum (or integrate).
# The variance of discrete and continuous random variables is defined similarly:
# $$
# \text{Discrete:}\qquad\operatorname{Var}(X)=E\left[\left(X-\mu_{X}\right)^{2}\right]=E (X^{2})-[E (X)]^{2}
# $$
#
#
# $$
# \begin{aligned}
# \text{Continuous:}\qquad
# \operatorname{Var}(X)&= E\left[\left(X-\mu_{X}\right)^{2}\right]=\int_{-\infty}^{\infty}\left(x-\mu_{X}\right)^{2} f_{X}(x) d x \\
# &=E (X^{2})-[E (X)]^{2}=\int_{-\infty}^{\infty} x^{2} f_{X}(x) d x-\mu_{X}^{2}
# \end{aligned}
# $$
# And a common method for manual calculation of variance
# $$
# \begin{aligned}
# \operatorname{Var}(X) &=E\left[\left(X-\mu_{X}\right)^{2}\right] \\
# &=E\left[X^{2}-2 \mu_{X} X+\mu_{X}^{2}\right] \\
# &=E\left[X^{2}\right]-2 E\left[\mu_{X} X\right]+E\left[\mu_{X}^{2}\right]\\
# &=E\left[X^{2}\right]-2 \mu_{X}^{2}+\mu_{X}^{2}\\
# &=E\left[X^{2}\right]-\mu_{X}^{2}
# \end{aligned}
# $$
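# These formulas are easy to verify for the fair dice from earlier (a quick numerical check): $E(X) = 3.5$ and $Var(X) = E(X^2) - [E(X)]^2 = 91/6 - 3.5^2 = 35/12$.

```python
import numpy as np

x = np.arange(1, 7)  # possible values of a fair dice
p = np.ones(6) / 6   # fair-dice PMF

mean = np.sum(x * p)              # E(X)
var = np.sum(x**2 * p) - mean**2  # E(X^2) - [E(X)]^2

print(mean)  # 3.5
print(var)   # 35/12, about 2.9167
```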
| Chapter 3 - PMF, PDF and CDF.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Lists in Python
#
# ## Structure:
#
# lista = [value, value, value, value, ...]
#
# lista[i] -> is the value at index i of the list. <br>
# Note: remember that in Python indices start at 0, so the first item of a list is lista[0]
#
# To replace a value in a list you can do:<br>
# lista[i] = new_value
#
# List of products of a store:
produtos = ['tv', 'celular', 'mouse', 'teclado', 'tablet']
# List of units sold for each product of the store
vendas = [1000, 1500, 350, 270, 900]
print('vendas do produto {} foram {}'.format(produtos[1],vendas[1]))
# ### In this case, the lists work as follows:
# + active=""
# produtos = ['tv', 'celular', 'mouse', 'teclado', 'tablet']
# 0 , 1 , 2 , 3 , 4
# vendas = [ 1000, 1500 , 350 , 270 , 900 ]
# -
vendas[3] = 600
print('vendas do produto {} foram {}'.format(produtos[3],vendas[3]))
texto = '<EMAIL>'
print(texto)
texto = '<EMAIL>'
texto = texto.replace('i','a')
print(texto)
| Arquivo aulas/Modulo 5/MOD5-Aula2.ipynb |
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.5.0
# language: julia
# name: julia-1.5
# ---
# + active=""
# Text provided under a Creative Commons Attribution license, CC-BY, Copyright (c) 2020, Cysor. All code is made available under the FSF-approved BSD-3 license. Adapted from CFDPython Copyright (c) Barba group - https://github.com/barbagroup/CFDPython
# -
# 12 steps to Navier–Stokes
# =====
# ***
# You see where this is going ... we'll do 2D diffusion now and next we will combine steps 6 and 7 to solve Burgers' equation. So make sure your previous steps work well before continuing.
# Step 7: 2D Diffusion
# ----
# ***
# And here is the 2D-diffusion equation:
#
# $$\frac{\partial u}{\partial t} = \nu \frac{\partial ^2 u}{\partial x^2} + \nu \frac{\partial ^2 u}{\partial y^2}$$
#
# You will recall that we came up with a method for discretizing second order derivatives in Step 3, when investigating 1-D diffusion. We are going to use the same scheme here, with our forward difference in time and two second-order derivatives.
# $$\frac{u_{i,j}^{n+1} - u_{i,j}^n}{\Delta t} = \nu \frac{u_{i+1,j}^n - 2 u_{i,j}^n + u_{i-1,j}^n}{\Delta x^2} + \nu \frac{u_{i,j+1}^n-2 u_{i,j}^n + u_{i,j-1}^n}{\Delta y^2}$$
#
# Once again, we reorganize the discretized equation and solve for $u_{i,j}^{n+1}$
# $$
# \begin{split}
# u_{i,j}^{n+1} = u_{i,j}^n &+ \frac{\nu \Delta t}{\Delta x^2}(u_{i+1,j}^n - 2 u_{i,j}^n + u_{i-1,j}^n) \\
# &+ \frac{\nu \Delta t}{\Delta y^2}(u_{i,j+1}^n-2 u_{i,j}^n + u_{i,j-1}^n)
# \end{split}
# $$
using Plots
# +
###variable declarations
nx = 31
ny = 31
nt = 17
ν = 0.05
Δx = 2 / (nx - 1)
Δy = 2 / (ny - 1)
σ = .25
Δt = σ * Δx * Δy / ν
x = range(0, stop=2, length=nx)
y = range(0, stop=2, length=ny)
u = ones(ny,nx)
###Assign initial conditions
##set hat function I.C. : u(.5<=x<=1 && .5<=y<=1 ) is 2
u[0.5 .≤ y .≤ 1, 0.5 .≤ x .≤ 1] .= 2
surface(x,y,u,colour=:viridis)
# -
function diffuse(nt)
    u = ones(ny, nx)
    u[0.5 .≤ y .≤ 1, 0.5 .≤ x .≤ 1] .= 2
    uⁿ⁺¹ = copy(u)  # work on a copy so the initial condition array is not aliased
    for n in 1:nt+1  ## loop across number of time steps
        uⁿ = copy(uⁿ⁺¹)
        nrows, ncols = size(uⁿ⁺¹)
        for j ∈ 1:nrows
            for i ∈ 1:ncols
                # Dirichlet boundary condition u = 1 on all four edges
                if j == 1 || j == nrows || i == 1 || i == ncols
                    uⁿ⁺¹[j,i] = 1.0
                else
                    # j indexes y (rows), i indexes x (columns); Δx = Δy here
                    uⁿ⁺¹[j,i] = uⁿ[j,i] +
                        ν*Δt/Δy^2 * (uⁿ[j + 1,i] - 2*uⁿ[j,i] + uⁿ[j - 1,i]) +
                        ν*Δt/Δx^2 * (uⁿ[j,i + 1] - 2*uⁿ[j,i] + uⁿ[j,i - 1])
                end
            end
        end
    end
    surface(x, y, uⁿ⁺¹, colour=:viridis, zlims=((1.0,2.0)))
end
diffuse(10)
diffuse(14)
diffuse(50)
# ## Learn More
# The video lesson that walks you through the details for Steps 5 to 8 is **Video Lesson [6](https://youtube.com/watch?v=tUg_dE3NXoY)** on YouTube:
| Lessons/09_Step_7.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import re
import numpy as np
import pandas as pd
import collections
from sklearn import metrics
from sklearn.preprocessing import LabelEncoder
import tensorflow as tf
from sklearn.model_selection import train_test_split
from unidecode import unidecode
from nltk.util import ngrams
from tqdm import tqdm
import time
# +
permulaan = [
'bel',
'se',
'ter',
'men',
'meng',
'mem',
'memper',
'di',
'pe',
'me',
'ke',
'ber',
'pen',
'per',
]
hujung = ['kan', 'kah', 'lah', 'tah', 'nya', 'an', 'wan', 'wati', 'ita']
def naive_stemmer(word):
assert isinstance(word, str), 'input must be a string'
hujung_result = [e for e in hujung if word.endswith(e)]
if len(hujung_result):
hujung_result = max(hujung_result, key = len)
if len(hujung_result):
word = word[: -len(hujung_result)]
permulaan_result = [e for e in permulaan if word.startswith(e)]
if len(permulaan_result):
permulaan_result = max(permulaan_result, key = len)
if len(permulaan_result):
word = word[len(permulaan_result) :]
return word
# +
def build_dataset(words, n_words):
count = [['GO', 0], ['PAD', 1], ['EOS', 2], ['UNK', 3]]
counter = collections.Counter(words).most_common(n_words)
count.extend(counter)
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
        index = dictionary.get(word, 3)  # 3 is the UNK index
        if index == 3:
            unk_count += 1
        data.append(index)
    count[3][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def classification_textcleaning(string):
string = re.sub(
'http\S+|www.\S+',
'',
' '.join(
[i for i in string.split() if i.find('#') < 0 and i.find('@') < 0]
),
)
string = unidecode(string).replace('.', ' . ').replace(',', ' , ')
string = re.sub('[^A-Za-z ]+', ' ', string)
string = re.sub(r'[ ]+', ' ', string).strip()
string = ' '.join(
[i for i in re.findall('[\\w\']+|[;:\-\(\)&.,!?"]', string) if len(i)]
)
string = string.lower().split()
string = [naive_stemmer(word) for word in string]
return ' '.join([word for word in string if len(word) > 1])
def str_idx(corpus, dic, maxlen, UNK = 3):
X = np.zeros((len(corpus), maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i].split()[:maxlen][::-1]):
X[i, -1 - no] = dic.get(k, UNK)
return X
# -
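# `str_idx` right-aligns each sentence: it keeps at most `maxlen` tokens, writes them from the right edge of the row, and left-pads with zeros; out-of-vocabulary tokens map to the UNK id. A small standalone check with a toy dictionary (the words and ids below are made up for illustration):

```python
import numpy as np

# standalone copy of the helper above, exercised on a toy vocabulary (3 = UNK)
def str_idx(corpus, dic, maxlen, UNK=3):
    X = np.zeros((len(corpus), maxlen))
    for i in range(len(corpus)):
        # keep at most `maxlen` tokens and fill the row from the right
        for no, k in enumerate(corpus[i].split()[:maxlen][::-1]):
            X[i, -1 - no] = dic.get(k, UNK)
    return X

dic = {'GO': 0, 'PAD': 1, 'EOS': 2, 'UNK': 3, 'raja': 4, 'benci': 5}
X = str_idx(['raja benci', 'benci tak dikenali'], dic, 5)
# first row is left-padded with zeros: [0, 0, 0, 4, 5]
# second row maps the unknown 'tak' and 'dikenali' to the UNK id 3
```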
classification_textcleaning('kerajaan sebenarnya sangat bencikan rakyatnya, minyak naik dan segalanya')
# +
with open('subjectivity-negative-translated.txt','r') as fopen:
texts = fopen.read().split('\n')
labels = [0] * len(texts)
with open('subjectivity-positive-translated.txt','r') as fopen:
positive_texts = fopen.read().split('\n')
labels += [1] * len(positive_texts)
texts += positive_texts
assert len(labels) == len(texts)
# -
for i in range(len(texts)):
texts[i] = classification_textcleaning(texts[i])
concat = ' '.join(texts).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocabulary size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
max_features = len(dictionary)
maxlen = 100
batch_size = 32
embedded_size = 256
X = str_idx(texts, dictionary, maxlen)
train_X, test_X, train_Y, test_Y = train_test_split(X,
labels,
test_size = 0.2)
class Model:
def __init__(
self, embedded_size, dict_size, dimension_output, learning_rate
):
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None])
encoder_embeddings = tf.Variable(
tf.random_uniform([dict_size, embedded_size], -1, 1)
)
encoder_embedded = tf.nn.embedding_lookup(encoder_embeddings, self.X)
self.logits = tf.identity(
tf.layers.dense(
tf.reduce_mean(encoder_embedded, 1), dimension_output
),
name = 'logits',
)
self.cost = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits = self.logits, labels = self.Y
)
)
self.optimizer = tf.train.AdamOptimizer(learning_rate).minimize(
self.cost
)
correct_pred = tf.equal(
tf.argmax(self.logits, 1, output_type = tf.int32), self.Y
)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
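# The graph above is a fastText-style classifier: look up one embedding per token, average over the sequence axis (padding positions included), and feed the pooled vector through a single dense layer. A NumPy sketch of that forward pass with random weights (purely illustrative, not the trained model):

```python
import numpy as np

rng = np.random.default_rng(0)
dict_size, embedded_size, dimension_output = 50, 8, 2
emb = rng.normal(size=(dict_size, embedded_size))       # embedding table
W = rng.normal(size=(embedded_size, dimension_output))  # dense-layer weights
b = np.zeros(dimension_output)

X = np.array([[4, 7, 9, 0, 0]])      # one padded sentence of token ids
pooled = emb[X].mean(axis=1)         # like tf.reduce_mean(encoder_embedded, 1)
logits = pooled @ W + b              # like tf.layers.dense(...)
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # softmax
```

# Note the mean is taken over all positions, pads included — the same behaviour as `tf.reduce_mean` in the graph above.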
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(embedded_size, max_features, 2, 5e-4)
sess.run(tf.global_variables_initializer())
saver = tf.train.Saver(tf.trainable_variables())
saver.save(sess, 'fast-text/model.ckpt')
strings = ','.join(
[
n.name
for n in tf.get_default_graph().as_graph_def().node
if ('Variable' in n.op
or 'Placeholder' in n.name
or 'logits' in n.name)
and 'Adam' not in n.name
and 'beta' not in n.name
]
)
strings.split(',')
tf.trainable_variables()
# +
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(
range(0, len(train_X), batch_size), desc = 'train minibatch loop'
)
for i in pbar:
batch_x = train_X[i : min(i + batch_size, train_X.shape[0])]
batch_y = train_Y[i : min(i + batch_size, train_X.shape[0])]
acc, cost, _ = sess.run(
[model.accuracy, model.cost, model.optimizer],
feed_dict = {
model.X: batch_x,
model.Y: batch_y
},
)
assert not np.isnan(cost)
train_loss += cost
train_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc = 'test minibatch loop')
for i in pbar:
batch_x = test_X[i : min(i + batch_size, test_X.shape[0])]
batch_y = test_Y[i : min(i + batch_size, test_X.shape[0])]
acc, cost = sess.run(
[model.accuracy, model.cost],
feed_dict = {
model.X: batch_x,
model.Y: batch_y
},
)
test_loss += cost
test_acc += acc
pbar.set_postfix(cost = cost, accuracy = acc)
train_loss /= len(train_X) / batch_size
train_acc /= len(train_X) / batch_size
test_loss /= len(test_X) / batch_size
test_acc /= len(test_X) / batch_size
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time() - lasttime)
print(
'epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'
% (EPOCH, train_loss, train_acc, test_loss, test_acc)
)
EPOCH += 1
saver.save(sess, "fast-text/model.ckpt")
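# The loop above implements patience-based early stopping: training halts once the validation accuracy fails to improve for `EARLY_STOPPING` consecutive epochs. The bookkeeping can be isolated into a small helper (a sketch, not part of the original notebook):

```python
def early_stopping_done(history, patience=3):
    """True once the running-best validation accuracy has not improved
    for `patience` consecutive epochs."""
    best, since_best = float('-inf'), 0
    for acc in history:
        if acc > best:
            best, since_best = acc, 0
        else:
            since_best += 1
    return since_best >= patience

# improves at epoch 1, then stalls for three epochs -> stop
print(early_stopping_done([0.50, 0.60, 0.59, 0.58, 0.57]))  # True
```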
# +
real_Y, predict_Y = [], []
pbar = tqdm(
range(0, len(test_X), batch_size), desc = 'validation minibatch loop'
)
for i in pbar:
batch_x = test_X[i : min(i + batch_size, test_X.shape[0])]
batch_y = test_Y[i : min(i + batch_size, test_X.shape[0])]
predict_Y += np.argmax(
sess.run(
model.logits, feed_dict = {model.X: batch_x, model.Y: batch_y}
),
1,
).tolist()
real_Y += batch_y
# -
from sklearn import metrics
print(metrics.classification_report(real_Y, predict_Y, target_names = ['-','+']))
text = 'kerajaan sebenarnya sangat sayangkan rakyatnya, tetapi sebenarnya benci'
new_vector = str_idx([classification_textcleaning(text)],dictionary, len(text.split()))
sess.run(tf.nn.softmax(model.logits), feed_dict={model.X:new_vector})
import json
with open('fast-text-subjective.json','w') as fopen:
fopen.write(json.dumps({'dictionary':dictionary,'reverse_dictionary':rev_dictionary}))
def freeze_graph(model_dir, output_node_names):
if not tf.gfile.Exists(model_dir):
raise AssertionError(
"Export directory doesn't exists. Please specify an export "
'directory: %s' % model_dir
)
checkpoint = tf.train.get_checkpoint_state(model_dir)
input_checkpoint = checkpoint.model_checkpoint_path
absolute_model_dir = '/'.join(input_checkpoint.split('/')[:-1])
output_graph = absolute_model_dir + '/frozen_model.pb'
clear_devices = True
with tf.Session(graph = tf.Graph()) as sess:
saver = tf.train.import_meta_graph(
input_checkpoint + '.meta', clear_devices = clear_devices
)
saver.restore(sess, input_checkpoint)
output_graph_def = tf.graph_util.convert_variables_to_constants(
sess,
tf.get_default_graph().as_graph_def(),
output_node_names.split(','),
)
with tf.gfile.GFile(output_graph, 'wb') as f:
f.write(output_graph_def.SerializeToString())
print('%d ops in the final graph.' % len(output_graph_def.node))
freeze_graph('fast-text', strings)
def load_graph(frozen_graph_filename):
with tf.gfile.GFile(frozen_graph_filename, 'rb') as f:
graph_def = tf.GraphDef()
graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as graph:
tf.import_graph_def(graph_def)
return graph
g = load_graph('fast-text/frozen_model.pb')
x = g.get_tensor_by_name('import/Placeholder:0')
logits = g.get_tensor_by_name('import/logits:0')
test_sess = tf.InteractiveSession(graph = g)
test_sess.run(tf.nn.softmax(logits), feed_dict = {x: new_vector})
| session/subjectivity/fast-text.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Tutorial 3.2. Response of an SDoF system
# ### Description: In some cases a single-degree-of-freedom (SDoF) model can be used to describe the structural behaviour. The response of an SDoF system under dynamic loads can be computed with different direct time-integration schemes, which are presented here. The results are compared with the analytical solutions from basic structural dynamics, and some exercises are proposed.
#
# #### Students are advised to complete the exercises.
# Project : Structural Wind Engineering WS19-20
# Chair of Structural Analysis @ TUM - <NAME>, <NAME>
#
# Author : <EMAIL> <EMAIL>
#
# Created on: 15.11.2015
#
# Last update: 27.09.2019
# ##### Contents
#
# 1. Structural response of a SDoF system under dynamic loads
# 2. Comparison with analytical solutions
# 3. Comparison of the performance and accuracy of different numerical (time) integration schemes
# +
# import python modules
import time
import matplotlib.pyplot as plt
import numpy as np
# import own modules
import structure_sdof as s_sdof
# -
# #### Creating the time instances as an array
# The start time, end time and the number of time steps are specified here for generating the time series.
# start time
start_time = 0.0
# end time
end_time = 10.0
# steps
n_steps = 100000
# time step
delta_time = end_time / (n_steps-1)
# time series
# generate grid size vectors 1D
time_series = np.arange(start_time, end_time + delta_time, delta_time)
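# A note on the time grid: `np.arange` with a floating-point step may include or drop the end point depending on round-off. `np.linspace` states the number of points directly and is usually the safer idiom (an alternative, not what the notebook uses):

```python
import numpy as np

start_time, end_time, n_steps = 0.0, 10.0, 100000
delta_time = end_time / (n_steps - 1)

# equivalent grid with an explicit point count, immune to arange round-off
time_series = np.linspace(start_time, end_time, n_steps)
```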
# ### Modeling of the structure
# <img src="example_sdof.png" alt="Drawing" style="width: 400px;"/>
# ### Dynamic analysis
# The response of an SDoF system under dynamic loading is computed by different time integration schemes, three of which are presented in this section.
# 1. __Generalised-Alpha__
# 2. __Euler First Order__
# 3. __Euler First and Second Order__
# _THE OBJECT-ORIENTED GENERALIZED-ALPHA SOLVER
# Implementation adapted from <NAME> (2014). Original implementation by <NAME> described in: Formulation of the Generalized-Alpha method for LAGRANGE. Technical Report, Chair of Structural Analysis @TUM, 2012.
# See <NAME>, <NAME>: A time integration algorithm for structural dynamics
# with improved numerical dissipation: the generalized-alpha method. ASME J. Appl.
# Mech., 60:371-375,1993._
#
# _THE EULER ALGORITHM USING FIRST AND SECOND ORDER APPROXIMATION
# Implementation of the well-known finite difference approach, theory also
# described in <NAME>, <NAME>: On the Simulations of the Classical
# Harmonic Oscillator Equations by Difference Equations, PY 502, Hindawi Publishing
# Corporation, Advances in Difference Equations, Volume 2006. An algorithmic description
# can also be found in H.P. Gavin: Numerical Integration in Structural Dynamics,
# CEE 541, Structural Dynamics, Department of Civil & Environmental Engineering,
# Duke University Fall 2016._
# ###### Structural setup
# mass
m = 0.1
# eigenfrequency
eigen_f = 10.0
# stiffness
k = m * (eigen_f * 2 * np.pi)**2
# damping ratio
# zero damping in this case
xi = 0.00
# damping coefficient
b = xi * 2.0 * np.sqrt(m * k)
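# A quick consistency check of this setup: the stiffness is chosen so that the undamped natural frequency comes back out as 10 Hz (a standalone recomputation of the cells above):

```python
import numpy as np

m, eigen_f, xi = 0.1, 10.0, 0.0
k = m * (eigen_f * 2 * np.pi) ** 2   # stiffness from the target eigenfrequency
b = xi * 2.0 * np.sqrt(m * k)        # damping coefficient (zero here)

omega = np.sqrt(k / m)               # circular eigenfrequency [rad/s]
f_check = omega / (2 * np.pi)        # back to [Hz]
```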
# ###### Initial conditions
# displacement
u0 = 1.0
# velocity
v0 = 0.0
# acceleration
a0 = 0.0
# ###### External loading
# Three types of loads are defined here:
# 1. Free vibration case - No external loads
# 2. Harmonic excitation
# 3. Superposed signal
# +
# sine with given amplitude = 1 and frequency = 10 Hz
sin_freq = 10
sin_ampl = 1
sin_series = sin_ampl * np.sin( 2 * np.pi * sin_freq * time_series)
# normal random signal with given mean m = 0 and standard dev std = 0.25 ->
rand_m = 0.0
rand_std = 0.25
rand_series = np.random.normal(rand_m, rand_std, len(time_series))
# constant signal with given amplitude = 10
const_ampl = 10
const_series = const_ampl * np.ones(len(time_series))
# superposing the above signals
# superposition weighting
coef_signal1 = 1
coef_signal2 = 0.25
coef_signal3 = 0.25
superposed_series = coef_signal1 * const_series + coef_signal2 * sin_series + coef_signal3 * rand_series
zero_series = np.zeros(len(time_series))
# the external force: here choosing the zero_series, so no external load
ext_force_series = zero_series
# -
# ###### Let us plot the excitation force function
# plot for force
plt.figure(num=1, figsize=(15, 4))
plt.plot(time_series, ext_force_series, "-k", lw=0.5)
plt.ylabel('Force [N]')
plt.xlabel('Time [s]')
plt.title("Force - external")
plt.grid(True)
# ###### Analytical solutions
# Analytical solutions are available for
# 1. Undamped free vibration with initial displacement
# 2. Undamped forced vibration under harmonic excitation with no initial displacement
#
# Refer to:
#
# [<NAME>, Dynamics of Structures: Theory and Applications to Earthquake Engineering,
# Pearson Prentice Hall, 2014](https://opac-ub-tum-de.eaccess.ub.tum.de/TouchPoint/perma.do?q=+1035%3D%22BV043635029%22+IN+%5B2%5D&v=tum&l=de)
#
# [<NAME>, <NAME>ukonstruktionen, 2017](https://link-springer-com.eaccess.ub.tum.de/book/10.1007%2F978-3-8348-2109-6)
#
# for detailed descriptions.
# +
# undamped harmonic oscillation with only initial displacement
omega = np.sqrt(k / m)
u0an = 1.0
ampl = np.sqrt(u0an**2 + (v0/omega)**2)
theta = np.pi / 2  # phase np.arctan2(u0an*omega, v0); equals pi/2 here since v0 = 0
disp_analytic_wo_ext_force = ampl * np.sin(omega * time_series + theta)
disp_analytic_wo_ext_force_limit = u0an * np.ones(len(time_series))
# undamped harmonic oscillation with only external force
u0an = 0
# u0 = 0, v0 = 0, omegaF = omega
disp_analytic_w_ext_force = np.zeros(len(time_series))
disp_analytic_w_ext_force_limit = np.zeros(len(time_series))
for i in range(len(time_series)):
    # resonance solution: u(t) = p0/(2k) * (sin(omega t) - omega t * cos(omega t))
    disp_analytic_w_ext_force[i] = sin_ampl /2/k * np.sin(omega * time_series[i])
    disp_analytic_w_ext_force[i] -= sin_ampl /2/k * (omega * time_series[i]) * np.cos(omega * time_series[i])
    disp_analytic_w_ext_force_limit[i] = sin_ampl /2/k * (omega * time_series[i])
delta_omega_ratio = 0.01
forcing_freq_ratio = np.arange(0.0, 3.0 + delta_omega_ratio, delta_omega_ratio)
def determine_transmissibility_funcion(forcing_freq_ratio, damping_ratio):
transmissibility_function = np.zeros(len(forcing_freq_ratio))
for i in range(len(forcing_freq_ratio)):
numerator = np.sqrt(1 + (2 * damping_ratio * forcing_freq_ratio[i])**2)
denominator = np.sqrt((1 - forcing_freq_ratio[i]**2)**2 + (2 * damping_ratio * forcing_freq_ratio[i])**2)
transmissibility_function[i] = numerator / denominator
return transmissibility_function
# -
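# Two properties of the transmissibility curve worth checking against the plot below: it equals 1 at $r = 0$ and again at $r = \sqrt{2}$ for any damping ratio, and it peaks near $r = 1$. A vectorized restatement of the same formula (a sketch, equivalent to the loop-based function above):

```python
import numpy as np

def transmissibility(r, zeta):
    # same formula as determine_transmissibility_funcion, vectorized
    r = np.asarray(r, dtype=float)
    num = np.sqrt(1 + (2 * zeta * r) ** 2)
    den = np.sqrt((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2)
    return num / den

vals = transmissibility([0.0, np.sqrt(2.0)], 0.05)  # both equal 1
peak = transmissibility(1.0, 0.05)                  # near-resonance amplification
```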
# ###### Let us plot the function
# For 1. Undamped free vibration with initial displacement
plt.figure(num=2, figsize=(15, 4))
plt.title('Undamped free vibration with initial displacement u0 = ' + str(u0))
plt.plot(time_series, disp_analytic_wo_ext_force, "-k", lw=1)
# upper and lower limits as straight red dashed lines
plt.plot(time_series, disp_analytic_wo_ext_force_limit, "-.r", lw=1)
plt.plot(time_series, -disp_analytic_wo_ext_force_limit, "-.r", lw=1)
# x_axis_end = end_time
# plt.xlim([0, x_axis_end])
plt.ylabel('Displacement [m]')
plt.xlabel('Time [s]')
plt.grid(True)
# Observe the limiting values of the function
# For 2. Undamped forced vibration under harmonic excitation with no initial displacement
plt.figure(num=3, figsize=(15, 4))
plt.title('Harmonic (sinusoidal) excitation of the undamped system without initial displacement')
plt.plot(time_series, disp_analytic_w_ext_force, "-k", lw=0.5)
# upper and lower limits as straight red dashed lines
plt.plot(time_series, disp_analytic_w_ext_force_limit, "-.r", lw=0.5)
plt.plot(time_series, -disp_analytic_w_ext_force_limit, "-.r", lw=0.5)
# plt.xlim([0, x_axis_end])
plt.ylabel('Displacement [m]')
plt.xlabel('Time [s]')
plt.grid()
# The dynamic amplification is dependent on the ratio of forcing frequency to eigen frequency
plt.figure(num=4, figsize=(8, 6))
plt.title('Frequency response -> transmissibility')
plt.plot(forcing_freq_ratio,
determine_transmissibility_funcion(forcing_freq_ratio, 0.00001),
"-r", lw=0.5)
plt.plot(forcing_freq_ratio,
determine_transmissibility_funcion(forcing_freq_ratio, 0.01),
"-.k", label="xi = 0.01", lw=0.5)
plt.plot(forcing_freq_ratio,
determine_transmissibility_funcion(forcing_freq_ratio, 0.05),
"--b", label="xi = 0.05", lw=0.5)
plt.plot(forcing_freq_ratio,
determine_transmissibility_funcion(forcing_freq_ratio, 0.1),
":g", label="xi = 0.1", lw=0.5)
plt.xlim([forcing_freq_ratio[0], forcing_freq_ratio[-1]])
plt.ylim([0.0, 60.0])
plt.ylabel('Transmissibility [-]')
plt.xlabel('Ratio of forcing frequency-to-eigenfrequency [-]')
plt.legend(loc="best")
plt.grid(True)
# ##### Time integration schemes
#
# For solving the equation of motion at each time step different time integration schemes can be used.
# Here in this exercise three time integration implementations are available.
# 1. Euler 1st : the acceleration is approximated by a 1st-order Euler difference of the velocity, and the velocity by a 1st-order Euler difference of the displacement
# 2. Euler 1st and 2nd : here the acceleration is approximated by a 2nd-order difference of the displacements and the velocity by a 1st-order difference of the displacements. Forward, backward and central differences are available for the velocities (check block 12 for details)
# 3. A Generalized alpha method for time integration.
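# Before running the full comparison, it is worth seeing why the explicit Euler 1st-order scheme is suspect for an undamped oscillator: its amplification factor per step is $\sqrt{1 + (\omega \Delta t)^2} > 1$, so the scheme injects energy every step. A stripped-down run with this notebook's parameters (a standalone sketch, not using `structure_sdof`):

```python
import numpy as np

# forward Euler for the undamped oscillator m*u'' + k*u = 0
m, eigen_f = 0.1, 10.0
k = m * (2 * np.pi * eigen_f) ** 2
dt, n_steps = 1e-4, 100_000          # roughly the time grid used above

u, v = 1.0, 0.0
energy0 = 0.5 * m * v**2 + 0.5 * k * u**2
for _ in range(n_steps):
    # simultaneous update: v uses the *old* u, exactly the scheme in the loop below
    u, v = u + dt * v, v - dt * (k / m) * u
energy = 0.5 * m * v**2 + 0.5 * k * u**2
growth = energy / energy0            # > 1: energy grows without bound
```

# The growth is exactly $(1 + \omega^2 \Delta t^2)^{n}$ per run, roughly a factor of 50 for these numbers.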
# numerical parameter -> only needed for the GeneralizedAlpha time integration scheme
p_inf = 0.15
# create an object: structure - to be used by the GeneralizedAlpha scheme
structure = s_sdof.StructureSDOF(delta_time, m, b, k, p_inf, u0, v0, a0)
# structure.print_setup()
# ##### Tip: Have a look at "structure_sdof.py" for details
# +
# data for storing results
# using objects
# standard python dictionaries would also be a good option
# create a SampleData class
class SampleData(): pass
# initiate objects and labels
data_euler1 = SampleData()
data_euler1.label = "Euler 1st"
data_euler12 = SampleData()
data_euler12.label = "Euler 1st & 2nd"
data_gen_alpha = SampleData()
data_gen_alpha.label = "Gen Alpha"
# lists to store the results
data_euler1.disp = []
data_euler1.acc = []
data_euler1.vel = []
data_euler12.disp = []
data_euler12.acc = []
data_euler12.vel = []
data_gen_alpha.disp = []
data_gen_alpha.acc = []
data_gen_alpha.vel = []
# computation time for each method
data_euler1.computation_time = 0.0
data_euler12.computation_time = 0.0
data_gen_alpha.computation_time = 0.0
# initial values
data_euler1.disp.append(u0)
data_euler1.vel.append(v0)
data_euler1.acc.append(a0)
data_euler12.disp.append(u0)
data_euler12.vel.append(v0)
data_euler12.acc.append(a0)
data_gen_alpha.disp.append(u0)
data_gen_alpha.vel.append(v0)
data_gen_alpha.acc.append(a0)
# more initial values for the time integration schemes
data_euler1.un1 = u0
data_euler1.vn1 = v0
data_euler1.an1 = a0
data_euler12.un2 = u0
data_euler12.un1 = u0 - delta_time * v0 + delta_time**2 /2 * a0
data_euler12.vn1 = v0
data_euler12.an1 = a0
# -
# ###### Time loop: computing the response at each time instant
# interested students may refer to [<NAME>, <NAME>](https://link.springer.com/content/pdf/10.1155%2FADE%2F2006%2F40171.pdf) (2006) for details on discretization of Euler time integration
for i in range(1,len(time_series)):
currentTime = time_series[i]
#===========================================================================
## Euler 1st order
t = time.time()
# solve the time integration step
# first order approximation of acceleration and velocity, respectively
data_euler1.un0 = data_euler1.un1 + delta_time * data_euler1.vn1
data_euler1.vn0 = data_euler1.vn1 + delta_time * data_euler1.an1
data_euler1.an0 = 1/m * (ext_force_series[i] - b * data_euler1.vn0 - k * data_euler1.un0)
# append results to list
data_euler1.disp.append(data_euler1.un0)
data_euler1.vel.append(data_euler1.vn0)
data_euler1.acc.append(data_euler1.an0)
# update results
data_euler1.un1 = data_euler1.un0
data_euler1.vn1 = data_euler1.vn0
data_euler1.an1 = data_euler1.an0
# elapsed time accumulated
data_euler1.computation_time += time.time() - t
#===========================================================================
## Euler 1st and 2nd order
t = time.time()
# solve the time integration step
# second order approximation of acceleration, first order for velocity
# version 1 - eq. 5.3
# LHS = m
# RHS = ext_force_series[i-1] * delta_time**2
# RHS += data_euler12.un1 * (2*m - b * delta_time - k *dt**2)
# RHS += data_euler12.un2 * (-m + b * delta_time)
# version 2 - eq. 5.4 from <NAME>, <NAME> or eq. 6 from <NAME>
LHS = m + b * delta_time/2
RHS = ext_force_series[i-1] * delta_time**2
RHS += data_euler12.un1 * (2*m - k * delta_time**2)
RHS += data_euler12.un2 * (-m + b * delta_time /2)
# version 3 - eq. 5.5
# LHS = m + b * delta_time
# RHS = ext_force_series[i-1] * delta_time**2
# RHS += data_euler12.un1 * (2*m + b * delta_time - k * delta_time**2)
# RHS += data_euler12.un2 * (-m)
data_euler12.un0 = RHS/LHS
data_euler12.vn0 = (data_euler12.un0 - data_euler12.un2) /2 /delta_time
data_euler12.an0 = (data_euler12.un0 - 2 * data_euler12.un1 + data_euler12.un2) / delta_time**2
# append results to list
data_euler12.disp.append(data_euler12.un0)
data_euler12.vel.append(data_euler12.vn0)
data_euler12.acc.append(data_euler12.an0)
# update results
data_euler12.un2 = data_euler12.un1
data_euler12.un1 = data_euler12.un0
# elapsed time accumulated
data_euler12.computation_time += time.time() - t
#===========================================================================
## Generalized Alpha
t = time.time()
# solve the time integration step
structure.solve_structure(ext_force_series[i])
# append results to list
data_gen_alpha.disp.append(structure.get_displacement())
data_gen_alpha.vel.append(structure.get_velocity())
data_gen_alpha.acc.append(structure.get_acceleration())
# update results
structure.update_structure_timestep()
# elapsed time accumulated
data_gen_alpha.computation_time += time.time() - t
# print computation time
print('# Computation time')
print('Euler 1st order: ' + str(data_euler1.computation_time) + ' s')
print('Euler 1st & 2nd order: ' + str(data_euler12.computation_time) + ' s')
print('Generalized Alpha: ' + str(data_gen_alpha.computation_time) + ' s')
print()
# +
plt.figure(num=5, figsize=(15, 12))
# subplot for displacement
plt.subplot(3, 1, 1)
plt.plot(time_series, data_euler1.disp, "-k", label=data_euler1.label, lw=0.5)
plt.plot(time_series, data_euler12.disp, "-.r", label=data_euler12.label, lw=0.5)
plt.plot(time_series, data_gen_alpha.disp, "--g", label=data_gen_alpha.label, lw=0.5)
# plt.xlim([0, x_axis_end])
plt.ylabel('Displacement [m]')
plt.axhline(y=u0, linewidth=2, color = 'r')
plt.axhline(y=-u0, linewidth=2, color = 'r')
plt.legend(loc="best")
plt.grid(True)
# subplot for velocity
plt.subplot(3, 1, 2)
plt.plot(time_series, data_euler1.vel, "-k", label=data_euler1.label, lw=0.5)
plt.plot(time_series, data_euler12.vel, "-.r", label=data_euler12.label, lw=0.5)
plt.plot(time_series, data_gen_alpha.vel, "--g", label=data_gen_alpha.label, lw=0.5)
# plt.xlim([0, x_axis_end])
plt.ylabel('Velocity [m/s]')
plt.grid(True)
# subplot for acceleration
plt.subplot(3, 1, 3)
plt.plot(time_series, data_euler1.acc, "-k", label=data_euler1.label, lw=0.5)
plt.plot(time_series, data_euler12.acc, "-.r", label=data_euler12.label, lw=0.5)
plt.plot(time_series, data_gen_alpha.acc, "--g", label=data_gen_alpha.label, lw=0.5)
# plt.xlim([0, x_axis_end])
plt.ylabel('Acceleration [m/s^2]')
plt.xlabel('Time [s]')
plt.grid(True)
plt.show()
# -
# ### Exercise 1: Euler 1st order
# Modify the time step `delta_time` by changing the number of time steps `n_steps`. What do you observe about the numerical stability of the Euler 1st-order time integration scheme? Comment on the results.
# ### Exercise 2: Modify p_inf
# Modify the numerical parameter p_inf (for the Generalized Alpha scheme), observe and comment on the result.
# ### Exercise 3: Apply harmonic loads
# Apply harmonic loads and compare with the analytical results.
# ## Check Point: Discussion
#
# #### Discuss among groups the observations and outcomes regarding the exercises.
| Ex03StructuralAnalysis/.ipynb_checkpoints/swe_ws1819_3_2_sdof-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Import package and load the data
import turicreate
sales = turicreate.SFrame('./data/home_data.sframe/')
sales.head(4)
sales.shape
# ### Q1. Selection and summary statistics
#
# In the notebook we covered in the module, we discovered which neighborhood (zip code) of Seattle had the highest average house sale price.
# Now, take the sales data, select only the houses with this zip code, and compute the average price. Save this result to answer the quiz at the end.
# +
## find zipcode for which the mean price is highest
import turicreate.aggregate as agg
zipcode_mprice = sales.groupby(['zipcode'],
operations={
'mean_price': agg.MEAN('price'),
})
zipcode_mprice.sort('mean_price', ascending=False)
## the first line/row is our solution
# -
## Select first row
zipcode_mprice.sort('mean_price', ascending=False)[0]
zipcode_highest_avg_price = zipcode_mprice.sort('mean_price', ascending=False)[0]['zipcode']
print(zipcode_highest_avg_price)
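# `agg.MEAN` above performs a plain grouped average. The same aggregation written out in pure Python on a few hypothetical rows (the prices below are invented for illustration):

```python
from collections import defaultdict

rows = [('98039', 2_000_000), ('98039', 2_200_000), ('98101', 600_000)]

totals, counts = defaultdict(float), defaultdict(int)
for zipcode, price in rows:
    totals[zipcode] += price
    counts[zipcode] += 1
means = {z: totals[z] / counts[z] for z in totals}
best = max(means, key=means.get)   # zipcode with the highest mean price
```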
# +
## select all rows with zipcode == '98039'
sales_zhap = sales[sales['zipcode'] == zipcode_highest_avg_price]
# -
sales_zhap.head(3)
# +
## Average price for zipcode '98039'
print(sales_zhap['price'].mean())
# -
# ### Q2. Filtering data
#
# One of the key features we used in our model was the number of square feet of living space (‘sqft_living’) in the house. For this part, we are going to use the idea of filtering (selecting) data.
#
# In particular, we are going to use logical filters to select rows of an SFrame. You can find more info in the [Logical Filter section of this documentation](https://turi.com/products/create/docs/generated/graphlab.SFrame.html).
#
# Using such filters, first select the houses that have ‘sqft_living’ higher than 2000 sqft but no larger than 4000 sqft.
#
# *Question*
# - What fraction of the all houses have ‘sqft_living’ in this range? Save this result to answer the quiz at the end.
sales_2000_4000 = sales[(sales['sqft_living'] > 2000) & (sales['sqft_living'] < 4000)]
sales_2000_4000['sqft_living'].summary()
# +
## including bounds
sales_eq_2000_4000 = sales[(sales['sqft_living'] >= 2000) & (sales['sqft_living'] <= 4000)]
sales_eq_2000_4000['sqft_living'].summary()
# -
## fraction of the all houses have ‘sqft_living’ in the range (2000, 4000)
sales_tot_num_rows = sales.shape[0]
sales_2000_4000.shape[0] / sales_tot_num_rows, sales_eq_2000_4000.shape[0] / sales_tot_num_rows
# Filtering data: What fraction of the houses have living space between 2000 sq.ft. and 4000 sq.ft.?
#
# - [ ] Between 0.2 and 0.29
# - [ ] Between 0.3 and 0.39
# - [x] Between 0.4 and 0.49
# - [ ] Between 0.5 and 0.59
# - [ ] Between 0.6 and 0.69
# ### Q3. Building a regression model with several more features
#
# In the sample notebook, we built two regression models to predict house prices:
# - one using just ‘sqft_living’ and
# - the other one using a few more features, we called this set `my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']`
#
# Now, going back to the original dataset, you will build a model using the following features:
#
# ```python
# advanced_features = [
# 'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',
# 'condition', # condition of house
# 'grade', # measure of quality of construction
# 'waterfront', # waterfront property
# 'view', # type of view
# 'sqft_above', # square feet above ground
# 'sqft_basement', # square feet in basement
# 'yr_built', # the year built
# 'yr_renovated', # the year renovated
# 'lat', 'long', # the lat-long of the parcel
# 'sqft_living15', # average sq.ft. of 15 nearest neighbors
# 'sqft_lot15', # average lot size of 15 nearest neighbors
# ]
# ```
#
# - Q3.1 Compute the *RMSE* (*root mean squared error*) on the **test_data** for the model using just *my_features*, and for the one using *advanced_features*.
#
# **Remarks**:
# 1. Both models must be trained on the original sales train dataset, not the one filtered on `sqft_living`.
# 1. When doing the train-test split, make sure you use `seed=0`, so you get the same training and test sets, and thus results, as we do.
# 1. In the module we discussed residual sum of squares (RSS) as an error metric for regression, but Turi Create uses root mean squared error (RMSE).
# These are two common measures of error regression, and RMSE is simply the square root of the mean RSS:
#
# $$RMSE = \sqrt{\frac{RSS}{N}}$$
#
# where N is the number of data points. RMSE can be more intuitive than RSS, since its units are the same as that of the target column in the data, in our case the unit is dollars ($), and doesn't grow with the number of data points, like the RSS does.
#
# <br/>
#
# **Important note**: <br/>
# When answering the question below using Turi Create, when you call the linear_regression.create() function, make sure you use the parameter `validation_set=None`, as done in the notebook you download above. When you use regression Turi Create, it sets aside a small random subset of the data to validate some parameters.
# This process can cause fluctuations in the final RMSE, so we will avoid it to make sure everyone gets the same answer.
#
# <br/>
#
# - Q3.2 What is the difference in RMSE between the model trained with my_features and the one trained with advanced_features?
# Save this result to answer the quiz at the end.
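# A tiny numeric illustration of the RMSE/RSS relation above, with made-up prices and predictions:

```python
import numpy as np

y_true = np.array([300_000.0, 450_000.0, 500_000.0])   # invented house prices
y_pred = np.array([320_000.0, 440_000.0, 530_000.0])   # invented predictions

rss = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
rmse = np.sqrt(rss / len(y_true))      # back in dollars
```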
# +
## Train/Test set split
train_set, test_set = sales.random_split(.8, seed=0)
train_set.shape, test_set.shape
# +
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
advanced_features = [*my_features,
'condition', # condition of house
'grade', # measure of quality of construction
'waterfront', # waterfront property
'view', # type of view
'sqft_above', # square feet above ground
'sqft_basement', # square feet in basement
'yr_built', # the year built
'yr_renovated', # the year renovated
'lat', 'long', # the lat-long of the parcel
'sqft_living15', # average sq.ft. of 15 nearest neighbors
'sqft_lot15', # average lot size of 15 nearest neighbors
]
len(my_features), len(advanced_features)
# -
my_features_model = turicreate.linear_regression.create(train_set, target='price', features=my_features,
validation_set=None)
advanced_features_model = turicreate.linear_regression.create(train_set, target='price', features=advanced_features,
validation_set=None)
# +
## Compute the RMSE for my_features_model
##
## cf. https://apple.github.io/turicreate/docs/api/generated/turicreate.linear_regression.LinearRegression.evaluate.html#turicreate.linear_regression.LinearRegression.evaluate
## use RMSE
res_my_features_model = my_features_model.evaluate(test_set)
print(res_my_features_model)
# +
## Compute the RMSE for advanced_features_model
res_advanced_features_model = advanced_features_model.evaluate(test_set)
print(res_advanced_features_model)
# +
## Difference in RMSE between the model trained with my_features and the one trained with advanced_features
diff_r = res_my_features_model['rmse'] - res_advanced_features_model['rmse']
print(f"diff: {diff_r:5.3f}")
# -
# Building a regression model with several more features:
# What is the difference in RMSE between the model trained with my_features and the one trained with advanced_features?
#
# - [ ] the RMSE of the model with advanced_features lower by less than \\$25,000
# - [x] the RMSE of the model with advanced_features lower by between \\$25,001 and \\$35,000
# - [ ] the RMSE of the model with advanced_features lower by between \\$35,001 and \\$45,000
# - [ ] the RMSE of the model with advanced_features lower by between \\$45,001 and \\$55,000
# - [ ] the RMSE of the model with advanced_features lower by more than \\$55,000
#
# **Note**: the 'autograder' answer is expecting a difference of less than \\$25,000.
#
# cf. [Wrong answer in the final Week 2 Quiz](https://www.coursera.org/learn/ml-foundations/discussions/weeks/2/threads/_LsAHd5oEeqXNhLj2fFeZQ)
| C01/w02/C01w02_Linear_Regression.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# +
# A Basic bokeh line graph
from bokeh.plotting import figure
from bokeh.io import output_file, show
x = [1, 2, 3, 4, 5]
y = [6, 7, 8, 9, 10]
output_file('Line.html')
fig = figure()
fig.line(x, y)
show(fig)
# +
# Bokeh with Pandas
from bokeh.plotting import figure
from bokeh.io import output_file, show
import pandas
data_frame = pandas.read_csv('data.csv')
x = data_frame['x']
y = data_frame['y']
output_file('Line.html')
fig = figure()
fig.line(x, y)
show(fig)
# +
# Bokeh with Pandas - Bachelors_csv
from bokeh.plotting import figure
from bokeh.io import output_file, show
import pandas
data_frame = pandas.read_csv('http://pythonhow.com/data/bachelors.csv')
x = data_frame['Year']
y = data_frame['Engineering']
output_file('Line.html')
fig = figure()
fig.line(x, y)
show(fig)
# +
from bokeh.plotting import figure, output_file, show

p = figure(plot_width=500, plot_height=400, tools='pan')
p.title.text = "Cool Data"
p.title.text_color = "Gray"
p.title.text_font = "times"
p.title.text_font_style = "bold"
p.xaxis.minor_tick_line_color = None
p.yaxis.minor_tick_line_color = None
p.xaxis.axis_label = "Date"
p.yaxis.axis_label = "Intensity"
p.line([1, 2, 3], [4, 5, 6])
output_file("graph.html")
show(p)
# +
import pandas
from bokeh.plotting import figure, output_file, show

df = pandas.read_excel("http://pythonhow.com/data/verlegenhuken.xlsx", sheet_name=0)
df["Temperature"] = df["Temperature"] / 10
df["Pressure"] = df["Pressure"] / 10
p = figure(plot_width=500, plot_height=400, tools='pan')
p.title.text = "Temperature and Air Pressure"
p.title.text_color = "Gray"
p.title.text_font = "arial"
p.title.text_font_style = "bold"
p.xaxis.minor_tick_line_color = None
p.yaxis.minor_tick_line_color = None
p.xaxis.axis_label = "Temperature (°C)"
p.yaxis.axis_label = "Pressure (hPa)"
p.circle(df["Temperature"], df["Pressure"], size=0.5)
output_file("Weather.html")
show(p)
# -
from bokeh.plotting import figure, output_file, show
p = figure(plot_width=500, plot_height=400, tools = 'pan, reset')
p.title.text = "Earthquakes"
p.title.text_color = "Orange"
p.title.text_font = "times"
p.title.text_font_style = "italic"
p.yaxis.minor_tick_line_color = "Yellow"
p.xaxis.axis_label = "Times"
p.yaxis.axis_label = "Value"
p.circle([1,2,3,4,5], [5,6,5,5,3], size = [i*2 for i in [8,12,14,15,20]], color="red", alpha=0.5)
output_file("Scatter_plotting.html")
show(p)
# +
from bokeh.plotting import figure, output_file, show
import pandas
data_frame = pandas.read_csv(
'adbe.csv',
parse_dates=['Date']
)
fig = figure(width=800, height=250, x_axis_type='datetime')
fig.line(data_frame['Date'], data_frame['Close'], color='Orange', alpha=0.5)
output_file('Timeseries.html')
show(fig)
| Bokeh/Basic_graph.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Job Listings
# +
# Dependencies & Setup
import pandas as pd
import numpy as np
import requests
import json
from os.path import exists
import simplejson as json
# Retrieve Google API Key from config.py
from config_3 import gkey
# +
# File to Load
wc_file = "data/west_coast_job_listings.csv"
ba_file = "data/bay_area_job_listings.csv"
# Read Scraped Data (CSV File) & Store Into Pandas DataFrame
wc_job_listings_df = pd.read_csv(wc_file, encoding="ISO-8859-1")
ba_job_listings_df = pd.read_csv(ba_file, encoding="ISO-8859-1")
# -
# Drop WC NaN's
revised_wc_job_listings_df = wc_job_listings_df.dropna()
revised_wc_job_listings_df.head()
cleaned_wc_job_listings_df = revised_wc_job_listings_df.drop(columns=["rating", "reviews", "job_description"])
cleaned_wc_job_listings_df.head()
# Drop BA NaN's
revised_ba_job_listings_df = ba_job_listings_df.dropna()
revised_ba_job_listings_df.head()
# +
# Reorganize WC File Column Names
organized_wc_job_listings_df = cleaned_wc_job_listings_df.rename(columns={"company":"Company Name",
"job_title":"Job Title",
"location":"Location"})
# Extract Only Job Titles with "Data" as String
new_organized_wc_job_listings_df = organized_wc_job_listings_df[
    organized_wc_job_listings_df["Job Title"].str.contains("Data", case=True)
].copy()  # .copy() avoids a SettingWithCopyWarning when adding columns later
new_organized_wc_job_listings_df.head()
# -
print(len(new_organized_wc_job_listings_df))
# +
# Extract Unique Locations
new_organized_wc_job_listings_df["company_address"] = new_organized_wc_job_listings_df["Company Name"] + ", " + new_organized_wc_job_listings_df["Location"]
unique_locations = new_organized_wc_job_listings_df["company_address"].unique().tolist()
print(len(unique_locations))
# -
# Reorganize BA File Column Names
organized_ba_job_listings_df = revised_ba_job_listings_df.rename(columns={"company":"Company Name",
"job_title":"Job Title",
"location":"Location"})
organized_ba_job_listings_df.head()
# Extract Only Company Names to Pass to Google Maps API to Gather GeoCoordinates
company = organized_ba_job_listings_df[["Company Name"]]
company.head()
# What are the geocoordinates (latitude/longitude) of the Company Names?
company_list = list(company["Company Name"])
# Build URL using the Google Maps API
base_url = "https://maps.googleapis.com/maps/api/geocode/json"
new_json = []
for target_company in company_list:
# print(target_company)
params = {"address": target_company + ", San Francisco CA", "key": gkey}
# print(params)
# print("The Geocoordinates of LinkedIn Company Names")
# Run Request
response = requests.get(base_url, params=params)
# print(response.url)
# Extract lat/lng
companies_geo = response.json()
lat = companies_geo["results"][0]["geometry"]["location"]["lat"]
lng = companies_geo["results"][0]["geometry"]["location"]["lng"]
new_json.append({"company":target_company,"lat":lat,"lng":lng})
# print(f"{target_company}, {lat}, {lng}")
print(new_json)
# +
# What are the GeoCoordinates (Latitude/Longitude) of the Companies?
# company_list = list(unique_locations["Company Name"])
# Build URL using the Google Maps API
base_url = "https://maps.googleapis.com/maps/api/geocode/json"
new_json = []
counter = 1
for location in unique_locations:
params = {"address": location, "key": gkey}
# Run Request
response = requests.get(base_url, params=params)
try:
# Extract lat/lng
companies_geo = response.json()
# print(companies_geo)
lat = companies_geo["results"][0]["geometry"]["location"]["lat"]
lng = companies_geo["results"][0]["geometry"]["location"]["lng"]
new_json.append({"company": location,"lat": lat,"lng": lng})
print(counter)
counter += 1
except IndexError:
print(location)
# -
# +
# Convert JSON into GeoJSON
geojson = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"company": d["company"],
"geometry" : {
"type": "Point",
"coordinates": [d["lat"], d["lng"]],
},
} for d in new_json]
}
print(geojson)
# -
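# Note that the GeoJSON spec (RFC 7946) orders point coordinates as [longitude, latitude], and feature attributes belong under a `properties` key. A minimal standalone sketch of the conversion (function name and sample record are illustrative):

```python
def to_geojson_points(records):
    """Build a GeoJSON FeatureCollection from dicts with company/lat/lng keys."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "properties": {"company": r["company"]},
                "geometry": {
                    "type": "Point",
                    # GeoJSON requires [longitude, latitude] order
                    "coordinates": [r["lng"], r["lat"]],
                },
            }
            for r in records
        ],
    }

sample = [{"company": "Acme", "lat": 37.77, "lng": -122.42}]
print(to_geojson_points(sample)["features"][0]["geometry"]["coordinates"])
```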
job_listing_coordinates = pd.DataFrame(new_json)
job_listing_coordinates
# +
# Merge the listings (which carry company_address) with the geocoded coordinates
updated_job_listings = new_organized_wc_job_listings_df.merge(job_listing_coordinates, how="left", left_on="company_address", right_on="company")
updated_job_listings
# Drop NaN's
updated_job_listings_no_missing = updated_job_listings.dropna()
updated_job_listings_no_missing.head()
# -
updated_job_listings[["company","lat","lng"]].to_dict()
json_job_listings = updated_job_listings[["company","lat","lng"]].to_json(orient="records")
json_job_listings
with open('data.json', 'w') as outfile:
outfile.write(json_job_listings)
| .ipynb_checkpoints/Job_Listings (Draft)-checkpoint.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Julia 1.0.0
# language: julia
# name: julia-1.0
# ---
using LinearAlgebra
A = rand(1.:9.,4,4)
L = zeros(4,4); U = zeros(4,4);
U[1,:] = A[1,:]
L[:,1] = A[:,1]/U[1,1]
A = A - L[:,1]*U[1,:]'
function luouter(A)
m = size(A,1)
Aj = copy(A)
L = zeros(m,m); U = zeros(m,m);
p = zeros(Int,m)
for j = 1:m
p[j]=findmax(abs.(Aj[:,j]))[2]
U[j,:] = Aj[p[j],:]
L[:,j] = Aj[:,j]/U[j,j]
Aj -= L[:,j]*U[j,:]'
end
return L,U,p
end
L,U,p = luouter(A)
L
U
norm(A-L*U)
L[p,:]
A[1,1] = 0
A
L,U = luouter(A)
A[1,1] = 1e-12
L,U=luouter(A)
L
U
norm(A-L*U)
cond(A)
A
findmax(abs.(A[:,1]))[2]
p=zeros(Int,4)
p[1]=findmax(abs.(A[:,1]))[2]
Aj=copy(A);
U[1,:] = Aj[p[1],:]
L[:,1] = Aj[:,1]/U[1,1]
Aj = Aj - L[:,1]*U[1,:]'
p[2]=findmax(abs.(Aj[:,2]))[2]
U[2,:] = Aj[p[2],:]
L[:,2] = Aj[:,2]/U[2,2]
Aj = Aj - L[:,2]*U[2,:]'
L,U,p = luouter(A)
L
U
# $A = (P^TL) U$
# $PA=LU$
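# The outer-product LU factorization with partial pivoting in `luouter` above translates directly to NumPy. This is a cross-check sketch (not part of the Julia session), mirroring it step for step:

```python
import numpy as np

def lu_outer(A):
    """Outer-product LU with partial pivoting, mirroring the Julia luouter."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    Aj = A.copy()
    L = np.zeros((m, m))
    U = np.zeros((m, m))
    p = np.zeros(m, dtype=int)
    for j in range(m):
        p[j] = np.argmax(np.abs(Aj[:, j]))   # pivot row for column j
        U[j, :] = Aj[p[j], :]
        L[:, j] = Aj[:, j] / U[j, j]
        Aj = Aj - np.outer(L[:, j], U[j, :])  # rank-1 update
    return L, U, p

A = np.array([[2., 1.], [4., 3.]])
L, U, p = lu_outer(A)
print(np.linalg.norm(A - L @ U))  # ~0: A is reconstructed exactly
```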
LL,UU,pp=lu(A)
pp
m = 8
A = tril(-ones(m,m))
A[1:m+1:end] .= 1
A[:,m] .= 1
A
L,U,p=lu(A)
p
m = 60
A = tril(-ones(m,m))
A[1:m+1:end] .= 1
A[:,m] .= 1
xact = rand(m)
b = A*xact
norm( xact - A\b )/ norm(xact)
cond(A)
L,U=lu(A)
norm(U)
| daily/daily-10-04.ipynb |
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="XXDeo-aGOAXF"
# ##### Copyright 2020 The TensorFlow Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# + colab={} colab_type="code" id="9XRGdjHNOE9D"
#@title ##### Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# + [markdown] colab_type="text" id="KJihamFwOLUT"
# # Bayesian Neural Network
#
# <table class="tfo-notebook-buttons" align="left">
# <td>
# <a target="_blank" href="https://colab.research.google.com/github/tensorflow/probability/blob/main/tensorflow_probability/python/experimental/nn/examples/bnn_mnist_advi.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
# </td>
# <td>
# <a target="_blank" href="https://github.com/tensorflow/probability/blob/main/tensorflow_probability/python/experimental/nn/examples/bnn_mnist_advi.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
# </td>
# </table>
# + [markdown] colab_type="text" id="B0HrNKbJw2bA"
# ### 1 Imports
# + cellView="both" colab={} colab_type="code" id="cttwhYKYGhPj"
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
import time
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics as sklearn_metrics
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_datasets as tfds
import tensorflow_probability as tfp
from tensorflow_probability.python.internal import prefer_static
# Globally Enable XLA.
# tf.config.optimizer.set_jit(True)
try:
physical_devices = tf.config.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
except:
# Invalid device or cannot modify virtual devices once initialized.
pass
tfb = tfp.bijectors
tfd = tfp.distributions
tfn = tfp.experimental.nn
# + [markdown] colab_type="text" id="nbQ3rcTowypZ"
# ### 2 Load Dataset
# + cellView="both" colab={"height": 0} colab_type="code" id="rjgnFMxvG9Ab" outputId="9d7d327f-aa0d-4a43-e3a9-5426cf26a37e"
dataset_name = 'emnist'
batch_size = 32
[train_dataset, eval_dataset], datasets_info = tfds.load(
name=dataset_name,
split=['train', 'test'],
with_info=True,
as_supervised=True,
shuffle_files=True)
def _preprocess(image, label):
image = tf.cast(image, dtype=tf.float32) / 255.
if dataset_name == 'emnist':
image = tf.transpose(image, perm=[1, 0, 2])
label = tf.cast(label, dtype=tf.int32)
return image, label
train_size = datasets_info.splits['train'].num_examples
eval_size = datasets_info.splits['test'].num_examples
num_classes = datasets_info.features['label'].num_classes
image_shape = datasets_info.features['image'].shape
if dataset_name == 'emnist':
import string
yhuman = np.array(list(string.digits +
string.ascii_uppercase +
string.ascii_lowercase))
else:
  yhuman = np.arange(num_classes).astype(np.int32)
if True:
orig_train_size = train_size
train_size = int(10e3)
train_dataset = train_dataset.shuffle(orig_train_size // 7).repeat(1).take(train_size)
train_dataset = tfn.util.tune_dataset(
train_dataset,
batch_size=batch_size,
shuffle_size=int(train_size / 7),
preprocess_fn=_preprocess)
if True:
orig_eval_size = eval_size
eval_size = int(10e3)
eval_dataset = eval_dataset.shuffle(orig_eval_size // 7).repeat(1).take(eval_size)
eval_dataset = tfn.util.tune_dataset(
eval_dataset,
repeat_count=None,
preprocess_fn=_preprocess)
x, y = next(iter(eval_dataset.batch(10)))
tfn.util.display_imgs(x, yhuman[y.numpy()]);
# + [markdown] colab_type="text" id="sbaPm7ABwvde"
# ### 3 Define Model
# + cellView="form" colab={} colab_type="code" id="Gh2BHHnEGY1x"
#@title Optional Custom Posterior
def make_posterior(
kernel_shape,
bias_shape,
dtype=tf.float32,
kernel_initializer=None,
bias_initializer=None,
kernel_name='posterior_kernel',
bias_name='posterior_bias'):
if kernel_initializer is None:
kernel_initializer = tf.initializers.glorot_uniform()
if bias_initializer is None:
bias_initializer = tf.zeros
make_loc = lambda shape, init, name: tf.Variable( # pylint: disable=g-long-lambda
init(shape, dtype=dtype),
name=name + '_loc')
make_scale = lambda shape, name: tfp.util.TransformedVariable( # pylint: disable=g-long-lambda
tf.fill(shape, tf.constant(0.01, dtype)),
tfb.Chain([tfb.Shift(1e-5), tfb.Softplus()]),
name=name + '_scale')
return tfd.JointDistributionSequential([
tfd.Independent(
tfd.Normal(loc=make_loc(kernel_shape, kernel_initializer, kernel_name),
scale=make_scale(kernel_shape, kernel_name)),
reinterpreted_batch_ndims=prefer_static.size(kernel_shape),
name=kernel_name),
tfd.Independent(
tfd.Normal(loc=make_loc(bias_shape, bias_initializer, bias_name),
scale=make_scale(bias_shape, bias_name)),
reinterpreted_batch_ndims=prefer_static.size(bias_shape),
name=bias_name),
])
# + cellView="form" colab={} colab_type="code" id="lEh7kUBeGdMN"
#@title Optional Custom Prior
def make_prior(
kernel_shape,
bias_shape,
dtype=tf.float32,
kernel_initializer=None, # pylint: disable=unused-argument
bias_initializer=None, # pylint: disable=unused-argument
kernel_name='prior_kernel',
bias_name='prior_bias'):
k = tfd.MixtureSameFamily(
tfd.Categorical(tf.zeros(3, dtype)),
tfd.StudentT(
df=[1,1.,1.], loc=[0,3,-3], scale=tf.constant([1, 10, 10], dtype)))
#df=[0.5, 1., 1.], loc=[0, 2, -2], scale=tf.constant([0.25, 5, 5], dtype)))
b = tfd.Normal(0, tf.constant(1000, dtype))
return tfd.JointDistributionSequential([
tfd.Sample(k, kernel_shape, name=kernel_name),
tfd.Sample(b, bias_shape, name=bias_name),
])
# + cellView="both" colab={"height": 338} colab_type="code" id="nhnbpf7IYBD6" outputId="7265b1e0-857f-4093-ba0f-32bef1db93bc"
max_pool = tf.keras.layers.MaxPooling2D( # Has no tf.Variables.
pool_size=(2, 2),
strides=(2, 2),
padding='SAME',
data_format='channels_last')
def batchnorm(axis):
def fn(x):
m = tf.math.reduce_mean(x, axis=axis, keepdims=True)
v = tf.math.reduce_variance(x, axis=axis, keepdims=True)
return (x - m) / tf.math.sqrt(v)
return fn
maybe_batchnorm = batchnorm(axis=[-4, -3, -2])
# maybe_batchnorm = lambda x: x
bnn = tfn.Sequential([
lambda x: 2. * tf.cast(x, tf.float32) - 1., # Center.
tfn.ConvolutionVariationalReparameterization(
input_size=1,
output_size=8,
filter_shape=5,
padding='SAME',
init_kernel_fn=tf.initializers.he_uniform(),
penalty_weight=1 / train_size,
# penalty_weight=1e2 / train_size, # Layer specific "beta".
# make_posterior_fn=make_posterior,
# make_prior_fn=make_prior,
name='conv1'),
maybe_batchnorm,
tf.nn.leaky_relu,
tfn.ConvolutionVariationalReparameterization(
input_size=8,
output_size=16,
filter_shape=5,
padding='SAME',
init_kernel_fn=tf.initializers.he_uniform(),
penalty_weight=1 / train_size,
# penalty_weight=1e2 / train_size, # Layer specific "beta".
# make_posterior_fn=make_posterior,
# make_prior_fn=make_prior,
name='conv2'),
maybe_batchnorm,
tf.nn.leaky_relu,
max_pool, # [28, 28, 8] -> [14, 14, 8]
tfn.ConvolutionVariationalReparameterization(
input_size=16,
output_size=32,
filter_shape=5,
padding='SAME',
init_kernel_fn=tf.initializers.he_uniform(),
penalty_weight=1 / train_size,
# penalty_weight=1e2 / train_size, # Layer specific "beta".
# make_posterior_fn=make_posterior,
# make_prior_fn=make_prior,
name='conv3'),
maybe_batchnorm,
tf.nn.leaky_relu,
max_pool, # [14, 14, 16] -> [7, 7, 16]
tfn.util.flatten_rightmost(ndims=3),
tfn.AffineVariationalReparameterizationLocal(
input_size=7 * 7 * 32,
output_size=num_classes - 1,
penalty_weight=1. / train_size,
# make_posterior_fn=make_posterior,
# make_prior_fn=make_prior,
name='affine1'),
tfb.Pad(),
lambda x: tfd.Categorical(logits=x, dtype=tf.int32),
], name='BNN')
# bnn_eval = tfn.Sequential([l for l in bnn.layers if l is not maybe_batchnorm],
# name='bnn_eval')
bnn_eval = bnn
print(bnn.summary())
# + [markdown] colab_type="text" id="J9XuHd6Iw7_a"
# ### 4 Loss / Eval
# + colab={} colab_type="code" id="45AcvITA9qci"
def compute_loss_bnn(x, y, beta=1., is_eval=False):
d = bnn_eval(x) if is_eval else bnn(x)
nll = -tf.reduce_mean(d.log_prob(y), axis=-1)
kl = bnn.extra_loss
loss = nll + beta * kl
return loss, (nll, kl), d
# + colab={} colab_type="code" id="zpoG0x6-AslV"
train_iter_bnn = iter(train_dataset)
def train_loss_bnn():
x, y = next(train_iter_bnn)
loss, (nll, kl), _ = compute_loss_bnn(x, y)
return loss, (nll, kl)
opt_bnn = tf.optimizers.Adam(learning_rate=0.003)
fit_bnn = tfn.util.make_fit_op(
train_loss_bnn,
opt_bnn,
bnn.trainable_variables,
grad_summary_fn=lambda gs: tf.nest.map_structure(tf.norm, gs))
# + cellView="form" colab={} colab_type="code" id="LS8nQqN3FMFv"
#@title Eval Helpers
def all_categories(d):
num_classes = tf.shape(d.logits_parameter())[-1]
batch_ndims = tf.size(d.batch_shape_tensor())
expand_shape = tf.pad(
[num_classes], paddings=[[0, batch_ndims]], constant_values=1)
return tf.reshape(tf.range(num_classes, dtype=d.dtype), expand_shape)
def rollaxis(x, shift):
return tf.transpose(x, tf.roll(tf.range(tf.rank(x)), shift=shift, axis=0))
def compute_eval_stats(y, d, threshold=None):
# Assume we have evidence `x`, targets `y`, and model function `dnn`.
all_pred_log_prob = tf.math.log_softmax(d.logits, axis=-1)
yhat = tf.argmax(all_pred_log_prob, axis=-1)
pred_log_prob = tf.reduce_max(all_pred_log_prob, axis=-1)
# all_pred_log_prob = d.log_prob(all_categories(d))
# yhat = tf.argmax(all_pred_log_prob, axis=0)
# pred_log_prob = tf.reduce_max(all_pred_log_prob, axis=0)
# Alternative #1:
# all_pred_log_prob = rollaxis(all_pred_log_prob, shift=-1)
# pred_log_prob, yhat = tf.math.top_k(all_pred_log_prob, k=1, sorted=False)
# Alternative #2:
# yhat = tf.argmax(all_pred_log_prob, axis=0)
# pred_log_prob = tf.gather(rollaxis(all_pred_log_prob, shift=-1),
# yhat,
# batch_dims=len(d.batch_shape))
if threshold is not None:
keep = pred_log_prob > tf.math.log(threshold)
pred_log_prob = tf.boolean_mask(pred_log_prob, keep)
yhat = tf.boolean_mask(yhat, keep)
y = tf.boolean_mask(y, keep)
hit = tf.equal(y, tf.cast(yhat, y.dtype))
avg_acc = tf.reduce_mean(tf.cast(hit, tf.float32), axis=-1)
num_buckets = 10
(
avg_calibration_error,
acc,
conf,
cnt,
edges,
bucket,
) = tf.cond(tf.size(y) > 0,
lambda: tfp.stats.expected_calibration_error_quantiles(
hit,
pred_log_prob,
num_buckets=num_buckets,
log_space_buckets=True),
lambda: (tf.constant(np.nan),
tf.fill([num_buckets], np.nan),
tf.fill([num_buckets], np.nan),
tf.fill([num_buckets], np.nan),
tf.fill([num_buckets + 1], np.nan),
tf.constant([], tf.int64)))
return avg_acc, avg_calibration_error, (acc, conf, cnt, edges, bucket)
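# `tfp.stats.expected_calibration_error_quantiles` above estimates the expected calibration error (ECE): predictions are grouped into confidence buckets, and the gaps |accuracy - confidence| are averaged with bucket-count weights. A minimal standalone NumPy sketch, using equal-width rather than quantile buckets:

```python
import numpy as np

def expected_calibration_error(hits, confidences, num_buckets=10):
    """ECE: bucket-weighted average of |accuracy - confidence|."""
    hits = np.asarray(hits, dtype=float)
    conf = np.asarray(confidences, dtype=float)
    edges = np.linspace(0.0, 1.0, num_buckets + 1)
    bucket = np.clip(np.digitize(conf, edges) - 1, 0, num_buckets - 1)
    ece = 0.0
    for b in range(num_buckets):
        mask = bucket == b
        if mask.any():
            # weight by the fraction of samples landing in this bucket
            ece += mask.mean() * abs(hits[mask].mean() - conf[mask].mean())
    return ece

# Perfectly calibrated toy example: confidence equals accuracy in each bucket.
print(expected_calibration_error([1, 0, 1, 1], [1.0, 0.0, 1.0, 1.0]))  # 0.0
```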
# + cellView="code" colab={} colab_type="code" id="iUmS-7IATIcI"
eval_iter_bnn = iter(eval_dataset.batch(2000).repeat())
@tfn.util.tfcompile
def eval_bnn(threshold=None, num_inferences=5):
x, y = next(eval_iter_bnn)
loss, (nll, kl), d = compute_loss_bnn(x, y, is_eval=True)
if num_inferences > 1:
before_avg_predicted_log_probs = tf.map_fn(
lambda _: tf.math.log_softmax(bnn(x).logits, axis=-1),
elems=tf.range(num_inferences),
dtype=loss.dtype)
d = tfd.Categorical(logits=tfp.math.reduce_logmeanexp(
before_avg_predicted_log_probs, axis=0))
avg_acc, avg_calibration_error, (acc, conf, cnt, edges, bucket) = \
compute_eval_stats(y, d, threshold=threshold)
n = tf.reduce_sum(cnt, axis=0)
return loss, (nll, kl, avg_acc, avg_calibration_error, n)
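# The predictive averaging above combines `num_inferences` stochastic forward passes with `reduce_logmeanexp`, i.e. the log of the mean of the per-pass probabilities, computed stably in log space. A minimal standalone NumPy sketch of that operation:

```python
import numpy as np

def log_mean_exp(log_probs, axis=0):
    """Numerically stable log of the mean of probabilities given in log space."""
    m = np.max(log_probs, axis=axis, keepdims=True)
    out = m + np.log(np.mean(np.exp(log_probs - m), axis=axis, keepdims=True))
    return np.squeeze(out, axis=axis)

# Two "inference passes" over one example with 2 classes.
lp = np.log(np.array([[0.8, 0.2],
                      [0.6, 0.4]]))
print(np.exp(log_mean_exp(lp, axis=0)))  # [0.7 0.3]: mean of the two predictive dists
```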
# + [markdown] colab_type="text" id="YeEZZT0uAZjn"
# ### 5 Train
# + colab={} colab_type="code" id="CRHw0VNK_Acu"
DEBUG_MODE = False
tf.config.experimental_run_functions_eagerly(DEBUG_MODE)
# + cellView="code" colab={"height": 900} colab_type="code" id="ba5W_N6oTNbo" outputId="9ce96336-071a-4878-e1ae-37684c61c60a"
num_train_epochs = 2. # @param { isTemplate: true}
num_evals = 50 # @param { isTemplate: true}
dur_sec = dur_num = 0
num_train_steps = int(num_train_epochs * train_size)
for i in range(num_train_steps):
start = time.time()
trn_loss, (trn_nll, trn_kl), g = fit_bnn()
stop = time.time()
dur_sec += stop - start
dur_num += 1
if i % int(num_train_steps / num_evals) == 0 or i == num_train_steps - 1:
tst_loss, (tst_nll, tst_kl, tst_acc, tst_ece, tst_tot) = eval_bnn()
f, x = zip(*[
('it:{:5}', opt_bnn.iterations),
('ms/it:{:6.4f}', dur_sec / max(1., dur_num) * 1000.),
('tst_acc:{:6.4f}', tst_acc),
('tst_ece:{:6.4f}', tst_ece),
('tst_tot:{:5}', tst_tot),
('trn_loss:{:6.4f}', trn_loss),
('tst_loss:{:6.4f}', tst_loss),
('tst_nll:{:6.4f}', tst_nll),
('tst_kl:{:6.4f}', tst_kl),
('sum_norm_grad:{:6.4f}', sum(g)),
])
print(' '.join(f).format(*[getattr(x_, 'numpy', lambda: x_)()
for x_ in x]))
sys.stdout.flush()
dur_sec = dur_num = 0
# if i % 1000 == 0 or i == maxiter - 1:
# bnn.save('/tmp/bnn.npz')
# + [markdown] colab_type="text" id="G1ImqkK7xAv1"
# ### 6 Evaluate
# + cellView="form" colab={} colab_type="code" id="Wt5fjxTLfFUo"
#@title More Eval Helpers
@tfn.util.tfcompile
def compute_log_probs_bnn(x, num_inferences):
lp = tf.map_fn(lambda _: tf.math.log_softmax(bnn_eval(x).logits, axis=-1),
elems=tf.range(num_inferences),
dtype=tf.float32)
log_mean_prob = tfp.math.reduce_logmeanexp(lp, axis=0)
# ovr = "one vs rest"
log_avg_std_ovr_prob = tfp.math.reduce_logmeanexp(lp + tf.math.log1p(-lp), axis=0)
#log_std_prob = 0.5 * tfp.math.log_sub_exp(log_mean2_prob, log_mean_prob * 2.)
tiny_ = np.finfo(lp.dtype.as_numpy_dtype).tiny
log_std_prob = 0.5 * tfp.math.reduce_logmeanexp(
2 * tfp.math.log_sub_exp(lp + tiny_, log_mean_prob),
axis=0)
return log_mean_prob, log_std_prob, log_avg_std_ovr_prob
num_inferences = 50
num_chunks = 10
eval_iter_bnn = iter(eval_dataset.batch(eval_size // num_chunks))
@tfn.util.tfcompile
def all_eval_labels_and_log_probs_bnn():
def _inner(_):
x, y = next(eval_iter_bnn)
return x, y, compute_log_probs_bnn(x, num_inferences)
x, y, (log_probs, log_std_probs, log_avg_std_ovr_prob) = tf.map_fn(
_inner,
elems=tf.range(num_chunks),
dtype=(tf.float32, tf.int32,) + ((tf.float32,) * 3,))
return (
tf.reshape(x, (-1,) + image_shape),
tf.reshape(y, [-1]),
tf.reshape(log_probs, [-1, num_classes]),
tf.reshape(log_std_probs, [-1, num_classes]),
tf.reshape(log_avg_std_ovr_prob, [-1, num_classes]),
)
(
x_, y_,
log_probs_, log_std_probs_,
log_avg_std_ovr_prob_,
) = all_eval_labels_and_log_probs_bnn()
# + cellView="form" colab={} colab_type="code" id="c65_RnFIwruI"
#@title Run Eval
x, y, log_probs, log_std_probs, log_avg_std_ovr_prob = (
x_, y_, log_probs_, log_std_probs_, log_avg_std_ovr_prob_)
yhat = tf.argmax(log_probs, axis=-1)
max_log_probs = tf.gather(log_probs, yhat, batch_dims=1)
max_log_std_probs = tf.gather(log_std_probs, yhat, batch_dims=1)
max_log_avg_std_ovr_prob = tf.gather(log_avg_std_ovr_prob, yhat, batch_dims=1)
# Sort by ascending confidence.
score = max_log_probs # Mean
#score = -max_log_std_probs # 1 / Sigma
#score = max_log_probs - max_log_std_probs # Mean / Sigma
#score = abs(tf.math.expm1(max_log_std_probs - (max_log_probs + tf.math.log1p(-max_log_probs))))
idx = tf.argsort(score)
score = tf.gather(score, idx)
x = tf.gather(x, idx)
y = tf.gather(y, idx)
yhat = tf.gather(yhat, idx)
hit = tf.cast(tf.equal(y, tf.cast(yhat,y.dtype)), tf.int32)
log_probs = tf.gather(log_probs, idx)
max_log_probs = tf.gather(max_log_probs, idx)
log_std_probs = tf.gather(log_std_probs, idx)
max_log_std_probs = tf.gather(max_log_std_probs, idx)
log_avg_std_ovr_prob = tf.gather(log_avg_std_ovr_prob, idx)
max_log_avg_std_ovr_prob = tf.gather(max_log_avg_std_ovr_prob, idx)
d = tfd.Categorical(logits=log_probs)
max_log_probs = tf.reduce_max(log_probs, axis=-1)
keep = tf.range(500,eval_size)
#threshold = 0.95;
# keep = tf.where(max_log_probs > tf.math.log(threshold))[..., 0]
x_keep = tf.gather(x, keep)
y_keep = tf.gather(y, keep)
log_probs_keep = tf.gather(log_probs, keep)
yhat_keep = tf.gather(yhat, keep)
d_keep = tfd.Categorical(logits=log_probs_keep)
(
avg_acc, ece,
(acc, conf, cnt, edges, bucket),
) = tfn.util.tfcompile(lambda: compute_eval_stats(y, d))()
(
avg_acc_keep, ece_keep,
(acc_keep, conf_keep, cnt_keep, edges_keep, bucket_keep),
) = tfn.util.tfcompile(lambda: compute_eval_stats(y_keep, d_keep))()
# + colab={"height": 101} colab_type="code" id="53J7TS0mim_A" outputId="82665715-6ecd-4628-fa76-4617f22ad11d"
print('Accuracy (all) : {}'.format(avg_acc))
print('Accuracy (certain) : {}'.format(avg_acc_keep))
print('ECE (all) : {}'.format(ece))
print('ECE (certain) : {}'.format(ece_keep))
print('Number undecided: {}'.format(eval_size - tf.size(keep)))
# + colab={"height": 675} colab_type="code" id="b_atF5vNiqJ2" outputId="58c6af3d-d46a-462e-8b7d-748cfb5e04f7"
print('Most uncertain:')
ss = (6,12); n = np.prod(ss); s = ss+image_shape
tfn.util.display_imgs(
tf.reshape(x[:n], s),
yhuman[tf.reshape(y[:n], ss).numpy()])
print(tf.reshape(hit[:n], ss).numpy())
print(yhuman[tf.reshape(yhat[:n], ss).numpy()])
# + colab={"height": 675} colab_type="code" id="Znmom4_visE9" outputId="0ebae465-5162-413d-908b-242e245a6df4"
print('Least uncertain:')
tfn.util.display_imgs(
tf.reshape(x[-n:], s),
yhuman[tf.reshape(y[-n:], ss).numpy()])
print(tf.reshape(hit[-n:], ss).numpy())
print(yhuman[tf.reshape(yhat[-n:], ss).numpy()])
# + colab={"height": 283} colab_type="code" id="4nDO74UmithH" outputId="47687a04-52ad-41fe-b257-01f06b3271bf"
a = tf.math.exp(max_log_probs)
b = tf.math.exp(max_log_std_probs)
plt.plot(a, b, '.', label='observed');
#sns.jointplot(a.numpy(), b.numpy())
plt.xlabel('mean');
plt.ylabel('std');
p = tf.linspace(0.,1,100)
plt.plot(p, tf.math.sqrt(p * (1 - p)), label='theoretical');
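# The theoretical curve above is the standard deviation of a Bernoulli(p) prediction, sqrt(p * (1 - p)). A quick standalone Monte Carlo check of that identity:

```python
import numpy as np

p = 0.3
rng = np.random.default_rng(0)
samples = rng.binomial(1, p, size=200_000)  # Bernoulli(p) draws
print(samples.std(), np.sqrt(p * (1 - p)))  # both close to 0.458
```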
# + colab={"height": 283} colab_type="code" id="lD0U-zA3bRN7" outputId="61952d6a-5622-480f-da68-43d5ca587e21"
b = max_log_probs
# b = tf.boolean_mask(b, b < 0.)
sns.distplot(tf.math.exp(b).numpy(), bins=20);
plt.xlabel('Posterior Mean Pred Prob');
plt.ylabel('Freq');
# + colab={"height": 283} colab_type="code" id="pdWqz85HrqW5" outputId="bb773a3e-3dc0-449e-f3a2-043342f2ad74"
b = max_log_std_probs
tiny_ = np.finfo(b.dtype.as_numpy_dtype).tiny
b = tf.boolean_mask(b, b > tf.math.log(tiny_))
sns.distplot(tf.math.exp(b).numpy(), bins=20);
plt.xlabel('Posterior Std. Pred Prob');
plt.ylabel('Freq');
# + colab={"height": 283} colab_type="code" id="OBmjOrHOcky6" outputId="525fca08-7303-44e3-b61a-1722966454d0"
b = max_log_avg_std_ovr_prob
sns.distplot(tf.math.exp(b).numpy(), bins=20);
plt.xlabel('Posterior Avg Std. Pred Prob (OVR)');
plt.ylabel('Freq');
# + cellView="form" colab={"height": 51} colab_type="code" id="zedk947YWucS" outputId="03675a4b-443f-4539-e560-c3bf461d7c3d"
#@title Avg One-vs-Rest AUC
try:
bnn_auc = sklearn_metrics.roc_auc_score(
y_keep,
log_probs_keep,
average='macro',
multi_class='ovr')
print('Avg per class AUC:\n{}'.format(bnn_auc))
except TypeError:
bnn_auc = np.array([
sklearn_metrics.roc_auc_score(tf.equal(y_keep, i), log_probs_keep[:, i])
for i in range(num_classes)])
print('Avg per class AUC:\n{}'.format(bnn_auc.mean()))
# + [markdown] colab_type="text" id="a_LR5N47a1ce"
# ### 7 Appendix: Compare against DNN
# + colab={"height": 203} colab_type="code" id="xaE5mdj5a4Xj" outputId="21049ec8-70bc-4fdd-c0d6-bfddedce7e18"
max_pool = tf.keras.layers.MaxPooling2D( # Has no tf.Variables.
pool_size=(2, 2),
strides=(2, 2),
padding='SAME',
data_format='channels_last')
maybe_batchnorm = batchnorm(axis=[-4, -3, -2])
# maybe_batchnorm = lambda x: x
dnn = tfn.Sequential([
lambda x: 2. * tf.cast(x, tf.float32) - 1., # Center.
tfn.Convolution(
input_size=1,
output_size=8,
filter_shape=5,
padding='SAME',
init_kernel_fn=tf.initializers.he_uniform(),
name='conv1'),
maybe_batchnorm,
tf.nn.leaky_relu,
    tfn.Convolution(
        input_size=8,
        output_size=16,
        filter_shape=5,
        padding='SAME',
        init_kernel_fn=tf.initializers.he_uniform(),
        name='conv2'),
    maybe_batchnorm,
    tf.nn.leaky_relu,
    max_pool,  # [28, 28, 16] -> [14, 14, 16]
    tfn.Convolution(
        input_size=16,
        output_size=32,
        filter_shape=5,
        padding='SAME',
        init_kernel_fn=tf.initializers.he_uniform(),
        name='conv3'),
    maybe_batchnorm,
    tf.nn.leaky_relu,
    max_pool,  # [14, 14, 32] -> [7, 7, 32]
tfn.util.flatten_rightmost(ndims=3),
tfn.Affine(
input_size=7 * 7 * 32,
output_size=num_classes - 1,
name='affine1'),
tfb.Pad(),
lambda x: tfd.Categorical(logits=x, dtype=tf.int32),
], name='DNN')
# dnn_eval = tfn.Sequential([l for l in dnn.layers if l is not maybe_batchnorm],
# name='dnn_eval')
dnn_eval = dnn
print(dnn.summary())
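The `input_size=7 * 7 * 32` of the final `Affine` layer follows from two stride-2 `SAME` max-pools applied to 28x28 MNIST images, times the 32 output channels of the last convolution. A quick standalone sketch of that arithmetic (not notebook code):

```python
import math

def same_pool_out(size, stride=2):
    """Output spatial size of a stride-2 pool with SAME padding (ceil division)."""
    return math.ceil(size / stride)

side = 28                      # MNIST spatial side
side = same_pool_out(side)     # after first max_pool: 14
side = same_pool_out(side)     # after second max_pool: 7
flat_size = side * side * 32   # 32 channels from the last convolution
print(flat_size)               # 1568 == 7 * 7 * 32
```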
# + colab={} colab_type="code" id="xEJ5Bd3jBcB5"
def compute_loss_dnn(x, y, is_eval=False):
d = dnn_eval(x) if is_eval else dnn(x)
nll = -tf.reduce_mean(d.log_prob(y), axis=-1)
return nll, d
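`compute_loss_dnn` reduces per-example Categorical log-probabilities to a mean negative log-likelihood. The same reduction in plain NumPy, as a sanity check (the helper name is hypothetical):

```python
import numpy as np

def categorical_nll(log_probs, y):
    """Mean negative log-likelihood of integer labels under per-row log-probs."""
    return -np.mean(log_probs[np.arange(len(y)), y])

log_p = np.log(np.array([[0.7, 0.3], [0.2, 0.8]]))
nll = categorical_nll(log_p, np.array([0, 1]))  # -(log 0.7 + log 0.8) / 2
```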
# + colab={} colab_type="code" id="CtFMjI6LBuQL"
train_iter_dnn = iter(train_dataset)
def train_loss_dnn():
x, y = next(train_iter_dnn)
nll, _ = compute_loss_dnn(x, y)
return nll, None
opt_dnn = tf.optimizers.Adam(learning_rate=0.003)
fit_dnn = tfn.util.make_fit_op(
train_loss_dnn,
opt_dnn,
dnn.trainable_variables,
grad_summary_fn=lambda gs: tf.nest.map_structure(tf.norm, gs))
# + colab={} colab_type="code" id="9RWP0V2KbMdP"
eval_iter_dnn = iter(eval_dataset.batch(2000).repeat())
@tfn.util.tfcompile
def eval_dnn(threshold=None):
x, y = next(eval_iter_dnn)
loss, d = compute_loss_dnn(x, y, is_eval=True)
avg_acc, avg_calibration_error, _ = compute_eval_stats(
y, d, threshold=threshold)
return loss, (avg_acc, avg_calibration_error)
# + cellView="code" colab={"height": 477} colab_type="code" id="i5CCUIKXbQ0n" outputId="a51198c3-2045-4768-d07e-817cace06863"
num_train_epochs = 2. # @param { isTemplate: true}
num_evals = 25  # @param { isTemplate: true}
dur_sec = dur_num = 0
num_train_steps = int(num_train_epochs * train_size)
for i in range(num_train_steps):
start = time.time()
trn_loss, _, g = fit_dnn()
stop = time.time()
dur_sec += stop - start
dur_num += 1
  if i % max(1, num_train_steps // num_evals) == 0 or i == num_train_steps - 1:
tst_loss, (tst_acc, tst_ece) = eval_dnn()
f, x = zip(*[
('it:{:5}', opt_dnn.iterations),
('ms/it:{:6.4f}', dur_sec / max(1., dur_num) * 1000.),
('tst_acc:{:6.4f}', tst_acc),
('tst_ece:{:6.4f}', tst_ece),
('trn_loss:{:6.4f}', trn_loss),
('tst_loss:{:6.4f}', tst_loss),
('sum_norm_grad:{:6.4f}', sum(g)),
])
print(' '.join(f).format(*[getattr(x_, 'numpy', lambda: x_)()
for x_ in x]))
sys.stdout.flush()
dur_sec = dur_num = 0
# if i % 1000 == 0 or i == maxiter - 1:
# dnn.save('/tmp/dnn.npz')
# + cellView="form" colab={} colab_type="code" id="5aVSiYt4lT2R"
#@title Run Eval
eval_iter_dnn = iter(eval_dataset.batch(eval_size))
@tfn.util.tfcompile
def compute_log_probs_dnn():
x, y = next(eval_iter_dnn)
lp = tf.math.log_softmax(dnn_eval(x).logits, axis=-1)
return x, y, lp
x, y, log_probs = compute_log_probs_dnn()
max_log_probs = tf.reduce_max(log_probs, axis=-1)
idx = tf.argsort(max_log_probs)
x = tf.gather(x, idx)
y = tf.gather(y, idx)
log_probs = tf.gather(log_probs, idx)
max_log_probs = tf.gather(max_log_probs, idx)
yhat = tf.argmax(log_probs, axis=-1)
d = tfd.Categorical(logits=log_probs)
hit = tf.cast(tf.equal(y, tf.cast(yhat, y.dtype)), tf.int32)
#threshold = 1.-1e-5
#keep = tf.where(max_log_probs >= np.log(threshold))[..., 0]
keep = tf.range(500, eval_size)
x_keep = tf.gather(x, keep)
y_keep = tf.gather(y, keep)
yhat_keep = tf.gather(yhat, keep)
log_probs_keep = tf.gather(log_probs, keep)
max_log_probs_keep = tf.gather(max_log_probs, keep)
hit_keep = tf.gather(hit, keep)
d_keep = tfd.Categorical(logits=log_probs_keep)
(
avg_acc, ece,
(acc, conf, cnt, edges, bucket),
) = tfn.util.tfcompile(lambda: compute_eval_stats(y, d))()
(
avg_acc_keep, ece_keep,
(acc_keep, conf_keep, cnt_keep, edges_keep, bucket_keep),
) = tfn.util.tfcompile(lambda: compute_eval_stats(y_keep, d_keep))()
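`compute_eval_stats` (defined earlier in the notebook) returns average accuracy and an expected calibration error (ECE) over confidence buckets. A self-contained NumPy sketch of ECE with equal-width bins — the notebook's helper may differ in binning details:

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=10):
    """ECE: bin-weight-averaged |accuracy - mean confidence| over equal-width bins."""
    edges = np.linspace(0., 1., n_bins + 1)
    ece = 0.
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - conf[mask].mean())
            ece += mask.mean() * gap   # weight by fraction of examples in the bin
    return ece

conf = np.array([0.95, 0.95])
correct = np.array([0., 0.])           # always wrong, yet 95% confident
print(expected_calibration_error(conf, correct))  # 0.95: maximally overconfident
```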
# + colab={"height": 85} colab_type="code" id="Z8yGU-b-jZdZ" outputId="7e777a1f-2b1c-4253-f1a4-d9fb36073520"
print('Number of examples undecided: {}'.format(eval_size - tf.size(keep)))
print('Accuracy before excluding undecided ones: {}'.format(avg_acc))
print('Accuracy after excluding undecided ones: {}'.format(avg_acc_keep))
print('ECE before/after:', ece.numpy(), ece_keep.numpy())
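The before/after accuracies printed above illustrate selective prediction: the cell earlier dropped the 500 least-confident examples (the head of the array after sorting by `max_log_probs`), which should raise accuracy whenever confidence correlates with correctness. A toy NumPy sketch of the idea (all values made up):

```python
import numpy as np

def selective_accuracy(confidence, correct, n_drop):
    """Accuracy after abstaining on the n_drop least-confident predictions."""
    keep = np.argsort(confidence)[n_drop:]   # drop lowest-confidence examples
    return correct[keep].mean()

conf = np.array([0.99, 0.98, 0.95, 0.60, 0.55])  # hypothetical confidences
hit = np.array([1., 1., 1., 0., 1.])             # 1 = prediction was correct
full_acc = hit.mean()                            # 0.8
sel_acc = selective_accuracy(conf, hit, 2)       # drop 2 least confident -> 1.0
```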
# + colab={"height": 675} colab_type="code" id="6T0xDQOhja4p" outputId="74da7e6e-07ab-494f-e1e8-7f07dd4a908a"
print('Most uncertain:')
ss = (6, 12)  # display grid: 6 rows x 12 columns
n = np.prod(ss)
s = ss + image_shape
tfn.util.display_imgs(
tf.reshape(x[:n], s),
yhuman[tf.reshape(y[:n], ss).numpy()])
print(tf.reshape(hit[:n], ss).numpy())
print(yhuman[tf.reshape(yhat[:n], ss).numpy()])
# + colab={"height": 675} colab_type="code" id="lcibcRp4jcHV" outputId="8cfd08cf-c2bd-408c-fb23-4fd1b8cba137"
print('Least uncertain:')
tfn.util.display_imgs(
tf.reshape(x[-n:], s),
yhuman[tf.reshape(y[-n:], ss).numpy()])
print(tf.reshape(hit[-n:], ss).numpy())
print(yhuman[tf.reshape(yhat[-n:], ss).numpy()])
# + colab={"height": 269} colab_type="code" id="k_x1Mvws5pXP" outputId="be2b6146-5d30-4d07-bc84-90d2baa30326"
b = max_log_probs + tf.math.log1p(-tf.math.exp(max_log_probs))  # log(p * (1 - p))
b = tf.boolean_mask(b, b < -1e-12)
sns.distplot(tf.math.exp(b).numpy(), bins=20);
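The cell above histograms p(1 − p) evaluated entirely in log space, i.e. log p + log1p(−exp(log p)) = log p + log(1 − p) = log(p(1 − p)). A small standalone NumPy check of that identity (not notebook code):

```python
import numpy as np

p = np.array([0.2, 0.5, 0.999])
log_p = np.log(p)
log_var = log_p + np.log1p(-np.exp(log_p))  # log(p) + log(1 - p)
ok = np.allclose(np.exp(log_var), p * (1 - p))
print(ok)  # True
```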
# + cellView="form" colab={"height": 51} colab_type="code" id="9M8hFsvLlymd" outputId="68ac5784-6bc0-486d-b8d3-766d191eafed"
#@title Avg One-vs-Rest AUC
try:
dnn_auc = sklearn_metrics.roc_auc_score(
y_keep,
log_probs_keep,
average='macro',
multi_class='ovr')
print('Avg per class AUC:\n{}'.format(dnn_auc))
except TypeError:
dnn_auc = np.array([
sklearn_metrics.roc_auc_score(tf.equal(y_keep, i), log_probs_keep[:, i])
for i in range(num_classes)])
print('Avg per class AUC:\n{}'.format(dnn_auc.mean()))
| tensorflow_probability/python/experimental/nn/examples/bnn_mnist_advi.ipynb |