# Mentoria Evolution - Python Exercises
https://minerandodados.com.br
* To run a cell, press **Control + Enter** or click **Run**.
* Cells that run Python scripts must be of type code.
* Create cells with your answers below the cells containing the question statements.
**Note**: If you have doubts, go back to the previous Python lesson. Don't give up :)
## Review Exercises
1) Answer: Is it possible to have elements of different types in the same list? For example, strings and numbers?
```
'Yes, it is possible to have elements of different types in the same list'
exemplo = ['String', 22]
exemplo
```
2) Working with lists:
- a) Create a list containing **names** and **ages**.
- b) Print only the second element of the list.
- c) Print the number of elements in the list.
- d) Replace the value of the second element of the list and print the result.
- e) Print only the values from the second element onward.
- f) Remove any element from the list and print the result.
- g) Define a list called salarios with the values: **900, 1200, 1500, 800, 12587, 10000**.
- h) Check whether the value 10000 is in the salarios list.
- i) Print the smallest and largest values in the list.
- j) Add the value 7000 to the list.
- l) Extend the list with two new elements using a single method.
- m) Print the index of the element with value 800 in the salarios list.
- n) Sort the values of the salarios list in ascending and descending order.
```
nomes = ['Felipe', 22, 'Joao', 35, 'Maria', 18]   # a)
nomes[1]                                          # b)
len(nomes)                                        # c)
nomes[1] = 28                                     # d)
nomes
nomes[1:]                                         # e)
nomes.remove('Joao')                              # f)
nomes
salarios = [900, 1200, 1500, 800, 12587, 10000]   # g)
10000 in salarios                                 # h)
min(salarios), max(salarios)                      # i)
salarios.append(7000)                             # j)
salarios.extend([12000, 8700])                    # l) a single method call
salarios
salarios.index(800)                               # m)
sorted(salarios)                                  # n) ascending
sorted(salarios, reverse=True)                    # n) descending
```
3) Working with dictionaries:
- a) Create a dictionary to store people's names and ages. Example:
pessoas = {'Rodrigo':30, 'Fulana':18}
- b) Print the age of the person "Fulana".
- c) Print the keys of the dictionary created above.
- d) Print the values of the dictionary's keys.
- e) Look up the key "Felipe"; if it does not exist, insert it with the value 30 (note: use the setdefault() method).
```
pessoas = {'Cid': 40, 'Samara': 38, 'Beatriz': 36}
pessoas
pessoas['Samara']
pessoas.keys()
pessoas.values()
pessoas.setdefault('Felipe',30)
pessoas
```
4) Conditional structures:
- a) Check whether 5 is greater than 1; if so, print "5 is greater than 1".
- b) Create the variables x1 and y1 and assign any two values to them.
Check whether x1 is greater than y1. If so, print "x1 is greater than y1"; otherwise print "y1 is greater than x1".
- c) Create a list of values such as [2,3,4,5,6,7] and loop over it, printing every value multiplied by 2.
```
if 5 > 1:
    print('5 is greater than 1')

x1, y1 = 3, 8
print(x1)
print(y1)
if x1 > y1:
    print("x1 is greater than y1")
else:
    print("y1 is greater than x1")

for i in [2, 3, 4, 5, 6, 7]:
    print("Value: %s" % (i * 2))
```
- When you are done, save your notebook and send your answers to **contato@minerandodados.com.br**
# Lecture 14
### Wednesday, October 25th 2017
## Last time:
* Iterators and Iterables
* Trees, Binary trees, and BSTs
## This time:
* BST Traversal
* Generators
* Memory layouts
* Heaps?
# BST Traversal
* We've stored our data in a BST
* This seemed like a good idea at the time because BSTs have some nice properties
* To be able to access/use our data, we need to be able to traverse the tree
#### Traversal Choices
There are three traversal choices based on an implicit ordering of the tree from left to right:
1. In-order: Traverse left subtree, then current root, then right subtree
2. Post-order: Traverse left subtree, then traverse right subtree, and then current root
3. Pre-order: Current root, then traverse left subtree, then traverse right subtree
* Traversing a tree means performing some operation
* In our examples, the operation will be "displaying the data"
* However, an operation could be "deleting files"
## Example
Traverse the BST below using *in-order*, *post-order*, and *pre-order* traversals. Write the resulting sorted data structure (as a list is fine).

# Heaps
We listed several types of data structures at the beginning of our data structures unit.
So far, we have discussed lists and trees (in particular binary trees and binary search trees).
Heaps are a type of tree, a little different from the binary search trees we just discussed.
## Some Motivation
### Priority Queues
* People may come to your customer service counter in a certain order, but you might want to serve your executive class first!
* In other words, there is an "ordering" on your customers and you want to serve people in the order of the most VIP.
* This problem requires us to then sort things by importance and then evaluate things in this sorted order.
* A priority queue is a data structure for this, which allows us to do things more efficiently than simple sorting every time a new thing comes in.
Items are inserted at one end and deleted from the other end of a queue (first in, first out [FIFO] buffer).
The basic priority queue is defined to be supporting three primary operations:
1. Insert: insert an item with "key" (e.g. an importance) $k$ into priority queue $Q$.
2. Find Minimum: get the item, or a pointer to the item, whose key value is smaller than any other key in $Q$.
3. Delete Minimum: Remove the item with minimum $k$ from $Q$.
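The three operations can be sketched with Python's built-in `heapq` module (this jumps ahead to the heap implementation discussed below; the keys and items here are made up):

```python
import heapq

Q = []  # heapq maintains a min-heap invariant on a plain list

# 1. Insert: push items as (key, item) tuples -- tuples compare by key first.
heapq.heappush(Q, (2, "economy"))
heapq.heappush(Q, (0, "executive"))
heapq.heappush(Q, (1, "business"))

# 2. Find minimum: the smallest key is always at index 0.
print(Q[0])              # (0, 'executive')

# 3. Delete minimum: pop removes and returns the smallest-key item.
print(heapq.heappop(Q))  # (0, 'executive')
print(Q[0])              # (1, 'business')
```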
### Comments on Implementation of Priority Queues
One could use an unsorted array and store a pointer to the minimum index; accessing the minimum is an $O(1)$ operation.
* It's cheap to update the pointer when new items are inserted into the array because we update it in $O(1)$ only when the new value is less than the current one.
* Finding a new minimum after deleting the old one requires a scan of the array ($O(n)$ operation) and then resetting the pointer.
One could alternatively implement the priority queue with a *balanced* binary tree structure. Then we'll get performance of $O(\log(n))$!
This leads us to *heaps*. Heaps are a type of balanced binary tree.
* A heap providing access to minimum values is called a *min-heap*
* A heap providing access to maximum values is called a *max-heap*
* Note that a single heap cannot be a *min-heap* and a *max-heap* at the same time
### Heapsort
* Implementing a priority queue with `selection sort` takes $O(n^{2})$ operations
* Using a heap takes $O(n\log(n))$ operations
Implementing a sorting algorithm using a heap is called `heapsort`.
`Heapsort` is an *in-place* sort and requires no extra memory.
Note that there are many sorting algorithms nowadays. `Python` uses [`Timsort`](https://en.wikipedia.org/wiki/Timsort).
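A sketch of heapsort using `heapq` follows; note that this version builds a separate heap rather than sorting in place, so unlike the textbook array-based version it does use extra memory:

```python
import heapq

def heapsort(iterable):
    # heapify is O(n); each of the n pops is O(log n), giving O(n log n) overall.
    h = list(iterable)
    heapq.heapify(h)
    return [heapq.heappop(h) for _ in range(len(h))]

print(heapsort([5, 1, 9, 3, 7]))  # [1, 3, 5, 7, 9]
```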
### Back to Heaps
A heap has two properties:
1. Shape property
* A leaf node at depth $k>0$ can exist only if all the nodes at the previous depth exist. Nodes at any partially filled level are added "from left to right".
2. Heap property
* For a *min-heap*, each node in the tree contains a key less than or equal to either of its two children (if they exist).
- This is also known as the labeling of a "parent node" dominating that of its children.
* For max heaps we use greater-than-or-equal.
#### Heap Mechanics
* The first element in the array is the root key
* The next two elements make up the first level of children. This is done from left to right
* Then the next four and so on.
#### More Details on Heap Mechanics
To construct a heap, insert each new element that comes in at the left-most open spot.
This maintains the shape property but not the heap property.
#### Restore the Heap Property by "Bubbling Up"
Look at the parent and if the child "dominates" we swap parent and child. Repeat this process until we bubble up to the root.
Identifying the dominant is now easy because it will be at the top of the tree.
This process is called `heapify` and must also be done at the first construction of the heap.
#### Deletion
Removing the dominant key creates a hole at the top (the first position in the array).
**Fill this hole with the rightmost position in the array**, or the rightmost leaf node.
This destroys the heap property!
So we now bubble this key down until it dominates all its children.
## Example
1. Construct a *min-heap* for the array $$\left[1, 8, 5, 9, 23, 2, 45, 6, 7, 99, -5\right].$$
2. Delete $-5$ and update the *min-heap*.
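Your hand-built heap can be checked against `heapq`. Building by successive insertion (bubbling up each new key) and then deleting the minimum (the last leaf fills the hole and bubbles down) gives:

```python
import heapq

data = [1, 8, 5, 9, 23, 2, 45, 6, 7, 99, -5]

h = []
for key in data:
    heapq.heappush(h, key)  # insert at the next open spot, then bubble up

print(h[0])       # the root dominates: -5

heapq.heappop(h)  # delete the minimum; last leaf fills the hole and bubbles down
print(h[0])       # new minimum: 1
```

The exact array layout may differ from your hand-derived one (several arrays satisfy both heap properties), but the root must match.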
# Iterables/Iterators Again
We have been discussing data structures and simultaneously exploring iterators and iterables.
```
import reprlib

class SentenceIterator:
    def __init__(self, words):
        self.words = words
        self.index = 0

    def __next__(self):
        try:
            word = self.words[self.index]
        except IndexError:
            raise StopIteration()
        self.index += 1
        return word

    def __iter__(self):
        return self

class Sentence:  # An iterable
    def __init__(self, text):
        self.text = text
        self.words = text.split()

    def __iter__(self):
        return SentenceIterator(self.words)

    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)
```
### Example Usage
```
a = Sentence("Dogs will save the world and cats will eat it.")
for item in a:
    print(item)
print("\n")

it = iter(a)  # it is an iterator
while True:
    try:
        nextval = next(it)
        print(nextval)
    except StopIteration:
        del it
        break
```
#### Every collection in Python is iterable.
We have already seen iterators are used to make for loops. They are also used to make other collections:
* To loop over a file line by line from disk
* In the making of list, dict, and set comprehensions
* In unpacking tuples
* In parameter unpacking in function calls (*args syntax)
An iterator defines both `__iter__` and a `__next__` (the first one is only required to make sure an iterator is an iterable).
**Recap:** An iterator retrieves items from a collection. The collection must implement `__iter__`.
## Generators
* A generator function looks like a normal function, but yields values instead of returning them.
* The syntax is (unfortunately) the same otherwise ([PEP 255 -- Simple Generators](https://www.python.org/dev/peps/pep-0255/)).
* A generator is a different beast. When the function runs, it creates a generator.
* The generator is an iterator and gets an internal implementation of `__iter__` and `__next__`.
```
def gen123():
    print("A")
    yield 1
    print("B")
    yield 2
    print("C")
    yield 3

g = gen123()
print(gen123, " ", type(gen123), " ", type(g))
print("A generator is an iterator.")
print("It has {} and {}".format(g.__iter__, g.__next__))
```
### Some notes on generators
* When `next` is called on the generator, the function proceeds until the first yield.
* The function body is now suspended and the value in the yield is then passed to the calling scope as the outcome of the `next`.
* When `next` is called again, the generator resumes just after the previous `yield` and runs until the next value is yielded.
* This continues until we reach the end of the function, whose implicit return raises `StopIteration` in `next`.
Any Python function that has the yield keyword in its body is a generator function.
```
print(next(g))
print(next(g))
print(next(g))
print(next(g))
```
### More notes on generators
* Generators yield one item at a time
* In this way, they feed the `for` loop one item at a time
```
for i in gen123():
    print(i, "\n")
```
## Lecture Exercise
Create a `Sentence` iterator class that uses a generator expression. You will write the generator expression in the `__iter__` special method. Note that the generator automatically gets `__next__`.
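One possible solution sketch (the sentence text is arbitrary):

```python
import reprlib

class Sentence:  # an iterable whose iterator is a generator
    def __init__(self, text):
        self.text = text
        self.words = text.split()

    def __iter__(self):
        # The generator expression lazily yields one word at a time and
        # supplies __next__ for us -- no separate iterator class needed.
        return (word for word in self.words)

    def __repr__(self):
        return 'Sentence(%s)' % reprlib.repr(self.text)

s = Sentence("Dogs will save the world.")
print(list(s))  # ['Dogs', 'will', 'save', 'the', 'world.']
```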
```
# Initial imports
import pandas as pd
import hvplot.pandas
from path import Path
import plotly.express as px
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
# Load the crypto_data.csv dataset.
file_path = "crypto_data.csv"
df_crypto = pd.read_csv(file_path,index_col = 0)
df_crypto.head()
# Keep all the cryptocurrencies that are being traded.
df_crypto = df_crypto[df_crypto.IsTrading.eq(True)]
df_crypto.shape
# Keep all cryptocurrencies with a defined (working) algorithm
df_crypto = df_crypto[df_crypto['Algorithm'].notna()]
# Remove the "IsTrading" column.
df_crypto = df_crypto.drop(["IsTrading"],axis = 1)
# Remove rows that have at least 1 null value.
df_crypto = df_crypto.dropna(how='any',axis=0)
df_crypto
# Keep the rows where coins are mined.
df_crypto = df_crypto[df_crypto.TotalCoinsMined > 0]
df_crypto
# Create a new DataFrame that holds only the cryptocurrencies names.
names = df_crypto.filter(['CoinName'], axis=1)
# Drop the 'CoinName' column since it's not going to be used on the clustering algorithm.
df_crypto = df_crypto.drop(['CoinName'],axis = 1)
df_crypto
# Use get_dummies() to create variables for text features.
crypto = pd.get_dummies(df_crypto['Algorithm'])
dummy = pd.get_dummies(df_crypto['ProofType'])
combined = pd.concat([crypto,dummy],axis =1)
df = df_crypto.merge(combined,left_index = True,right_index = True)
df = df.drop(['Algorithm','ProofType'],axis = 1)
df
# Standardize the data with StandardScaler().
df_scaled = StandardScaler().fit_transform(df)
print(df_scaled)
# Using PCA to reduce dimension to three principal components.
pca = PCA(n_components=3)
df_pca = pca.fit_transform(df_scaled)
df_pca
# Create a DataFrame with the three principal components.
pcs_df = pd.DataFrame(
    data=df_pca, columns=['PC1', 'PC2', 'PC3'], index=df_crypto.index
)
pcs_df
# Create an elbow curve to find the best value for K.
inertia = []
k = list(range(1, 11))
# Calculate the inertia for the range of K values
for i in k:
    km = KMeans(n_clusters=i, random_state=0)
    km.fit(pcs_df)
    inertia.append(km.inertia_)
elbow_data = {"k":k,"inertia":inertia}
df_elbow = pd.DataFrame(elbow_data)
df_elbow.hvplot.line(x="k",y="inertia",xticks=k,title="Elbow Curve")
# Conclusion: we identified a classification of 531 cryptocurrencies based on similarities in
# their features. Since the output classes are unknown, unsupervised learning with clustering
# algorithms is the best method for grouping the currencies. This classification could be used
# by an investment bank to propose a new cryptocurrency investment portfolio to its clients.
```
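As an illustrative next step (not run on the real `pcs_df`), once the elbow curve suggests a K value — K=4 below is an assumption — KMeans can be fit and the predicted cluster labels attached to the principal-component DataFrame; synthetic blobs stand in for the PCA output here:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic stand-in for the three principal components.
X_demo, _ = make_blobs(n_samples=300, centers=4, n_features=3, random_state=0)
pcs_demo = pd.DataFrame(X_demo, columns=["PC1", "PC2", "PC3"])

# Fit K-Means with the (assumed) elbow value and store the cluster labels.
model = KMeans(n_clusters=4, random_state=0)
pcs_demo["Class"] = model.fit_predict(pcs_demo)
print(pcs_demo["Class"].nunique())  # 4 distinct clusters
```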
# Federated learning: pretrained model
In this notebook, we provide a simple example of how to perform an experiment in a federated environment with the help of the Sherpa.ai Federated Learning framework. We are going to use a popular dataset and a pretrained model.
## The data
The framework provides some functions for loading the [Emnist](https://www.nist.gov/itl/products-and-services/emnist-dataset) digits dataset.
```
import shfl
database = shfl.data_base.Emnist()
train_data, train_labels, test_data, test_labels = database.load_data()
```
Let's inspect some properties of the loaded data.
```
print(len(train_data))
print(len(test_data))
print(type(train_data[0]))
train_data[0].shape
```
So, as we have seen, our dataset is composed of a set of matrices that are 28 by 28. Before starting with the federated scenario, we can take a look at a sample in the training data.
```
import matplotlib.pyplot as plt
plt.imshow(train_data[0])
```
We are going to simulate a federated learning scenario with a set of client nodes containing private data, and a central server that will be responsible for coordinating the different clients. But, first of all, we have to simulate the data contained in every client. In order to do that, we are going to use the previously loaded dataset. The assumption in this example is that the data is distributed as a set of independent and identically distributed random variables, with every node having approximately the same amount of data. There are a set of different possibilities for distributing the data. The distribution of the data is one of the factors that can have the most impact on a federated algorithm. Therefore, the framework has some of the most common distributions implemented, which allows you to easily experiment with different situations. In [Federated Sampling](./federated_learning_sampling.ipynb), you can dig into the options that the framework provides, at the moment.
```
iid_distribution = shfl.data_distribution.IidDataDistribution(database)
federated_data, test_data, test_label = iid_distribution.get_federated_data(num_nodes=20, percent=10)
```
That's it! We have created federated data from the Emnist dataset using 20 nodes and 10 percent of the available data. This data is distributed to a set of data nodes in the form of private data. Let's learn a little more about the federated data.
```
print(type(federated_data))
print(federated_data.num_nodes())
federated_data[0].private_data
```
As we can see, private data in a node is not directly accessible but the framework provides mechanisms to use this data in a machine learning model.
## The model
A federated learning algorithm is defined by a machine learning model, locally deployed in each node, that learns from the respective node's private data, together with an aggregating mechanism that combines the different model parameters uploaded by the client nodes to a central node. In this example, we will build a deep learning model using Keras. The framework provides classes for using TensorFlow (see notebook [Federated learning Tensorflow Model](./federated_learning_basic_concepts_tensorflow.ipynb)) and Keras (see notebook [Federated Learning basic concepts](./federated_learning_basic_concepts.ipynb)) models in a federated learning scenario; your only job is to create a function acting as a model builder. Moreover, the framework provides classes that allow the use of pretrained TensorFlow and Keras models. In this example, we will use a pretrained Keras learning model.
```
import tensorflow as tf
# If you want to execute on a GPU, uncomment the two lines below.
# physical_devices = tf.config.experimental.list_physical_devices('GPU')
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
train_data = train_data.reshape(-1,28,28,1)
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1, input_shape=(28, 28, 1)))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.4))
model.add(tf.keras.layers.Conv2D(32, kernel_size=(3, 3), padding='same', activation='relu', strides=1))
model.add(tf.keras.layers.MaxPooling2D(pool_size=2, strides=2, padding='valid'))
model.add(tf.keras.layers.Dropout(0.3))
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation='relu'))
model.add(tf.keras.layers.Dropout(0.1))
model.add(tf.keras.layers.Dense(64, activation='relu'))
model.add(tf.keras.layers.Dense(10, activation='softmax'))
model.compile(optimizer="rmsprop", loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(x=train_data, y=train_labels, batch_size=128, epochs=3, validation_split=0.2,
          verbose=1, shuffle=False)

def model_builder():
    pretrained_model = model
    criterion = tf.keras.losses.CategoricalCrossentropy()
    optimizer = tf.keras.optimizers.RMSprop()
    metrics = [tf.keras.metrics.categorical_accuracy]
    return shfl.model.DeepLearningModel(model=pretrained_model, criterion=criterion, optimizer=optimizer, metrics=metrics)
```
Now, the only piece missing is the aggregation operator. Nevertheless, the framework provides some aggregation operators that we can use. In the following piece of code, we define the federated aggregation mechanism. Moreover, we define the federated government based on the Keras learning model, the federated data, and the aggregation mechanism.
```
aggregator = shfl.federated_aggregator.FedAvgAggregator()
federated_government = shfl.federated_government.FederatedGovernment(model_builder, federated_data, aggregator)
```
If you want to see all the aggregation operators, you can check out the [Aggregation Operators](./federated_learning_basic_concepts_aggregation_operators.ipynb) notebook. Before running the algorithm, we want to apply a transformation to the data. A good practice is to define a federated operation that will ensure that the transformation is applied to the federated data in all the client nodes. We want to reshape the data, so we define the following FederatedTransformation.
```
import numpy as np
class Reshape(shfl.private.FederatedTransformation):
    def apply(self, labeled_data):
        labeled_data.data = np.reshape(labeled_data.data, (labeled_data.data.shape[0], labeled_data.data.shape[1], labeled_data.data.shape[2], 1))

class CastFloat(shfl.private.FederatedTransformation):
    def apply(self, labeled_data):
        labeled_data.data = labeled_data.data.astype(np.float32)
shfl.private.federated_operation.apply_federated_transformation(federated_data, Reshape())
shfl.private.federated_operation.apply_federated_transformation(federated_data, CastFloat())
```
## Run the federated learning experiment
We are now ready to execute our federated learning algorithm.
```
test_data = np.reshape(test_data, (test_data.shape[0], test_data.shape[1], test_data.shape[2],1))
test_data = test_data.astype(np.float32)
federated_government.run_rounds(2, test_data, test_label)
```
# Tutorial: optimal binning with binary target under uncertainty
The drawback of performing optimal binning given only expected event rates is that the variability of event rates across different periods is not taken into account. In this tutorial, we show how scenario-based stochastic programming allows incorporating uncertainty without much difficulty.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy import stats
from optbinning import OptimalBinning
from optbinning.binning.uncertainty import SBOptimalBinning
```
### Scenario generation
We generate three scenarios, all equally likely, aiming to represent economic scenarios of three different severities, using the customer's score variable as an example.
**Scenario 0 - Normal (Realistic)**: A low customer score has a higher event rate (default rate, churn, etc.) than a high customer score. The populations corresponding to non-event and event are reasonably separated.
```
N0 = int(1e5)
xe = stats.beta(a=4, b=15).rvs(size=N0, random_state=42)
ye = stats.bernoulli(p=0.7).rvs(size=N0, random_state=42)
xn = stats.beta(a=6, b=8).rvs(size=N0, random_state=42)
yn = stats.bernoulli(p=0.2).rvs(size=N0, random_state=42)
x0 = np.concatenate((xn, xe), axis=0)
y0 = np.concatenate((yn, ye), axis=0)
def plot_distribution(x, y):
    plt.hist(x[y == 0], label="n_nonevent", color="b", alpha=0.5)
    plt.hist(x[y == 1], label="n_event", color="r", alpha=0.5)
    plt.legend()
    plt.show()
plot_distribution(x0, y0)
```
**Scenario 1: Good (Optimistic)**: A low customer score has a much higher event rate (default rate, churn, etc.) than a high customer score. The populations corresponding to non-event and event are very well separated, showing minimal overlap.
```
N1 = int(5e4)
xe = stats.beta(a=25, b=50).rvs(size=N1, random_state=42)
ye = stats.bernoulli(p=0.9).rvs(size=N1, random_state=42)
xn = stats.beta(a=22, b=25).rvs(size=N1, random_state=42)
yn = stats.bernoulli(p=0.05).rvs(size=N1, random_state=42)
x1 = np.concatenate((xn, xe), axis=0)
y1 = np.concatenate((yn, ye), axis=0)
plot_distribution(x1, y1)
```
**Scenario 2: Bad (Pessimistic)**: Customers' behavior cannot be accurately segmented, and a general increase in event rates is exhibited. The populations corresponding to non-event and event practically overlap.
```
N2 = int(5e4)
xe = stats.beta(a=4, b=6).rvs(size=N2, random_state=42)
ye = stats.bernoulli(p=0.7).rvs(size=N2, random_state=42)
xn = stats.beta(a=8, b=10).rvs(size=N2, random_state=42)
yn = stats.bernoulli(p=0.4).rvs(size=N2, random_state=42)
x2 = np.concatenate((xn, xe), axis=0)
y2 = np.concatenate((yn, ye), axis=0)
plot_distribution(x2, y2)
```
### Scenario-based stochastic optimal binning
Prepare scenarios data and instantiate an ``SBOptimalBinning`` object class. We set a descending monotonicity constraint with respect to event rate and a minimum bin size.
```
X = [x0, x1, x2]
Y = [y0, y1, y2]
sboptb = SBOptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
sboptb.fit(X, Y)
sboptb.status
```
We obtain "only" three splits guaranteeing feasibility for each scenario.
```
sboptb.splits
sboptb.information(print_level=2)
```
#### The binning table
As other optimal binning algorithms in OptBinning, ``SBOptimalBinning`` also returns a binning table displaying the binned data considering all scenarios.
```
sboptb.binning_table.build()
sboptb.binning_table.plot(metric="event_rate")
sboptb.binning_table.analysis()
```
### Expected value solution (EVS)
The expected value solution is calculated with the normal (expected) scenario.
```
optb = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb.fit(x0, y0)
optb.binning_table.build()
optb.binning_table.plot(metric="event_rate")
optb.binning_table.analysis()
```
### Scenario analysis
#### Scenario 0 - Normal (Realistic)
```
bt0 = sboptb.binning_table_scenario(scenario_id=0)
bt0.build()
bt0.plot(metric="event_rate")
optb0 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb0.fit(x0, y0)
optb0.binning_table.build()
optb0.binning_table.plot(metric="event_rate")
```
Apply expected value solution to scenario 0.
```
evs_optb0 = OptimalBinning(user_splits=optb.splits)
evs_optb0.fit(x0, y0)
evs_optb0.binning_table.build()
evs_optb0.binning_table.plot(metric="event_rate")
```
The expected value solution applied to scenario 0 does not satisfy the ``min_bin_size`` constraint, hence the solution is not feasible.
```
EVS_0 = 0.594974
```
**Scenario 1: Good (Optimistic)**
```
bt1 = sboptb.binning_table_scenario(scenario_id=1)
bt1.build()
bt1.plot(metric="event_rate")
optb1 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb1.fit(x1, y1)
optb1.binning_table.build()
optb1.binning_table.plot(metric="event_rate")
```
Apply expected value solution to scenario 1.
```
evs_optb1 = OptimalBinning(user_splits=optb.splits)
evs_optb1.fit(x1, y1)
evs_optb1.binning_table.build()
evs_optb1.binning_table.plot(metric="event_rate")
```
The expected value solution applied to scenario 1 satisfies neither the ``min_bin_size`` constraint nor the monotonicity constraint, hence the solution is not feasible.
```
EVS_1 = -np.inf
```
**Scenario 2: Bad (Pessimistic)**
```
bt2 = sboptb.binning_table_scenario(scenario_id=2)
bt2.build()
bt2.plot(metric="event_rate")
optb2 = OptimalBinning(monotonic_trend="descending", min_bin_size=0.05)
optb2.fit(x2, y2)
optb2.binning_table.build()
optb2.binning_table.plot(metric="event_rate")
```
Apply expected value solution to scenario 2.
```
evs_optb2 = OptimalBinning(user_splits=optb.splits)
evs_optb2.fit(x2, y2)
evs_optb2.binning_table.build()
evs_optb2.binning_table.plot(metric="event_rate")
```
The expected value solution applied to scenario 2 satisfies neither the ``min_bin_size`` constraint nor the monotonicity constraint, hence the solution is not feasible.
```
EVS_2 = -np.inf
```
### Expected value of perfect information (EVPI)
If we had prior information about the incoming economic scenario, we could take the optimal solution for each scenario, with expected total IV:
```
DIV0 = optb0.binning_table.iv
DIV1 = optb1.binning_table.iv
DIV2 = optb2.binning_table.iv
DIV = (DIV0 + DIV1 + DIV2) / 3
DIV
```
However, this information is unlikely to be available in advance, so the best we can do in the long run is to use the stochastic programming solution, with expected total IV:
```
SIV = sboptb.binning_table.iv
SIV
```
The difference between the two, in the case of perfect information, is the expected value of perfect information (EVPI), given by:
```
EVPI = DIV - SIV
EVPI
```
### Value of stochastic solution (VSS)
The loss in IV from not considering stochasticity is the difference between the result of applying the expected value solution to each scenario and the stochastic model IV. Applying the EVS to each scenario results in infeasible solutions, thus
```
VSS = SIV - (EVS_0 + EVS_1 + EVS_2)
VSS
```
## Using Isolation Forest to Detect Criminally-Linked Properties
The goal of this notebook is to apply the Isolation Forest anomaly detection algorithm to the property data. The algorithm is particularly good at detecting anomalous data points in cases of extreme class imbalance. After normalizing the data and splitting into a training set and test set, I trained the first model.
Next, I manually selected a few features that, based on my experience investigating money-laundering and asset tracing, I thought would be most important and trained a model on just those.
```
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn import preprocessing
from sklearn.metrics import classification_report, confusion_matrix, recall_score, roc_auc_score
from sklearn.metrics import make_scorer, precision_score, accuracy_score
from sklearn.ensemble import IsolationForest
from sklearn.decomposition import PCA
import seaborn as sns
import itertools
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style('dark')
```
#### Load Data and Remove Columns
```
# Read in the data
df = pd.read_hdf('../data/processed/bexar_true_labels.h5')
print("Number of properties:", len(df))
# Get criminal property rate
crim_prop_rate = 1 - (len(df[df['crim_prop']==0]) / len(df))
print("Rate is: {:.5%}".format(crim_prop_rate))
# Re-label the normal properties with 1 and the criminal ones with -1
df['binary_y'] = [1 if x==0 else -1 for x in df.crim_prop]
print(df.binary_y.value_counts())
# Normalize the data
X = df.iloc[:,1:-2]
X_norm = preprocessing.normalize(X)
y = df.binary_y
# Split the data into training and test
X_train_norm, X_test_norm, y_train_norm, y_test_norm = train_test_split(
    X_norm, y, test_size=0.33, random_state=42
)
```
#### UDFs
```
# Define function to plot resulting confusion matrix
def plot_confusion_matrix(conf_matrix, title, classes=['criminally-linked', 'normal'],
                          cmap=plt.cm.Oranges):
    """Plot confusion matrix with heatmap and classification statistics."""
    conf_matrix = conf_matrix.astype('float') / conf_matrix.sum(axis=1)[:, np.newaxis]
    plt.figure(figsize=(8, 8))
    plt.imshow(conf_matrix, interpolation='nearest', cmap=cmap)
    plt.title(title, fontsize=18)
    plt.colorbar(pad=.12)
    tick_marks = np.arange(len(classes))
    plt.xticks(tick_marks, classes, rotation=45, fontsize=11)
    plt.yticks(tick_marks, classes, rotation=45, fontsize=11)
    fmt = '.4%'
    thresh = conf_matrix.max() / 2.
    for i, j in itertools.product(range(conf_matrix.shape[0]), range(conf_matrix.shape[1])):
        plt.text(j, i, format(conf_matrix[i, j], fmt),
                 horizontalalignment="center",
                 verticalalignment="top",
                 fontsize=16,
                 color="white" if conf_matrix[i, j] > thresh else "black")
    plt.ylabel('True label', fontsize=14, rotation=0)
    plt.xlabel('Predicted label', fontsize=14)
# Function for returning the model metrics
def metrics_iforest(y_true, y_pred):
    """Return model metrics."""
    print('Model recall is', recall_score(
        y_true,
        y_pred,
        zero_division=0,
        pos_label=-1
    ))
    print('Model precision is', precision_score(
        y_true,
        y_pred,
        zero_division=0,
        pos_label=-1
    ))
    print("Model AUC is", roc_auc_score(y_true, y_pred))
# Function for histograms of anomaly scores
def anomaly_plot(anomaly_scores, anomaly_scores_list, title):
    """Plot histograms of anomaly scores."""
    plt.figure(figsize=[15, 9])
    plt.subplot(211)
    plt.hist(anomaly_scores, bins=100, log=False, color='royalblue')
    for xc in anomaly_scores_list:
        plt.axvline(x=xc, color='red', linestyle='--', linewidth=0.5, label='criminally-linked property')
    plt.title(title, fontsize=16)
    handles, labels = plt.gca().get_legend_handles_labels()
    by_label = dict(zip(labels, handles))
    plt.legend(by_label.values(), by_label.keys(), fontsize=14)
    plt.ylabel('Number of properties', fontsize=13)
    plt.subplot(212)
    plt.hist(anomaly_scores, bins=100, log=True, color='royalblue')
    for xc in anomaly_scores_list:
        plt.axvline(x=xc, color='red', linestyle='--', linewidth=0.5, label='criminally-linked property')
    plt.xlabel('Anomaly score', fontsize=13)
    plt.ylabel('Number of properties', fontsize=13)
    plt.title('{} (Log Scale)'.format(title), fontsize=16)
    plt.show()
```
#### Gridsearch
Isolation Forest is fairly robust to parameter changes, but changes in the contamination rate affect performance. I will grid search over contamination values from 0.01 to 0.25 in steps of 0.05.
```
# Set what metrics to evaluate predictions
scoring = {
'AUC': 'roc_auc',
'Recall': make_scorer(recall_score,pos_label=-1),
'Precision': make_scorer(precision_score,pos_label=-1)
}
gs = GridSearchCV(
IsolationForest(max_samples=0.25, random_state=42,n_estimators=100),
param_grid={'contamination': np.arange(0.01, 0.25, 0.05)},
scoring=scoring,
refit='Recall',
verbose=0,
cv=3
)
# Fit to training data
gs.fit(X_train_norm,y_train_norm)
print(gs.best_params_)
```
##### Model Performance on Training Data
```
y_pred_train_gs = gs.predict(X_train_norm)
metrics_iforest(y_train_norm,y_pred_train_gs)
conf_matrix = confusion_matrix(y_train_norm, y_pred_train_gs)
print(conf_matrix)
plot_confusion_matrix(conf_matrix, title='Isolation Forest Confusion Matrix on Training Data')
```
Model recall is decent, but the precision is quite poor; the model is labeling >20% of innocent properties as criminal.
##### Model Performance on Test Data
```
y_pred_test_gs = gs.predict(X_test_norm)
metrics_iforest(y_test_norm,y_pred_test_gs)
conf_matrix = confusion_matrix(y_test_norm, y_pred_test_gs)
print(conf_matrix)
plot_confusion_matrix(conf_matrix, title='Isolation Forest Confusion Matrix on Test Data')
```
Similar to its performance on the training data, the model produces a tremendous number of false positives. While better than false negatives, were this model implemented to screen properties, it would waste a lot of manual labor checking falsely-labeled properties.
Given the context of detecting money-laundering and ill-gotten funds, more false positives are acceptable to reduce false negatives, but the model produces far too many.
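One way to act on that preference without refitting is to move the cut-off applied to the anomaly scores yourself, since `predict` is just a threshold on the `decision_function` output. A minimal sketch with toy scores (illustrative values, not the model's actual output):

```python
import numpy as np

def precision_recall_at_threshold(scores, y_true, threshold):
    """Label points with score below `threshold` as anomalies (-1).

    `scores` follow sklearn's decision_function convention (lower = more
    anomalous); `y_true` uses -1 for anomalies and 1 for normal points.
    """
    y_pred = np.where(scores < threshold, -1, 1)
    tp = np.sum((y_pred == -1) & (y_true == -1))
    fp = np.sum((y_pred == -1) & (y_true == 1))
    fn = np.sum((y_pred == 1) & (y_true == -1))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy scores: the two true anomalies (-1) tend to score lower
scores = np.array([-0.2, -0.1, 0.05, 0.1, 0.2, 0.3])
y_true = np.array([-1, -1, 1, 1, 1, 1])

# A strict (low) cut-off flags fewer points: higher precision, lower recall
p_strict, r_strict = precision_recall_at_threshold(scores, y_true, -0.15)
# A loose (high) cut-off flags more points: lower precision, higher recall
p_loose, r_loose = precision_recall_at_threshold(scores, y_true, 0.08)
```

Lowering the cut-off buys precision at the cost of recall; raising it does the opposite, trading manual review effort for fewer missed properties.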
#### Visualize Distribution of Anomaly Scores
Scikit-learn's Isolation Forest provides an anomaly score for each property: the lower the score, the more anomalous the data point.
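A minimal illustration of that sign convention on synthetic data (not the property data used in this notebook):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)
X = np.vstack([rng.normal(0, 1, size=(100, 2)),  # dense "normal" cluster
               [[8.0, 8.0]]])                    # one obvious outlier

iso = IsolationForest(random_state=42).fit(X)
scores = iso.decision_function(X)  # lower = more anomalous
preds = iso.predict(X)             # -1 = anomaly, 1 = normal
```

`decision_function` returns `score_samples` shifted by the fitted threshold, so negative scores correspond to the points `predict` labels `-1`.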
##### Training Data
```
# Grab anomaly scores for criminally-linked properties
train_df = pd.DataFrame(X_train_norm)
y_train_series = y_train_norm.reset_index()
train_df['y_value'] = y_train_series.binary_y
train_df['anomaly_scores'] = gs.decision_function(X_train_norm)
anomaly_scores_list = train_df[train_df.y_value==-1]['anomaly_scores']
print("Mean score for outlier properties:",np.mean(anomaly_scores_list))
print("Mean score for normal properties:",np.mean(train_df[train_df.y_value==1]['anomaly_scores']))
anomaly_plot(train_df['anomaly_scores'],
anomaly_scores_list,
title='Distribution of Anomaly Scores across Training Data')
```
##### Test Data
```
test_df = pd.DataFrame(X_test_norm)
y_test_series = y_test_norm.reset_index()
test_df['y_value'] = y_test_series.binary_y
test_df['anomaly_scores'] = gs.decision_function(X_test_norm)
anomaly_scores_list_test = test_df[test_df.y_value==-1]['anomaly_scores']
print("Mean score for outlier properties:",np.mean(anomaly_scores_list_test))
print("Mean score for normal properties:",np.mean(test_df[test_df.y_value==1]['anomaly_scores']))
anomaly_plot(test_df['anomaly_scores'],
anomaly_scores_list_test,
title='Distribution of Anomaly Scores across Test Data'
)
```
The top plots give a sense of how skewed the distribution is and how much lower the anomaly scores of the criminally-linked properties are compared to the broader population. The log-scale histogram highlights just how many properties have quite low anomaly scores; these are returned as false positives.
#### Model with Select Features
Since Isolation Forest does not expose `feature_importances_`, I wanted to see if I could use my background in investigating money laundering to select a few features that would be the best indicators of "abnormal" properties.
```
# Grab specific columns
X_trim = X[['partial_owner','just_established_owner',
'foreign_based_owner','out_of_state_owner',
'owner_legal_person','owner_likely_company',
'owner_owns_multiple','two_gto_reqs']]
# Normalize
X_trim_norm = preprocessing.normalize(X_trim)
# Split the data into train and test
X_train_trim, X_test_trim, y_train_trim, y_test_trim = train_test_split(
X_trim_norm, y, test_size=0.33, random_state=42
)
scoring = {
'AUC': 'roc_auc',
'Recall': make_scorer(recall_score, pos_label=-1),
'Precision': make_scorer(precision_score, pos_label=-1)
}
gs_trim = GridSearchCV(
IsolationForest(max_samples=0.25, random_state=42,n_estimators=100),
param_grid={'contamination': np.arange(0.01, 0.25, 0.05)},
scoring=scoring,
refit='Recall',
verbose=0,
cv=3
)
# Fit to training data
gs_trim.fit(X_train_trim,y_train_trim)
print(gs_trim.best_params_)
```
##### Training Data
```
y_pred_train_gs_trim = gs_trim.predict(X_train_trim)
metrics_iforest(y_train_trim,y_pred_train_gs_trim)
conf_matrix = confusion_matrix(y_train_trim, y_pred_train_gs_trim)
print(conf_matrix)
plot_confusion_matrix(conf_matrix, title='Conf Matrix on Training Data with Select Features')
```
Reducing the data to select features costs the model two true positives, but massively improves the false positive count (753 down to 269). Overall, model precision is still poor.
##### Test Data
```
y_pred_test_trim = gs_trim.predict(X_test_trim)
metrics_iforest(y_test_trim,y_pred_test_trim)
conf_matrix = confusion_matrix(y_test_trim, y_pred_test_trim)
print(conf_matrix)
plot_confusion_matrix(conf_matrix, title='Conf Matrix on Test Data with Select Features')
```
The model trained on select features performs better than the first on the test data both in terms of correct labels and reducing false positives.
#### Final Notes
- For both models, recall is strong, indicating the model is able to detect something anomalous about the criminal properties. However, model precision is awful, meaning it does so at the expense of many false positives.
- Selecting features based on my experience in the field improves model precision.
- There are many properties that the models find more "anomalous" than the true positives. This could indicate the criminals have done a good job of making their properties appear relatively "innocent" in the broad spectrum of residential property ownership in Bexar County.
| github_jupyter |
```
%reset -f
## PFLOTRAN
import jupypft.model as mo
import jupypft.parameter as pm
import jupypft.attachmentRateCFT as arCFT
import jupypft.plotBTC as plotBTC
```
# Build the Case Directory
```
## Temperatures
Ref,Atm,Tin = pm.Real(tag="<initialTemp>",value=10.,units="C",mathRep="$$T_{0}$$"),\
pm.Real(tag="<atmosphereTemp>",value=10,units="C",mathRep="$$T_{atm}$$"),\
pm.Real(tag="<leakageTemp>",value=10., units="C",mathRep="$$T_{in}$$")
LongDisp = pm.Real(tag="<longDisp>",value=0.0,units="m",mathRep="$$\\alpha_L$$")
#Gradients
GX,GY,GZ = pm.Real(tag="<GradientX>",value=0.,units="-",mathRep="$$\\partial_x h$$"),\
pm.Real(tag="<GradientY>",value=0.,units="-",mathRep="$$\\partial_y h$$"),\
pm.Real(tag="<GradientZ>",value=0.,units="-",mathRep="$$\\partial_z h$$")
## Dimensions
LX,LY,LZ = pm.Real("<LenX>",value=200,units="m",mathRep="$$LX$$"),\
pm.Real("<LenY>",value=100,units="m",mathRep="$$LY$$"),\
pm.Real("<LenZ>",value=20,units="m",mathRep="$$LZ$$")
## Permeability
kX,kY,kZ = pm.Real(tag="<PermX>",value=1.0E-8,units="m²",mathRep="$$k_{xx}$$"),\
pm.Real(tag="<PermY>",value=1.0E-8,units="m²",mathRep="$$k_{yy}$$"),\
pm.Real(tag="<PermZ>",value=1.0E-8,units="m²",mathRep="$$k_{zz}$$")
theta = pm.Real(tag="<porosity>",value=0.35,units="adim",mathRep="$$\\theta$$")
## Extraction well
outX1,outX2 = pm.Real(tag="<outX1>",value=LX.value/2.,units="m",mathRep="$$x_{1,Q_{out}}$$"),\
pm.Real(tag="<outX2>",value=LX.value/2.,units="m",mathRep="$$x_{2,Q_{out}}$$")
outY1,outY2 = pm.Real(tag="<outY1>",value=LY.value/2.,units="m",mathRep="$$y_{1,Q_{out}}$$"),\
pm.Real(tag="<outY2>",value=LY.value/2.,units="m",mathRep="$$y_{2,Q_{out}}$$")
outZ1,outZ2 = pm.Real(tag="<outZ1>",value=LZ.value/2. ,units="m",mathRep="$$z_{1,Q_{out}}$$"),\
pm.Real(tag="<outZ2>",value=LZ.value - 1.0,units="m",mathRep="$$z_{2,Q_{out}}$$")
## Extraction rate
Qout = pm.Real(tag="<outRate>",value=-21.0,units="m³/d",mathRep="$$Q_{out}$$")
setbackDist = 40.
## Injection point
inX1,inX2 = pm.Real(tag="<inX1>",value=outX1.value + setbackDist,units="m",mathRep="$$x_{1,Q_{in}}$$"),\
pm.Real(tag="<inX2>",value=outX2.value + setbackDist,units="m",mathRep="$$x_{2,Q_{in}}$$")
inY1,inY2 = pm.Real(tag="<inY1>",value=outY1.value + 0.0,units="m",mathRep="$$y_{1,Q_{in}}$$"),\
pm.Real(tag="<inY2>",value=outY2.value + 0.0,units="m",mathRep="$$y_{2,Q_{in}}$$")
inZ1,inZ2 = pm.Real(tag="<inZ1>",value=LZ.value - 5.0,units="m",mathRep="$$z_{1,Q_{in}}$$"),\
pm.Real(tag="<inZ2>",value=LZ.value - 1.0,units="m",mathRep="$$z_{2,Q_{in}}$$")
## Concentration
C0 = pm.Real("<initialConcentration>", value=1.0, units="mol/L")
## Injection rate
Qin = pm.Real(tag="<inRate>",value=0.24, units="m³/d",mathRep="$$Q_{in}$$")
## Grid
nX,nY,nZ = pm.Integer("<nX>",value=41,units="-",mathRep="$$nX$$"),\
pm.Integer("<nY>",value=21 ,units="-",mathRep="$$nY$$"),\
pm.Integer("<nZ>",value=1,units="-",mathRep="$$nZ$$")
dX,dY,dZ = pm.JustText("<dX>"),\
pm.JustText("<dY>"),\
pm.JustText("<dZ>")
CellRatio = { 'X' : 2.0, 'Y' : 2.0, 'Z' : 0.75 }
#CellRatio = { 'X' : 1.00, 'Y' : 0.50, 'Z' : 0.75 }
dX.value = mo.buildDXYZ(LX.value,CellRatio['X'],nX.value,hasBump=True)
dY.value = mo.buildDXYZ(LY.value,CellRatio['Y'],nY.value,hasBump=True)
if nZ.value == 1:
dZ.value = LZ.value
else:
dZ.value = mo.buildDXYZ(LZ.value,CellRatio['Z'],nZ.value,hasBump=False)
# Time config
endTime = pm.Real("<endTime>",value=100.,units="d")
## Bioparticle
kAtt,kDet = pm.Real(tag="<katt>",value=1.0E-30,units="1/s",mathRep="$$k_{att}$$"),\
pm.Real(tag="<kdet>",value=1.0E-30,units="1/s",mathRep="$$k_{det}$$")
decayAq,decayIm = pm.Real(tag="<decayAq>",value=1.0E-30,units="1/s",mathRep="$$\\lambda_{aq}$$"),\
pm.Real(tag="<decayIm>",value=1.0E-30,units="1/s",mathRep="$$\\lambda_{im}$$")
caseDict = {
"Temp":{
"Reference" : Ref,
"Atmosphere": Atm,
"Injection" : Tin },
"longDisp":LongDisp,
"Gradient":{
"X" :GX,
"Y" :GY,
"Z" :GZ },
"L":{
"X" :LX,
"Y" :LY,
"Z" :LZ },
"k":{
"X" :kX,
"Y" :kY,
"Z" :kZ },
"theta":theta,
"outCoord":{
"X" : { 1 : outX1,
2 : outX2},
"Y" : { 1 : outY1,
2 : outY2},
"Z" : { 1 : outZ1,
2 : outZ2}},
"inCoord":{
"X" : { 1 : inX1,
2 : inX2},
"Y" : { 1 : inY1,
2 : inY2},
"Z" : { 1 : inZ1,
2 : inZ2}},
"C0":C0,
"Q":{"In":Qin,
"Out":Qout},
"nGrid":{"X":nX,
"Y":nY,
"Z":nZ},
"dGrid":{"X":dX,
"Y":dY,
"Z":dZ},
"endTime":endTime,
"BIOPARTICLE":{
"katt" : kAtt,
"kdet" : kDet,
"decayAq" : decayAq,
"decayIm" : decayIm}
}
import pickle
with open('caseDict.pkl', 'wb') as f:
pickle.dump(caseDict,f)
```
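The saved dictionary can be read back in a downstream notebook with `pickle.load`; note that unpickling the real `caseDict.pkl` requires `jupypft` to be importable, because the stored values are jupypft parameter objects. A self-contained round-trip sketch with a plain stand-in dict:

```python
import pickle

# Stand-in for caseDict so the snippet is self-contained; the real file
# holds jupypft parameter objects, not plain floats.
case = {"L": {"X": 200.0, "Y": 100.0, "Z": 20.0}, "endTime": 100.0}

with open('caseDict_demo.pkl', 'wb') as f:
    pickle.dump(case, f)

with open('caseDict_demo.pkl', 'rb') as f:
    reloaded = pickle.load(f)
```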
| github_jupyter |
## GAN (Pytorch)
### Terminal : tensorboard --logdir=./GAN
Reference : https://github.com/aladdinpersson/Machine-Learning-Collection/blob/master/ML/Pytorch/GANs/1.%20SimpleGAN/fc_gan.py
$$
\min_{\theta_{g}} \max_{\theta_{d}} \left[ \mathbb{E}_{x\sim P_{data}} \log D_{\theta_{d}}(x) + \mathbb{E}_{z\sim P_{z}} \log\left(1 - D_{\theta_{d}}(G_{\theta_{g}}(z))\right) \right]
$$
- For D, maximize the objective by making $D(x)$ close to 1 and $D(G(z))$ close to 0
- For G, minimize the objective by making $D(G(z))$ close to 1
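The two bullets above map directly onto binary cross-entropy targets, which is what the `nn.BCELoss` calls in the training loop below implement. A plain-NumPy sketch of the arithmetic (the 0.9/0.1 discriminator outputs are illustrative):

```python
import numpy as np

def bce(pred, target):
    """Binary cross-entropy, the same quantity nn.BCELoss computes."""
    eps = 1e-12  # guard against log(0)
    return -np.mean(target * np.log(pred + eps)
                    + (1 - target) * np.log(1 - pred + eps))

# Discriminator: real samples should score near 1, fakes near 0
d_real, d_fake = 0.9, 0.1
loss_d = 0.5 * (bce(np.array([d_real]), np.array([1.0]))
                + bce(np.array([d_fake]), np.array([0.0])))

# Generator (non-saturating form): push D(G(z)) toward 1
loss_g = bce(np.array([d_fake]), np.array([1.0]))
```

With these outputs the discriminator loss is small (it is doing well) while the generator loss is large, which is exactly the gradient signal the generator trains on.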
```
import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.datasets as datasets
from torch.utils.data import DataLoader
import torchvision.transforms as transforms
from torch.utils.tensorboard import SummaryWriter
import warnings
warnings.filterwarnings('ignore')
class Disciminator(nn.Module):
def __init__(self, img_dim):
super().__init__()
self.disc = nn.Sequential(
nn.Linear(img_dim, 128),
nn.LeakyReLU(0.1),
nn.Linear(128,1),
nn.Sigmoid(),
)
def forward(self, x):
return self.disc(x)
class Generator(nn.Module):
def __init__(self, z_dim, img_dim):
super().__init__()
self.gen = nn.Sequential(
nn.Linear(z_dim, 256),
nn.LeakyReLU(0.1),
nn.Linear(256, img_dim),
nn.Tanh(),
)
def forward(self, x):
return self.gen(x)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
lr = 3e-4
z_dim = 64 #128, 256
image_dim = 28*28*1 #784
batch_size = 32
num_epochs = 10
disc = Disciminator(image_dim).to(device)
gen = Generator(z_dim, image_dim).to(device)
fixed_noise = torch.randn((batch_size, z_dim)).to(device)
transforms = transforms.Compose(
[transforms.ToTensor(), transforms.Normalize((0.1307,),(0.3081,))]
)
dataset = datasets.MNIST(root='./content',
transform=transforms,
download=False)
loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
opt_disc = optim.Adam(disc.parameters(),lr=lr)
opt_gen = optim.Adam(gen.parameters(),lr=lr)
criterion = nn.BCELoss()
writer_fake = SummaryWriter("./GAN/fake")
writer_real = SummaryWriter("./GAN/real")
step = 0
for epoch in range(num_epochs):
for batch_idx, (real, _) in enumerate(loader):
real = real.view(-1, 784).to(device)
batch_size = real.shape[0]
### Train Disciminator : max log(D(real)) + log(1-D(G(z)))
noise = torch.randn(batch_size, z_dim).to(device)
fake = gen(noise)
disc_real = disc(real).view(-1)
lossD_real = criterion(disc_real, torch.ones_like(disc_real))
disc_fake = disc(fake).view(-1)
lossD_fake = criterion(disc_fake, torch.zeros_like(disc_fake))
lossD = (lossD_real + lossD_fake) / 2
disc.zero_grad()
lossD.backward(retain_graph=True)
opt_disc.step()
### Train Generator min log(1-D(G(z))) <-> max log(D(G(z)))
output = disc(fake).view(-1)
lossG = criterion(output, torch.ones_like(output))
gen.zero_grad()
lossG.backward()
opt_gen.step()
if batch_idx == 0:
print(f"Epoch [{epoch}/{num_epochs}] "
f"Loss D : {lossD : .4f}, Loss G : {lossG : .4f}"
)
with torch.no_grad():
fake = gen(fixed_noise).reshape(-1,1,28,28)
data = real.reshape(-1,1,28,28)
img_grid_fake = torchvision.utils.make_grid(fake, normalize=True)
img_grid_real = torchvision.utils.make_grid(data, normalize=True)
writer_fake.add_image(
"MNIST Fake Images", img_grid_fake, global_step=step)
writer_real.add_image(
"MNIST Real Images", img_grid_real, global_step=step)
step += 1
```
+ cuda device
+ change learning rate
+ change Normalization
+ change batchnorm
+ architecture change CNN
| github_jupyter |
```
from math import sin, cos, log, ceil
import numpy
from matplotlib import pyplot
%matplotlib inline
from matplotlib import rcParams
rcParams['font.family'] = 'serif'
rcParams['font.size']=16
# model parameters:
g= 9.8 #[m/s^2]
v_t = 20.0 #[m/s] trim velocity
C_D = 1/40 #drag coef.
C_L = 1 #coefficient of lift
#ICs
v0 = v_t
theta0 = 0
x0 = 0
y0 = 1000
def f(u):
""" Returns RHS of phugoid system of eqns.
parameters:
u - array of float with solution at time n
returns:
dudt - array of float with solution of RHS given u
"""
v = u[0]
theta = u[1]
x = u[2]
y = u[3]
return numpy.array([-g*sin(theta) - C_D/C_L*g/v_t**2*v**2,
-g*cos(theta)/v +g/v_t**2*v,
v*cos(theta),
v*sin(theta)])
def euler(u,f,dt):
"""Euler's method, returns next time step
u: soln. at previous time step
f: function to compute RHS of system of equations
dt: time step.
"""
return u + dt*f(u)
T = 100 #t_final
dt = 0.1
N = int(T/dt) + 1
t = numpy.linspace(0,T,N)
#initialize array
u = numpy.empty((N,4))
u[0] = numpy.array([v0, theta0, x0, y0]) #ICs
#Euler's method
for n in range(N-1):
u[n+1] = euler(u[n], f, dt)
x = u[:,2]
y = u[:,3]
pyplot.figure(figsize=(8,6))
pyplot.grid(True)
pyplot.xlabel(r'x', fontsize=18)
pyplot.ylabel(r'y', fontsize=18)
pyplot.title('Glider trajectory, flight time = %.2f' %T, fontsize=18)
pyplot.plot(x,y, lw=2);
dt_values = numpy.array([0.1, 0.05, 0.01, 0.005, 0.001])
u_values = numpy.empty_like(dt_values, dtype=numpy.ndarray)
for i, dt in enumerate(dt_values):
N=int(T/dt) + 1
t = numpy.linspace(0.0, T, N)
#initialize solution array
u = numpy.empty((N,4))
u[0] = numpy.array([v0, theta0, x0, y0])
for n in range(N-1):
u[n+1] = euler(u[n], f, dt)
u_values[i] = u
def get_diffgrid(u_current, u_fine, dt):
"""Returns the difference between one grid and the finest grid using the L1 norm
parameters:
u_current: solution on current grid
u_finest: solution on fine grid
dt
returns:
diffgrid: difference computed in L1 norm
"""
N_current = len(u_current[:,0])
N_fine = len(u_fine[:,0])
grid_size_ratio = ceil(N_fine/N_current)
diffgrid = dt*numpy.sum(numpy.abs(u_current[:,2]-u_fine[::grid_size_ratio,2]))
return diffgrid
diffgrid = numpy.empty_like(dt_values)
for i, dt in enumerate(dt_values):
print('dt = {}'.format(dt))
diffgrid[i] = get_diffgrid(u_values[i], u_values[-1], dt)
pyplot.figure(figsize=(6,6))
pyplot.grid(True)
pyplot.xlabel(r'$\Delta t$', fontsize=18)
pyplot.ylabel('$L_1$-norm of the grid differences', fontsize=18)
pyplot.axis('equal')
pyplot.loglog(dt_values[:-1], diffgrid[:-1], ls='-', marker='o', lw=2);
r = 2 # refinement ratio between successive grid spacings
h = 0.001
dt_values2 = numpy.array([h, r*h, r**2*h])
u_values2 = numpy.empty_like(dt_values2, dtype=numpy.ndarray)
diffgrid2 = numpy.empty(2)
for i, dt in enumerate(dt_values2):
N = int(T/dt) + 1 # number of time-steps
### discretize the time t ###
t = numpy.linspace(0.0, T, N)
# initialize the array containing the solution for each time-step
u = numpy.empty((N, 4))
u[0] = numpy.array([v0, theta0, x0, y0])
for n in range(N-1):
u[n+1] = euler(u[n], f, dt)
# store the value of u related to one grid
u_values2[i] = u
#calculate f2 - f1
diffgrid2[0] = get_diffgrid(u_values2[1], u_values2[0], dt_values2[1])
#calculate f3 - f2
diffgrid2[1] = get_diffgrid(u_values2[2], u_values2[1], dt_values2[2])
# calculate the order of convergence
p = (log(diffgrid2[1]) - log(diffgrid2[0])) / log(r)
print('The order of convergence is p = {:.3f}'.format(p));
```
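As a sanity check on the formula for $p$, plug in hypothetical grid differences: for a first-order scheme like Euler's method, the error scales linearly with the time step, so refining by $r$ should shrink the difference by about a factor of $r$ and give $p \approx 1$ (the values below are made up for illustration):

```python
from math import log

r = 2.0        # refinement ratio between successive grids
f2_f1 = 0.010  # hypothetical ||f_2 - f_1||, finer pair of grids
f3_f2 = 0.020  # hypothetical ||f_3 - f_2||, coarser pair of grids

# Observed order of convergence, same formula as in the cell above
p = (log(f3_f2) - log(f2_f1)) / log(r)
```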
Paper Airplane Challenge:
- Find a combination of launch angle and velocity that gives best distance.
```
L_D = 5.0
C_D = 1/L_D
v_t = 4.9 #[m/s]
#ICs
theta0 = 0
x0 = 0
y0 = 2 #[m] - a realistic height to throw a paper airplane from
v0 = v_t
#height = y0
#t = [0]
dt = 0.001
T=20
N = int(T/dt)+1
t = numpy.linspace(0,T, N)
def challenge(arg1, arg2):
"""Integrate the trajectory for initial speed arg1 and launch angle arg2 until the plane hits the ground."""
u = numpy.empty((N,4))
u[0] = numpy.array([arg1, arg2, x0, y0])
#print(numpy.shape(u))
#print(N)
for n in range(N-1):
u[n+1] = euler(u[n], f, dt)
n_max = n
if u[n][3] <= 0:
break
#print(numpy.shape(u))
u = u[:n_max]
#print(numpy.shape(u))
return u
u = challenge(5, 0)
print(u[-1][2])
#iterate over v0, theta0, taking large steps
#find a best solution
max_dist = 0.0
max_params = [0, 0]
for theta0 in range (-90, 90, 5):
for v0 in range (1, 10):
u = numpy.empty((1,4))
u_final = challenge(v0, theta0)
if u_final[-1][2] > max_dist:
max_params_low_res = [v0, theta0]
max_dist_low_res = u_final[-1][2]
best_run_low_res = u_final
print(max_dist_low_res)
print(max_params_low_res)
#iterate at a finer resolution over previous solution
max_dist = 0.0 #
max_params = [0, 0]
for theta0 in range (max_params_low_res[1] -5, max_params_low_res[1] +5):
for v0 in range (max_params_low_res[0] -5, max_params_low_res[0] + 5):
u = numpy.empty((1,4))
u_final = challenge(v0, theta0)
if u_final[-1][2] > max_dist:
max_params= [v0, theta0]
max_dist = u_final[-1][2]
best_run = u_final
print(max_dist)
print(max_params)
#print(u_longest)
x = best_run[:,2]
y = best_run[:,3]
pyplot.figure(figsize=(8,6))
pyplot.grid(True)
pyplot.xlabel(r'x', fontsize=18)
pyplot.ylabel(r'y', fontsize=18)
pyplot.title('Paper airplane trajectory', fontsize=18)
pyplot.plot(x,y, 'k-', lw=2);
print("max distance: {:.2f} m, v0= {:.2f}, theta0 = {:.2f}".format(max_dist, max_params[0], max_params[1]))
```
| github_jupyter |
<img src="../images/aeropython_logo.png" alt="AeroPython" style="width: 300px;"/>
# Array slicing
_So far we know how to create arrays and perform some operations with them; however, we have not yet learned how to access specific elements of an array_
## One-dimensional arrays
```
# Accessing the first element
# Accessing the last element
```
##### __Warning!__
NumPy returns __views__ of the slice we ask for, not __copies__. This means we must pay close attention to this behavior:
The same happens the other way around:
`a` points to the memory addresses where the selected elements of the array `arr` are stored; it does not copy their values, unless we explicitly do so:
## Two-dimensional arrays
## Array slices
So far we have seen how to access individual elements of the array, but NumPy's power lies in accessing whole sections. For that we use the syntax `start:stop:step`: if any of these values is omitted, it takes a default value. Let's see some examples:
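A minimal sketch of the view-versus-copy behavior:

```python
import numpy as np

arr = np.arange(6)

a = arr[2:5]  # a view: shares memory with arr
a[0] = 99     # this also modifies arr

b = arr[2:5].copy()  # an independent copy
b[1] = -1            # arr is unchanged by this
```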
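A quick sketch of the `start:stop:step` syntax on a two-dimensional array:

```python
import numpy as np

M = np.arange(36).reshape(6, 6)

rows = M[1:3]         # second and third rows (stop is exclusive)
block = M[:3, 1:5:2]  # up to the third row, columns 2 to 5 stepping by 2
every_other = M[::2]  # omitted start/stop take their defaults, step of 2
```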
```
# From the second to the third row, inclusive
# Up to the third row (exclusive), and from the second to the fifth column, stepping by two
#M[1:2:1, 1:5:2] # Equivalent
```
##### Exercise
Draw a chessboard using the `plt.matshow` function.
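One possible approach (a sketch, not the only solution): build a 0/1 array with slicing and then display it with `plt.matshow(board)`:

```python
import numpy as np

# 8x8 board of zeros, then paint the "white" squares with two slices
board = np.zeros((8, 8), dtype=int)
board[::2, 1::2] = 1  # odd columns of even rows
board[1::2, ::2] = 1  # even columns of odd rows

# To display it: plt.matshow(board, cmap='gray')
```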
---
___We have learned:___
* How to access elements of an array
* That slices return views, not copies
__Want more?__ Some links:
Some links on Pybonacci:
* [Cómo crear matrices en Python con NumPy](http://pybonacci.wordpress.com/2012/06/11/como-crear-matrices-en-python-con-numpy/).
* [Números aleatorios en Python con NumPy y SciPy](http://pybonacci.wordpress.com/2013/01/11/numeros-aleatorios-en-python-con-numpy-y-scipy/).
Some links elsewhere:
* [100 numpy exercises](http://www.labri.fr/perso/nrougier/teaching/numpy.100/index.html). You may only be able to do the first few for now, but don't worry, you'll soon know more...
* [NumPy and IPython SciPy 2013 Tutorial](http://conference.scipy.org/scipy2013/tutorial_detail.php?id=100).
* [NumPy and SciPy documentation](http://docs.scipy.org/doc/).
---
<br/>
#### <h4 align="right">Follow us on Twitter!
<br/>
###### <a href="https://twitter.com/AeroPython" class="twitter-follow-button" data-show-count="false">Follow @AeroPython</a> <script>!function(d,s,id){var js,fjs=d.getElementsByTagName(s)[0],p=/^http:/.test(d.location)?'http':'https';if(!d.getElementById(id)){js=d.createElement(s);js.id=id;js.src=p+'://platform.twitter.com/widgets.js';fjs.parentNode.insertBefore(js,fjs);}}(document, 'script', 'twitter-wjs');</script>
<br/>
###### This notebook was created by: Juan Luis Cano and Álex Sáez
<br/>
##### <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">Curso AeroPython</span> by <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Juan Luis Cano Rodriguez and Alejandro Sáez Mollejo</span> is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Creative Commons Attribution 4.0 International License</a>.
---
_The following cells contain notebook configuration_
_To display and use the Twitter links, the notebook must be run as [trusted](http://ipython.org/ipython-doc/dev/notebook/security.html)_
File > Trusted Notebook
```
# This cell styles the notebook
from IPython.core.display import HTML
css_file = '../styles/aeropython.css'
HTML(open(css_file, "r").read())
```
| github_jupyter |
# Transfer Learning Template
```
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from torch.utils.data import DataLoader
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
```
# Allowed Parameters
These are allowed parameters, not defaults
Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any is missing)
Papermill uses the cell tag "parameters" to inject the real parameters below this cell.
Enable tags to see what I mean
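Conceptually, the injection amounts to an overriding cell executed right after the tagged one; a sketch of the resulting merge semantics (the names and values below are illustrative, not from this repo):

```python
defaults = {"lr": 0.001, "seed": 1337}  # from the cell tagged "parameters"
injected = {"lr": 0.0003}               # values supplied at papermill run time

merged = {**defaults, **injected}       # injected values win over defaults
```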
```
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"n_shot",
"n_query",
"n_way",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_net",
"datasets",
"torch_default_dtype",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"x_shape",
}
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
from steves_utils.ORACLE.utils_v2 import (
ALL_DISTANCES_FEET_NARROWED,
ALL_RUNS,
ALL_SERIAL_NUMBERS,
)
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["n_way"] = 8
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 50
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "source_loss"
standalone_parameters["datasets"] = [
{
"labels": ALL_SERIAL_NUMBERS,
"domains": ALL_DISTANCES_FEET_NARROWED,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl"),
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "minus_two"],
"episode_transforms": [],
"domain_prefix": "ORACLE_"
},
{
"labels": ALL_NODES,
"domains": ALL_DAYS,
"num_examples_per_domain_per_label": 100,
"pickle_path": os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
"source_or_target_dataset": "target",
"x_transforms": ["unit_power", "times_zero"],
"episode_transforms": [],
"domain_prefix": "CORES_"
}
]
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# Parameters
parameters = {
"experiment_name": "tl_3A:cores+wisig -> oracle.run1.framed",
"device": "cuda",
"lr": 0.001,
"x_shape": [2, 200],
"n_shot": 3,
"n_query": 2,
"train_k_factor": 3,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_loss",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 200]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 16000, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"n_way": 16,
"datasets": [
{
"labels": [
"1-10.",
"1-11.",
"1-15.",
"1-16.",
"1-17.",
"1-18.",
"1-19.",
"10-4.",
"10-7.",
"11-1.",
"11-14.",
"11-17.",
"11-20.",
"11-7.",
"13-20.",
"13-8.",
"14-10.",
"14-11.",
"14-14.",
"14-7.",
"15-1.",
"15-20.",
"16-1.",
"16-16.",
"17-10.",
"17-11.",
"17-2.",
"19-1.",
"19-16.",
"19-19.",
"19-20.",
"19-3.",
"2-10.",
"2-11.",
"2-17.",
"2-18.",
"2-20.",
"2-3.",
"2-4.",
"2-5.",
"2-6.",
"2-7.",
"2-8.",
"3-13.",
"3-18.",
"3-3.",
"4-1.",
"4-10.",
"4-11.",
"4-19.",
"5-5.",
"6-15.",
"7-10.",
"7-14.",
"8-18.",
"8-20.",
"8-3.",
"8-8.",
],
"domains": [1, 2, 3, 4, 5],
"num_examples_per_domain_per_label": 100,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/cores.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "take_200"],
"episode_transforms": [],
"domain_prefix": "C_A_",
},
{
"labels": [
"1-10",
"1-12",
"1-14",
"1-16",
"1-18",
"1-19",
"1-8",
"10-11",
"10-17",
"10-4",
"10-7",
"11-1",
"11-10",
"11-19",
"11-20",
"11-4",
"11-7",
"12-19",
"12-20",
"12-7",
"13-14",
"13-18",
"13-19",
"13-20",
"13-3",
"13-7",
"14-10",
"14-11",
"14-12",
"14-13",
"14-14",
"14-19",
"14-20",
"14-7",
"14-8",
"14-9",
"15-1",
"15-19",
"15-6",
"16-1",
"16-16",
"16-19",
"16-20",
"17-10",
"17-11",
"18-1",
"18-10",
"18-11",
"18-12",
"18-13",
"18-14",
"18-15",
"18-16",
"18-17",
"18-19",
"18-2",
"18-20",
"18-4",
"18-5",
"18-7",
"18-8",
"18-9",
"19-1",
"19-10",
"19-11",
"19-12",
"19-13",
"19-14",
"19-15",
"19-19",
"19-2",
"19-20",
"19-3",
"19-4",
"19-6",
"19-7",
"19-8",
"19-9",
"2-1",
"2-13",
"2-15",
"2-3",
"2-4",
"2-5",
"2-6",
"2-7",
"2-8",
"20-1",
"20-12",
"20-14",
"20-15",
"20-16",
"20-18",
"20-19",
"20-20",
"20-3",
"20-4",
"20-5",
"20-7",
"20-8",
"3-1",
"3-13",
"3-18",
"3-2",
"3-8",
"4-1",
"4-10",
"4-11",
"5-1",
"5-5",
"6-1",
"6-15",
"6-6",
"7-10",
"7-11",
"7-12",
"7-13",
"7-14",
"7-7",
"7-8",
"7-9",
"8-1",
"8-13",
"8-14",
"8-18",
"8-20",
"8-3",
"8-8",
"9-1",
"9-7",
],
"domains": [1, 2, 3, 4],
"num_examples_per_domain_per_label": 100,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/wisig.node3-19.stratified_ds.2022A.pkl",
"source_or_target_dataset": "source",
"x_transforms": ["unit_mag", "take_200"],
"episode_transforms": [],
"domain_prefix": "W_A_",
},
{
"labels": [
"3123D52",
"3123D65",
"3123D79",
"3123D80",
"3123D54",
"3123D70",
"3123D7B",
"3123D89",
"3123D58",
"3123D76",
"3123D7D",
"3123EFE",
"3123D64",
"3123D78",
"3123D7E",
"3124E4A",
],
"domains": [32, 38, 8, 44, 14, 50, 20, 26],
"num_examples_per_domain_per_label": 2000,
"pickle_path": "/mnt/wd500GB/CSC500/csc500-main/datasets/oracle.Run1_framed_2000Examples_stratified_ds.2022A.pkl",
"source_or_target_dataset": "target",
"x_transforms": ["unit_mag", "take_200", "resample_20Msps_to_25Msps"],
"episode_transforms": [],
"domain_prefix": "ORACLE.run1_",
},
],
"seed": 500,
"dataset_seed": 500,
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if 'parameters' not in locals() and 'parameters' not in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
if "x_shape" not in p:
p.x_shape = [2,256] # Default to this if we don't supply x_shape
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
p.domains_source = []
p.domains_target = []
train_original_source = []
val_original_source = []
test_original_source = []
train_original_target = []
val_original_target = []
test_original_target = []
# global_x_transform_func = lambda x: normalize(x.to(torch.get_default_dtype()), "unit_power") # unit_power, unit_mag
# global_x_transform_func = lambda x: normalize(x, "unit_power") # unit_power, unit_mag
def add_dataset(
labels,
domains,
pickle_path,
x_transforms,
episode_transforms,
domain_prefix,
num_examples_per_domain_per_label,
source_or_target_dataset:str,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
):
if x_transforms == []: x_transform = None
else: x_transform = get_chained_transform(x_transforms)
if episode_transforms != []: raise Exception("episode_transforms not implemented")
# Prefix each episode's domain with this dataset's domain_prefix
episode_transform = lambda tup, _prefix=domain_prefix: (_prefix + str(tup[0]), tup[1])
eaf = Episodic_Accessor_Factory(
labels=labels,
domains=domains,
num_examples_per_domain_per_label=num_examples_per_domain_per_label,
iterator_seed=iterator_seed,
dataset_seed=dataset_seed,
n_shot=n_shot,
n_way=n_way,
n_query=n_query,
train_val_test_k_factors=train_val_test_k_factors,
pickle_path=pickle_path,
x_transform_func=x_transform,
)
train, val, test = eaf.get_train(), eaf.get_val(), eaf.get_test()
train = Lazy_Iterable_Wrapper(train, episode_transform)
val = Lazy_Iterable_Wrapper(val, episode_transform)
test = Lazy_Iterable_Wrapper(test, episode_transform)
if source_or_target_dataset=="source":
train_original_source.append(train)
val_original_source.append(val)
test_original_source.append(test)
p.domains_source.extend(
[domain_prefix + str(u) for u in domains]
)
elif source_or_target_dataset=="target":
train_original_target.append(train)
val_original_target.append(val)
test_original_target.append(test)
p.domains_target.extend(
[domain_prefix + str(u) for u in domains]
)
else:
raise Exception(f"invalid source_or_target_dataset: {source_or_target_dataset}")
for ds in p.datasets:
add_dataset(**ds)
# from steves_utils.CORES.utils import (
# ALL_NODES,
# ALL_NODES_MINIMUM_1000_EXAMPLES,
# ALL_DAYS
# )
# add_dataset(
# labels=ALL_NODES,
# domains = ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "cores.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"cores_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle1_{u}"
# )
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# add_dataset(
# labels=ALL_SERIAL_NUMBERS,
# domains = list(set(ALL_DISTANCES_FEET) - {2,62,56}),
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "oracle.Run2_framed_2000Examples_stratified_ds.2022A.pkl"),
# source_or_target_dataset="source",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"oracle2_{u}"
# )
# add_dataset(
# labels=list(range(19)),
# domains = [0,1,2],
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "metehan.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"met_{u}"
# )
# # from steves_utils.wisig.utils import (
# # ALL_NODES_MINIMUM_100_EXAMPLES,
# # ALL_NODES_MINIMUM_500_EXAMPLES,
# # ALL_NODES_MINIMUM_1000_EXAMPLES,
# # ALL_DAYS
# # )
# import steves_utils.wisig.utils as wisig
# add_dataset(
# labels=wisig.ALL_NODES_MINIMUM_100_EXAMPLES,
# domains = wisig.ALL_DAYS,
# num_examples_per_domain_per_label=100,
# pickle_path=os.path.join(get_datasets_base_path(), "wisig.node3-19.stratified_ds.2022A.pkl"),
# source_or_target_dataset="target",
# x_transform_func=global_x_transform_func,
# domain_modifier=lambda u: f"wisig_{u}"
# )
###################################
# Build the dataset
###################################
train_original_source = Iterable_Aggregator(train_original_source, p.seed)
val_original_source = Iterable_Aggregator(val_original_source, p.seed)
test_original_source = Iterable_Aggregator(test_original_source, p.seed)
train_original_target = Iterable_Aggregator(train_original_target, p.seed)
val_original_target = Iterable_Aggregator(val_original_target, p.seed)
test_original_target = Iterable_Aggregator(test_original_target, p.seed)
# For CNN We only use X and Y. And we only train on the source.
# Properly form the data using a transform lambda and Lazy_Iterable_Wrapper. Finally wrap them in a dataloader
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
from steves_utils.transforms import get_average_magnitude, get_average_power
print(set([u for u,_ in val_original_source]))
print(set([u for u,_ in val_original_target]))
s_x, s_y, q_x, q_y, _ = next(iter(train_processed_source))
print(s_x)
# for ds in [
# train_processed_source,
# val_processed_source,
# test_processed_source,
# train_processed_target,
# val_processed_target,
# test_processed_target
# ]:
# for s_x, s_y, q_x, q_y, _ in ds:
# for X in (s_x, q_x):
# for x in X:
# assert np.isclose(get_average_magnitude(x.numpy()), 1.0)
# assert np.isclose(get_average_power(x.numpy()), 1.0)
###################################
# Build the model
###################################
# easyfsl only wants a tuple for the shape
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=tuple(p.x_shape))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
```
# Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
```
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
```
First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
```
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
```
Let's check out the first 100 characters, make sure everything is peachy. According to the [American Book Review](http://americanbookreview.org/100bestlines.asp), this is the 6th best first line of a book ever.
```
text[:100]
```
And we can see the characters encoded as integers.
```
encoded[:100]
```
Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
```
len(vocab)
```
## Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We start with our text encoded as integers in one long array in `encoded`. Let's create a function that will give us an iterator for our batches. I like using [generator functions](https://jeffknupp.com/blog/2013/04/07/improve-your-python-yield-and-generators-explained/) to do this. Then we can pass `encoded` into this function and get our batch generator.
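For readers who haven't used generators before, here's a minimal sketch (the function name is made up for illustration): a generator function uses `yield` to hand back one value at a time and resumes where it left off on the next request.

```python
def count_up_to(n):
    """Yield the integers 0..n-1 one at a time, pausing between each."""
    i = 0
    while i < n:
        yield i  # execution suspends here until the next value is requested
        i += 1

# Consuming the generator, just like we'll consume batches below
print(list(count_up_to(3)))  # [0, 1, 2]
```

`get_batches` below works the same way: each loop iteration over it produces one `(x, y)` batch.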
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the total number of batches, $K$, that we can make from the array `arr`, divide the length of `arr` by the number of characters per batch. Once you know the number of batches, you can get the total number of characters to keep from `arr`: $N \times M \times K$.
After that, we need to split `arr` into $N$ sequences. You can do this using `arr.reshape(size)` where `size` is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (`batch_size` below), so let's make that the size of the first dimension. For the second dimension, you can use `-1` as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the $N \times (M * K)$ array. For each subsequent batch, the window moves over by `n_steps`. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character.
The way I like to do this window is use `range` to take steps of size `n_steps` from $0$ to `arr.shape[1]`, the total number of steps in each sequence. That way, the integers you get from `range` always point to the start of a batch, and each window is `n_steps` wide.
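The bookkeeping above can be sketched with a toy NumPy array (the sizes here are made up for illustration, not the real hyperparameters):

```python
import numpy as np

arr = np.arange(20)                      # stand-in for `encoded`
batch_size, n_steps = 2, 3               # N = 2, M = 3
chars_per_batch = batch_size * n_steps   # N * M = 6
n_batches = len(arr) // chars_per_batch  # K = 3
arr = arr[:n_batches * chars_per_batch]  # keep N * M * K = 18 characters
arr = arr.reshape((batch_size, -1))      # shape (N, M*K) = (2, 9)
# Each batch is an N x M window that slides over by n_steps
windows = [arr[:, n:n + n_steps] for n in range(0, arr.shape[1], n_steps)]
print(windows[0])  # [[ 0  1  2]
                   #  [ 9 10 11]]
```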
> **Exercise:** Write the code for creating batches in the function below. The exercises in this notebook _will not be easy_. I've provided a notebook with solutions alongside this notebook. If you get stuck, check out the solutions. The most important thing is that you don't copy and paste the code into here, **type out the solution code yourself.**
```
def get_batches(arr, batch_size, n_steps):
'''Create a generator that returns batches of size
batch_size x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
batch_size: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
chars_per_batch = batch_size * n_steps
n_batches = len(arr) // chars_per_batch
# Keep only enough characters to make full batches
arr = arr[:n_batches * chars_per_batch]
# Reshape into batch_size rows
arr = arr.reshape((batch_size, -1))
for n in range(0, arr.shape[1], n_steps):
# The features: all rows, columns n through n + n_steps
x = arr[:, n:(n + n_steps)]
# The targets: the same window shifted left by one;
# the last column wraps around to the first chars of x
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
yield x, y
```
Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
```
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
```
If you implemented `get_batches` correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
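One quick way to convince yourself the shift is right is to build `y` from a tiny hand-made `x` and assert the relationship directly (the numbers are arbitrary):

```python
import numpy as np

x = np.array([[55, 63, 69, 22],
              [ 5, 69,  1,  5]])
y = np.zeros_like(x)
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]

# Each row of y is x shifted left by one, with the first character wrapped to the end
assert np.array_equal(y[:, :-1], x[:, 1:])
assert np.array_equal(y[:, -1], x[:, 0])
print(y)  # [[63 69 22 55]
          #  [69  1  5  5]]
```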
## Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
### Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called `keep_prob`. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
> **Exercise:** Create the input placeholders in the function below.
```
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs = tf.placeholder(tf.int32, shape=[batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, shape=[batch_size, num_steps], name='targets')
# Keep probability placeholder for drop out layers
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
return inputs, targets, keep_prob
```
### LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
```python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```
where `num_units` is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
```python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
```
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with [`tf.contrib.rnn.MultiRNNCell`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/contrib/rnn/MultiRNNCell). With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
```python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
```
This might look a little weird if you know Python well because this will create a list of the same `cell` object. However, TensorFlow 1.0 will create different weight matrices for all `cell` objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
```python
initial_state = cell.zero_state(batch_size, tf.float32)
```
Below, we implement the `build_lstm` function to create these LSTM cells and the initial state.
```
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
def build_cell():
# Use a basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell outputs
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)])
initial_state = cell.zero_state(batch_size, tf.float32)
return cell, initial_state
```
### RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, `lstm_output`. First we need to concatenate this whole list into one array with [`tf.concat`](https://www.tensorflow.org/api_docs/python/tf/concat). Then, reshape it (with `tf.reshape`) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with `tf.variable_scope(scope_name)` because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
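The reshape from $N \times M \times L$ down to $(M * N) \times L$ can be sketched in NumPy (toy sizes, for illustration only):

```python
import numpy as np

N, M, L = 2, 3, 4                                    # sequences, steps, hidden units
lstm_output = np.arange(N * M * L).reshape(N, M, L)  # fake RNN output
flat = lstm_output.reshape(-1, L)                    # one row per (sequence, step) pair
assert flat.shape == (N * M, L)
print(flat.shape)  # (6, 4)
```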
> **Exercise:** Implement the output layer in the function below.
```
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output = tf.concat(lstm_output, axis=1)
# Reshape seq_output to a 2D tensor with lstm_size columns
x = tf.reshape(seq_output, [-1, in_size])
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
softmax_b = tf.Variable(tf.zeros(out_size))
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits = tf.matmul(x, softmax_w) + softmax_b
# Use softmax to get the probabilities for predicted characters
out = tf.nn.softmax(logits, name='predictions')
return out, logits
```
### Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(M*N) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(M*N) \times C$.
Then we run the logits and targets through `tf.nn.softmax_cross_entropy_with_logits` and find the mean to get the loss.
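The target preparation can be sketched in NumPy (`np.eye` indexing stands in for `tf.one_hot`; toy values only):

```python
import numpy as np

num_classes = 3                     # C
targets = np.array([[1, 0],
                    [2, 1]])        # shape (N, M), encoded characters
y_one_hot = np.eye(num_classes)[targets]         # shape (N, M, C)
y_reshaped = y_one_hot.reshape(-1, num_classes)  # shape (N*M, C), matches logits
print(y_reshaped.shape)  # (4, 3)
```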
>**Exercise:** Implement the loss calculation in the function below.
```
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per batch_size per step
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
# Softmax cross entropy loss
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
return loss
```
### Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and vanishing. LSTMs fix the vanishing problem, but the gradients can still grow without bound. To fix this, we clip the gradients: if the global norm of all the gradients exceeds a threshold, every gradient is scaled down by the same factor so the global norm equals the threshold. This ensures the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
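As a sketch of what `tf.clip_by_global_norm` computes (a NumPy approximation, not the TF implementation): the combined L2 norm of all gradients is measured, and if it exceeds the threshold, every gradient is rescaled by the same factor.

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    """Rescale all gradients so their combined L2 norm is at most clip_norm."""
    global_norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if global_norm > clip_norm:
        grads = [g * (clip_norm / global_norm) for g in grads]
    return grads, global_norm

grads = [np.array([3.0, 4.0])]                 # global norm = 5
clipped, norm = clip_by_global_norm(grads, 1.0)
print(norm, clipped[0])  # 5.0 [0.6 0.8]
```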
```
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
grad_clip: Threshold for clipping the global gradient norm
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
```
### Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use [`tf.nn.dynamic_rnn`](https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/nn/dynamic_rnn). This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as `final_state` so we can pass it to the first LSTM cell in the next mini-batch run. For `tf.nn.dynamic_rnn`, we pass in the cell and initial state we get from `build_lstm`, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
> **Exercise:** Use the functions you've implemented previously and `tf.nn.dynamic_rnn` to build the network.
```
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# Build the LSTM cell
cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot = tf.one_hot(self.inputs, num_classes)
# Run each sequence step through the RNN and collect the outputs
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# Loss and optimizer (with gradient clipping)
self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)
```
## Hyperparameters
Here are the hyperparameters for the network.
* `batch_size` - Number of sequences running through the network in one pass.
* `num_steps` - Number of characters in the sequence the network is trained on. Typically larger is better; the network will learn more long-range dependencies, but it takes longer to train. 100 is usually a good number here.
* `lstm_size` - The number of units in the hidden layers.
* `num_layers` - Number of hidden LSTM layers to use
* `learning_rate` - Learning rate for training
* `keep_prob` - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to [where it originally came from](https://github.com/karpathy/char-rnn#tips-and-tricks).
> ## Tips and Tricks
>### Monitoring Validation Loss vs. Training Loss
>If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
> - If your training loss is much lower than validation loss then this means the network might be **overfitting**. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
> - If your training/validation loss are about equal then your model is **underfitting**. Increase the size of your model (either number of layers or the raw number of neurons per layer)
> ### Approximate number of parameters
> The two most important parameters that control the model are `lstm_size` and `num_layers`. I would advise that you always use `num_layers` of either 2/3. The `lstm_size` can be adjusted based on how much data you have. The two important quantities to keep track of here are:
> - The number of parameters in your model. This is printed when you start training.
> - The size of your dataset. 1MB file is approximately 1 million characters.
>These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
> - I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make `lstm_size` larger.
> - I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
> ### Best models strategy
>The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0 and 1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
>It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
>By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
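If your checkpoint filenames encode the validation loss (as in the original `char-rnn` convention; the exact filename format below is a hypothetical example), picking the best model afterwards can be sketched as:

```
import re

def best_checkpoint(filenames):
    """Pick the checkpoint with the lowest validation loss in its name.

    Assumes names like 'lm_epoch20.0_1.41.t7', where the last decimal
    number before the extension is the validation loss.
    """
    def loss_of(name):
        nums = re.findall(r"(\d+\.\d+)", name)
        return float(nums[-1])  # last decimal number = validation loss
    return min(filenames, key=loss_of)

ckpts = ["lm_epoch10.0_1.52.t7", "lm_epoch20.0_1.41.t7", "lm_epoch30.0_1.44.t7"]
print(best_checkpoint(ckpts))  # the epoch-20 file has the lowest loss
```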
```
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 512 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
```
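Given the settings above, a rough parameter count can be estimated from the standard LSTM gate formula. This is a sketch that assumes one-hot inputs of size `vocab_size` (83 is used below purely for illustration); the exact count depends on the implementation.

```
def approx_char_rnn_params(vocab_size, lstm_size, num_layers):
    """Rough parameter count for a character-level LSTM."""
    total = 0
    input_dim = vocab_size  # first layer sees one-hot characters
    for _ in range(num_layers):
        # each LSTM layer has 4 gates, each with weights and a bias
        total += 4 * ((input_dim + lstm_size) * lstm_size + lstm_size)
        input_dim = lstm_size  # deeper layers see the previous hidden state
    # output projection back to the vocabulary
    total += lstm_size * vocab_size + vocab_size
    return total

print(approx_char_rnn_params(vocab_size=83, lstm_size=512, num_layers=2))
# roughly 3.4 million parameters for these settings
```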
## Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I save a checkpoint.
Here I'm saving checkpoints with the format
`i{iteration number}_l{# hidden layer units}.ckpt`
> **Exercise:** Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
```
epochs = 20
# Print losses every N iterations
print_every_n = 50
# Save every N iterations
save_every_n = 200

model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
                lstm_size=lstm_size, num_layers=num_layers,
                learning_rate=learning_rate)

saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Use the line below to load a checkpoint and resume training
    #saver.restore(sess, 'checkpoints/______.ckpt')
    counter = 0
    for e in range(epochs):
        # Train network
        new_state = sess.run(model.initial_state)
        loss = 0
        for x, y in get_batches(encoded, batch_size, num_steps):
            counter += 1
            start = time.time()
            feed = {model.inputs: x,
                    model.targets: y,
                    model.keep_prob: keep_prob,
                    model.initial_state: new_state}
            batch_loss, new_state, _ = sess.run([model.loss,
                                                 model.final_state,
                                                 model.optimizer],
                                                feed_dict=feed)
            if (counter % print_every_n == 0):
                end = time.time()
                print('Epoch: {}/{}... '.format(e+1, epochs),
                      'Training Step: {}... '.format(counter),
                      'Training loss: {:.4f}... '.format(batch_loss),
                      '{:.4f} sec/batch'.format((end-start)))
            if (counter % save_every_n == 0):
                saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))

    # save a final checkpoint after the last batch
    saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
```
#### Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
```
tf.train.get_checkpoint_state('checkpoints')
```
## Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, and the network predicts the next character. We then feed that prediction back in to predict the one after it, and keep going to generate arbitrarily long text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
```
def pick_top_n(preds, vocab_size, top_n=5):
    p = np.squeeze(preds)
    p[np.argsort(p)[:-top_n]] = 0  # zero out everything but the top N
    p = p / np.sum(p)              # renormalize into a distribution
    c = np.random.choice(vocab_size, 1, p=p)[0]
    return c

def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
    samples = [c for c in prime]
    model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint)
        new_state = sess.run(model.initial_state)
        # build up state by feeding in the prime text
        for c in prime:
            x = np.zeros((1, 1))
            x[0,0] = vocab_to_int[c]
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)

        c = pick_top_n(preds, len(vocab))
        samples.append(int_to_vocab[c])

        # generate new characters one at a time
        for i in range(n_samples):
            x[0,0] = c
            feed = {model.inputs: x,
                    model.keep_prob: 1.,
                    model.initial_state: new_state}
            preds, new_state = sess.run([model.prediction, model.final_state],
                                        feed_dict=feed)
            c = pick_top_n(preds, len(vocab))
            samples.append(int_to_vocab[c])

    return ''.join(samples)
```
Here, pass in the path to a checkpoint and sample from the network.
```
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
```
import pandas as pd
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import *
from matplotlib import pyplot as plt
from matplotlib import rc
import numpy as np
from sklearn.cluster import KMeans
import seaborn as sns
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.cluster import hierarchy
xl1 = pd.ExcelFile('1.xlsx')
xl2 = pd.ExcelFile('2.xlsx')  # !!! change the file name and the worksheet name below
xl1  # this only displays the ExcelFile object; the table contents are not shown here
xl2
xl1.sheet_names
xl2.sheet_names
df = xl1.parse('Arkusz1')  # load this sheet into a DataFrame
df.columns
df1 = xl2.parse('Arkusz2')  # load this sheet into a DataFrame
df1.columns
# !!! list the numeric (int, float) columns to cluster on
col1=['połoczenia inhibitor betalaktamazy/ penicylina', 'cefalosporyny 3 generacji','karbapenemy','aminoglikozydy','fluorochinolony', 'sulfonamidy', 'tetracykliny', 'biofilm RT','biofilm 37C', 'swimming RT', 'swimming 37C', 'swarming RT',
      'swarming 37C']
col2=['biofilm RT', 'biofilm 37C', 'swimming RT',
      'swimming 37C', 'swarming RT', 'swarming 37C', 'AMC', 'TZP',
      'CXM', 'CTX', 'CAZ', 'FEP', 'IPM', 'MEM', 'ETP', 'AMK', 'CN', 'CIP',
      'SXT', 'TGC', 'FOX']
pd.options.mode.chained_assignment = None
df[col1] = df[col1].fillna(0)  # replace missing values with zeros, otherwise later steps fail
# (note: fillna(..., inplace=True) on a column subset would not modify df itself)
df1[col2] = df1[col2].fillna(0)  # replace missing values with zeros, otherwise later steps fail
df[col1].corr()  # look at the pairwise correlations
df1[col2].corr()  # look at the pairwise correlations
# load the data preprocessing library;
# MinMaxScaler automatically rescales each column to the [0, 1] range
from sklearn import preprocessing
dataNorm1 = preprocessing.MinMaxScaler().fit_transform(df[col1].values)
dataNorm2 = preprocessing.MinMaxScaler().fit_transform(df1[col2].values)
# Compute the distances between every pair of rows of the
# normalized data (Euclidean distance by default)
data_dist1 = pdist(dataNorm1, 'euclidean')
data_dist2 = pdist(dataNorm2, 'euclidean')
# Main hierarchical clustering step: merge elements into clusters
# and store the linkage matrix (used below for visualization
# and for choosing the number of clusters)
data_linkage1 = linkage(data_dist1, method='average')
data_linkage2 = linkage(data_dist2, method='average')
# Elbow method: helps estimate the optimal number of clusters
# by looking at the within-group variation
last = data_linkage1[-10:, 2]
last_rev = last[::-1]
idxs = np.arange(1, len(last) + 1)
plt.plot(idxs, last_rev)
acceleration = np.diff(last, 2)
acceleration_rev = acceleration[::-1]
plt.plot(idxs[:-2] + 1, acceleration_rev)
plt.show()
k = acceleration_rev.argmax() + 2
print("Recommended number of clusters:", k)
last = data_linkage2[-10:, 2]
last_rev = last[::-1]
idxs = np.arange(1, len(last) + 1)
plt.plot(idxs, last_rev)
acceleration = np.diff(last, 2)
acceleration_rev = acceleration[::-1]
plt.plot(idxs[:-2] + 1, acceleration_rev)
plt.show()
k = acceleration_rev.argmax() + 2
print("Recommended number of clusters:", k)
# dendrogram plotting helper
def fancy_dendrogram(*args, **kwargs):
    max_d = kwargs.pop('max_d', None)
    if max_d and 'color_threshold' not in kwargs:
        kwargs['color_threshold'] = max_d
    annotate_above = kwargs.pop('annotate_above', 0)
    ddata = dendrogram(*args, **kwargs)
    if not kwargs.get('no_plot', False):
        plt.title('Hierarchical Clustering Dendrogram (truncated)')
        plt.xlabel('sample id')
        plt.ylabel('distance')
        for i, d, c in zip(ddata['icoord'], ddata['dcoord'], ddata['color_list']):
            x = 0.5 * sum(i[1:3])
            y = d[1]
            if y > annotate_above:
                plt.plot(x, y, 'o', c=c)
                plt.annotate("%.3g" % y, (x, y), xytext=(0, -5),
                             textcoords='offset points',
                             va='top', ha='center')
        if max_d:
            plt.axhline(y=max_d, c='k')
    return ddata
# !!! set how many clusters to use
nClust1=34
nClust2=34
df.info()
df.describe()
fancy_dendrogram(
data_linkage1,
truncate_mode='level',
p=nClust1,
leaf_rotation=90.,
leaf_font_size=8.,
show_contracted=True,
annotate_above=100,
)
plt.savefig("wykres1.png",dpi = 300)
plt.show()
# plot the dendrogram
fancy_dendrogram(
data_linkage2,
truncate_mode='level',
p=nClust2,
leaf_rotation=90.,
leaf_font_size=8.,
show_contracted=True,
annotate_above=100,
)
plt.savefig("wykres2.png",dpi = 300)
plt.show()
# hierarchical clustering
clusters=fcluster(data_linkage1, nClust1, criterion='maxclust')
clusters
clusters=fcluster(data_linkage2, nClust2, criterion='maxclust')
clusters
df[df['I']==33]  # !!! change the cluster number here
# clustering with the KMeans method
km = KMeans(n_clusters=nClust1).fit(dataNorm1)
# print the resulting cluster assignment for each row;
# labels are zero-based, so add 1 when printing
km.labels_ + 1

x=0  # to plot the chart along different axes, change these column numbers
y=2
centroids = km.cluster_centers_
plt.figure(figsize=(10, 8))
plt.scatter(dataNorm1[:,x], dataNorm1[:,y], c=km.labels_, cmap='flag')
plt.scatter(centroids[:, x], centroids[:, y], marker='*', s=300,
c='r', label='centroid')
plt.xlabel(col1[x])
plt.ylabel(col1[y]);
plt.show()
# save the results to a file
df.to_excel('result_claster.xlsx', index=False)
sns.clustermap(df, metric="correlation", method="single", cmap="Blues", standard_scale=1)
plt.show()
sns.clustermap(df1, metric="correlation", figsize=(22, 22), method="single", cmap="Blues", standard_scale=1)
plt.savefig("clustmap2.png", dpi = 300)
plt.show()
```
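For reference, the same pipeline used above (scaling, then pairwise distances, then average linkage, then cutting into k clusters) can be sketched end-to-end on synthetic data; the two generated groups below are purely illustrative, and the min-max scaling is done by hand to keep the example self-contained:

```
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.RandomState(0)
data = np.vstack([rng.normal(0, 0.1, (5, 3)),   # one tight group near 0
                  rng.normal(5, 0.1, (5, 3))])  # another tight group near 5

# manual min-max scaling (equivalent in effect to MinMaxScaler)
norm = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))
dist = pdist(norm, 'euclidean')           # pairwise Euclidean distances
Z = linkage(dist, method='average')       # average-linkage hierarchy
labels = fcluster(Z, 2, criterion='maxclust')  # cut into 2 clusters
print(labels)  # the first five rows share one label, the last five the other
```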
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W1D1_ModelTypes/W1D1_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 1, Day 1, Tutorial 2
# Model Types: "How" models
__Content creators:__ Matt Laporte, Byron Galbraith, Konrad Kording
__Content reviewers:__ Dalin Guo, Aishwarya Balwani, Madineh Sarvestani, Maryam Vaziri-Pashkam, Michael Waskom
___
# Tutorial Objectives
This is tutorial 2 of a 3-part series on different flavors of models used to understand neural data. In this tutorial we will explore models that can potentially explain *how* the spiking data we have observed is produced.
To understand the mechanisms that give rise to the neural data we saw in Tutorial 1, we will build simple neuronal models and compare their spiking response to real data. We will:
- Write code to simulate a simple "leaky integrate-and-fire" neuron model
- Make the model more complicated — but also more realistic — by adding more physiologically-inspired details
```
#@title Video 1: "How" models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='PpnagITsb3E', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
# Setup
```
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
#@title Figure Settings
import ipywidgets as widgets #interactive display
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
#@title Helper Functions
def histogram(counts, bins, vlines=(), ax=None, ax_args=None, **kwargs):
    """Plot a step histogram given counts over bins."""
    if ax is None:
        _, ax = plt.subplots()

    # duplicate the first element of `counts` to match bin edges
    counts = np.insert(counts, 0, counts[0])

    ax.fill_between(bins, counts, step="pre", alpha=0.4, **kwargs)  # area shading
    ax.plot(bins, counts, drawstyle="steps", **kwargs)  # lines
    for x in vlines:
        ax.axvline(x, color='r', linestyle='dotted')  # vertical line

    if ax_args is None:
        ax_args = {}

    # heuristically set max y to leave a bit of room
    ymin, ymax = ax_args.get('ylim', [None, None])
    if ymax is None:
        ymax = np.max(counts)
        if ax_args.get('yscale', 'linear') == 'log':
            ymax *= 1.5
        else:
            ymax *= 1.1
    if ymin is None:
        ymin = 0
    if ymax == ymin:
        ymax = None
    ax_args['ylim'] = [ymin, ymax]

    ax.set(**ax_args)
    ax.autoscale(enable=False, axis='x', tight=True)

def plot_neuron_stats(v, spike_times):
    fig, (ax1, ax2) = plt.subplots(ncols=2, figsize=(12, 5))

    # membrane voltage trace
    ax1.plot(v[0:100])
    ax1.set(xlabel='Time', ylabel='Voltage')
    # plot spike events
    for x in spike_times:
        if x >= 100:
            break
        ax1.axvline(x, color='red')

    # ISI distribution
    if len(spike_times) > 1:
        isi = np.diff(spike_times)
        n_bins = np.arange(isi.min(), isi.max() + 2) - .5
        counts, bins = np.histogram(isi, n_bins)
        vlines = []
        if len(isi) > 0:
            vlines = [np.mean(isi)]
        xmax = max(20, int(bins[-1]) + 5)
        histogram(counts, bins, vlines=vlines, ax=ax2, ax_args={
            'xlabel': 'Inter-spike interval',
            'ylabel': 'Number of intervals',
            'xlim': [0, xmax]})
    else:
        ax2.set(xlabel='Inter-spike interval',
                ylabel='Number of intervals')
    plt.show()
```
# Section 1: The Linear Integrate-and-Fire Neuron
How does a neuron spike?
A neuron charges and discharges an electric field across its cell membrane. The state of this electric field can be described by the _membrane potential_. The membrane potential rises due to excitation of the neuron, and when it reaches a threshold a spike occurs. The potential resets, and must rise to a threshold again before the next spike occurs.
One of the simplest models of spiking neuron behavior is the linear integrate-and-fire model neuron. In this model, the neuron increases its membrane potential $V_m$ over time in response to excitatory input currents $I$ scaled by some factor $\alpha$:
\begin{align}
dV_m = {\alpha}I
\end{align}
Once $V_m$ reaches a threshold value a spike is produced, $V_m$ is reset to a starting value, and the process continues.
Here, we will take the starting and threshold potentials as $0$ and $1$, respectively. So, for example, if $\alpha I=0.1$ is constant---that is, the input current is constant---then $dV_m=0.1$, and at each timestep the membrane potential $V_m$ increases by $0.1$ until after $(1-0)/0.1 = 10$ timesteps it reaches the threshold and resets to $V_m=0$, and so on.
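The constant-input arithmetic above can be checked with a tiny sketch (pure Python; a small tolerance is used in the threshold test because repeatedly adding 0.1 accumulates floating-point error):

```
def constant_input_if(n_steps, dv=0.1, threshold=1.0):
    """Integrate-and-fire with a constant input: return the spike times."""
    v, spikes = 0.0, []
    for t in range(1, n_steps + 1):
        v += dv
        if v >= threshold - 1e-9:  # tolerance for float accumulation
            spikes.append(t)
            v = 0.0                # reset after the spike
    return spikes

print(constant_input_if(30))  # [10, 20, 30]: one spike every 10 timesteps
```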
Note that we define the membrane potential $V_m$ as a scalar: a single real (or floating point) number. However, a biological neuron's membrane potential will not be exactly constant at all points on its cell membrane at a given time. We could capture this variation with a more complex model (e.g. with more numbers). Do we need to?
The proposed model is a 1D simplification. There are many details we could add to it, to preserve different parts of the complex structure and dynamics of a real neuron. If we were interested in small or local changes in the membrane potential, our 1D simplification could be a problem. However, we'll assume an idealized "point" neuron model for our current purpose.
#### Spiking Inputs
Given our simplified model for the neuron dynamics, we still need to consider what form the input $I$ will take. How should we specify the firing behavior of the presynaptic neuron(s) providing the inputs to our model neuron?
Unlike in the simple example above, where $\alpha I=0.1$, the input current is generally not constant. Physical inputs tend to vary with time. We can describe this variation with a distribution.
We'll assume the input current $I$ over a timestep is due to equal contributions from a non-negative ($\ge 0$) integer number of input spikes arriving in that timestep. Our model neuron might integrate currents from 3 input spikes in one timestep, and 7 spikes in the next timestep. We should see similar behavior when sampling from our distribution.
Given no other information about the input neurons, we will also assume that the distribution has a mean (i.e. mean rate, or number of spikes received per timestep), and that the spiking events of the input neuron(s) are independent in time. Are these reasonable assumptions in the context of real neurons?
A suitable distribution given these assumptions is the Poisson distribution, which we'll use to model $I$:
\begin{align}
I \sim \mathrm{Poisson}(\lambda)
\end{align}
where $\lambda$ is the mean of the distribution: the average rate of spikes received per timestep.
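A quick numerical check of this input model with `scipy.stats.poisson` (the same sampler used in the exercises below): samples are non-negative integers, and their average approaches $\lambda$.

```
import numpy as np
from scipy import stats

np.random.seed(0)
samples = stats.poisson(10).rvs(10000)  # 10,000 timesteps of input spike counts

print(samples.min() >= 0)              # counts are never negative
print(abs(samples.mean() - 10) < 0.2)  # sample mean is close to lambda = 10
```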
### Exercise 1: Compute $dV_m$
For your first exercise, you will write the code to compute the change in voltage $dV_m$ (per timestep) of the linear integrate-and-fire model neuron. The rest of the code to handle numerical integration is provided for you, so you just need to fill in a definition for `dv` in the `lif_neuron` function below. The value of $\lambda$ for the Poisson random variable is given by the function argument `rate`.
The [`scipy.stats`](https://docs.scipy.org/doc/scipy/reference/stats.html) package is a great resource for working with and sampling from various probability distributions. We will use the `scipy.stats.poisson` class and its method `rvs` to produce Poisson-distributed random samples. In this tutorial, we have imported this package with the alias `stats`, so you should refer to it in your code as `stats.poisson`.
```
def lif_neuron(n_steps=1000, alpha=0.01, rate=10):
    """ Simulate a linear integrate-and-fire neuron.

    Args:
        n_steps (int): The number of time steps to simulate the neuron's activity.
        alpha (float): The input scaling factor
        rate (int): The mean rate of incoming spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(rate).rvs(n_steps)

    v = np.zeros(n_steps)
    spike_times = []
    ################################################################################
    # Students: compute dv, then comment out or remove the next line
    raise NotImplementedError("Exercise: compute the change in membrane potential")
    ################################################################################
    for i in range(1, n_steps):
        dv = ...
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0
    return v, spike_times

# Set random seed (for reproducibility)
np.random.seed(12)

# Uncomment these lines after completing the lif_neuron function
# v, spike_times = lif_neuron()
# plot_neuron_stats(v, spike_times)

# to_remove solution
def lif_neuron(n_steps=1000, alpha=0.01, rate=10):
    """ Simulate a linear integrate-and-fire neuron.

    Args:
        n_steps (int): The number of time steps to simulate the neuron's activity.
        alpha (float): The input scaling factor
        rate (int): The mean rate of incoming spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(rate).rvs(n_steps)

    v = np.zeros(n_steps)
    spike_times = []
    for i in range(1, n_steps):
        dv = alpha * exc[i]
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0
    return v, spike_times

# Set random seed (for reproducibility)
np.random.seed(12)
v, spike_times = lif_neuron()
with plt.xkcd():
    plot_neuron_stats(v, spike_times)
```
## Interactive Demo: Linear-IF neuron
Like last time, you can now explore how various parameters of the LIF model influence the ISI distribution.
```
#@title
#@markdown You don't need to worry about how the code works – but you do need to **run the cell** to enable the sliders.
def _lif_neuron(n_steps=1000, alpha=0.01, rate=10):
    exc = stats.poisson(rate).rvs(n_steps)
    v = np.zeros(n_steps)
    spike_times = []
    for i in range(1, n_steps):
        dv = alpha * exc[i]
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0
    return v, spike_times

@widgets.interact(
    n_steps=widgets.FloatLogSlider(1000.0, min=2, max=4),
    alpha=widgets.FloatLogSlider(0.01, min=-2, max=-1),
    rate=widgets.IntSlider(10, min=5, max=20)
)
def plot_lif_neuron(n_steps=1000, alpha=0.01, rate=10):
    v, spike_times = _lif_neuron(int(n_steps), alpha, rate)
    plot_neuron_stats(v, spike_times)
#@title Video 2: Linear-IF models
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='QBD7kulhg4U', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
# Section 2: Inhibitory signals
Our linear integrate-and-fire neuron from the previous section was indeed able to produce spikes. However, our ISI histogram doesn't look much like empirical ISI histograms seen in Tutorial 1, which had an exponential-like shape. What is our model neuron missing, given that it doesn't behave like a real neuron?
In the previous model we only considered excitatory behavior -- the only way the membrane potential could decrease was upon a spike event. We know, however, that there are other factors that can drive $V_m$ down. First is the natural tendency of the neuron to return to some steady state or resting potential. We can update our previous model as follows:
\begin{align}
dV_m = -{\beta}V_m + {\alpha}I
\end{align}
where $V_m$ is the current membrane potential and $\beta$ is some leakage factor. This is a basic form of the popular Leaky Integrate-and-Fire model neuron (for a more detailed discussion of the LIF Neuron, see the Appendix).
We also know that in addition to excitatory presynaptic neurons, we can have inhibitory presynaptic neurons as well. We can model these inhibitory neurons with another Poisson random variable:
\begin{align}
I = I_{exc} - I_{inh} \\
I_{exc} \sim \mathrm{Poisson}(\lambda_{exc}) \\
I_{inh} \sim \mathrm{Poisson}(\lambda_{inh})
\end{align}
where $\lambda_{exc}$ and $\lambda_{inh}$ are the average spike rates (per timestep) of the excitatory and inhibitory presynaptic neurons, respectively.
### Exercise 2: Compute $dV_m$ with inhibitory signals
For your second exercise, you will again write the code to compute the change in voltage $dV_m$, though now of the LIF model neuron described above. Like last time, the rest of the code needed to handle the neuron dynamics are provided for you, so you just need to fill in a definition for `dv` below.
```
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
    """ Simulate a simplified leaky integrate-and-fire neuron with both excitatory
    and inhibitory inputs.

    Args:
        n_steps (int): The number of time steps to simulate the neuron's activity.
        alpha (float): The input scaling factor
        beta (float): The membrane potential leakage factor
        exc_rate (int): The mean rate of the incoming excitatory spikes
        inh_rate (int): The mean rate of the incoming inhibitory spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(exc_rate).rvs(n_steps)
    inh = stats.poisson(inh_rate).rvs(n_steps)

    v = np.zeros(n_steps)
    spike_times = []
    ################################################################################
    # Students: compute dv, then comment out or remove the next line
    raise NotImplementedError("Exercise: compute the change in membrane potential")
    ################################################################################
    for i in range(1, n_steps):
        dv = ...
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0
    return v, spike_times

# Set random seed (for reproducibility)
np.random.seed(12)

# Uncomment these lines to make the plot once you've completed the function
#v, spike_times = lif_neuron_inh()
#plot_neuron_stats(v, spike_times)

# to_remove solution
def lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
    """ Simulate a simplified leaky integrate-and-fire neuron with both excitatory
    and inhibitory inputs.

    Args:
        n_steps (int): The number of time steps to simulate the neuron's activity.
        alpha (float): The input scaling factor
        beta (float): The membrane potential leakage factor
        exc_rate (int): The mean rate of the incoming excitatory spikes
        inh_rate (int): The mean rate of the incoming inhibitory spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(exc_rate).rvs(n_steps)
    inh = stats.poisson(inh_rate).rvs(n_steps)

    v = np.zeros(n_steps)
    spike_times = []
    for i in range(1, n_steps):
        dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0
    return v, spike_times

# Set random seed (for reproducibility)
np.random.seed(12)
v, spike_times = lif_neuron_inh()
with plt.xkcd():
    plot_neuron_stats(v, spike_times)
```
## Interactive Demo: LIF + inhibition neuron
```
#@title
#@markdown **Run the cell** to enable the sliders.
def _lif_neuron_inh(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
    """ Simulate a simplified leaky integrate-and-fire neuron with both excitatory
    and inhibitory inputs.

    Args:
        n_steps (int): The number of time steps to simulate the neuron's activity.
        alpha (float): The input scaling factor
        beta (float): The membrane potential leakage factor
        exc_rate (int): The mean rate of the incoming excitatory spikes
        inh_rate (int): The mean rate of the incoming inhibitory spikes
    """
    # precompute Poisson samples for speed
    exc = stats.poisson(exc_rate).rvs(n_steps)
    inh = stats.poisson(inh_rate).rvs(n_steps)

    v = np.zeros(n_steps)
    spike_times = []
    for i in range(1, n_steps):
        dv = -beta * v[i-1] + alpha * (exc[i] - inh[i])
        v[i] = v[i-1] + dv
        if v[i] > 1:
            spike_times.append(i)
            v[i] = 0
    return v, spike_times

@widgets.interact(n_steps=widgets.FloatLogSlider(1000.0, min=2.5, max=4),
                  alpha=widgets.FloatLogSlider(0.5, min=-1, max=1),
                  beta=widgets.FloatLogSlider(0.1, min=-1, max=0),
                  exc_rate=widgets.IntSlider(12, min=10, max=20),
                  inh_rate=widgets.IntSlider(12, min=10, max=20))
def plot_lif_neuron(n_steps=1000, alpha=0.5, beta=0.1, exc_rate=10, inh_rate=10):
    v, spike_times = _lif_neuron_inh(int(n_steps), alpha, beta, exc_rate, inh_rate)
    plot_neuron_stats(v, spike_times)
#@title Video 3: LIF + inhibition
from IPython.display import YouTubeVideo
video = YouTubeVideo(id='Aq7JrxRkn2w', width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
# Summary
In this tutorial we gained some intuition for the mechanisms that produce the observed behavior in our real neural data. First, we built a simple neuron model with excitatory input and saw that its behavior, measured using the ISI distribution, did not match our real neurons. We then improved our model by adding leakiness and inhibitory input. The behavior of this balanced model was much closer to the real neural data.
# Bonus
### Why do neurons spike?
A neuron stores energy in an electric field across its cell membrane, by controlling the distribution of charges (ions) on either side of the membrane. This energy is rapidly discharged to generate a spike when the field potential (or membrane potential) crosses a threshold. The membrane potential may be driven toward or away from this threshold, depending on inputs from other neurons: excitatory or inhibitory, respectively. The membrane potential tends to revert to a resting potential, for example due to the leakage of ions across the membrane, so that reaching the spiking threshold depends not only on the amount of input ever received following the last spike, but also the timing of the inputs.
The storage of energy by maintaining a field potential across an insulating membrane can be modeled by a capacitor. The leakage of charge across the membrane can be modeled by a resistor. This is the basis for the leaky integrate-and-fire neuron model.
### The LIF Model Neuron
The full equation for the LIF neuron is
\begin{align}
C_{m}\frac{dV_m}{dt} = -(V_m - V_{rest})/R_{m} + I
\end{align}
where $C_m$ is the membrane capacitance, $R_m$ is the membrane resistance, $V_{rest}$ is the resting potential, and $I$ is some input current (from other neurons, an electrode, ...).
In our above examples we set many of these parameters to convenient values ($C_m = R_m = dt = 1$, $V_{rest} = 0$) to focus more on the general behavior of the model. However, these too can be manipulated to achieve different dynamics, or to ensure the dimensions of the problem are preserved between simulation units and experimental units (e.g. with $V_m$ given in millivolts, $R_m$ in megaohms, $t$ in milliseconds).
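As a sketch, one forward-Euler step of the full equation looks like this (the parameter values are illustrative defaults); setting $C_m = R_m = dt = 1$ and $V_{rest} = 0$ recovers the simplified update $dV_m = -\beta V_m + \alpha I$ used earlier, with $\beta = 1$:

```
def lif_step(v, dt=1.0, c_m=1.0, r_m=1.0, v_rest=0.0, i=0.1):
    """One Euler step of C_m dV/dt = -(V - V_rest)/R_m + I."""
    return v + (dt / c_m) * (-(v - v_rest) / r_m + i)

# with the convenient values above, starting at rest with input 0.1,
# the potential simply rises by 0.1 in one step
print(lif_step(0.0))  # 0.1
```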
## A quick Gender Recognition model
Grabbed from [nlpforhackers](https://nlpforhackers.io/introduction-machine-learning/) webpage.
1. Firstly convert the dataset into a numpy array to keep only gender and names
2. Set the feature parameters which takes in different parameters
3. Vectorize the features function
4. Shuffle the data, split it into train and test sets, and sanity-check the split by printing the sizes of the pieces
5. Transform lists of feature-value mappings to vectors. (When feature values are strings, this transformer will do a binary one-hot (aka one-of-K) coding: one boolean-valued feature is constructed for each of the possible string values that the feature can take on)
6. Train a decision tree classifier on this and save the model as a pickle file
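The steps above can be sketched end-to-end on a tiny made-up name list (the names, labels, and reduced feature set here are purely illustrative, not from the real dataset):

```
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

def toy_features(name):
    name = name.lower()
    return {'first-letter': name[0],
            'last-letter': name[-1],
            'last2-letters': name[-2:]}

names = ['Anna', 'Hannah', 'Maria', 'John', 'Paul', 'Peter']
genders = ['F', 'F', 'F', 'M', 'M', 'M']

vectorizer = DictVectorizer()  # one-hot encodes the string feature values
X = vectorizer.fit_transform([toy_features(n) for n in names])
clf = DecisionTreeClassifier().fit(X, genders)
print(clf.score(X, genders))  # a tree easily memorizes this toy set: 1.0
```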
```
import pandas as pd
import numpy as np
from sklearn.utils import shuffle
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier
names = pd.read_csv('names_dataset.csv')
print(names.head(10))
print("%d names in dataset" % len(names))
# Get the data out of the dataframe into a numpy matrix and keep only the name and gender columns
names = names.values[:, 1:]  # .as_matrix() was removed in newer pandas; .values (or .to_numpy()) is equivalent
print(names)
# We're using 90% of the data for training
TRAIN_SPLIT = 0.90
def features(name):
    name = name.lower()
    return {
        'first-letter': name[0],      # First letter
        'first2-letters': name[0:2],  # First 2 letters
        'first3-letters': name[0:3],  # First 3 letters
        'last-letter': name[-1],      # Last letter
        'last2-letters': name[-2:],   # Last 2 letters
        'last3-letters': name[-3:],   # Last 3 letters
    }
# Feature Extraction
print(features("Alex"))
# Vectorize the features function
features = np.vectorize(features)
print(features(["Anna", "Hannah", "Paul"]))
# [ array({'first2-letters': 'an', 'last-letter': 'a', 'first-letter': 'a', 'last2-letters': 'na', 'last3-letters': 'nna', 'first3-letters': 'ann'}, dtype=object)
# array({'first2-letters': 'ha', 'last-letter': 'h', 'first-letter': 'h', 'last2-letters': 'ah', 'last3-letters': 'nah', 'first3-letters': 'han'}, dtype=object)
# array({'first2-letters': 'pa', 'last-letter': 'l', 'first-letter': 'p', 'last2-letters': 'ul', 'last3-letters': 'aul', 'first3-letters': 'pau'}, dtype=object)]
# Extract the features for the whole dataset
X = features(names[:, 0]) # X contains the features
# Get the gender column
y = names[:, 1] # y contains the targets
# Test if we built the dataset correctly
print("\n\nName: %s, features=%s, gender=%s" % (names[0][0], X[0], y[0]))
X, y = shuffle(X, y)
X_train, X_test = X[:int(TRAIN_SPLIT * len(X))], X[int(TRAIN_SPLIT * len(X)):]
y_train, y_test = y[:int(TRAIN_SPLIT * len(y))], y[int(TRAIN_SPLIT * len(y)):]
# Check to see if the datasets add up
print(len(X_train), len(X_test), len(y_train), len(y_test))
# Transforms lists of feature-value mappings to vectors.
vectorizer = DictVectorizer()
vectorizer.fit(X_train)
transformed = vectorizer.transform(features(["Mary", "John"]))
print(transformed)
print(type(transformed)) # <class 'scipy.sparse.csr.csr_matrix'>
print(transformed.toarray()[0][12]) # 1.0
print(vectorizer.feature_names_[12]) # first-letter=m
clf = DecisionTreeClassifier(criterion = 'gini')
clf.fit(vectorizer.transform(X_train), y_train)
# Accuracy on training set
print(clf.score(vectorizer.transform(X_train), y_train))
# Accuracy on test set
print(clf.score(vectorizer.transform(X_test), y_test))
# Therefore, we are getting a decent result from the names
print(clf.predict(vectorizer.transform(features(["SMYSLOV", "CHASTITY", "MISS PERKY", "SHARON", "ALONSO", "SECONDARY OFFICER"]))))
# Save the model using pickle
import pickle
pickle_out = open("gender_recog.pickle", "wb")
pickle.dump(clf, pickle_out)
pickle_out.close()
```
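Once saved, the model can be loaded back the same way. Here is a minimal, self-contained sketch of the serialize/deserialize roundtrip (using `pickle.dumps`/`loads` in memory rather than the `gender_recog.pickle` file, so it runs on its own):

```python
import pickle

def features(name):
    # same hand-crafted character features as above
    name = name.lower()
    return {
        'first-letter': name[0],
        'first2-letters': name[0:2],
        'first3-letters': name[0:3],
        'last-letter': name[-1],
        'last2-letters': name[-2:],
        'last3-letters': name[-3:],
    }

# roundtrip: anything picklable (including a fitted classifier) survives dumps/loads
blob = pickle.dumps(features("Mary"))
restored = pickle.loads(blob)
print(restored['last2-letters'])  # 'ry'
```

For the real model, `pickle.load(open("gender_recog.pickle", "rb"))` returns the fitted `DecisionTreeClassifier` ready for `predict`.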
# Get started
<a href="https://mybinder.org/v2/gh/tinkoff-ai/etna/master?filepath=examples/get_started.ipynb">
<img src="https://mybinder.org/badge_logo.svg" align='left'>
</a>
This notebook contains simple examples of a time series forecasting pipeline
using the ETNA library.
**Table of Contents**
* [Creating TSDataset](#chapter1)
* [Plotting](#chapter2)
* [Forecast single time series](#chapter3)
* [Simple forecast](#section_3_1)
* [Prophet](#section_3_2)
* [Catboost](#section_3_3)
* [Forecast multiple time series](#chapter4)
* [Pipeline](#chapter5)
## 1. Creating TSDataset <a class="anchor" id="chapter1"></a>
Let's load and look at the dataset
```
import pandas as pd
original_df = pd.read_csv("data/monthly-australian-wine-sales.csv")
original_df.head()
```
ETNA is strict about the data format:
* the column we want to predict should be called `target`
* the column with datetime data should be called `timestamp`
* because ETNA is always ready to work with multiple time series, the column `segment` is also compulsory
Our library works with the special data structure TSDataset. So, before starting anything, we need to convert the classical DataFrame to TSDataset.
Let's rename the columns first
```
original_df["timestamp"] = pd.to_datetime(original_df["month"])
original_df["target"] = original_df["sales"]
original_df.drop(columns=["month", "sales"], inplace=True)
original_df["segment"] = "main"
original_df.head()
```
Time to convert to TSDataset!
To do this, we initially need to convert the classical DataFrame to the special format.
```
from etna.datasets.tsdataset import TSDataset
df = TSDataset.to_dataset(original_df)
df.head()
```
Now we can construct the TSDataset.
In addition to passing the dataframe, we should specify the frequency of our data;
in this case it is monthly.
```
ts = TSDataset(df, freq="1M")
```
Oops, that fails: the timestamps fall on month starts, so the month-end frequency `"1M"` does not match them. Let's fix that:
```
ts = TSDataset(df, freq="MS")
```
We can look at the basic information about the dataset
```
ts.info()
```
Or in DataFrame format
```
ts.describe()
```
## 2. Plotting <a class="anchor" id="chapter2"></a>
Let's take a look at the time series in the dataset
```
ts.plot()
```
## 3. Forecasting single time series <a class="anchor" id="chapter3"></a>
Our library contains a wide range of different models for time series forecasting. Let's look at some of them.
### 3.1 Simple forecast<a class="anchor" id="section_3_1"></a>
Let's predict the monthly values in 1994 in our dataset using the ```NaiveModel```
```
train_ts, test_ts = ts.train_test_split(train_start="1980-01-01",
train_end="1993-12-01",
test_start="1994-01-01",
test_end="1994-08-01")
HORIZON = 8
from etna.models import NaiveModel
#Fit the model
model = NaiveModel(lag=12)
model.fit(train_ts)
#Make the forecast
future_ts = train_ts.make_future(HORIZON)
forecast_ts = model.forecast(future_ts)
```
Now let's look at a metric and plot the prediction.
All these methods are already built into ETNA.
```
from etna.metrics import SMAPE
smape = SMAPE()
smape(y_true=test_ts, y_pred=forecast_ts)
from etna.analysis import plot_forecast
plot_forecast(forecast_ts, test_ts, train_ts, n_train_samples=10)
```
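For intuition, SMAPE (symmetric mean absolute percentage error) can be sketched in plain NumPy. This assumes the common definition with the factor of 2; check `etna.metrics.SMAPE` for the library's exact formula:

```python
import numpy as np

def smape(y_true, y_pred):
    # 100 * mean(2 * |y - yhat| / (|y| + |yhat|)), in percent
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(2.0 * np.abs(y_pred - y_true) / (np.abs(y_true) + np.abs(y_pred)))

print(smape([100, 200], [110, 180]))  # ~10.03
```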
### 3.2 Prophet<a class="anchor" id="section_3_2"></a>
Now let's try to improve the forecast by predicting the values with Facebook Prophet.
```
from etna.models import ProphetModel
model = ProphetModel()
model.fit(train_ts)
#Make the forecast
future_ts = train_ts.make_future(HORIZON)
forecast_ts = model.forecast(future_ts)
smape(y_true=test_ts, y_pred=forecast_ts)
plot_forecast(forecast_ts, test_ts, train_ts, n_train_samples=10)
```
### 3.3 Catboost<a class="anchor" id="section_3_3"></a>
And finally, let's try the CatBoost model.
ETNA also has a wide range of transforms you may apply to your data.
Here is how it is done:
```
from etna.transforms import LagTransform
lags = LagTransform(in_column="target", lags=list(range(8, 24, 1)))
train_ts.fit_transform([lags])
from etna.models import CatBoostModelMultiSegment
model = CatBoostModelMultiSegment()
model.fit(train_ts)
future_ts = train_ts.make_future(HORIZON)
forecast_ts = model.forecast(future_ts)
from etna.metrics import SMAPE
smape = SMAPE()
smape(y_true=test_ts, y_pred=forecast_ts)
from etna.analysis import plot_forecast
train_ts.inverse_transform()
plot_forecast(forecast_ts, test_ts, train_ts, n_train_samples=10)
```
## 4. Forecasting multiple time series <a class="anchor" id="chapter4"></a>
In this section you will see an example of how easily ETNA works
with multiple time series, and get acquainted with other transforms ETNA contains.
```
original_df = pd.read_csv("data/example_dataset.csv")
original_df.head()
df = TSDataset.to_dataset(original_df)
ts = TSDataset(df, freq="D")
ts.plot()
ts.info()
import warnings
from etna.transforms import MeanTransform, LagTransform, LogTransform, \
SegmentEncoderTransform, DateFlagsTransform, LinearTrendTransform
warnings.filterwarnings("ignore")
log = LogTransform(in_column="target")
trend = LinearTrendTransform(in_column="target")
seg = SegmentEncoderTransform()
lags = LagTransform(in_column="target", lags=list(range(30, 96, 1)))
d_flags = DateFlagsTransform(day_number_in_week=True,
day_number_in_month=True,
week_number_in_month=True,
week_number_in_year=True,
month_number_in_year=True,
year_number=True,
special_days_in_week=[5, 6])
mean30 = MeanTransform(in_column="target", window=30)
HORIZON = 31
train_ts, test_ts = ts.train_test_split(train_start="2019-01-01",
train_end="2019-11-30",
test_start="2019-12-01",
test_end="2019-12-31")
train_ts.fit_transform([log, trend, lags, d_flags, seg, mean30])
from etna.models import CatBoostModelMultiSegment
model = CatBoostModelMultiSegment()
model.fit(train_ts)
future_ts = train_ts.make_future(HORIZON)
forecast_ts = model.forecast(future_ts)
smape = SMAPE()
smape(y_true=test_ts, y_pred=forecast_ts)
train_ts.inverse_transform()
plot_forecast(forecast_ts, test_ts, train_ts, n_train_samples=20)
```
## 5. Pipeline <a class="anchor" id="chapter5"></a>
Let's wrap everything into a pipeline to recreate the end-to-end model from the previous section.
```
from etna.pipeline import Pipeline
train_ts, test_ts = ts.train_test_split(train_start="2019-01-01",
train_end="2019-11-30",
test_start="2019-12-01",
test_end="2019-12-31")
```
We put the **model**, **transforms**, and **horizon** into a single object, which has an interface similar to the model's (fit/forecast)
```
model = Pipeline(model=CatBoostModelMultiSegment(),
transforms=[log, trend, lags, d_flags, seg, mean30],
horizon=HORIZON)
model.fit(train_ts)
forecast_ts = model.forecast()
```
As in the previous section, let's calculate the metrics and plot the forecast
```
smape = SMAPE()
smape(y_true=test_ts, y_pred=forecast_ts)
plot_forecast(forecast_ts, test_ts, train_ts, n_train_samples=20)
```
# Refitting NumPyro models with ArviZ (and xarray)
ArviZ is backend agnostic and therefore does not sample directly. In order to take advantage of algorithms that require refitting models several times, ArviZ uses `SamplingWrappers` to convert the API of the sampling backend to a common set of functions. Hence, functions like Leave Future Out Cross Validation can be used in ArviZ independently of the sampling backend used.
Below is one example of `SamplingWrapper` usage for NumPyro.
```
import arviz as az
import numpyro
import numpyro.distributions as dist
import jax.random as random
from numpyro.infer import MCMC, NUTS
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import xarray as xr
numpyro.set_host_device_count(4)
```
For the example we will use a linear regression.
```
np.random.seed(26)
xdata = np.linspace(0, 50, 100)
b0, b1, sigma = -2, 1, 3
ydata = np.random.normal(loc=b1 * xdata + b0, scale=sigma)
plt.plot(xdata, ydata)
```
Now we will write the NumPyro code:
```
def model(N, x, y=None):
b0 = numpyro.sample("b0", dist.Normal(0, 10))
b1 = numpyro.sample("b1", dist.Normal(0, 10))
sigma_e = numpyro.sample("sigma_e", dist.HalfNormal(10))
numpyro.sample("y", dist.Normal(b0 + b1 * x, sigma_e), obs=y)
data_dict = {
"N": len(ydata),
"y": ydata,
"x": xdata,
}
kernel = NUTS(model)
sample_kwargs = dict(
sampler=kernel,
num_warmup=1000,
num_samples=1000,
num_chains=4,
chain_method="parallel"
)
mcmc = MCMC(**sample_kwargs)
mcmc.run(random.PRNGKey(0), **data_dict)
```
We have defined a dictionary `sample_kwargs` that will be passed to the `SamplingWrapper` in order to make sure that all refits use the same sampler parameters. We follow the same pattern with `az.from_numpyro`.
```
dims = {"y": ["time"], "x": ["time"]}
idata_kwargs = {
"dims": dims,
"constant_data": {"x": xdata}
}
idata = az.from_numpyro(mcmc, **idata_kwargs)
del idata.log_likelihood
idata
```
We are now missing the `log_likelihood` group because we have not used the `log_likelihood` argument in `idata_kwargs`. We are doing this to ease the job of the sampling wrapper. Instead of going out of our way to get NumPyro to calculate the pointwise log likelihood values for each refit and for the excluded observation at every refit, we will compromise and manually write a function to calculate the pointwise log likelihood.
Even though it is not ideal to lose part of the straight out of the box capabilities of the NumPyro-ArviZ integration, this should generally not be a problem. We are basically moving the pointwise log likelihood calculation from the sampling backend to the Python code; in both cases we need to manually write the function to calculate the pointwise log likelihood.
Moreover, the Python computation could even be written to be compatible with Dask. Thus it will work even in cases where the large number of observations makes it impossible to store pointwise log likelihood values (with shape `n_samples * n_observations`) in memory.
```
def calculate_log_lik(x, y, b0, b1, sigma_e):
mu = b0 + b1 * x
return stats.norm(mu, sigma_e).logpdf(y)
```
This function should work for any shape of the input arrays as long as their shapes are compatible and can broadcast. There is no need to loop over each draw in order to calculate the pointwise log likelihood using scalars.
Therefore, we can use `xr.apply_ufunc` to handle the broadcasting and preserve the dimension names:
```
log_lik = xr.apply_ufunc(
calculate_log_lik,
idata.constant_data["x"],
idata.observed_data["y"],
idata.posterior["b0"],
idata.posterior["b1"],
idata.posterior["sigma_e"],
)
idata.add_groups(log_likelihood=log_lik)
```
The first argument is the function, followed by as many positional arguments as needed by the function, 5 in our case. As this case does not have many different dimensions nor combinations of these, we do not need to use any extra kwargs passed to [`xr.apply_ufunc`](http://xarray.pydata.org/en/stable/generated/xarray.apply_ufunc.html#xarray.apply_ufunc).
We are now passing the arguments to `calculate_log_lik` initially as `xr.DataArrays`. What is happening here behind the scenes is that `xr.apply_ufunc` is broadcasting and aligning the dimensions of all the DataArrays involved and afterwards passing numpy arrays to `calculate_log_lik`. Everything works automagically.
Now let's see what happens if we were to pass the arrays directly to `calculate_log_lik` instead:
```
calculate_log_lik(
idata.constant_data["x"].values,
idata.observed_data["y"].values,
idata.posterior["b0"].values,
idata.posterior["b1"].values,
idata.posterior["sigma_e"].values
)
```
If you are still curious about the magic of xarray and `xr.apply_ufunc`, you can also try to modify the `dims` used to generate the InferenceData a couple cells before:
dims = {"y": ["time"], "x": ["time"]}
What happens to the result if you use a different name for the dimension of `x`?
```
idata
```
We will create a subclass of `az.SamplingWrapper`. Therefore, instead of having to implement all functions required by `az.reloo` we only have to implement `sel_observations` (we are cloning `sample` and `get_inference_data` from the `PyStanSamplingWrapper` in order to use `apply_ufunc` instead of assuming the log likelihood is calculated within Stan).
Note that of the 2 outputs of `sel_observations`, `data__i` is a dictionary because it is an argument of `sample`, which will pass it as is to `mcmc.run`, whereas `data_ex` is a list because it is an argument to `log_likelihood__i` which will pass it as `*data_ex` to `apply_ufunc`. More on `data_ex` and `apply_ufunc` integration below.
```
class NumPyroSamplingWrapper(az.SamplingWrapper):
def __init__(self, model, **kwargs):
self.rng_key = kwargs.pop("rng_key", random.PRNGKey(0))
super(NumPyroSamplingWrapper, self).__init__(model, **kwargs)
def sample(self, modified_observed_data):
self.rng_key, subkey = random.split(self.rng_key)
mcmc = MCMC(**self.sample_kwargs)
mcmc.run(subkey, **modified_observed_data)
return mcmc
    def get_inference_data(self, fit):
        # Adapted from PyStanSamplingWrapper; use the fit passed in, not the global mcmc
        idata = az.from_numpyro(fit, **self.idata_kwargs)
        return idata
class LinRegWrapper(NumPyroSamplingWrapper):
def sel_observations(self, idx):
xdata = self.idata_orig.constant_data["x"]
ydata = self.idata_orig.observed_data["y"]
mask = np.isin(np.arange(len(xdata)), idx)
# data__i is passed to numpyro to sample on it -> dict of numpy array
# data_ex is passed to apply_ufunc -> list of DataArray
data__i = {"x": xdata[~mask].values, "y": ydata[~mask].values, "N": len(ydata[~mask])}
data_ex = [xdata[mask], ydata[mask]]
return data__i, data_ex
loo_orig = az.loo(idata, pointwise=True)
loo_orig
```
In this case, the Leave-One-Out Cross Validation (LOO-CV) approximation using Pareto Smoothed Importance Sampling (PSIS) works for all observations, so we will modify `loo_orig` in order to make `az.reloo` believe that PSIS failed for some observations. This will also serve as a validation of our wrapper, as the PSIS LOO-CV already returned the correct value.
```
loo_orig.pareto_k[[13, 42, 56, 73]] = np.array([0.8, 1.2, 2.6, 0.9])
```
We initialize our sampling wrapper. Let's stop and analyze each of the arguments.
We then use the `log_lik_fun` and `posterior_vars` argument to tell the wrapper how to call `xr.apply_ufunc`. `log_lik_fun` is the function to be called, which is then called with the following positional arguments:
log_lik_fun(*data_ex, *[idata__i.posterior[var_name] for var_name in posterior_vars])
where `data_ex` is the second element returned by `sel_observations` and `idata__i` is the InferenceData object result of `get_inference_data` which contains the fit on the subsetted data. We have generated `data_ex` to be a list of DataArrays so it plays nicely with this call signature.
We use `idata_orig` as a starting point, and mostly as a source of observed and constant data which is then subsetted in `sel_observations`.
Finally, `sample_kwargs` and `idata_kwargs` are used to make sure all refits and corresponding InferenceData are generated with the same properties.
```
numpyro_wrapper = LinRegWrapper(
    mcmc,
    rng_key=random.PRNGKey(7),
    log_lik_fun=calculate_log_lik,
    posterior_vars=("b0", "b1", "sigma_e"),
    idata_orig=idata,
    sample_kwargs=sample_kwargs,
    idata_kwargs=idata_kwargs
)
```
And eventually, we can use this wrapper to call `az.reloo` and compare the results with the PSIS LOO-CV results.
```
loo_relooed = az.reloo(numpyro_wrapper, loo_orig=loo_orig)
loo_relooed
loo_orig
```
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import pickle
import functools as ft
X = x_train[1]  # note: x_train is defined in the XOR cell below; run that cell first
X
not ft.reduce(lambda old, new: old == new, X >= 0)
```
## XOR
```
def xor(X):
if not ft.reduce(lambda old, new: old == new,X >= 0):
return 1
else:
return 0
x_train = np.array([(np.random.random_sample(5000) - 0.5) * 2 for dim in range(2)]).transpose()
x_test = np.array([(np.random.random_sample(100) - 0.5) * 2 for dim in range(2)]).transpose()
y_train = np.apply_along_axis(xor, 1, x_train)
y_test = np.apply_along_axis(xor, 1, x_test)
with open('data/xor.tuple', 'wb') as xtuple:
pickle.dump((x_train, y_train, x_test, y_test), xtuple)
```
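A quick sanity check of the sign-based labelling above: the `reduce` trick compares the signs of the two coordinates, so a point is labelled 1 exactly when its coordinates have different signs (a small self-contained sketch):

```python
import functools as ft
import numpy as np

def xor(X):
    # 1 when the two coordinates differ in sign (>= 0 counted as positive), else 0
    if not ft.reduce(lambda old, new: old == new, X >= 0):
        return 1
    return 0

print(xor(np.array([ 0.5,  0.5])))  # 0: same sign
print(xor(np.array([-0.5,  0.5])))  # 1: opposite signs
print(xor(np.array([-0.5, -0.5])))  # 0
```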
## Multivariante Regression - Housing Data Set
https://archive.ics.uci.edu/ml/datasets/Housing
1. CRIM: per capita crime rate by town
2. ZN: proportion of residential land zoned for lots over 25,000 sq.ft.
3. INDUS: proportion of non-retail business acres per town
4. CHAS: Charles River dummy variable (= 1 if tract bounds river; 0 otherwise)
5. NOX: nitric oxides concentration (parts per 10 million)
6. RM: average number of rooms per dwelling
7. AGE: proportion of owner-occupied units built prior to 1940
8. DIS: weighted distances to five Boston employment centres
9. RAD: index of accessibility to radial highways
10. TAX: full-value property-tax rate per \$10,000
11. PTRATIO: pupil-teacher ratio by town
12. B: 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town
13. LSTAT: \% lower status of the population
14. MEDV: Median value of owner-occupied homes in $1000's
```
!wget -P data/ https://archive.ics.uci.edu/ml/machine-learning-databases/housing/housing.data
housing = pd.read_csv('data/housing.data', delim_whitespace=True,
names=['CRIM',
'ZM',
'INDUS',
'CHAS',
'NOX',
'RM',
'AGE',
'DIS',
'RAD',
'TAX',
'PTRATIO',
'B',
'LSTAT',
'MEDV'])
housing.head()
with open('data/housing.dframe', 'wb') as dhousing:
pickle.dump(housing, dhousing)
```
## Binary Classification - Pima Indians Diabetes Data Set
https://archive.ics.uci.edu/ml/datasets/Pima+Indians+Diabetes
1. Number of times pregnant
2. Plasma glucose concentration a 2 hours in an oral glucose tolerance test
3. Diastolic blood pressure (mm Hg)
4. Triceps skin fold thickness (mm)
5. 2-Hour serum insulin (mu U/ml)
6. Body mass index (weight in kg/(height in m)^2)
7. Diabetes pedigree function
8. Age (years)
9. Class variable (0 or 1)
```
!wget -P data/ https://archive.ics.uci.edu/ml/machine-learning-databases/pima-indians-diabetes/pima-indians-diabetes.data
data = pd.read_csv('data/pima-indians-diabetes.data',
names=['n_pregnant',
'glucose',
'mmHg',
'triceps',
'insulin',
'BMI',
'pedigree',
'age',
'class'])
data.head()
x = np.array(data)[:,:-1]
y = np.array(data)[:,-1]
n_train = int(len(x) * 0.70)
x_train = x[:n_train]
x_test = x[n_train:]
y_train = y[:n_train]
y_test = y[n_train:]
with open('data/pima-indians-diabetes.tuple', 'wb') as xtuple:
pickle.dump((x_train, y_train, x_test, y_test), xtuple)
```
## Image Classification - MNIST dataset
http://deeplearning.net/data/mnist/mnist.pkl.gz
```
!wget -P data/ http://deeplearning.net/data/mnist/mnist.pkl.gz
import pickle, gzip
# Load the dataset (it was pickled with Python 2, hence encoding='latin1')
f = gzip.open('data/mnist.pkl.gz', 'rb')
train_set, valid_set, test_set = pickle.load(f, encoding='latin1')
f.close()
plt.imshow(train_set[0][0].reshape((28,28)),cmap='gray', interpolation=None)
!wget -P data/ http://data.dmlc.ml/mxnet/data/mnist.zip
!unzip -d data/ -u data/mnist.zip
```
## Image Classification - CIFAR-10 dataset
https://www.cs.toronto.edu/~kriz/cifar.html
```
!wget -P data/ https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
!tar -xzf data/cifar-10-python.tar.gz -C data/
with open('data/cifar-10-batches-py/data_batch_1', 'rb') as batch:
    cifar1 = pickle.load(batch, encoding='latin1')  # Python-2 pickle, so pass an encoding
cifar1.keys()
img = np.stack([cifar1['data'][0].reshape((3,32,32))[0,:,:],
cifar1['data'][0].reshape((3,32,32))[1,:,:],
cifar1['data'][0].reshape((3,32,32))[2,:,:]],axis=2)
plt.imshow(img, cmap='gray')
```
# <div align="center">What is a Tensor</div>
---------------------------------------------------------------------
you can Find me on Github:
> ###### [ GitHub](https://github.com/lev1khachatryan)
***Tensors are not generalizations of vectors***. It’s very slightly more understandable to say that tensors are generalizations of matrices, in the same way that it is slightly more accurate to say “vanilla ice cream is a generalization of chocolate ice cream” than it is to say that “vanilla ice cream is a generalization of dessert”, closer, but still false. Vanilla and Chocolate are both ice cream, but chocolate ice cream is not a type of vanilla ice cream, and “dessert” certainly isn’t a type of vanilla ice cream. In fact, technically, ***vectors are generalizations of tensors.*** What we generally think of as vectors are geometrical points in space, and we normally represent them as an array of numbers. That array of numbers is what people are referring to when they say "Tensors are generalizations of vectors", but really, even this adjusted claim is fundamentally false and extremely misleading.
At first let's define what is a vector.
## Definition of a vector space
The set ***V*** is a vector space with respect to the operations + (which is any operation that maps two elements of the space to another element of the space, not necessarily addition) and * (which is any operation that maps an element in the space and a scalar to another element in the space, not necessarily multiplication) if and only if, for every $x,y,z ∈ V$ and $a,b ∈ R$
* \+ is commutative, that is x+y=y+x
* \+ is associative, that is (x+y)+z=x+(y+z)
* There exists an identity element in the space, that is there exists an element 0 such that x+0=x
* Every element has an inverse, that is for every element x there exists an element −x such that x+−x=0
* \* is associative, that is a(b∗x)=(ab)∗x
* There is scalar distributivity over +, that is a∗(x+y)=a∗x+a∗y
* There is vector distributivity over scalar addition, that is (a+b)∗x=a∗x+b∗x
* And finally, 1∗x=x (this one is an independent axiom; it cannot be derived from the previous seven)
***A vector is defined as a member of such a space***. Notice how nothing here is explicitly stated to be numerical. We could be talking about colors, or elephants, or glasses of milk; as long as we meaningfully define these two operations, anything can be a vector. The special case of vectors that we usually think about in physics and geometry satisfy this definition ( i.e. points in space or “arrows”). Thus, “arrows” are special cases of vectors. More formally, every “arrow” v represents the line segment from 0, "the origin", which is the identity element of the vector space, to some other point in space. In this view, you can construct a vector space of “arrows” by first picking a point in space, and taking the set of all line segments from that point. (From now on, I will use the term “arrows” to formally distinguish between formal vectors and the type of vectors that have “magnitude and direction”.)
Okay, so anyone trying to understand tensors probably already knows this stuff.
But here is something you may not have heard about before if you are learning about tensors. When we define a vector space like this, we generally find that it is natural to define an operation that gives us lengths and angles. ***A vector space with lengths and angles is called an inner product space.***
## Definition of a Inner product space
An inner product space is a vector space V with an additional operation ***⋅*** such that, for all x,y,z ∈ V
* x⋅y ∈ R
* x⋅x ≥ 0
* x⋅x=0 ⟺ x=0
* x⋅(ay)=a(x⋅y)
* x⋅y=y⋅x
* x⋅(y+z)=x⋅y+x⋅z
We define the length of a vector x in an inner product space to be $||x|| = \sqrt{x⋅x}$, and the angle between two vectors x,y to be $\arccos\left(\frac{x⋅y}{||x||\,||y||}\right).$
This is the equivalent of the dot product, which is defined to be $||x||||y||cos(θ)$, but note that this is not defined in terms of any sort of "components" of the vector, there are no arrays of numbers mentioned. I.e. the dot product is a geometrical operation.
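These axioms are easy to check numerically for the standard dot product. A small NumPy sketch (illustrative only) verifying symmetry, linearity, positivity, and the derived length and angle:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 4))
a = 2.5

dot = np.dot
assert np.isclose(dot(x, y), dot(y, x))                  # x.y = y.x
assert np.isclose(dot(x, a * y), a * dot(x, y))          # x.(ay) = a(x.y)
assert np.isclose(dot(x, y + z), dot(x, y) + dot(x, z))  # distributivity
assert dot(x, x) >= 0                                    # positivity

length = np.sqrt(dot(x, x))
angle = np.arccos(dot(x, y) / (length * np.sqrt(dot(y, y))))
print(length, angle)
```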
So I have secretly given you your first glimpse at a tensor. Where was it? Was it x? Was it y? Was it V? Was it the glass of milk???
It was none of these things; ***it was the operation itself . The dot product itself is an example of a tensor.***
Well, again, ***tensors aren’t generalizations of vectors at all. Vectors, as we defined them above, are generalizations of tensors. And tensors aren’t technically generalizations of matrices. But tensors can certainly be thought of as kind of the same sort of object as a matrix.***
There are two things that tensors and matrices have in common. The first, and most important thing, is that they are both n-linear maps. This is why tensors are almost generalizations of matrices. The second, and more misleading, thing is that they can be represented as a 2d array of numbers. This second thing is a huge, and I mean HUGE red herring, and has undoubtedly caused an innumerable number of people to be confused.
*Let’s tackle the concept of bilinear maps, and then we can use that knowledge of bilinear maps to help us tackle the concept of representing rank 2 tensors as 2d arrays.*
## Bilinear maps
The dot product is what the cool kids like to call a bilinear map. This just means that the dot product has the following properties:
* x⋅(y+z)=x⋅y+x⋅z
* (y+z)⋅x=y⋅x+z⋅x
* x⋅(ay)=a(x⋅y)
Why is this important? Well if we represent the vector x as $x=x_{1}i+x_{2}j$, and we represent the vector $y=y_{1}i+y_{2}j$, then because ⋅ is linear, the following is true: $x⋅y=y_{1} x_{1} i⋅i + y_{2} x_{2} j⋅j + (x_{1} y_{2} + x_{2} y_{1})i⋅j$
This means if we know the values of i⋅i, j⋅j, and i⋅j, then we have completely defined the operation ⋅. In other words, knowing just these 3 values allows us to calculate the value of x⋅y for any x and y.
Now we can describe how ⋅ might be represented as a 2d array. If ⋅ is the standard cartesian dot product that you learned about on the first day of your linear algebra or physics class, and i and j are both the standard cartesian unit vectors, then i⋅i=1, j⋅j=1, and j⋅i=i⋅j=0.
To represent this tensor ⋅ as a 2d array, we would create a table holding these values, i.e.
\begin{bmatrix}
⋅ & i & j \\[0.3em]
i & 1 & 0 \\[0.3em]
j & 0 & 1
\end{bmatrix}
Or, more compactly
\begin{bmatrix}
1 & 0 \\[0.3em]
0 & 1
\end{bmatrix}
DO NOT LET THE SIMILARITY TO MATRIX NOTATION FOOL YOU. Multiplying this array by a vector would clearly give the wrong answer for many reasons, the most important of which is that the dot product produces a scalar quantity, while a matrix produces a vector quantity. This notation is simply a way of neatly writing down what the dot product represents; it is not a way of making the dot product into a matrix.
If we become more general, then we can take arbitrary values for these dot products i⋅i=a, j⋅j=b, and j⋅i=i⋅j=c.
Which would be represented as
\begin{bmatrix}
a & c \\[0.3em]
c & b
\end{bmatrix}
***A tensor defined in this way is called the metric tensor. The reason it is called that, and the reason it is so important in general relativity, is that just by changing the values we can change the definition of lengths and angles*** (remember that inner product spaces define length and angles in terms of ⋅), and we can enumerate over all possible definitions of lengths and angles. We call this a rank 2 tensor because it is a 2d array (i.e. it looks like a square), if we had a 3x3 tensor, such as a metric tensor for 3 dimensional space it would still be an example of a rank 2 tensor.
\begin{bmatrix}
a & s & d \\[0.3em]
f & g & h \\[0.3em]
z & x & b
\end{bmatrix}
(Note: the table is symmetric along the diagonal only because the metric tensor is commutative. A general tensor does not have to be commutative and thus its representation does not have to be symmetric.)
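As an illustrative sketch, the "table of values" view can be computed directly: with a metric represented as a matrix M, the induced inner product is x·y = xᵀMy, and changing M's entries changes the resulting lengths (the numbers below are made up for illustration):

```python
import numpy as np

def inner(x, y, M):
    # inner product induced by the metric tensor M: x . y = x^T M y
    return x @ M @ y

i = np.array([1.0, 0.0])
j = np.array([0.0, 1.0])

euclid = np.eye(2)                  # a=b=1, c=0: the standard dot product
stretched = np.array([[4.0, 0.0],   # a=4: the i direction now measures "longer"
                      [0.0, 1.0]])

print(np.sqrt(inner(i, i, euclid)))     # length of i under the euclidean metric: 1.0
print(np.sqrt(inner(i, i, stretched)))  # same arrow, different metric: 2.0
print(inner(i, j, euclid))              # 0.0: i and j remain orthogonal
```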
To get a rank 3 tensor, we would create a cube-like table of values as opposed to a square-like one (I can’t do this in latex so you’ll have to imagine it). A rank 3 tensor would be a trilinear map. A trilinear map m takes 3 vectors from a vector space V, and can be defined in terms of the values it takes when its arguments are the basis vectors of V. E.g. if V has two basis vectors i and j, then m can be defined by defining the values of m(i,i,i), m(i,i,j), m(i,j,i), m(i,j,j), m(j,i,i), m(j,i,j), m(j,j,i), and m(j,j,j) in a 3d array.
A rank 4 tensor would be a 4-linear (a.k.a. quadrilinear) map that would take 4 arguments, and would thus be represented as a 4-dimensional array, etc.
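Such higher-rank tables are just higher-dimensional arrays. A hypothetical rank 3 tensor on a 2-dimensional space can be stored as a 2×2×2 array and evaluated by multilinearity (the values in `m` are arbitrary, chosen only for illustration):

```python
import numpy as np

# m[p, q, r] holds the value m(e_p, e_q, e_r) on the basis vectors
m = np.arange(8.0).reshape(2, 2, 2)

def trilinear(x, y, z, m):
    # expand m(x, y, z) over basis components: sum of m[p,q,r] * x_p * y_q * z_r
    return np.einsum('pqr,p,q,r->', m, x, y, z)

x = np.array([1.0, 2.0])
y = np.array([0.0, 1.0])
z = np.array([3.0, -1.0])

val = trilinear(x, y, z, m)
print(val)  # 25.0
# multilinearity: scaling one argument scales the result
assert np.isclose(trilinear(2 * x, y, z, m), 2 * val)
```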
## Why do people think tensors are generalizations of vectors?
So now we come to why people think tensors are generalizations of vectors. It's because, if we take a function $f(y)=x⋅y$, then f, being the linear scallawag it is, can be defined with only 2 values. $f(y)=y_{1}f(i)+y_{2}f(j)$, so knowing the values of f(i) and f(j) completely defines f. And therefore, f is a rank 1 tensor, i.e. a multilinear map with one argument. This would be represented as a 1d array, very much like the common notion of a vector. Furthermore, these values completely define x as well. If ⋅ is specifically the cartesian metric tensor, then the values of the representation of x and the values of the representation of f are exactly the same. This is why people think tensors are generalizations of vectors.
But if ⋅ is given different values, then the representation of x and the representation of f will have different values. ***Vectors by themselves are not linear maps, they can just be thought of as linear maps***. In order for them to actually be linear maps, they need to be combined with some sort of linear operator such as ⋅.
So here is the definition: ***A tensor is any multilinear map from a vector space to a scalar field***. (Note: A multilinear map is just a generalization of linear and bilinear maps to maps that have more than 2 arguments. I.e. any map which is distributive over addition and scalar multiplication. Linear maps are considered a type of multilinear map)
This definition as a multilinear maps is another reason people think tensors are generalization of matrices, because matrices are linear maps just like tensors. But the distinction is that matrices take a vector space to itself, while tensors take a vector space to a scalar field. So a matrix is not strictly speaking a tensor.
# Pytorch : Classification Problem - Diabetics with NN
```
#import necessary libraries
# the reason for each import is described alongside it
import numpy as np # converting data from pandas to torch
import torch
import torch.nn as nn #main library to define the architecture of the neural network
import pandas as pd # to read the data from the csv file
from sklearn.preprocessing import StandardScaler # used for feature normalization
from torch.utils.data import Dataset,DataLoader
import matplotlib.pyplot as plt # to plot loss with epochs
```
# Data Preprocessing
```
data = pd.read_csv('diabetes.csv')
data
#Extract features X and o/p y from the data
X = data.iloc[:,:-1]
X = np.array(X)
y = data.iloc[:,-1]
y = np.array(y) # need to convert datatype into float else not possible to convert into tensor
y[y=='positive']=1.
y[y=='negative']=0.
y = np.array(y,dtype=np.float64)
y = y.reshape(len(y),1)
```
# Feature normalization
# Formula: $x^{\prime}=\frac{x-\mu}{\sigma}$, where $\mu$ is the mean and $\sigma$ is the std
```
mean = X.mean(axis = 0) # taking the mean along each feature column
std = X.std(axis = 0)
X_norm = (X-mean)/std
X_norm
#alternate approach
sc = StandardScaler()
X_norm1 = sc.fit_transform(X)
X_norm1
#Converting numpy array into tensor
X_tensor = torch.tensor(X_norm)
y_tensor = torch.tensor(y)
print(X_tensor.shape)
print(y_tensor.shape)
# We need a custom dataset class to feed the data into the dataloader,
# because the PyTorch DataLoader expects a Dataset object.
# This part can be copy-pasted whenever custom data comes in (X, y) form.
class DiabetesDataset(Dataset):
    def __init__(self,x,y):
        self.x = x
        self.y = y
    def __getitem__(self,index):
        # Get one item from the dataset
        return self.x[index], self.y[index]
    def __len__(self):
        return len(self.x)
dataset = DiabetesDataset(X_tensor,y_tensor)
print(dataset.__getitem__(2))
print(dataset.__len__())
#create the dataloader for the model
dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
#Let's check the dataloader and iterate through it
print('Length of the dataloader:{}'.format(str(len(dataloader)))) # i.e no of batches
for (x,y) in dataloader:
print("For one iteration (batch), there is:")
print("Data: {}".format(x.shape))
print("Labels: {}".format(y.shape))
break
```
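Conceptually, the `DataLoader` above just shuffles indices and yields fixed-size chunks of the dataset. A minimal pure-Python sketch of that batching logic (an illustration only — not PyTorch's actual implementation, which also handles collation and parallel workers):

```python
import random

def iterate_batches(data, batch_size, shuffle=True, seed=None):
    """Yield lists of items from `data` in chunks of `batch_size`."""
    indices = list(range(len(data)))
    if shuffle:
        random.Random(seed).shuffle(indices)
    for start in range(0, len(indices), batch_size):
        yield [data[i] for i in indices[start:start + batch_size]]

samples = list(range(10))
batches = list(iterate_batches(samples, batch_size=3, shuffle=False))
# 10 samples with batch_size 3 -> 4 batches, the last one partial
assert len(batches) == 4 and batches[-1] == [9]
```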

```
#create NN architecture as depicted above
class Model(nn.Module):
def __init__(self,input_features,labels):
        super(Model, self).__init__() # without this call we get: cannot assign module before Module.__init__() call
self.input_features = input_features
self.fc1 = nn.Linear(input_features,5)
self.fc2 = nn.Linear(5,4)
self.fc3 = nn.Linear(4,3)
self.fc4 = nn.Linear(3,labels)
self.sigmoid = nn.Sigmoid() # activation fn for the o/p
self.tanh = nn.Tanh() # activation function for the hidden layers
def forward(self,X):
output = self.tanh(self.fc1(X))
output = self.tanh(self.fc2(output))
output = self.tanh(self.fc3(output))
output = self.sigmoid(self.fc4(output))
return output
```
**Reference only**
# Note: we don't need to derive this manually; PyTorch provides this cost function in the library
$H_{p}(q)=-\frac{1}{N} \sum_{i=1}^{N} y_{i} \cdot \log \left(p\left(y_{i}\right)\right)+\left(1-y_{i}\right) \cdot \log \left(1-p\left(y_{i}\right)\right)$
`cost = -(Y * torch.log(hypothesis) + (1 - Y) * torch.log(1 - hypothesis)).mean()`
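As a numeric sanity check of the cost formula above (pure Python, independent of the `nn.BCELoss` used later in this notebook):

```python
import math

def bce(y_true, p_pred):
    """Binary cross-entropy, averaged over the batch."""
    n = len(y_true)
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for y, p in zip(y_true, p_pred)) / n

# A confident correct prediction gives a small loss,
# a confident wrong prediction a large one.
assert bce([1.0], [0.99]) < 0.02
assert bce([1.0], [0.01]) > 4.0
```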
```
def train(model,epochs,criterion,optimizer,dataloader):
    average_loss = []
    for epoch in range(epochs):
        minibatch_loss = 0.0
        for x,y in dataloader:
            x = x.float() # cast to float to avoid dtype errors
            y = y.float()
            # forward propagation
            output = model(x)
            # calculate loss
            loss = criterion(output,y)
            minibatch_loss += loss.item() # .item() detaches the scalar so the graph isn't kept alive
            # clear the gradient buffer (otherwise gradients accumulate across batches)
            optimizer.zero_grad()
            # backward propagation
            loss.backward()
            # weight update: w <-- w - lr * gradient
            optimizer.step()
        # average loss over the epoch
        average_loss.append(minibatch_loss/len(dataloader))
        # accuracy on the last minibatch of the epoch
        output = (output>0.5).float()
        # fraction of correct predictions
        accuracy = (output == y).float().mean()
        if (epoch+1)%50 ==0:
            print("Epoch {}/{}, Loss: {:.3f}, Accuracy: {:.3f}".format(epoch+1,epochs, average_loss[-1], accuracy))
    return [average_loss,model]
# call train function:
model = Model(X_tensor.shape[1],y_tensor.shape[1])
epochs = 150
# Loss function - Binary Cross Entropy
#In Binary Cross Entropy: the input and output should have the same shape
#size_average = True --> the losses are averaged over observations for each minibatch
#criterion = nn.BCELoss(size_average=True)
criterion = nn.BCELoss(reduction='mean')
#We will use SGD with momentum
# torch.optim.SGD implements stochastic gradient descent (with optional momentum)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
average_loss,model = train(model,epochs,criterion,optimizer,dataloader)
def BCELossfn(output,y): # reference implementation of BCE (summed over the batch, not averaged)
loss = -(torch.sum(y*torch.log(output)+(1-y)*torch.log(1-output)))
return loss
def predict(model,X,y):
output = model(X)
if output>0.5:
output = 1.
else:
output = 0.
print('Predicted output:{}'.format(str(output)))
print('Ground truth:{}'.format(str(y.item())))
predict(model,X_tensor[0].float(),y_tensor[0].float())
```
| github_jupyter |
# Image Classifier
## Dataset : 28x28 pixel Low Res Images
```
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import os
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 7
CATEGORIES = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat','Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
```
## Getting the Data
```
data = keras.datasets.fashion_mnist
(train_images, train_labels), (test_images, test_labels) = data.load_data()
print("Train Dataset Dimensions : ",train_images.shape)
print("Test Dataset Dimensions : ",test_images.shape)
train_labels
test_labels
```
### Sample Train Data
```
plt.figure()
plt.imshow(train_images[17])
plt.colorbar()
plt.grid(False)
```
### Sample Test Data
```
plt.figure()
plt.imshow(test_images[17])
plt.colorbar()
plt.grid(False)
```
## Preprocessing (Normalisation)
```
train_images = train_images / 255.0
test_images = test_images / 255.0
```
### Neural Network
##### Structure of the NNs:
### Model I
• Input Feature (Flattened/Linearised)<br>
• Hidden Dense Layer 1: 128 neurons (AF : RELU)<br>
• Output Layer : 10 neurons (AF : SOFTMAX)<br>
### Model II
• Input Feature (Flattened/Linearised)<br>
• Hidden Dense Layer 1: 128 neurons (AF : RELU)<br>
• Hidden Dense Layer 2: 128 neurons (AF : RELU)<br>
• Output Layer : 10 neurons (AF : SOFTMAX)<br>
### Model III
• Input Feature (Flattened/Linearised)<br>
• Hidden Dense Layer 1: 128 neurons (AF : RELU)<br>
• Hidden Dense Layer 2: 128 neurons (AF : RELU)<br>
• Hidden Dense Layer 3: 128 neurons (AF : RELU)<br>
• Output Layer : 10 neurons (AF : SOFTMAX)<br>
```
classifier1 = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
classifier2 = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
classifier3 = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(128, activation=tf.nn.relu),
keras.layers.Dense(10, activation=tf.nn.softmax)
])
classifier1.compile(optimizer=tf.train.AdamOptimizer(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
classifier2.compile(optimizer=tf.train.AdamOptimizer(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
classifier3.compile(optimizer=tf.train.AdamOptimizer(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
```
## Training All Classifiers/Models
```
classifier1.fit(train_images, train_labels, epochs=7)
classifier2.fit(train_images, train_labels, epochs=7)
classifier3.fit(train_images, train_labels, epochs=7)
```
## Finding Accuracy Of Each Classifier On Test Set
```
test_loss, test_acc = classifier1.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
test_loss, test_acc = classifier2.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
test_loss, test_acc = classifier3.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
class1_predict = classifier1.predict(test_images)
class2_predict = classifier2.predict(test_images)
class3_predict = classifier3.predict(test_images)
def plot_image(i, arr, actual_label, img):
arr, actual_label, img = arr[i], actual_label[i], img[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.imshow(img, cmap=plt.cm.binary)
predicted_label = np.argmax(arr)
if predicted_label == actual_label:
color = 'blue'
else:
color = 'red'
plt.xlabel("{} {:2.0f}% ({})".format(CATEGORIES[predicted_label],
100*np.max(arr),
CATEGORIES[actual_label]),
color=color)
def plot_value_array(i, arr, actual_label):
arr, actual_label = arr[i], actual_label[i]
plt.grid(False)
plt.xticks([])
plt.yticks([])
thisplot = plt.bar(range(10), arr, color="#777777")
plt.ylim([0, 1])
predicted_label = np.argmax(arr)
thisplot[predicted_label].set_color('red')
thisplot[actual_label].set_color('blue')
num_rows = 4
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, class1_predict, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, class1_predict, test_labels)
num_rows = 4
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, class3_predict, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, class3_predict, test_labels)
num_rows = 4
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
plot_image(i, class2_predict, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
plot_value_array(i, class2_predict, test_labels)
```
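Beyond the per-image plots, a confusion matrix summarizes where each classifier goes wrong; `sklearn.metrics.confusion_matrix` computes this directly, but a dependency-free sketch of the idea (assuming integer class labels, as in this dataset) is:

```python
def confusion_matrix(actual, predicted, n_classes=10):
    """matrix[i][j] counts samples with true class i predicted as class j."""
    matrix = [[0] * n_classes for _ in range(n_classes)]
    for a, p in zip(actual, predicted):
        matrix[a][p] += 1
    return matrix

cm = confusion_matrix([0, 0, 1, 2], [0, 1, 1, 2], n_classes=3)
assert cm[0][0] == 1 and cm[0][1] == 1  # one class-0 sample misclassified as 1
assert sum(sum(row) for row in cm) == 4  # every sample counted exactly once
```

With the notebook's own variables this could be called as `confusion_matrix(test_labels, [np.argmax(p) for p in class1_predict])` and, since seaborn is already imported, visualized with `sns.heatmap`.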
| github_jupyter |
Dependencies and starter code
Observations:
1. The number of data points per drug regimen group was not equal. Capomulin and Ramicane had the most data points, and they were also among the top four most promising treatment regimens. Their larger sample sizes may allow for greater accuracy when analyzing their efficacy.
2. Capomulin appears to be the most effective drug regimen: it greatly reduced tumor volume, and it has the lowest mean tumor volume and the second-lowest SEM.
3. Tumor volume is positively correlated with mouse weight for mice treated with Capomulin. This is supported by the correlation coefficient of this relationship, r = +0.96.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
# Study data files
mouse_metadata = "data/Mouse_metadata.csv"
study_results = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata)
study_results = pd.read_csv(study_results)
# Combine the data into a single dataset
combined_mouse = pd.merge(mouse_metadata, study_results,
how='outer', on='Mouse ID')
combined_mouse
```
Summary statistics
```
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
Regimens = combined_mouse.groupby(["Drug Regimen"])
Regimens
regimen_mean = Regimens["Tumor Volume (mm3)"].mean()
regimen_median = Regimens["Tumor Volume (mm3)"].median()
regimen_variance = Regimens["Tumor Volume (mm3)"].var()
regimen_std = Regimens["Tumor Volume (mm3)"].std()
regimen_sem = Regimens["Tumor Volume (mm3)"].sem()
summary_stats = pd.DataFrame({"Mean": regimen_mean, "Median":regimen_median, "Variance":regimen_variance, "Standard Deviation": regimen_std, "SEM": regimen_sem})
summary_stats
```
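The five statistics in the table above can be sanity-checked with the standard library alone; note that SEM is just the sample standard deviation divided by √n (the same convention pandas' `.sem()` uses by default). A small sketch:

```python
import math
import statistics

def summarize(values):
    """Mean, median, sample variance, sample std, and SEM of a list of numbers."""
    n = len(values)
    std = statistics.stdev(values)  # sample standard deviation (ddof=1)
    return {
        "mean": statistics.mean(values),
        "median": statistics.median(values),
        "variance": statistics.variance(values),
        "std": std,
        "sem": std / math.sqrt(n),
    }

stats = summarize([40.0, 42.0, 44.0, 46.0])
assert stats["mean"] == 43.0 and stats["median"] == 43.0
```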
Bar plots
```
# Generate a bar plot showing number of data points for each treatment regimen using pandas
regimen_data_points = Regimens.count()["Mouse ID"]
regimen_data_points
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
bar_regimen = regimen_data_points.plot(kind='bar')
plt.title("Data Points and Drug Regimen")
plt.xlabel("Drug Regimen")
plt.ylabel("Data Points")
plt.ylim(0, 240)
# Generate a bar plot showing number of data points for each treatment regimen using pyplot
data_points = regimen_data_points.values  # use the computed counts so bars and labels always match
x_axis = np.arange(len(regimen_data_points))
plt.bar(x_axis, data_points, color='r', alpha=0.5, align="center")
tick_locations = [value for value in x_axis]
plt.xticks(tick_locations, regimen_data_points.index, rotation='vertical')
plt.title("Data Points Using Pyplot")
plt.xlabel("Drug Regimen")
plt.ylabel("Data Points")
plt.xlim(-0.75, len(x_axis)-0.25)
plt.ylim(0, 250)
```
Pie Plots
```
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_count = combined_mouse.groupby("Sex")["Mouse ID"].nunique()
gender_count.head()
total_count = len(combined_mouse["Mouse ID"].unique())
total_count
gender_percent = (gender_count/total_count)*100
gp= gender_percent.round(2)
gender_df = pd.DataFrame({"Sex Count":gender_count,
"Sex Percentage":gp})
gender_df
colors = ['pink', 'lightblue']
explode = (0.1, 0)
plot = gender_df.plot.pie(y="Sex Count",figsize=(6,6), colors = colors, startangle=140, explode = explode, shadow = True, autopct="%1.1f%%")
plt.title("Percentage of Female vs. Male Mice Using Pandas")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
sex = ["Female","Male"]
sex_percent = gp.values
colors = ["pink","lightblue"]
explode = (0.1,0)
plt.pie(sex_percent, explode=explode, labels=sex, colors=colors,
autopct="%1.1f%%", shadow=True, startangle=140)
plt.axis("equal")
plt.title("Percentage of Female vs. Male Mice Using Pyplot")
plt.show()
```
Quartiles, outliers and boxplots
```
# Calculate the final tumor volume of each mouse across four of the most promising treatment regimens. Calculate the IQR and quantitatively determine if there are any potential outliers.
top_4 = combined_mouse[["Drug Regimen", "Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
top_4
top_four = top_4.sort_values("Timepoint", ascending=False)
top_four.head(4)
tumor_naftisol= top_four.loc[(top_four["Drug Regimen"] == "Naftisol") & (top_four["Timepoint"] == 45),:]
tumor_naftisol
quartiles = tumor_naftisol['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(lower_bound)
print(upper_bound)
outliers = []
for row in tumor_naftisol["Tumor Volume (mm3)"]:
    if row < lower_bound or row > upper_bound:
        outliers.append(row)
        print(f'{row} is an outlier')
if not outliers:
    print("There are no outliers for Drug Regimen Naftisol")
tumor_capomulin= top_four.loc[(top_four["Drug Regimen"] == "Capomulin") & (top_four["Timepoint"] == 45),:]
tumor_capomulin
quartiles = tumor_capomulin['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(lower_bound)
print(upper_bound)
outliers = []
for row in tumor_capomulin["Tumor Volume (mm3)"]:
    if row < lower_bound or row > upper_bound:
        outliers.append(row)
        print(f'{row} is an outlier')
if not outliers:
    print("There are no outliers for Drug Regimen Capomulin")
tumor_placebo = top_four.loc[(top_four["Drug Regimen"] == "Placebo") & (top_four["Timepoint"] == 45),:]
tumor_placebo
quartiles = tumor_placebo['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(lower_bound)
print(upper_bound)
outliers = []
for row in tumor_placebo["Tumor Volume (mm3)"]:
    if row < lower_bound or row > upper_bound:
        outliers.append(row)
        print(f'{row} is an outlier')
if not outliers:
    print("There are no outliers for Drug Regimen Placebo")
tumor_ramicane= top_four.loc[(top_four["Drug Regimen"] == "Ramicane") & (top_four["Timepoint"] == 45),:]
tumor_ramicane
quartiles = tumor_ramicane['Tumor Volume (mm3)'].quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(lower_bound)
print(upper_bound)
outliers = []
for row in tumor_ramicane["Tumor Volume (mm3)"]:
    if row < lower_bound or row > upper_bound:
        outliers.append(row)
        print(f'{row} is an outlier')
if not outliers:
    print("There are no outliers for Drug Regimen Ramicane")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
naftisol_vol = tumor_naftisol["Tumor Volume (mm3)"]
capomulin_vol = tumor_capomulin["Tumor Volume (mm3)"]
placebo_vol = tumor_placebo["Tumor Volume (mm3)"]
ramicane_vol = tumor_ramicane["Tumor Volume (mm3)"]
naf = plt.boxplot(naftisol_vol,positions = [1],widths= 0.5)
cap = plt.boxplot(capomulin_vol,positions = [2],widths = 0.5)
plac = plt.boxplot(placebo_vol,positions = [3],widths = 0.5)
ram = plt.boxplot(ramicane_vol,positions = [4],widths =0.5)
plt.title("Final tumor volume of each mouse across four of the most promising treatment regimens")
plt.ylabel("Tumor Volume")
plt.xlabel("Treatments")
plt.xticks([1, 2, 3,4], ['Naftisol', 'Capomulin', 'Placebo','Ramicane'])
plt.ylim(10, 80)
plt.show()
```
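The four near-identical quartile blocks above could be collapsed into a single helper implementing the same 1.5×IQR rule; a sketch using only the standard library (`statistics.quantiles` with `method="inclusive"` matches pandas' default linear interpolation):

```python
import statistics

def iqr_outliers(values):
    """Return the values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4, method="inclusive")
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [v for v in values if v < lower or v > upper]

# 100 sits far outside the spread of the rest of the sample
assert iqr_outliers([38, 40, 41, 42, 43, 45, 100]) == [100]
```

With this helper, each regimen's check becomes one call, e.g. `iqr_outliers(list(tumor_capomulin["Tumor Volume (mm3)"]))`.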
Line and scatter plots
```
# Generate a line plot of time point versus tumor volume for a mouse treated with Capomulin
cap_mouse = combined_mouse.loc[(combined_mouse["Mouse ID"] == "j119"),:]
cap_mouse
x_axis = cap_mouse["Timepoint"]
y_axis = cap_mouse["Tumor Volume (mm3)"]
plt.plot(x_axis,y_axis, marker ='o', color='blue')
plt.title("Time point versus tumor volume for a mouse treated with Capomulin")
plt.xlabel("Timepoint")
plt.ylabel("Tumor Volume (mm3)")
plt.show()
# Generate a scatter plot of mouse weight versus average tumor volume for the Capomulin regimen
cap_df = combined_mouse[["Mouse ID","Weight (g)", "Tumor Volume (mm3)","Drug Regimen"]]
cap_df
cap_scatter = cap_df.loc[(cap_df["Drug Regimen"] == "Capomulin"),:]
cap_scatter
cap_weight = cap_scatter.groupby("Weight (g)")["Tumor Volume (mm3)"].mean()
cap_weight
cap_weight_df = pd.DataFrame(cap_weight)
cap_weight_df
capo_final = pd.DataFrame(cap_weight_df).reset_index()
capo_final
plt.scatter(x=capo_final['Weight (g)'], y=capo_final['Tumor Volume (mm3)'])
plt.title("Mouse weight versus average tumor volume for the Capomulin regimen")
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.show()
# Calculate the correlation coefficient and linear regression model for mouse weight and average tumor volume for the Capomulin regimen
x_values = capo_final["Weight (g)"]
y_values = capo_final["Tumor Volume (mm3)"]
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y =" + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values, y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(x_values.min(), y_values.max()),fontsize=15,color="black") # anchor inside the data range so the equation is visible
plt.xlabel("Weight (g)")
plt.ylabel("Average Tumor Volume (mm3)")
plt.title("Mouse weight versus average tumor volume for the Capomulin regimen")
print(f"The correlation coefficient (r) is: {rvalue}")
plt.show()
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import numpy as np
np.set_printoptions(precision=2)
import matplotlib.pyplot as plt
import copy as cp
import sys, json, pickle
PROJECT_PATHS = ['/home/nbuckman/Dropbox (MIT)/DRL/2020_01_cooperative_mpc/mpc-multiple-vehicles/', '/Users/noambuckman/mpc-multiple-vehicles/']
for p in PROJECT_PATHS:
sys.path.append(p)
import src.traffic_world as tw
import src.multiagent_mpc as mpc
import src.car_plotting_multiple as cmplot
import src.solver_helper as helper
import src.vehicle as vehicle
i_mpc_start = 1
i_mpc = i_mpc_start
log_directory = '/home/nbuckman/mpc_results/f509-425f-20200907-153800/'
with open(log_directory + "params.json",'rb') as fp:
params = json.load(fp)
n_rounds_mpc = params['n_rounds_mpc']
number_ctrl_pts_executed = params['number_ctrl_pts_executed']
xamb_actual, uamb_actual = np.zeros((6, n_rounds_mpc*number_ctrl_pts_executed + 1)), np.zeros((2, n_rounds_mpc*number_ctrl_pts_executed))
xothers_actual = [np.zeros((6, n_rounds_mpc*number_ctrl_pts_executed + 1)) for i in range(params['n_other'])]
uothers_actual = [np.zeros((2, n_rounds_mpc*number_ctrl_pts_executed)) for i in range(params['n_other'])]
actual_t = 0
last_mpc_i = 104
for i_mpc_start in range(1,last_mpc_i+2):
    previous_mpc_file = log_directory + 'data/mpc_%02d'%(i_mpc_start - 1)
xamb_executed, uamb_executed, _, all_other_x_executed, all_other_u_executed, _, = mpc.load_state(previous_mpc_file, params['n_other'])
all_other_u_mpc = all_other_u_executed
uamb_mpc = uamb_executed
    previous_all_file = log_directory + 'data/all_%02d'%(i_mpc_start -1)
# xamb_actual_prev, uamb_actual_prev, _, xothers_actual_prev, uothers_actual_prev, _ = mpc.load_state(previous_all_file, params['n_other'], ignore_des = True)
t_end = actual_t+number_ctrl_pts_executed+1
xamb_actual[:, actual_t:t_end] = xamb_executed[:,:number_ctrl_pts_executed+1]
    uamb_actual[:, actual_t:actual_t+number_ctrl_pts_executed] = uamb_executed[:,:number_ctrl_pts_executed] # control arrays have one fewer column than state arrays
    for i in range(params['n_other']):
        xothers_actual[i][:, actual_t:t_end] = all_other_x_executed[i][:,:number_ctrl_pts_executed+1]
        uothers_actual[i][:, actual_t:actual_t+number_ctrl_pts_executed] = all_other_u_executed[i][:,:number_ctrl_pts_executed]
# print(xamb_actual[0,:t_end])
# print(" ")
    file_name = log_directory + "data/"+'all_%02d'%(i_mpc_start-1)
mpc.save_state(file_name, xamb_actual, uamb_actual, None, xothers_actual, uothers_actual, None, end_t = actual_t+number_ctrl_pts_executed+1)
actual_t += number_ctrl_pts_executed
print("Loaded initial positions from %s"%(previous_mpc_file))
print(xothers_actual[0][0,:t_end])
```
| github_jupyter |
# Stock Prediction Research Proposal
### Introduction
The main purpose of this research project is to create a stock-prediction application to be used as a day-trading aid that supports investment decisions for beginners. The target audience for the project would be a tech company to whom I would sell my model, or an implementation team to whom I would hand off the model for integration into a web app or mobile device.
### Research Design
[Day Trading](https://www.thestreet.com/investing/how-much-money-do-you-need-to-start-day-trading-15176512) has a few legal definitions outlined by the [SEC](https://www.sec.gov/files/daytrading.pdf), with some possible broader definitions being applied by your broker. For a beginning investor who won't necessarily want to commit $25,000, I will develop a model and application that can accurately predict high-yield trades over a timeframe that doesn't quite qualify as day-trading. For the purposes of this research, we will not be including options and other securities as possible transactions for the model.
>_Executing four or more day trades within five business days_
>
> #### _-SEC_
So, with a limit of four buy/sell transactions per week, I will begin the model-building process with the assumed structure that we buy a single stock at Monday morning's open and sell at Tuesday morning's open, and so on.
* First, we'd need to define our stock universe
* **Consideration 1:** Liquidity is one of the primary concerns when trading on short timespans. Just because you want to sell your stocks from what your algorithm is showing, doesn't mean that there are people buying.
* **Consideration 2:** Volatility is the next factor we'll be considering. A higher volatility offers the opportunity for more gains at an increased risk. This measure is often calculated with the [Sharpe Ratio](https://www.investopedia.com/terms/s/sharperatio.asp) but the [Sortino Ratio](https://www.investopedia.com/terms/s/sortinoratio.asp) will likely be a better metric since it doesn't penalize net positive volatility.
 * Taking both of these considerations into account, I will build the model with mid-cap stocks
* Define the historic timeframe for our predictions (2 days, 1 week, 2 weeks, etc.)
* Decompose the time series and build a Deep Learning ML model on the data
* Validate model accuracy and test live with paper trading on Alpaca
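The Sortino ratio mentioned above divides mean excess return by the downside deviation only, so upside volatility is not penalized. A hedged sketch (assuming a list of periodic returns and a constant target/risk-free rate; a real implementation would also annualize):

```python
import math

def sortino_ratio(returns, target=0.0):
    """Mean excess return over target, divided by downside deviation."""
    excess = [r - target for r in returns]
    mean_excess = sum(excess) / len(excess)
    # only returns below the target contribute to risk
    downside = [min(e, 0.0) ** 2 for e in excess]
    downside_dev = math.sqrt(sum(downside) / len(downside))
    if downside_dev == 0:
        return float('inf')  # no observations below target
    return mean_excess / downside_dev

# Upside swings don't hurt the ratio; only returns below the target do.
ratio = sortino_ratio([0.05, 0.02, -0.01, 0.04])
assert ratio > 0
```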
As we all know, predicting stock values based purely on the time series data is difficult due to the volatile nature of mid-cap stocks. So I will also include some of the following as market indicators and techniques for a multi-input deep-learning model:
* Text data, vectorized with TFIDF or similar vectorizing techniques in NLP to measure sentiment toward a company in a given time period from one of the following sources:
* Financial News Headlines
* [Twitter data](https://developer.twitter.com/en/docs)
 * [StockTwits](https://api.stocktwits.com/developers/docs)
* [Applying CNN techniques](https://arxiv.org/abs/2001.09769) is a relatively new, novel way of modeling stock data that can apparently get better prediction accuracy than traditional time series modeling techniques used for stock data.
### Data
The primary focus for data will be acquiring market data from either [Polygon](https://polygon.io/stocks?gclid=Cj0KCQjwudb3BRC9ARIsAEa-vUu_C5pdMe26WRhWcj7nSpezXzIyXIs-Dec_LxrlkweD0nFN0MUlCPMaAo_OEALw_wcB) ([Alpaca](https://alpaca.markets/docs/) uses live Polygon data and can be used for live trading and paper trading to validate the model's efficacy) or [Yahoo Finance](https://rapidapi.com/apidojo/api/yahoo-finance1), and then adding predictors to increase the accuracy of the model, via [Twitter](https://developer.twitter.com/en/docs), [StockTwits](https://api.stocktwits.com/developers/docs), news, etc.
### Conclusion
The final, intended use-case for this model is an optimized, automated investor able to make multiple 'smart' trading decisions once model integrity is established, with a goal performance of 15% portfolio growth (the average market return for both the S&P 500 and the Dow Jones Industrial Average in 2019). (**Note:** the projected market return for 2020 is around _6-10%_, and the running 10-year average stays around _10-12%_.) The model will also keep a live, ranked list of top stocks to invest in, as an additional output for investors who may not be comfortable using algorithmic trading.
This will all be done without input from the end user. Future features could allow an investor to define their own stock universe, transaction frequency limit, portfolio diversity, etc. Alpaca is the broker for trading in this model, so end users would need an account created and linked to their bank account before they can use the trading script. The final script would need to be hosted on a cloud system such as [Amazon EC2](https://aws.amazon.com/ec2/), [Ubuntu Server](https://ubuntu.com/server), [OVHcloud](https://www.ovh.co.uk/), etc.
### Additional Resources
* [SimFin](https://github.com/SimFin/simfin-tutorials)
* [Alpaca Pipeline](https://github.com/alpacahq/pipeline-live)
* [Quantitative Finance on Udemy](https://www.udemy.com/course/quantitative-finance-algorithmic-trading-in-python/)
* [Pandas Technical Analysis](https://github.com/twopirllc/pandas-ta)
* [Stock Universe](https://www.robertbrain.com/share-market/your-stock-universe.html)
| github_jupyter |
# Classification example 2 using Health Data with PyCaret
```
#Code from https://github.com/pycaret/pycaret/
# check version
from pycaret.utils import version
version()
```
# 1. Data Repository
```
import pandas as pd
url = 'https://raw.githubusercontent.com/davidrkearney/colab-notebooks/main/datasets/strokes_training.csv'
df = pd.read_csv(url, error_bad_lines=False)
df
data=df
```
# 2. Initialize Setup
```
from pycaret.classification import *
clf1 = setup(df, target = 'stroke', session_id=123, log_experiment=True, experiment_name='health2')
```
# 3. Compare Baseline
```
best_model = compare_models()
```
# 4. Create Model
```
lr = create_model('lr')
dt = create_model('dt')
rf = create_model('rf', fold = 5)
models()
models(type='ensemble').index.tolist()
#ensembled_models = compare_models(whitelist = models(type='ensemble').index.tolist(), fold = 3)
```
# 5. Tune Hyperparameters
```
tuned_lr = tune_model(lr)
tuned_rf = tune_model(rf)
```
# 6. Ensemble Model
```
bagged_dt = ensemble_model(dt)
boosted_dt = ensemble_model(dt, method = 'Boosting')
```
# 7. Blend Models
```
blender = blend_models(estimator_list = [boosted_dt, bagged_dt, tuned_rf], method = 'soft')
```
# 8. Stack Models
```
stacker = stack_models(estimator_list = [boosted_dt,bagged_dt,tuned_rf], meta_model=rf)
```
# 9. Analyze Model
```
plot_model(rf)
plot_model(rf, plot = 'confusion_matrix')
plot_model(rf, plot = 'boundary')
plot_model(rf, plot = 'feature')
plot_model(rf, plot = 'pr')
plot_model(rf, plot = 'class_report')
evaluate_model(rf)
```
# 10. Interpret Model
```
rf_nocv = create_model('rf', cross_validation=False) # rf model without CV, used for interpretation
interpret_model(rf_nocv)
interpret_model(rf_nocv, plot = 'correlation')
interpret_model(rf_nocv, plot = 'reason', observation = 12)
```
# 11. AutoML()
```
best = automl(optimize = 'Recall')
best
```
# 12. Predict Model
```
pred_holdouts = predict_model(lr)
pred_holdouts.head()
new_data = data.copy()
new_data.drop(['stroke'], axis=1, inplace=True) # drop the target column defined in setup()
predict_new = predict_model(best, data=new_data)
predict_new.head()
```
# 13. Save / Load Model
```
save_model(best, model_name='best-model')
loaded_bestmodel = load_model('best-model')
print(loaded_bestmodel)
from sklearn import set_config
set_config(display='diagram')
loaded_bestmodel[0]
from sklearn import set_config
set_config(display='text')
```
# 14. Deploy Model
```
deploy_model(best, model_name = 'best-aws', authentication = {'bucket' : 'pycaret-test'})
```
# 15. Get Config / Set Config
```
X_train = get_config('X_train')
X_train.head()
get_config('seed')
from pycaret.classification import set_config
set_config('seed', 999)
get_config('seed')
```
# 16. MLFlow UI
```
# !mlflow ui
```
| github_jupyter |
<a href="https://colab.research.google.com/github/patil-suraj/question_generation/blob/master/question_generation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip install -U transformers==3.0.0
!python -m nltk.downloader punkt
!pip3 install nlp
!pip3 install git+https://github.com/Maluuba/nlg-eval.git@master
!nlg-eval --setup
from google.colab import drive
drive.mount('/content/drive')
%cd drive/My\ Drive/question_generation/question_generation/
ls data/tweet_manual_multitask/
# Already DONE !!!!!
#!python3 prepare_data.py --task qg --model_type t5 --dataset_path data/tweet_manual_multitask --qg_format highlight_qg_format --max_source_length 512 --max_target_length 32 --train_file_name train_data_qg_hl_t5_tweet_manual.pt --valid_file_name valid_data_qg_hl_t5_tweet_manual.pt
######################
# valhalla/t5-small-qg-hl
######################
#08/15/2020 15:57:21 - INFO - transformers.configuration_utils - Configuration saved in t5-small-qg-hl/config.json
#08/15/2020 15:57:23 - INFO - transformers.modeling_utils - Model weights saved in t5-small-qg-hl/pytorch_model.bin
%%capture loggingfine
!python3 run_qg.py \
--model_name_or_path valhalla/t5-small-qg-hl \
--model_type t5 \
--tokenizer_name_or_path t5_qg_tokenizer \
--output_dir t5-small-qg-hl-15-manual \
--train_file_path data/train_data_qg_hl_t5_tweet_manual.pt \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 8 \
--learning_rate 1e-4 \
--num_train_epochs 15 \
--seed 42 \
--do_train \
--do_eval \
--evaluate_during_training \
--logging_steps 100\
--logging_dir t5-small-qg-15-manual-log\
--overwrite_output_dir
#using originally pretrained model small t5 valhalla
!python eval.py \
--model_name_or_path valhalla/t5-small-qg-hl \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--model_type t5 \
--num_beams 4 \
--max_decoding_length 32 \
--output_path hypothesis_t5-original-small-qg-hl_tweet_manual_model.txt
#evaluate valhalla small model
!nlg-eval --hypothesis=hypothesis_t5-original-small-qg-hl_tweet_manual_model.txt --references=data/tweet_dev_automatic_reference.txt --no-skipthoughts --no-glove
#using finetuned small model with 15 epochs
!python eval.py \
--model_name_or_path t5-small-qg-hl-15-manual \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--model_type t5 \
--num_beams 4 \
--max_decoding_length 32 \
--output_path hypothesis_t5-small-fine-tuned-qg-hl_t5_tweet_manual.txt
#evaluate finetuned small model
!nlg-eval --hypothesis=hypothesis_t5-small-fine-tuned-qg-hl_t5_tweet_manual.txt --references=data/tweet_dev_automatic_reference.txt --no-skipthoughts --no-glove
#valhalla/t5-base-qg-hl
###########################
%%capture loggingfine
!python3 run_qg.py \
--model_name_or_path valhalla/t5-base-qg-hl \
--model_type t5 \
--tokenizer_name_or_path t5_qg_tokenizer \
--output_dir t5-base-qg-hl-15-manual \
--train_file_path data/train_data_qg_hl_t5_tweet_manual.pt \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 8 \
--learning_rate 1e-4 \
--num_train_epochs 15 \
--seed 42 \
--do_train \
--do_eval \
--evaluate_during_training \
--logging_steps 100 \
--logging_dir t5-base-qg-15-manual-log \
--overwrite_output_dir
#using originally pretrained model base t5 valhalla
!python eval.py \
--model_name_or_path valhalla/t5-base-qg-hl \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--model_type t5 \
--num_beams 4 \
--max_decoding_length 32 \
--output_path hypothesis_original_base_base-qg-hl_tweet_manual_model.txt
#evaluate valhalla base model
!nlg-eval --hypothesis=hypothesis_original_base_base-qg-hl_tweet_manual_model.txt --references=data/tweet_dev_automatic_reference.txt --no-skipthoughts --no-glove
#using fine-tuned base model (15 epochs)
!python eval.py \
--model_name_or_path t5-base-qg-hl-15-manual \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--model_type t5 \
--num_beams 4 \
--max_decoding_length 32 \
--output_path hypothesis_fine-tuned_base_qg-hl_tweet_manual_model.txt
#evaluate fine-tuned base model
!nlg-eval --hypothesis=hypothesis_fine-tuned_base_qg-hl_tweet_manual_model.txt --references=data/tweet_dev_automatic_reference.txt --no-skipthoughts --no-glove
#valhalla/t5-small-e2e-qg
###########################
%%capture loggingfine
!python3 run_qg.py \
--model_name_or_path valhalla/t5-small-e2e-qg \
--model_type t5 \
--tokenizer_name_or_path t5_qg_tokenizer \
--output_dir t5-small-E2E-qg-15-manual \
--train_file_path data/train_data_qg_hl_t5_tweet_manual.pt \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 8 \
--learning_rate 1e-4 \
--num_train_epochs 15 \
--seed 42 \
--do_train \
--do_eval \
--evaluate_during_training \
--logging_steps 100 \
--logging_dir t5-small-E2E-qg-15-manual-log \
--overwrite_output_dir
#using ORIGINAL small valhalla/t5-small-e2e-qg (no fine-tuning)
!python eval.py \
--model_name_or_path valhalla/t5-small-e2e-qg \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--model_type t5 \
--num_beams 4 \
--max_decoding_length 32 \
--output_path hypothesis_t5-small-original-E2E-qg-hl_t5_tweet_manual.txt
#evaluate ORIGINAL E2E small model
!nlg-eval --hypothesis=hypothesis_t5-small-original-E2E-qg-hl_t5_tweet_manual.txt --references=data/tweet_dev_automatic_reference.txt --no-skipthoughts --no-glove
#using FINE-TUNED small valhalla/t5-small-e2e-qg 15 epochs
!python eval.py \
--model_name_or_path t5-small-E2E-qg-15-manual \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--model_type t5 \
--num_beams 4 \
--max_decoding_length 32 \
--output_path hypothesis_t5-small-fine-tuned-E2E-qg-hl_t5_tweet_manual.txt
#evaluate fine-tuned small model
!nlg-eval --hypothesis=hypothesis_t5-small-fine-tuned-E2E-qg-hl_t5_tweet_manual.txt --references=data/tweet_dev_automatic_reference.txt --no-skipthoughts --no-glove
#valhalla/t5-base-e2e-qg
##########################
%%capture loggingfine
!python3 run_qg.py \
--model_name_or_path valhalla/t5-base-e2e-qg \
--model_type t5 \
--tokenizer_name_or_path t5_qg_tokenizer \
--output_dir t5-base-E2E-qg-15-manual \
--train_file_path data/train_data_qg_hl_t5_tweet_manual.pt \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--per_device_train_batch_size 16 \
--per_device_eval_batch_size 16 \
--gradient_accumulation_steps 8 \
--learning_rate 1e-4 \
--num_train_epochs 15 \
--seed 42 \
--do_train \
--do_eval \
--evaluate_during_training \
--logging_steps 100 \
--logging_dir t5-base-E2E-qg-15-manual-log \
--overwrite_output_dir
#using ORIGINAL base valhalla/t5-base-e2e-qg (no fine-tuning)
!python eval.py \
--model_name_or_path valhalla/t5-base-e2e-qg \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--model_type t5 \
--num_beams 4 \
--max_decoding_length 32 \
--output_path hypothesis_t5-base-original-E2E-qg-hl_t5_tweet_manual.txt
#evaluate ORIGINAL E2E base model
!nlg-eval --hypothesis=hypothesis_t5-base-original-E2E-qg-hl_t5_tweet_manual.txt --references=data/tweet_dev_automatic_reference.txt --no-skipthoughts --no-glove
#using FINE-TUNED base valhalla/t5-base-e2e-qg 15 epochs
!python eval.py \
--model_name_or_path t5-base-E2E-qg-15-manual \
--valid_file_path data/valid_data_qg_hl_t5_tweet_manual.pt \
--model_type t5 \
--num_beams 4 \
--max_decoding_length 32 \
--output_path hypothesis_t5-base-fine-tuned-E2E-qg-hl_t5_tweet_manual.txt
#evaluate fine-tuned base model
!nlg-eval --hypothesis=hypothesis_t5-base-fine-tuned-E2E-qg-hl_t5_tweet_manual.txt --references=data/tweet_dev_automatic_reference.txt --no-skipthoughts --no-glove
```
| github_jupyter |
## Bias and Variance
For the sake of this discussion, let's assume we are looking at a regression problem. For values of
$x$ in the interval $[-1,1]$ there is a function $f(x)$ so that
$$
Y = f(x)+\epsilon
$$
where $\epsilon$ is a noise term -- say, normally distributed with mean zero and variance $\sigma^2$.
We have a set $D$ of "training data" which is a collection of $N$ points $(x_i,y_i)$ drawn from the model and
we want to try to reconstruct the function $f(x)$ so that we can accurately predict $f(x)$ for some new
test value $x$.
Given $D$, we construct a function $h_{D}$ so as to minimize the mean squared error (or loss)
$$
L = \frac{1}{N}\sum_{i=1}^{N} (h_{D}(x_i)-y_i)^2.
$$
This value is called the *training loss* or the *training error*.
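In code, the training loss is just the mean squared error of a hypothesis on the training pairs. A minimal sketch (the toy numbers here are illustrative, not from the notebook):

```python
import numpy as np

def training_loss(h, xs, ys):
    """Mean squared error of hypothesis h on the training set {(x_i, y_i)}."""
    return np.mean((h(xs) - ys) ** 2)

# Toy training set and a candidate hypothesis h(x) = -x
xs = np.array([-0.5, 0.0, 0.5])
ys = np.array([0.8, 0.1, -0.2])
print(training_loss(lambda x: -x, xs, ys))
```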
Now suppose we pick a point $x_0$ and we'd like to understand the error in our prediction $h_{D}(x_0)$.
We can ask for the expected value of the squared error $(h_D(x_0)-Y)^2$, where $Y$ is the random value
obtained from the "true" model. First we write
$$
h_D(x_0)-Y = h_D(x_0)-E(Y)+E(Y)-Y
$$
and use that $E(Y)=f(x_0)$
to get
$$
E((h_D(x_0)-Y)^2) = E((h_D(x_0)-f(x_0))^2)+E((E(Y)-Y)^2) + 2E((h_D(x_0)-E(Y))(E(Y)-Y)).
$$
Since $E(Y)-Y=-\epsilon$ is independent of $h_D(x_0)-E(Y)$ and $E(\epsilon)=0$, the third term vanishes
and the second term is $\sigma^2$. We further split the first term as
$$
h_D(x_0)-f(x_0)=(h_D(x_0)-Eh_D(x_0))+(Eh_D(x_0)-f(x_0))
$$
where $Eh_D(x_0)$ is the average prediction at $x_0$ as $h$ varies over all possible training sets.
From this we get
$$
E((h_D(x_0)-f(x_0))^2) = E((h_D(x_0)-Eh_D(x_0))^2) + E((f(x_0)-Eh_D(x_0))^2)
$$
The first of these terms has nothing to do with the "true" function $f$; it measures how much the
predicted value at $x_0$ varies as the training set varies. This term is called the *variance*.
In the second term, the expectation is
irrelevant because the term inside doesn't depend on the training set; it measures the square of the
difference between the average predicted value and the true value $f(x_0)$; it is called the *(squared) bias.*
Putting all of this together, the error in prediction at a single point $x_0$ is made up of three terms:
- the underlying variance in the process that generated the data, $\sigma^2$.
- the sensitivity of the predictive function to the choice of training set (the variance)
- the degree to which the predictive function accurately guesses the true value *on average*.
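The decomposition can be checked numerically. The sketch below (illustrative, not part of the original derivation) fits a deliberately underfitting linear model to many training sets drawn from a quadratic truth, and compares bias² + variance + σ² against a direct Monte Carlo estimate of the expected squared prediction error at a point:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: -x + x**2          # the "true" function
sigma, N, trials, x0 = 0.5, 20, 5000, 0.3

# Fit a degree-1 polynomial to many independent training sets and
# record the prediction each fit makes at x0.
preds = np.empty(trials)
for t in range(trials):
    x = rng.uniform(-1, 1, N)
    y = f(x) + rng.normal(0, sigma, N)
    b1, b0 = np.polyfit(x, y, 1)
    preds[t] = b1 * x0 + b0

variance = preds.var()                 # spread of predictions across training sets
bias_sq = (preds.mean() - f(x0)) ** 2  # squared bias of the average prediction
decomposed = bias_sq + variance + sigma**2

# Direct Monte Carlo estimate of E[(h_D(x0) - Y)^2]
direct = np.mean((preds - (f(x0) + rng.normal(0, sigma, trials))) ** 2)
print(decomposed, direct)   # the two estimates should agree up to sampling error
```

The two printed numbers agree up to Monte Carlo error, confirming the three-term decomposition.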
```
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.pipeline import make_pipeline
from numpy.random import normal,uniform
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def sample(f, sigma=.5, N=5):
"""choose N x values as a training set from the interval at random and return f(x)+n(0,sigma^2) as the data at
that training set"""
x = uniform(-1,1,size=(N,1))
y = f(x)+normal(0,sigma,size=(N,1))
return np.concatenate([x,y],axis=1)
def bias_variance_plot(degree=2,samples=20,training_set_size=10,truth=None,sigma=.5):
plt.figure(figsize=(10,10))
if not truth:
truth = lambda x: 0
pipeline = make_pipeline(PolynomialFeatures(degree=degree),LinearRegression())
_=plt.title("Fitting {} polynomials of degree {} to training sets of size {}\nsigma={}".format(samples,degree,training_set_size,sigma))
x =np.linspace(-1,1,20)
plt.ylim([-2,2])
avg = np.zeros(x.shape)
for i in range(samples):
T= sample(truth,sigma=sigma,N=training_set_size)
plt.scatter(T[:,0],T[:,1])
model=pipeline.fit(T[:,0].reshape(-1,1),T[:,1])
y = model.predict(x.reshape(-1,1))
avg += y
_=plt.plot(x,model.predict(x.reshape(-1,1)))
_=plt.plot(x,avg/samples,color='black',label='mean predictor')
_=plt.legend()
```
Here are some examples. Suppose that the underlying data comes from the parabola $f(x)=-x+x^2$ with the
standard deviation of the underlying error equal to $.5$.
First we underfit the data by using a least-squares fit of a linear equation, getting a group of fits that have high bias (the average fit doesn't match the truth well) but low variance (the solutions don't vary much with the training set).
```
bias_variance_plot(degree=1,training_set_size=20,samples=10,truth=lambda x: -x+x**2)
```
Next we plot a quadratic fit, which has low bias (in fact the average fit is exactly right) and the variance is also controlled.
```
bias_variance_plot(degree=2,training_set_size=10,samples=20,truth=lambda x: -x+x**2)
```
Finally we overfit the training data using a degree 3 polynomial. Again, the bias is good -- in the limit the
cubic fits average to the quadratic solution -- but now the variance is very high, so the fit depends strongly on the training set.
```
bias_variance_plot(degree=3,training_set_size=10,samples=20,truth=lambda x: -x+x**2)
```
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("..")
import numpy as np
import pandas as pd
pd.set_option('display.max_columns', 100)
# viz
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(rc={'figure.figsize':(12.7,10.27)})
# notebook settings
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('retina')
import os
os.environ["CUDA_VISIBLE_DEVICES"]="3"
ls /srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP/TCGA
def getTCGA(disease):
path = "/srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP/TCGA/TCGA_{}_counts.tsv.gz"
files = [path.format(d) for d in disease]
return files
def readGCP(files):
"""
Paths to count matrices.
"""
data_dict = {}
for f in files:
key = os.path.basename(f).split("_")[1]
data = pd.read_csv(f, sep='\t', index_col=0)
meta = pd.DataFrame([row[:-1] for row in data.index.str.split("|")],
columns=['ENST', 'ENSG', 'OTTHUMG', 'OTTHUMT', 'GENE-NUM', 'GENE', 'NUM', 'TYPE'])
data.index = meta['GENE']
data_dict[key] = data.T
return data_dict
def renameTCGA(data_dict, mapper):
for key in data_dict.keys():
data_dict[key] = data_dict[key].rename(mapper)
return data_dict
def uq_norm(df, q=0.75):
"""
Upper quartile normalization of GEX for samples.
"""
quantiles = df.quantile(q=q, axis=1)
norm = df.divide(quantiles, axis=0)
return norm
base = "/srv/nas/mk2/projects/pan-cancer/TCGA_CCLE_GCP"
disease = ['BRCA', 'LUAD', 'KIRC', 'THCA', 'PRAD', 'SKCM']
tcga_files = getTCGA(disease)
tcga_meta = pd.read_csv(os.path.join(base, "TCGA/TCGA_GDC_ID_MAP.tsv"), sep="\t")
tcga = readGCP(tcga_files)
# rename samples to reflect canonical IDs
tcga = renameTCGA(tcga, mapper=dict(zip(tcga_meta['CGHubAnalysisID'], tcga_meta['Sample ID'])))
# combine samples
tcga = pd.concat(tcga.values())
```
## Normalization
```
# Upper quartile normalization
tcga = uq_norm(tcga)
# log norm
tcga = tcga.transform(np.log1p)
# downsample
tcga = tcga.sample(n=15000, axis=1)
train_data.shape
tcga_meta[tcga_meta['Sample ID'] == 'TCGA-A7-A26F-01B']
```
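To make the upper quartile normalization concrete, here is a toy illustration (made-up numbers, not TCGA data) of what `uq_norm` does: each sample's counts are divided by that sample's 75th-percentile value, so samples that differ only in overall scale become comparable. The function is redefined here so the snippet is self-contained.

```python
import pandas as pd

def uq_norm(df, q=0.75):
    """Upper quartile normalization: divide each row by its own q-quantile."""
    quantiles = df.quantile(q=q, axis=1)
    return df.divide(quantiles, axis=0)

counts = pd.DataFrame({'geneA': [10.0, 100.0],
                       'geneB': [20.0, 200.0],
                       'geneC': [40.0, 400.0]},
                      index=['sample1', 'sample2'])
norm = uq_norm(counts)
# sample2 is sample1 scaled by 10x (e.g. deeper sequencing),
# so after normalization both rows are identical.
print(norm.loc['sample1'].equals(norm.loc['sample2']))  # True
```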
# Model
### Experimental Setup
```
from collections import OrderedDict
hierarchy = OrderedDict({'Disease':['BRCA', 'LUAD', 'KIRC', 'THCA', 'PRAD', 'SKCM'],
'Sample Type':['Primary Tumor', 'Solid Tissue Normal', 'Metastatic']})
class Experiment():
"""
Defines an experimental class hierarchy object.
"""
def __init__(self, meta_data, hierarchy, cases, min_samples):
self.hierarchy = hierarchy
self.meta_data = self.categorize(meta_data, self.hierarchy, min_samples)
self.cases = self.meta_data[cases].unique()
self.labels = self.meta_data['meta'].cat.codes.values.astype('int')
self.labels_dict = {key:val for key,val in enumerate(self.meta_data['meta'].cat.categories.values)}
def categorize(self, meta_data, hierarchy, min_samples):
assert isinstance(hierarchy, OrderedDict), "Argument of wrong type."
# downsample data
for key,val in hierarchy.items():
meta_data = meta_data[meta_data[key].isin(val)]
# unique meta classes
meta_data['meta'] = meta_data[list(hierarchy.keys())].apply(lambda row: ':'.join(row.values.astype(str)), axis=1)
# filter meta classes
counts = meta_data['meta'].value_counts()
keep = counts[counts > min_samples].index
meta_data = meta_data[meta_data['meta'].isin(keep)]
# generate class categories
meta_data['meta'] = meta_data['meta'].astype('category')
return meta_data
def holdout(self, holdout):
self.holdout_classes = holdout  # separate name so the method itself is not shadowed
self.holdout_samples = self.meta_data[self.meta_data['meta'].isin(holdout)]
self.meta_data = self.meta_data[~self.meta_data['meta'].isin(holdout)]
from dutils import train_test_split_case
exp = Experiment(meta_data=tcga_meta,
hierarchy=hierarchy,
cases='Case ID',
min_samples=20)
exp.holdout(holdout=['SKCM:Metastatic'])
exp.meta_data['meta'].value_counts()
exp.holdout_samples['meta'].value_counts()
# Define Train / Test sample split
target = 'meta'
train, test = train_test_split_case(exp.meta_data, cases='Case ID')
# stratification is not quite perfect but close
# in order to preserve matched samples for each case together
# in train or test set
case_counts = exp.meta_data[target].value_counts()
train[target].value_counts()[case_counts.index.to_numpy()] / case_counts
test[target].value_counts()[case_counts.index.to_numpy()] / case_counts
# split data
train_data = tcga[tcga.index.isin(train['Sample ID'])].astype(np.float16)
test_data = tcga[tcga.index.isin(test['Sample ID'])].astype(np.float16)
import torch
from torch.optim import lr_scheduler
import torch.optim as optim
from torch.autograd import Variable
#torch.manual_seed(123)
from trainer import fit
import visualization as vis
import numpy as np
cuda = torch.cuda.is_available()
print("Cuda is available: {}".format(cuda))
import torch
from torch.utils.data import Dataset
class SiameseDataset(Dataset):
"""
Train: For each sample creates randomly a positive or a negative pair
Test: Creates fixed pairs for testing
"""
def __init__(self, experiment, data, train=False):
self.train = train
self.labels = experiment.meta_data[experiment
.meta_data['Sample ID']
.isin(data.index)]['meta'].cat.codes.values.astype('int')
assert len(data) == len(self.labels)
if self.train:
self.train_labels = self.labels
self.train_data = torch.from_numpy(data.values).float()
self.labels_set = set(self.train_labels)
self.label_to_indices = {label: np.where(self.train_labels == label)[0]
for label in self.labels_set}
else:
# generate fixed pairs for testing
self.test_labels = self.labels
self.test_data = torch.from_numpy(data.values).float()
self.labels_set = set(self.test_labels)
self.label_to_indices = {label: np.where(self.test_labels == label)[0]
for label in self.labels_set}
random_state = np.random.RandomState(29)
positive_pairs = [[i,
random_state.choice(self.label_to_indices[self.test_labels[i].item()]),
1]
for i in range(0, len(self.test_data), 2)]
negative_pairs = [[i,
random_state.choice(self.label_to_indices[
np.random.choice(
list(self.labels_set - set([self.test_labels[i].item()]))
)
]),
0]
for i in range(1, len(self.test_data), 2)]
self.test_pairs = positive_pairs + negative_pairs
def __getitem__(self, index):
if self.train:
target = np.random.randint(0, 2)
img1, label1 = self.train_data[index], self.train_labels[index].item()
if target == 1:
siamese_index = index
while siamese_index == index:
siamese_index = np.random.choice(self.label_to_indices[label1])
else:
siamese_label = np.random.choice(list(self.labels_set - set([label1])))
siamese_index = np.random.choice(self.label_to_indices[siamese_label])
img2 = self.train_data[siamese_index]
else:
img1 = self.test_data[self.test_pairs[index][0]]
img2 = self.test_data[self.test_pairs[index][1]]
target = self.test_pairs[index][2]
return (img1, img2), target
def __len__(self):
if self.train:
return len(self.train_data)
else:
return len(self.test_data)
```
# Siamese Network
```
siamese_train_dataset = SiameseDataset(experiment=exp,
data=train_data,
train=True)
siamese_test_dataset = SiameseDataset(experiment=exp,
data=test_data,
train=False)
batch_size = 8
kwargs = {'num_workers': 10, 'pin_memory': True} if cuda else {'num_workers': 10}
siamese_train_loader = torch.utils.data.DataLoader(siamese_train_dataset, batch_size=batch_size, shuffle=True, **kwargs)
siamese_test_loader = torch.utils.data.DataLoader(siamese_test_dataset, batch_size=batch_size, shuffle=False, **kwargs)
# Set up the network and training parameters
from tcga_networks import EmbeddingNet, SiameseNet
from losses import ContrastiveLoss, TripletLoss
from metrics import AccumulatedAccuracyMetric
# Step 2
n_samples, n_features = siamese_train_dataset.train_data.shape
embedding_net = EmbeddingNet(n_features, 2)
# Step 3
model = SiameseNet(embedding_net)
if cuda:
model.cuda()
# Step 4
margin = 1.
loss_fn = ContrastiveLoss(margin)
lr = 1e-3
optimizer = optim.Adam(model.parameters(), lr=lr)
scheduler = lr_scheduler.StepLR(optimizer, 8, gamma=0.1, last_epoch=-1)
n_epochs = 10
# print training metrics every log_interval * batch_size
log_interval = round(len(siamese_train_dataset)/4/batch_size)
print('Active CUDA Device: GPU', torch.cuda.current_device())
print ('Available devices ', torch.cuda.device_count())
print ('Current cuda device ', torch.cuda.current_device())
train_loss, val_loss = fit(siamese_train_loader, siamese_test_loader, model, loss_fn, optimizer, scheduler,
n_epochs, cuda, log_interval)
plt.plot(range(0, n_epochs), train_loss, 'rx-', label='train')
plt.plot(range(0, n_epochs), val_loss, 'bx-', label='validation')
plt.legend()
def extract_embeddings(samples, target, model):
cuda = torch.cuda.is_available()
with torch.no_grad():
model.eval()
assert len(samples) == len(target)
embeddings = np.zeros((len(samples), 2))
labels = np.zeros(len(target))
k = 0
if cuda:
samples = samples.cuda()
if isinstance(model, torch.nn.DataParallel):
embeddings[k:k+len(samples)] = model.module.get_embedding(samples).data.cpu().numpy()
else:
embeddings[k:k+len(samples)] = model.get_embedding(samples).data.cpu().numpy()
labels[k:k+len(samples)] = target
k += len(samples)
return embeddings, labels
train_embeddings_cl, train_labels_cl = extract_embeddings(siamese_train_dataset.train_data, siamese_train_dataset.labels, model)
vis.sns_plot_embeddings(train_embeddings_cl, train_labels_cl, exp.labels_dict,
hue='meta', style='Sample Type', alpha=0.5)
plt.title('PanCancer Train: Siamese')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
val_embeddings_baseline, val_labels_baseline = extract_embeddings(siamese_test_dataset.test_data, siamese_test_dataset.labels, model)
vis.sns_plot_embeddings(val_embeddings_baseline, val_labels_baseline, exp.labels_dict,
hue='meta', style='Sample Type', alpha=0.5)
plt.title('PanCancer Test: Siamese')
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
```
| github_jupyter |
# Finviz Analytics
### What is Finviz?
FinViz aims to make market information accessible and provides a lot of data in visual snapshots, allowing traders and investors to quickly find the stock, future or forex pair they are looking for. The site provides advanced screeners, market maps, analysis, comparative tools and charts.
### Why?
Leverage near-real-time (NRT) financial stats to create custom stock screens and perspectives to identify value within volatile market conditions.
***
## Prerequisites
### finviz
Finviz is a stock screener and trading tool used for creating financial displays. Professional traders frequently use this platform to save time because Finviz allows traders and investors to quickly screen and find stocks based on set criteria.
### pandas, pandas_profiling
Pandas needs no introduction. Pandas_profiling creates beautiful html data profiles.
### nest_asyncio
nest_asyncio allows nested asyncio event loops, which is needed to run the screener's asynchronous calls inside an interactive environment such as Jupyter.
```
import pandas as pd
import logging
from finviz.screener import Screener
log = logging.getLogger()
console = logging.StreamHandler()
format_str = '%(asctime)s\t%(levelname)s -- %(processName)s -- %(message)s'
console.setFormatter(logging.Formatter(format_str))
log.addHandler(console)
log.setLevel(logging.INFO)
```
***
## Load environment and runtime variables
```
'''
MODULE is used to identify and segment runtime and environment variables from config files.
'''
MODULE = 'ldg_finviz'
'''
Load configuration files from \config. Instantiate variables named after the config files.
'''
import os
d = os.getcwd()
df = d + '\\config\\'
try:
for i in os.listdir(df):
k = i[:-4]
v = open(df + i).read()
v = eval(v)
exec("%s=%s" % (k,v))
log.info('loaded: ' + k)
except:
log.error('issue encountered with eval(data): ' + str(v))
def get_data_finviz(generate_data_profile=False):
'''
Download FinViz 15min delayed stock data.
* filter - Filter stock universe using filters variable.
* Select datasets to download using the map_api_fv_table_key.config
* Dataset options include:
'Overview': '111',
'Valuation': '121',
'Ownership': '131',
'Performance': '141',
'Custom': '152',
'Financial': '161',
'Technical': '171'
* Refer to /docs for dataset details.
Output data in .csv format to landing.
'''
import nest_asyncio
nest_asyncio.apply()
#load variables
ldg_path = env_var.get('ldg_path')
filters = api_params['filter']
#loop through datasets to download from Screener & write to file.
for i in api_params['datasets']:
log.info('downloading:' + i.get('dataset').lower())
stock_list = Screener(filters=filters, table= i.get('dataset'))
stock_list.to_csv(ldg_path + 'stock_screener_' + i.get('dataset').lower() + '.csv')
if generate_data_profile == True:
log.info('begin pandas profile.html generation')
generate_docs('ldg_path')
def get_transform(target):
'''
Get transform from transform.cfg for target dataset
#returns a list of dicts with transform logic in format {field:value, fn:value}
'''
lst = [i.get(target) for i in transform['root'] if i.get(target) != None]
if lst != []: lst = lst[0]
return lst
def apply_transform(df, transform, target):
'''
Apply a list of transformations to the dataframe.
'''
import numpy as np
log.info('begin transforms for: ' + target)
try:
for t in transform:
#get function to apply
fn = t.get('fn')
#get reusable function from map_generic_fn if fn starts with $
if fn[0] == '$': fn = map_generic_fn.get(fn)
#get field or fields to update
field = t.get('field')
if field[0] == '[':field = eval(field)
#log.info('apply transform: {field, function} ' + str(field) + ' ' + fn)
#apply transformation
df[field] = eval(fn)
except:
log.error('error encountered with table:' + str(target) + ' field:' + str(field) + ' fn:' + str(fn) )
log.info('end transforms for: ' + target)
return
def normalize_data():
'''
Perform preprocessing and copy data to staging area.
Preprocessing steps are included in transform.cfg and typically include:
- metadata validation / data contract
- preliminary schema normalization
- data type validation & associated cleansing.
'''
ldg_path = env_var.get('ldg_path')
stg_path = env_var.get('stg_path')
try:
#for each dataset in map_landing_dataset_code
for i in map_landing_dataset_code.get(MODULE):
#load file for meta contract and data type conversion
file = ldg_path + i.get('file')
code = i.get('code')
df = pd.read_csv(file)
trns = get_transform(code)
if trns != []:
#apply transformation to dataframe
apply_transform(df,trns,code)
df.to_csv(stg_path + i.get('file'))
#TODO: normalize column names, lower() with underscores...
except:
log.error('error in normalize_data()')
def generate_docs(path):
'''
generate pandas_profile.html reports
'''
from pandas_profiling import ProfileReport
data_path = env_var.get(path)
try:
for i in map_landing_dataset_code[MODULE]:
file = i.get('file')
print(file)
df = pd.read_csv(data_path + file)
profile = ProfileReport(df, title= 'Profile: ' + file + ' (Landing)')
fpdf = data_path + 'profile_' + file[0:-4] + '.pdf'
profile.to_file(fpdf)
#convert_file_format(fhtml,fpdf)
except:
log.error('error in generating pandas_profile.html')
def convert_file_format(fromfile, tofile):
import pdfkit as pk
pk.from_file(fromfile,tofile)
def convert_unit(u):
if len(u) != 1:
ua = u
u = str(u[-1])
val = str(ua[0:-1]).replace('.','')
else: val = ''
u=u.lower()
if u == 'm':
val += '0000'
elif u == 'b':
val += '0000000'
return val
```
# Download & stage data
```
log.info('begin downloading finviz')
get_data_finviz(generate_data_profile=True)
log.info('end downloading finviz')
log.info('begin finviz preprocessing')
normalize_data()
log.info('end finviz preprocessing')
def enrich_data_finviz():
import numpy as np
ENF = err_var.get('no_param')
stg_path = env_var.get('stg_path')
fown =stg_path + data_var[MODULE].get('stock_ownership',ENF)
fovr =stg_path + data_var[MODULE].get('stock_overview',ENF)
key = data_var[MODULE].get('stock_key',ENF)
if (fown==ENF) or (fovr== ENF) or (key == ENF):
e = 'missing file or key name'
log_diagnostics('enrich_data_finviz',e,env_var)
return
#generate additional attributes
df_own = pd.read_csv(fown).reset_index()
df_own.set_index(key,inplace=True)
df_view = pd.read_csv(fovr).reset_index()
df_view.set_index(key,inplace=True)
df = pd.merge(df_own,df_view,how='inner',left_index = True, right_index=True)
df.reset_index(inplace=True)
target = 'stg_finviz_summary'
transform = get_transform(target)
apply_transform(df,transform,target)
df = df[['Ticker','eps','earnings','P/E','e/p','Outstanding']]
df.to_csv(stg_path + 'stock_screener_summary.csv')
enrich_data_finviz()
!pip uninstall -y pdfkit
!pip uninstall -y weasyprint
```
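For reference, the `{field, fn}` transform entries consumed by `apply_transform` are strings that get `eval`'d with `df` in scope. A minimal standalone illustration of that pattern (the field names and functions below are hypothetical, not taken from the real transform.cfg):

```python
import pandas as pd

# Hypothetical transform entries in the {field, fn} shape apply_transform expects.
transforms = [
    {'field': 'Ticker', 'fn': "df['Ticker'].str.upper()"},
    {'field': 'P/E',    'fn': "pd.to_numeric(df['P/E'], errors='coerce')"},
]

df = pd.DataFrame({'Ticker': ['aapl', 'msft'], 'P/E': ['28.4', '-']})
for t in transforms:
    # fn is evaluated with df in scope, mirroring apply_transform's eval(fn)
    df[t['field']] = eval(t['fn'])

print(df['Ticker'].tolist())  # ['AAPL', 'MSFT']
```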
| github_jupyter |
# Logistic regression example
### Dr. Tirthajyoti Sarkar, Fremont, CA 94536
---
This notebook demonstrates solving a logistic regression problem of predicting hypothyroidism with **Scikit-learn** and **Statsmodels** libraries.
The dataset is taken from UCI ML repository.
<br>Here is the link: https://archive.ics.uci.edu/ml/datasets/Thyroid+Disease
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
```
### Read the dataset
```
names = 'response age sex on_thyroxine query_on_thyroxine antithyroid_medication thyroid_surgery query_hypothyroid query_hyperthyroid pregnant \
sick tumor lithium goitre TSH_measured TSH T3_measured \
T3 TT4_measured TT4 T4U_measured T4U FTI_measured FTI TBG_measured TBG'
names = names.split(' ')
#!wget https://raw.githubusercontent.com/tirthajyoti/Machine-Learning-with-Python/master/Datasets/hypothyroid.csv
#!mkdir Data
#!mv hypothyroid.csv Data/
df = pd.read_csv('Data/hypothyroid.csv',index_col=False,names=names,na_values=['?'])
df.head()
to_drop=[]
for c in df.columns:
if 'measured' in c or 'query' in c:
to_drop.append(c)
to_drop
to_drop.append('TBG')
df.drop(to_drop,axis=1,inplace=True)
df.head()
```
### Let us see the basic statistics on the dataset
```
df.describe().T
```
### Are any data points missing? We can check using the `df.isna()` method
The `df.isna()` method gives back a full DataFrame of Boolean values: True for missing data, False for data that is present. We can use `sum()` on that DataFrame to count the number of missing values per column.
```
df.isna().sum()
```
### We can use `df.dropna()` method to drop those missing rows
```
df.dropna(inplace=True)
df.shape
```
### Creating a transformation function to convert `+` or `-` responses to 1 and 0
```
def class_convert(response):
if response=='hypothyroid':
return 1
else:
return 0
df['response']=df['response'].apply(class_convert)
df.head()
df.columns
```
### Exploratory data analysis
```
for var in ['age','TSH','T3','TT4','T4U','FTI']:
sns.boxplot(x='response',y=var,data=df)
plt.show()
sns.pairplot(data=df[df.columns[1:]],diag_kws={'edgecolor':'k','bins':25},plot_kws={'edgecolor':'k'})
plt.show()
```
### Create dummy variables for the categorical variables
```
df_dummies = pd.get_dummies(data=df)
df_dummies.shape
df_dummies.sample(10)
```
### Test/train split
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(df_dummies.drop('response',axis=1),
df_dummies['response'], test_size=0.30,
random_state=42)
print("Training set shape",X_train.shape)
print("Test set shape",X_test.shape)
```
### Using `LogisticRegression` estimator from Scikit-learn
We are using L2 regularization, which is the default
```
from sklearn.linear_model import LogisticRegression
clf1 = LogisticRegression(penalty='l2',solver='newton-cg')
clf1.fit(X_train,y_train)
```
### Intercept, coefficients, and score
```
clf1.intercept_
clf1.coef_
clf1.score(X_test,y_test)
```
### For `LogisticRegression` estimator, there is a special `predict_proba` method which computes the raw probability values
```
prob_threshold = 0.5
prob_df=pd.DataFrame(clf1.predict_proba(X_test[:10]),columns=['Prob of NO','Prob of YES'])
prob_df['Decision']=(prob_df['Prob of YES']>prob_threshold).apply(int)
prob_df
y_test[:10]
```
### Classification report, and confusion matrix
```
from sklearn.metrics import classification_report, confusion_matrix
print(classification_report(y_test, clf1.predict(X_test)))
pd.DataFrame(confusion_matrix(y_test, clf1.predict(X_test)),columns=['Predict-YES','Predict-NO'],index=['YES','NO'])
```
### Using `statsmodels` library
```
import statsmodels.formula.api as smf
import statsmodels.api as sm
df_dummies.columns
```
### Create a 'formula' in the same style as in R language
```
formula = 'response ~ ' + '+'.join(df_dummies.columns[1:])
formula
```
### Fit a GLM (Generalized Linear model) with this formula and choosing `Binomial` as the family of function
```
model = smf.glm(formula = formula, data=df_dummies, family=sm.families.Binomial())
result=model.fit()
```
### `summary` method shows a R-style table with all kind of statistical information
```
print(result.summary())
```
### The `predict` method computes probability for the test dataset
```
result.predict(X_test[:10])
```
### To create binary predictions, you have to apply a threshold probability and convert the booleans into integers
```
y_pred=(result.predict(X_test)>prob_threshold).apply(int)
print(classification_report(y_test,y_pred))
pd.DataFrame(confusion_matrix(y_test, y_pred),columns=['Predict-YES','Predict-NO'],index=['YES','NO'])
```
### A smaller model with only first few variables
We saw that the majority of variables in the logistic regression model have very high p-values and are therefore not statistically significant. We create another, smaller model removing those variables.
```
formula = 'response ~ ' + '+'.join(df_dummies.columns[1:7])
formula
model = smf.glm(formula = formula, data=df_dummies, family=sm.families.Binomial())
result=model.fit()
print(result.summary())
y_pred=(result.predict(X_test)>prob_threshold).apply(int)
print(classification_report(y_test, y_pred))
pd.DataFrame(confusion_matrix(y_test, y_pred),columns=['Predict-NO','Predict-YES'],index=['NO','YES'])
```
### How do the probabilities compare between `Scikit-learn` and `Statsmodels` predictions?
```
sklearn_prob = clf1.predict_proba(X_test)[...,1][:10]
statsmodels_prob = result.predict(X_test[:10])
prob_comp_df=pd.DataFrame(data={'Scikit-learn Prob':list(sklearn_prob),'Statsmodels Prob':list(statsmodels_prob)})
prob_comp_df
```
### Coefficient interpretation
What is the interpretation of the coefficient value for `age` and `FTI`?
- With every one year increase in age, the log odds of hypothyroidism **increase** by 0.0248, i.e. the odds of hypothyroidism increase by a factor of exp(0.0248) = 1.025, almost 2.5%.
- With every one unit increase in FTI, the log odds of hypothyroidism **decrease** by 0.1307, i.e. the odds of hypothyroidism decrease by a factor of exp(-0.1307) = 0.877, almost 12.25%.
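As a quick numeric check (a minimal sketch, with the coefficient values 0.0248 and -0.1307 assumed to be read off `result.summary()`), the odds ratios follow directly from exponentiating the coefficients:

```python
import numpy as np

# coefficients as read off the fitted GLM summary (assumed values)
coef_age, coef_fti = 0.0248, -0.1307

odds_ratio_age = np.exp(coef_age)   # ~1.025 -> odds increase ~2.5% per year of age
odds_ratio_fti = np.exp(coef_fti)   # ~0.877 -> odds decrease ~12.3% per unit of FTI

print(round(float(odds_ratio_age), 3), round(float(odds_ratio_fti), 3))
```

With the fitted model at hand, `np.exp(result.params)` gives the same odds ratios for every term at once.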
```
import pandas as pd
import numpy as np
import tensorflow as tf
from datetime import datetime
import matplotlib.pyplot as plt
import seaborn as sns
features = pd.read_csv('../Data/training_set_features.csv')
labels = pd.read_csv('../Data/training_set_labels.csv')
df = pd.merge(features, labels, on='respondent_id', how='inner')
df = df.drop(columns=['employment_occupation', 'employment_industry', 'health_insurance', 'respondent_id'])
seas_df = df.drop(columns=['h1n1_concern',
'h1n1_knowledge',
'doctor_recc_h1n1',
'opinion_h1n1_vacc_effective',
'opinion_h1n1_risk',
'opinion_h1n1_sick_from_vacc',
'h1n1_vaccine'])
h1n1_df = df.drop(columns=['doctor_recc_seasonal',
'opinion_seas_vacc_effective',
'opinion_seas_risk',
'opinion_seas_sick_from_vacc',
'seasonal_vaccine'])
categorical_columns = [
'sex',
'hhs_geo_region',
'census_msa',
'race',
'age_group',
'behavioral_face_mask',
'behavioral_wash_hands',
'behavioral_antiviral_meds',
'behavioral_outside_home',
'behavioral_large_gatherings',
'behavioral_touch_face',
'behavioral_avoidance',
'health_worker',
'child_under_6_months',
'chronic_med_condition',
'education',
'marital_status',
'employment_status',
'rent_or_own',
'doctor_recc_h1n1',
'doctor_recc_seasonal',
'income_poverty'
]
numerical_columns = [
'household_children',
'household_adults',
'h1n1_concern',
'h1n1_knowledge',
'opinion_h1n1_risk',
'opinion_h1n1_vacc_effective',
'opinion_h1n1_sick_from_vacc',
'opinion_seas_vacc_effective',
'opinion_seas_risk',
'opinion_seas_sick_from_vacc',
]
# Recode binary categorical columns to Yes/No labels
for column in categorical_columns:
    df.loc[df[column] == 1, column] = 'Yes'
    df.loc[df[column] == 0, column] = 'No'
```
## Deal with NAs
```
((df.isnull().sum() / len(df)) * 100).sort_values()
for column in numerical_columns:
df[column] = df[column].fillna(df[column].mean())
df = df.dropna()
```
## Initial Run
```
X = df.drop(columns=['h1n1_vaccine', 'seasonal_vaccine'])
y = df[['h1n1_vaccine', 'seasonal_vaccine']]
y_h1n1 = df[['h1n1_vaccine']]
y_seas = df[['seasonal_vaccine']]
```
#### Categorical
```
#Get Binary Data for Categorical Variables
cat_df = X[categorical_columns]
recat_df = pd.get_dummies(data=cat_df)
```
#### Numerical
```
num_df = X[numerical_columns]
from sklearn.preprocessing import StandardScaler
#Scale Numerical Data
scaler = StandardScaler()
scaled_num = scaler.fit_transform(num_df)
scaled_num_df = pd.DataFrame(scaled_num, index=num_df.index, columns=num_df.columns)
encoded_df = pd.concat([recat_df, scaled_num_df], axis=1)
encoded_df
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(encoded_df, y, test_size=0.3, random_state=42)
X_train = np.asarray(X_train)
X_test = np.asarray(X_test)
y_train = np.asarray(y_train)
y_test = np.asarray(y_test)
X = np.asarray(encoded_df)
```
# Neural Network
```
from tensorflow import keras
model = keras.Sequential([
keras.layers.Dense(60, activation='selu', input_dim=84),
keras.layers.Dense(100, activation='relu'),
keras.layers.Dense(200, activation='selu'),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dense(5, activation='swish'),
# two independent binary labels (h1n1, seasonal) -> one sigmoid output per label
keras.layers.Dense(2, activation='sigmoid')
])
model.compile(optimizer='adam',
              # multi-label targets: per-label binary cross-entropy, not categorical
              loss=tf.losses.BinaryCrossentropy(),
metrics=['accuracy'])
history = model.fit(
X_train,
y_train,
batch_size=200,
epochs=5000,
validation_data=(X_test, y_test)
)
y_true = y_test
y_predicted = model.predict(X_test)
y_predicted_binary = np.where(y_predicted > 0.5, 1, 0)
from sklearn.metrics import roc_auc_score
roc_auc_score(y_true, y_predicted)
```
# Random Forest
```
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
model = RandomForestRegressor()
model.fit(X_train, y_train)
y_predicted = model.predict(X_test)
r2_score(y_test, y_predicted)
from sklearn.model_selection import RandomizedSearchCV
# Number of trees in random forest
n_estimators = [int(x) for x in np.linspace(start = 200, stop = 2000, num = 10)]
# Number of features to consider at every split
max_features = ['auto', 'sqrt']
# Maximum number of levels in tree
max_depth = [int(x) for x in np.linspace(10, 110, num = 11)]
max_depth.append(None)
# Minimum number of samples required to split a node
min_samples_split = [2, 5, 10]
# Minimum number of samples required at each leaf node
min_samples_leaf = [1, 2, 4]
# Method of selecting samples for training each tree
bootstrap = [True, False]
# Create the random grid
random_grid = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth,
'min_samples_split': min_samples_split,
'min_samples_leaf': min_samples_leaf,
'bootstrap': bootstrap}
print(random_grid)
rf = RandomForestRegressor()
rf_random = RandomizedSearchCV(estimator = rf, param_distributions = random_grid, n_iter = 100, cv = 3, verbose=2, random_state=42, n_jobs = -1)
model = rf_random
model.fit(X_train, y_train)
model.best_params_
def evaluate(model, X_test, y_test):
predictions = model.predict(X_test)
errors = abs(predictions - y_test)
    # note: MAPE is ill-defined when y_test contains zeros (as binary labels do)
    mape = 100 * np.mean(errors / y_test)
accuracy = 100 - mape
print('Model Performance')
print('Average Error: {:0.4f} degrees.'.format(np.mean(errors)))
print('Accuracy = {:0.2f}%.'.format(accuracy))
return accuracy
base_model = RandomForestRegressor(n_estimators = 10, random_state = 42)
base_model.fit(X_train, y_train)
base_accuracy = evaluate(base_model, X_test, y_test)
best_random = rf_random.best_estimator_
random_accuracy = evaluate(best_random, X_test, y_test)
y_predicted = best_random.predict(X_test)
model = best_random
from sklearn.metrics import roc_auc_score
roc_auc_score(y_test, y_predicted)
```
## Submission Data
```
test_data = pd.read_csv('../Data/test_set_features.csv')
df_full = test_data
df = df_full.drop(columns=['employment_occupation', 'employment_industry', 'health_insurance', 'respondent_id'])
categorical_columns = [
'sex',
'hhs_geo_region',
'census_msa',
'race',
'age_group',
'behavioral_face_mask',
'behavioral_wash_hands',
'behavioral_antiviral_meds',
'behavioral_outside_home',
'behavioral_large_gatherings',
'behavioral_touch_face',
'behavioral_avoidance',
'health_worker',
'child_under_6_months',
'chronic_med_condition',
'education',
'marital_status',
'employment_status',
'rent_or_own',
'doctor_recc_h1n1',
'doctor_recc_seasonal',
'income_poverty'
]
numerical_columns = [
'household_children',
'household_adults',
'h1n1_concern',
'h1n1_knowledge',
'opinion_h1n1_risk',
'opinion_h1n1_vacc_effective',
'opinion_h1n1_sick_from_vacc',
'opinion_seas_vacc_effective',
'opinion_seas_risk',
'opinion_seas_sick_from_vacc',
]
# Recode binary categorical columns to Yes/No labels
for column in categorical_columns:
    df.loc[df[column] == 1, column] = 'Yes'
    df.loc[df[column] == 0, column] = 'No'
for column in numerical_columns:
df[column] = df[column].fillna(df[column].mean())
df['health_worker'] = df['health_worker'].fillna(0)
df['behavioral_face_mask'] = df['behavioral_face_mask'].fillna(0)
df['behavioral_wash_hands'] = df['behavioral_wash_hands'].fillna(0)
df['behavioral_antiviral_meds'] = df['behavioral_antiviral_meds'].fillna(0)
df['behavioral_outside_home'] = df['behavioral_outside_home'].fillna(0)
df['behavioral_large_gatherings'] = df['behavioral_large_gatherings'].fillna(0)
df['behavioral_touch_face'] = df['behavioral_touch_face'].fillna(0)
df['behavioral_avoidance'] = df['behavioral_avoidance'].fillna(0)
df['child_under_6_months'] = df['child_under_6_months'].fillna(0)
df['chronic_med_condition'] = df['chronic_med_condition'].fillna(0)
df['marital_status'] = df['marital_status'].fillna('Not Married')
df['rent_or_own'] = df['rent_or_own'].fillna('Rent')
df['education'] = df['education'].fillna('Some College')
df['employment_status'] = df['employment_status'].fillna('Employed')
df['doctor_recc_h1n1'] = df['doctor_recc_h1n1'].fillna(1)
df['doctor_recc_seasonal'] = df['doctor_recc_seasonal'].fillna(1)
df['income_poverty'] = df['income_poverty'].fillna('<= $75,000, Above Poverty')
X = df
#Get Binary Data for Categorical Variables
cat_df = X[categorical_columns]
recat_df = pd.get_dummies(data=cat_df)
num_df = X[numerical_columns]
from sklearn.preprocessing import StandardScaler
#Scale Numerical Data
scaler = StandardScaler()
scaled_num = scaler.fit_transform(num_df)
scaled_num_df = pd.DataFrame(scaled_num, index=num_df.index, columns=num_df.columns)
encoded_df = pd.concat([recat_df, scaled_num_df], axis=1)
X = np.asarray(encoded_df)
y = model.predict(X)
y_df = pd.DataFrame(y, columns=['h1n1_vaccine', 'seasonal_vaccine'])
results = pd.concat([df_full, y_df], axis=1)
results = results[['respondent_id', 'h1n1_vaccine', 'seasonal_vaccine']]
results.to_csv('../Submissions/Submission 6.29.21.csv', index=False)
from sklearn.feature_selection import SelectFromModel
sel = SelectFromModel(RandomForestRegressor(n_estimators= 800,
min_samples_split= 2,
min_samples_leaf= 4,
max_features= 'sqrt',
max_depth= 20,
bootstrap= False))
sel.fit(X_train, y_train)
selected_feat= encoded_df.columns[(sel.get_support())]
len(selected_feat)
selected_feat
pd.Series(sel.estimator_.feature_importances_.ravel()).hist()
for x in selected_feat:
print(x)
```
## Automatic Ticket Assignment
One of the key activities of any IT function is to ensure there is no
impact to Business operations. <b>IT leverages the Incident Management process to achieve this
objective.</b> An incident is an unplanned interruption to an IT service, or a reduction in the
quality of an IT service, that affects the Users and the Business. <b><i>The main goal
of the Incident Management process is to provide a quick fix / workaround or a solution that resolves the interruption and restores the service to its full capacity, ensuring no business impact.</i></b>
In most of the organizations, incidents are created by various Business and IT Users, End Users/ Vendors if they have access to ticketing systems, and from the integrated monitoring
systems and tools. <b>Assigning the incidents to the appropriate person or unit in the support team has critical importance to provide improved user satisfaction while ensuring better allocation of support resources.</b>
<i>Manual assignment of incidents is time consuming and requires human effort. Mistakes occur due to human error, and resources are consumed ineffectively because of misaddressing. Moreover, manual assignment increases response and resolution times, which results in user satisfaction deterioration / poor customer service.</i>
#### <b>Business Domain Value:</b>
In the support process, incoming incidents are analyzed and assessed by organization’s support teams to fulfill the request. In many organizations, better allocation and effective usage of the valuable support resources will directly result in substantial cost savings.
Currently the incidents are created by various stakeholders (Business Users, IT Users and Monitoring Tools) within the IT Service Management Tool and are assigned to Service Desk teams (L1 / L2 teams). This team reviews the incidents for correct ticket categorization and priority, and then carries out an initial diagnosis to see if they can resolve them. Around ~54% of the incidents are resolved by L1 / L2 teams. In case L1 / L2 is unable to resolve an incident, they escalate / assign the ticket to Functional teams from Applications and Infrastructure (L3 teams). Some portion of incidents is directly assigned to L3 teams by either Monitoring tools or Callers / Requestors. L3 teams carry out a detailed diagnosis and resolve the incidents. Around ~56%
of incidents are resolved by Functional / L3 teams. In case vendor support is needed, they reach out to the vendor to work towards incident closure.
L1 / L2 needs to spend time reviewing Standard Operating Procedures (SOPs) before assigning to Functional teams (a minimum of ~25-30% of incidents need an SOP review before ticket assignment). 15 min is spent on the SOP review for each incident, and a minimum of ~1 FTE of effort is needed just for incident assignment to L3 teams. During the process of incident assignment by L1 / L2 teams to functional groups, there were multiple instances of incidents being assigned to the wrong functional groups: around ~25% of incidents are wrongly assigned, and additional effort is then needed from the Functional teams to re-assign them to the right groups. During this process, some incidents sit in the queue and are not addressed in a timely manner, resulting in poor customer service.
## Objective:
### Build a Multi-Class classifier that can classify the tickets by analysing text
Guided by powerful AI techniques, classifying incidents to the right functional groups can help organizations reduce issue resolution time and focus on more productive tasks. In the previous milestone we already covered data cleaning, preprocessing, and exploratory data analysis.
Milestone 2: Test the Model, Fine-tuning and Repeat
1. Test the model and report as per evaluation metrics
2. Try different models
3. Try different evaluation metrics
4. Set different hyper parameters, by trying different optimizers, loss functions, epochs, learning rate, batch size, checkpointing, early stopping etc..for these models to fine-tune them
5. Report evaluation metrics for these models along with your observation on how changing different hyper parameters leads to change in the final evaluation metric.
---
### <u>Imports and Configurations</u>
Section to import all necessary packages. Install the libraries which are not included in the Anaconda distribution by default, using the pypi channel or conda-forge:
**``!pip install ftfy wordcloud goslate spacy plotly cufflinks gensim pyLDAvis``**<br/>
**``conda install -c conda-forge ftfy wordcloud goslate spacy plotly cufflinks gensim pyLDAvis``**
```
# Utilities
from time import time
from PIL import Image
from pprint import pprint
from zipfile import ZipFile
import os, sys, itertools, re, calendar
import warnings, pickle, string, timestring
from IPython.display import IFrame
from ftfy import fix_encoding, fix_text, badness
# Translation APIs
from goslate import Goslate # Unofficial free Google Translate API client
# Numerical calculation
import numpy as np
# Data Handling
import pandas as pd
# Data Visualization
import matplotlib.pyplot as plt
import seaborn as sns
import cufflinks as cf
import plotly as py
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs,init_notebook_mode,plot,iplot
import pyLDAvis
import pyLDAvis.gensim
# Sequential Modeling
import keras.backend as K
from keras.datasets import imdb
from keras.models import Sequential, Model
from keras.layers.merge import Concatenate
from keras.layers import Input, Dropout, Flatten, Dense, Embedding, LSTM, GRU
from keras.layers import BatchNormalization, TimeDistributed, Conv1D, MaxPooling1D
from keras.constraints import max_norm, unit_norm
from keras.preprocessing.text import Tokenizer, text_to_word_sequence
from keras.preprocessing.sequence import pad_sequences
from keras.callbacks import EarlyStopping, ModelCheckpoint
# Traditional Modeling
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, TfidfTransformer
from sklearn.svm import SVC, LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
# Topic Modeling
import gensim
import gensim.corpora as corpora
from gensim.utils import simple_preprocess
from gensim.parsing import preprocessing
from gensim.test.utils import common_texts
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.models.phrases import Phraser
from gensim.models import Phrases, CoherenceModel
# Tools & Evaluation metrics
from sklearn.metrics import confusion_matrix, classification_report, auc
from sklearn.metrics import roc_curve, accuracy_score, precision_recall_curve
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV
from collections import Counter
from imblearn.under_sampling import RandomUnderSampler
# NLP toolkits
import spacy
import nltk
from nltk import tokenize
from nltk.corpus import stopwords
# Configure for any default setting of any library
nltk.download('stopwords')
warnings.filterwarnings('ignore')
plt.style.use('ggplot')
pyLDAvis.enable_notebook()
init_notebook_mode(connected=True)
cf.go_offline()
%matplotlib inline
```
### <u>Set the working directory</u>
Mount the drive and set the project path as the current working directory when running in Google Colab. No changes are required when running on a local PC.
```
# Block which runs on both Google Colab and Local PC without any modification
if 'google.colab' in sys.modules:
project_path = "/content/drive/My Drive/Colab Notebooks/DLCP/Capstone-NLP/"
# Google Colab lib
from google.colab import drive
# Mount the drive
drive.mount('/content/drive/', force_remount=True)
sys.path.append(project_path)
%cd $project_path
# Let's look at the sys path
print('Current working directory', os.getcwd())
```
### <u>Create Word Embeddings</u>
We observed poor performance in the 1st milestone, which motivates us to create our own word embeddings. Let's load the preprocessed dataset and use a Gensim model to create Word2Vec embeddings.
Word embedding is one of the most important techniques in natural language processing(NLP), where words are mapped to vectors of real numbers. Word embedding is capable of capturing the meaning of a word in a document, semantic and syntactic similarity, relation with other words.
The word2vec algorithms include skip-gram and CBOW models, using either hierarchical softmax or negative sampling.

```
# Load the preprocessed pickle dataset
with open('preprocessed_ticket.pkl','rb') as f:
ticket = pickle.load(f)
# Load the spaCy English model used for lemmatization (model name assumed; adjust if a different one is installed)
nlp = spacy.load('en_core_web_sm')
# Function to create the tokenized sentence
def tokenize_sentences(sentence):
    doc = nlp(sentence)
    return [token.lemma_ for token in doc if token.lemma_ != '-PRON-' and not token.is_stop]
sentence_stream=[]
for sent in ticket.Summary.values.tolist():
sentence_stream.append(tokenize_sentences(sent))
# Create the Bigram and Trigram models
bigram = Phrases(sentence_stream, min_count=2, threshold=2)
trigram = Phrases(bigram[sentence_stream], min_count=2, threshold=1)
bigram_phraser = Phraser(bigram)
trigram_phraser = Phraser(trigram)
ngram_sentences=[]
for sent in sentence_stream:
tokens_ = bigram_phraser[sent]
#print("Bigrams Tokens:\t", tokens_)
tokens_ = trigram_phraser[tokens_]
ngram_sentences.append(tokens_)
#Serialize bigram and trigram for future
bigram_phraser.save('bigram_mdl_14_03_2020.pkl')
trigram_phraser.save('trigram_mdl_14_03_2020.pkl')
# Create the tagged documents
documents = [TaggedDocument(words=doc, tags=[i]) for i, doc in enumerate(ngram_sentences)]
print("Length of Tagged Documents:",len(documents))
print("Tagged Documents[345]:",documents[345])
# Build the Word2Vec model
max_epochs = 100
vec_size = 300
alpha = 0.025
model = Doc2Vec(vector_size=vec_size,window=2,
alpha=alpha,
min_alpha=0.00025,
min_count=2,
dm =1)
model.build_vocab(documents)
for epoch in range(max_epochs):
model.train(documents,
total_examples=model.corpus_count,
epochs=model.iter)
# decrease the learning rate
model.alpha -= 0.0002
# fix the learning rate, no decay
model.min_alpha = model.alpha
model.save("d2v_inc_model.mdl")
print("Model Saved")
```
**Comments**:
Word embeddings are generated from the corpus of our tickets dataset and serialized for further use.
### <u>Load the dataset</u>
We observed poor performance in the 1st milestone, which motivates us to introduce 2 more attributes:
- **Shift**: Working shift of the support associate in which the ticket was received OR the failure occurred
- **Lines**: Number of lines of text present in the ticket description column
Load the serialized dataset stored after the 1st milestone's EDA and append the above attributes to it. Also drop the sd_len, sd_word_count, desc_len and desc_word_count columns.
```
# Function to determine the Part of the Day (POD)
def get_POD(tkt):
dt1 = r"(?:\d{1,2}[\/-]){2}\d{4} (?:\d{2}:?){3}"
dt2 = r"\d{4}(?:[\/-]\d{1,2}){2} (?:\d{2}:?){3}"
months = '|'.join(calendar.month_name[1:])
dt3 = fr'[a-zA-Z]+day, (?i:{months}) \d{{1,2}}, \d{{4}} \d{{1,2}}:\d{{1,2}} (?i:am|pm)'
matches = set(re.findall('|'.join([dt1,dt2,dt3]), tkt))
if len(matches):
try:
hr = timestring.Date(list(matches)[0]).hour
return 'Morning' if (hr >= 6) and (hr < 18) else 'Night'
except:
pass
return 'General'
# Get POD and lines of Desc from the unprocessed pickle
with open('translated_ticket.pkl','rb') as f:
ticket = pickle.load(f)
lines = ticket.Description.apply(lambda x: len(str(x).split('\n')))
shifts = ticket[['Short description', 'Description']].agg(lambda x: get_POD(str(x[0]) + str(x[1])), axis=1)
shifts.value_counts()
# Load the serialized dataset after milestone-1
with open('model_ready.pkl','rb') as handle:
ticket = pickle.load(handle)
# Drop the unwanted columns
ticket.drop(['sd_len','sd_word_count','desc_len','desc_word_count','Caller'], axis=1, inplace=True)
# Insert the new attributes
ticket.insert(loc=ticket.shape[1]-1, column='Shift', value=shifts)
ticket.insert(loc=ticket.shape[1]-1, column='Lines', value=lines)
# Check the head of the dataset
ticket.head()
```
#### Observation from Milestone-1
Out of all the models we tried in Milestone-1, Support Vector Machine (SVM) among the statistical ML algorithms and Neural Networks performed better than all others. The models were highly overfitted, and one of the obvious reasons was that the dataset was highly imbalanced: the ratio of GRP_0 to all other groups is 47:53, and there are 40 groups with 30 or fewer tickets assigned each.
Let's address this problem to fine tune the model accuracy by implementing
- Dealing with imbalanced dataset.
- Creating distinctive clusters under GRP_0 and downsampling top clusters
- Clubbing together all those groups into one which has 30 or less tickets assigned
- Replacing TF-IDF vectorizer technique with word embeddings for statistical ML algorithms.
### <u>Resampling the Imbalanced dataset</u>
A widely adopted technique for dealing with highly unbalanced datasets is called resampling. It consists of removing samples from the majority class (under-sampling) and / or adding more examples from the minority class (over-sampling).

### Topic Modeling
Topic Modeling is a technique to extract the hidden topics from large volumes of text. **Latent Dirichlet Allocation(LDA)** is a popular algorithm for topic modeling with excellent implementations in the Python’s Gensim package.
Let's first use gensim to implement LDA and find out any distinctive topics among GRP_0, followed by down-sampling the top 3 topics to contain maximum number of tickets created for.
Installation:<br/>
using pypi: **`!pip install gensim`**<br/>
using conda: **`conda install -c conda-forge gensim`**
#### 1. Prepare Stopwords
English stopwords from NLTK are used, extended with domain-specific frequent words
```
# Records assigned to only GRP_0
grp0_tickets = ticket[ticket['Assignment group'] == 'GRP_0']
# Prepare NLTK STOPWORDS
STOP_WORDS = stopwords.words('english')
STOP_WORDS.extend(['yes','na','hi','receive','hello','regards','thanks','see','help',
'from','greeting','forward','reply','will','please','able'])
```
#### 2. Tokenize words and Clean-up text
Tokenize each sentence into a list of words, removing punctuations and unnecessary characters altogether.
```
# Vectorizations
def sent_to_words(sentences):
for sentence in sentences:
yield(gensim.utils.simple_preprocess(str(sentence), deacc=True)) # deacc=True removes punctuations
# Tokenize the Summary attribute of GRP_0 records
data_words = list(sent_to_words(grp0_tickets['Summary'].values.tolist()))
data_words_nostops = [[word for word in simple_preprocess(str(doc)) if word not in STOP_WORDS] for doc in data_words]
```
#### 3. Bigram and Trigram Models
Bigrams and trigrams are pairs and triples of words, respectively, that frequently occur together in a document.
```
# Build the bigram and trigram models
bigram = gensim.models.Phrases(data_words, min_count=5, threshold=100) # higher threshold fewer phrases.
trigram = gensim.models.Phrases(bigram[data_words], threshold=100)
# Faster way to get a sentence clubbed as a trigram/bigram
bigram_mod = gensim.models.phrases.Phraser(bigram)
data_words_bigrams = [bigram_mod[doc] for doc in data_words_nostops]
trigram_mod = gensim.models.phrases.Phraser(trigram)
data_words_trigrams = [trigram_mod[doc] for doc in data_words_nostops]
```
#### 4. Dictionary and Corpus needed for Topic Modeling
Create the two main inputs to the LDA topic model: the dictionary (id2word) and the corpus.
```
# Create Dictionary
id2word = corpora.Dictionary(data_words_bigrams)
# Term Document Frequency
corpus = [id2word.doc2bow(text) for text in data_words_bigrams]
```
#### 5. Building the Topic Model
Build a topic model with 3 topics, where each topic is a combination of keywords and each keyword contributes a certain weight to the topic.
```
# Build LDA model
lda_model = gensim.models.ldamodel.LdaModel(corpus=corpus,
id2word=id2word,
num_topics=3,
random_state=100,
update_every=1,
chunksize=100,
passes=10,
alpha='auto',
per_word_topics=True)
for idx, topic in lda_model.print_topics():
print('Topic: {} \nWords: {}'.format(idx+1, topic))
print()
```
**How to interpret this?**
Topic 1 is represented as `0.060*"company" + 0.028*"windows" + 0.026*"device" + 0.021*"vpn" + 0.021*"connect" + 0.018*"message" + 0.014*"link" + 0.013*"window" + 0.011*"follow" + 0.011*"use"`
It means the top 10 keywords that contribute to this topic are: ‘company’, ‘windows’, ‘device’.. and so on and the weight of ‘windows’ on topic 1 is 0.028.
The weights reflect how important a keyword is to that topic.
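The string returned by `print_topics` can be parsed back into (keyword, weight) pairs with a small regex; a sketch over a truncated version of the topic string shown above:

```python
import re

topic = '0.060*"company" + 0.028*"windows" + 0.026*"device"'  # truncated example string
# each term looks like WEIGHT*"WORD"; capture both parts
pairs = [(word, float(weight)) for weight, word in re.findall(r'([\d.]+)\*"(\w+)"', topic)]
print(pairs)
```

This is handy for building keyword tables or bar charts of per-topic weights without relying on gensim's tuple API.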

#### 6. Model Perplexity and Coherence Score
Model perplexity and topic coherence provide a convenient measure to judge how good a given topic model is.
```
# Compute Perplexity
print('\nPerplexity: ', lda_model.log_perplexity(corpus)) # a measure of how good the model is. lower the better.
# Compute Coherence Score
coherence_model_lda = CoherenceModel(model=lda_model, texts=data_words_bigrams, dictionary=id2word, coherence='c_v')
coherence_lda = coherence_model_lda.get_coherence()
print('\nCoherence Score: ', coherence_lda)
```
#### 7. Visualize the topics-keywords
Examine the produced topics and the associated keywords using pyLDAvis.
```
# Visualize the topics
pyLDAvis.save_html(pyLDAvis.gensim.prepare(lda_model, corpus, id2word), 'lda.html')
IFrame(src='./lda.html', width=1220, height=858)
```

#### 8. Topic assignment for GRP_0 tickets
Run LDA for each record of GRP_0 to find the associated topic based on the LDA score. As the topic model has been trained to accommodate only the top 3 topics for the entire GRP_0 data, any record scoring less than 50% is categorized into a 4th (Other) topic, and such tickets are not candidates for resampling.
```
# Function to Determine topic
TOPICS = {1:"Communication Issue", 2:"Account/Password Reset", 3:"Access Issue", 4:"Other Issues"}
def get_groups(text):
bow_vector = id2word.doc2bow([word for word in simple_preprocess(text) if word not in STOP_WORDS])
index, score = sorted(lda_model[bow_vector][0], key=lambda tup: tup[1], reverse=True)[0]
return TOPICS[index+1 if score > 0.5 else 4], round(score, 2)
# Check for a Random record
text = grp0_tickets.reset_index().loc[np.random.randint(0, grp0_tickets.shape[0]),'Summary']
topic, score = get_groups(text)
print("\033[1mText:\033[0m {}\n\033[1mTopic:\033[0m {}\n\033[1mScore:\033[0m {}".format(text, topic, score))
# Apply the function to the dataset
grp0_tickets.insert(loc=grp0_tickets.shape[1]-1,
column='Topic',
value=[get_groups(text)[0] for text in grp0_tickets.Summary])
grp0_tickets.head()
# Count the records based on Topics
grp0_tickets.Topic.value_counts()
```
**Observations**:
- From the above analysis, it's evident that the top 3 topics account for the majority of records. The ratio of the top 3 topics to the Other topic is $33:33:26:8$
- Except for Other Issues, the remaining 3 categories of records can be down-sampled to balance the dataset
#### 9. Down-sampling the majority topics under GRP_0
Under-sample the majority class(es) by randomly picking samples, with or without replacement. We use the RandomUnderSampler class from imblearn.
```
# Instantiate the UnderSampler class
sampler = RandomUnderSampler(sampling_strategy='auto', random_state=0)
# Fit the data
X_res, y_res = sampler.fit_resample(grp0_tickets.drop(['Assignment group','Topic'], axis=1), grp0_tickets.Topic)
# Check the ratio of output topics
y_res.value_counts()
```
**Observation:**<br/>
The output of the under-sampling technique shows that all 4 distinct topics now have exactly the same number of records, giving a perfectly balanced distribution under GRP_0.
Let's combine the Topic and Assignment group columns to maintain a single target attribute.
```
# Combine Topic and Assignment Group columns
grp0_tickets = pd.concat([X_res, y_res], axis=1)
grp0_tickets['Assignment group'] = grp0_tickets['Topic'].apply(lambda x: f'GRP_0 ({x})')
# Drop the Topic column
grp0_tickets.drop(['Topic'], axis=1, inplace=True)
print(f"\033[1mNew size of GRP_0 tickets:\033[0m {grp0_tickets.shape}")
grp0_tickets.head()
```
#### 10. Club groups with lesser tickets assigned
Combine all groups with 25 or fewer tickets assigned into one separate group named ***Miscellaneous***
```
# Find out the Assignment Groups with less than equal to 25 tickets assigned
rare_tickets = ticket.groupby(['Assignment group']).filter(lambda x: len(x) <= 25)
print('\033[1m#Groups with less than equal to 25 tickets assigned:\033[0m', rare_tickets['Assignment group'].nunique())
# Visualize the distribution
rare_tickets['Assignment group'].iplot(
kind='hist',
xTitle='Assignment Group',
yTitle='count',
colorscale='-orrd',
title='#Records by rare Assignment Groups- Histogram')
# Rename the Assignment group attribute
rare_tickets['Assignment group'] = 'Miscellaneous'
```
#### 11. Join and prepare the balanced dataset
Let's join the resampled GRP_0 topics and the Miscellaneous group (groups with 25 or fewer tickets) with all the other groups
```
# Find tickets with good number of tickets assigned
good_tickets = ticket.iloc[[idx for idx in ticket.index if idx not in rare_tickets.index]]
good_tickets = good_tickets[good_tickets['Assignment group'] != 'GRP_0']
# Join all the 3 datasets
ticket = pd.concat([grp0_tickets, good_tickets, rare_tickets]).reset_index(drop=True)
# Serialize the balanced dataset
with open('balanced_ticket.pkl','wb') as f:
pickle.dump(ticket[['Summary','Assignment group']], f, pickle.HIGHEST_PROTOCOL)
ticket.head()
# Visualize the assignment groups distribution
print('\033[1m#Unique groups remaining:\033[0m', ticket['Assignment group'].nunique())
pd.DataFrame(ticket.groupby('Assignment group').size(),columns = ['Count']).reset_index().iplot(
kind='pie',
labels='Assignment group',
values='Count',
title='#Records by Assignment groups',
pull=np.linspace(0,0.3,ticket['Assignment group'].nunique()))
```
**Comments:**
- It's evident from the pie chart above that the dataset is now nearly balanced and can be used for model building.
## <u>Model Building</u>
Let's load the balanced dataset and the Doc2Vec model to generate word embeddings and feed them into an LSTM.
### <u>RNN with LSTM networks</u>
Long Short-Term Memory (LSTM), introduced by S. Hochreiter and J. Schmidhuber and refined by many researchers since, is a special type of RNN that preserves long-term dependencies more effectively than basic RNNs. In particular, it mitigates the vanishing gradient problem by using multiple gates to carefully regulate the amount of information allowed into each node state. The figure shows the basic cell of an LSTM model.

Let's create another categorical column from the Assignment group values, and write some generic utility methods to time runs and plot evaluation metrics.
```
# A class that logs the time
class Timer():
'''
A generic class to log the time
'''
def __init__(self):
self.start_ts = None
def start(self):
self.start_ts = time()
def stop(self):
return 'Time taken: %.2fs' % (time()-self.start_ts)
timer = Timer()
# A method that plots the Precision-Recall curve
def plot_prec_recall_vs_thresh(precisions, recalls, thresholds):
plt.figure(figsize=(10,5))
plt.plot(thresholds, precisions[:-1], 'b--', label='precision')
plt.plot(thresholds, recalls[:-1], 'g--', label = 'recall')
plt.xlabel('Threshold')
plt.legend()
# A method to train and test the model
def run_classification(estimator, X_train, X_test, y_train, y_test, arch_name=None, pipelineRequired=True, isDeepModel=False):
timer.start()
# train the model
clf = estimator
if pipelineRequired :
clf = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', estimator),
])
if isDeepModel :
clf.fit(X_train, y_train, validation_data=(X_test, y_test),epochs=10, batch_size=128,verbose=1,callbacks=call_backs(arch_name))
# predict from the classifier
y_pred = clf.predict(X_test)
y_pred = np.argmax(y_pred, axis=1)
y_train_pred = clf.predict(X_train)
y_train_pred = np.argmax(y_train_pred, axis=1)
else :
clf.fit(X_train, y_train)
# predict from the classifier
y_pred = clf.predict(X_test)
y_train_pred = clf.predict(X_train)
print('Estimator:', clf)
print('='*80)
print('Training accuracy: %.2f%%' % (accuracy_score(y_train,y_train_pred) * 100))
print('Testing accuracy: %.2f%%' % (accuracy_score(y_test, y_pred) * 100))
print('='*80)
print('Confusion matrix:\n %s' % (confusion_matrix(y_test, y_pred)))
print('='*80)
print('Classification report:\n %s' % (classification_report(y_test, y_pred)))
print(timer.stop(), 'to run the model')
# Load the balanced dataset
with open('balanced_ticket.pkl','rb') as f:
ticket = pickle.load(f)
# Load the Doc2Vec model (its word vectors provide the embedding matrix)
wmodel = Doc2Vec.load('d2v_inc_model.mdl')
w2v_weights = wmodel.wv.vectors
vocab_size, embedding_size = w2v_weights.shape
print("Vocabulary Size: {} - Embedding Dim: {}".format(vocab_size, embedding_size))
# Sequences will be padded or truncated to this length
MAX_SEQUENCE_LENGTH = 75
# Prepare the embeddings, padding with 0's to the max sequence length
X = ticket.Summary.values.tolist()
set_X=[]
for sent in X:
#print(sent[0])
set_X.append(np.array([word2token(w) for w in tokenize_sentences(sent[0])[:MAX_SEQUENCE_LENGTH] if w != '']))
set_X = pad_sequences(set_X, maxlen=MAX_SEQUENCE_LENGTH, padding='pre', value=0)
y = pd.get_dummies(ticket['Assignment group']).values
print('Shape of labels y:', y.shape)
print('Shape of data X:', set_X.shape)
# Divide the original dataset into train and test split
X_train, X_test, y_train, y_test = train_test_split(set_X, y, test_size=0.3, random_state=47)
X_train.shape, X_test.shape, y_train.shape, y_test.shape
# Visualize a random training sample
X_train[67]
# CREATE the MODEL
# NOTE: DROP_THRESHOLD is defined here but not used below
DROP_THRESHOLD = 10000
model_seq = Sequential()
model_seq.add(Embedding(input_dim=vocab_size,
output_dim=embedding_size,
weights=[w2v_weights],
input_length=MAX_SEQUENCE_LENGTH,
mask_zero=True,
trainable=False))
model_seq.add(SpatialDropout1D(0.2))
model_seq.add(LSTM(100, dropout=0.2, recurrent_dropout=0.2))
model_seq.add(Dense(41, activation='softmax'))
model_seq.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
epochs = 20
batch_size = 64
history = model_seq.fit(X_train,
y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=0.1,
callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
## Iteration 1: lowering the dropout value
model_seq = Sequential()
model_seq.add(Embedding(input_dim=vocab_size,
output_dim=embedding_size,
weights=[w2v_weights],
input_length=MAX_SEQUENCE_LENGTH,
mask_zero=True,
trainable=False))
model_seq.add(SpatialDropout1D(0.1))
model_seq.add(LSTM(100, dropout=0.1, recurrent_dropout=0.1))
model_seq.add(Dense(41, activation='softmax'))
model_seq.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model_seq.fit(X_train,
y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=0.1,
callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
## Iteration 2: adding more units to the LSTM
model_seq = Sequential()
model_seq.add(Embedding(input_dim=vocab_size,
output_dim=embedding_size,
weights=[w2v_weights],
input_length=MAX_SEQUENCE_LENGTH,
mask_zero=True,
trainable=False))
model_seq.add(SpatialDropout1D(0.1))
model_seq.add(LSTM(150, dropout=0.1, recurrent_dropout=0.1))
model_seq.add(Dense(41, activation='softmax'))
model_seq.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model_seq.fit(X_train,
y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=0.1,
callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
```
#### Finding Confidence Interval
As this iteration achieves higher accuracy without overfitting, let's compute the confidence interval.
```
acc = history.history['acc']
plt.figure(figsize=(10,7), dpi= 80)
sns.distplot(acc, color="dodgerblue", label="Compact")
accr = model_seq.evaluate(X_test, y_test)
print('Test set\n Loss: {:0.3f}\n Accuracy: {:0.3f}'.format(accr[0],accr[1]*100))
```

```
accuracy=0.9276
n = 8241
interval = 1.96 * np.sqrt( (accuracy * (1 - accuracy)) / n)
print(interval*100)
```
**Observation**:
- There is a 95% likelihood that the confidence interval [92.21, 93.31] covers the true classification of the model on unseen data.
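For reference, the bounds follow from the normal-approximation interval used above; a small helper function (illustrative only, not part of the notebook pipeline) makes the computation explicit:

```python
import math

def accuracy_confidence_interval(accuracy, n, z=1.96):
    """95% normal-approximation confidence interval for a classification accuracy."""
    half_width = z * math.sqrt(accuracy * (1 - accuracy) / n)
    return accuracy - half_width, accuracy + half_width

lo, hi = accuracy_confidence_interval(0.9276, 8241)
print(f'[{lo*100:.2f}, {hi*100:.2f}]')
```

This reproduces the interval quoted above up to rounding.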
```
## Iteration 3: adding Dense, Dropout and BatchNormalization layers
model_seq = Sequential()
model_seq.add(Embedding(input_dim=vocab_size,
output_dim=embedding_size,
weights=[w2v_weights],
input_length=MAX_SEQUENCE_LENGTH,
mask_zero=True,
trainable=False))
model_seq.add(SpatialDropout1D(0.1))
model_seq.add(LSTM(150, dropout=0.1, recurrent_dropout=0.1))
model_seq.add(Dense(150, activation='relu'))
model_seq.add(BatchNormalization(momentum=0.9,epsilon=0.02))
model_seq.add(Dropout(0.1))
model_seq.add(Dense(41, activation='softmax'))
model_seq.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
history = model_seq.fit(X_train,
y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=0.1,
callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
## Iteration 4: tuning the Adam optimizer
from keras.optimizers import Adam
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0, amsgrad=False)
model_seq = Sequential()
model_seq.add(Embedding(input_dim=vocab_size,
output_dim=embedding_size,
weights=[w2v_weights],
input_length=MAX_SEQUENCE_LENGTH,
mask_zero=True,
trainable=False))
model_seq.add(SpatialDropout1D(0.1))
model_seq.add(LSTM(150, dropout=0.1, recurrent_dropout=0.1))
model_seq.add(Dense(150, activation='relu'))
model_seq.add(BatchNormalization(momentum=0.9,epsilon=0.02))
model_seq.add(Dropout(0.1))
model_seq.add(Dense(41, activation='softmax'))
model_seq.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
history = model_seq.fit(X_train,
y_train,
epochs=epochs,
batch_size=batch_size,
validation_split=0.1,
callbacks=[EarlyStopping(monitor='val_loss', patience=3, min_delta=0.0001)])
accr = model_seq.evaluate(X_test, y_test)
print('Test set\n Loss: {:0.3f}\n Accuracy: {:0.3f}'.format(accr[0],accr[1]*100))
# Data Visualization
import matplotlib.pyplot as plt
plt.title('Loss')
plt.plot(history.history['loss'], label='train')
plt.plot(history.history['val_loss'], label='validation')
plt.legend()
plt.show();
plt.title('Accuracy')
plt.plot(history.history['acc'], label='train')
plt.plot(history.history['val_acc'], label='validation')
plt.legend()
plt.show();
```
### Summary
The accuracy of each flavor of the LSTM model is summarized in the table below. This clearly indicates how efficiently LSTM, in the RNN family, deals with textual data.
- We've been able to bump up model performance to the range 92.21 to 93.31% with 95% confidence.
- Making the dataset balanced helped the model train more accurately.
- Creating our own word embeddings helped find better representations of the keywords in our corpus.
- Hyperparameter tuning found a more accurate model without overfitting, as the train vs. validation accuracy curve shows.

### Automation of Ticket Assignment has the following benefits:
1. Increase in Customer Satisfaction.
2. Decrease in the response and resolution time.
3. Eliminate human error in Ticket Assignment (which affected ~25% of Incidents).
4. Avoid missing SLAs due to error in Ticket Assignment.
5. Eliminate any Financial penalty associated with missed SLAs.
6. Excellent Customer Service.
7. Reallocate (~1 FTE) requirement for Productive Work.
8. Increase in morale of L1 / L2 Team.
9. Eliminate the 15 minutes of effort spent on SOP review (~25-30% of Incidents, or 531.25-637.5 Person Hours).
10. Decrease in associated Expense.
11. L1 / L2 Team can focus on resolving ~54% of the incidents
12. Functional / L3 teams can focus on resolving ~56% of incidents
**~1 FTE from the L1 / L2 Team, saved by automating Ticket Assignment, can focus on Continuous Improvement activities.
~25% of Incidents (2125 additional Incidents) will now get resolved within SLA.**
### Additional Business Insights
1. Root cause analysis (RCA) needs to be performed on job_scheduler to understand the cause of failure.
Expected Incident Ticket reduction from performing RCA: 1928, i.e. 22.68% of the total Incident volume of 8500.
Hence, the Resource / FTE allocation can also be reduced by approximately 22.68%.
2. The Password Reset process needs to be automated.
Expected Incident Ticket reduction from automating password reset: 1246, i.e. 14.66% of the total Incident volume of 8500.
Hence, the Resource / FTE allocation can also be reduced by approximately 14.66%.
A cumulative reduction of 3174 Incidents means a 37.34% reduction in the total Incident volume of 8500, and hence a cumulative Resource / FTE allocation reduction of approximately 37.34%.
The business can operate at ~62.66% of the original estimates.
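The percentage figures above can be sanity-checked with a few lines of Python (the ticket counts are taken from the analysis; the variable names are for illustration only):

```python
total_incidents = 8500
job_scheduler_rca = 1928       # tickets avoidable via RCA on job_scheduler
password_reset = 1246          # tickets avoidable via password-reset automation

for label, count in [('RCA on job_scheduler', job_scheduler_rca),
                     ('Password reset automation', password_reset),
                     ('Cumulative', job_scheduler_rca + password_reset)]:
    print(f'{label}: {count} tickets, {count / total_incidents:.2%} of volume')

# Workload remaining after both reductions
print(f'Remaining workload: {1 - (job_scheduler_rca + password_reset) / total_incidents:.2%}')
```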
# Keras Functional API
```
# sudo pip3 install --ignore-installed --upgrade tensorflow
import keras
import tensorflow as tf
print(keras.__version__)
print(tf.__version__)
# To ignore keep_dims warning
tf.logging.set_verbosity(tf.logging.ERROR)
```
Let’s start with a minimal example that shows side by side a simple Sequential model and its equivalent in the functional API:
```
from keras.models import Sequential, Model
from keras import layers
from keras import Input
seq_model = Sequential()
seq_model.add(layers.Dense(32, activation='relu', input_shape=(64,)))
seq_model.add(layers.Dense(32, activation='relu'))
seq_model.add(layers.Dense(10, activation='softmax'))
input_tensor = Input(shape=(64,))
x = layers.Dense(32, activation='relu')(input_tensor)
x = layers.Dense(32, activation='relu')(x)
output_tensor = layers.Dense(10, activation='softmax')(x)
model = Model(input_tensor, output_tensor)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
The only part that may seem a bit magical at this point is instantiating a Model object using only an input tensor and an output tensor. Behind the scenes, Keras retrieves every layer involved in going from input_tensor to output_tensor, bringing them together into a graph-like data structure—a Model. Of course, the reason it works is that output_tensor was obtained by repeatedly transforming input_tensor.
If you tried to build a model from **inputs and outputs that weren’t related**, you’d get a RuntimeError:
```
unrelated_input = Input(shape=(32,))
bad_model = Model(unrelated_input, output_tensor)
```
This error tells you, in essence, that Keras couldn’t reach input_2 from the provided output tensor.
When it comes to compiling, training, or evaluating such an instance of Model, the API is *the same as that of Sequential*:
```
model.compile(optimizer='rmsprop', loss='categorical_crossentropy')
import numpy as np
x_train = np.random.random((1000, 64))
y_train = np.random.random((1000, 10))
model.fit(x_train, y_train, epochs=10, batch_size=128)
score = model.evaluate(x_train, y_train)
```
## Multi-input models
#### A question-answering model example
Following is an example of how you can build such a model with the functional API. You set up two independent branches, encoding the text input and the question input as representation vectors; then, concatenate these vectors; and finally, add a softmax classifier on top of the concatenated representations.
```
from keras.models import Model
from keras import layers
from keras import Input
text_vocabulary_size = 10000
question_vocabulary_size = 10000
answer_vocabulary_size = 500
# The text input is a variable-length sequence of integers.
# Note that you can optionally name the inputs.
text_input = Input(shape=(None,), dtype='int32', name='text')
# Embeds the inputs into a sequence of vectors of size 64
# embedded_text = layers.Embedding(64, text_vocabulary_size)(text_input)
# embedded_text = layers.Embedding(output_dim=64, input_dim=text_vocabulary_size)(text_input)
embedded_text = layers.Embedding(text_vocabulary_size,64)(text_input)
# Encodes the vectors in a single vector via an LSTM
encoded_text = layers.LSTM(32)(embedded_text)
# Same process (with different layer instances) for the question
question_input = Input(shape=(None,),dtype='int32',name='question')
# embedded_question = layers.Embedding(32, question_vocabulary_size)(question_input)
# embedded_question = layers.Embedding(output_dim=32, input_dim=question_vocabulary_size)(question_input)
embedded_question = layers.Embedding(question_vocabulary_size,32)(question_input)
encoded_question = layers.LSTM(16)(embedded_question)
# Concatenates the encoded question and encoded text
concatenated = layers.concatenate([encoded_text, encoded_question],axis=-1)
# Adds a softmax classifier on top
answer = layers.Dense(answer_vocabulary_size, activation='softmax')(concatenated)
# At model instantiation, you specify the two inputs and the output.
model = Model([text_input, question_input], answer)
model.compile(optimizer='rmsprop',loss='categorical_crossentropy',metrics=['acc'])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
Now, how do you **train** this two-input model?
There are two possible APIs:
* you can feed the model a list of Numpy arrays as inputs
* you can feed it a dictionary that maps input names to Numpy arrays.
Naturally, the latter option is available only if you give names to your inputs.
#### Training the multi-input model
```
import numpy as np
num_samples = 1000
max_length = 100
# Generates dummy Numpy data
text = np.random.randint(1, text_vocabulary_size,size=(num_samples, max_length))
question = np.random.randint(1, question_vocabulary_size,size=(num_samples, max_length))
# Answers are one-hot encoded, not integers
# answers = np.random.randint(0, 1,size=(num_samples, answer_vocabulary_size))
answers = np.random.randint(answer_vocabulary_size, size=(num_samples))
answers = keras.utils.to_categorical(answers, answer_vocabulary_size)
# Fitting using a list of inputs
print('-'*10,"First training run with list of NumPy arrays",'-'*60)
model.fit([text, question], answers, epochs=10, batch_size=128)
print()
# Fitting using a dictionary of inputs (only if inputs are named)
print('-'*10,"Second training run with dictionary and named inputs",'-'*60)
model.fit({'text': text, 'question': question}, answers,epochs=10, batch_size=128)
```
## Multi-output models
You can also use the functional API to build models with multiple outputs (or multiple *heads*).
#### Example - prediction of Age, Gender and Income from social media posts
A simple example is a network that attempts to simultaneously predict different properties of the data, such as a network that takes as input a series of social media posts from a single anonymous person and tries to predict attributes of that person, such as age, gender, and income level.
```
from keras import layers
from keras import Input
from keras.models import Model
vocabulary_size = 50000
num_income_groups = 10
posts_input = Input(shape=(None,), dtype='int32', name='posts')
#embedded_posts = layers.Embedding(256, vocabulary_size)(posts_input)
embedded_posts = layers.Embedding(vocabulary_size,256)(posts_input)
x = layers.Conv1D(128, 5, activation='relu', padding='same')(embedded_posts)
x = layers.MaxPooling1D(5)(x)
x = layers.Conv1D(256, 5, activation='relu', padding='same')(x)
x = layers.Conv1D(256, 5, activation='relu', padding='same')(x)
x = layers.MaxPooling1D(5)(x)
x = layers.Conv1D(256, 5, activation='relu', padding='same')(x)
x = layers.Conv1D(256, 5, activation='relu', padding='same')(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dense(128, activation='relu')(x)
# Note that the output layers are given names.
age_prediction = layers.Dense(1, name='age')(x)
income_prediction = layers.Dense(num_income_groups, activation='softmax',name='income')(x)
gender_prediction = layers.Dense(1, activation='sigmoid', name='gender')(x)
model = Model(posts_input,[age_prediction, income_prediction, gender_prediction])
print("Model is ready!")
```
#### Compilation options of a multi-output model: multiple losses
```
model.compile(optimizer='rmsprop', loss=['mse', 'categorical_crossentropy', 'binary_crossentropy'])
# Equivalent (possible only if you give names to the output layers)
model.compile(optimizer='rmsprop',loss={'age': 'mse',
'income': 'categorical_crossentropy',
'gender': 'binary_crossentropy'})
model.compile(optimizer='rmsprop',
loss=['mse', 'categorical_crossentropy', 'binary_crossentropy'],
loss_weights=[0.25, 1., 10.])
# Equivalent (possible only if you give names to the output layers)
model.compile(optimizer='rmsprop',
loss={'age': 'mse','income': 'categorical_crossentropy','gender': 'binary_crossentropy'},
loss_weights={'age': 0.25,
'income': 1.,
'gender': 10.})
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
#### Feeding data to a multi-output model
Much as in the case of multi-input models, you can pass Numpy data to the model for training either via a list of arrays or via a dictionary of arrays.
#### Training a multi-output model
```
import numpy as np
TRACE = False
num_samples = 1000
max_length = 100
posts = np.random.randint(1, vocabulary_size, size=(num_samples, max_length))
if TRACE:
print("*** POSTS ***")
print(posts.shape)
print(posts[:10])
print()
age_targets = np.random.randint(0, 100, size=(num_samples,1))
if TRACE:
print("*** AGE ***")
print(age_targets.shape)
print(age_targets[:10])
print()
income_targets = np.random.randint(1, num_income_groups, size=(num_samples,1))
income_targets = keras.utils.to_categorical(income_targets,num_income_groups)
if TRACE:
print("*** INCOME ***")
print(income_targets.shape)
print(income_targets[:10])
print()
gender_targets = np.random.randint(0, 2, size=(num_samples,1))
if TRACE:
print("*** GENDER ***")
print(gender_targets.shape)
print(gender_targets[:10])
print()
print('-'*10, "First training run with NumPy arrays", '-'*60)
# age_targets, income_targets, and gender_targets are assumed to be Numpy arrays.
model.fit(posts, [age_targets, income_targets, gender_targets], epochs=10, batch_size=64)
print('-'*10,"Second training run with dictionary and named outputs",'-'*60)
# Equivalent (possible only if you give names to the output layers)
model.fit(posts, {'age': age_targets,
'income': income_targets,
'gender': gender_targets},
epochs=10, batch_size=64)
```
### 7.1.4 Directed acyclic graphs of layers
With the functional API, not only can you build models with multiple inputs and multiple outputs, but you can also implement networks with a complex internal topology.
Neural networks in Keras are allowed to be arbitrary directed acyclic graphs of layers (the only processing loops that are allowed are those internal to recurrent layers).
Several common neural-network components are implemented as graphs. Two notable ones are <i>Inception modules</i> and <i>residual connections</i>. To better understand how the functional API can be used to build graphs of layers, let’s take a look at how you can implement both of them in Keras.
#### Inception modules
Inception [3] is a popular type of network architecture for convolutional neural networks. It consists of a stack of modules that themselves look like small independent networks, split into several parallel branches.
##### The purpose of 1 × 1 convolutions
1 × 1 convolutions (also called pointwise convolutions) are featured in Inception modules, where they contribute to factoring out channel-wise feature learning and space-wise feature learning.
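One way to see why a 1 × 1 convolution factors out channel-wise learning: it is exactly a dense layer applied independently at every spatial position, with no spatial mixing. A small NumPy sketch with random weights (no Keras involved; shapes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(28, 28, 16))     # H x W x C feature map
w = rng.normal(size=(16, 32))         # 1x1 conv kernel == per-pixel dense weights

# 1x1 convolution: mix channels at each pixel, no spatial interaction
out_conv = np.einsum('hwc,cd->hwd', x, w)

# The same result as a dense layer applied to each pixel's channel vector
out_dense = x.reshape(-1, 16) @ w
assert np.allclose(out_conv, out_dense.reshape(28, 28, 32))
print(out_conv.shape)  # (28, 28, 32)
```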
```
from keras import layers
from keras.layers import Input
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
x = Input(shape=(28, 28, 1), dtype='float32', name='images')
print("x.shape:",x.shape)
# Every branch has the same stride value (2), which is necessary to
# keep all branch outputs the same size so you can concatenate them
branch_a = layers.Conv2D(128, 1, padding='same', activation='relu', strides=2)(x)
# In this branch, the striding occurs in the spatial convolution layer.
branch_b = layers.Conv2D(128, 1, padding='same', activation='relu')(x)
branch_b = layers.Conv2D(128, 3, padding='same', activation='relu', strides=2)(branch_b)
# In this branch, the striding occurs in the average pooling layer.
branch_c = layers.AveragePooling2D(3, padding='same', strides=2)(x)
branch_c = layers.Conv2D(128, 3, padding='same', activation='relu')(branch_c)
branch_d = layers.Conv2D(128, 1, padding='same', activation='relu')(x)
branch_d = layers.Conv2D(128, 3, padding='same', activation='relu')(branch_d)
branch_d = layers.Conv2D(128, 3, padding='same', activation='relu', strides=2)(branch_d)
# Concatenates the branch outputs to obtain the module output
output = layers.concatenate([branch_a, branch_b, branch_c, branch_d], axis=-1)
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(10, activation='softmax')(output)
model = keras.models.Model(inputs=x, outputs=predictions)
```
#### Train the Inception model using the Dataset API and the MNIST data
Inspired by: https://github.com/keras-team/keras/blob/master/examples/mnist_dataset_api.py
```
import numpy as np
import os
import tempfile
import keras
from keras import backend as K
from keras import layers
from keras.datasets import mnist
import tensorflow as tf
if K.backend() != 'tensorflow':
raise RuntimeError('This example can only run with the TensorFlow backend,'
' because it requires the Dataset API, which is not'
' supported on other platforms.')
batch_size = 128
buffer_size = 10000
steps_per_epoch = int(np.ceil(60000 / float(batch_size))) # = 469
epochs = 5
num_classes = 10
def cnn_layers(x):
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
print("x.shape:",x.shape)
# Every branch has the same stride value (2), which is necessary to
# keep all branch outputs the same size so you can concatenate them
branch_a = layers.Conv2D(128, 1, padding='same', activation='relu', strides=2)(x)
# In this branch, the striding occurs in the spatial convolution layer.
branch_b = layers.Conv2D(128, 1, padding='same', activation='relu')(x)
branch_b = layers.Conv2D(128, 3, padding='same', activation='relu', strides=2)(branch_b)
# In this branch, the striding occurs in the average pooling layer.
branch_c = layers.AveragePooling2D(3, padding='same', strides=2)(x)
branch_c = layers.Conv2D(128, 3, padding='same', activation='relu')(branch_c)
branch_d = layers.Conv2D(128, 1, padding='same', activation='relu')(x)
branch_d = layers.Conv2D(128, 3, padding='same', activation='relu')(branch_d)
branch_d = layers.Conv2D(128, 3, padding='same', activation='relu', strides=2)(branch_d)
# Concatenates the branch outputs to obtain the module output
output = layers.concatenate([branch_a, branch_b, branch_c, branch_d], axis=-1)
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(num_classes, activation='softmax')(output)
return predictions
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = np.expand_dims(x_train, -1)
y_train = tf.one_hot(y_train, num_classes)
# Create the dataset and its associated one-shot iterator.
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.repeat()
dataset = dataset.shuffle(buffer_size)
dataset = dataset.batch(batch_size)
iterator = dataset.make_one_shot_iterator()
# Model creation using tensors from the get_next() graph node.
inputs, targets = iterator.get_next()
print("inputs.shape:",inputs.shape)
print("targets.shape:",targets.shape)
model_input = layers.Input(tensor=inputs)
model_output = cnn_layers(model_input)
model = keras.models.Model(inputs=model_input, outputs=model_output)
model.compile(optimizer=keras.optimizers.RMSprop(lr=2e-3, decay=1e-5),
loss='categorical_crossentropy',
metrics=['accuracy'],
target_tensors=[targets])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
#### Train Inception model
```
model.fit(epochs=epochs,
steps_per_epoch=steps_per_epoch)
# Save the model weights.
weight_path = os.path.join(tempfile.gettempdir(), 'saved_Inception_wt.h5')
model.save_weights(weight_path)
```
#### Test the Inception model
Second session to test loading trained model without tensors.
```
# Clean up the TF session.
K.clear_session()
# Second session to test loading trained model without tensors.
x_test = x_test.astype(np.float32)
x_test = np.expand_dims(x_test, -1)
x_test_inp = layers.Input(shape=x_test.shape[1:])
test_out = cnn_layers(x_test_inp)
test_model = keras.models.Model(inputs=x_test_inp, outputs=test_out)
weight_path = os.path.join(tempfile.gettempdir(), 'saved_Inception_wt.h5')
test_model.load_weights(weight_path)
test_model.compile(optimizer='rmsprop',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
test_model.summary()
SVG(model_to_dot(test_model).create(prog='dot', format='svg'))
loss, acc = test_model.evaluate(x_test, y_test)
print('\nTest accuracy: {0}'.format(acc))
```
#### Residual connections (ResNet)
Residual connections, as popularized by ResNet, are a common graph-like network component found in many post-2015 network architectures, including Xception. They were introduced by He et al. from Microsoft and fight two common problems with large-scale deep-learning models: vanishing gradients and representational bottlenecks.
A residual connection consists of making the output of an earlier layer available as input to a later layer, effectively creating a shortcut in a sequential network. Rather than being concatenated to the later activation, the earlier output is summed with the later activation, which assumes that both activations are the same size. If they’re different sizes, you can use a linear transformation to reshape the earlier activation into the target shape (for example, a Dense layer without an activation or, for convolutional feature maps, a 1 × 1 convolution without an activation).
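The shape constraint described above can be illustrated in plain NumPy: an identity shortcut requires matching shapes, while a strided 1 × 1 projection reshapes the earlier activation first. Array sizes below are arbitrary, chosen only for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=(28, 28, 64))          # earlier activation
y_same = rng.normal(size=(28, 28, 64))     # later activation, same shape

# Identity residual: shapes match, so a plain element-wise sum works
out = y_same + x

# Downsampled branch: shapes no longer match the earlier activation
y_small = rng.normal(size=(14, 14, 128))
w_proj = rng.normal(size=(64, 128))        # 1x1 "projection" weights
# Stride-2 subsampling plus channel mixing stands in for a strided 1x1 conv
x_proj = np.einsum('hwc,cd->hwd', x[::2, ::2, :], w_proj)
out2 = y_small + x_proj
print(out.shape, out2.shape)  # (28, 28, 64) (14, 14, 128)
```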
###### ResNet implementation when the feature-map sizes are the same
Here’s how to implement a residual connection in Keras when the feature-map sizes are the same, using identity residual connections. This example assumes the existence of a 4D input tensor x:
```
from keras import layers
from keras.layers import Input
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
x = Input(shape=(28, 28, 1), dtype='float32', name='images')
print("x.shape:",x.shape)
# Applies a transformation to x
y = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
# Adds the original x back to the output features
output = layers.add([y, x])
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(10, activation='softmax')(output)
model = keras.models.Model(inputs=x, outputs=predictions)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
###### ResNet implementation when the feature-map sizes differ
And the following implements a residual connection when the feature-map sizes differ, using a linear residual connection (again, assuming the existence of a 4D input tensor x):
```
from keras import layers
from keras.layers import Input
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
x = Input(shape=(28, 28, 1), dtype='float32', name='images')
print("x.shape:",x.shape)
# Applies a transformation to x
y = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
y = layers.MaxPooling2D(2, strides=2)(y)
# Uses a 1 × 1 convolution to linearly downsample the original x tensor to the same shape as y
residual = layers.Conv2D(128, 1, strides=2, padding='same')(x)
# Adds the residual tensor back to the output features
output = layers.add([y, residual])
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(10, activation='softmax')(output)
model = keras.models.Model(inputs=x, outputs=predictions)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
#### Train the ResNet model using the Dataset API and the MNIST data
(when the feature-map sizes are the same)
```
import numpy as np
import os
import tempfile
import keras
from keras import backend as K
from keras import layers
from keras.datasets import mnist
import tensorflow as tf
if K.backend() != 'tensorflow':
raise RuntimeError('This example can only run with the TensorFlow backend,'
' because it requires the Dataset API, which is not'
' supported on other platforms.')
batch_size = 128
buffer_size = 10000
steps_per_epoch = int(np.ceil(60000 / float(batch_size))) # = 469
epochs = 5
num_classes = 10
def cnn_layers(x):
# This example assumes the existence of a 4D input tensor x:
# This returns a typical image tensor like those of MNIST dataset
print("x.shape:",x.shape)
# Applies a transformation to x
y = layers.Conv2D(128, 3, activation='relu', padding='same')(x)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
y = layers.Conv2D(128, 3, activation='relu', padding='same')(y)
# Adds the original x back to the output features
output = layers.add([y, x])
# Adding a classifier on top of the convnet
output = layers.Flatten()(output)
output = layers.Dense(512, activation='relu')(output)
predictions = layers.Dense(10, activation='softmax')(output)
return predictions
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.astype(np.float32) / 255
x_train = np.expand_dims(x_train, -1)
y_train = tf.one_hot(y_train, num_classes)
# Create the dataset and its associated one-shot iterator.
dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
dataset = dataset.repeat()
dataset = dataset.shuffle(buffer_size)
dataset = dataset.batch(batch_size)
iterator = dataset.make_one_shot_iterator()
# Model creation using tensors from the get_next() graph node.
inputs, targets = iterator.get_next()
print("inputs.shape:",inputs.shape)
print("targets.shape:",targets.shape)
model_input = layers.Input(tensor=inputs)
model_output = cnn_layers(model_input)
model = keras.models.Model(inputs=model_input, outputs=model_output)
model.compile(optimizer=keras.optimizers.RMSprop(lr=2e-3, decay=1e-5),
loss='categorical_crossentropy',
metrics=['accuracy'],
target_tensors=[targets])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
#### Train and Save the ResNet model
```
model.fit(epochs=epochs,
steps_per_epoch=steps_per_epoch)
# Save the model weights.
weight_path = os.path.join(tempfile.gettempdir(), 'saved_ResNet_wt.h5')
model.save_weights(weight_path)
```
#### Second session to test loading trained model without tensors.
```
# Clean up the TF session.
K.clear_session()
# Second session to test loading trained model without tensors.
x_test = x_test.astype(np.float32) / 255  # scale like the training data
x_test = np.expand_dims(x_test, -1)
x_test_inp = layers.Input(shape=x_test.shape[1:])
test_out = cnn_layers(x_test_inp)
test_model = keras.models.Model(inputs=x_test_inp, outputs=test_out)
weight_path = os.path.join(tempfile.gettempdir(), 'saved_ResNet_wt.h5')
test_model.load_weights(weight_path)
test_model.compile(optimizer='rmsprop',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
test_model.summary()
loss, acc = test_model.evaluate(x_test, y_test)
print('\nTest accuracy: {0}'.format(acc))
```
Not very good... this is probably to be expected, since residual connections help with very deep networks, but here we have only a few hidden layers.
### 7.1.5. Layer weights sharing
One more important feature of the functional API is the ability to reuse a layer instance several times: instead of instantiating a new layer for each call, you reuse the same weights with every call. This allows you to build models that have shared branches, that is, several branches that all share the same knowledge and perform the same operations.
#### Example - semantic similarity between two sentences
For example, consider a model that attempts to assess the semantic similarity between two sentences. The model has two inputs (the two sentences to compare) and outputs a score between 0 and 1, where 0 means unrelated sentences and 1 means sentences that are either identical or reformulations of each other. Such a model could be useful in many applications, including deduplicating natural-language queries in a dialog system.
In this setup, the two input sentences are interchangeable, because semantic similarity is a symmetrical relationship: the similarity of A to B is identical to the similarity of B to A. For this reason, it wouldn’t make sense to learn two independent models for processing each input sentence. Rather, you want to process both with a single LSTM layer. The representations of this LSTM layer (its weights) are learned based on both inputs simultaneously. This is what we call a Siamese LSTM model or a shared LSTM.
Note: A Siamese network is a special type of neural network architecture. Instead of learning to classify its inputs, the Siamese neural network learns to differentiate between two inputs; it learns their similarity.
Here’s how to implement such a model using layer sharing (layer reuse) in the Keras functional API:
```
from keras import layers
from keras import Input
from keras.models import Model
# Instantiates a single LSTM layer, once
lstm = layers.LSTM(32)
# Building the left branch of the model:
# inputs are variable-length sequences of vectors of size 128.
left_input = Input(shape=(None, 128))
left_output = lstm(left_input)
# Building the right branch of the model:
# when you call an existing layer instance, you reuse its weights.
right_input = Input(shape=(None, 128))
right_output = lstm(right_input)
# Builds the classifier on top
merged = layers.concatenate([left_output, right_output], axis=-1)
predictions = layers.Dense(1, activation='sigmoid')(merged)
# Instantiating the model
model = Model([left_input, right_input], predictions)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
import numpy as np
num_samples = 100
num_symbols = 2
TRACE = False
left_data = np.random.randint(0,num_symbols, size=(num_samples,1,128))
if TRACE:
print(type(left_data))
print(left_data.shape)
print(left_data)
print('-'*50)
right_data = np.random.randint(0,num_symbols, size=(num_samples,1,128))
if TRACE:
print(type(right_data))
print(right_data.shape)
print(right_data)
print('-'*50)
matching_list = [np.random.randint(0,num_symbols) for _ in range(num_samples)]
targets = np.array(matching_list)
if TRACE:
print(type(targets))
print(targets.shape)
print(targets)
print('-'*50)
# We must compile a model before training/testing.
model.compile(optimizer='rmsprop',loss='binary_crossentropy',metrics=['acc'])
# Training the model: when you train such a model,
# the weights of the LSTM layer are updated based on both inputs.
model.fit([left_data, right_data],targets)
```
### 7.1.6. Models as layers
Importantly, in the functional API, models can be used as you’d use layers—effectively, you can think of a model as a “bigger layer.” This is true of both the Sequential and Model classes. This means you can call a model on an input tensor and retrieve an output tensor:
y = model(x)
If the model has multiple input tensors and multiple output tensors, it should be called with a list of tensors:
y1, y2 = model([x1, x2])
When you call a model instance, you’re reusing the weights of the model—exactly like what happens when you call a layer instance. Calling an instance, whether it’s a layer instance or a model instance, will always reuse the existing learned representations of the instance—which is intuitive.
```
from keras import layers
from keras import applications
from keras import Input
from keras.models import Model
nbr_classes = 10
# The base image-processing model is the Xception network (convolutional base only).
xception_base = applications.Xception(weights=None,include_top=False)
# The inputs are 250 × 250 RGB images.
left_input = Input(shape=(250, 250, 3))
right_input = Input(shape=(250, 250, 3))
left_features = xception_base(left_input)
right_features = xception_base(right_input)
merged_features = layers.concatenate([left_features, right_features], axis=-1)
predictions = layers.Dense(nbr_classes, activation='softmax')(merged_features)
# Instantiating the model
model = Model([left_input, right_input], predictions)
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model,show_shapes=True).create(prog='dot', format='svg'))
```
# Basic Init
**Imports**
```
import nibabel as nib
import matplotlib.pyplot as plt
import numpy as np
from random import randint
import tensorflow as tf
import glob
import pickle
import os
from keras.layers import Input, Dense, Conv3D, MaxPooling3D, UpSampling3D, Conv3DTranspose
from keras.models import Model, load_model
from keras import backend as K
from google.colab import drive
from keras import optimizers
```
**Connection to gdrive and file listing**
```
drive.mount('/content/gdrive')
!ls
base_path = 'gdrive/My Drive/projects/Brain MRI BTech Project/PreprocData/'
```
**List Resource**
```
# memory footprint support libraries/code
!ln -sf /opt/bin/nvidia-smi /usr/bin/nvidia-smi
!pip install gputil
!pip install psutil
!pip install humanize
import psutil
import humanize
import os
import GPUtil as GPU
GPUs = GPU.getGPUs()
# XXX: only one GPU on Colab and isn’t guaranteed
gpu = GPUs[0]
def printm():
process = psutil.Process(os.getpid())
print("Gen RAM Free: " + humanize.naturalsize( psutil.virtual_memory().available ), " | Proc size: " + humanize.naturalsize( process.memory_info().rss))
print("GPU RAM Free: {0:.0f}MB | Used: {1:.0f}MB | Util {2:3.0f}% | Total {3:.0f}MB".format(gpu.memoryFree, gpu.memoryUsed, gpu.memoryUtil*100, gpu.memoryTotal))
printm()
```
# Dataset Handling
**MRI handling functions**
```
def get_rand_slice_list(data_shape):
x_max, y_max, z_max = data_shape
x_curr = randint(x_max//2 - x_max//4, x_max//2 + x_max//4)
y_curr = randint(y_max//2 - y_max//4, y_max//2 + y_max//4)
z_curr = randint(z_max//2 - z_max//4, z_max//2 + z_max//4)
return x_curr, y_curr, z_curr
def show_mri_slices_random(mri_data, explicit_pos=None):
"""Display three orthogonal MRI slices.
Slice positions can be passed via `explicit_pos`; otherwise random
positions biased towards the middle of the volume are used."""
print('Data Shape = ', mri_data.shape)
if explicit_pos is None:
x_curr, y_curr, z_curr = get_rand_slice_list(mri_data.shape)
else:
x_curr, y_curr, z_curr = explicit_pos
print('Data Positions = ',x_curr, y_curr, z_curr)
slice_0 = mri_data[x_curr, :, :]
slice_1 = mri_data[:, y_curr, :]
slice_2 = mri_data[:, :, z_curr]
print('Slice 1: value: ', x_curr)
plt.imshow(slice_0.T, cmap='gray', origin='lower')
plt.show()
print('Slice 2: value: ', y_curr)
plt.imshow(slice_1.T, cmap='gray', aspect=0.5, origin='lower')
plt.show()
print('Slice 3: value: ', z_curr)
plt.imshow(slice_2.T, cmap='gray', aspect=0.5, origin='lower')
plt.show()
def get_mri_data(path):
img_obj = nib.load(path)
return img_obj.get_fdata()
def get_mri_data_scaler(path,scale_vals,type_mri):
img_obj = nib.load(path)
smax,smin = scale_vals[type_mri]
curr_data = img_obj.get_fdata()
curr_data = ((curr_data - smin)/(smax-smin))*smax
return curr_data
def id_extract(stringpath):
name_parts = stringpath.split(os.sep)
name_parts.pop()
dataset_name = name_parts.pop()
return int(dataset_name[-2:])
def print_Details(dat_paths):
for dat in dat_paths:
print(dat['id'])
for key,val in dat.items():
if key != 'id':
aaa = get_mri_data(val)
print(key,aaa.shape,aaa.max(),aaa.min())
```
**Dataset Loading Function**
```
def load_MS_dataset(base_dataset_path):
total_dataset = []
patient_folders =glob.glob(base_dataset_path+'*/')
patient_folders.sort()
for curr_data_path in patient_folders:
curr_dataset={}
curr_dataset['id'] = id_extract(curr_data_path)
curr_dataset['flair'] = glob.glob(curr_data_path+'/*flair.nii.gz')[-1]
curr_dataset['t1'] =glob.glob(curr_data_path+'/*t1.nii.gz')[-1]
curr_dataset['t2'] = glob.glob(curr_data_path+'/*t2.nii.gz')[-1]
curr_dataset['label'] = glob.glob(curr_data_path+'/*label.nii.gz')[-1]
total_dataset.append(curr_dataset)
print(curr_dataset['id'])
with open(base_dataset_path+'data_details.pickle', "rb") as data_details_file:
dataset_details = pickle.load(data_details_file)
return total_dataset, dataset_details
```
**Model Helper functions**
```
def dice_coef_modified(y_true, y_pred):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + K.epsilon()) / (K.sum(y_true_f) + K.sum(y_pred_f) + K.epsilon())
def dice_coefficient(y_true, y_pred, smooth=1.):
y_true_f = K.flatten(y_true)
y_pred_f = K.flatten(y_pred)
intersection = K.sum(y_true_f * y_pred_f)
return (2. * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)
def dice_coefficient_loss(y_true, y_pred):
return 1.0-dice_coef_modified(y_true, y_pred)
def binarise_lesion(lesion_data):
lesion_data[lesion_data <= 0] = 0
lesion_data[lesion_data > 0] = 1
return lesion_data
def Mean_IOU(y_true, y_pred):
nb_classes = K.int_shape(y_pred)[-1]
iou = []
true_pixels = K.argmax(y_true, axis=-1)
pred_pixels = K.argmax(y_pred, axis=-1)
void_labels = K.equal(K.sum(y_true, axis=-1), 0)
for i in range(0, nb_classes): # exclude first label (background) and last label (void)
true_labels = K.equal(true_pixels, i) & ~void_labels
pred_labels = K.equal(pred_pixels, i) & ~void_labels
inter = tf.to_int32(true_labels & pred_labels)
union = tf.to_int32(true_labels | pred_labels)
legal_batches = K.sum(tf.to_int32(true_labels), axis=1)>0
ious = K.sum(inter, axis=1)/K.sum(union, axis=1)
iou.append(K.mean(tf.gather(ious, indices=tf.where(legal_batches)))) # returns average IoU of the same objects
iou = tf.stack(iou)
legal_labels = ~tf.debugging.is_nan(iou)
iou = tf.gather(iou, indices=tf.where(legal_labels))
return K.mean(iou)
```
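The Dice coefficient above can be sanity-checked with plain NumPy, independently of the Keras backend (a minimal sketch; the `smooth` term mirrors `dice_coefficient` above):

```python
import numpy as np

def dice_np(y_true, y_pred, smooth=1.0):
    """NumPy version of the Dice coefficient for flat binary masks."""
    y_true = np.asarray(y_true).ravel()
    y_pred = np.asarray(y_pred).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

# Identical masks score 1, disjoint masks score near 0.
a = np.array([1, 1, 0, 0])
b = np.array([1, 1, 0, 0])
c = np.array([0, 0, 1, 1])
print(dice_np(a, b))  # 1.0
print(dice_np(a, c))  # 0.2
```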
**Find Scaling Constants**
```
def scalervals(img_paths):
flair_max=float('-inf')
flair_min=float('inf')
t1_max=float('-inf')
t1_min=float('inf')
t2_max=float('-inf')
t2_min=float('inf')
for dat in img_paths:
print(dat['id'])
curr_flair_data = get_mri_data(dat['flair'])
curr_t1_data = get_mri_data(dat['t1'])
curr_t2_data = get_mri_data(dat['t2'])
curr_flair_max = np.max(curr_flair_data)
curr_flair_min = np.min(curr_flair_data)
if(curr_flair_max>flair_max):
flair_max = curr_flair_max
if(curr_flair_min<flair_min):
flair_min = curr_flair_min
curr_t1_max = np.max(curr_t1_data)
curr_t1_min = np.min(curr_t1_data)
if(curr_t1_max>t1_max):
t1_max = curr_t1_max
if(curr_t1_min<t1_min):
t1_min = curr_t1_min
curr_t2_max = np.max(curr_t2_data)
curr_t2_min = np.min(curr_t2_data)
if(curr_t2_max>t2_max):
t2_max = curr_t2_max
if(curr_t2_min<t2_min):
t2_min = curr_t2_min
return {'flair': [flair_max, flair_min],'t1': [t1_max,t1_min],'t2': [t2_max,t2_min]}
```
# Autoencoder Architecture
**Main Model Function**
Model Details:
3D autoencoder
```
#Model Constants
model_input_size = (192, 512, 512, 1) #channels last
total_epochs=400
epochs_per_item=10
learning_rate = 0.0001
def build_model_3dautoencoder():
model_Input = Input(shape=model_input_size)
#Encoder
Conv3D_layer = Conv3D(filters = 8, kernel_size = (3, 3, 3), activation='relu', padding='same')(model_Input)
MaxPooling3D_layer = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(Conv3D_layer)
Conv3D_layer = Conv3D(filters = 16, kernel_size = (3, 3, 3), activation='relu', padding='same')(MaxPooling3D_layer)
MaxPooling3D_layer = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(Conv3D_layer)
Conv3D_layer = Conv3D(filters = 32, kernel_size = (3, 3, 3), activation='relu', padding='same')(MaxPooling3D_layer)
encoding_layer = MaxPooling3D(pool_size=(2, 2, 2), padding='same')(Conv3D_layer)
#decoder
Conv3D_layer = Conv3D(filters = 32, kernel_size = (3, 3, 3), activation='relu', padding='same')(encoding_layer)
UpSampling3D_layer = UpSampling3D(size=(2, 2, 2))(Conv3D_layer)
Conv3D_layer = Conv3D(filters = 16, kernel_size = (3, 3, 3), activation='relu', padding='same')(UpSampling3D_layer)
UpSampling3D_layer = UpSampling3D(size=(2, 2, 2))(Conv3D_layer)
Conv3D_layer = Conv3D(filters = 8, kernel_size = (3, 3, 3), activation='relu', padding='same')(UpSampling3D_layer)
UpSampling3D_layer = UpSampling3D(size=(2, 2, 2))(Conv3D_layer)
decoding_layer = Conv3D(filters = 1, kernel_size = (3, 3, 3), activation='relu', padding='same')(UpSampling3D_layer)
model_autoencoder_3d = Model(model_Input, decoding_layer)
model_autoencoder_3d.compile(loss=[dice_coefficient_loss], optimizer=optimizers.Adam(lr=learning_rate))
model_autoencoder_3d.summary()
return model_autoencoder_3d
```
**Train and Test**
```
def train_lesion_Flair(model, dat_paths, tot_epochs, epoch_per_item):
loopval=True
while(loopval):
for dat in dat_paths:
if tot_epochs <= 0:
loopval=False
break
print("Epochs left: ",tot_epochs)
print("ID: ",dat['id'])
curr_flair_data = get_mri_data(dat['flair'])
curr_label_lesion_data = get_mri_data(dat['label'])
if np.array_equal(curr_flair_data.shape,( 192, 512, 512)):
curr_flair_data = np.reshape(curr_flair_data.astype('float32'), (1, 192, 512, 512, 1))
curr_label_lesion_data = np.reshape(curr_label_lesion_data.astype('float32'), (1, 192, 512, 512, 1))
model.fit(curr_flair_data, curr_label_lesion_data, epochs=epoch_per_item)
tot_epochs = tot_epochs - epoch_per_item
else:
print("size does not match.. Skipping..")
def test_lesion_Flair_show(model,dat_paths,num):
num=num-1
curr_flair_data = get_mri_data(dat_paths[num]['flair'])
curr_flair_data_reshaped = np.reshape(curr_flair_data.astype('float32'), (1, 192, 512, 512, 1))
curr_label_lesion_data = get_mri_data(dat_paths[num]['label'])
curr_label_lesion_data_reshaped = np.reshape(curr_label_lesion_data.astype('float32'), (1, 192, 512, 512, 1))
predict_lesion_data = model.predict(curr_flair_data_reshaped)
predict_lesion_data = (predict_lesion_data.reshape((192, 512, 512))).astype(int)
curr_slice_list = get_rand_slice_list((192, 512, 512))
print('MRI')
show_mri_slices_random(curr_flair_data,curr_slice_list)
print('Label')
show_mri_slices_random(curr_label_lesion_data,curr_slice_list)
print('Predicted')
predict_lesion_data = binarise_lesion(predict_lesion_data)
show_mri_slices_random(predict_lesion_data,curr_slice_list)
print("Max=",predict_lesion_data.max(),"Min=",predict_lesion_data.min())
scores = model.evaluate(curr_flair_data_reshaped,curr_label_lesion_data_reshaped)
print("Scores: ",scores)
return predict_lesion_data
def save_output_mri(model,dat_paths,num):
num=num-1
curr_flair_data = get_mri_data(dat_paths[num]['flair'])
curr_flair_data_reshaped = np.reshape(curr_flair_data.astype('float32'), (1, 192, 512, 512, 1))
curr_label_lesion_data = get_mri_data(dat_paths[num]['label'])
curr_label_lesion_data_reshaped = np.reshape(curr_label_lesion_data.astype('float32'), (1, 192, 512, 512, 1))
predict_lesion_data = model.predict(curr_flair_data_reshaped)
predict_lesion_data = (predict_lesion_data.reshape((192, 512, 512))).astype(int)
curr_slice_list = get_rand_slice_list((192, 512, 512))
predict_lesion_data = binarise_lesion(predict_lesion_data).astype(float)
label_obj = nib.load(dat_paths[num]['label'])
print(label_obj.get_fdata().shape)
print(predict_lesion_data.shape)
output_obj = nib.Nifti1Image(predict_lesion_data, label_obj.affine)
nib.save(output_obj, 'output.nii.gz')
```
**Batching Support added**
```
def create_batch_flair(dat_paths, batch_size, curr_offset):
total_size = len(dat_paths)
curr_batch_flair=[]
curr_batch_lesion=[]
for curr_iter in range(batch_size):
curr_id = (curr_offset + curr_iter) % total_size
print('id: ',curr_id)
curr_flair_data = get_mri_data(dat_paths[curr_id]['flair'])
# autoencoder target: reconstruct the FLAIR input itself
curr_label_lesion_data = curr_flair_data
if np.array_equal(curr_flair_data.shape,( 192, 512, 512)):
curr_flair_data = np.reshape(curr_flair_data.astype('float32'), (192, 512, 512, 1))
curr_label_lesion_data = np.reshape(curr_label_lesion_data.astype('float32'), (192, 512, 512, 1))
curr_batch_flair.append(curr_flair_data)
curr_batch_lesion.append(curr_label_lesion_data)
else:
print("size does not match.. Skipping..")
curr_batch_flair = np.array(curr_batch_flair)
curr_batch_lesion = np.array(curr_batch_lesion)
print('flair batch:',curr_batch_flair.shape)
print('lesion batch',curr_batch_lesion.shape)
new_offset = (curr_offset + batch_size) % total_size
return new_offset, curr_batch_flair, curr_batch_lesion
def train_lesion_Flair_in_batches(model, dat_paths, tot_epochs, epoch_per_batch, batch_size):
curr_offset = 0
while(tot_epochs>0):
print("Epochs left: ",tot_epochs)
print('seed offset: ', curr_offset)
curr_offset, curr_batch_flair, curr_batch_lesion = create_batch_flair(dat_paths, batch_size, curr_offset)
model.fit(curr_batch_flair, curr_batch_lesion, epochs=epoch_per_batch)
tot_epochs = tot_epochs - epoch_per_batch
```
# **Main PipeLine**
**Load Dataset and Generate Model**
```
dataset_paths,dataset_details = load_MS_dataset(base_path)
lesion3dAutoencoder = build_model_3dautoencoder()
```
**Model Training and Save**
```
train_lesion_Flair(lesion3dAutoencoder,dataset_paths,total_epochs,epochs_per_item)
lesion3dAutoencoder.save('my_model.h5')
```
**Model Test**
```
hhh=test_lesion_Flair_show(lesion3dAutoencoder,dataset_paths,6)
```
**Save output**
```
save_output_mri(lesion3dAutoencoder,dataset_paths,6)
```
**Load Model**
```
autoencoder_model = load_model('my_model.h5')
hhh=test_lesion_Flair_show(autoencoder_model,dataset_paths,6)
```
# Generating output for each layer
```
def generate_all_layer_ouputs(model,dat_paths,num):
num=num-1
curr_flair_data = get_mri_data(dat_paths[num]['flair'])
curr_flair_data_reshaped = np.reshape(curr_flair_data.astype('float32'), (1, 192, 512, 512, 1))
curr_label_lesion_data = get_mri_data(dat_paths[num]['label'])
curr_label_lesion_data_reshaped = np.reshape(curr_label_lesion_data.astype('float32'), (1, 192, 512, 512, 1))
inp = model.input # input placeholder
outputs = [layer.output for layer in model.layers] # all layer outputs
functor = K.function([inp, K.learning_phase()], outputs ) # evaluation function
# Testing
layer_outs = functor([curr_label_lesion_data_reshaped, 1.])
print(layer_outs)
generate_all_layer_ouputs(autoencoder_model,dataset_paths,6)
```
# Try out Batching... Requires a huge amount of VRAM
```
train_lesion_Flair_in_batches(lesion3dAutoencoder,dataset_paths,total_epochs,epochs_per_item, 2)
```
# **Kill session**
```
!pkill -9 -f ipykernel_launcher
```
```
# Libraries for R^2 visualization
from ipywidgets import interactive, IntSlider, FloatSlider
from math import floor, ceil
from sklearn.base import BaseEstimator, RegressorMixin
# Libraries for model building
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Library for working locally or Colab
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Applied-Modeling/master/data/'
# If you're working locally:
else:
DATA_PATH = '../data/'
```
# I. Wrangle Data
```
df = wrangle(DATA_PATH + 'elections/bread_peace_voting.csv')
```
# II. Split Data
**First** we need to split our **target vector** from our **feature matrix**.
```
```
**Second** we need to split our dataset into **training** and **test** sets.
Two strategies:
- Random train-test split using [`train_test_split`](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html). Generally we use 80% of the data for training, and 20% of the data for testing.
- If you have **timeseries**, then you need to do a "cutoff" split.
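A minimal sketch of the first strategy, on stand-in arrays rather than the election data:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy feature matrix and target (stand-ins for the real dataset)
X = np.arange(20).reshape(10, 2)
y = np.arange(10)

# 80% train / 20% test; random_state makes the split reproducible
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
```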
```
```
# III. Establish Baseline
```
```
# IV. Build Model
```
```
# V. Check Metrics
## Mean Absolute Error
The unit of measurement is the same as the unit of measurement for your target (in this case, vote share [%]).
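A toy check of the definition, using made-up vote-share predictions:

```python
from sklearn.metrics import mean_absolute_error

y_true = [50.0, 52.0, 48.0]
y_pred = [51.0, 50.0, 48.0]

# MAE = mean(|error|) = (1 + 2 + 0) / 3 = 1.0, in the same units as the target
print(mean_absolute_error(y_true, y_pred))  # 1.0
```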
```
```
## Root Mean Squared Error
The unit of measurement is the same as the unit of measurement for your target (in this case, vote share [%]).
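scikit-learn returns the *mean squared* error, so taking the square root recovers RMSE (toy numbers again):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [50.0, 52.0, 48.0]
y_pred = [51.0, 50.0, 48.0]

# RMSE = sqrt(mean(error^2)) = sqrt((1 + 4 + 0) / 3)
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
print(round(rmse, 3))  # 1.291
```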
```
```
## $R^2$ Score
TL;DR: usually ranges between 0 (no better than always predicting the mean) and 1 (perfect fit); it can go negative for models that do worse than the mean baseline.
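A quick toy check of both endpoints:

```python
import numpy as np
from sklearn.metrics import r2_score

y_true = np.array([50.0, 52.0, 48.0, 54.0])

# A perfect model scores 1.0
print(r2_score(y_true, y_true))  # 1.0

# Predicting the mean for every observation scores 0.0
baseline = np.full_like(y_true, y_true.mean())
print(r2_score(y_true, baseline))  # 0.0
```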
```
class BruteForceRegressor(BaseEstimator, RegressorMixin):
def __init__(self, m=0, b=0):
self.m = m
self.b = b
self.mean = 0
def fit(self, X, y):
self.mean = np.mean(y)
return self
def predict(self, X, return_mean=True):
if return_mean:
return [self.mean] * len(X)
else:
return X * self.m + self.b
def plot(slope, intercept):
# Assign data to variables
x = df['income']
y = df['incumbent_vote_share']
# Create figure
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,6))
# Set ax limits
mar = 0.2
x_lim = floor(x.min() - x.min()*mar), ceil(x.max() + x.min()*mar)
y_lim = floor(y.min() - y.min()*mar), ceil(y.max() + y.min()*mar)
# Instantiate and train model
bfr = BruteForceRegressor(slope, intercept)
bfr.fit(x, y)
# ax1
## Plot data
ax1.set_xlim(x_lim)
ax1.set_ylim(y_lim)
ax1.scatter(x, y)
## Plot base model
ax1.axhline(bfr.mean, color='orange', label='baseline model')
## Plot residual lines
y_base_pred = bfr.predict(x)
ss_base = mean_squared_error(y, y_base_pred) * len(y)
for x_i, y_i, yp_i in zip(x, y, y_base_pred):
ax1.plot([x_i, x_i], [y_i, yp_i],
color='gray', linestyle='--', alpha=0.75)
## Formatting
ax1.legend()
ax1.set_title(f'Sum of Squares: {np.round(ss_base, 2)}')
ax1.set_xlabel('Growth in Personal Incomes')
ax1.set_ylabel('Incumbent Party Vote Share [%]')
# ax2
ax2.set_xlim(x_lim)
ax2.set_ylim(y_lim)
## Plot data
ax2.scatter(x, y)
## Plot model
x_model = np.linspace(*ax2.get_xlim(), 10)
y_model = bfr.predict(x_model, return_mean=False)
ax2.plot(x_model, y_model, color='green', label='our model')
for x_coord, y_coord in zip(x, y):
ax2.plot([x_coord, x_coord], [y_coord, x_coord * slope + intercept],
color='gray', linestyle='--', alpha=0.75)
ss_ours = mean_squared_error(y, bfr.predict(x, return_mean=False)) * len(y)
## Formatting
ax2.legend()
ax2.set_title(f'Sum of Squares: {np.round(ss_ours, 2)}')
ax2.set_xlabel('Growth in Personal Incomes')
ax2.set_ylabel('Incumbent Party Vote Share [%]')
y = df['incumbent_vote_share']
slope_slider = FloatSlider(min=-5, max=5, step=0.5, value=0)
intercept_slider = FloatSlider(min=int(y.min()), max=y.max(), step=2, value=y.mean())
interactive(plot, slope=slope_slider, intercept=intercept_slider)
```
# VI. Communicate Results
**Challenge:** How can we find the coefficients and intercept for our `model`?
```
```
<div>
<img src="https://drive.google.com/uc?export=view&id=1vK33e_EqaHgBHcbRV_m38hx6IkG0blK_" width="350"/>
</div>
#**Artificial Intelligence - MSc**
##ET5003 - MACHINE LEARNING APPLICATIONS
###Instructor: Enrique Naredo
###ET5003_NLP_SpamClasiffier-2
### Spam Classification
[Spamming](https://en.wikipedia.org/wiki/Spamming) is the use of messaging systems to send multiple unsolicited messages (spam) to large numbers of recipients for the purpose of commercial advertising, for the purpose of non-commercial proselytizing, for any prohibited purpose (especially the fraudulent purpose of phishing), or simply sending the same message over and over to the same user.
Spam Classification: Deciding whether an email is spam or not.
## Imports
```
# standard libraries
import pandas as pd
import numpy as np
# Scikit-learn is an open source machine learning library
# that supports supervised and unsupervised learning
# https://scikit-learn.org/stable/
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import accuracy_score, confusion_matrix
# Regular expression operations
#https://docs.python.org/3/library/re.html
import re
# Natural Language Toolkit
# https://www.nltk.org/install.html
import nltk
# Stemming maps different forms of the same word to a common “stem”
# https://pypi.org/project/snowballstemmer/
from nltk.stem import SnowballStemmer
# https://www.nltk.org/book/ch02.html
from nltk.corpus import stopwords
```
## Step 1: Load dataset
```
# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')
# path to your (local/cloud) drive
path = '/content/drive/MyDrive/Colab Notebooks/Enrique/Data/spam/'
# load dataset
df = pd.read_csv(path+'spam.csv', encoding='latin-1')
df.rename(columns = {'v1':'class_label', 'v2':'message'}, inplace = True)
df.drop(['Unnamed: 2', 'Unnamed: 3', 'Unnamed: 4'], axis = 1, inplace = True)
# original dataset
df.head()
```
The dataset has 4825 ham messages and 747 spam messages.
```
# histogram
import seaborn as sns
sns.countplot(df['class_label'])
# explore dataset
vc = df['class_label'].value_counts()
print(vc)
```
This is an imbalanced dataset:
* The number of ham messages is much higher than those of spam.
* This can potentially cause our model to be biased.
* To fix this, we could resample our data to get an equal number of spam/ham messages.
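As an illustration of the resampling idea on a toy frame (the notebook itself balances the data with SMOTE later; the column names here mirror this dataset):

```python
import pandas as pd

# Toy imbalanced frame standing in for the SMS data
toy = pd.DataFrame({
    'class_label': ['ham'] * 8 + ['spam'] * 2,
    'message': ['msg {}'.format(i) for i in range(10)],
})

n_minority = toy['class_label'].value_counts().min()

# Undersample: keep all minority rows, sample an equal number of majority rows
balanced = (toy.groupby('class_label', group_keys=False)
               .apply(lambda g: g.sample(n_minority, random_state=0)))

print(balanced['class_label'].value_counts())  # 2 ham, 2 spam
```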
```
# convert class label to numeric
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
le.fit(df.class_label)
df2 = df
df2['class_label'] = le.transform(df.class_label)
df2.head()
# another histogram
df2.hist()
```
## Step 2: Pre-processing
Next, we’ll convert our DataFrame to a list, where every element of that list is one message. Then, we’ll join each element of our list into one big string of messages. The lowercase form of that string is the format required for word cloud creation.
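The list-and-join step described above can be sketched with a few stand-in messages:

```python
# Stand-in messages (the real notebook uses df['message'].tolist())
messages = ['Free entry NOW', 'Ok lar...', 'WINNER!! Claim your prize']

# Join every message into one big string, then lowercase it
# (the format expected by word-cloud libraries such as wordcloud.WordCloud)
big_string = ' '.join(messages).lower()
print(big_string)  # free entry now ok lar... winner!! claim your prize
```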
```
spam_list = df['message'].tolist()
spam_list
new_df = pd.DataFrame({'message':spam_list})
# removing everything except alphabets (regex=True is explicit for newer pandas)
new_df['clean_message'] = new_df['message'].str.replace("[^a-zA-Z#]", " ", regex=True)
# removing short words
short_word = 4
new_df['clean_message'] = new_df['clean_message'].apply(lambda x: ' '.join([w for w in x.split() if len(w)>short_word]))
# make all text lowercase
new_df['clean_message'] = new_df['clean_message'].apply(lambda x: x.lower())
import nltk
from nltk.corpus import stopwords
nltk.download('stopwords')
swords = stopwords.words('english')
# tokenization
tokenized_doc = new_df['clean_message'].apply(lambda x: x.split())
# remove stop-words
tokenized_doc = tokenized_doc.apply(lambda x: [item for item in x if item not in swords])
# de-tokenization
detokenized_doc = []
for i in range(len(new_df)):
t = ' '.join(tokenized_doc[i])
detokenized_doc.append(t)
new_df['clean_message'] = detokenized_doc
new_df.head()
```
## Step 3: TfidfVectorizer
**[TfidfVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)**
Convert a collection of raw documents to a matrix of TF-IDF features.
```
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer = TfidfVectorizer(stop_words='english', max_features= 300, max_df=0.5, smooth_idf=True)
print(vectorizer)
X = vectorizer.fit_transform(new_df['clean_message'])
X.shape
y = df['class_label']
y.shape
```
Handle imbalanced data with SMOTE
```
from imblearn.combine import SMOTETomek
smk = SMOTETomek()
# `fit_sample` was renamed to `fit_resample` in newer imbalanced-learn versions
X_bal, y_bal = smk.fit_resample(X, y)
# histogram
import seaborn as sns
sns.countplot(y_bal)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_bal, y_bal, test_size = 0.20, random_state = 0)
X_train.todense()
```
## Step 4: Learning
Training the classifier and making predictions on the test set
```
# create a model
MNB = MultinomialNB()
# fit to data
MNB.fit(X_train, y_train)
# testing the model
prediction_train = MNB.predict(X_train)
print('training prediction\t', prediction_train)
prediction_test = MNB.predict(X_test)
print('test prediction\t\t', prediction_test)
np.set_printoptions(suppress=True)
# Ham and Spam probabilities in test
class_prob = MNB.predict_proba(X_test)
print(class_prob)
# show emails classified as 'spam'
threshold = 0.5
spam_ind = np.where(class_prob[:,1]>threshold)[0]
```
## Step 5: Accuracy
```
# accuracy in training set
y_pred_train = prediction_train
print("Train Accuracy: "+str(accuracy_score(y_train, y_pred_train)))
# accuracy in test set (unseen data)
y_true = y_test
y_pred_test = prediction_test
print("Test Accuracy: "+str(accuracy_score(y_true, y_pred_test)))
# confusion matrix
conf_mat = confusion_matrix(y_true, y_pred_test)
print("Confusion Matrix\n", conf_mat)
import matplotlib.pyplot as plt
from sklearn.metrics import ConfusionMatrixDisplay
labels = ['Ham', 'Spam']
# ConfusionMatrixDisplay handles the tick labels, axis labels, and colorbar for us
disp = ConfusionMatrixDisplay(confusion_matrix=conf_mat, display_labels=labels)
disp.plot()
plt.title('Confusion matrix of the classifier')
plt.show()
```
Lambda School Data Science
*Unit 2, Sprint 1, Module 4*
---
# Logistic Regression
## Overview
We'll begin with the **majority class baseline.**
[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)
> A baseline for classification can be the most common class in the training dataset.
[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data
> For classification tasks, one good baseline is the _majority classifier,_ a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%.
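The 94% trap described in the quote can be reproduced in a few lines of plain Python (the class counts here are invented for illustration):

```python
# A dataset where only 6% of instances are positive
y_true = [0] * 94 + [1] * 6

# The "majority classifier" always predicts the most common training class
majority = max(set(y_true), key=y_true.count)
y_pred = [majority] * len(y_true)

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
print(accuracy)  # 0.94 -- the same score as the analyst's "94% accurate" model
```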
## Follow Along
Determine majority class
```
y_train.value_counts(normalize=True)
```
What if we guessed the majority class for every prediction?
```
majority_class = y_train.mode()[0]
y_pred = [majority_class] * len(y_train)
```
#### Use a classification metric: accuracy
[Classification metrics are different from regression metrics!](https://scikit-learn.org/stable/modules/model_evaluation.html)
- Don't use _regression_ metrics to evaluate _classification_ tasks.
- Don't use _classification_ metrics to evaluate _regression_ tasks.
[Accuracy](https://scikit-learn.org/stable/modules/model_evaluation.html#accuracy-score) is a common metric for classification. Accuracy is the ["proportion of correct classifications"](https://en.wikipedia.org/wiki/Confusion_matrix): the number of correct predictions divided by the total number of predictions.
What is the baseline accuracy if we guessed the majority class for every prediction?
```
from sklearn.metrics import accuracy_score
accuracy_score(y_train, y_pred)
y_pred = [majority_class] * len(y_val)
accuracy_score(y_val, y_pred)
# Using Sklearn DummyClassifier
from sklearn.dummy import DummyClassifier
# Fit the DummyClassifier
baseline = DummyClassifier(strategy='most_frequent')
baseline.fit(X_train, y_train)
# Make predictions on validation data
y_pred = baseline.predict(X_val)
accuracy_score(y_val, y_pred)
```
## Overview
To help us get an intuition for *Logistic* Regression, let's start by trying *Linear* Regression instead, and see what happens...
### Logistic Regression!
```
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver='lbfgs')
log_reg.fit(X_train_imputed, y_train)
print('Validation Accuracy', log_reg.score(X_val_imputed, y_val))
# The predictions look like this
log_reg.predict(X_val_imputed)
log_reg.predict(test_case)
log_reg.predict_proba(test_case)
# What's the math?
log_reg.coef_
log_reg.intercept_
# The logistic sigmoid "squishing" function, implemented to accept numpy arrays
import numpy as np
def sigmoid(x):
return 1 / (1 + np.e**(-x))
sigmoid(log_reg.intercept_ + np.dot(log_reg.coef_, np.transpose(test_case)))
```
So, clearly a more appropriate model in this situation! For more on the math, [see this Wikipedia example](https://en.wikipedia.org/wiki/Logistic_regression#Probability_of_passing_an_exam_versus_hours_of_study).
# Use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models
## Overview
Now that we have more intuition and interpretation of Logistic Regression, let's use it within a realistic, complete scikit-learn workflow, with more features and transformations.
## Follow Along
Select these features: `['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']`
(Why shouldn't we include the `Name` or `Ticket` features? What would happen here?)
Fit this sequence of transformers & estimator:
- [category_encoders.one_hot.OneHotEncoder](https://contrib.scikit-learn.org/categorical-encoding/onehot.html)
- [sklearn.impute.SimpleImputer](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html)
- [sklearn.preprocessing.StandardScaler](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html)
- [sklearn.linear_model.LogisticRegressionCV](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegressionCV.html)
Get validation accuracy.
```
import category_encoders as ce
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegressionCV
from sklearn.preprocessing import StandardScaler
target = 'Survived'
features = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
print(X_train.shape, X_val.shape)
encoder = ce.OneHotEncoder(use_cat_names=True)
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
print(X_train_encoded.shape, X_val_encoded.shape)
imputer = SimpleImputer(strategy='mean')
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_val_imputed = imputer.transform(X_val_encoded)
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_val_scaled = scaler.transform(X_val_imputed)
model = LogisticRegressionCV(cv=5, n_jobs=-1, random_state=42)
model.fit(X_train_scaled, y_train)
print('Validation Accuracy', model.score(X_val_scaled, y_val))
```
Plot coefficients:
```
%matplotlib inline
coefficients = pd.Series(model.coef_[0], X_train_encoded.columns)
coefficients.sort_values().plot.barh();
```
Generate [Kaggle](https://www.kaggle.com/c/titanic) submission:
```
X_test = test[features]
X_test_encoded = encoder.transform(X_test)
X_test_imputed = imputer.transform(X_test_encoded)
X_test_scaled = scaler.transform(X_test_imputed)
y_pred = model.predict(X_test_scaled)
submission = test[['PassengerId']].copy()
submission['Survived'] = y_pred
submission.to_csv('titanic-submission-01.csv', index=False)
```
# Vector-space models: dimensionality reduction
```
__author__ = "Christopher Potts"
__version__ = "CS224u, Stanford, Spring 2020"
```
## Contents
1. [Overview](#Overview)
1. [Set-up](#Set-up)
1. [Latent Semantic Analysis](#Latent-Semantic-Analysis)
1. [Overview of the LSA method](#Overview-of-the-LSA-method)
1. [Motivating example for LSA](#Motivating-example-for-LSA)
1. [Applying LSA to real VSMs](#Applying-LSA-to-real-VSMs)
1. [Other resources for matrix factorization](#Other-resources-for-matrix-factorization)
1. [GloVe](#GloVe)
1. [Overview of the GloVe method](#Overview-of-the-GloVe-method)
1. [GloVe implementation notes](#GloVe-implementation-notes)
1. [Applying GloVe to our motivating example](#Applying-GloVe-to-our-motivating-example)
1. [Testing the GloVe implementation](#Testing-the-GloVe-implementation)
1. [Applying GloVe to real VSMs](#Applying-GloVe-to-real-VSMs)
1. [Autoencoders](#Autoencoders)
1. [Overview of the autoencoder method](#Overview-of-the-autoencoder-method)
1. [Testing the autoencoder implementation](#Testing-the-autoencoder-implementation)
1. [Applying autoencoders to real VSMs](#Applying-autoencoders-to-real-VSMs)
1. [word2vec](#word2vec)
1. [Training data](#Training-data)
1. [Basic skip-gram](#Basic-skip-gram)
1. [Skip-gram with noise contrastive estimation ](#Skip-gram-with-noise-contrastive-estimation-)
1. [word2vec resources](#word2vec-resources)
1. [Other methods](#Other-methods)
1. [Exploratory exercises](#Exploratory-exercises)
## Overview
The matrix weighting schemes reviewed in the first notebook for this unit deliver solid results. However, they are not capable of capturing higher-order associations in the data.
With dimensionality reduction, the goal is to eliminate correlations in the input VSM and capture such higher-order notions of co-occurrence, thereby improving the overall space.
As a motivating example, consider the adjectives _gnarly_ and _wicked_ used as slang positive adjectives. Since both are positive, we expect them to be similar in a good VSM. However, at least stereotypically, _gnarly_ is Californian and _wicked_ is Bostonian. Thus, they are unlikely to occur often in the same texts, and so the methods we've reviewed so far will not be able to model their similarity.
Dimensionality reduction techniques are often capable of capturing such semantic similarities (and have the added advantage of shrinking the size of our data structures).
## Set-up
* Make sure your environment meets all the requirements for [the cs224u repository](https://github.com/cgpotts/cs224u/). For help getting set-up, see [setup.ipynb](setup.ipynb).
* Make sure you've downloaded [the data distribution for this course](http://web.stanford.edu/class/cs224u/data/data.tgz), unpacked it, and placed it in the current directory (or wherever you point `DATA_HOME` to below).
```
from mittens import GloVe
import numpy as np
import os
import pandas as pd
import scipy.stats
from torch_autoencoder import TorchAutoencoder
import utils
import vsm
# Set all the random seeds for reproducibility:
utils.fix_random_seeds()
DATA_HOME = os.path.join('data', 'vsmdata')
imdb5 = pd.read_csv(
os.path.join(DATA_HOME, 'imdb_window5-scaled.csv.gz'), index_col=0)
imdb20 = pd.read_csv(
os.path.join(DATA_HOME, 'imdb_window20-flat.csv.gz'), index_col=0)
giga5 = pd.read_csv(
os.path.join(DATA_HOME, 'giga_window5-scaled.csv.gz'), index_col=0)
giga20 = pd.read_csv(
os.path.join(DATA_HOME, 'giga_window20-flat.csv.gz'), index_col=0)
```
## Latent Semantic Analysis
Latent Semantic Analysis (LSA) is a prominent dimensionality reduction technique. It is an application of __truncated singular value decomposition__ (SVD) and so uses only techniques from linear algebra (no machine learning needed).
### Overview of the LSA method
The central mathematical result is that, for any matrix of real numbers $X$ of dimension $m \times n$, there is a factorization of $X$ into matrices $T$, $S$, and $D$ such that
$$X_{m \times n} = T_{m \times m}S_{m\times m}D_{n \times m}^{\top}$$
The matrices $T$ and $D$ are __orthonormal__ – their columns are length-normalized and orthogonal to one another (that is, they each have cosine distance of $1$ from each other). The singular-value matrix $S$ is a diagonal matrix arranged by size, so that the first dimension corresponds to the greatest source of variability in the data, followed by the second, and so on.
Of course, we don't want to factorize and rebuild the original matrix, as that wouldn't get us anywhere. The __truncation__ part means that we include only the top $k$ dimensions of $S$. Given our row-oriented perspective on these matrices, this means using
$$T[1{:}m, 1{:}k]S[1{:}k, 1{:}k]$$
which gives us a version of $T$ that includes only the top $k$ dimensions of variation.
To build up intuitions, imagine that everyone on the Stanford campus is associated with a 3d point representing their position: $x$ is east–west, $y$ is north–south, and $z$ is zenith–nadir. Since the campus is spread out and has relatively few deep basements and tall buildings, the top two dimensions of variation will be $x$ and $y$, and the 2d truncated SVD of this space will leave $z$ out. This will, for example, capture the sense in which someone at the top of Hoover Tower is close to someone at its base.
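The truncation step itself is a few lines of NumPy (the matrix here is arbitrary toy data, not the campus example):

```python
import numpy as np

X = np.array([[1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [1., 1., 1., 1.]])

# Full (compact) SVD: X = T @ diag(s) @ D.T
T, s, Dt = np.linalg.svd(X, full_matrices=False)
assert np.allclose((T * s) @ Dt, X)   # the factorization reconstructs X exactly

# Truncate: keep only the top k dimensions of variation
k = 2
X_k = T[:, :k] * s[:k]                # i.e., T[1:m, 1:k] S[1:k, 1:k]
print(X_k.shape)                      # (3, 2): same rows, reduced columns
```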
### Motivating example for LSA
We can also return to our original motivating example of _wicked_ and _gnarly_. Here is a matrix reflecting those assumptions:
```
gnarly_df = pd.DataFrame(
np.array([
[1,0,1,0,0,0],
[0,1,0,1,0,0],
[1,1,1,1,0,0],
[0,0,0,0,1,1],
[0,0,0,0,0,1]], dtype='float64'),
index=['gnarly', 'wicked', 'awesome', 'lame', 'terrible'])
gnarly_df
```
No column context includes both _gnarly_ and _wicked_ together so our count matrix places them far apart:
```
vsm.neighbors('gnarly', gnarly_df)
```
Reweighting doesn't help. For example, here is the attempt with Positive PMI:
```
vsm.neighbors('gnarly', vsm.pmi(gnarly_df))
```
However, both words tend to occur with _awesome_ and not with _lame_ or _terrible_, so there is an important sense in which they are similar. LSA to the rescue:
```
gnarly_lsa_df = vsm.lsa(gnarly_df, k=2)
vsm.neighbors('gnarly', gnarly_lsa_df)
```
### Applying LSA to real VSMs
Here's an example that begins to convey the effect that this can have empirically.
First, the original count matrix:
```
vsm.neighbors('superb', imdb5).head()
```
And then LSA with $k=100$:
```
imdb5_svd = vsm.lsa(imdb5, k=100)
vsm.neighbors('superb', imdb5_svd).head()
```
A common pattern in the literature is to apply PMI first. The PMI values tend to give the count matrix a normal (Gaussian) distribution that better satisfies the assumptions underlying SVD:
```
imdb5_pmi = vsm.pmi(imdb5, positive=False)
imdb5_pmi_svd = vsm.lsa(imdb5_pmi, k=100)
vsm.neighbors('superb', imdb5_pmi_svd).head()
```
### Other resources for matrix factorization
The [sklearn.decomposition](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition) module contains an implementation of LSA ([TruncatedSVD](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.TruncatedSVD.html#sklearn.decomposition.TruncatedSVD)) that you might want to switch to for real experiments:
* The `sklearn` version is more flexible than the above in that it can operate on both dense matrices (Numpy arrays) and sparse matrices (from Scipy).
* The `sklearn` version will make it easy to try out other dimensionality reduction methods in your own code; [Principal Component Analysis (PCA)](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html#sklearn.decomposition.PCA) and [Non-Negative Matrix Factorization (NMF)](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.NMF.html#sklearn.decomposition.NMF) are closely related methods that are worth a look.
## GloVe
### Overview of the GloVe method
[Pennington et al. (2014)](http://www.aclweb.org/anthology/D/D14/D14-1162.pdf) introduce an objective function for semantic word representations. Roughly speaking, the objective is to learn vectors for words $w_{i}$ and $w_{j}$ such that their dot product is proportional to their probability of co-occurrence:
$$w_{i}^{\top}\widetilde{w}_{k} + b_{i} + \widetilde{b}_{k} = \log(X_{ik})$$
The paper is exceptionally good at motivating this objective from first principles. In their equation (6), they define
$$w_{i}^{\top}\widetilde{w}_{k} = \log(P_{ik}) = \log(X_{ik}) - \log(X_{i})$$
If we allow that the rows and columns can be different, then we would do
$$w_{i}^{\top}\widetilde{w}_{k} = \log(P_{ik}) = \log(X_{ik}) - \log(X_{i} \cdot X_{*k})$$
where, as in the paper, $X_{i}$ is the sum of the values in row $i$, and $X_{*k}$ is the sum of the values in column $k$.
The rightmost expression is PMI by the equivalence $\log(\frac{x}{y}) = \log(x) - \log(y)$, and hence we can see GloVe as aiming to make the dot product of two learned vectors equal to the PMI!
The full model is a weighting of this objective:
$$\sum_{i, j=1}^{|V|} f\left(X_{ij}\right)
\left(w_i^\top \widetilde{w}_j + b_i + \widetilde{b}_j - \log X_{ij}\right)^2$$
where $V$ is the vocabulary and $f$ is a scaling factor designed to diminish the impact of very large co-occurrence counts:
$$f(x) =
\begin{cases}
(x/x_{\max})^{\alpha} & \textrm{if } x < x_{\max} \\
1 & \textrm{otherwise}
\end{cases}$$
Typically, $\alpha$ is set to $0.75$ and $x_{\max}$ to $100$ (though it is worth assessing how many of your non-zero counts are above this; in dense word $\times$ word matrices, you could be flattening more than you want to).
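A vectorized sketch of this weighting function with the usual defaults (the name `glove_weight` is mine, not part of `vsm`):

```python
import numpy as np

def glove_weight(X, xmax=100.0, alpha=0.75):
    """GloVe scaling: (x / xmax)**alpha below xmax, capped at 1 above it."""
    return np.minimum((np.asarray(X, dtype=float) / xmax) ** alpha, 1.0)

counts = np.array([1.0, 50.0, 100.0, 500.0])
print(glove_weight(counts))
# small counts are damped; everything at or above xmax gets full weight 1.0
```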
### GloVe implementation notes
* The implementation in `vsm.glove` is the most stripped-down, bare-bones version of the GloVe method I could think of. As such, it is quite slow.
* The required [mittens](https://github.com/roamanalytics/mittens) package includes a vectorized implementation that is much, much faster, so we'll mainly use that.
* For really large jobs, [the official C implementation released by the GloVe team](http://nlp.stanford.edu/projects/glove/) is probably the best choice.
### Applying GloVe to our motivating example
GloVe should do well on our _gnarly/wicked_ evaluation, though you will see a lot of variation due to the small size of this VSM:
```
gnarly_glove = vsm.glove(gnarly_df, n=5, max_iter=1000)
vsm.neighbors('gnarly', gnarly_glove)
```
### Testing the GloVe implementation
It is not easy to analyze GloVe values derived from real data, but the following little simulation suggests that `vsm.glove` is working as advertised: it does seem to reliably deliver vectors whose dot products are proportional to the log co-occurrence probability:
```
glove_test_count_df = pd.DataFrame(
np.array([
[10.0, 2.0, 3.0, 4.0],
[ 2.0, 10.0, 4.0, 1.0],
[ 3.0, 4.0, 10.0, 2.0],
[ 4.0, 1.0, 2.0, 10.0]]),
index=['A', 'B', 'C', 'D'],
columns=['A', 'B', 'C', 'D'])
glove_test_df = vsm.glove(glove_test_count_df, max_iter=1000, n=4)
def correlation_test(true, pred):
mask = true > 0
M = pred.dot(pred.T)
with np.errstate(divide='ignore'):
log_cooccur = np.log(true)
log_cooccur[np.isinf(log_cooccur)] = 0.0
row_log_prob = np.log(true.sum(axis=1))
row_log_prob = np.outer(row_log_prob, np.ones(true.shape[1]))
prob = log_cooccur - row_log_prob
return np.corrcoef(prob[mask], M[mask])[0, 1]
correlation_test(glove_test_count_df.values, glove_test_df.values)
```
### Applying GloVe to real VSMs
The `vsm.glove` implementation is too slow to use on real matrices. The distribution in the `mittens` package is significantly faster, making its use possible even without a GPU (and it will be very fast indeed on a GPU machine):
```
glove_model = GloVe()
imdb5_glv = glove_model.fit(imdb5.values)
imdb5_glv = pd.DataFrame(imdb5_glv, index=imdb5.index)
vsm.neighbors('superb', imdb5_glv).head()
```
## Autoencoders
An autoencoder is a machine learning model that seeks to learn parameters that predict its own input. This is meaningful when there are intermediate representations that have lower dimensionality than the inputs. These provide a reduced-dimensional view of the data akin to those learned by LSA, but now we have a lot more design choices and a lot more potential to learn higher-order associations in the underlying data.
### Overview of the autoencoder method
The module `torch_autoencoder` uses PyTorch to implement a simple one-layer autoencoder:
$$
\begin{align}
h &= \mathbf{f}(xW + b_{h}) \\
\widehat{x} &= hW^{\top} + b_{x}
\end{align}$$
Here, we assume that the hidden representation $h$ has a low dimensionality like 100, and that $\mathbf{f}$ is a non-linear activation function (the default for `TorchAutoencoder` is `tanh`). These are the major design choices internal to the network. It might also be meaningful to assume that there are two matrices of weights $W_{xh}$ and $W_{hx}$, rather than using $W^{\top}$ for the output step.
The objective function for autoencoders will implement some kind of assessment of the distance between the inputs and their predicted outputs. For example, one could use the one-half mean squared error:
$$\frac{1}{m}\sum_{i=1}^{m} \frac{1}{2}(\widehat{X[i]} - X[i])^{2}$$
where $X$ is the input matrix of examples (dimension $m \times n$) and $X[i]$ corresponds to the $i$th example.
When you call the `fit` method of `TorchAutoencoder`, it returns the matrix of hidden representations $h$, which is the new embedding space: same row count as the input, but with the column count set by the `hidden_dim` parameter.
For much more on autoencoders, see the 'Autoencoders' chapter of [Goodfellow et al. 2016](http://www.deeplearningbook.org).
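The forward pass and objective above can be sketched directly in NumPy (random, untrained weights; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.RandomState(0)
m, n, hidden_dim = 20, 10, 4            # examples, input dim, hidden dim

X = rng.randn(m, n)
W = rng.randn(n, hidden_dim) * 0.1      # tied weights: W for encoding, W.T for decoding
b_h = np.zeros(hidden_dim)
b_x = np.zeros(n)

h = np.tanh(X @ W + b_h)                # h = f(xW + b_h), the reduced representation
X_hat = h @ W.T + b_x                   # x_hat = h W^T + b_x, the reconstruction

loss = (0.5 * (X_hat - X) ** 2).mean()  # one-half mean squared error
print(h.shape, loss)                    # (20, 4) plus the (untrained) loss
```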
### Testing the autoencoder implementation
Here's an evaluation that is meant to test the autoencoder implementation – we expect it to be able to fully encode the input matrix because we know its rank is equal to the dimensionality of the hidden representation.
```
def randmatrix(m, n, sigma=0.1, mu=0):
return sigma * np.random.randn(m, n) + mu
def autoencoder_evaluation(nrow=1000, ncol=100, rank=20, max_iter=20000):
    """This is an evaluation in which `TorchAutoencoder` should be able
    to perfectly reconstruct the input data, because the
    hidden representations have the same dimensionality as
    the rank of the input matrix.
    """
X = randmatrix(nrow, rank).dot(randmatrix(rank, ncol))
ae = TorchAutoencoder(hidden_dim=rank, max_iter=max_iter)
ae.fit(X)
X_pred = ae.predict(X)
mse = (0.5 * (X_pred - X)**2).mean()
    return X, X_pred, mse

ae_max_iter = 100
_, _, ae_mse = autoencoder_evaluation(max_iter=ae_max_iter)
print("Autoencoder evaluation MSE after {0} iterations: {1:0.04f}".format(ae_max_iter, ae_mse))
```
### Applying autoencoders to real VSMs
You can apply the autoencoder directly to the count matrix, but this could interact very badly with the internal activation function: if the counts are all very high or very low, then everything might get pushed irrevocably towards the extreme values of the activation.
Thus, it's a good idea to first normalize the values somehow. Here, I use `vsm.length_norm`:
```
imdb5_l2 = imdb5.apply(vsm.length_norm, axis=1)
imdb5_l2_ae = TorchAutoencoder(
max_iter=100, hidden_dim=50, eta=0.001).fit(imdb5_l2)
vsm.neighbors('superb', imdb5_l2_ae).head()
```
This is very slow and seems not to work all that well. To speed things up, one can first apply LSA or similar:
```
imdb5_l2_svd100 = vsm.lsa(imdb5_l2, k=100)
imdb_l2_svd100_ae = TorchAutoencoder(
max_iter=1000, hidden_dim=50, eta=0.01).fit(imdb5_l2_svd100)
vsm.neighbors('superb', imdb_l2_svd100_ae).head()
```
## word2vec
The label __word2vec__ picks out a family of models in which the embedding for a word $w$ is trained to predict the words that co-occur with $w$. This intuition can be cashed out in numerous ways. Here, we review just the __skip-gram model__, due to [Mikolov et al. 2013](https://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality).
### Training data
The most natural starting point is to transform a corpus into a supervised data set by mapping each word to a subset (maybe all) of the words that it occurs with in a given window. Schematically:
__Corpus__: `it was the best of times, it was the worst of times, ...`
With window size 2:
```
(it, was)
(it, the)
(was, it)
(was, the)
(was, best)
(the, was)
(the, it)
(the, best)
(the, of)
...
```
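The pair-generation step above can be written as a short helper (the function `skipgram_pairs` is illustrative, not part of the course code):

```python
def skipgram_pairs(tokens, window=2):
    """Map each word to the words within `window` positions of it."""
    pairs = []
    for i, target in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

corpus = "it was the best of times".split()
print(skipgram_pairs(corpus)[:5])
# [('it', 'was'), ('it', 'the'), ('was', 'it'), ('was', 'the'), ('was', 'best')]
```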
### Basic skip-gram
The basic skip-gram model estimates the probability of an input–output pair $(a, b)$ as
$$P(b \mid a) = \frac{\exp(x_{a}w_{b})}{\sum_{b'\in V}\exp(x_{a}w_{b'})}$$
where $x_{a}$ is the row (word) vector representation of word $a$ and $w_{b}$ is the column (context) vector representation of word $b$. The objective is to minimize the following quantity:
$$
-\sum_{i=1}^{m}\sum_{k=1}^{|V|}
\textbf{1}\{c_{i}=k\}
\log
\frac{
\exp(x_{i}w_{k})
}{
\sum_{j=1}^{|V|}\exp(x_{i}w_{j})
}$$
where $V$ is the vocabulary.
The inputs $x_{i}$ are the word representations, which get updated during training, and the outputs are one-hot vectors $c$. For example, if `was` is the 560th element in the vocab, then the output $c$ for the first example in the corpus above would be a vector of all $0$s except for a $1$ in the 560th position. $x$ would be the representation of `it` in the embedding space.
The distribution over the entire output space for a given input word $a$ is thus a standard softmax classifier; here we add a bias term for good measure:
$$c = \textbf{softmax}(x_{a}W + b)$$
If we think of this model as taking the entire matrix $X$ as input all at once, then it becomes
$$c = \textbf{softmax}(XW + b)$$
and it is now very clear that we are back to the core insight that runs through all of our reweighting and dimensionality reduction methods: we have a word matrix $X$ and a context matrix $W$, and we are trying to push the dot products of these two embeddings in a specific direction: here, to maximize the likelihood of the observed co-occurrences in the corpus.
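A toy NumPy version of $c = \textbf{softmax}(XW + b)$, with made-up small dimensions:

```python
import numpy as np

rng = np.random.RandomState(0)
vocab_size, dim = 6, 3

X = rng.randn(vocab_size, dim)    # word (input) embeddings
W = rng.randn(dim, vocab_size)    # context (output) embeddings
b = np.zeros(vocab_size)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # subtract max for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

C = softmax(X @ W + b)            # one distribution over contexts per input word
print(C.shape)                    # (6, 6); each row sums to 1
```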
### Skip-gram with noise contrastive estimation
Training the basic skip-gram model directly is extremely expensive for large vocabularies, because $W$, $b$, and the outputs $c$ get so large.
A straightforward way to address this is to change the objective to use __noise contrastive estimation__ (negative sampling). Where $\mathcal{D}$ is the original training corpus and $\mathcal{D}'$ is a sample of pairs not in the corpus, we minimize
$$\sum_{a, b \in \mathcal{D}}-\log\sigma(x_{a}w_{b}) + \sum_{a, b \in \mathcal{D}'}\log\sigma(x_{a}w_{b})$$
with $\sigma$ the sigmoid activation function $\frac{1}{1 + \exp(-x)}$.
The advice of Mikolov et al. is to sample $\mathcal{D}'$ proportional to a scaling of the frequency distribution of the underlying vocabulary in the corpus:
$$P(w) = \frac{\textbf{count}(w)^{0.75}}{\sum_{w'\in V} \textbf{count}(w')^{0.75}}$$
where $V$ is the vocabulary.
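The effect of the $0.75$ exponent is easy to see with a toy frequency table (the counts here are invented):

```python
import numpy as np

counts = np.array([1000.0, 100.0, 10.0, 1.0])    # raw corpus frequencies
p_raw = counts / counts.sum()                    # unscaled unigram distribution
p_neg = counts ** 0.75 / (counts ** 0.75).sum()  # scaled sampling distribution

print(p_neg)
# rare words get relatively more probability mass than under p_raw
```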
Although this is a substantively different objective from the previous one, Mikolov et al. (2013) say that it should approximate it, and it builds on the same insight about words and their contexts. See [Levy and Goldberg 2014](http://papers.nips.cc/paper/5477-neural-word-embedding-as-implicit-matrix-factorization) for a proof that this objective reduces to PMI shifted by a constant value. See also [Cotterell et al. 2017](https://aclanthology.coli.uni-saarland.de/papers/E17-2028/e17-2028) for an interpretation of this model as a variant of PCA.
### word2vec resources
* In the usual presentation, word2vec training involves looping repeatedly over the sequence of tokens in the corpus, sampling from the context window of each word to create the positive training pairs. I assume that this same process could be modeled by sampling (row, column) index pairs from our count matrices proportional to their cell values. However, I couldn't get this to work well. I'd be grateful if someone got it to work or figured out why it won't!
* Luckily, there are numerous excellent resources for word2vec. [The TensorFlow tutorial Vector representations of words](https://www.tensorflow.org/tutorials/word2vec) is very clear and links to code that is easy to work with. Because TensorFlow has a built-in loss function called `tf.nn.nce_loss`, it is especially simple to define these models – one pretty much just sets up an embedding $X$, a context matrix $W$, and a bias $b$, and then feeds them plus a training batch to the loss function.
* The excellent [Gensim package](https://radimrehurek.com/gensim/) has an implementation that handles the scalability issues related to word2vec.
## Other methods
Learning word representations is one of the most active areas in NLP right now, so I can't hope to offer a comprehensive summary. I'll settle instead for identifying some overall trends and methods:
* The LexVec model of [Salle et al. 2016](https://aclanthology.coli.uni-saarland.de/papers/P16-2068/p16-2068) combines the core insight of GloVe (learn vectors that approximate PMI) with the insight from word2vec that we should additionally try to push words that don't appear together farther apart in the VSM. (GloVe simply ignores 0 count cells and so can't do this.)
* There is growing awareness that many apparently diverse models can be expressed as matrix factorization methods like SVD/LSA. See especially
[Singh and Gordon 2008](http://www.cs.cmu.edu/~ggordon/singh-gordon-unified-factorization-ecml.pdf),
[Levy and Goldberg 2014](http://papers.nips.cc/paper/5477-neural-word-embedding-as-implicit-matrix-factorization), [Cotterell et al. 2017](https://www.aclweb.org/anthology/E17-2028/).
* Subword modeling ([reviewed briefly in the previous notebook](vsm_01_distributional.ipynb#Subword-information)) is increasingly yielding dividends. (It would already be central if most of NLP focused on languages with complex morphology!) Check out the papers at the Subword and Character-Level Models for NLP Workshops: [SCLeM 2017](https://sites.google.com/view/sclem2017/home), [SCLeM 2018](https://sites.google.com/view/sclem2018/home).
* Contextualized word representations have proven valuable in many contexts. These methods do not provide representations for individual words, but rather represent them in their linguistic context. This creates space for modeling how word senses vary depending on their context of use. We will study these methods later in the quarter, mainly in the context of identifying ways that might achieve better results on your projects.
## Exploratory exercises
These are largely meant to give you a feel for the material, but some of them could lead to projects and help you with future work for the course. These are not for credit.
1. Try out some pipelines of reweighting, `vsm.lsa` at various dimensions, and `TorchAutoencoder` to see which seems best according to your sampling around with `vsm.neighbors` and high-level visualization with `vsm.tsne_viz`. Feel free to use other factorization methods defined in [sklearn.decomposition](http://scikit-learn.org/stable/modules/classes.html#module-sklearn.decomposition) as well.
1. What happens if you set `k=1` using `vsm.lsa`? What do the results look like then? What do you think this first (and now only) dimension is capturing?
1. Modify `vsm.glove` so that it uses [the AdaGrad optimization method](http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf) as in the original paper. It's fine to use [the authors' implementation](http://nlp.stanford.edu/projects/glove/), [Jon Gauthier's implementation](http://www.foldl.me/2014/glove-python/), or the [mittens Numpy implementation](https://github.com/roamanalytics/mittens/blob/master/mittens/np_mittens.py) as references, but you might enjoy the challenge of doing this with no peeking at their code.
<a href="https://colab.research.google.com/github/jarek-pawlowski/machine-learning-applications/blob/main/ecg_classification.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Heart beats classification problem
A typical task for applied Machine Learning in medicine is the automatic classification of signals from diagnostic devices such as ECG or EEG.
Typical pipeline:
- detect QRS complexes (beats)
- classify them:
> - normal beat N
> - arrhythmia, e.g. *ventricular* V or *supraventricular* S arrhythmia, or *atrial fibrillation* AF


a couple of links:
- [exemplary challenge from Physionet](https://physionet.org/content/challenge-2017/1.0.0/)
- [some recent paper on ECG classification](https://doi.org/10.1016/j.knosys.2020.106589)
## our challenge: classify beats as normal or abnormal (arrhythmia)
- we will utilize signals from the **svdb** database, and grab subsequent beats (data preprocessing)
- then construct a binary classifier using NN, decision trees, ensemble methods, and SVM or Naive Bayes
# Dataset preparation
1. Download ecg waves from **svdb** database provided by *PhysioNet*
2. Divide signals into samples, each containing a single heartbeat (with a window size of 96 points; *sampling rate* = 128 samples/s)
3. Take only samples annotated as 'N' (normal beat), or 'S' and 'V' (arrhythmias)
```
import os
import numpy as np
# install PhysioNet ecg data package
!pip install wfdb
import wfdb
# list of available datasets
dbs = wfdb.get_dbs()
display(dbs)
# we choose svdb
svdb_dir = os.path.join(os.getcwd(), 'svdb_dir')
wfdb.dl_database('svdb', dl_dir=svdb_dir)
# Display the downloaded content
svdb_in_files = [os.path.splitext(f)[0] for f in os.listdir(svdb_dir) if f.endswith('.dat')]
print(svdb_in_files)
time_window = 48
all_beats = []
all_annotations = []
for in_file in svdb_in_files:
    print('...processing...' + in_file + '...file')
    signal, fields = wfdb.rdsamp(os.path.join(svdb_dir, in_file), channels=[0])
    annotations = wfdb.rdann(os.path.join(svdb_dir, in_file), 'atr')
    signal = np.array(signal).flatten()
    # grab subsequent heartbeats within [position-48, position+48] window
    beats = np.zeros((len(annotations.sample[5:-5]), time_window*2))
    # note that we remove first and last few beats to ensure that all beats have equal lengths
    for i, ann_position in enumerate(annotations.sample[5:-5]):
        beats[i] = signal[ann_position-time_window:ann_position+time_window]
    all_beats.append(beats)
    # consequently, we remove first and last few annotations
    all_annotations.append(annotations.symbol[5:-5])
all_beats = np.concatenate(all_beats)
all_annotations = np.concatenate(all_annotations)
# check which annotations are usable for us, are of N or S or V class
indices = [i for i, ann in enumerate(all_annotations) if ann in {'N','S','V'}]
# and get only these
all_beats = all_beats[indices]
all_annotations = np.array([all_annotations[i] for i in indices])
# print data statistics
print(all_beats.shape, all_annotations.shape)
print('no of N beats: ' + str(np.count_nonzero(all_annotations == 'N')))
print('no of S beats: ' + str(np.count_nonzero(all_annotations == 'S')))
print('no of V beats: ' + str(np.count_nonzero(all_annotations == 'V')))
# show example samples
!pip install matplotlib==3.1.3
import matplotlib.pyplot as plt
fig, ax = plt.subplots(1,3)
fig.set_size_inches(15, 3)
plt.subplots_adjust(wspace=0.2)
print(all_annotations[:100])
sample_number = [0,6,8]
for i, sn in enumerate(sample_number):
    ax[i].plot(all_beats[sn])
    ax[i].set(xlabel='time', ylabel='ecg signal', title='beat type ' + all_annotations[sn])
    ax[i].grid()
plt.show()
```
# Experiments
0. Preliminaries
> - Divide the dataset into train/validation/test subsets, and normalize each of them.
> - Define classification accuracy metrics (dataset is imbalanced)
>>Confusion matrix
```
____Prediction
T | n s v
r |N Nn Ns Nv
u |S Sn Ss Sv
t |V Vn Vs Vv
h |
```
>> - Total accuracy
$Acc_T = \frac{Nn+Ss+Vv}{\Sigma_N+\Sigma_S+\Sigma_V}$,
>> - Arrhythmia accuracy (S or V cases are more important to be detected):
$Acc_A = \frac{Ss+Vv}{\Sigma_S+\Sigma_V}$,
>> - $\Sigma_N=Nn+Ns+Nv$, $\Sigma_S=Sn+Ss+Sv$,
$\Sigma_V=Vn+Vs+Vv$
1. Standard classifiers: *naive Bayes* and *SVM*
2. Decision Tree with optimized max_depth
3. Random Forest with vector of features
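The two metrics defined above can be computed directly from a 3x3 confusion matrix. A minimal sketch, assuming the row/column layout shown above (rows = true class, columns = predicted class, in N, S, V order); the `accuracies` helper is illustrative, not part of the lab code:

```python
# Compute Acc_T and Acc_A from a 3x3 confusion matrix
# cm[i][j] = number of class-i beats predicted as class j, classes ordered N, S, V
def accuracies(cm):
    total = sum(sum(row) for row in cm)
    correct = sum(cm[i][i] for i in range(3))
    acc_t = correct / total
    # arrhythmia accuracy: only correctly classified S and V beats count
    arr_total = sum(sum(cm[i]) for i in (1, 2))
    arr_correct = cm[1][1] + cm[2][2]
    acc_a = arr_correct / arr_total
    return acc_t, acc_a
```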
```
# prepare datasets and define error metrics
from sklearn import preprocessing
from sklearn.model_selection import train_test_split
# to simplify experiments and speedup training
# we take only some part of the whole dataset
X, y = all_beats[::10], all_annotations[::10]
# train/validation/test set splitting
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X_train, y_train, test_size=0.15/0.85, random_state=0)
print(len(y_train), len(y_val), len(y_test))
# perform data normalization: z = (x - u)/s
scaler = preprocessing.StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
# same for the validation subset (reusing the scaler fitted on the training set,
# so all subsets share one normalization)
X_val = scaler.transform(X_val)
# and for the test subset
X_test = scaler.transform(X_test)
# define accuracy
def calculate_accuracy(y_pred, y_gt, comment='', printout=True):
    acc_t = np.count_nonzero(y_pred == y_gt)/len(y_gt)
    acc_a = np.count_nonzero(
        np.logical_and(y_pred == y_gt, y_gt != 'N'))/np.count_nonzero(y_gt != 'N')
    if printout is True:
        print('-----------------------------------')
        print(comment)
        print('Total accuracy, Acc_T = {:.4f}'.format(acc_t))
        print('Arrhythmia accuracy, Acc_A = {:.4f}'.format(acc_a))
        print('-----------------------------------')
    else:
        return acc_t, acc_a
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
gnb = GaussianNB()
y_pred = gnb.fit(X_train, y_train).predict(X_test)
calculate_accuracy(y_pred, y_test, comment='naive Bayes classifier')
svc = SVC()
y_pred = svc.fit(X_train, y_train).predict(X_test)
calculate_accuracy(y_pred, y_test, comment='SVM classifier')
svc = SVC(class_weight='balanced')
y_pred = svc.fit(X_train, y_train).predict(X_test)
calculate_accuracy(y_pred, y_test, comment='balanced SVM classifier')
```
Summary of this part:
1. The goal is to maximize both metrics, Acc_T and Acc_A, at the same time
2. Naive Bayes performs rather poorly
> - problem with data imbalance
3. SVM has a similar problem, but after data balancing it works quite well
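The `class_weight='balanced'` option used above reweights classes inversely to their frequency. A minimal sketch of scikit-learn's balanced heuristic, weight_c = n_samples / (n_classes * count_c), so rarer classes get proportionally larger weights (`balanced_weights` is an illustrative helper, not the library function):

```python
from collections import Counter

# Reproduce the 'balanced' class-weight heuristic:
# weight_c = n_samples / (n_classes * count_c)
def balanced_weights(labels):
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}
```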
```
from sklearn.tree import DecisionTreeClassifier
dtc = DecisionTreeClassifier(criterion='entropy',
                             class_weight='balanced',
                             min_samples_leaf=10)
y_pred = dtc.fit(X_train, y_train).predict(X_test)
calculate_accuracy(y_pred, y_test, comment='balanced DT')
# tuning the max_depth hyperparameter (DT likes to overfit)
train_acc_t = []
train_acc_a = []
val_acc_t = []
val_acc_a = []
depth_range = range(1,26)
for max_depth in depth_range:
    dtc = DecisionTreeClassifier(criterion='entropy',
                                 class_weight='balanced',
                                 min_samples_leaf=10,
                                 max_depth=max_depth)
    dt_fit = dtc.fit(X_train, y_train)
    y_pred_train = dt_fit.predict(X_train)
    y_pred_val = dt_fit.predict(X_val)
    acc_t_train, acc_a_train = calculate_accuracy(y_pred_train, y_train, printout=False)
    acc_t_val, acc_a_val = calculate_accuracy(y_pred_val, y_val, printout=False)
    train_acc_t.append(acc_t_train)
    train_acc_a.append(acc_a_train)
    val_acc_t.append(acc_t_val)
    val_acc_a.append(acc_a_val)
    print('{0:d} {1:.4f} {2:4.4f}'.format(max_depth, acc_t_val, acc_a_val))
import matplotlib.pyplot as plt
_, ax = plt.subplots()
ax.plot(depth_range, train_acc_t, label='train acc_t')
ax.plot(depth_range, train_acc_a, label='train acc_a')
ax.plot(depth_range, val_acc_t, label='validation acc_t')
ax.plot(depth_range, val_acc_a , label='validation acc_a')
ax.set(xlabel='max_depth', ylabel='accuracy')
ax.xaxis.set_ticks([1, 5, 10, 15, 20, 25])
ax.legend()
plt.show()
# optimum acc_a max_depth
dtc = DecisionTreeClassifier(criterion='entropy',
                             class_weight='balanced',
                             min_samples_leaf=10,
                             max_depth=10)
y_pred = dtc.fit(X_train, y_train).predict(X_test)
calculate_accuracy(y_pred, y_test, comment='DT: Acc_A maximized')
# optimum acc_t & acc_a max_depth
dtc = DecisionTreeClassifier(criterion='entropy',
                             class_weight='balanced',
                             min_samples_leaf=10,
                             max_depth=14)
y_pred = dtc.fit(X_train, y_train).predict(X_test)
calculate_accuracy(y_pred, y_test, comment='DT: Acc_T + Acc_A maximized')
# feature vector via PCA (dimensionality reduction) works poorly
from sklearn.decomposition import PCA
pca = PCA(n_components=15)
X_train_ = pca.fit_transform(X_train)
X_test_ = pca.transform(X_test)
dtc = DecisionTreeClassifier(criterion='entropy',
                             class_weight='balanced',
                             min_samples_leaf=10,
                             max_depth=10)
y_pred = dtc.fit(X_train_, y_train).predict(X_test_)
calculate_accuracy(y_pred, y_test, comment='DT with PCA')
```
Summary:
1. Decision Tree works a bit worse (than SVM) and has a tendency to overfit. We consider two hyperparameters:
> - *max_depth*
> - *min_samples_leaf*
2. Tuning *max_depth* gives the maximum of Acc_A (*max_depth*=10), or of Acc_T & Acc_A together (*max_depth*=14)
3. Simple dimensionality reduction using PCA works rather poorly
```
import pywt
# extract features using different wavelets and simple differences
def extract_features(input_sample):
    out = np.array([])
    # sym8
    cA = pywt.downcoef('a', input_sample, 'sym8', level=4, mode='per')
    out = np.append(out, cA)
    cD = pywt.downcoef('d', input_sample, 'sym8', level=4, mode='per')
    out = np.append(out, cD)
    # db6/db9
    cA = pywt.downcoef('a', input_sample, 'db6', level=4, mode='per')
    out = np.append(out, cA)
    cD = pywt.downcoef('d', input_sample, 'db6', level=4, mode='per')
    out = np.append(out, cD)
    cA = pywt.downcoef('a', input_sample, 'db9', level=4, mode='per')
    out = np.append(out, cA)
    cD = pywt.downcoef('d', input_sample, 'db9', level=4, mode='per')
    out = np.append(out, cD)
    # dmey
    cA = pywt.downcoef('a', input_sample, 'dmey', level=4, mode='per')
    out = np.append(out, cA)
    cD = pywt.downcoef('d', input_sample, 'dmey', level=4, mode='per')
    out = np.append(out, cD)
    # simple differences around the centre of the window
    differences = np.zeros(16)
    for i, t in enumerate(range(40, 56)):
        differences[i] = input_sample[t+1] - input_sample[t]
    out = np.append(out, differences)
    return out

# collect the vector of features for all samples
def data_features(input_data):
    return np.array([extract_features(sample) for sample in input_data])
X_train_ = data_features(X_train)
print(X_train_.shape)
X_test_ = data_features(X_test)
print(X_test_.shape)
dtc = DecisionTreeClassifier(criterion='entropy',
                             class_weight='balanced',
                             min_samples_leaf=10,
                             max_depth=15)
y_pred = dtc.fit(X_train_, y_train).predict(X_test_)
calculate_accuracy(y_pred, y_test, comment='DT with wavelets')
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(criterion='entropy',
                             n_estimators=500,
                             max_depth=10,
                             class_weight='balanced')
y_pred = rfc.fit(X_train_, y_train).predict(X_test_)
calculate_accuracy(y_pred, y_test, comment='RF with wavelets')
from sklearn.ensemble import AdaBoostClassifier
abc = AdaBoostClassifier(n_estimators=200)
y_pred = abc.fit(X_train_, y_train).predict(X_test_)
calculate_accuracy(y_pred, y_test, comment='Ada with wavelets')
```
# Tasks to do
Please choose and complete just **one** of them:
1. Modify classifier to get **accuracy > 0.81** for both Acc_T *and* Acc_A
> - play with classifier hyperparameters
> - add some other features, e.g:
>> - [mean of absolute value (MAV) of signal](https://www.researchgate.net/publication/46147272_Sequential_algorithm_for_life_threatening_cardiac_pathologies_detection_based_on_mean_signal_strength_and_EMD_functions)
>> - some other signal features from [scipy signal](https://docs.scipy.org/doc/scipy/reference/signal.html#peak-finding),
>> - distances between previous and next heartbeats are strong features, see e.g. [here](https://link.springer.com/article/10.1007/s11760-009-0136-1),
>> - it may also be useful to perform some feature selection, e.g. choose those with variance higher than some assumed threshold (*intuition*: variance measures the amount of information in a given feature), or use the *model.feature_importances_* attribute (for more see [here](https://scikit-learn.org/stable/modules/feature_selection.html))
> - balance dataset by yourself: equalize the size of each of 3 groups (hint: take the whole dataset)
> - or build your own classifier using [MLP](https://scikit-learn.org/stable/modules/neural_networks_supervised.html#classification)
2. Compare results for Random Forest with AdaBoost classifier
> - try to figure out why the default Ada setup won't work well
> - and fix this problem (hint: resampling)
> - try Ada with different *base_estimators*
3. Add a deep-neural classifier (like the one in the previous lab) and compare its performance with today's best classifier
> - at first you should create *torch.utils.data.DataLoader* object, see [here](https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel)
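As a starting point for task 1's feature-selection hint, here is a minimal variance-threshold sketch in plain Python (the threshold value is an arbitrary assumption to be tuned on validation data; `variance_mask` is a hypothetical helper, a tiny analogue of scikit-learn's `VarianceThreshold`):

```python
# Keep only feature columns whose (population) variance exceeds a cutoff.
# Returns a boolean mask, one entry per feature column.
def variance_mask(X, threshold=1e-4):
    n = len(X)
    n_features = len(X[0])
    mask = []
    for j in range(n_features):
        col = [row[j] for row in X]
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n
        mask.append(var > threshold)
    return mask
```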
```
import matplotlib.pyplot as plt
import numpy as np
from pymongo import MongoClient
import tldextract
import math
import re
import pickle
from tqdm import tqdm_notebook as tqdm
import spacy
from numpy import dot
from numpy.linalg import norm
import csv
import random
import statistics
import copy
import itertools
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer as SIA
from sklearn import svm
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfTransformer
import scipy
nlp = spacy.load('en')
client = MongoClient('mongodb://gdelt:meidnocEf1@gdeltmongo1:27017/')
db = client.gdelt.metadata
def valid(s, d):
    if len(d) > 0 and d[0] not in ["/", "#", "{"] and s not in d:
        return True
    else:
        return False
re_3986 = re.compile(r"^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?")
wgo = re.compile("www.")
whitelist = ["NOUN", "PROPN", "ADJ", "ADV"]
bias = []
biasnames = []
pol = ['L', 'LC', 'C', 'RC', 'R']
rep = ['VERY LOW', 'LOW', 'MIXED', 'HIGH', 'VERY HIGH']
flag = ['F', 'X', 'S']
cats = pol
s2l = {}
with open('bias.csv', 'r') as csvfile:
    reader = csv.reader(csvfile)
    for row in reader:
        name = re_3986.match(row[4]).group(4)
        p = -1
        r = -1
        f = -1
        if row[1] in pol:
            p = pol.index(row[1])
            s2l[name] = row[1]
        if row[2] in rep:
            r = rep.index(row[2])
        if row[3] in flag:
            f = flag.index(row[3])
            s2l[name] = row[3]
        bias.append(row + [name, p, r, f, 1 if p == -1 else 0])
        biasnames.append(name)
sample = 1000000
stuff = db.find({},{'text':1,'sourceurl':1}).sort("_id",-1).limit(sample)
arts = []
for obj in tqdm(stuff):
    if 'text' in obj:
        sdom = wgo.sub("", re_3986.match(obj['sourceurl']).group(4))
        if sdom in biasnames:
            doc = nlp.tokenizer(obj['text'][:100*8])
            nlp.tagger(doc)
            arts.append((sdom, doc))
N = len(arts)
doc_tdf = {}
doc_bgdf = {}
doc_tf = {}
doc_bgf = {}
doc_ts = {}
doc_bgs = {}
site_tf = {}
site_bgf = {}
site_ts = {}
site_bgs = {}
cat_tf = {cat : {} for cat in cats}
cat_bgf = {cat : {} for cat in cats}
cat_ts = {cat : {} for cat in cats}
cat_bgs = {cat : {} for cat in cats}
sa = SIA()
# initialize per-site dictionaries for every source domain we collected
for (sdom, obj) in tqdm(arts):
    if sdom not in site_tf:
        site_tf[sdom] = {}
        site_bgf[sdom] = {}
        site_ts[sdom] = {}
        site_bgs[sdom] = {}
for (sdom, doc) in tqdm(arts):
    mycat = s2l[sdom]
    if mycat in cats:
        c = sa.polarity_scores(doc.text)['compound']
        for word in doc[:-1]:
            if not word.is_stop and word.is_alpha and word.pos_ in whitelist:
                # Save the sentiments in a list
                # To be averaged into means later
                if word.lemma_ not in doc_ts:
                    doc_ts[word.lemma_] = []
                doc_ts[word.lemma_].append(c)
                if word.lemma_ not in site_ts[sdom]:
                    site_ts[sdom][word.lemma_] = []
                site_ts[sdom][word.lemma_].append(c)
                if word.lemma_ not in cat_ts[mycat]:
                    cat_ts[mycat][word.lemma_] = []
                cat_ts[mycat][word.lemma_].append(c)
                # Record counts of this term
                # To be divided by total to make term frequency later
                if word.lemma_ not in doc_tf:
                    doc_tf[word.lemma_] = 0
                doc_tf[word.lemma_] += 1
                if word.lemma_ not in site_tf[sdom]:
                    site_tf[sdom][word.lemma_] = 0
                site_tf[sdom][word.lemma_] += 1
                if word.lemma_ not in cat_tf[mycat]:
                    cat_tf[mycat][word.lemma_] = 0
                cat_tf[mycat][word.lemma_] += 1
                # Same bookkeeping for the bigram formed with the next token
                neigh = word.nbor()
                if not neigh.is_stop and neigh.pos_ in whitelist:
                    bigram = word.lemma_ + " " + neigh.lemma_
                    # Save the sentiments in a list
                    # To be averaged into means later
                    if bigram not in doc_bgs:
                        doc_bgs[bigram] = []
                    doc_bgs[bigram].append(c)
                    if bigram not in site_bgs[sdom]:
                        site_bgs[sdom][bigram] = []
                    site_bgs[sdom][bigram].append(c)
                    if bigram not in cat_bgs[mycat]:
                        cat_bgs[mycat][bigram] = []
                    cat_bgs[mycat][bigram].append(c)
                    # Record counts of this bigram
                    # To be divided by total to make term frequency later
                    if bigram not in doc_bgf:
                        doc_bgf[bigram] = 0
                    doc_bgf[bigram] += 1
                    if bigram not in site_bgf[sdom]:
                        site_bgf[sdom][bigram] = 0
                    site_bgf[sdom][bigram] += 1
                    if bigram not in cat_bgf[mycat]:
                        cat_bgf[mycat][bigram] = 0
                    cat_bgf[mycat][bigram] += 1
doc_tls = copy.deepcopy(doc_ts)
doc_bgls = copy.deepcopy(doc_bgs)
site_tls = copy.deepcopy(site_ts)
site_bgls = copy.deepcopy(site_bgs)
cat_tls = copy.deepcopy(cat_ts)
cat_bgls = copy.deepcopy(cat_bgs)
for word in tqdm(doc_ts):
    doc_ts[word] = sum(doc_ts[word])/len(doc_ts[word])
for word in tqdm(doc_bgs):
    doc_bgs[word] = sum(doc_bgs[word])/len(doc_bgs[word])
for site in tqdm(site_bgs):
    for word in site_ts[site]:
        site_ts[site][word] = sum(site_ts[site][word])/len(site_ts[site][word])
    for word in site_bgs[site]:
        site_bgs[site][word] = sum(site_bgs[site][word])/len(site_bgs[site][word])
for cat in tqdm(cats):
    for word in cat_ts[cat]:
        cat_ts[cat][word] = sum(cat_ts[cat][word])/len(cat_ts[cat][word])
    for word in cat_bgs[cat]:
        cat_bgs[cat][word] = sum(cat_bgs[cat][word])/len(cat_bgs[cat][word])
doc_tc = copy.deepcopy(doc_tf)
doc_bgc = copy.deepcopy(doc_bgf)
site_tc = copy.deepcopy(site_tf)
site_bgc = copy.deepcopy(site_bgf)
cat_tc = copy.deepcopy(cat_tf)
cat_bgc = copy.deepcopy(cat_bgf)
tot = sum(doc_tf.values())
for word in tqdm(doc_tf):
    doc_tf[word] = doc_tf[word]/tot
tot = sum(doc_bgf.values())
for word in tqdm(doc_bgf):
    doc_bgf[word] = doc_bgf[word]/tot
for site in tqdm(site_tf):
    tot = sum(site_tf[site].values())
    for word in site_tf[site]:
        site_tf[site][word] = site_tf[site][word]/tot
    tot = sum(site_bgf[site].values())
    for word in site_bgf[site]:
        site_bgf[site][word] = site_bgf[site][word]/tot
for cat in tqdm(cats):
    tot = sum(cat_tf[cat].values())
    for word in cat_tf[cat]:
        cat_tf[cat][word] = cat_tf[cat][word]/tot
    tot = sum(cat_bgf[cat].values())
    for word in cat_bgf[cat]:
        cat_bgf[cat][word] = cat_bgf[cat][word]/tot
def cos_sim(a, b):
    a = site_v[a]
    b = site_v[b]
    return dot(a, b)/(norm(a)*norm(b))

def isReal(site):
    if s2l[site] in pol:
        return True
    return False
sites = [site for site in site_ts.keys() if site in biasnames]
α = 0.001
tp = {}
t_exp = [sum(cat_tc[cat].values()) for cat in cats]
t_exp = [t/sum(t_exp) for t in t_exp]
sig_terms = []
for term in tqdm(doc_ts.keys()):
    ds = [0]*len(cats)
    df = [0]*len(cats)
    for i, cat in enumerate(cats):
        if term in cat_ts[cat]:
            ds[i] = cat_ts[cat][term] - doc_ts[term]
            df[i] = cat_tc[cat][term]
    χ, p1 = scipy.stats.chisquare(df, f_exp=[t*sum(df) for t in t_exp])
    if p1 < α or scipy.stats.chisquare(ds)[1] < α:
        sig_terms.append(term)
        tp[term] = p1
        #print(term + " " + str(p1))
sig_terms = sorted(sig_terms, key=lambda x: tp[x])
print(len(sig_terms))
print(sig_terms[:10])
bgp = {}
t_exp = [sum(cat_bgc[cat].values()) for cat in cats]
t_exp = [t/sum(t_exp) for t in t_exp]
sig_bigrams = []
for bigram in tqdm(doc_bgs.keys()):
    ds = [0]*len(cats)
    df = [0]*len(cats)
    for i, cat in enumerate(cats):
        if bigram in cat_bgs[cat]:
            ds[i] = cat_bgs[cat][bigram] - doc_bgs[bigram]
            df[i] = cat_bgc[cat][bigram]
    χ, p1 = scipy.stats.chisquare(df, f_exp=[t*sum(df) for t in t_exp])
    if p1 < α or scipy.stats.chisquare(ds)[1] < α:
        sig_bigrams.append(bigram)
        bgp[bigram] = p1
sig_bigrams = sorted(sig_bigrams, key=lambda x: bgp[x])
print(len(sig_bigrams))
print(sig_bigrams[:10])
site_v = {}
for site in tqdm(site_ts.keys()):
    if site in site_bgs:
        v = [0]*(len(sig_terms)+len(sig_bigrams))*2
        for i, term in enumerate(sig_terms):
            if term in site_ts[site]:
                v[2*i] = site_ts[site][term] - doc_ts[term]
            if term in site_tf[site]:
                v[2*i+1] = site_tf[site][term] - doc_tf[term]
        # after the loop above, i == len(sig_terms)-1, so the bigram slots
        # start right after the term slots
        for j, bigram in enumerate(sig_bigrams):
            if bigram in site_bgs[site]:
                v[2*i+2*j+2] = site_bgs[site][bigram] - doc_bgs[bigram]
            if bigram in site_bgf[site]:
                v[2*i+2*j+3] = site_bgf[site][bigram] - doc_bgf[bigram]
        site_v[site] = v
print(len(site_v))
clf = RandomForestClassifier(random_state=42)
#clf = svm.SVC(random_state=42)
sites = [s for s in s2l if s in site_ts.keys()]
X = [site_v[s] for s in sites if s2l[s] in cats]
y = [cats.index(s2l[s]) for s in sites if s2l[s] in cats]
#y = [1 if s2l[s] in ["L", "LC", "C"] else -1 for s in sites]
X = np.asarray(X)
y = np.asarray(y)
vn = sig_terms+sig_bigrams
vn = list(itertools.chain(*zip(vn,vn)))
cscore = cross_val_score(clf, X, y, cv=3)
print(cscore)
print(sum(cscore)/3)
clf.fit(X, y)
mask = [i for i, x in enumerate(clf.feature_importances_) if x > 0.0001]
cscore = cross_val_score(clf, [x[mask] for x in X], y, cv=3)
print(cscore)
print(sum(cscore)/3)
fi = clf.feature_importances_
plt.figure(figsize=(10,10))
plt.plot(sorted(fi[mask]))
plt.xticks(range(0, len(mask)), sorted([vn[m] for m in mask], key=lambda x:fi[vn.index(x)]), rotation=90)
plt.show()
cms = []
for train, test in KFold(n_splits=3).split(X):
    clf.fit([x[mask] for x in X[train]], y[train])
    cms.append(confusion_matrix(y[test], clf.predict([x[mask] for x in X[test]])))
    # clf.fit(X[train], y[train])
    # cms.append(confusion_matrix(y[test], clf.predict(X[test])))
print(sum(cms))
plt.imshow(sum(cms))
plt.show()
print(sum(sum(sum(cms))))
sorted(site_v.keys(), key=lambda x:cos_sim("breitbart.com", x), reverse=False)
site_id = {}
for site in site_v:
    site_id[site] = cos_sim("breitbart.com", site) - cos_sim("huffingtonpost.com", site)
#print(site_id)
l = sorted(site_id.keys(), key = lambda x : site_id[x])
print(l)
```
<a href="https://colab.research.google.com/github/amathsow/wolof_speech_recognition/blob/master/Speech_recognition_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip3 install torch
!pip3 install torchvision
!pip3 install torchaudio
!pip install comet_ml
import os
from comet_ml import Experiment
import torch
import torch.nn as nn
import torch.utils.data as data
import torch.optim as optim
import torch.nn.functional as F
import torchaudio
import numpy as np
import pandas as pd
import librosa
```
## ETL process
```
from google.colab import drive
drive.mount('/content/drive')
path_audio= 'drive/My Drive/Speech Recognition project/recordings/'
path_text = 'drive/My Drive/Speech Recognition project/wolof_text/'
wav_text = 'drive/My Drive/Speech Recognition project/Wavtext_dataset2.csv'
```
## Data preparation for creating the chars file from my dataset
```
datapath = 'drive/My Drive/Speech Recognition project/data/records'
trainpath = '../drive/My Drive/Speech Recognition project/data/records/train/'
valpath = '../drive/My Drive/Speech Recognition project/data/records/val/'
testpath = '../drive/My Drive/Speech Recognition project/data/records/test/'
```
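One possible way to build the character-inventory file used later (`chars2.txt`) is to collect every character that occurs in the transcript files and assign each an integer label. The one-`index character`-pair-per-line format is an assumption here (the exact format expected by `parseSeqLabels` may differ), and `build_char_file` is a hypothetical helper:

```python
import os

# Collect the set of characters occurring in all .txt transcripts in a
# folder and write them, sorted, as "index character" lines.
# NOTE: the output format is an assumption, not the verified
# parseSeqLabels format.
def build_char_file(text_dir, out_path):
    chars = set()
    for fname in sorted(os.listdir(text_dir)):
        if fname.endswith('.txt'):
            with open(os.path.join(text_dir, fname), encoding='utf-8') as f:
                chars.update(f.read().replace('\n', ''))
    with open(out_path, 'w', encoding='utf-8') as f:
        for i, c in enumerate(sorted(chars)):
            f.write(f"{i} {c}\n")
    return sorted(chars)
```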
## Let's create the dataset
```
! git clone https://github.com/facebookresearch/CPC_audio.git
!pip install soundfile
!pip install torchaudio
!mkdir checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_30.pt -P checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_logs.json -P checkpoint_data
!wget https://dl.fbaipublicfiles.com/librilight/CPC_checkpoints/not_hub/2levels_6k_top_ctc/checkpoint_args.json -P checkpoint_data
!ls checkpoint_data
import torch
import torchaudio
%cd CPC_audio/
from cpc.model import CPCEncoder, CPCAR
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
DIM_ENCODER=256
DIM_CONTEXT=256
KEEP_HIDDEN_VECTOR=False
N_LEVELS_CONTEXT=1
CONTEXT_RNN="LSTM"
N_PREDICTIONS=12
LEARNING_RATE=2e-4
N_NEGATIVE_SAMPLE =128
encoder = CPCEncoder(DIM_ENCODER).to(device)
context = CPCAR(DIM_ENCODER, DIM_CONTEXT, KEEP_HIDDEN_VECTOR, 1, mode=CONTEXT_RNN).to(device)
# Several functions that will be necessary to load the data later
from cpc.dataset import findAllSeqs, AudioBatchData, parseSeqLabels
SIZE_WINDOW = 20480
BATCH_SIZE=8
def load_dataset(path_dataset, file_extension='.flac', phone_label_dict=None):
    data_list, speakers = findAllSeqs(path_dataset, extension=file_extension)
    dataset = AudioBatchData(path_dataset, SIZE_WINDOW, data_list, phone_label_dict, len(speakers))
    return dataset
class CPCModel(torch.nn.Module):
    def __init__(self, encoder, AR):
        super(CPCModel, self).__init__()
        self.gEncoder = encoder
        self.gAR = AR

    def forward(self, batch_data):
        encoder_output = self.gEncoder(batch_data)
        # The output of the encoder does not have the right format:
        # it is Batch_size x Hidden_size x temp_size,
        # while the context network requires Batch_size x temp_size x Hidden_size,
        # thus we need to permute
        context_input = encoder_output.permute(0, 2, 1)
        context_output = self.gAR(context_input)
        return context_output, encoder_output
datapath ='../drive/My Drive/Speech Recognition project/data/records/'
datapath2 ='../drive/My Drive/Speech Recognition project/data/'
!ls ../checkpoint_data/checkpoint_30.pt
%cd CPC_audio/
from cpc.dataset import parseSeqLabels
from cpc.feature_loader import loadModel
checkpoint_path = '../checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
label_dict, N_PHONES = parseSeqLabels(datapath2+'chars2.txt')
dataset_train = load_dataset(datapath+'train', file_extension='.wav', phone_label_dict=label_dict)
dataset_val = load_dataset(datapath+'val', file_extension='.wav', phone_label_dict=label_dict)
dataset_test = load_dataset(datapath+'test', file_extension='.wav', phone_label_dict=label_dict)
data_loader_train = dataset_train.getDataLoader(BATCH_SIZE, "speaker", True)
data_loader_val = dataset_val.getDataLoader(BATCH_SIZE, "sequence", False)
data_loader_test = dataset_test.getDataLoader(BATCH_SIZE, "sequence", False)
```
## Create Model
```
class PhoneClassifier(torch.nn.Module):
    def __init__(self, input_dim: int, n_phones: int):
        super(PhoneClassifier, self).__init__()
        self.linear = torch.nn.Linear(input_dim, n_phones)

    def forward(self, x):
        return self.linear(x)
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
loss_criterion = torch.nn.CrossEntropyLoss()
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
def train_one_epoch(cpc_model,
                    phone_classifier,
                    loss_criterion,
                    data_loader,
                    optimizer):
    cpc_model.train()
    loss_criterion.train()
    avg_loss = 0
    avg_accuracy = 0
    n_items = 0
    for step, full_data in enumerate(data_loader):
        # Each batch is represented by a Tuple of vectors:
        # sequence of size : N x 1 x T
        # label of size : N x T
        #
        # With :
        # - N number of sequences in the batch
        # - T size of each sequence
        sequence, label = full_data
        bs = len(sequence)
        seq_len = label.size(1)
        optimizer.zero_grad()
        context_out, enc_out, _ = cpc_model(sequence.to(device), label.to(device))
        scores = phone_classifier(context_out)
        scores = scores.permute(0, 2, 1)
        loss = loss_criterion(scores, label.to(device))
        loss.backward()
        optimizer.step()
        avg_loss += loss.item()*bs
        n_items += bs
        correct_labels = scores.argmax(1)
        avg_accuracy += ((label == correct_labels.cpu()).float()).mean(1).sum().item()
    avg_loss /= n_items
    avg_accuracy /= n_items
    return avg_loss, avg_accuracy
avg_loss, avg_accuracy = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer_frozen)
avg_loss, avg_accuracy
def validation_step(cpc_model,
                    phone_classifier,
                    loss_criterion,
                    data_loader):
    cpc_model.eval()
    phone_classifier.eval()
    avg_loss = 0
    avg_accuracy = 0
    n_items = 0
    with torch.no_grad():
        for step, full_data in enumerate(data_loader):
            # Each batch is represented by a Tuple of vectors:
            # sequence of size : N x 1 x T
            # label of size : N x T
            #
            # With :
            # - N number of sequences in the batch
            # - T size of each sequence
            sequence, label = full_data
            bs = len(sequence)
            seq_len = label.size(1)
            context_out, enc_out, _ = cpc_model(sequence.to(device), label.to(device))
            scores = phone_classifier(context_out)
            scores = scores.permute(0, 2, 1)
            loss = loss_criterion(scores, label.to(device))
            avg_loss += loss.item()*bs
            n_items += bs
            correct_labels = scores.argmax(1)
            avg_accuracy += ((label == correct_labels.cpu()).float()).mean(1).sum().item()
    avg_loss /= n_items
    avg_accuracy /= n_items
    return avg_loss, avg_accuracy
import matplotlib.pyplot as plt
from google.colab import files
def run(cpc_model,
        phone_classifier,
        loss_criterion,
        data_loader_train,
        data_loader_val,
        optimizer,
        n_epoch):
    epoches = []
    train_losses = []
    train_accuracies = []
    val_losses = []
    val_accuracies = []
    for epoch in range(n_epoch):
        epoches.append(epoch)
        print(f"Running epoch {epoch + 1} / {n_epoch}")
        loss_train, acc_train = train_one_epoch(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
        print("-------------------")
        print("Training dataset :")
        print(f"Average loss : {loss_train}. Average accuracy {acc_train}")
        train_losses.append(loss_train)
        train_accuracies.append(acc_train)
        print("-------------------")
        print("Validation dataset")
        loss_val, acc_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
        print(f"Average loss : {loss_val}. Average accuracy {acc_val}")
        print("-------------------")
        print()
        val_losses.append(loss_val)
        val_accuracies.append(acc_val)
    plt.plot(epoches, train_losses, label="train loss")
    plt.plot(epoches, val_losses, label="val loss")
    plt.xlabel('epoches')
    plt.ylabel('loss')
    plt.title('train and validation loss')
    plt.legend()
    # Save and display the loss figure.
    plt.savefig("loss1.png")
    files.download("loss1.png")
    plt.show()
    plt.plot(epoches, train_accuracies, label="train accuracy")
    plt.plot(epoches, val_accuracies, label="val accuracy")
    plt.xlabel('epoches')
    plt.ylabel('accuracy')
    plt.title('train and validation accuracy')
    plt.legend()
    # Save and display the accuracy figure.
    plt.savefig("val1.png")
    files.download("val1.png")
    plt.show()
```
## The Training and Evaluating Script
```
run(cpc_model,phone_classifier,loss_criterion,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=10)
loss_ctc = torch.nn.CTCLoss(zero_infinity=True)
%cd CPC_audio/
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_per = datapath+'train'
path_val_data_per = datapath+'val'
path_phone_data_per = datapath2+'chars2.txt'
BATCH_SIZE=8
phone_labels, N_PHONES = parseSeqLabels(path_phone_data_per)
data_train_per, _ = findAllSeqs(path_train_data_per, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_per, data_train_per, phone_labels)
data_loader_train = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
                                                shuffle=True)
data_val_per, _ = findAllSeqs(path_val_data_per, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_per, data_val_per, phone_labels)
data_loader_val = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
                                              shuffle=True)
from cpc.feature_loader import loadModel
checkpoint_path = '../checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
phone_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_PHONES).to(device)
parameters = list(phone_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(phone_classifier.parameters()), lr=LEARNING_RATE)
import torch.nn.functional as F
def train_one_epoch_ctc(cpc_model,
                        phone_classifier,
                        loss_criterion,
                        data_loader,
                        optimizer):
    cpc_model.train()
    loss_criterion.train()
    avg_loss = 0
    avg_accuracy = 0
    n_items = 0
    for step, full_data in enumerate(data_loader):
        x, x_len, y, y_len = full_data
        x_batch_len = x.shape[-1]
        x, y = x.to(device), y.to(device)
        bs = x.size(0)
        optimizer.zero_grad()
        context_out, enc_out, _ = cpc_model(x.to(device), y.to(device))
        scores = phone_classifier(context_out)
        scores = scores.permute(1, 0, 2)
        scores = F.log_softmax(scores, 2)
        yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])])  # this is an approximation, should be good enough
        loss = loss_criterion(scores.float(), y.float().to(device), yhat_len, y_len)
        loss.backward()
        optimizer.step()
        avg_loss += loss.item()*bs
        n_items += bs
    avg_loss /= n_items
    return avg_loss
def validation_step(cpc_model,
                    phone_classifier,
                    loss_criterion,
                    data_loader):
    cpc_model.eval()
    phone_classifier.eval()
    avg_loss = 0
    avg_accuracy = 0
    n_items = 0
    with torch.no_grad():
        for step, full_data in enumerate(data_loader):
            x, x_len, y, y_len = full_data
            x_batch_len = x.shape[-1]
            x, y = x.to(device), y.to(device)
            bs = x.size(0)
            context_out, enc_out, _ = cpc_model(x.to(device), y.to(device))
            scores = phone_classifier(context_out)
            scores = scores.permute(1, 0, 2)
            scores = F.log_softmax(scores, 2)
            yhat_len = torch.tensor([int(scores.shape[0]*x_len[i]/x_batch_len) for i in range(scores.shape[1])])  # this is an approximation, should be good enough
            loss = loss_criterion(scores, y.to(device), yhat_len, y_len)
            avg_loss += loss.item()*bs
            n_items += bs
    avg_loss /= n_items
    return avg_loss
def run_ctc(cpc_model,
phone_classifier,
loss_criterion,
data_loader_train,
data_loader_val,
optimizer,
n_epoch):
epoches = []
train_losses = []
val_losses = []
for epoch in range(n_epoch):
print(f"Running epoch {epoch + 1} / {n_epoch}")
loss_train = train_one_epoch_ctc(cpc_model, phone_classifier, loss_criterion, data_loader_train, optimizer)
print("-------------------")
print(f"Training dataset :")
print(f"Average loss : {loss_train}.")
print("-------------------")
print("Validation dataset")
loss_val = validation_step(cpc_model, phone_classifier, loss_criterion, data_loader_val)
print(f"Average loss : {loss_val}")
print("-------------------")
print()
epoches.append(epoch)
train_losses.append(loss_train)
val_losses.append(loss_val)
plt.plot(epoches, train_losses, label = "ctc_train loss")
plt.plot(epoches, val_losses, label = "ctc_val loss")
plt.xlabel('epoches')
plt.ylabel('loss')
plt.title('train and validation ctc loss')
plt.legend()
# Display and save a figure.
plt.savefig("ctc_loss.png")
files.download("ctc_loss.png")
plt.show()
run_ctc(cpc_model,phone_classifier,loss_ctc,data_loader_train,data_loader_val,optimizer_frozen,n_epoch=10)
import numpy as np
def get_PER_sequence(ref_seq, target_seq):
n = len(ref_seq)
m = len(target_seq)
D = np.zeros((n+1,m+1))
for i in range(1,n+1):
D[i,0] = D[i-1,0]+1
for j in range(1,m+1):
D[0,j] = D[0,j-1]+1
# fill the DP table: edit-distance alignment between ref_seq and target_seq
for i in range(1,n+1):
for j in range(1,m+1):
D[i,j] = min(
D[i-1,j]+1,
D[i-1,j-1]+1,
D[i,j-1]+1,
D[i-1,j-1] + (0 if ref_seq[i-1]==target_seq[j-1] else float("inf"))
)
return D[n,m]/len(ref_seq)
#return PER
ref_seq = [0, 1, 1, 2, 0, 2, 2]
pred_seq = [1, 1, 2, 2, 0, 0]
expected_PER = 4. / 7.
print(get_PER_sequence(ref_seq, pred_seq) == expected_PER)
import progressbar
from multiprocessing import Pool
def cut_data(seq, sizeSeq):
maxSeq = sizeSeq.max()
return seq[:, :maxSeq]
def prepare_data(data):
seq, sizeSeq, phone, sizePhone = data
seq = seq.cuda()
phone = phone.cuda()
sizeSeq = sizeSeq.cuda().view(-1)
sizePhone = sizePhone.cuda().view(-1)
seq = cut_data(seq.permute(0, 2, 1), sizeSeq).permute(0, 2, 1)
return seq, sizeSeq, phone, sizePhone
def get_per(test_dataloader,
cpc_model,
phone_classifier):
downsampling_factor = 160
cpc_model.eval()
phone_classifier.eval()
avgPER = 0
nItems = 0
per = []
Item = []
print("Starting the PER computation through beam search")
bar = progressbar.ProgressBar(maxval=len(test_dataloader))
bar.start()
for index, data in enumerate(test_dataloader):
bar.update(index)
with torch.no_grad():
seq, sizeSeq, phone, sizePhone = prepare_data(data)
c_feature, _, _ = cpc_model(seq.to(device),phone.to(device))
sizeSeq = sizeSeq / downsampling_factor
predictions = torch.nn.functional.softmax(
phone_classifier(c_feature), dim=2).cpu()
phone = phone.cpu()
sizeSeq = sizeSeq.cpu()
sizePhone = sizePhone.cpu()
bs = c_feature.size(0)
data_per = [(predictions[b].argmax(1), phone[b]) for b in range(bs)]
# data_per = [(predictions[b], sizeSeq[b], phone[b], sizePhone[b],
# "criterion.module.BLANK_LABEL") for b in range(bs)]
with Pool(bs) as p:
poolData = p.starmap(get_PER_sequence, data_per)
avgPER += sum([x for x in poolData])
nItems += len(poolData)
per.append(sum([x for x in poolData]))
Item.append(index)
bar.finish()
avgPER /= nItems
print(f"Average PER {avgPER}")
plt.plot(Item, per, label = "Per by item")
plt.xlabel('Items')
plt.ylabel('PER')
plt.title('trends of the PER')
plt.legend()
# Display and save a figure.
plt.savefig("Per.png")
files.download("Per.png")
plt.show()
return avgPER
get_per(data_loader_val,cpc_model,phone_classifier)
# Load a dataset labelled with the letters of each sequence.
%cd /content/CPC_audio
from cpc.eval.common_voices_eval import SingleSequenceDataset, parseSeqLabels, findAllSeqs
path_train_data_cer = datapath+'train'
path_val_data_cer = datapath+'val'
path_letter_data_cer = datapath2+'chars2.txt'
BATCH_SIZE=8
letters_labels, N_LETTERS = parseSeqLabels(path_letter_data_cer)
data_train_cer, _ = findAllSeqs(path_train_data_cer, extension='.wav')
dataset_train_non_aligned = SingleSequenceDataset(path_train_data_cer, data_train_cer, letters_labels)
data_val_cer, _ = findAllSeqs(path_val_data_cer, extension='.wav')
dataset_val_non_aligned = SingleSequenceDataset(path_val_data_cer, data_val_cer, letters_labels)
# The data loader will generate a tuple of tensors data, labels for each batch
# data : size N x T1 x 1 : the audio sequence
# label : size N x T2 the sequence of letters corresponding to the audio data
# IMPORTANT NOTE: just like the PER, the CER is computed with non-aligned character data.
data_loader_train_letters = torch.utils.data.DataLoader(dataset_train_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
data_loader_val_letters = torch.utils.data.DataLoader(dataset_val_non_aligned, batch_size=BATCH_SIZE,
shuffle=True)
from cpc.feature_loader import loadModel
checkpoint_path = '../checkpoint_data/checkpoint_30.pt'
cpc_model, HIDDEN_CONTEXT_MODEL, HIDDEN_ENCODER_MODEL = loadModel([checkpoint_path])
cpc_model = cpc_model.cuda()
character_classifier = PhoneClassifier(HIDDEN_CONTEXT_MODEL, N_LETTERS).to(device)
parameters = list(character_classifier.parameters()) + list(cpc_model.parameters())
LEARNING_RATE = 2e-4
optimizer = torch.optim.Adam(parameters, lr=LEARNING_RATE)
optimizer_frozen = torch.optim.Adam(list(character_classifier.parameters()), lr=LEARNING_RATE)
loss_ctc = torch.nn.CTCLoss(zero_infinity=True)
run_ctc(cpc_model,character_classifier,loss_ctc,data_loader_train_letters,data_loader_val_letters,optimizer_frozen,n_epoch=10)
get_per(data_loader_val_letters,cpc_model,character_classifier)
```
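As a sanity check, `get_PER_sequence` above is just a length-normalized Levenshtein (edit) distance. The same computation in dependency-free Python, reproducing the 4/7 example:

```python
def per(ref, hyp):
    """Phone error rate: Levenshtein distance normalized by reference length."""
    n, m = len(ref), len(hyp)
    # D[i][j] = edit distance between ref[:i] and hyp[:j]
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,        # deletion
                          D[i][j - 1] + 1,        # insertion
                          D[i - 1][j - 1] + sub)  # match / substitution
    return D[n][m] / n

print(per([0, 1, 1, 2, 0, 2, 2], [1, 1, 2, 2, 0, 0]))  # 4/7 = 0.5714...
```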
# A Brief Overview of Network Data Science
Networks are extremely rich data structures which admit a wide variety of insightful data analysis tasks. In this set of notes, we'll consider two of the fundamental tasks in network data science: centrality and clustering. We'll also get a bit more practice with network visualization.
```
import networkx as nx
import numpy as np
from matplotlib import pyplot as plt
import pandas as pd
```
We'll mostly stick with the Karate Club network for today, as this is a very good network for visualization.
```
G = nx.karate_club_graph()
layout = nx.drawing.layout.spring_layout(G)
nx.draw(G,
layout,
with_labels = True,
node_color = "steelblue")
```
### Centrality in Networks
Given a system, how can we determine *important* components in that system? In networks, the idea of importance is often cashed out in terms of *centrality*: important nodes are the nodes that are most "central" to the network. But how should we define or measure this?
One good way is by computing the degree (i.e. the number of friends possessed by each node).
The degree is a direct measure of popularity. But what if it matters not only *how many* friends you have, but *who* those friends are? Maybe we'd like to measure importance using the following, apparently circular idea:
> Central nodes tend to be connected to other central nodes.
As it turns out, one way to cash out this idea is in terms of...linear algebra! In particular, let's suppose that *my* importance should be proportional to the sum of the importances of my friends. So, if $v_i$ is the importance of node $i$, then we can write
$$ v_i = \alpha\sum_{j \;\text{friends with} \;i} v_j\;, $$
where $\alpha$ is some constant of proportionality. Let's write this up a little more concisely. Let $\mathbf{A} \in \mathbb{R}^{n \times n}$ be the *adjacency* matrix, with entries
$$ a_{ij} = \begin{cases}1 &\quad i \;\text{is friends with} \;j \\ 0 &\quad \text{otherwise.}\end{cases}$$
Now, our equation above can be written in matrix-vector form as:
$$\mathbf{v} = \alpha \mathbf{A} \mathbf{v}$$.
Wait! This says that $\mathbf{v}$ is an eigenvector of $\mathbf{A}$ with eigenvalue $\frac{1}{\alpha}$! So, we can compute centralities by finding eigenvectors of $\mathbf{A}$. Usually, we just take the largest one.
Let's try it out! Our first step is to obtain the adjacency matrix $\mathbf{A}$.
Now let's find the eigenvector corresponding to the largest eigenvalue.
Now let's use this to create a plot:
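The three steps above (adjacency matrix, leading eigenvector, plot) were left as blank cells for in-class coding. Here is a minimal numpy sketch of the eigenvector part, on a toy path graph rather than the Karate Club network, using plain power iteration instead of a full eigendecomposition:

```python
import numpy as np

def eigenvector_centrality(A, iters=200):
    """Power iteration: repeatedly apply A and renormalize."""
    v = np.ones(A.shape[0])
    for _ in range(iters):
        v = A @ v
        v = v / np.linalg.norm(v)
    return v

# Toy graph: a path 0-1-2-3-4. The middle node should be most central.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1

v = eigenvector_centrality(A)
print(v.argmax())  # 2 (the middle node)
```

By the Perron-Frobenius theorem, the leading eigenvector of a connected graph's adjacency matrix has all-positive entries, so the centralities are well-defined.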
Compared to our calculation of degrees, this *eigenvector centrality* views nodes such as 17 as highly important, not because 17 has many neighbors but rather because 17 is connected to other important nodes.
## PageRank (Again)
Did somebody say...PageRank?
As you may remember from either PIC16A or our lectures on linear algebra, PageRank is an algorithm for finding important entities in a complex, relational system. In fact, it's a form of centrality! While we could obtain the adjacency matrix and do the linear algebra manipulations to compute PageRank, an easier way is to use one of the many centrality measures built in to NetworkX.
*Yes, we could have done this for eigenvector centrality as well.*
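For example, PageRank on the Karate Club graph via the built-in NetworkX routine (standard API; the two faction leaders, nodes 0 and 33, tend to come out on top):

```python
import networkx as nx

G = nx.karate_club_graph()
pr = nx.pagerank(G)  # dict mapping node -> PageRank score; scores sum to 1
top = sorted(pr, key=pr.get, reverse=True)[:3]
print(top)
```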
Different centrality measures have different mathematical definitions and properties, which means that appropriately interpreting a given measure can be somewhat tricky. One should be cautious before leaping to conclusions about "the most important node in the network." For example, the results look noticeably different when we use betweenness centrality, a popular heuristic that considers nodes to be more important if they are "between" lots of pairs of other nodes.
```
bc = nx.algorithms.centrality.betweenness_centrality(G)
layout = nx.drawing.layout.spring_layout(G)
nx.draw(G, layout,
with_labels=True,
node_color = "steelblue",
node_size = [5000*bc[i] for i in G.nodes()],
edgecolors = "black")
```
## Graph Clustering
Graph clustering refers to the problem of finding collections of related nodes in the graph. It is one form of unsupervised machine learning, and is similar to problems that you may have seen like k-means and spectral clustering. Indeed, spectral clustering works well on graphs!
As mentioned above, a common benchmark for graph clustering algorithms is to attempt to reproduce the observed division of the Karate Club graph. Recall that that looks like this:
The core idea of most clustering algorithms is that densely-connected sets of nodes are more likely to be members of the same cluster. There are *many* algorithms for graph clustering, which can lead to very different results.
Here's one example.
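One possibility (a guess at the method used in class, not necessarily the same one) is NetworkX's greedy modularity maximization:

```python
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()
# Returns a list of node sets, one per detected community
comms = list(community.greedy_modularity_communities(G))
print(len(comms))
```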
The variable `comms` is now a list of sets. Nodes in the same set are viewed as belonging to the same cluster. Let's visualize these:
The result is clearly related to the observed partition, but we haven't recovered it exactly. Indeed, the algorithm picked up 3 clusters, while the club itself split into only two factions! Some algorithms allow you to specify the desired number of clusters in advance, while others don't.
What about our good friend, spectral clustering? The adjacency matrix of the graph can serve as the affinity or similarity matrix we used when studying spectral clustering of point data. In fact, spectral clustering is often presented as an algorithm for graphs.
There's no implementation of spectral clustering within NetworkX, but it's easy enough to obtain the adjacency matrix and use the implementation in Scikit-learn:
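Equivalently, the core of spectral bisection can be sketched by hand with numpy: split the nodes by the sign of the Fiedler vector (the eigenvector of the second-smallest eigenvalue of the graph Laplacian). This is the simplest possible rounding step, not scikit-learn's full `SpectralClustering` pipeline:

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
A = nx.to_numpy_array(G)             # adjacency matrix as the similarity matrix
L = np.diag(A.sum(axis=1)) - A       # unnormalized graph Laplacian
eigvals, eigvecs = np.linalg.eigh(L) # eigenvalues in ascending order
fiedler = eigvecs[:, 1]              # eigenvector of the second-smallest eigenvalue
labels = (fiedler > 0).astype(int)   # assign clusters by sign
```

On this graph, the sign split places the two faction leaders (nodes 0 and 33) in different clusters.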
The resulting clusters are fairly similar to the "real" clusters observed in the fracturing of the club. However, fundamentally, graph clustering is an *unsupervised* machine learning task, which means that the problem of defining what makes a "good" set of clusters is quite subtle and depends strongly on the data domain.
## Graphs From Data
The easiest way to construct a graph from data is by converting from a Pandas data frame. When constructing a graph from data, using this or any other method, it's often necessary to do a bit of cleaning in order to produce a reasonable result.
For example, let's revisit the Hamilton mentions network:
```
url = "https://philchodrow.github.io/PIC16A/homework/HW3-hamilton-data.csv"
df = pd.read_csv(url, names = ["source", "target"])
df.head()
```
We can think of this dataframe as a list of *directed* edges.
Let's visualize it:
Well, this isn't very legible. Let's filter out all the characters who are only mentioned, but never mention anyone themselves. The number of outgoing edges from a node is often called the *out-degree*.
The `subgraph()` method can be used to filter down a network to just a desired set of nodes.
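As a sketch on toy data (the names below are illustrative stand-ins, not the real Hamilton edge list):

```python
import pandas as pd
import networkx as nx

# Toy mentions data standing in for the Hamilton dataset
df = pd.DataFrame({"source": ["burr", "burr", "hamilton"],
                   "target": ["hamilton", "king", "burr"]})
G = nx.from_pandas_edgelist(df, source="source", target="target",
                            create_using=nx.DiGraph)

# Keep only characters with at least one outgoing edge (they mention someone)
mentioners = [n for n in G.nodes() if G.out_degree(n) > 0]
H = G.subgraph(mentioners)
print(list(H.nodes()))
```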
Now the plot is a bit easier to read, especially if we add a pleasant layout.
Great! Having performed our cleaning steps, we could proceed to analyze this graph.
<a href="https://colab.research.google.com/github/lmcanavals/algorithmic_complexity/blob/main/05_01_UCS_dijkstra.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Uniform Cost Search
UCS, better known to friends as Dijkstra's algorithm
```
import graphviz as gv
import numpy as np
import pandas as pd
import heapq as hq
import math
def readAdjl(fn, haslabels=False, weighted=False, sep="|"):
with open(fn) as f:
labels = None
if haslabels:
labels = f.readline().strip().split()
L = []
for line in f:
if weighted:
L.append([tuple(map(int, p.split(sep))) for p in line.strip().split()])
# line => "1|3 2|5 4|4" ==> [(1, 3), (2, 5), (4, 4)]
else:
L.append(list(map(int, line.strip().split()))) # "1 3 5" => [1, 3, 5]
# L.append([int(x) for x in line.strip().split()])
return L, labels
def adjlShow(L, labels=None, directed=False, weighted=False, path=[],
layout="sfdp"):
g = gv.Digraph("G") if directed else gv.Graph("G")
g.graph_attr["layout"] = layout
g.edge_attr["color"] = "gray"
g.node_attr["color"] = "orangered"
g.node_attr["width"] = "0.1"
g.node_attr["height"] = "0.1"
g.node_attr["fontsize"] = "8"
g.node_attr["fontcolor"] = "mediumslateblue"
g.node_attr["fontname"] = "monospace"
g.edge_attr["fontsize"] = "8"
g.edge_attr["fontname"] = "monospace"
n = len(L)
for u in range(n):
g.node(str(u), labels[u] if labels else str(u))
added = set()
for v, u in enumerate(path):
if u != None:
if weighted:
for vi, w in L[u]:
if vi == v:
break
g.edge(str(u), str(v), str(w), dir="forward", penwidth="2", color="orange")
else:
g.edge(str(u), str(v), dir="forward", penwidth="2", color="orange")
added.add(f"{u},{v}")
added.add(f"{v},{u}")
if weighted:
for u in range(n):
for v, w in L[u]:
if not directed and not f"{u},{v}" in added:
added.add(f"{u},{v}")
added.add(f"{v},{u}")
g.edge(str(u), str(v), str(w))
elif directed:
g.edge(str(u), str(v), str(w))
else:
for u in range(n):
for v in L[u]:
if not directed and not f"{u},{v}" in added:
added.add(f"{u},{v}")
added.add(f"{v},{u}")
g.edge(str(u), str(v))
elif directed:
g.edge(str(u), str(v))
return g
```
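In isolation, the weighted branch of `readAdjl` turns each `v|w` token into a `(vertex, weight)` tuple:

```python
line = "1|3 2|5 4|4"
parsed = [tuple(map(int, p.split("|"))) for p in line.strip().split()]
print(parsed)  # [(1, 3), (2, 5), (4, 4)]
```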
## Dijkstra
```
def dijkstra(G, s):
n = len(G)
visited = [False]*n
path = [None]*n
cost = [math.inf]*n
cost[s] = 0
queue = [(0, s)]
while queue:
g_u, u = hq.heappop(queue)
if not visited[u]:
visited[u] = True
for v, w in G[u]:
f = g_u + w
if f < cost[v]:
cost[v] = f
path[v] = u
hq.heappush(queue, (f, v))
return path, cost
%%file 1.in
2|4 7|8 14|3
2|7 5|7
0|4 1|7 3|5 6|1
2|5
7|7
1|7 6|1 8|5
2|1 5|1
0|8 4|7 8|8
5|5 7|8 9|8 11|9 12|6
8|8 10|8 12|9 13|7
9|8 13|3
8|9
8|6 9|9 13|2 15|5
9|7 10|13 12|2 16|9
0|3 15|9
12|5 14|9 17|7
13|9 17|8
15|7 16|8
G, _ = readAdjl("1.in", weighted=True)
for i, edges in enumerate(G):
print(f"{i:2}: {edges}")
adjlShow(G, weighted=True)
path, cost = dijkstra(G, 8)
print(path)
adjlShow(G, weighted=True, path=path)
```
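A self-contained sanity check of the same algorithm, with a hand-built adjacency list instead of the `1.in` file:

```python
import heapq
import math

def dijkstra(G, s):
    """Shortest paths from s over an adjacency list G[u] = [(v, weight), ...]."""
    n = len(G)
    visited = [False] * n
    path = [None] * n
    cost = [math.inf] * n
    cost[s] = 0
    queue = [(0, s)]
    while queue:
        g_u, u = heapq.heappop(queue)
        if visited[u]:
            continue
        visited[u] = True
        for v, w in G[u]:
            f = g_u + w
            if f < cost[v]:
                cost[v] = f
                path[v] = u
                heapq.heappush(queue, (f, v))
    return path, cost

G = [[(1, 4), (2, 1)],  # 0
     [(3, 1)],          # 1
     [(1, 2), (3, 5)],  # 2
     []]                # 3
path, cost = dijkstra(G, 0)
print(cost)  # [0, 3, 1, 4]
```

The cheapest route to node 1 goes through node 2 (cost 1 + 2 = 3), not the direct edge of weight 4, which is exactly the greedy relaxation Dijkstra performs.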
# Distributed DeepRacer RL training with SageMaker and RoboMaker
---
## Introduction
In this notebook, we will train a fully autonomous 1/18th-scale race car with reinforcement learning, using Amazon SageMaker RL and AWS RoboMaker's 3D driving simulator. [AWS RoboMaker](https://console.aws.amazon.com/robomaker/home#welcome) is a service that makes it easy for developers to develop, test, and deploy robotics applications.
This notebook provides a jailbreak experience of [AWS DeepRacer](https://console.aws.amazon.com/deepracer/home#welcome), giving us more control over the training/simulation process and RL algorithm tuning.

---
## How does it work?

The reinforcement learning agent (i.e. our autonomous car) learns to drive by interacting with its environment, e.g., the track, by taking an action in a given state to maximize the expected reward. The agent learns the optimal plan of actions in training by trial-and-error through repeated episodes.
The figure above shows an example of distributed RL training across SageMaker and two RoboMaker simulation environments that perform the **rollouts** - execute a fixed number of episodes using the current model or policy. The rollouts collect agent experiences (state-transition tuples) and share this data with SageMaker for training. SageMaker updates the model policy, which is then used to execute the next sequence of rollouts. This training loop continues until the model converges, i.e. the car learns to drive and stops going off-track. More formally, we can define the problem in terms of the following:
1. **Objective**: Learn to drive autonomously by staying close to the center of the track.
2. **Environment**: A 3D driving simulator hosted on AWS RoboMaker.
3. **State**: The driving POV image captured by the car's head camera, as shown in the illustration above.
4. **Action**: Six discrete steering wheel positions at different angles (configurable)
5. **Reward**: Positive reward for staying close to the center line; High penalty for going off-track. This is configurable and can be made more complex (for e.g. steering penalty can be added).
## Prerequisites
### Imports
To get started, we'll import the Python libraries we need, set up the environment with a few prerequisites for permissions and configurations.
You can run this notebook from your local machine or from a SageMaker notebook instance. In both of these scenarios, you can run the following to launch a training job on `SageMaker` and a simulation job on `RoboMaker`.
```
import sagemaker
import boto3
import sys
import os
import glob
import re
import subprocess
from IPython.display import Markdown
from time import gmtime, strftime
sys.path.append("common")
from misc import get_execution_role, wait_for_s3_object
from sagemaker.rl import RLEstimator, RLToolkit, RLFramework
from markdown_helper import *
```
### Setup S3 bucket
Set up the linkage and authentication to the S3 bucket that we want to use for checkpoint and metadata.
```
# S3 bucket
sage_session = sagemaker.session.Session()
s3_bucket = sage_session.default_bucket()
s3_output_path = 's3://{}/'.format(s3_bucket) # SDK appends the job name and output folder
```
### Define Variables
We define variables such as the job prefix for the training jobs and s3_prefix for storing metadata required for synchronization between the training and simulation jobs
```
job_name_prefix = 'rl-deepracer'
# create unique job name
job_name = s3_prefix = job_name_prefix + "-sagemaker-" + strftime("%y%m%d-%H%M%S", gmtime())
# Duration of job in seconds (5 hours)
job_duration_in_seconds = 3600 * 5
aws_region = sage_session.boto_region_name
if aws_region not in ["us-west-2", "us-east-1", "eu-west-1"]:
raise Exception("This notebook uses RoboMaker which is available only in US East (N. Virginia), US West (Oregon) and EU (Ireland). Please switch to one of these regions.")
print("Model checkpoints and other metadata will be stored at: {}{}".format(s3_output_path, job_name))
```
### Create an IAM role
Either get the execution role when running from a SageMaker notebook `role = sagemaker.get_execution_role()` or, when running from local machine, use utils method `role = get_execution_role('role_name')` to create an execution role.
```
try:
role = sagemaker.get_execution_role()
except:
role = get_execution_role('sagemaker')
print("Using IAM role arn: {}".format(role))
```
> Please note that this notebook cannot be run in `SageMaker local mode` as the simulator is based on AWS RoboMaker service.
### Permission setup for invoking AWS RoboMaker from this notebook
In order to enable this notebook to be able to execute AWS RoboMaker jobs, we need to add one trust relationship to the default execution role of this notebook.
```
display(Markdown(generate_help_for_robomaker_trust_relationship(role)))
```
### Configure VPC
Since SageMaker and RoboMaker have to communicate with each other over the network, both of these services need to run in VPC mode. This can be done by supplying subnets and security groups to the job launching scripts.
We will use the default VPC configuration for this example.
```
ec2 = boto3.client('ec2')
default_vpc = [vpc['VpcId'] for vpc in ec2.describe_vpcs()['Vpcs'] if vpc["IsDefault"] == True][0]
default_security_groups = [group["GroupId"] for group in ec2.describe_security_groups()['SecurityGroups'] \
if group["GroupName"] == "default" and group["VpcId"] == default_vpc]
default_subnets = [subnet["SubnetId"] for subnet in ec2.describe_subnets()["Subnets"] \
if subnet["VpcId"] == default_vpc and subnet['DefaultForAz']==True]
print("Using default VPC:", default_vpc)
print("Using default security group:", default_security_groups)
print("Using default subnets:", default_subnets)
```
A SageMaker job running in VPC mode cannot access S3 resources. So, we need to create a VPC S3 endpoint to allow S3 access from the SageMaker container. To learn more about the VPC mode, please visit [this link.](https://docs.aws.amazon.com/sagemaker/latest/dg/train-vpc.html)
```
try:
route_tables = [route_table["RouteTableId"] for route_table in ec2.describe_route_tables()['RouteTables']\
if route_table['VpcId'] == default_vpc]
except Exception as e:
if "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
print("Trying to attach S3 endpoints to the following route tables:", route_tables)
assert len(route_tables) >= 1, "No route tables were found. Please follow the VPC S3 endpoint creation "\
"guide by clicking the above link."
try:
ec2.create_vpc_endpoint(DryRun=False,
VpcEndpointType="Gateway",
VpcId=default_vpc,
ServiceName="com.amazonaws.{}.s3".format(aws_region),
RouteTableIds=route_tables)
print("S3 endpoint created successfully!")
except Exception as e:
if "RouteAlreadyExists" in str(e):
print("S3 endpoint already exists.")
elif "UnauthorizedOperation" in str(e):
display(Markdown(generate_help_for_s3_endpoint_permissions(role)))
raise e
else:
display(Markdown(create_s3_endpoint_manually(aws_region, default_vpc)))
raise e
```
## Setup the environment
The environment is defined in a Python file called “deepracer_env.py” and can be found at `src/robomaker/environments/`. This file implements the gym interface for our Gazebo-based RoboMaker simulator. It is a common environment file used by both SageMaker and RoboMaker. The environment variable `NODE_TYPE` defines which node the code is running on, so the expressions that have `rospy` dependencies are executed on RoboMaker only.
We can experiment with different reward functions by modifying `reward_function` in this file. Action space and steering angles can be changed by modifying the step method in `DeepRacerDiscreteEnv` class.
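For instance, a center-line reward could look like the toy sketch below. The signature and thresholds here are illustrative only and do not match the actual `reward_function` in `deepracer_env.py`, which works off the full simulator state:

```python
def reward_function(distance_from_center, track_width):
    """Toy center-line reward: higher reward the closer the car is to the center."""
    ratio = distance_from_center / (track_width / 2)
    if ratio > 1.0:
        return 1e-3  # off track: near-zero reward
    if ratio > 0.5:
        return 0.1   # far from center
    if ratio > 0.25:
        return 0.5   # moderately close
    return 1.0       # hugging the center line
```

Making the reward shaping richer (e.g. adding a steering penalty) is exactly the kind of experiment this notebook enables.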
### Configure the preset for RL algorithm
The parameters that configure the RL training job are defined in `src/robomaker/presets/deepracer.py`. Using the preset file, you can define agent parameters to select the specific agent algorithm. We suggest using Clipped PPO for this example.
You can edit this file to modify algorithm parameters like learning_rate, neural network structure, batch_size, discount factor etc.
```
!pygmentize src/robomaker/presets/deepracer.py
```
### Training Entrypoint
The training code is written in the file “training_worker.py” which is uploaded in the /src directory. At a high level, it does the following:
- Uploads SageMaker node's IP address.
- Starts a Redis server which receives agent experiences sent by rollout worker[s] (RoboMaker simulator).
- Trains the model every time a certain number of episodes has been received.
- Uploads the new model weights on S3. The rollout workers then update their model to execute the next set of episodes.
```
# Uncomment the line below to see the training code
#!pygmentize src/training_worker.py
```
### Train the RL model using the Python SDK Script mode
First, we upload the preset and environment file to a particular location on S3, as expected by RoboMaker.
```
s3_location = "s3://%s/%s" % (s3_bucket, s3_prefix)
# Make sure nothing exists at this S3 prefix
!aws s3 rm --recursive {s3_location}
# Make any changes to the environment and preset files below and upload these files
!aws s3 cp src/robomaker/environments/ {s3_location}/environments/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
!aws s3 cp src/robomaker/presets/ {s3_location}/presets/ --recursive --exclude ".ipynb_checkpoints*" --exclude "*.pyc"
```
Next, we define the algorithm metrics that we want to capture from CloudWatch logs to monitor the training progress. These are algorithm-specific parameters and may change for different algorithms. We use [Clipped PPO](https://coach.nervanasys.com/algorithms/policy_optimization/cppo/index.html) for this example.
```
metric_definitions = [
# Training> Name=main_level/agent, Worker=0, Episode=19, Total reward=-102.88, Steps=19019, Training iteration=1
{'Name': 'reward-training',
'Regex': '^Training>.*Total reward=(.*?),'},
# Policy training> Surrogate loss=-0.32664725184440613, KL divergence=7.255815035023261e-06, Entropy=2.83156156539917, training epoch=0, learning_rate=0.00025
{'Name': 'ppo-surrogate-loss',
'Regex': '^Policy training>.*Surrogate loss=(.*?),'},
{'Name': 'ppo-entropy',
'Regex': '^Policy training>.*Entropy=(.*?),'},
# Testing> Name=main_level/agent, Worker=0, Episode=19, Total reward=1359.12, Steps=20015, Training iteration=2
{'Name': 'reward-testing',
'Regex': '^Testing>.*Total reward=(.*?),'},
]
```
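To see what these regexes extract, we can run one against a sample training log line:

```python
import re

line = ("Training> Name=main_level/agent, Worker=0, Episode=19, "
        "Total reward=-102.88, Steps=19019, Training iteration=1")
# Same pattern as the 'reward-training' metric definition above
m = re.search(r'^Training>.*Total reward=(.*?),', line)
print(m.group(1))  # -102.88
```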
We use the RLEstimator for training RL jobs.
1. Specify the source directory which has the environment file, preset and training code.
2. Specify the entry point as the training code
3. Specify the choice of RL toolkit and framework. This automatically resolves to the ECR path for the RL Container.
4. Define the training parameters such as the instance count, instance type, job name, s3_bucket and s3_prefix for storing model checkpoints and metadata. **Only 1 training instance is supported for now.**
5. Set the RLCOACH_PRESET as "deepracer" for this example.
6. Define the metrics definitions that you are interested in capturing in your logs. These can also be visualized in CloudWatch and SageMaker Notebooks.
```
RLCOACH_PRESET = "deepracer"
instance_type = "ml.c5.4xlarge"
estimator = RLEstimator(entry_point="training_worker.py",
source_dir='src',
dependencies=["common/sagemaker_rl"],
toolkit=RLToolkit.COACH,
toolkit_version='0.10.1',
framework=RLFramework.TENSORFLOW,
role=role,
train_instance_type=instance_type,
train_instance_count=1,
output_path=s3_output_path,
base_job_name=job_name_prefix,
train_max_run=job_duration_in_seconds, # Maximum runtime in seconds
hyperparameters={"s3_bucket": s3_bucket,
"s3_prefix": s3_prefix,
"aws_region": aws_region,
"RLCOACH_PRESET": RLCOACH_PRESET,
},
metric_definitions = metric_definitions,
subnets=default_subnets, # Required for VPC mode
security_group_ids=default_security_groups, # Required for VPC mode
)
estimator.fit(job_name=job_name, wait=False)
```
### Start the Robomaker job
```
from botocore.exceptions import UnknownServiceError
robomaker = boto3.client("robomaker")
```
### Create Simulation Application
We first create a RoboMaker simulation application using the `DeepRacer public bundle`. Please refer to [RoboMaker Sample Application Github Repository](https://github.com/aws-robotics/aws-robomaker-sample-application-deepracer) if you want to learn more about this bundle or modify it.
```
bundle_s3_key = 'deepracer/simulation_ws.tar.gz'
bundle_source = {'s3Bucket': s3_bucket,
's3Key': bundle_s3_key,
'architecture': "X86_64"}
simulation_software_suite={'name': 'Gazebo',
'version': '7'}
robot_software_suite={'name': 'ROS',
'version': 'Kinetic'}
rendering_engine={'name': 'OGRE', 'version': '1.x'}
```
Download the public DeepRacer bundle provided by RoboMaker and upload it in our S3 bucket to create a RoboMaker Simulation Application
```
simulation_application_bundle_location = "https://s3-us-west-2.amazonaws.com/robomaker-applications-us-west-2-11d8d0439f6a/deep-racer/deep-racer-1.0.57.0.1.0.66.0/simulation_ws.tar.gz"
!wget {simulation_application_bundle_location}
!aws s3 cp simulation_ws.tar.gz s3://{s3_bucket}/{bundle_s3_key}
!rm simulation_ws.tar.gz
app_name = "deepracer-sample-application" + strftime("%y%m%d-%H%M%S", gmtime())
try:
response = robomaker.create_simulation_application(name=app_name,
sources=[bundle_source],
simulationSoftwareSuite=simulation_software_suite,
robotSoftwareSuite=robot_software_suite,
renderingEngine=rendering_engine
)
simulation_app_arn = response["arn"]
print("Created a new simulation app with ARN:", simulation_app_arn)
except Exception as e:
if "AccessDeniedException" in str(e):
display(Markdown(generate_help_for_robomaker_all_permissions(role)))
raise e
else:
raise e
```
### Launch the Simulation job on RoboMaker
We create [AWS RoboMaker](https://console.aws.amazon.com/robomaker/home#welcome) Simulation Jobs that simulates the environment and shares this data with SageMaker for training.
```
# Use more rollout workers for faster convergence
num_simulation_workers = 1
envriron_vars = {
"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"WORLD_NAME": "hard_track", # Can be one of "easy_track", "medium_track", "hard_track"
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"NUMBER_OF_ROLLOUT_WORKERS": str(num_simulation_workers)}
simulation_application = {"application": simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "distributed_training.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
responses = []
for job_no in range(num_simulation_workers):
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
responses.append(response)
print("Created the following jobs:")
job_arns = [response["arn"] for response in responses]
for job_arn in job_arns:
print("Job ARN", job_arn)
```
### Visualizing the simulations in RoboMaker
You can visit the RoboMaker console to visualize the simulations or run the following cell to generate the hyperlinks.
```
display(Markdown(generate_robomaker_links(job_arns, aws_region)))
```
### Plot metrics for training job
```
tmp_dir = "/tmp/{}".format(job_name)
os.system("mkdir {}".format(tmp_dir))
print("Create local folder {}".format(tmp_dir))
intermediate_folder_key = "{}/output/intermediate".format(job_name)
%matplotlib inline
import pandas as pd
csv_file_name = "worker_0.simple_rl_graph.main_level.main_level.agent_0.csv"
key = intermediate_folder_key + "/" + csv_file_name
wait_for_s3_object(s3_bucket, key, tmp_dir)
csv_file = "{}/{}".format(tmp_dir, csv_file_name)
df = pd.read_csv(csv_file)
df = df.dropna(subset=['Training Reward'])
x_axis = 'Episode #'
y_axis = 'Training Reward'
ax = df.plot(x=x_axis, y=y_axis, figsize=(12,5), legend=True, style='b-')  # avoid shadowing matplotlib's plt
ax.set_ylabel(y_axis);
ax.set_xlabel(x_axis);
```
### Clean Up
Execute the cells below if you want to kill RoboMaker and SageMaker job.
```
for job_arn in job_arns:
robomaker.cancel_simulation_job(job=job_arn)
sage_session.sagemaker_client.stop_training_job(TrainingJobName=estimator._current_job_name)
```
### Evaluation
```
envriron_vars = {"MODEL_S3_BUCKET": s3_bucket,
"MODEL_S3_PREFIX": s3_prefix,
"ROS_AWS_REGION": aws_region,
"NUMBER_OF_TRIALS": str(20),
"MARKOV_PRESET_FILE": "%s.py" % RLCOACH_PRESET,
"WORLD_NAME": "hard_track",
}
simulation_application = {"application":simulation_app_arn,
"launchConfig": {"packageName": "deepracer_simulation",
"launchFile": "evaluation.launch",
"environmentVariables": envriron_vars}
}
vpcConfig = {"subnets": default_subnets,
"securityGroups": default_security_groups,
"assignPublicIp": True}
response = robomaker.create_simulation_job(iamRole=role,
clientRequestToken=strftime("%Y-%m-%d-%H-%M-%S", gmtime()),
maxJobDurationInSeconds=job_duration_in_seconds,
failureBehavior="Continue",
simulationApplications=[simulation_application],
vpcConfig=vpcConfig
)
print("Created the following job:")
print("Job ARN", response["arn"])
```
### Clean Up Simulation Application Resource
```
robomaker.delete_simulation_application(application=simulation_app_arn)
```
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
```
# EDA
<hr>
## Table infos
```
infos = pd.read_csv('infos.csv', sep = '|')
infos.head()
infos.dtypes
infos.shape
len(infos) - infos.count()
infos['promotion'].unique()
```
## Table items
```
items = pd.read_csv('items.csv', sep = '|')
items.head()
items.shape
items.count()
items.nunique()
```
## Table orders
```
orders = pd.read_csv('orders.csv', sep = '|', parse_dates=['time'])
orders.head()
orders.shape
orders.count()
orders.dtypes
orders.time
orders.time.dt.week
orders.groupby('itemID')['salesPrice'].nunique().max()
```
# Other things
<hr>
## Evaluation function
```
# cost (evaluation) function
# np.sum((prediction - np.maximum(prediction - target, 0) * 1.6) * simulatedPrice)
```
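The commented-out cost above can be wrapped as a runnable function. A minimal sketch, assuming `prediction`, `target` and `simulatedPrice` are equal-length array-likes; the name `evaluation_score` is my own, not from the competition:

```python
import numpy as np

def evaluation_score(prediction, target, simulated_price):
    """Reward predicted demand that is met, penalise over-prediction at 1.6x."""
    prediction = np.asarray(prediction, dtype=float)
    target = np.asarray(target, dtype=float)
    simulated_price = np.asarray(simulated_price, dtype=float)
    overshoot = np.maximum(prediction - target, 0)  # units predicted but never sold
    return np.sum((prediction - overshoot * 1.6) * simulated_price)

# a perfect prediction earns the full simulated revenue:
print(evaluation_score([2, 3], [2, 3], [10, 10]))  # 50.0
```

Note that over-predicting a single unit costs 1.6 times its price, so the score can go negative for wildly optimistic predictions.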
## Submission structure
```
# submission = items[['itemID']]
# submission['demandPrediction'] = 0 # prediction here
# submission.to_csv('submission.csv', sep = '|', index=False)
```
# First Model (aggregating by every two weeks before target)
## - Creating the structure
```
df = orders.copy()
df.tail()
df.tail().time.dt.dayofweek
# We want the last dayofweek from training to be 6
(df.tail().time.dt.dayofyear + 2) // 7
(df.head().time.dt.dayofyear + 2) // 7
df['week'] = (df.time.dt.dayofyear + 2 + 7) // 14
# + 7 because we want weeks 25 and 26 to be together, week 0 will be discarded
maxx = df.week.max()
minn = df.week.min()
minn, maxx
n_items = items['itemID'].nunique()
print('total number of items:', n_items)
print('expected number of instances:', n_items * (maxx + 1))
mi = pd.MultiIndex.from_product([range(0, maxx + 1), items['itemID']], names=['week', 'itemID'])
data = pd.DataFrame(index = mi)
data = data.join(df.groupby(['week', 'itemID'])[['order']].sum(), how = 'left')
data.fillna(0, inplace = True)
data.groupby('itemID').count().min()
df
df.groupby('itemID')['salesPrice'].nunique().describe()
df.groupby('itemID')['salesPrice'].median()
```
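The fortnight index assigned to `df['week']` can be sanity-checked on plain dates without pandas. A sketch using only the standard library, replicating the `(dayofyear + 2 + 7) // 14` arithmetic from the cell above (the year 2018 is illustrative only):

```python
from datetime import date

def fortnight_index(d, offset=2):
    """Replicates df['week'] = (time.dt.dayofyear + 2 + 7) // 14 from the cell above."""
    day_of_year = d.timetuple().tm_yday
    return (day_of_year + offset + 7) // 14

print(fortnight_index(date(2018, 1, 1)))   # dayofyear 1  -> (1 + 9) // 14  = 0
print(fortnight_index(date(2018, 6, 30)))  # dayofyear 181 -> (181 + 9) // 14 = 13
```

Fortnight 0 contains only the first few days of the year, which is why it is discarded.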
## - Creating features
```
# rolling window example with shift
random_df = pd.DataFrame({'B': [0, 1, 2, 3, 4]})
random_df.shift(1).rolling(2).sum()
data.reset_index(inplace = True)
data = pd.merge(data, items[['itemID', 'manufacturer', 'category1', 'category2', 'category3']], on = 'itemID')
# I am going to create three features: the mean of the orders of the last [1, 2, 4] weeks for each item
# TODO:
# longer windows
# aggregating by other features
# week pairs since last peak
# week pairs from 2nd last to last peak
#
data.sort_values('week', inplace = True)
data
features = [
('itemID', 'item'),
('manufacturer', 'manuf'),
('category1', 'cat1'),
('category2', 'cat2'),
('category3', 'cat3')
]
for f, n in features:
    if f not in data.columns:
        print('oops, missing column:', f)
# f, name = ('manufacturer', 'manuf')
for f, name in features:
    print(f)
    temp = data.groupby([f, 'week'])[['order']].sum()
    shifted = temp.groupby(f)[['order']].shift(1)
    new_feature_block = pd.DataFrame()
    for n in range(3):
        rolled = shifted.groupby(f, as_index=False)['order'].rolling(2 ** n).mean()
        new_feature_block['%s_%d' % (name, 2 ** n)] = rolled.reset_index(0, drop=True)  # rolling has a weird index behavior...
    data = pd.merge(data, new_feature_block.reset_index(), on=[f, 'week'])
data.count() # the larger the window, the more NaNs are expected
data.fillna(-1, inplace=True)
# checking if we got what we wanted
data.query('itemID == 1')
```
## - fit, predict
```
# max expected rmse
from sklearn.metrics import mean_squared_error as mse
# pred = data.loc[1:12].groupby('itemID')['order'].mean().sort_index()
# target_week = data.loc[13:, 'order'].reset_index(level = 0, drop = True).sort_index()
# mse(target_week, pred) ** .5
train = data.query('1 <= week <= 12').reset_index()
test = data.query('week == 13').reset_index()
y_train = train.pop('order').values
y_test = test.pop('order').values
X_train = train.values
X_test = test.values
import xgboost as xgb
dtrain = xgb.DMatrix(X_train, y_train, missing = -1)
dtest = xgb.DMatrix(X_test, y_test, missing = -1)
# specify parameters via map
param = {'max_depth':6, 'eta':0.01, 'objective':'reg:squarederror'}
num_round = 200
bst = xgb.train(param, dtrain,
num_round, early_stopping_rounds = 5,
evals = [(dtrain, 'train'), (dtest, 'test')])
# why is the error so large? inspect a few individual items
data.query('itemID == 10')
data.query('itemID == 100')
data.query('itemID == 1000')
zeros = data.groupby('itemID')['order'].apply(lambda x : (x == 0).mean())
plt.hist(zeros, bins = 60);
```
# [Introduction to Data Science: A Comp-Math-Stat Approach](https://lamastex.github.io/scalable-data-science/as/2019/)
## YOIYUI001, Summer 2019
©2019 Raazesh Sainudiin. [Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)
# 08. Pseudo-Random Numbers, Simulating from Some Discrete and Continuous Random Variables
- The $Uniform(0,1)$ RV
- The $Bernoulli(\theta)$ RV
- Simulating from the $Bernoulli(\theta)$ RV
- The Equi-Probable $de\,Moivre(k)$ RV
- Simulating from the Equi-Probable $de\,Moivre(k)$ RV
- The $Uniform(\theta_1, \theta_2)$ RV
- Simulating from the $Uniform(\theta_1, \theta_2)$ RV
- The $Exponential(\lambda)$ RV
- Simulating from the $Exponential(\lambda)$ RV
- The standard $Cauchy$ RV
- Simulating from the standard $Cauchy$ RV
- Investigating running means
- Replicable samples
- A simple simulation
In the last notebook, we started to look at how we can produce realisations from the most elementary $Uniform(0,1)$ random variable.
i.e., how can we produce samples $(x_1, x_2, \ldots, x_n)$ from $X_1, X_2, \ldots, X_n$ $\overset{IID}{\thicksim}$ $Uniform(0,1)$?
What is SageMath doing when we ask for random()?
```
random()
```
We looked at how Modular arithmetic and number theory gives us pseudo-random number generators.
We used linear congruential generators (LCG) as simple pseudo-random number generators.
Remember that "pseudo-random" means that the numbers are not really random. We saw that some linear congruential generators (LCG) have much shorter, more predictable, patterns than others and we learned what makes a good LCG.
We introduced the pseudo-random number generator (PRNG) called the Mersenne Twister that we will use for simulation purposes in this course. It is based on more sophisticated theory than that of LCG but the basic principles of recurrence relations are the same.
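As a reminder of the previous notebook, a linear congruential generator can be sketched in a few lines of plain Python. The modulus, multiplier and increment below are small illustrative values chosen so the period is easy to see, not the parameters from any earlier notebook:

```python
def lcg(modulus, a, c, seed, n):
    """Generate n pseudo-random numbers in [0, 1) via x_{k+1} = (a*x_k + c) mod m."""
    samples = []
    x = seed
    for _ in range(n):
        x = (a * x + c) % modulus
        samples.append(x / modulus)  # scale the integer state into [0, 1)
    return samples

# a tiny modulus makes the repeating pattern obvious: the sequence has period 8
print(lcg(modulus=8, a=5, c=1, seed=0, n=10))
```

With these parameters the generator achieves the full period of 8, after which the same 8 values repeat forever; a good LCG has a period long enough that no repetition is ever observed in practice.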
# The $Uniform(0,1)$ Random Variable
Recall that the $Uniform(0,1)$ random variable is the fundamental model as we can transform it to any other random variable, random vector or random structure. The PDF $f$ and DF $F$ of $X \sim Uniform(0,1)$ are:
$f(x) = \begin{cases} 0 & \text{if} \ x \notin [0,1] \\ 1 & \text{if} \ x \in [0,1] \end{cases}$
$F(x) = \begin{cases} 0 & \text{if} \ x < 0 \\ 1 & \text{if} \ x > 1 \\ x & \text{if} \ x \in [0,1] \end{cases}$
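These piecewise definitions translate directly into code. A plain-Python sketch (a special case of the more general `uniformPDF` and `uniformCDF` functions that appear later in this notebook):

```python
def uniform01_pdf(x):
    """f(x) = 1 on [0, 1], 0 elsewhere."""
    return 1.0 if 0 <= x <= 1 else 0.0

def uniform01_cdf(x):
    """F(x) = 0 below 0, x on [0, 1], 1 above 1."""
    if x < 0:
        return 0.0
    if x > 1:
        return 1.0
    return float(x)

print(uniform01_pdf(0.3), uniform01_cdf(0.3))  # 1.0 0.3
```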
We use the Mersenne twister pseudo-random number generator to mimic independent and identically distributed draws from the $uniform(0,1)$ RV.
In Sage, we use the python random module to generate pseudo-random numbers for us. (We have already used it: remember randint?)
random() will give us one simulation from the $Uniform(0,1)$ RV:
```
random()
```
If we want a whole simulated sample we can use a list comprehension. We will be using this technique frequently, so make sure you understand what is going on: `for i in range(3)` acts as a counter, giving us 3 simulated values in the list we are making.
```
[random() for i in range(3)]
listOfUniformSamples = [random() for i in range(3) ]
listOfUniformSamples
```
If we do this again, we will get a different sample:
```
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
```
Often it is useful to be able to replicate the same random sample. For example, if we were writing some code to do some simulations using samples from a PRNG, and we "improved" the way that we were doing it, how would we want to test our improvement? If we could replicate the same samples then we could show that our new code was equivalent to our old code, just more efficient.
Remember when we were using the LCGs, and we could set the seed $x_0$? More sophisticated PRNGs like the Mersenne Twister also have a seed. By setting this seed to a specified value we can make sure that we can replicate samples.
```
?set_random_seed
set_random_seed(256526)
listOfUniformSamples = [random() for i in range(3) ]
listOfUniformSamples
initial_seed()
```
Now we can replicate the same sample again by setting the seed to the same value:
```
set_random_seed(256526)
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
initial_seed()
set_random_seed(2676676766)
listOfUniformSamples2 = [random() for i in range(3) ]
listOfUniformSamples2
initial_seed()
```
We can compare some samples visually by plotting them:
```
set_random_seed(256526)
listOfUniformSamples = [(i,random()) for i in range(100)]
plotsSeed1 = points(listOfUniformSamples)
t1 = text('Seed 1 = 256526', (60,1.2), rgbcolor='blue',fontsize=10)
set_random_seed(2676676766)
plotsSeed2 = points([(i,random()) for i in range(100)],rgbcolor="red")
t2 = text('Seed 2 = 2676676766', (60,1.2), rgbcolor='red',fontsize=10)
bothSeeds = plotsSeed1 + plotsSeed2
t31 = text('Seed 1 and', (30,1.2), rgbcolor='blue',fontsize=10)
t32 = text('Seed 2', (65,1.2), rgbcolor='red',fontsize=10)
show(graphics_array( (plotsSeed1+t1,plotsSeed2+t2, bothSeeds+t31+t32)),figsize=[9,3])
```
### YouTry
Try looking at the more advanced documentation and play a bit.
```
#?sage.misc.randstate
```
(end of You Try)
---
---
### Question:
What can we do with samples from a $Uniform(0,1)$ RV? Why bother?
### Answer:
We can use them to sample or simulate from other, more complex, random variables.
# The $Bernoulli(\theta)$ Random Variable
The $Bernoulli(\theta)$ RV $X$ with PMF $f(x;\theta)$ and DF $F(x;\theta)$ parameterised by some real $\theta\in [0,1]$ is a discrete random variable with only two possible outcomes.
$f(x;\theta)= \theta^x (1-\theta)^{1-x} \mathbf{1}_{\{0,1\}}(x) =
\begin{cases}
\theta & \text{if} \ x=1,\\
1-\theta & \text{if} \ x=0,\\
0 & \text{otherwise}
\end{cases}$
$F(x;\theta) =
\begin{cases}
1 & \text{if} \ 1 \leq x,\\
1-\theta & \text{if} \ 0 \leq x < 1,\\
0 & \text{otherwise}
\end{cases}$
Here are some functions for the PMF and DF of a $Bernoulli$ RV, along with various utility functions that we will use later. Let's take a quick look at them.
```
def bernoulliPMF(x, theta):
    '''Probability mass function for Bernoulli(theta).
    Param x is the value to find the Bernoulli probability mass of.
    Param theta is the theta parameterising this Bernoulli RV.'''
    retValue = 0
    if x == 1:
        retValue = theta
    elif x == 0:
        retValue = 1 - theta
    return retValue

def bernoulliCDF(x, theta):
    '''DF for Bernoulli(theta).
    Param x is the value to find the Bernoulli cumulative density function of.
    Param theta is the theta parameterising this Bernoulli RV.'''
    retValue = 0
    if x >= 1:
        retValue = 1
    elif x >= 0:
        retValue = 1 - theta
    # in the case where x < 0, retValue is the default of 0
    return retValue
# PMF plot
def pmfPlot(outcomes, pmf_values):
    '''Returns a pmf plot for a discrete distribution.'''
    pmf = points(zip(outcomes, pmf_values), rgbcolor="blue", pointsize='20')
    for i in range(len(outcomes)):
        pmf += line([(outcomes[i], 0), (outcomes[i], pmf_values[i])], rgbcolor="blue", linestyle=":")
    # padding
    pmf += point((0,1), rgbcolor="black", pointsize="0")
    return pmf
# CDF plot
def cdfPlot(outcomes, cdf_values):
    '''Returns a DF plot for a discrete distribution.'''
    cdf_pairs = zip(outcomes, cdf_values)
    cdf = point(cdf_pairs, rgbcolor="red", faceted=false, pointsize="20")
    for k in range(len(cdf_pairs)):
        x, kheight = cdf_pairs[k]  # unpack tuple
        previous_x = 0
        previous_height = 0
        if k > 0:
            previous_x, previous_height = cdf_pairs[k-1]  # unpack previous tuple
        cdf += line([(previous_x, previous_height), (x, previous_height)], rgbcolor="grey")
        cdf += points((x, previous_height), rgbcolor="white", faceted=true, pointsize="20")
        cdf += line([(x, previous_height), (x, kheight)], rgbcolor="blue", linestyle=":")
    # padding
    max_index = len(outcomes) - 1
    cdf += line([(outcomes[0]-0.2, 0), (outcomes[0], 0)], rgbcolor="grey")
    cdf += line([(outcomes[max_index], cdf_values[max_index]), (outcomes[max_index]+0.2, cdf_values[max_index])], \
                rgbcolor="grey")
    return cdf
def makeFreqDictHidden(myDataList):
    '''Make a frequency mapping out of a list of data.
    Param myDataList, a list of data.
    Return a dictionary mapping each data value from min to max in steps of 1 to its frequency count.'''
    freqDict = {}  # start with an empty dictionary
    sortedMyDataList = sorted(myDataList)
    for k in sortedMyDataList:
        freqDict[k] = myDataList.count(k)
    return freqDict  # return the dictionary created

def makeEMFHidden(myDataList):
    '''Make an empirical mass function from a data list.
    Param myDataList, list of data to make emf from.
    Return list of tuples comprising (data value, relative frequency) ordered by data value.'''
    freqs = makeFreqDictHidden(myDataList)  # make the frequency counts mapping
    totalCounts = sum(freqs.values())
    relFreqs = [fr/(1.0*totalCounts) for fr in freqs.values()]  # use a list comprehension
    numRelFreqPairs = zip(freqs.keys(), relFreqs)  # zip the keys and relative frequencies together
    numRelFreqPairs.sort()  # sort the list of tuples
    return numRelFreqPairs

from pylab import array

def makeEDFHidden(myDataList):
    '''Make an empirical distribution function from a data list.
    Param myDataList, list of data to make edf from.
    Return list of tuples comprising (data value, cumulative relative frequency) ordered by data value.'''
    freqs = makeFreqDictHidden(myDataList)  # make the frequency counts mapping
    totalCounts = sum(freqs.values())
    relFreqs = [fr/(1.0*totalCounts) for fr in freqs.values()]  # use a list comprehension
    relFreqsArray = array(relFreqs)
    cumFreqs = list(relFreqsArray.cumsum())
    numCumFreqPairs = zip(freqs.keys(), cumFreqs)  # zip the keys and cumulative relative frequencies together
    numCumFreqPairs.sort()  # sort the list of tuples
    return numCumFreqPairs
# EPMF plot
def epmfPlot(samples):
    '''Returns an empirical probability mass function plot from samples data.'''
    epmf_pairs = makeEMFHidden(samples)
    epmf = point(epmf_pairs, rgbcolor="blue", pointsize="20")
    for k in epmf_pairs:  # for each tuple in the list
        kkey, kheight = k  # unpack tuple
        epmf += line([(kkey, 0), (kkey, kheight)], rgbcolor="blue", linestyle=":")
    # padding
    epmf += point((0,1), rgbcolor="black", pointsize="0")
    return epmf

# ECDF plot
def ecdfPlot(samples):
    '''Returns an empirical cumulative distribution function plot from samples data.'''
    ecdf_pairs = makeEDFHidden(samples)
    ecdf = point(ecdf_pairs, rgbcolor="red", faceted=false, pointsize="20")
    for k in range(len(ecdf_pairs)):
        x, kheight = ecdf_pairs[k]  # unpack tuple
        previous_x = 0
        previous_height = 0
        if k > 0:
            previous_x, previous_height = ecdf_pairs[k-1]  # unpack previous tuple
        ecdf += line([(previous_x, previous_height), (x, previous_height)], rgbcolor="grey")
        ecdf += points((x, previous_height), rgbcolor="white", faceted=true, pointsize="20")
        ecdf += line([(x, previous_height), (x, kheight)], rgbcolor="blue", linestyle=":")
    # padding
    ecdf += line([(ecdf_pairs[0][0]-0.2, 0), (ecdf_pairs[0][0], 0)], rgbcolor="grey")
    max_index = len(ecdf_pairs) - 1
    ecdf += line([(ecdf_pairs[max_index][0], ecdf_pairs[max_index][1]), (ecdf_pairs[max_index][0]+0.2, \
                  ecdf_pairs[max_index][1])], rgbcolor="grey")
    return ecdf
```
We can see the effect of varying $\theta$ interactively:
```
@interact
def _(theta=(0.5)):
    '''Interactive function to plot the bernoulli pmf and cdf.'''
    if theta <= 1 and theta >= 0:
        outcomes = (0, 1)  # define the bernoulli outcomes
        print "Bernoulli (", RR(theta).n(digits=2), ") pmf and cdf"
        # pmf plot
        pmf_values = [bernoulliPMF(x, theta) for x in outcomes]
        pmf = pmfPlot(outcomes, pmf_values)  # this is one of our own, hidden, functions
        # cdf plot
        cdf_values = [bernoulliCDF(x, theta) for x in outcomes]
        cdf = cdfPlot(outcomes, cdf_values)  # this is one of our own, hidden, functions
        show(graphics_array([pmf, cdf]), figsize=[8,3])
    else:
        print "0 <= theta <= 1"
```
Don't worry about how these plots are done: you are not expected to be able to understand all of these details now.
Just use them to see the effect of varying $\theta$.
## Simulating a sample from the $Bernoulli(\theta)$ RV
We can simulate a sample from a $Bernoulli$ distribution by transforming input from a $Uniform(0,1)$ distribution using the floor() function in Sage. In maths, $\lfloor x \rfloor$, the 'floor of $x$' is the largest integer that is smaller than or equal to $x$. For example, $\lfloor 3.8 \rfloor = 3$.
```
z=3.8
floor(z)
```
Using floor, we can do inversion sampling from the $Bernoulli(\theta)$ RV using the $Uniform(0,1)$ random variable that we said is the fundamental model.
We will introduce inversion sampling more formally later. In general, inversion sampling means using the inverse of the CDF $F$, $F^{[-1]}$, to transform input from a $Uniform(0,1)$ distribution.
To simulate from the $Bernoulli(\theta)$, we can use the following algorithm:
### Input:
- $u \thicksim Uniform(0,1)$ from a PRNG, $\qquad \qquad \text{where, } \sim$ means "sample from"
- $\theta$, the parameter
### Output:
$x \thicksim Bernoulli(\theta)$
### Steps:
- $u \leftarrow Uniform(0,1)$
- $x \leftarrow \lfloor u + \theta \rfloor$
- Return $x$
We can illustrate this with SageMath:
```
theta = 0.5 # theta must be such that 0 <= theta <= 1
u = random()
x = floor(u + theta)
x
```
To make a number of simulations, we can use list comprehensions again:
```
theta = 0.5
n = 20
randomUs = [random() for i in range(n)]
simulatedBs = [floor(u + theta) for u in randomUs]
simulatedBs
```
To make modular reusable code we can package up what we have done as functions.
The function `bernoulliFInverse(u, theta)` codes the inverse of the CDF of a Bernoulli distribution parameterised by `theta`. The function `bernoulliSample(n, theta)` uses `bernoulliFInverse(...)` in a list comprehension to simulate n samples from a Bernoulli distribution parameterised by theta, i.e., the distribution of our $Bernoulli(\theta)$ RV.
```
def bernoulliFInverse(u, theta):
    '''A function to evaluate the inverse CDF of a bernoulli.
    Param u is the value to evaluate the inverse CDF at.
    Param theta is the distribution parameter.
    Returns inverse CDF under theta evaluated at u'''
    return floor(u + theta)

def bernoulliSample(n, theta):
    '''A function to simulate samples from a bernoulli distribution.
    Param n is the number of samples to simulate.
    Param theta is the bernoulli distribution parameter.
    Returns a simulated Bernoulli sample as a list'''
    us = [random() for i in range(n)]
    # use bernoulliFInverse in a list comprehension
    return [bernoulliFInverse(u, theta) for u in us]
```
Note that we are using a list comprehension and the built-in SageMath `random()` function to make a list of pseudo-random simulations from the $Uniform(0,1)$. The length of the list is determined by the value of n. Inside the body of the function we assign this list to a variable named `us` (i.e., u plural). We then use another list comprehension to make our simulated sample. This list comprehension works by calling our function `bernoulliFInverse(...)` and passing in values for theta together with each u in us in turn.
Let's try a small number of samples:
```
theta = 0.2
n = 10
samples = bernoulliSample(n, theta)
samples
```
Now lets explore the effect of interactively varying n and $\theta$:
```
@interact
def _(theta=(0.5), n=(10,(0..1000))):
    '''Interactive function to plot samples from bernoulli distribution.'''
    if theta >= 0 and theta <= 1:
        print "epmf and ecdf for ", n, " samples from Bernoulli (", theta, ")"
        samples = bernoulliSample(n, theta)
        # epmf plot
        epmf = epmfPlot(samples)  # this is one of our hidden functions
        # ecdf plot
        ecdf = ecdfPlot(samples)  # this is one of our hidden functions
        show(graphics_array([epmf, ecdf]), figsize=[8,3])
    else:
        print "0 <= theta <= 1, n > 0"
```
You can vary $\theta$ and $n$ on the interactive plot. You should be able to see that as $n$ increases, the empirical plots should get closer to the theoretical $f$ and $F$.
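The convergence you see in the interactive plot can also be checked numerically. A plain-Python sketch, using the standard `random` module with a fixed seed in place of SageMath's `random()` so the result is reproducible:

```python
import random

def bernoulli_proportion(n, theta, seed=0):
    """Simulate n Bernoulli(theta) draws via floor(u + theta), return the proportion of 1s."""
    rng = random.Random(seed)
    # int() floors here because u + theta is always non-negative
    ones = sum(int(rng.random() + theta) for _ in range(n))
    return ones / n

# the proportion of 1s settles down towards theta as n grows
for n in (10, 1000, 100000):
    print(n, bernoulli_proportion(n, theta=0.3))
```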
### YouTry
Check that you understand what `floor` is doing. We have put some extra print statements into our demonstration of floor so that you can see what is going on in each step. Try evaluating this cell several times so that you see what happens with different values of `u`.
```
theta = 0.5 # theta must be such that 0 <= theta <= 1
u = random()
print "u is", u
print "u + theta is", (u + theta)
print "floor(u + theta) is", floor(u + theta)
```
In the cell below we use floor to get 1's and 0's from the pseudo-random u's given by random(). It is effectively doing exactly the same thing as the functions above that we use to simulate a specified number of $Bernoulli(\theta)$ RVs, but the way that it is written may be easier to understand. If `floor` is doing what we want it to, then when `n` is sufficiently large, we'd expect our proportion of `1`s to be close to `theta` (remember Kolmogorov's axiomatic motivations for probability!). Try changing the value assigned to the variable `theta` and re-evaluating the cell to check this.
```
theta = 0.7  # theta must be such that 0 <= theta <= 1
listFloorResults = []  # an empty list to store results in
n = 100000  # how many iterations to do
for i in range(n):  # a for loop to do something n times
    u = random()  # generate u
    x = floor(u + theta)  # use floor
    listFloorResults.append(x)  # add x to the list of results
listFloorResults.count(1)*1.0/len(listFloorResults)  # proportion of 1s in the results
```
# The equi-probable $de~Moivre(\theta)$ Random Variable
The $de~Moivre(\theta_1,\theta_2,\ldots,\theta_k)$ RV is the natural generalisation of the $Bernoulli(\theta)$ RV to more than two outcomes. Take a die (i.e. one of a pair of dice): there are 6 possible outcomes from tossing a normal six-sided die (the outcome is which face is on the top). To start with, we allow the possibility that the different faces are loaded so that they have different probabilities of being the face on top when we throw the die. In this case, $k=6$ and the parameters $\theta_1$, $\theta_2$, ..., $\theta_6$ specify how the die is loaded, and the number on the upper-most face when the die is tossed is a $de\,Moivre$ random variable parameterised by $\theta_1,\theta_2,\ldots,\theta_6$.
If $\theta_1=\theta_2=\ldots=\theta_6= \frac{1}{6}$ then we have a fair die.
Here are some functions for the equi-probable $de\, Moivre$ PMF and CDF where we code the possible outcomes as the numbers on the faces of a k-sided die, i.e, 1,2,...k.
```
def deMoivrePMF(x, k):
    '''Probability mass function for equi-probable de Moivre(k).
    Param x is the value to evaluate the deMoivre pmf at.
    Param k is the k parameter for an equi-probable deMoivre.
    Returns the evaluation of the deMoivre(k) pmf at x.'''
    if (int(x) == x) and (x > 0) and (x <= k):
        return 1.0/k
    else:
        return 0

def deMoivreCDF(x, k):
    '''DF for equi-probable de Moivre(k).
    Param x is the value to evaluate the deMoivre cdf at.
    Param k is the k parameter for an equi-probable deMoivre.
    Returns the evaluation of the deMoivre(k) cdf at x.'''
    return 1.0*x/k

@interact
def _(k=(6)):
    '''Interactive function to plot the de Moivre pmf and cdf.'''
    if (int(k) == k) and (k >= 1):
        outcomes = range(1, k+1, 1)  # define the outcomes
        pmf_values = [deMoivrePMF(x, k) for x in outcomes]
        print "equi-probable de Moivre (", k, ") pmf and cdf"
        # pmf plot
        pmf = pmfPlot(outcomes, pmf_values)  # this is one of our hidden functions
        # cdf plot
        cdf_values = [deMoivreCDF(x, k) for x in outcomes]
        cdf = cdfPlot(outcomes, cdf_values)  # this is one of our hidden functions
        show(graphics_array([pmf, cdf]), figsize=[8,3])
    else:
        print "k must be an integer, k>0"
```
### YouTry
Try changing the value of k in the above interact.
## Simulating a sample from the equi-probable $de\,Moivre(k)$ random variable
We use floor ($\lfloor \, \rfloor$) again for simulating from the equi-probable $de \, Moivre(k)$ RV, but because we are defining our outcomes as 1, 2, ... k, we just add 1 to the result.
```
k = 6
u = random()
x = floor(u*k)+1
x
```
To simulate from the equi-probable $de\,Moivre(k)$, we can use the following algorithm:
#### Input:
- $u \thicksim Uniform(0,1)$ from a PRNG
- $k$, the parameter
#### Output:
- $x \thicksim \text{equi-probable } de \, Moivre(k)$
#### Steps:
- $u \leftarrow Uniform(0,1)$
- $x \leftarrow \lfloor uk \rfloor + 1$
- return $x$
We can illustrate this with SageMath:
```
def deMoivreFInverse(u, k):
    '''A function to evaluate the inverse CDF of an equi-probable de Moivre.
    Param u is the value to evaluate the inverse CDF at.
    Param k is the distribution parameter.
    Returns the inverse CDF for a de Moivre(k) distribution evaluated at u.'''
    return floor(k*u) + 1

def deMoivreSample(n, k):
    '''A function to simulate samples from an equi-probable de Moivre.
    Param n is the number of samples to simulate.
    Param k is the de Moivre distribution parameter.
    Returns a simulated sample of size n from an equi-probable de Moivre(k) distribution as a list.'''
    us = [random() for i in range(n)]
    return [deMoivreFInverse(u, k) for u in us]
```
A small sample:
```
deMoivreSample(15,6)
```
You should understand the `deMoivreFInverse` and `deMoivreSample` functions and be able to write something like them if you were asked to.
You are not expected to be to make the interactive plots below (but this is not too hard to do by syntactic mimicry and google searches!).
Now let's do some interactive sampling where you can vary $k$ and the sample size $n$:
```
@interact
def _(k=(6), n=(10,(0..500))):
    '''Interactive function to plot samples from equi-probable de Moivre distribution.'''
    if n > 0 and k >= 0 and int(k) == k:
        print "epmf and ecdf for ", n, " samples from equi-probable de Moivre (", k, ")"
        outcomes = range(1, k+1, 1)  # define the outcomes
        samples = deMoivreSample(n, k)  # get the samples
        epmf = epmfPlot(samples)  # this is one of our hidden functions
        ecdf = ecdfPlot(samples)  # this is one of our hidden functions
        show(graphics_array([epmf, ecdf]), figsize=[10,3])
    else:
        print "k>0 must be an integer, n>0"
```
Try changing $n$ and/or $k$. With $k = 40$ for example, you could be simulating the number on the first ball for $n$ Lotto draws.
### YouTry
A useful counterpart to the floor of a number is the ceiling, denoted $\lceil \, \rceil$. In maths, $\lceil x \rceil$, the 'ceiling of $x$' is the smallest integer that is larger than or equal to $x$. For example, $\lceil 3.8 \rceil = 4$. We can use the ceil function to do this in Sage:
```
ceil(3.8)
```
Try using `ceil` to check that you understand what it is doing. What would `ceil(0)` be?
# Inversion Sampler for Continuous Random Variables
When we simulated from the discrete RVs above, the $Bernoulli(\theta)$ and the equi-probable $de\,Moivre(k)$, we transformed some $u \thicksim Uniform(0,1)$ into some value for the RV.
Now we will look at the formal idea of an inversion sampler for continuous random variables. Inversion sampling for continuous random variables is a way to simulate values for a continuous random variable $X$ using $u \thicksim Uniform(0,1)$.
The idea of the inversion sampler is to treat $u \thicksim Uniform(0,1)$ as some value taken by the CDF $F$ and find the value $x$ at which $F(X \le x) = u$.
To find x where $F(X \le x) = u$ we need to use the inverse of $F$, $F^{[-1]}$. This is why it is called an **inversion sampler**.
Formalising this,
### Proposition
Let $F(x) := \int_{- \infty}^{x} f(y) \,d y : \mathbb{R} \rightarrow [0,1]$ be a continuous DF with density $f$, and let its inverse $F^{[-1]} $ be:
$$ F^{[-1]}(u) := \inf \{ x : F(x) = u \} : [0,1] \rightarrow \mathbb{R} $$
Then, $F^{[-1]}(U)$ has the distribution function $F$, provided $U \thicksim Uniform(0,1)$ ($U$ is a $Uniform(0,1)$ RV).
Note:
The infimum of a set $A$ of real numbers, denoted by $\inf(A)$, is the greatest lower bound of $A$: the largest number that is less than or equal to every element of $A$.
### Proof
The "one-line proof" of the proposition is due to the following equalities:
$$P(F^{[-1]}(U) \leq x) = P(\inf \{ y : F(y) = U \} \leq x ) = P(U \leq F(x)) = F(x), \quad \text{for all } x \in \mathbb{R} . $$
# Algorithm for Inversion Sampler
#### Input:
- A PRNG for $Uniform(0,1)$ samples
- A procedure to give us $F^{[-1]}(u)$, inverse of the DF of the target RV $X$ evaluated at $u$
#### Output:
- A sample $x$ from $X$ distributed according to $F$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u)$
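To see the sampler at work on a non-uniform distribution (the $Exponential(\lambda)$ RV appears later in this notebook), recall that $F(x;\lambda)=1-e^{-\lambda x}$ for $x \geq 0$, so $F^{[-1]}(u;\lambda) = -\frac{1}{\lambda}\ln(1-u)$. A plain-Python sketch using the standard `random` and `math` modules rather than anything Sage-specific:

```python
import math
import random

def exponential_f_inverse(u, lam):
    """Inverse CDF of Exponential(lam): -ln(1 - u) / lam, for u in [0, 1)."""
    return -math.log(1.0 - u) / lam

def exponential_sample(n, lam, seed=0):
    """Inversion sampler: push n Uniform(0,1) draws through the inverse CDF."""
    rng = random.Random(seed)
    return [exponential_f_inverse(rng.random(), lam) for _ in range(n)]

samples = exponential_sample(100000, lam=2.0)
print(sum(samples) / len(samples))  # should be close to the mean 1/lam = 0.5
```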
# The $Uniform(\theta_1, \theta_2)$ RV
We have already met the $Uniform(\theta_1, \theta_2)$ RV.
Given two real parameters $\theta_1,\theta_2 \in \mathbb{R}$, such that $\theta_1 < \theta_2$, the PDF of the $Uniform(\theta_1,\theta_2)$ RV $X$ is:
$$f(x;\theta_1,\theta_2) =
\begin{cases}
\frac{1}{\theta_2 - \theta_1} & \text{if }\theta_1 \leq x \leq \theta_2\text{,}\\
0 & \text{otherwise}
\end{cases}
$$
and its DF given by $F(x;\theta_1,\theta_2) = \int_{- \infty}^x f(y; \theta_1,\theta_2) \, dy$ is:
$$
F(x; \theta_1,\theta_2) =
\begin{cases}
0 & \text{if } x < \theta_1, \\
\frac{x-\theta_1}{\theta_2-\theta_1} & \text{if } \theta_1 \leq x \leq \theta_2,\\
1 & \text{if } x > \theta_2
\end{cases}
$$
For example, here are the PDF, CDF and inverse CDF for the $Uniform(-1,1)$:
<img src="images/UniformMinus11ThreeCharts.png" width=800>
As usual, we can make some SageMath functions for the PDF and CDF:
```
# uniform pdf
def uniformPDF(x, theta1, theta2):
    '''Uniform(theta1, theta2) pdf function f(x; theta1, theta2).
    x is the value to evaluate the pdf at.
    theta1, theta2 are the distribution parameters.'''
    retvalue = 0  # default return value
    if x >= theta1 and x <= theta2:
        retvalue = 1.0/(theta2-theta1)
    return retvalue

# uniform cdf
def uniformCDF(x, theta1, theta2):
    '''Uniform(theta1, theta2) CDF or DF function F(x; theta1, theta2).
    x is the value to evaluate the cdf at.
    theta1, theta2 are the distribution parameters.'''
    retvalue = 0  # default return value
    if x > theta2:
        retvalue = 1
    elif x > theta1:  # else-if
        retvalue = (x - theta1) / (theta2 - theta1)
    # if (x < theta1), retvalue will be 0
    return retvalue
```
Using these functions in an interactive plot, we can see the effect of changing the distribution parameters $\theta_1$ and $\theta_2$.
```
@interact
def InteractiveUniformPDFCDFPlots(theta1=0, theta2=1):
    if theta2 > theta1:
        print "Uniform(", RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") pdf and cdf"
        p1 = line([(theta1-1, 0), (theta1, 0)], rgbcolor='blue')
        p1 += line([(theta1, 1/(theta2-theta1)), (theta2, 1/(theta2-theta1))], rgbcolor='blue')
        p1 += line([(theta2, 0), (theta2+1, 0)], rgbcolor='blue')
        p2 = line([(theta1-1, 0), (theta1, 0)], rgbcolor='red')
        p2 += line([(theta1, 0), (theta2, 1)], rgbcolor='red')
        p2 += line([(theta2, 1), (theta2+1, 1)], rgbcolor='red')
        show(graphics_array([p1, p2]), figsize=[8,3])
    else:
        print "theta2 must be greater than theta1"
```
# Simulating from the $Uniform(\theta_1, \theta_2)$ RV
We can simulate from the $Uniform(\theta_1,\theta_2)$ using the inversion sampler, provided that we can get an expression for $F^{[-1]}$ that can be implemented as a procedure.
We can get this by solving for $x$ in terms of $u=F(x;\theta_1,\theta_2)$:
$$
u = \frac{x-\theta_1}{\theta_2-\theta_1} \quad \iff \quad x = (\theta_2-\theta_1)u+\theta_1 \quad \iff \quad F^{[-1]}(u;\theta_1,\theta_2) = \theta_1+(\theta_2-\theta_1)u
$$
<img src="images/Week7InverseUniformSampler.png" width=600>
## Algorithm for Inversion Sampler for the $Uniform(\theta_1, \theta_2)$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
- $\theta_1$, $\theta_2$
#### Output:
- A sample $x \thicksim Uniform(\theta_1, \theta_2)$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = \theta_1 + u(\theta_2 - \theta_1)$
- Return $x$
We can illustrate this with SageMath by writing a function to calculate the inverse of the CDF of a uniform distribution parameterised by theta1 and theta2. Given a value between 0 and 1 for the parameter u, it returns the height of the inverse CDF at this point, i.e. the value in the range theta1 to theta2 where the CDF evaluates to u.
```
def uniformFInverse(u, theta1, theta2):
'''A function to evaluate the inverse CDF of a uniform(theta1, theta2) distribution.
u, u should be 0 <= u <= 1, is the value to evaluate the inverse CDF at.
theta1, theta2, theta2 > theta1, are the uniform distribution parameters.'''
return theta1 + (theta2 - theta1)*u
```
This function transforms a single $u$ into a single simulated value from the $Uniform(\theta_1, \theta_2)$, for example:
```
u = random()
theta1, theta2 = 3, 6
uniformFInverse(u, theta1, theta2)
```
Then we can use this function inside another function to generate a number of samples:
```
def uniformSample(n, theta1, theta2):
'''A function to simulate samples from a uniform distribution.
n > 0 is the number of samples to simulate.
theta1, theta2 (theta2 > theta1) are the uniform distribution parameters.'''
us = [random() for i in range(n)]
return [uniformFInverse(u, theta1, theta2) for u in us]
```
The basic strategy is the same as for simulating $Bernoulli$ and $de \, Moivre$ samples: we use a list comprehension and the built-in SageMath `random()` function to make a list of pseudo-random simulations from the $Uniform(0,1)$. The length of the list is determined by the value of `n`. Inside the body of the function we assign this list to a variable named `us` (i.e., `u` plural). We then use another list comprehension to make our simulated sample; it works by calling our function `uniformFInverse(...)`, passing in the values of `theta1` and `theta2` together with each `u` in `us` in turn.
You should be able to write simple functions like `uniformFInverse` and `uniformSample` yourself.
Try this for a small sample:
```
param1 = -5
param2 = 5
nToGenerate = 30
myUniformSample = uniformSample(nToGenerate, param1, param2)
print(myUniformSample)
```
Much more fun, we can make an interactive plot which uses the uniformSample(...) function to generate and plot while you choose the parameters and number to generate (you are not expected to be able to make interactive plots like this):
```
@interact
def _(theta1=-1, theta2=1, n=(1..5000)):
'''Interactive function to plot samples from uniform distribution.'''
if theta2 > theta1:
        if n == 1:
            print(n, "uniform(", RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") sample")
        else:
            print(n, "uniform(", RR(theta1).n(digits=2), ",", RR(theta2).n(digits=2), ") samples")
sample = uniformSample(n, theta1, theta2)
pts = zip(range(1,n+1,1),sample) # plot so that first sample is at x=1
p=points(pts)
p+= text(str(theta1), (0, theta1), fontsize=10, color='black') # add labels manually
p+= text(str(theta2), (0, theta2), fontsize=10, color='black')
p.show(xmin=0, xmax = n+1, ymin=theta1, ymax = theta2, axes=false, gridlines=[[0,n+1],[theta1,theta2]], \
figsize=[7,3])
else:
        print("Theta1 must be less than theta2")
```
We can get a better idea of the distribution of our sample using a histogram (the minimum sample size has been set to 50 here because the automatic histogram generation does not do a very good job with small samples).
```
import pylab
@interact
def _(theta1=0, theta2=1, n=(50..5000), Bins=5):
'''Interactive function to plot samples from uniform distribution as a histogram.'''
if theta2 > theta1:
sample = uniformSample(n, theta1, theta2)
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(sample, Bins, density=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram')
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
        print("Theta1 must be less than theta2")
```
# The $Exponential(\lambda)$ Random Variable
For a given $\lambda > 0$, an $Exponential(\lambda)$ Random Variable has the following PDF $f$ and DF $F$:
$$
f(x;\lambda) =\begin{cases}\lambda e^{-\lambda x} & \text{if }x \ge 0\text{,}\\ 0 & \text{otherwise}\end{cases}
$$
$$
F(x;\lambda) =\begin{cases}1 - e^{-\lambda x} & \text{if }x \ge 0\text{,}\\ 0 & \text{otherwise}\end{cases}
$$
An exponential distribution is useful because it can often be used to model inter-arrival times or other inter-event measurements (if you are familiar with the discrete $Poisson$ distribution, you may have met the $Exponential$ distribution as the time between $Poisson$ events). Here are some examples of random variables that are sometimes modelled with an exponential distribution:
- the time between the arrival of buses at a bus stop
- the distance between roadkills on a stretch of highway
In SageMath, we can use `exp(x)` to calculate $e^x$, for example:
```
x = 3.0
exp(x)
```
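To connect the DF to the bus-stop example above: $F(x;\lambda) = 1-e^{-\lambda x}$ directly gives the probability that the next bus arrives within $x$ minutes. A minimal plain-Python sketch (the rate $\lambda = 0.2$, i.e. a mean wait of $1/\lambda = 5$ minutes, is just an illustrative assumption):

```python
import math

def exponential_cdf(x, lam):
    """P(X <= x) for an Exponential(lam) random variable."""
    if x < 0:
        return 0.0
    return 1.0 - math.exp(-lam * x)

lam = 0.2                    # illustrative rate: mean inter-arrival time 1/lam = 5 minutes
p = exponential_cdf(5, lam)  # probability the next bus arrives within 5 minutes
print(round(p, 4))           # 1 - e^(-1), i.e. 0.6321
```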
We can code some functions for the PDF and DF of an $Exponential$ RV parameterised by $\lambda$.
**Note** that we cannot use the name `lambda` for the parameter, because in SageMath (and Python) `lambda` is a reserved keyword with a special meaning. Do you recall lambda expressions?
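As a quick reminder of why the name is off-limits, `lambda` is Python's keyword for creating a small anonymous function:

```python
# 'lambda' builds an anonymous function, so it cannot be used as a variable name
square = lambda x: x**2   # equivalent to: def square(x): return x**2
print(square(4))          # 16
```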
```
def exponentialPDF(x, lam):
'''Exponential pdf function.
x is the value we want to evaluate the pdf at.
lam is the exponential distribution parameter.'''
return lam*exp(-lam*x)
def exponentialCDF(x, lam):
'''Exponential cdf or df function.
x is the value we want to evaluate the cdf at.
lam is the exponential distribution parameter.'''
return 1 - exp(-lam*x)
```
You should be able to write simple functions like `exponentialPDF` and `exponentialCDF` yourself, but you are not expected to be able to make the interactive plots.
You can see the shapes of the PDF and CDF for different values of $\lambda$ using the interactive plot below.
```
@interact
def _(lam=('lambda',0.5),Xmax=(5..100)):
'''Interactive function to plot the exponential pdf and cdf.'''
if lam > 0:
        print("Exponential(", RR(lam).n(digits=2), ") pdf and cdf")
from pylab import arange
xvalues = list(arange(0.1, Xmax, 0.1))
p1 = line(zip(xvalues, [exponentialPDF(y, lam) for y in xvalues]), rgbcolor='blue')
p2 = line(zip(xvalues, [exponentialCDF(y, lam) for y in xvalues]), rgbcolor='red')
show(graphics_array([p1, p2]),figsize=[8,3])
else:
        print("Lambda must be greater than 0")
```
We are going to write some functions to help us to do inversion sampling from the $Exponential(\lambda)$ RV.
As before, we need an expression for $F^{[-1]}$ that can be implemented as a procedure.
We can get this by solving for $x$ in terms of $u=F(x;\lambda)$
### YouTry later
Show that
$$
F^{[-1]}(u;\lambda) =\frac{-1}{\lambda} \ln(1-u)
$$
where $\ln = \log_e$ is the natural logarithm.
(end of You try)
---
---
# Simulating from the $Exponential(\lambda)$ RV
## Algorithm for Inversion Sampler for the $Exponential(\lambda)$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
- $\lambda$
#### Output:
- A sample $x \thicksim Exponential(\lambda)$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = \frac{-1}{\lambda}\ln(1-u)$
- Return $x$
The function `exponentialFInverse(u, lam)` codes the inverse of the CDF of an exponential distribution parameterised by `lam`. Given a value between 0 and 1 for the parameter `u`, it returns the height of the inverse CDF of the exponential distribution at this point, i.e. the value where the CDF evaluates to `u`. The function `exponentialSample(n, lam)` uses `exponentialFInverse(...)` to simulate `n` samples from an exponential distribution parameterised by `lam`.
```
def exponentialFInverse(u, lam):
    '''A function to evaluate the inverse CDF of an exponential distribution.
u is the value to evaluate the inverse CDF at.
lam is the exponential distribution parameter.'''
# log without a base is the natural logarithm
return (-1.0/lam)*log(1 - u)
def exponentialSample(n, lam):
'''A function to simulate samples from an exponential distribution.
n is the number of samples to simulate.
lam is the exponential distribution parameter.'''
us = [random() for i in range(n)]
return [exponentialFInverse(u, lam) for u in us]
```
We can have a look at a small sample:
```
lam = 0.5
nToGenerate = 30
sample = exponentialSample(nToGenerate, lam)
print(sorted(sample)) # recall that sorted makes a new sorted list
```
You should be able to write simple functions like `exponentialFInverse` and `exponentialSample` yourself by now.
The best way to visualise the results is to use a histogram. With this interactive plot you can explore the effect of varying lambda and n:
```
import pylab
@interact
def _(lam=('lambda',0.5), n=(50,(10..10000)), Bins=(5,(1,1000))):
'''Interactive function to plot samples from exponential distribution.'''
if lam > 0:
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(exponentialSample(n, lam), Bins, density=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram')
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
        print("Lambda must be greater than 0")
```
# The Standard $Cauchy$ Random Variable
A standard $Cauchy$ Random Variable has the following PDF $f$ and DF $F$:
$$
f(x) =\frac{1}{\pi(1+x^2)}\text{,}\,\, -\infty < x < \infty
$$
$$
F(x) = \frac{1}{\pi}\tan^{-1}(x) + 0.5
$$
The $Cauchy$ distribution is an interesting distribution because the expectation does not exist:
$$
\int \left|x\right|\,dF(x) = \frac{2}{\pi} \int_0^{\infty} \frac{x}{1+x^2}\,dx = \frac{1}{\pi} \left[\ln(1+x^2)\right]_0^{\infty} = \infty \ .
$$
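We can watch the divergence numerically: the truncated integral $\frac{2}{\pi}\int_0^T \frac{x}{1+x^2}\,dx = \frac{1}{\pi}\ln(1+T^2)$ grows without bound as $T$ increases. A plain-Python check:

```python
import math

def truncated_abs_moment(T):
    '''(2/pi) * integral of x/(1+x^2) from 0 to T, which equals ln(1+T^2)/pi.'''
    return math.log(1 + T**2) / math.pi

for T in [10, 100, 1000, 10**6]:
    print(T, truncated_abs_moment(T))
# each value is larger than the last: the integral diverges, so the expectation does not exist
```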
In SageMath, we can use the `arctan` function for $\tan^{-1}$ and `pi` for $\pi$, and code some functions for the PDF and DF of the standard $Cauchy$ as follows.
```
def cauchyPDF(x):
'''Standard Cauchy pdf function.
x is the value to evaluate the pdf at.'''
return 1.0/(pi.n()*(1+x^2))
def cauchyCDF(x):
'''Standard Cauchy cdf function.
x is the value to evaluate the cdf at.'''
return (1.0/pi.n())*arctan(x) + 0.5
```
You can see the shapes of the PDF and CDF using the plot below. Note that the PDF $f$ above is defined for $-\infty < x < \infty$, so we have to set some arbitrary limits on the minimum and maximum values to use for the x-axis of the plots. You can change these limits interactively.
```
@interact
def _(lower=(-4), upper=(4)):
'''Interactive function to plot the Cauchy pdf and cdf.'''
if lower < upper:
        print("Standard Cauchy pdf and cdf")
p1 = plot(cauchyPDF, lower,upper, rgbcolor='blue')
p2 = plot(cauchyCDF, lower,upper, rgbcolor='red')
show(graphics_array([p1, p2]),figsize=[8,3])
else:
        print("Upper must be greater than lower")
```
#### Constructing a standard $Cauchy$ RV
- Place a double light sabre (i.e., one that can shoot its laser beam from both ends, like Darth Maul's in Star Wars) on a Cartesian plane so that it is centred on the point $(1, 0)$.
- Randomly spin it (so that its spin angle to the x-axis is $\theta \thicksim Uniform (0, 2\pi)$).
- Let it come to rest.
- The y-coordinate of the point of intersection with the y-axis is a standard Cauchy RV.
You can see that we are equally likely to get positive and negative values (the density function of the standard $Cauchy$ RV is symmetrical about 0), and whenever the spin angle is close to $\frac{\pi}{2}$ ($90^{\circ}$) or $\frac{3\pi}{2}$ ($270^{\circ}$), the intersection will be a long way up or down the y-axis, i.e. a very positive or very negative value. If the light sabre comes to rest exactly parallel to the y-axis there is no intersection: a $Cauchy$ RV $X$ can take values $-\infty < x < \infty$.
<img src="images/Week7CauchyLightSabre.png" width=300>
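We can mimic the light-sabre construction in a few lines of plain Python: for a spin angle $\theta \thicksim Uniform(0, 2\pi)$, the intersection coordinate works out to $\tan(\theta)$ up to sign, which has the standard $Cauchy$ distribution. This sketch (with an arbitrary fixed seed) shows the symmetry about 0 alongside the occasional extreme value:

```python
import math
import random

random.seed(0)  # arbitrary fixed seed so the sketch is reproducible
n = 10000
# spin angles theta ~ Uniform(0, 2*pi); tan has period pi, so tan(theta) is standard Cauchy
sample = sorted(math.tan(random.uniform(0, 2 * math.pi)) for _ in range(n))
median = 0.5 * (sample[n // 2 - 1] + sample[n // 2])
print(median)       # close to 0: the distribution is symmetric about 0
print(max(sample))  # huge: spin angles near pi/2 or 3*pi/2 give extreme values
```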
## Simulating from the standard $Cauchy$
We can perform inversion sampling on the $Cauchy$ RV by transforming a $Uniform(0,1)$ random variable into a $Cauchy$ random variable using the inverse CDF.
We can get this by replacing $F(x)$ by $u$ in the expression for $F(x)$:
$$
\frac{1}{\pi}\tan^{-1}(x) + 0.5 = u
$$
and solving for $x$:
$$
\begin{array}{lcl} \frac{1}{\pi}\tan^{-1}(x) + 0.5 = u & \iff & \frac{1}{\pi} \tan^{-1}(x) = u - \frac{1}{2}\\ & \iff & \tan^{-1}(x) = \left(u - \frac{1}{2}\right)\pi\\ & \iff & \tan(\tan^{-1}(x)) = \tan\left(\left(u - \frac{1}{2}\right)\pi\right)\\ & \iff & x = \tan\left(\left(u - \frac{1}{2}\right)\pi\right) \end{array}
$$
## Inversion Sampler for the standard $Cauchy$ RV
#### Input:
- $u \thicksim Uniform(0,1)$
- $F^{[-1]}(u)$
#### Output:
- A sample $x \thicksim \text{standard } Cauchy$
#### Algorithm steps:
- Draw $u \sim Uniform(0,1)$
- Calculate $x = F^{[-1]}(u) = \tan\left(\left(u - \frac{1}{2}\right)\pi\right)$
- Return $x$
The function `cauchyFInverse(u)` codes the inverse of the CDF of the standard Cauchy distribution. Given a value between 0 and 1 for the parameter `u`, it returns the height of the inverse CDF of the standard $Cauchy$ at this point, i.e. the value where the CDF evaluates to `u`. The function `cauchySample(n)` uses `cauchyFInverse(...)` to simulate `n` samples from a standard Cauchy distribution.
```
def cauchyFInverse(u):
'''A function to evaluate the inverse CDF of a standard Cauchy distribution.
u is the value to evaluate the inverse CDF at.'''
return RR(tan(pi*(u-0.5)))
def cauchySample(n):
'''A function to simulate samples from a standard Cauchy distribution.
n is the number of samples to simulate.'''
us = [random() for i in range(n)]
return [cauchyFInverse(u) for u in us]
```
And we can visualise these simulated samples with an interactive plot:
```
@interact
def _(n=(50,(0..5000))):
'''Interactive function to plot samples from standard Cauchy distribution.'''
    if n == 1:
        print(n, "Standard Cauchy sample")
    else:
        print(n, "Standard Cauchy samples")
sample = cauchySample(n)
pts = zip(range(1,n+1,1),sample)
p=points(pts)
p+= text(str(floor(min(sample))), (0, floor(min(sample))), \
fontsize=10, color='black') # add labels manually
p+= text(str(ceil(max(sample))), (0, ceil(max(sample))), \
fontsize=10, color='black')
p.show(xmin=0, xmax = n+1, ymin=floor(min(sample)), \
ymax = ceil(max(sample)), axes=false, \
gridlines=[[0,n+1],[floor(min(sample)),ceil(max(sample))]],\
figsize=[7,3])
```
Notice how we can get some very extreme values. This is because of the 'thick tails' of the density function of the $Cauchy$ RV. Think about this in relation to the double light sabre visualisation. We can see the effect of the extreme values with a histogram visualisation as well. The interactive plot below will only use values between `lower` and `upper` in the histogram. Try increasing the sample size to something like 1000 and then gradually widening the limits:
```
import pylab
@interact
def _(n=(50,(0..5000)), lower=(-4), upper=(4), Bins=(5,(1,100))):
'''Interactive function to plot samples from
standard Cauchy distribution.'''
if lower < upper:
        if n == 1:
            print(n, "Standard Cauchy sample")
        else:
            print(n, "Standard Cauchy samples")
sample = cauchySample(n) # the whole sample
sampleToShow=[c for c in sample if (c >= lower and c <= upper)]
pylab.clf() # clear current figure
n, bins, patches = pylab.hist(sampleToShow, Bins, density=true)
pylab.ylabel('normalised count')
pylab.title('Normalised histogram, values between ' \
+ str(floor(lower)) + ' and ' + str(ceil(upper)))
pylab.savefig('myHist') # to actually display the figure
pylab.show()
else:
        print("lower must be less than upper")
```
# Running means
When we introduced the $Cauchy$ distribution, we noted that the expectation of the $Cauchy$ RV does not exist. This means that attempts to estimate the mean of a $Cauchy$ RV by looking at a sample mean will not be successful: as you take larger and larger samples, the effect of the extreme values will still cause the sample mean to swing around wildly (we will cover estimation properly soon). You are going to investigate the sample mean of simulated $Cauchy$ samples of steadily increasing size and show how unstable this is. A convenient way of doing this is to look at a running mean. We will start by working through the process of calculating some running means for the $Uniform(0,10)$, which do stabilise. You will then do the same thing for the $Cauchy$ and be able to see the instability.
We will be using the `pylab.cumsum` function, so we make sure that we have it available, and then generate a sample from the $Uniform(0,10)$:
```
from pylab import cumsum
nToGenerate = 10 # sample size to generate
theta1, theta2 = 0, 10 # uniform parameters
uSample = uniformSample(nToGenerate, theta1, theta2)
print(uSample)
```
We are going to treat this sample as though it is actually 10 samples of increasing size:
- sample 1 is the first element in uSample
- sample 2 contains the first 2 elements in uSample
- sample 3 contains the first 3 elements in uSample
- ...
- sample 10 contains the first 10 elements in uSample
We know that a sample mean is the sum of the elements in the sample divided by the number of elements in the sample $n$:
$$
\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i
$$
We can get the sum of the elements in each of our 10 samples with the cumulative sum of `uSample`.
We use `cumsum` to get the cumulative sum. This will be a `pylab` array (i.e., a `numpy.ndarray`), so we use the `list` function to turn it back into a list:
```
csUSample = list(cumsum(uSample))
print(csUSample)
```
What we have now is effectively a list
$$\left[\displaystyle\sum_{i=1}^1x_i, \sum_{i=1}^2x_i, \sum_{i=1}^3x_i, \ldots, \sum_{i=1}^{10}x_i\right]$$
So all we have to do is divide each element in `csUSample` by the number of elements that were summed to make it, and we have a list of running means
$$\left[\frac{1}{1}\displaystyle\sum_{i=1}^1x_i, \frac{1}{2}\sum_{i=1}^2x_i, \frac{1}{3}\sum_{i=1}^3x_i, \ldots, \frac{1}{10}\sum_{i=1}^{10}x_i\right]$$
We can get the running sample sizes using the `range` function:
```
samplesizes = list(range(1, len(uSample)+1))
samplesizes
```
And we can do the division with list comprehension:
```
uRunningMeans = [csUSample[i]/samplesizes[i] for i in range(nToGenerate)]
print(uRunningMeans)
```
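As a sanity check, the last running mean must equal the ordinary sample mean of the whole sample. Here is the same cumulative-sum recipe in a plain-Python sketch, without `pylab`:

```python
import random

random.seed(1)  # arbitrary fixed seed for reproducibility
sample = [random.uniform(0, 10) for _ in range(10)]  # a small Uniform(0,10) sample
running_sum = 0.0
running_means = []
for i, x in enumerate(sample, start=1):
    running_sum += x                       # cumulative sum of the first i values
    running_means.append(running_sum / i)  # running mean of the first i values
# the final running mean is just the plain sample mean
print(running_means[-1] == sum(sample) / len(sample))  # prints True
```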
We can pull all of this together into a function that produces a list of running means for sample sizes 1 to $n$:
```
def uniformRunningMeans(n, theta1, theta2):
'''Function to give a list of n running means from uniform(theta1, theta2).
n is the number of running means to generate.
theta1, theta2 are the uniform distribution parameters.
return a list of n running means.'''
sample = uniformSample(n, theta1, theta2)
from pylab import cumsum # we can import in the middle of code!
csSample = list(cumsum(sample))
samplesizes = range(1, n+1,1)
return [csSample[i]/samplesizes[i] for i in range(n)]
```
Have a look at the running means of 10 incrementally-sized samples:
```
nToGenerate = 10
theta1, theta2 = 0, 10
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
pts = zip(range(1, len(uRunningMeans)+1,1),uRunningMeans)
p = points(pts)
show(p, figsize=[5,3])
```
Recall that the expectation of $X \thicksim Uniform(\theta_1, \theta_2)$ is $E_{(\theta_1, \theta_2)}(X) = \frac{\theta_1 + \theta_2}{2}$.
In our simulations we are using $\theta_1 = 0$, $\theta_2 = 10$, so if $X \thicksim Uniform(0,10)$ then $E(X) = 5$.
To show that the running means of different simulations from a $Uniform$ distribution settle down to be close to the expectation, we can plot say 5 different groups of running means for sample sizes $1, \ldots, 1000$. We will use a line plot rather than plotting individual points.
```
nToGenerate = 1000
theta1, theta2 = 0, 10
iterations = 5
xvalues = range(1, nToGenerate+1,1)
for i in range(iterations):
redshade = 0.5*(iterations - 1 - i)/iterations # to get different colours for the lines
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
pts = zip(xvalues,uRunningMeans)
if (i == 0):
p = line(pts, rgbcolor = (redshade,0,1))
else:
p += line(pts, rgbcolor = (redshade,0,1))
show(p, figsize=[5,3])
```
### YouTry!
Your task is to now do the same thing for some standard Cauchy running means.
To start with, do not put everything into a function; just put statements into the cell(s) below to:
- Make a variable for the number of running means to generate; assign it a small value like 10 at this stage
- Use the `cauchySample` function to generate the sample from the standard $Cauchy$; have a look at your sample
- Make a named list of cumulative sums of your $Cauchy$ sample using `list` and `cumsum`, as we did above; have a look at your cumulative sums
- Make a named list of sample sizes, as we did above
- Use a list comprehension to turn the cumulative sums and sample sizes into a list of running means, as we did above
- Have a look at your running means; do they make sense to you given the individual sample values?

Add more cells as you need them.
When you are happy that you are doing the right things, **write a function**, parameterised by the number of running means to do, that returns a list of running means. Try to make your own function rather than copying and changing the one we used for the $Uniform$: you will learn more by trying to do it yourself. Please call your function `cauchyRunningMeans`, so that (if you have done everything else right), you'll be able to use some code we will supply you with to plot the results.
Try checking your function by using it to create a small list of running means. Check that the function does not report an error and gives you the kind of list you expect.
When you think that your function is working correctly, try evaluating the cell below: this will put the plot of 5 groups of $Uniform(0,10)$ running means beside a plot of 5 groups of standard $Cauchy$ running means produced by your function.
```
nToGenerate = 10000
theta1, theta2 = 0, 10
iterations = 5
xvalues = range(1, nToGenerate+1,1)
for i in range(iterations):
shade = 0.5*(iterations - 1 - i)/iterations # to get different colours for the lines
uRunningMeans = uniformRunningMeans(nToGenerate, theta1, theta2)
problemStr="" # an empty string
# use try to catch problems with cauchyRunningMeans functions
try:
cRunningMeans = cauchyRunningMeans(nToGenerate)
##cRunningMeans = hiddenCauchyRunningMeans(nToGenerate)
cPts = zip(xvalues, cRunningMeans)
    except NameError as e:
        # cauchyRunningMeans is not defined
        cRunningMeans = [1 for c in range(nToGenerate)] # default value
        problemStr = "No "
    except Exception as e:
        # some other problem with cauchyRunningMeans
        cRunningMeans = [1 for c in range(nToGenerate)]
        problemStr = "Problem with "
uPts = zip(xvalues, uRunningMeans)
cPts = zip(xvalues, cRunningMeans)
if (i < 1):
p1 = line(uPts, rgbcolor = (shade, 0, 1))
p2 = line(cPts, rgbcolor = (1-shade, 0, shade))
cauchyTitleMax = max(cRunningMeans) # for placement of cauchy title
else:
p1 += line(uPts, rgbcolor = (shade, 0, 1))
p2 += line(cPts, rgbcolor = (1-shade, 0, shade))
if max(cRunningMeans) > cauchyTitleMax:
cauchyTitleMax = max(cRunningMeans)
titleText1 = "Uniform(" + str(theta1) + "," + str(theta2) + ") running means" # make title text
t1 = text(titleText1, (nToGenerate/2,theta2), rgbcolor='blue',fontsize=10)
titleText2 = problemStr + "standard Cauchy running means" # make title text
t2 = text(titleText2, (nToGenerate/2,ceil(cauchyTitleMax)+1), rgbcolor='red',fontsize=10)
show(graphics_array((p1+t1,p2+t2)),figsize=[10,5])
```
# Replicable samples
Remember that we know how to set the seed of the PRNG used by `random()` with `set_random_seed`. If we wanted our sampling functions to give repeatable samples, we could also pass the functions the seed to use. Try making a new version of `uniformSample` which has a parameter for the value to use as the random number generator seed. Call your new version `uniformSampleSeeded` to distinguish it from the original one.
Try out your new `uniformSampleSeeded` function: if you generate two samples using the same seed they should be exactly the same. You could try using a large sample and checking on sample statistics such as the mean, min, max, variance etc, rather than comparing small samples by eye.
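One possible sketch, in plain Python using `random.seed` (in SageMath you would call `set_random_seed` instead; the name and signature here are just one choice):

```python
import random

def uniformFInverse(u, theta1, theta2):
    '''Inverse CDF of a Uniform(theta1, theta2) distribution.'''
    return theta1 + (theta2 - theta1) * u

def uniformSampleSeeded(n, theta1, theta2, seed):
    '''Simulate n Uniform(theta1, theta2) samples, seeding the PRNG first
    so that the same seed always gives exactly the same sample.'''
    random.seed(seed)
    us = [random.random() for i in range(n)]
    return [uniformFInverse(u, theta1, theta2) for u in us]

# the same seed reproduces the sample exactly
print(uniformSampleSeeded(5, 0, 10, seed=42) == uniformSampleSeeded(5, 0, 10, seed=42))  # prints True
```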
Recall that you can also give parameters default values in SageMath. Using a default value means that if no value is passed to the function for that parameter, the default value is used. Here is an example with a very simple function:
```
# we already saw default parameters in use - here's a careful walkthrough of how it works
def simpleDefaultExample(x, y=0):
'''A simple function to demonstrate default parameter values.
x is the first parameter, with no default value.
y is the second parameter, defaulting to 0.'''
return x + y
```
Note that parameters with default values need to come after parameters without default values when we define the function.
Now you can try the function - evaluate the following cells to see what you get:
```
simpleDefaultExample (1,3) # specifying two arguments for the function
simpleDefaultExample (1) # specifying one argument for the function
# another way to specify one argument for the function
simpleDefaultExample (x=6)
# uncomment next line and evaluate - but this will give an error because x has no default value
#simpleDefaultExample()
# uncomment next line and evaluate - but this will also give an error because x has no default value
# simpleDefaultExample (y=9)
```
Try making yet another version of the uniform sampler which takes a value to be used as a random number generator seed, but defaults to `None` if no value is supplied for that parameter. `None` is a special Python value with its own type:
```
x = None
type(x)
```
Using `set_random_seed(None)` will mean that the random seed is actually reset to a new ('random') value. You can see this by testing what happens when you do this twice in succession and then check what seed is being used with `initial_seed`:
```
set_random_seed(None)
initial_seed()
set_random_seed(None)
initial_seed()
```
Do another version of the `uniformSampleSeeded` function with a default value for the seed of `None`.
Check your function again by testing with both when you supply a value for the seed and when you don't.
---
## Assignment 2, PROBLEM 4
Maximum Points = 1
First read and understand the following simple simulation (originally written by Jenny Harlow). Then you will modify the simulation to find the solution to this problem.
### A Simple Simulation
We could use the samplers we have made to do a very simple simulation. Suppose the inter-arrival times, in minutes, of Orbiter buses at an Orbiter stop in Christchurch follow an $Exponential(\lambda = 0.1)$ distribution. Also suppose that this is quite a popular bus stop, and the arrival of people is very predictable: one new person will arrive in each whole minute. This means that the longer a bus takes to arrive, the more people join the queue. Also suppose that the number of free seats available on any bus follows a $de \, Moivre(k=40)$ distribution, i.e., there are equally likely to be 1, or 2, or 3, ... or 40 spare seats. If there are more spare seats than people in the queue, everyone can get onto the bus and nobody is left waiting, but if there are not enough spare seats some people will be left waiting for the next bus. As they wait, more people arrive to join the queue...
This is not very realistic - we would want a better model for how many people arrive at the stop at least, and for the number of spare seats there will be on the bus. However, we are just using this as a simple example that you can do using the random variables you already know how to simulate samples from.
Try to code this example yourself, using our suggested steps. We have put our version of the code into a cell below, but you will get more out of this example by trying to do it yourself first.
#### Suggested steps:
- Get a list of 100 $Exponential(\lambda = 0.1)$ samples using the `exponentialSample` function. Assign the list to a variable named something like `busTimes`. These are your 100 simulated bus inter-arrival times.
- Choose a value for the number of people who will be waiting at the bus stop when you start the simulation. Call this something like `waiting`.
- Make a list called something like `leftWaiting`, which to begin with contains just the value assigned to `waiting`.
- Make an empty list called something like `boardBus`.
- Start a for loop which takes each element in `busTimes` in turn, i.e. each bus inter-arrival time, and within the for loop:
- Calculate the number of people arriving at the stop as the floor of the time taken for that bus to arrive (i.e., one person for each whole minute until the bus arrives)
- Add this to the number of people waiting (e.g., if the number of arrivals is assigned to a variable arrivals, then waiting = waiting + arrivals will increment the value assigned to the waiting variable by the value of arrivals).
- Simulate a value for the number of seats available on the bus as one simulation from a $de \, Moivre(k=40)$ RV (it may be easier to use `deMoivreFInverse` rather than `deMoivreSample` because you only need one value - remember that you will have to pass a simulated $u \thicksim Uniform(0,1)$ to `deMoivreFInverse` as well as the value of the parameter $k$).
- The number of people who can get on the bus is the minimum of the number of people waiting in the queue and the number of seats on the bus. Calculate this value and assign it to a variable called something like `getOnBus`.
- Append `getOnBus` to the list `boardBus`.
- Subtract `getOnBus` from the number of people waiting, waiting (e.g., `waiting = waiting - getOnBus` will decrement waiting by the number of people who get on the bus).
- Append the new value of `waiting` to the list `leftWaiting`.
- That is the end of the for loop: you now have two lists, one for the number of people waiting at the stop and one for the number of people who can board each bus as it arrives.
## YouTry
Here is our code to do the bus stop simulation.
Yours may be different - maybe it will be better!
*You are expected to find the needed functions from the latest notebook this assignment came from and be able to answer this question. Unless you can do it in your head.*
```
def busStopSimulation(buses, lam, seats):
'''A Simple Simulation - see description above!'''
BusTimes = exponentialSample(buses,lam)
waiting = 0 # how many people are waiting at the start of the simulation
BoardBus = [] # empty list
LeftWaiting = [waiting] # list with just waiting in it
for time in BusTimes: # for each bus inter-arrival time
arrivals = floor(time) # people who arrive at the stop before the bus gets there
waiting = waiting + arrivals # add them to the queue
busSeats = deMoivreFInverse(random(), seats) # how many seats available on the bus
getOnBus = min(waiting, busSeats) # how many people can get on the bus
BoardBus.append(getOnBus) # add to the list
waiting = waiting - getOnBus # take the people who board the bus out of the queue
LeftWaiting.append(waiting) # add to the list
return [LeftWaiting, BoardBus, BusTimes]
# let's simulate the people left waiting at the bus stop
set_random_seed(None) # replace None by an integer to fix the seed and the output of the simulation
buses = 100
lam = 0.1
seats = 40
leftWaiting, boardBus, busTimes = busStopSimulation(buses, lam, seats)
print(leftWaiting) # look at the leftWaiting list
print(boardBus) # look at the boardBus list
print(busTimes)
```
We could do an interactive visualisation of this by evaluating the next cell. The heights of the lines in the plot show the number of people able to board each bus and the number of people left waiting at the bus stop.
```
@interact
def _(seed=[0,123,456], lam=[0.1,0.01], seats=[40,10,1000]):
set_random_seed(seed)
buses=100
leftWaiting, boardBus, busTimes = busStopSimulation(buses, lam,seats)
p1 = line([(0.5,0),(0.5,leftWaiting[0])])
from pylab import cumsum
csBusTimes=list(cumsum(busTimes))
for i in range(1, len(leftWaiting), 1):
p1+= line([(csBusTimes[i-1],0),(csBusTimes[i-1],boardBus[i-1])], rgbcolor='green')
p1+= line([(csBusTimes[i-1]+.01,0),(csBusTimes[i-1]+.01,leftWaiting[i])], rgbcolor='red')
t1 = text("Boarding the bus", (csBusTimes[len(busTimes)-1]/3,max(max(boardBus),max(leftWaiting))+1), \
rgbcolor='green',fontsize=10)
t2 = text("Waiting", (csBusTimes[len(busTimes)-1]*(2/3),max(max(boardBus),max(leftWaiting))+1), \
rgbcolor='red',fontsize=10)
xaxislabel = text("Time", (csBusTimes[len(busTimes)-1],-10),fontsize=10,color='black')
yaxislabel = text("People", (-50,max(max(boardBus),max(leftWaiting))+1),fontsize=10,color='black')
show(p1+t1+t2+xaxislabel+yaxislabel,figsize=[8,5])
```
Very briefly explain the effect of varying one of the three parameters:
- `seed`
- `lam`
- `seats`
while holding the other two parameters fixed on:
- the number of people waiting at the bus stop and
- the number of people boarding the bus
by using the dropdown menus in the `@interact` above. Think about whether the simulation makes sense and explain why. You can write down your answers by double-clicking this cell and writing between `---` and `---`.
---
---
#### Solution for CauchyRunningMeans
```
def hiddenCauchyRunningMeans(n):
'''Function to give a list of n running means from standardCauchy.
n is the number of running means to generate.'''
sample = cauchySample(n)
from pylab import cumsum
csSample = list(cumsum(sample))
samplesizes = range(1, n+1,1)
return [csSample[i]/samplesizes[i] for i in range(n)]
```
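`cauchySample` is another hidden helper; a plausible sketch via inverse-transform sampling (the inverse CDF of the standard Cauchy is tan(π(u − ½))):

```python
from math import pi, tan
from random import random

def cauchySample(n):
    # inverse-transform sampling from the standard Cauchy distribution
    return [tan(pi * (random() - 0.5)) for _ in range(n)]
```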
| github_jupyter |
Jeremy Thaller - Aug. 2021
*Write a quick summary of the project here. For example: CNN to predict MSD values from XANES spectra.*
```
import numpy as np
import pandas as pd
import datetime
import seaborn as sns
sns.set_style('whitegrid')
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras.layers.experimental import preprocessing
from tensorflow import keras
from tensorflow.keras import layers
# often I re-use code I've already written, or hide boilerplate code in a
# separate script to keep the main notebook cleaner
from scripts.nn_buddy import *
# Make numpy values easier to read.
np.set_printoptions(precision=6, suppress=True)
#fix blas GEMM error
physical_devices = tf.config.list_physical_devices('GPU')
# un-comment the line below when using a GPU
# tf.config.experimental.set_memory_growth(physical_devices[0], True)
print("GPU power activated 🚀🚀" if len(physical_devices) > 0 else "No GPU found")
```
# EDA and Dataloading
With any notebook, the first thing to do after importing everything is to load the data and do some basic data exploratory analysis. Even if you have done some in-depth EDA in another notebook, it's worth printing the dataframe and maybe a plot to double check everything loaded correctly.
Note, I'm calling `load_all_spectra`, a function I wrote in nn_buddy.py. The first time you run it, it loads all the CSV files into one large dataframe. The columns are the energy values (your features), and each row is an absorption spectrum. It then saves the dataframe as an HDF file, which it imports on subsequent runs instead. If you have thousands of CSV files and have to do lots of operations (transposing the data, in particular, can be time-intensive), loading all the spectra can become slow, so saving the dataframe as an HDF file can save you time in the long run.
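A minimal sketch of what such a caching loader can look like (hypothetical; the real `load_all_spectra` lives in `nn_buddy.py` and caches as HDF via `to_hdf`/`read_hdf` — pickle is used below only to keep the sketch dependency-free):

```python
import glob
import os

import pandas as pd

def load_all_spectra(path_, header=3, cache_name='spectra.pkl'):
    """Load every CSV under path_ into one dataframe, caching the result."""
    cache = os.path.join(path_, cache_name)
    if os.path.exists(cache):
        return pd.read_pickle(cache)  # fast path on later runs
    csvs = sorted(glob.glob(os.path.join(path_, '*.csv')))
    frames = [pd.read_csv(f, header=header) for f in csvs]
    df = pd.concat(frames, ignore_index=True)
    df.to_pickle(cache)  # save the combined frame for next time
    return df
```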
```
DATA_PATH = 'DATA'
dataset = load_all_spectra(path_=DATA_PATH, header=3)
dataset.head()
energy_grid = dataset.columns[:-1].to_numpy()
sns.lineplot(x=energy_grid, y=dataset.iloc[0,:-1])
plt.ylabel(r'$\mu$'), plt.xlabel('E (eV)')
plt.title('Example Plot');
```
Or, if you know you'll be making the same plot repeatedly, you can write a plotting function in `nn_buddy.py` to make it easier:
```
plot_spectrum(dataset, index=1, title='Example Spectrum', save_as='Figures/example.pdf');
```
Split your dataset into a training set and a testing set. Don't touch the testing set until the very end (to avoid data leakage). If you have a sparse dataset, you might consider using k-fold cross-validation instead.
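The k-fold alternative can be sketched at the index level like this (numpy only; `sklearn.model_selection.KFold` does the same job):

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    # shuffle the row indices, cut them into k roughly equal folds,
    # and yield (train, validation) index arrays for each fold
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]
```

Each row then lands in exactly one validation fold, so every sample contributes to both training and evaluation.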
```
# train-test split
train_dataset = dataset.sample(frac=0.8, random_state=0)
test_dataset = dataset.drop(train_dataset.index)
# features dataframe
train_features = train_dataset.copy()
test_features = test_dataset.copy()
# labels dataframe
train_labels = train_features.pop('MSD')
test_labels = test_features.pop('MSD')
```
# Preprocessing
## Normalization
From TF docs: *This layer will coerce its inputs into a distribution centered around 0 with standard deviation 1. It accomplishes this by precomputing the mean and variance of the data, and calling $\frac{(input-mean)}{\sqrt{variance}}$ at runtime.*
What happens in adapt: *Compute mean and variance of the data and store them as the layer's weights. adapt should be called before fit, evaluate, or predict.*
```
# I'm using keras' version, but you can define your own normalization
normalizer = preprocessing.Normalization()
normalizer.adapt(np.array(train_features))
```
We need to scale the training labels as well, because they have a very limited range, which will restrict the network's ability to learn: a network that just guesses the mean of the labels will have a very small loss if the range is also very small. This scales the labels to 0-1 (for the training data, and something close to that for the validation data).
We will rescale via $ z_i = \frac{x_i - \text{min}(x)}{\text{max}(x) - \text{min}(x)}$, but fix the min and max values so that we can reliably "unscale" the data afterwards to retrieve the correct NN predictions. Note that the min and max come from the training data only; this is deliberate, to prevent data leakage. The testing data must be scaled with these same values.
To "unscale" or "denormalize", we use $x_i = z_i (\text{max}(x) - \text{min}(x)) + \text{min}(x)$
```
# labels is a pandas Series; the scaling constants always come from the training labels
def normalize_labels(labels):
    lo, hi = np.min(train_labels), np.max(train_labels)  # avoid shadowing built-in min/max
    return labels.apply(lambda x: (x - lo)/(hi - lo))
def unnormalize_labels(labels):
    lo, hi = np.min(train_labels), np.max(train_labels)
    if isinstance(labels, np.ndarray):
        labels = labels.flatten()
        return [x*(hi - lo) + lo for x in labels]
    else:
        return labels.apply(lambda x: x*(hi - lo) + lo)
normalized_train_labels = normalize_labels(train_labels)
normalized_train_labels.describe()
```
# Model Building and Testing
Now the fun part. You can try out different architectures and models here. I'd suggest putting each unique type of model in a subheading so you can minimize the section when working on something else. Note that nothing here should work well in this example notebook because I'm only including a few spectra in the dataset.
## Simple Neural Network
For a nice procedure on how to build a neural network, check out the end of Chapter 3 of [my thesis](https://github.com/jthaller/BNL_Thesis/blob/main/MainDraft.pdf)
For hyperparameter tuning, check out the `Optuna` package, and use `tensorboard` to compare your models. You can find specific examples of how I used them in the `nn-rdf.ipynb` script in [this repository](https://github.com/BNL-ML-Group/xanes-disorder-nn), which is also a good example of how to write a useful `README.md` for a project
```
# norm = normalizer
def build_and_compile_model(norm):
model = tf.keras.Sequential([
norm,
layers.Dense(64, activation='relu'), # kernel_initializer = tf.keras.initializers.LecunNormal()
layers.Dense(32, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
model.compile(loss='mean_absolute_error', # L1 = lasso = 'mean_absolute_error'; L2 = ridge = 'mean_squared_error'
optimizer=tf.keras.optimizers.Adam(0.001),
metrics=[tf.keras.metrics.MeanAbsolutePercentageError()]
)
return model
nn_model = build_and_compile_model(normalizer)
log_dir = "./logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir)
history = nn_model.fit(train_features, normalized_train_labels,
                       epochs=5, callbacks=[tensorboard_callback],
                       verbose=0, validation_split=.2) # histogram_freq=1
nn_model.save('./Models/nn_model')
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plot_loss(history)
```
## XG-Boost
```
import xgboost as xgb
regressor = xgb.XGBRegressor(
n_estimators=100,
reg_lambda=0,
gamma=0,
max_depth=10
)
# note I'm not normalizing the features for xgboost
# https://datascience.stackexchange.com/questions/60950/is-it-necessary-to-normalise-data-for-xgboost/60954
regressor.fit(train_features, normalized_train_labels)
```
# Make Predictions
## NN model
```
nn_model = keras.models.load_model('./Models/nn_model', compile = True)
nn_model.summary()
preds = unnormalize_labels(nn_model.predict(test_features))
plot_true_vs_pred(test_labels, preds, limit=.001)
```
## XG-Boost
```
preds = regressor.predict(test_features)
plot_true_vs_pred(test_labels, preds, limit=.15)
preds
```
| github_jupyter |
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import glob
import pickle as pkl
from scipy import stats
import random
import time
import utility_funcs as uf
```
### The following Hurst function was taken in part from <a href = "https://www.quantstart.com/articles/Basics-of-Statistical-Mean-Reversion-Testing">here</a>
```
def hurst(p):
'''
Description:
Given an iterable (p), this function calculates the Hurst exponent
by sampling from the linear space
Inputs:
p: an iterable
Outputs:
the Hurst exponent
'''
# find variances for different sets of price differences:
p = np.array(p)
tau = np.arange(2,100)
variancetau = [np.var(np.subtract(p[lag:], p[:-lag])) for lag in tau]
# find the slope of the fitting line in the log-log plane:
tau = np.log(tau)
variancetau = np.log(variancetau)
# find and remove mean:
xb = np.mean(tau)
yb = np.mean(variancetau)
tau -= xb
variancetau -= yb
# find the slope:
m = np.dot(tau, variancetau) / np.dot(tau, tau)
return m / 2
def add_cur_name(df,cur_name):
df["cur_name"] = cur_name
print(cur_name,"done!")
def remove_old_days(df,yr='2018'):
cond = df.Date > yr+"-01-01"
df = df[cond].copy()
return df
def func_collection(df,cur_name,yr="2018"):
df = remove_old_days(df,yr)
add_cur_name(df,cur_name)
return df
def gaussian(x, mu, sig):
return np.exp(-np.power(x - mu, 2.) / (2 * np.power(sig, 2.)))
def rnd_walk_simulator(sigma = 1, candle_bundle = 100, num_bundles = 200, initial = 1,\
generator = 'normal', seed = None):
'''
Description:
Generates random-walks of various size, and puts them in a pandas dataframe, in a column
named 'close'
Inputs:
sigma: the scale to be used for each step
candle_bundle: the number of samples to bundle together
num_bundles: the total random-walk length
initial: the initial value to use, first element of the random-walk
generator: the step distribution ('normal', 'uniform', or 'poisson')
seed: optional seed for numpy's random number generator
'''
df = pd.DataFrame()
close_var = initial
close_list = []
np.random.seed(seed)
for x in range(num_bundles):
tick_data = []
if generator == 'normal':
rnd = np.random.normal(loc=0.0, scale=sigma, size = candle_bundle)
close_var += np.sum(rnd)
elif generator == 'uniform':
rnd = np.random.uniform(low=0, high= 1, size = candle_bundle)
close_var += np.sum((rnd - 0.5)*sigma)
elif generator == 'poisson':
rnd = np.random.poisson(lam = 1, size = candle_bundle)
close_var += np.sum((rnd - 1)*sigma)
close_list.append(close_var)
df["close"] = close_list
return df
file_list = glob.glob("./data/*")
file_dict = {f:f.split("/")[-1][:-4] for f in file_list}
print(file_list)
df = uf.read_many_files(file_list,add_to_each_df_func=lambda df,x: func_collection(df,x,yr="2017"),\
func_args=file_dict)
df = df.dropna(axis = 0)
df.head()
cond = df.cur_name == "GBP_USD"
print(hurst(df[cond].close ))
df[cond].close.plot()
frame = plt.gca()
frame.axes.get_xaxis().set_ticks([])
frame.axes.get_yaxis().set_visible(False)
plt.xlabel('Example A',fontsize = 14)
cond = df.cur_name == "NZD_CHF"
print(hurst(df[cond].close))
df[cond].close.plot()
frame = plt.gca()
frame.axes.get_xaxis().set_ticks([])
frame.axes.get_yaxis().set_visible(False)
plt.xlabel('Example B',fontsize = 14)
df_rnd1 = rnd_walk_simulator(seed=10, sigma= 0.00005, num_bundles=300000)
print(hurst(df_rnd1.close))
df_rnd1.close.plot()
frame = plt.gca()
frame.axes.get_xaxis().set_ticks([])
frame.axes.get_yaxis().set_visible(False)
plt.xlabel('Example C',fontsize = 14)
df_rnd2 = rnd_walk_simulator(seed=100, sigma= 0.00005, num_bundles=300000)
print(hurst(df_rnd2.close))
df_rnd2.close.plot()
frame = plt.gca()
frame.axes.get_xaxis().set_ticks([])
frame.axes.get_yaxis().set_visible(False)
plt.xlabel('Example D',fontsize = 14)
```
# Hurst exponent for Forex market:
### For a nice post on Hurst exponent and its indications look at <a href = "http://epchan.blogspot.com/2016/04/mean-reversion-momentum-and-volatility.html">here</a>.
## all data:
```
for pair in df.cur_name.unique():
cond = df.cur_name == pair
hs = hurst(df[cond].close)
print("Hurst for %s is %.5f"%(pair,hs),end = ' , ')
print("total len of the df is:",len(df[cond]))
```
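As a quick sanity check on the interpretation — H ≈ 0.5 for a random walk, H ≈ 0 for a strongly mean-reverting (white-noise) series — the estimator above can be restated compactly in numpy:

```python
import numpy as np

def hurst_np(p):
    # same variance-of-lagged-differences estimator as the hurst() function above
    p = np.asarray(p, dtype=float)
    tau = np.arange(2, 100)
    var_tau = [np.var(p[lag:] - p[:-lag]) for lag in tau]
    slope = np.polyfit(np.log(tau), np.log(var_tau), 1)[0]
    return slope / 2

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=10_000))   # random walk: H should be near 0.5
noise = rng.normal(size=10_000)             # white noise: H should be near 0
print(hurst_np(walk), hurst_np(noise))
```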
# Random Walks:
### rnd_steps = 10000:
### normal:
```
hurst_li10n = []
st = time.time()
for ii in range(10000):
df_norm = rnd_walk_simulator(sigma = 0.002,\
candle_bundle=1,\
num_bundles = 10000,\
seed = ii,\
generator='normal')
hs = hurst(df_norm.close.values)
hurst_li10n.append(hs)
if ii%500 == 0:
print("%d done, time= %.4f"%(ii,time.time()-st),end=", ")
st = time.time()
pkl.dump(hurst_li10n,open("./hurst_li10_n.pkl","wb"))
plt.figure(figsize=(12,8))
print(np.mean(hurst_li10n),np.std(hurst_li10n) )
a = plt.hist(hurst_li10n,bins=30,density=True)
x_range = np.arange(0.42,0.58,0.002)
amp = np.max(a[0])
plt.plot(x_range, amp*gaussian(x_range,np.mean(hurst_li10n),np.std(hurst_li10n)),'r')
plt.text(0.412,amp,"Random-Walk Length = 10000",fontsize = 20)
plt.text(0.412,amp-1.5,"mean = "+'{0:.4f}'.format(np.mean(hurst_li10n)),fontsize = 20)
plt.text(0.412,amp-3,"std = "+'{0:.4f}'.format(np.std(hurst_li10n)),fontsize = 20)
plt.xlabel("Hurst Exponent",fontsize=18)
plt.ylabel("frequency",fontsize=18)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlim(0.41,0.58)
```
### uniform:
```
hurst_li10u = []
st = time.time()
for ii in range(10000):
df_norm = rnd_walk_simulator(sigma = 0.002,\
candle_bundle=1,\
num_bundles = 10000,\
seed = ii,\
generator='uniform')
hs = hurst(df_norm.close.values)
hurst_li10u.append(hs)
if ii%500 == 0:
print("%d done, time= %.4f"%(ii,time.time()-st),end=", ")
st = time.time()
pkl.dump(hurst_li10u,open("./hurst_li10_u.pkl","wb"))
plt.figure(figsize=(12,8))
print(np.mean(hurst_li10u),np.std(hurst_li10u) )
a = plt.hist(hurst_li10u,bins=25,density=True)
x_range = np.arange(0.42,0.58,0.002)
amp = np.max(a[0])
plt.plot(x_range, amp*gaussian(x_range,np.mean(hurst_li10u),np.std(hurst_li10u)),'r')
plt.text(0.412,amp,"Random-Walk Length = 10000",fontsize = 19)
plt.text(0.412,amp-1.5,"mean = "+'{0:.4f}'.format(np.mean(hurst_li10u)),fontsize = 19)
plt.text(0.412,amp-3,"std = "+'{0:.4f}'.format(np.std(hurst_li10u)),fontsize = 19)
plt.xlabel("Hurst Exponent",fontsize=18)
plt.ylabel("frequency",fontsize=18)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlim(0.41,0.58)
```
### Poisson:
```
hurst_li10p = []
st = time.time()
for ii in range(10000):
df_norm = rnd_walk_simulator(sigma = 0.002,\
candle_bundle=1,\
num_bundles = 10000,\
seed = ii,\
generator='poisson')
hs = hurst(df_norm.close.values)
hurst_li10p.append(hs)
if ii%500 == 0:
print("%d done, time= %.4f"%(ii,time.time()-st),end=", ")
st = time.time()
pkl.dump(hurst_li10p,open("./hurst_li10_p.pkl","wb"))
plt.figure(figsize=(12,8))
print(np.mean(hurst_li10p),np.std(hurst_li10p) )
a = plt.hist(hurst_li10p,bins=25,density=True)
x_range = np.arange(0.42,0.58,0.002)
amp = np.max(a[0])
plt.plot(x_range, amp*gaussian(x_range,np.mean(hurst_li10p),np.std(hurst_li10p)),'r')
plt.text(0.412,amp,"Random-Walk Length = 10000",fontsize = 19)
plt.text(0.412,amp-1.5,"mean = "+'{0:.4f}'.format(np.mean(hurst_li10p)),fontsize = 19)
plt.text(0.412,amp-3,"std = "+'{0:.4f}'.format(np.std(hurst_li10p)),fontsize = 19)
plt.xlabel("Hurst Exponent",fontsize=18)
plt.ylabel("frequency",fontsize=18)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlim(0.41,0.58)
```
### rnd_steps = 100000:
### normal:
```
hurst_li100 = []
st = time.time()
for ii in range(10000):
df_norm = rnd_walk_simulator(sigma = 0.002,\
candle_bundle=1,\
num_bundles = 100000,\
seed = ii,\
generator='normal')
hs = hurst(df_norm.close.values)
hurst_li100.append(hs)
if ii%500 == 0:
print("%d done, time= %.4f"%(ii,time.time()-st),end=", ")
st = time.time()
hurst_li100 = pkl.load(open("./hurst_li100.pkl","rb"))
plt.figure(figsize=(12,8))
print(np.mean(hurst_li100),np.std(hurst_li100) )
a = plt.hist(hurst_li100,bins=30,density=True)
x_range = np.arange(0.47,0.53,0.0005)
amp = np.max(a[0])
plt.plot(x_range, amp*gaussian(x_range,np.mean(hurst_li100),np.std(hurst_li100)),'r')
plt.text(0.4755,amp,"Random-Walk Length = 100000",fontsize = 20)
plt.text(0.4755,amp-5,"mean = "+'{0:.4f}'.format(np.mean(hurst_li100)),fontsize = 20)
plt.text(0.4755,amp-10,"std = "+'{0:.4f}'.format(np.std(hurst_li100)),fontsize = 20)
plt.xlabel("Hurst Exponent",fontsize=18)
plt.ylabel("frequency",fontsize=18)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlim(0.475,0.525)
print(stats.skew(hurst_li100))
print(stats.kurtosis(hurst_li100))
pkl.dump(hurst_li100,open("./hurst_li100.pkl","wb"))
```
### uniform:
```
hurst_li100u = []
st = time.time()
for ii in range(10000):
df_norm = rnd_walk_simulator(sigma = 0.002,\
candle_bundle=1,\
num_bundles = 100000,\
seed = ii,\
generator='uniform')
hs = hurst(df_norm.close.values)
hurst_li100u.append(hs)
if ii%500 == 0:
print("%d done, time= %.4f"%(ii,time.time()-st),end=", ")
st = time.time()
plt.figure(figsize=(12,8))
print(np.mean(hurst_li100u),np.std(hurst_li100u) )
a = plt.hist(hurst_li100u,bins=30,density=True)
x_range = np.arange(0.47,0.53,0.0005)
amp = np.max(a[0])
plt.plot(x_range, amp*gaussian(x_range,np.mean(hurst_li100u),np.std(hurst_li100u)),'r')
plt.text(0.4755,amp,"Random-Walk Length = 100000",fontsize = 20)
plt.text(0.4755,amp-5,"mean = "+'{0:.4f}'.format(np.mean(hurst_li100u)),fontsize = 20)
plt.text(0.4755,amp-10,"std = "+'{0:.4f}'.format(np.std(hurst_li100u)),fontsize = 20)
plt.xlabel("Hurst Exponent",fontsize=18)
plt.ylabel("frequency",fontsize=18)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.xlim(0.475,0.525)
pkl.dump(hurst_li100u,open("./hurst_li100u.pkl","wb"))
```
### rnd_steps = 300000:
```
hurst_li300 = []
st = time.time()
for ii in range(1000):
df_norm = rnd_walk_simulator(sigma = 0.002,\
num_bundles = 300000,\
seed = ii)
hs = hurst(df_norm.close.values)
hurst_li300.append(hs)
if ii%100 == 0:
print("%d done, time= %.4f"%(ii,time.time()-st),end=", ")
st = time.time()
plt.figure(figsize=(12,8))
print(np.mean(hurst_li300),np.std(hurst_li300) )
_ = plt.hist(hurst_li300,bins=12)
plt.text(0.486,200,"Random-Walk Length = 300000",fontsize = 15)
plt.text(0.486,180,"mean = "+'{0:.4f}'.format(np.mean(hurst_li300)),fontsize = 15)
plt.text(0.486,160,"std = "+'{0:.4f}'.format(np.std(hurst_li300)),fontsize = 15)
plt.xlabel("Hurst Exponent",fontsize=14)
plt.ylabel("frequency",fontsize=14)
plt.xticks(fontsize=13)
plt.yticks(fontsize=13)
```
| github_jupyter |
```
import os
import numpy as np
import pandas as pd
import spikeextractors as se
import spiketoolkit as st
import spikewidgets as sw
import tqdm.notebook as tqdm
from scipy.signal import periodogram, spectrogram
import matplotlib.pyplot as plt
# %matplotlib inline
# %config InlineBackend.figure_format='retina'
import holoviews as hv
import holoviews.operation.datashader
import holoviews.operation.timeseries
hv.extension("bokeh")
import panel as pn
import panel.widgets as pnw
pn.extension()
from LoisLFPutils.utils import *
# Path to the data folder in the repo
data_path = r""
# !!! start assign jupyter notebook parameter(s) !!!
data_path = '2021-02-12_22-13-24_Or179_Or177_overnight'
# !!! end assign jupyter notebook parameter(s) !!!
data_path = os.path.join('../../../../data/',data_path)
# Path to the raw data in the hard drive
with open(os.path.normpath(os.path.join(data_path, 'LFP_location.txt'))) as f:
OE_data_path = f.read()
```
### Get each bird's recording, and their microphone channels
```
# This needs to be less repetitive
if 'Or177' in data_path:
# Whole recording from the hard drive
recording = se.BinDatRecordingExtractor(OE_data_path,30000,40, dtype='int16')
# Note I am adding relevant ADC channels
# First bird
Or179_recording = se.SubRecordingExtractor(
recording,
channel_ids=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,12,13,14,15, 32])
# Second bird
Or177_recording = se.SubRecordingExtractor(
recording,
channel_ids=[16, 17,18,19,20,21,22,23,24,25,26,27,28,29,30,31, 33])
# Bandpass filter microphone recordings
mic_recording = st.preprocessing.bandpass_filter(
se.SubRecordingExtractor(recording,channel_ids=[32,33]),
freq_min=500,
freq_max=1400
)
else:
# Whole recording from the hard drive
recording = se.BinDatRecordingExtractor(OE_data_path, 30000, 24, dtype='int16')
# Note I am adding relevant ADC channels
# First bird
Or179_recording = se.SubRecordingExtractor(
recording,
channel_ids=[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,11,12,13,14,15,16])
# Bandpass filter microphone recordings
mic_recording = st.preprocessing.bandpass_filter(
se.SubRecordingExtractor(recording,channel_ids=[16]),
freq_min=500,
freq_max=1400
)
# Get wav files
wav_names = [file_name for file_name in os.listdir(data_path) if file_name.endswith('.wav')]
wav_paths = [os.path.join(data_path,wav_name) for wav_name in wav_names]
# Get tranges for wav files in the actual recording
# OE_data_path actually contains the path all the way to the .bin. We just need the parent directory
# with the timestamp.
# Split up the path
OE_data_path_split= OE_data_path.split(os.sep)
# Take only the first three. os.path is weird so we manually add the separator after the
# drive name.
OE_parent_path = os.path.join(OE_data_path_split[0] + os.sep, *OE_data_path_split[1:3])
# Get all time ranges given the custom offset.
tranges=np.array([
get_trange(OE_parent_path, path, offset=datetime.timedelta(seconds=0), duration=3)
for path in wav_paths])
wav_df = pd.DataFrame({'wav_paths':wav_paths, 'wav_names':wav_names, 'trange0':tranges[:, 0], 'trange1':tranges[:, 1]})
wav_df.head()
```
Connect the wav files to the recording. Manually inspect the output to gut-check yourself. If the recording is from before 11:00 am PST on 2021-02-21, you need to add a time delay.
```
wav_f,_,_,_=wav_df.loc[0,:]
wav_f, data_path
datetime.datetime(2021,2,23,8,11,1) - datetime.datetime(2021, 2, 22,22,0,20)
paths, name, tr0, tr1 = wav_df.loc[0,:]
sw.plot_spectrogram(mic_recording, trange= [tr0,tr1+10], freqrange=[300,4000], nfft=2**10, channel=32)
np.linspace(0,130,14)
# Set up widgets
wav_selector = pnw.Select(options=[(i, name) for i, name in enumerate(wav_df.wav_names.values)], name="Select song file")
# offset_selector = pnw.Select(options=np.linspace(-10,10,21).tolist(), name="Select offset")
window_radius_selector = pnw.Select(options=[10,20,30,40,60], name="Select window radius")
spect_chan_selector = pnw.Select(options=list(range(16)), name="Spectrogram channel")
spect_freq_lo = pnw.Select(options=np.linspace(0,130,14).tolist(), name="Low frequency for spectrogram (Hz)")
spect_freq_hi = pnw.Select(options=np.linspace(130,0,14).tolist(), name="Hi frequency for spectrogram (Hz)")
log_nfft_selector = pnw.Select(options=np.linspace(10,16,7).tolist(), value=14, name="magnitude of nfft (starts at 256)")
@pn.depends(
wav_selector=wav_selector.param.value,
# offset=offset_selector.param.value,
window_radius=window_radius_selector.param.value,
spect_chan=spect_chan_selector.param.value,
spect_freq_lo=spect_freq_lo.param.value,
spect_freq_hi=spect_freq_hi.param.value,
log_nfft=log_nfft_selector.param.value
)
def create_figure(wav_selector,
# offset,
window_radius, spect_chan,
spect_freq_lo, spect_freq_hi, log_nfft):
# Each column in each row to a tuple that we unpack
wav_file_path, wav_file_name, tr0, tr1 = wav_df.loc[wav_selector[0],:]
# Set up figure
fig,axes = plt.subplots(4,1, figsize=(16,12))
# Get wav file numpy recording object
wav_recording = get_wav_recording(wav_file_path)
# Apply offset and apply window radius
offset = 0
tr0 = tr0+ offset-window_radius
# Add duration of wav file
tr1 = tr1+ offset+window_radius+wav_recording.get_num_frames()/wav_recording.get_sampling_frequency()
'''Plot sound spectrogram (Hi fi mic)'''
sw.plot_spectrogram(wav_recording, channel=0, freqrange=[300,14000],ax=axes[0])
axes[0].set_title('Hi fi mic spectrogram')
'''Plot sound spectrogram (Lo fi mic)'''
if 'Or179' in wav_file_name:
LFP_recording = Or179_recording
elif 'Or177' in wav_file_name:
LFP_recording = Or177_recording
mic_channel = LFP_recording.get_channel_ids()[-1]
sw.plot_spectrogram(
mic_recording,
mic_channel,
trange=[tr0, tr1],
freqrange=[600,4000],
ax=axes[1]
)
axes[1].set_title('Lo fi mic spectrogram')
'''Plot LFP timeseries'''
chan_ids = np.array([LFP_recording.get_channel_ids()]).flatten()
sw.plot_timeseries(
LFP_recording,
channel_ids=[chan_ids[spect_chan]],
trange=[tr0, tr1],
ax=axes[2]
)
axes[2].set_title('Raw LFP')
# Clean lines
for line in plt.gca().lines:
line.set_linewidth(0.5)
'''Plot LFP spectrogram'''
sw.plot_spectrogram(
LFP_recording,
channel=chan_ids[spect_chan],
freqrange=[spect_freq_lo,spect_freq_hi],
trange=[tr0, tr1],
ax=axes[3],
nfft=int(2**log_nfft)
)
axes[3].set_title('LFP')
for i, ax in enumerate(axes):
ax.set_yticks([ax.get_ylim()[1]])
ax.set_yticklabels([ax.get_ylim()[1]])
ax.set_xlabel('')
# Show 30 Hz
ax.set_yticks([30, ax.get_ylim()[1]])
ax.set_yticklabels([30, ax.get_ylim()[1]])
return fig
dash = pn.Column(
pn.Row(wav_selector, window_radius_selector,spect_chan_selector),
pn.Row(spect_freq_lo,spect_freq_hi,log_nfft_selector),
create_figure
);
dash
```
## Looking at all channels at a time:
```
# Make chanmap
chanmap=np.array([[3, 7, 11, 15],[2, 4, 10, 14],[4, 8, 12, 16],[1, 5, 9, 13]])
# Set up widgets
wav_selector = pnw.Select(options=[(i, name) for i, name in enumerate(wav_df.wav_names.values)], name="Select song file")
window_radius_selector = pnw.Select(options=[10,20,30,40,60], name="Select window radius")
spect_freq_lo = pnw.Select(options=np.linspace(0,130,14).tolist(), name="Low frequency for spectrogram (Hz)")
spect_freq_hi = pnw.Select(options=np.linspace(130,0,14).tolist(), name="Hi frequency for spectrogram (Hz)")
log_nfft_selector = pnw.Select(options=np.linspace(10,16,7).tolist(),value=14, name="magnitude of nfft (starts at 256)")
def housekeeping(wav_selector, window_radius):
# Each column in each row to a tuple that we unpack
wav_file_path, wav_file_name, tr0, tr1 = wav_df.loc[wav_selector[0],:]
# Get wav file numpy recording object
wav_recording = get_wav_recording(wav_file_path)
# Apply offset and apply window radius
offset = 0
tr0 = tr0+ offset-window_radius
# Add duration of wav file
tr1 = tr1+ offset+window_radius+wav_recording.get_num_frames()/wav_recording.get_sampling_frequency()
return wav_recording, tr0, tr1
@pn.depends(
wav_selector=wav_selector.param.value,
window_radius=window_radius_selector.param.value)
def create_sound_figure(wav_selector, window_radius):
# Housekeeping
wav_recording, tr0, tr1 = housekeeping(wav_selector, window_radius)
# Set up figure for sound
fig,axes = plt.subplots(1,2, figsize=(16,2))
'''Plot sound spectrogram (Hi fi mic)'''
sw.plot_spectrogram(wav_recording, channel=0, freqrange=[300,14000], ax=axes[0])
axes[0].set_title('Hi fi mic spectrogram')
'''Plot sound spectrogram (Lo fi mic)'''
wav_file_name = wav_df.loc[wav_selector[0], 'wav_names']  # housekeeping() does not return the name
if 'Or179' in wav_file_name:
LFP_recording = Or179_recording
elif 'Or177' in wav_file_name:
LFP_recording = Or177_recording
mic_channel = LFP_recording.get_channel_ids()[-1]
sw.plot_spectrogram(
mic_recording,
mic_channel,
trange=[tr0, tr1],
freqrange=[600,4000],
ax=axes[1]
)
axes[1].set_title('Lo fi mic spectrogram')
for ax in axes:
ax.axis('off')
return fig
@pn.depends(
wav_selector=wav_selector.param.value,
window_radius=window_radius_selector.param.value,
spect_freq_lo=spect_freq_lo.param.value,
spect_freq_hi=spect_freq_hi.param.value,
log_nfft=log_nfft_selector.param.value
)
def create_LFP_figure(wav_selector, window_radius,
spect_freq_lo, spect_freq_hi, log_nfft):
# Housekeeping
wav_recording, tr0, tr1 = housekeeping(wav_selector, window_radius)
fig,axes=plt.subplots(4,4,figsize=(16,8))
'''Plot LFP'''
for i in range(axes.shape[0]):
for j in range(axes.shape[1]):
ax = axes[i][j]
sw.plot_spectrogram(recording, chanmap[i][j], trange=[tr0, tr1],
freqrange=[spect_freq_lo,spect_freq_hi],
nfft=int(2**log_nfft), ax=ax, cmap='magma')
ax.axis('off')
# Set channel as title
ax.set_title(chanmap[i][j])
# Clean up
for i in range(axes.shape[0]):
for j in range(axes.shape[1]):
ax=axes[i][j]
ax.set_yticks([ax.get_ylim()[1]])
ax.set_yticklabels([ax.get_ylim()[1]])
ax.set_xlabel('')
# Show 30 Hz
ax.set_yticks([30, ax.get_ylim()[1]])
ax.set_yticklabels([30, ax.get_ylim()[1]])
return fig
dash = pn.Column(
pn.Row(wav_selector,window_radius_selector),
pn.Row(spect_freq_lo,spect_freq_hi,log_nfft_selector),
create_sound_figure, create_LFP_figure
);
```
# Sleep data analysis!
```
csvs = [os.path.normpath(os.path.join(data_path,file)) for file in os.listdir(data_path) if file.endswith('.csv')]
csvs
csv = csvs[0]
df = pd.read_csv(csv)
del df['Unnamed: 0']
df.head()
csv_name = csv.split(os.sep)[-1]
rec=None
if 'Or179' in csv_name:
rec = Or179_recording
elif 'Or177' in csv_name:
rec = Or177_recording
# Get second to last element in split
channel = int(csv_name.split('_')[-2])
window_slider = pn.widgets.DiscreteSlider(
name='window size',
options=[*range(1,1000)],
value=1
)
freq_slider_1 = pn.widgets.DiscreteSlider(
name='f (Hz)',
options=[*range(1,200)],
value=30
)
freq_slider_2 = pn.widgets.DiscreteSlider(
name='f (Hz)',
options=[*range(1,200)],
value=10
)
freq_slider_3 = pn.widgets.DiscreteSlider(
name='f (Hz)',
options=[*range(1,200)],
value=4
)
range_slider = pn.widgets.RangeSlider(
start=0,
end=df.t.max(),
step=10,
value=(0, 500),
name="Time range",
value_throttled=(0,500)
)
@pn.depends(window=window_slider.param.value,
freq_1=freq_slider_1.param.value,
freq_2=freq_slider_2.param.value,
freq_3=freq_slider_3.param.value,
rang=range_slider.param.value_throttled)
def plot_ts(window, freq_1, freq_2, freq_3, rang):
# subdf = df.loc[
# ((df['f']==freq_1)|(df['f']==freq_2)|(df['f']==freq_3)) & (df['t'] < 37800),:]
subdf = df.loc[
((df['f']==freq_1)|(df['f']==freq_2)|(df['f']==freq_3))
& ((df['t'] > rang[0]) & (df['t'] < rang[1])),:]
return hv.operation.timeseries.rolling(
hv.Curve(
data = subdf,
kdims=["t", "f"],
vdims="logpower"
).groupby("f").overlay().opts(width=1200, height=300),
rolling_window=window
)
@pn.depends(rang=range_slider.param.value_throttled)
def plot_raw_ts(rang):
sr = rec.get_sampling_frequency()
return hv.operation.datashader.datashade(
hv.Curve(
rec.get_traces(channel_ids=[channel], start_frame=sr*rang[0], end_frame=sr*rang[1]).flatten()
),
aggregator="any"
).opts(width=1200, height=300)
pn.Column(
window_slider,freq_slider_1, freq_slider_2, freq_slider_3,range_slider,
plot_ts,
plot_raw_ts
)
```
# TODOs:
- Does phase vary systematically with frequency???
- Does the log power increase with time over the night??
- Observation: these birds start singing around 6, before the lights turn on.
- Possibly add spikes for when song occurs
- Possibly add timerange slider
| github_jupyter |
<a href="https://colab.research.google.com/github/ginttone/test_visuallization/blob/master/2_autompg_linearregression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Machine Learning
- Data (information) stage<br>
dropna: info(), describe()<br>
fillna, replace: describe(), value_counts()<br>
- Visualization: choosing statistics<br>
Decide whether to use a standard scaler
or one-hot encoding<br>
- Training stage<br>
standard scaler, get_dummies (one-hot encoding)<br>
model learning<br>
check score<br>
- Serving stage<br>
pickle: dump, load<br>
receive data<br>
apply prediction<br>
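The serving stage's `pickle: dump, load` step amounts to a round trip like this (a minimal sketch with a stand-in object; a trained scikit-learn model is pickled the same way):

```python
import pickle

# stand-in for a trained model; any picklable Python object works the same way
model = {'coef': [0.5, -1.2], 'intercept': 3.4}

# serving stage, step 1: dump the trained model to disk
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# serving stage, step 2: load it back and apply predictions to received data
with open('model.pkl', 'rb') as f:
    restored = pickle.load(f)
```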
## Data loading
```
import pandas as pd
df= pd.read_csv('./auto-mpg.csv', header=None)
df.columns=['mpg','cylinders','displacement','horsepower','weight',
'acceleration','model year','origin','name']
df.info()
df[['horsepower','name']].describe(include='all')
```
## replace
```
df['horsepower'].value_counts()
df['horsepower'].unique()
df_horsepower=df['horsepower'].replace(to_replace='?',value=None,inplace=False)
df_horsepower.unique()
df_horsepower=df_horsepower.astype('float')
df_horsepower.mean()
df['horsepower']=df_horsepower.fillna(104)
df.info()
df['name'].unique()
df.head()
```
## Separating categorical and continuous columns
```
df.head(8)
```
### Classify column types (check columns)
* Continuous: displacement, horsepower, weight, acceleration
* Undecided: mpg, cylinders, origin
* Categorical: model year, name
(Categorical columns usually have no fractional values.)
standard scaler : continuous<br>
one hot encoding : categorical
As a rule of thumb, each value needs around 300 samples for training,
so `name` is excluded: its per-value counts are too small to learn from.
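As a sketch of what one-hot encoding does before it appears later in this notebook, here is a numpy-only illustration (`pd.get_dummies` produces the same matrix and also names the columns):

```python
import numpy as np

# numpy-only illustration of one-hot encoding a categorical column;
# pd.get_dummies does the same thing and additionally names the columns.
origin = np.array([1, 3, 2, 1])
categories = np.unique(origin)                       # [1, 2, 3]
onehot = (origin[:, None] == categories).astype(int)
print(onehot)  # each row has a single 1 marking its category
```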
#### Deciding whether the undecided columns are continuous or categorical
```
df['name'].value_counts()
df['mpg'].describe(include='all')
```
Example: the count is 398 but mpg has 129 distinct values. Is it categorical or continuous?
* The values in the left column have fractional parts, so mpg can be treated as continuous; repeated counts on the right would instead point to a categorical column.
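The same judgment can be sketched numerically. The thresholds here are illustrative, not from the notebook: many distinct values plus fractional parts suggest a continuous column.

```python
# Rough heuristic behind the decision above (illustrative values only):
# many distinct values and fractional parts suggest a continuous column.
values = [18.0, 15.0, 18.0, 16.0, 17.0, 15.0, 14.0, 24.0, 22.5]
n_unique = len(set(values))
has_decimals = any(v != int(v) for v in values)
print(n_unique, has_decimals)  # many uniques + decimals -> treat like mpg: continuous
```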
```
df['mpg'].value_counts()
df['cylinders'].describe()
df['cylinders'].value_counts()
df['origin'].describe()
df['origin'].value_counts()
```
* Continuous: displacement, horsepower, weight, acceleration, mpg
* Categorical: model year, name, cylinders, origin
label: mpg<br>
features: all others except name
## Visualization: statistics
### Normalization step:
* standard scaler<br>
z = (x - u) / s <br>
z (standard score)<br>
x (sample value)<br>
u (mean of the training samples)<br>
s (standard deviation of the training samples)<br>
fit(X[, y, sample_weight])
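The formula can be verified with a quick NumPy-only sketch; `StandardScaler` computes u and s the same way (population standard deviation), so standardized data ends up with mean 0 and standard deviation 1:

```python
import numpy as np

# verify z = (x - u) / s by hand on the salary-like values used earlier
x = np.array([900.0, 1200.0, 1500.0, 800.0, 12587.0, 10000.0])
u = x.mean()           # mean of the training samples
s = x.std(ddof=0)      # population standard deviation, as StandardScaler uses
z = (x - u) / s
print(abs(round(z.mean(), 6)), round(z.std(), 6))  # 0.0 1.0
```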
```
Y = df['mpg']
X_continus = df[['displacement','horsepower', 'weight','acceleration']]
X_category = df[['model year','cylinders','origin']]
from sklearn import preprocessing
scaler = preprocessing.StandardScaler()
type(scaler)
# fit on the continuous columns only (the scaler learns just their pattern)
scaler.fit(X_continus)
# transform the values with the learned pattern and store the result in X
X = scaler.transform(X_continus)
from sklearn.linear_model import LinearRegression
# create a LinearRegression instance, lr
lr = LinearRegression()
type(lr)
# fit the model; training ends here
# fitting only learns the pattern (a first-degree linear equation)
lr.fit(X,Y)
lr.score(X,Y)
```
## Passing data to predict
X_continus = df[['displacement','horsepower', 'weight','acceleration']]
Inputs must be passed in the same column order used during training.
```
df.head(1)
# the result is the value to report back to the customer
#lr.predict([[307.0,130.0,3504.0,12.0]])
# the scaler has already learned its pattern, so use transform here
x_customer = scaler.transform([[307.0,130.0,3504.0,12.0]])
x_customer.shape
# pass it into lr.predict
lr.predict(x_customer)
```
## Service stage
pickle — Python object serialization
pickle saves to a file in binary mode ('wb'). <br>
Binary is not a format people read directly.<br>
It serializes Python objects (here, the trained model object).<br>
```
import pickle
pickle.dump(lr, open('./autompg_lr.pkl', 'wb'))
```
After pickling, download the saved autompg_lr.pkl.
Create a saves folder and drag the downloaded autompg_lr.pkl into it.
pickle reads the file back in binary mode ('rb').
```
!ls -l ./autompg_lr.pkl
pickle.load(open('./autompg_lr.pkl', 'rb'))
```
* How to hand the scaler over to the service developer with pickle
```
pickle.dump(scaler, open('./autompt_standardscaler.pkl', 'wb'))
```
The service developer downloads the saved autompt_standardscaler.pkl and loads it in their own working environment.
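The hand-off itself is just a pickle round trip. A minimal self-contained sketch, using a stand-in dict instead of the real scaler/model and an in-memory buffer instead of a .pkl file:

```python
import io
import pickle

# stand-in for the trained objects; in the notebook you dump `scaler` and `lr`
model = {"coef": [1.5, -0.2], "intercept": 10.0}

buf = io.BytesIO()           # in-memory stand-in for open('...pkl', 'wb')
pickle.dump(model, buf)      # trainer side: dump
buf.seek(0)
restored = pickle.load(buf)  # service side: load, as with open('...pkl', 'rb')
print(restored == model)     # True: the object round-trips intact
```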
## one hot encoding
```
X_category
X_category['origin'].value_counts()
#카테고리
#1 , 2 , 3
#? | ? | ?
#1 | 0 | 0
#0 | 1 | 0
#0 | 0 | 1
df_origin=pd.get_dummies(X_category['origin'], prefix='origin')
df_cylinders=pd.get_dummies(X_category['cylinders'],prefix='cylinders')
df_origin.shape, df_cylinders.shape
# X_continus + df_cylinders + df_origin
X_continus.head(3)
```
Check the column names carefully; extra preparation is needed before this can be served.
```
#concatenate the one-hot encoded columns: pd.concat([X_continus, df_cylinders, df_origin], axis='columns')
X= pd.concat([X_continus, df_cylinders, df_origin], axis='columns')
from sklearn.model_selection import train_test_split
X_train,X_test,Y_train,Y_test = train_test_split(X,Y)
X_train.shape,X_test.shape,Y_train.shape,Y_test.shape
#XGBoost (decision-tree based) is one of the models that tends to produce good scores
import xgboost
xgb= xgboost.XGBRegressor()
xgb
xgb.fit(X_train,Y_train)
#check the score on the data used for training
xgb.score(X_train,Y_train)
#check the score on test data the model has not seen
xgb.score(X_test,Y_test)
```
# Convolutional Neural Networks: Application
Welcome to Course 4's second assignment! In this notebook, you will:
- Implement helper functions that you will use when implementing a TensorFlow model
- Implement a fully functioning ConvNet using TensorFlow
**After this assignment you will be able to:**
- Build and train a ConvNet in TensorFlow for a classification problem
We assume here that you are already familiar with TensorFlow. If you are not, please refer to the *TensorFlow Tutorial* of the third week of Course 2 ("*Improving deep neural networks*").
### <font color='darkblue'> Updates to Assignment <font>
#### If you were working on a previous version
* The current notebook filename is version "1a".
* You can find your work in the file directory as version "1".
* To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory.
#### List of Updates
* `initialize_parameters`: added details about tf.get_variable, `eval`. Clarified test case.
* Added explanations for the kernel (filter) stride values, max pooling, and flatten functions.
* Added details about softmax cross entropy with logits.
* Added instructions for creating the Adam Optimizer.
* Added explanation of how to evaluate tensors (optimizer and cost).
* `forward_propagation`: clarified instructions, use "F" to store "flatten" layer.
* Updated print statements and 'expected output' for easier visual comparisons.
* Many thanks to Kevin P. Brown (mentor for the deep learning specialization) for his suggestions on the assignments in this course!
## 1.0 - TensorFlow model
In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.
As usual, we will start by loading in the packages.
```
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
```
Run the next cell to load the "SIGNS" dataset you are going to use.
```
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
```
As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width:800px;height:300px;">
The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples.
```
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
```
In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
```
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
```
### 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
**Exercise**: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size; it will give you the flexibility to choose it later. Hence X should be of dimension **[None, n_H0, n_W0, n_C0]** and Y should be of dimension **[None, n_y]**. [Hint: search for the tf.placeholder documentation](https://www.tensorflow.org/api_docs/python/tf/placeholder).
```
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
"""
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
"""
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, shape=(None, n_H0, n_W0, n_C0), name='X')
Y = tf.placeholder(tf.float32, shape=(None, n_y), name='Y')
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
```
**Expected Output**
<table>
<tr>
<td>
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
</td>
</tr>
<tr>
<td>
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
</td>
</tr>
</table>
### 1.2 - Initialize parameters
You will initialize weights/filters $W1$ and $W2$ using `tf.contrib.layers.xavier_initializer(seed = 0)`. You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.
**Exercise:** Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:
```python
W = tf.get_variable("W", [1,2,3,4], initializer = ...)
```
#### tf.get_variable()
[Search for the tf.get_variable documentation](https://www.tensorflow.org/api_docs/python/tf/get_variable). Notice that the documentation says:
```
Gets an existing variable with these parameters or create a new one.
```
So we can use this function to create a tensorflow variable with the specified name, but if the variables already exist, it will get the existing variable with that same name.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
"""
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Note that we will hard code the shape values in the function to make the grading simpler.
Normally, functions should take values as inputs rather than hard coding.
Returns:
parameters -- a dictionary of tensors containing W1, W2
"""
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1', shape=(4,4,3,8), initializer=tf.contrib.layers.xavier_initializer(seed=0))
W2 = tf.get_variable('W2', shape=(2,2,8,16), initializer=tf.contrib.layers.xavier_initializer(seed=0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1[1,1,1] = \n" + str(parameters["W1"].eval()[1,1,1]))
print("W1.shape: " + str(parameters["W1"].shape))
print("\n")
print("W2[1,1,1] = \n" + str(parameters["W2"].eval()[1,1,1]))
print("W2.shape: " + str(parameters["W2"].shape))
```
**Expected Output:**
```
W1[1,1,1] =
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394
-0.06847463 0.05245192]
W1.shape: (4, 4, 3, 8)
W2[1,1,1] =
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
W2.shape: (2, 2, 8, 16)
```
### 1.3 - Forward propagation
In TensorFlow, there are built-in functions that implement the convolution steps for you.
- **tf.nn.conv2d(X,W, strides = [1,s,s,1], padding = 'SAME'):** given an input $X$ and a group of filters $W$, this function convolves $W$'s filters on X. The third parameter ([1,s,s,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). Normally, you'll choose a stride of 1 for the number of examples (the first value) and for the channels (the fourth value), which is why we wrote the value as `[1,s,s,1]`. You can read the full documentation on [conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d).
- **tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'):** given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. For max pooling, we usually operate on a single example at a time and a single channel at a time. So the first and fourth value in `[1,f,f,1]` are both 1. You can read the full documentation on [max_pool](https://www.tensorflow.org/api_docs/python/tf/nn/max_pool).
- **tf.nn.relu(Z):** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [relu](https://www.tensorflow.org/api_docs/python/tf/nn/relu).
- **tf.contrib.layers.flatten(P)**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector.
* If a tensor P has the shape (m,h,w,c), where m is the number of examples (the batch size), it returns a flattened tensor with shape (batch_size, k), where $k=h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension.
* For example, given a tensor with dimensions [100,2,3,4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [flatten](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/flatten).
- **tf.contrib.layers.fully_connected(F, num_outputs):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [full_connected](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/fully_connected).
In the last function above (`tf.contrib.layers.fully_connected`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.
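The flatten shape rule above can be checked without TensorFlow, since it is just a reshape that keeps the batch dimension:

```python
import numpy as np

# (m, h, w, c) -> (m, h*w*c): e.g. (100, 2, 3, 4) -> (100, 24)
P = np.zeros((100, 2, 3, 4))
F = P.reshape(P.shape[0], -1)  # numpy analogue of tf.contrib.layers.flatten
print(F.shape)  # (100, 24)
```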
#### Window, kernel, filter
The words "window", "kernel", and "filter" are used to refer to the same thing. This is why the parameter `ksize` refers to "kernel size", and we use `(f,f)` to refer to the filter size. Both "kernel" and "filter" refer to the "window."
**Exercise**
Implement the `forward_propagation` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED`. You should use the functions above.
In detail, we will use the following parameters for all the steps:
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME"
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME"
- Flatten the previous output.
- FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
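As a sanity check on these choices, the output shape at each step can be traced with simple arithmetic; for 'SAME' padding, the output spatial size is ceil(input / stride):

```python
import math

# trace a 64x64x3 input through the model; SAME padding: out = ceil(in / stride)
def same_out(n, stride):
    return math.ceil(n / stride)

h = w = 64
c = 8                                  # CONV2D with W1 (4,4,3,8), stride 1 -> 64x64x8
h, w = same_out(h, 8), same_out(w, 8)  # MAXPOOL 8x8, stride 8 -> 8x8x8
c = 16                                 # CONV2D with W2 (2,2,8,16), stride 1 -> 8x8x16
h, w = same_out(h, 4), same_out(w, 4)  # MAXPOOL 4x4, stride 4 -> 2x2x16
print(h, w, c, h * w * c)              # 2 2 16 64 -> FC maps 64 features to 6 classes
```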
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Note that for simplicity and grading purposes, we'll hard-code some values
such as the stride and kernel (filter) sizes.
Normally, functions should take these values as function parameters.
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
"""
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A1 = tf.nn.relu(Z1)
    # MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize=[1, 8, 8, 1], strides=[1, 8, 8, 1], padding='SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1, 1, 1, 1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize=[1, 4, 4, 1], strides=[1, 4, 4, 1], padding='SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
    # FULLY-CONNECTED without non-linear activation function (do not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, 6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = \n" + str(a))
```
**Expected Output**:
```
Z3 =
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064]
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
```
### 1.4 - Compute cost
Implement the compute cost function below. Remember that the cost function helps the neural network see how much the model's predictions differ from the correct labels. By adjusting the weights of the network to reduce the cost, the neural network can improve its predictions.
You might find these two functions helpful:
- **tf.nn.softmax_cross_entropy_with_logits(logits = Z, labels = Y):** computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation [softmax_cross_entropy_with_logits](https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits).
- **tf.reduce_mean:** computes the mean of elements across dimensions of a tensor. Use this to calculate the sum of the losses over all the examples to get the overall cost. You can check the full documentation [reduce_mean](https://www.tensorflow.org/api_docs/python/tf/reduce_mean).
#### Details on softmax_cross_entropy_with_logits (optional reading)
* Softmax is used to format outputs so that they can be used for classification. It assigns a value between 0 and 1 for each category, where the sum of all prediction values (across all possible categories) equals 1.
* Cross entropy compares the model's predicted classifications with the actual labels and produces a numerical value representing the "loss" of the model's predictions.
* "Logits" are the result of multiplying the weights and adding the biases. Logits are passed through an activation function (such as a relu), and the result is called the "activation."
* The function `softmax_cross_entropy_with_logits` takes logits as input (not activations); it applies softmax to produce predictions and then compares the predictions with the true labels using cross entropy. These are done in a single function to optimize the calculations.
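For intuition, here is a NumPy sketch of the same computation; TensorFlow's fused version is implemented differently for numerical robustness, but agrees numerically:

```python
import numpy as np

def softmax_xent(logits, labels):
    # shift logits for numerical stability, then apply softmax row-wise
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # cross entropy against one-hot labels, one loss per example
    return -(labels * np.log(p)).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])  # one-hot true class
loss = softmax_xent(logits, labels)
print(loss.round(3))  # about [0.417]
```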
**Exercise**: Compute the cost below using the function above.
```
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
"""
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (number of examples, 6)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
"""
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=Z3, labels=Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
```
**Expected Output**:
```
cost = 2.91034
```
## 1.5 Model
Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.
**Exercise**: Complete the function below.
The model below should:
- create placeholders
- initialize parameters
- forward propagate
- compute the cost
- create an optimizer
Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. [Hint for initializing the variables](https://www.tensorflow.org/api_docs/python/tf/global_variables_initializer)
#### Adam Optimizer
You can use `tf.train.AdamOptimizer(learning_rate = ...)` to create the optimizer. The optimizer has a `minimize(loss=...)` function that you'll call to set the cost function that the optimizer will minimize.
For details, check out the documentation for [Adam Optimizer](https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer)
#### Random mini batches
If you took course 2 of the deep learning specialization, you implemented `random_mini_batches()` in the "Optimization" programming assignment. This function returns a list of mini-batches. It is already implemented in the `cnn_utils.py` file and imported here, so you can call it like this:
```Python
minibatches = random_mini_batches(X, Y, mini_batch_size = 64, seed = 0)
```
(You will want to choose the correct variable names when you use it in your code).
#### Evaluating the optimizer and cost
Within a loop, for each mini-batch, you'll use the `tf.Session` object (named `sess`) to feed a mini-batch of inputs and labels into the neural network and evaluate the tensors for the optimizer as well as the cost. Remember that we built a graph data structure and need to feed it inputs and labels and use `sess.run()` in order to get values for the optimizer and cost.
You'll use this kind of syntax:
```
output_for_var1, output_for_var2 = sess.run(
fetches=[var1, var2],
feed_dict={var_inputs: the_batch_of_inputs,
var_labels: the_batch_of_labels}
)
```
* Notice that `sess.run` takes its first argument `fetches` as a list of objects that you want it to evaluate (in this case, we want to evaluate the optimizer and the cost).
* It also takes a dictionary for the `feed_dict` parameter.
* The keys are the `tf.placeholder` variables that we created in the `create_placeholders` function above.
* The values are the variables holding the actual numpy arrays for each mini-batch.
* `sess.run` outputs a tuple of the evaluated tensors, in the same order as the list given to `fetches`.
For more information on how to use `sess.run`, see the [tf.Session#run](https://www.tensorflow.org/api_docs/python/tf/Session#run) documentation.
```
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
"""
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
    Y_train -- training labels, of shape (None, n_y = 6)
    X_test -- test set, of shape (None, 64, 64, 3)
    Y_test -- test labels, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
"""
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
"""
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost.
                # The feed_dict should contain a minibatch for (X,Y).
"""
### START CODE HERE ### (1 line)
_ , temp_cost =sess.run([optimizer, cost], {X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
```
Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
```
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
```
**Expected output**: although it may not match perfectly, your output should be close to ours, and your cost value should decrease.
<table>
<tr>
<td>
**Cost after epoch 0 =**
</td>
<td>
1.917929
</td>
</tr>
<tr>
<td>
**Cost after epoch 5 =**
</td>
<td>
1.506757
</td>
</tr>
<tr>
<td>
**Train Accuracy =**
</td>
<td>
0.940741
</td>
</tr>
<tr>
<td>
**Test Accuracy =**
</td>
<td>
0.783333
</td>
</tr>
</table>
Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance).
Once again, here's a thumbs up for your work!
```
fname = "images/thumbs_up.jpg"
# scipy's imread/imresize are deprecated in recent versions; PIL (imported above) does the same job
image = np.array(Image.open(fname))
my_image = np.array(Image.fromarray(image).resize((64, 64)))
plt.imshow(my_image)
```
# k-Nearest Neighbor (kNN) implementation
*Credits: this notebook is deeply based on Stanford CS231n course assignment 1. Source link: http://cs231n.github.io/assignments2019/assignment1/*
The kNN classifier consists of two stages:
- During training, the classifier takes the training data and simply remembers it
- During testing, kNN classifies every test image by comparing it to all training images and transferring the labels of the k most similar training examples
- The value of k is cross-validated
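The second stage — voting among the k nearest labels — can be sketched in a few lines. This is illustrative only; the graded implementation lives in `k_nearest_neighbor.py`:

```python
import numpy as np

def knn_vote(dist_row, y_train, k):
    nearest = np.argsort(dist_row)[:k]  # indices of the k closest training points
    # majority vote over their labels (ties resolved toward the smallest label)
    return np.bincount(y_train[nearest]).argmax()

dists = np.array([0.5, 2.0, 0.7, 3.0, 0.6])
y_tr = np.array([1, 0, 1, 0, 2])
print(knn_vote(dists, y_tr, k=3))  # nearest labels are 1, 2, 1 -> predicts 1
```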
In this exercise you will implement these steps and understand the basic Image Classification pipeline and gain proficiency in writing efficient, vectorized code.
We will work with the handwritten digits dataset. Images will be flattened (8x8 sized image -> 64 sized vector) and treated as vectors.
```
'''
If you are using Google Colab, uncomment the next line to download `k_nearest_neighbor.py`.
You can open and change it in Colab using the "Files" sidebar on the left.
'''
# !wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/basic_s20/homeworks_basic/assignment0_01_kNN/k_nearest_neighbor.py
from sklearn import datasets
dataset = datasets.load_digits()
print(dataset.DESCR)
# First 100 images will be used for testing. This dataset is not sorted by the labels, so it's ok
# to do the split this way.
# Please be careful when you split your data into train and test in general.
test_border = 100
X_train, y_train = dataset.data[test_border:], dataset.target[test_border:]
X_test, y_test = dataset.data[:test_border], dataset.target[:test_border]
print('Training data shape: ', X_train.shape)
print('Training labels shape: ', y_train.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
num_test = X_test.shape[0]
# Run some setup code for this notebook.
import random
import numpy as np
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (14.0, 12.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = list(np.arange(10))
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].reshape((8, 8)).astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
```
Autoreload is great, but sometimes it does not work as intended. The code below aims to fix that. __Do not forget to save your changes in the `.py` file before reloading the `KNearestNeighbor` class.__
```
# This dirty hack might help if the autoreload has failed for some reason
try:
del KNearestNeighbor
except:
pass
from k_nearest_neighbor import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.fit(X_train, y_train)
X_train.shape
```
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps:
1. First we must compute the distances between all test examples and all train examples.
2. Given these distances, for each test example we find the k nearest training examples and have them vote for the label.
Let's begin by computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in an **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example.
**Note: For the three distance computations that we require you to implement in this notebook, you may not use the np.linalg.norm() function that numpy provides.**
First, open `k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
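For reference, a minimal standalone sketch of what `compute_distances_two_loops` might compute (written here as a free function taking both arrays; in the assignment, the training data lives on the classifier instance):

```
import numpy as np

def compute_distances_two_loops(X_train, X_test):
    """Naive Euclidean distance matrix, one element at a time (O(Nte * Ntr))."""
    num_test, num_train = X_test.shape[0], X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            # L2 distance between the i-th test and j-th train example
            dists[i, j] = np.sqrt(np.sum((X_test[i] - X_train[j]) ** 2))
    return dists
```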
```
# Open k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print(dists.shape)
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
```
**Inline Question 1**
Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)
- What in the data is the cause behind the distinctly bright rows?
- What causes the columns?
$\color{blue}{\textit Your Answer:}$ *To my mind, if some point in the test data is noisy (hard to recognize), the corresponding row will be brighter. For columns the situation is the same: a noisy point in the training data produces a brighter column.*
```
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
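A minimal sketch of how `predict_labels` could work: take the `k` smallest distances per test row and let the corresponding training labels vote (here `np.bincount(...).argmax()` breaks ties toward the smaller label; the assignment version keeps `y_train` on the instance):

```
import numpy as np

def predict_labels(dists, y_train, k=1):
    """Majority vote of the k nearest training labels for each test example."""
    num_test = dists.shape[0]
    y_pred = np.zeros(num_test, dtype=y_train.dtype)
    for i in range(num_test):
        # labels of the k closest training examples for test point i
        closest_y = y_train[np.argsort(dists[i])[:k]]
        y_pred[i] = np.bincount(closest_y).argmax()
    return y_pred
```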
You should expect to see approximately `95%` accuracy. Now let's try out a larger `k`, say `k = 5`:
```
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print('Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy))
```
Accuracy should slightly decrease with `k = 5` compared to `k = 1`.
**Inline Question 2**
We can also use other distance metrics such as L1 distance.
For pixel values $p_{ij}^{(k)}$ at location $(i,j)$ of some image $I_k$,
the mean $\mu$ across all pixels over all images is $$\mu=\frac{1}{nhw}\sum_{k=1}^n\sum_{i=1}^{h}\sum_{j=1}^{w}p_{ij}^{(k)}$$
And the pixel-wise mean $\mu_{ij}$ across all images is
$$\mu_{ij}=\frac{1}{n}\sum_{k=1}^np_{ij}^{(k)}.$$
The general standard deviation $\sigma$ and the pixel-wise standard deviation $\sigma_{ij}$ are defined similarly.
Which of the following preprocessing steps will not change the performance of a Nearest Neighbor classifier that uses L1 distance? Select all that apply.
1. Subtracting the mean $\mu$ ($\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\mu$.)
2. Subtracting the per pixel mean $\mu_{ij}$ ($\tilde{p}_{ij}^{(k)}=p_{ij}^{(k)}-\mu_{ij}$.)
3. Subtracting the mean $\mu$ and dividing by the standard deviation $\sigma$.
4. Subtracting the pixel-wise mean $\mu_{ij}$ and dividing by the pixel-wise standard deviation $\sigma_{ij}$.
5. Rotating the coordinate axes of the data.
$\color{blue}{\textit Your Answer:}$ 1, 2, 3, 4
$\color{blue}{\textit Your Explanation:}$
1. We just subtract the same value from every pixel of every image, so all points shift together along the axes and all L1 distances stay the same.
2. The same argument applies: each coordinate shifts by the same amount across all images.
3. We additionally scale all distances by the same constant, which does not change which neighbour is nearest.
4. The same argument, applied per pixel.
5. Rotating the coordinate axes changes the performance of kNN with L1 distance: the L1 distance is not rotation invariant, so distances between points can change.
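A quick numerical sanity check (a sketch) of points 1–3: shifting all points by the same amount, or scaling everything by the same positive constant, leaves the L1 nearest neighbour unchanged. Note the same preprocessing must be applied to the query point as well.

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 10))  # 6 "images", 10 "pixels"
q = rng.normal(size=10)       # query point

def l1_nearest(X, q):
    """Index of the row of X closest to q under L1 distance."""
    return np.abs(X - q).sum(axis=1).argmin()

base = l1_nearest(X, q)
# 1) subtracting a global mean shifts every point equally
assert l1_nearest(X - X.mean(), q - X.mean()) == base
# 2) subtracting a per-pixel mean shifts each coordinate equally across points
mu = X.mean(axis=0)
assert l1_nearest(X - mu, q - mu) == base
# 3) dividing by a global sigma scales all L1 distances uniformly
sigma = X.std()
assert l1_nearest((X - X.mean()) / sigma, (q - X.mean()) / sigma) == base
```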
```
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print('One loop difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print('No loop difference was: %f' % (difference, ))
if difference < 0.001:
print('Good! The distance matrices are the same')
else:
print('Uh-oh! The distance matrices are different')
```
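For reference, the standard way to fully vectorize the computation uses the expansion $\|x-y\|^2 = \|x\|^2 - 2\,x\cdot y + \|y\|^2$. A minimal sketch (as a free function, with the same shape conventions as above):

```
import numpy as np

def compute_distances_no_loops(X_train, X_test):
    """Vectorized Euclidean distances via ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2."""
    test_sq = np.sum(X_test ** 2, axis=1)[:, np.newaxis]    # shape (Nte, 1)
    train_sq = np.sum(X_train ** 2, axis=1)[np.newaxis, :]  # shape (1, Ntr)
    cross = X_test @ X_train.T                              # shape (Nte, Ntr)
    # clip tiny negatives caused by floating-point cancellation before sqrt
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0.0))
```

It can be verified against the naive version with the Frobenius norm of the difference, exactly as in the cell above.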
### Comparing handcrafted and `sklearn` implementations
In this section we simply compare the handcrafted and `sklearn` kNN implementations. The predictions should be identical. No need to write any code in this section.
```
from sklearn import neighbors
implemented_knn = KNearestNeighbor()
implemented_knn.fit(X_train, y_train)
n_neighbors = 1
external_knn = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors)
external_knn.fit(X_train, y_train)
print('sklearn kNN (k=1) implementation achieves: {} accuracy on the test set'.format(
external_knn.score(X_test, y_test)
))
y_predicted = implemented_knn.predict(X_test, k=n_neighbors).astype(int)
accuracy_score = sum((y_predicted==y_test).astype(float)) / num_test
print('Handcrafted kNN (k=1) implementation achieves: {} accuracy on the test set'.format(accuracy_score))
assert np.array_equal(
external_knn.predict(X_test),
y_predicted
), 'Labels predicted by handcrafted and sklearn kNN implementations are different!'
print('\nsklearn and handcrafted kNN implementations provide same predictions')
print('_'*76)
n_neighbors = 5
external_knn = neighbors.KNeighborsClassifier(n_neighbors=n_neighbors)
external_knn.fit(X_train, y_train)
print('sklearn kNN (k=5) implementation achieves: {} accuracy on the test set'.format(
external_knn.score(X_test, y_test)
))
y_predicted = implemented_knn.predict(X_test, k=n_neighbors).astype(int)
accuracy_score = sum((y_predicted==y_test).astype(float)) / num_test
print('Handcrafted kNN (k=5) implementation achieves: {} accuracy on the test set'.format(accuracy_score))
assert np.array_equal(
external_knn.predict(X_test),
y_predicted
), 'Labels predicted by handcrafted and sklearn kNN implementations are different!'
print('\nsklearn and handcrafted kNN implementations provide same predictions')
print('_'*76)
```
### Measuring the time
Finally let's compare how fast the implementations are.
To make the difference more noticeable, let's repeat the train and test objects (this serves no purpose other than computing distances between more pairs).
```
X_train_big = np.vstack([X_train]*5)
X_test_big = np.vstack([X_test]*5)
y_train_big = np.hstack([y_train]*5)
y_test_big = np.hstack([y_test]*5)
classifier_big = KNearestNeighbor()
classifier_big.fit(X_train_big, y_train_big)
# Let's compare how fast the implementations are
def time_function(f, *args):
"""
Call a function f with args and return the time (in seconds) that it took to execute.
"""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier_big.compute_distances_two_loops, X_test_big)
print('Two loop version took %f seconds' % two_loop_time)
one_loop_time = time_function(classifier_big.compute_distances_one_loop, X_test_big)
print('One loop version took %f seconds' % one_loop_time)
no_loop_time = time_function(classifier_big.compute_distances_no_loops, X_test_big)
print('No loop version took %f seconds' % no_loop_time)
# You should see significantly faster performance with the fully vectorized implementation!
# NOTE: depending on what machine you're using,
# you might not see a speedup when you go from two loops to one loop,
# and might even see a slow-down.
```
The improvement is significant. (On some hardware the one-loop version may take even more time than the two-loop one, but the no-loop version should definitely be the fastest.)
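A side note: `time.time()` around a single call can be noisy; `timeit.repeat` with a best-of-several minimum gives a steadier estimate. A sketch with a stand-in workload (replace `workload` with a call such as `classifier_big.compute_distances_no_loops(X_test_big)`):

```
import timeit
import numpy as np

X = np.random.rand(200, 64)

def workload():
    # stand-in workload: a broadcasted pairwise distance computation
    np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))

# run 5 times, one call each; the minimum is the least noisy estimate
best = min(timeit.repeat(workload, number=1, repeat=5))
print('Best of 5 runs: %f seconds' % best)
```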
**Inline Question 3**
Which of the following statements about $k$-Nearest Neighbor ($k$-NN) are true in a classification setting, and for all $k$? Select all that apply.
1. The decision boundary (hyperplane between classes in feature space) of the k-NN classifier is linear.
2. The training error of a 1-NN will always be lower than that of 5-NN.
3. The test error of a 1-NN will always be lower than that of a 5-NN.
4. The time needed to classify a test example with the k-NN classifier grows with the size of the training set.
5. None of the above.
$\color{blue}{\textit Your Answer:}$ 2, 4
$\color{blue}{\textit Your Explanation:}$
1. The decision boundary depends on the distances to the k closest points; nothing forces it to be linear.
2. Yes: a 1-NN memorizes the training set (each training point is its own nearest neighbour), so it overfits and its training error is zero.
3. Not always; in most cases the test error of a 1-NN will be higher than that of a 5-NN, because 1-NN relies on a single point to predict the label.
4. True, because classifying a test example requires computing distances to every training point.
### Submitting your work
To submit your work you need to log into the Yandex contest (link will be provided later) and upload the `k_nearest_neighbor.py` file for the corresponding problem.
# Run Modes
Running MAGICC in different modes can be non-trivial. In this notebook we show how to set MAGICC's config flags so that it will run as desired for a few different cases.
```
# NBVAL_IGNORE_OUTPUT
from os.path import join
import datetime
import dateutil
from copy import deepcopy
import numpy as np
import pandas as pd
from pymagicc import MAGICC6, rcp26, zero_emissions
from pymagicc.io import MAGICCData
%matplotlib inline
from matplotlib import pyplot as plt
plt.style.use("ggplot")
plt.rcParams["figure.figsize"] = (12, 6)
```
## Concentration to emissions hybrid
This is MAGICC's default run mode. In this run mode, MAGICC will run with prescribed concentrations (or a quantity which scales linearly with radiative forcing for aerosol species) until a given point in time and will then switch to running in emissions driven mode.
```
with MAGICC6() as magicc:
res = magicc.run(rcp26)
# NBVAL_IGNORE_OUTPUT
res.head()
plt.figure()
res.filter(variable="Emis*CO2*", region="World").line_plot(hue="variable")
plt.figure()
res.filter(variable="Atmos*Conc*CO2", region="World").line_plot(hue="variable");
```
The switches which control the time at which MAGICC switches from concentrations driven to emissions driven are all in the form `GAS_SWITCHFROMXXX2EMIS_YEAR` e.g. `CO2_SWITCHFROMCONC2EMIS_YEAR` and `BCOC_SWITCHFROMRF2EMIS_YEAR`.
Changing the value of these switches will alter how MAGICC runs.
```
# NBVAL_IGNORE_OUTPUT
df = deepcopy(rcp26)
df["scenario"] = "RCP26_altered_co2_switch"
with MAGICC6() as magicc:
res = res.append(magicc.run(df, co2_switchfromconc2emis_year=1850))
plt.figure()
res.filter(variable="Emis*CO2*", region="World").line_plot(hue="variable")
plt.figure()
res.filter(variable="Atmos*Conc*CO2", region="World").line_plot(hue="variable");
# NBVAL_IGNORE_OUTPUT
res.timeseries()
```
As we can see, the emissions remain unchanged but the concentrations are altered as MAGICC is now running emissions driven from 1850 rather than 2005 (the default).
To get a fully emissions driven run, you need to change all of the relevant `GAS_SWITCHFROMXXX2EMIS_YEAR` flags.
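As a sketch, such a run would pass one switch flag per gas. Only `co2_switchfromconc2emis_year` and `bcoc_switchfromrf2emis_year` appear above; the other names here are assumptions that follow the same pattern and should be checked against your MAGICC configuration files.

```
# Hypothetical config fragment: run fully emissions driven from 1850.
# The CH4 and N2O flag names are assumed from the GAS_SWITCHFROMXXX2EMIS_YEAR
# pattern and should be verified against the MAGICC .CFG files.
emis_driven_flags = {
    "co2_switchfromconc2emis_year": 1850,
    "ch4_switchfromconc2emis_year": 1850,
    "n2o_switchfromconc2emis_year": 1850,
    "bcoc_switchfromrf2emis_year": 1850,
}
# res = magicc.run(rcp26, **emis_driven_flags)
```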
## CO$_2$ Emissions Driven Only
We can get a CO$_2$-emissions-only driven run as shown below.
```
df = zero_emissions.timeseries()
time = zero_emissions["time"]
df.loc[
(
df.index.get_level_values("variable")
== "Emissions|CO2|MAGICC Fossil and Industrial"
),
:,
] = np.linspace(0, 30, len(time))
scen = MAGICCData(df)
scen.filter(variable="Em*CO2*Fossil*").line_plot(
x="time", label="CO2 Fossil", hue=None
)
scen.filter(variable="Em*CO2*Fossil*", keep=False).line_plot(
x="time", label="Everything else", hue=None
);
# NBVAL_IGNORE_OUTPUT
with MAGICC6() as magicc:
co2_only_res = magicc.run(
scen,
endyear=scen["time"].max().year,
rf_total_constantafteryr=5000,
rf_total_runmodus="CO2",
co2_switchfromconc2emis_year=min(scen["time"]).year,
)
for v in [
"Emis*CO2*",
"Atmos*Conc*CO2",
"Radiative Forcing",
"Surface Temperature",
]:
plt.figure()
co2_only_res.filter(variable=v, region="World").line_plot(hue="variable")
```
## Prescribed Forcing Driven Only
It is also possible to examine MAGICC's response to a prescribed radiative forcing only.
```
time = zero_emissions["time"]
forcing_external = 2.0 * np.arange(0, len(time)) / len(time)
forcing_ext = MAGICCData(
forcing_external,
index=time,
columns={
"scenario": ["idealised"],
"model": ["unspecified"],
"climate_model": ["unspecified"],
"variable": ["Radiative Forcing|Extra"],
"unit": ["W / m^2"],
"todo": ["SET"],
"region": ["World"],
},
)
forcing_ext.metadata = {
"header": "External radiative forcing with linear increase"
}
forcing_ext.line_plot(x="time");
with MAGICC6() as magicc:
forcing_ext_filename = "CUSTOM_EXTRA_RF.IN"
forcing_ext.write(
join(magicc.run_dir, forcing_ext_filename), magicc.version
)
ext_forc_only_res = magicc.run(
rf_extra_read=1,
file_extra_rf=forcing_ext_filename,
rf_total_runmodus="QEXTRA",
endyear=max(time).year,
rf_initialization_method="ZEROSTARTSHIFT", # this is default but to be sure
rf_total_constantafteryr=5000,
)
ext_forc_only_res.filter(
variable=["Radiative Forcing", "Surface Temperature"], region="World"
).line_plot(hue="variable")
```
## Zero Temperature Output
Getting MAGICC to return zero for its temperature output is surprisingly difficult. To help address this, we add the `set_zero_config` method to our MAGICC classes.
```
print(MAGICC6.set_zero_config.__doc__)
# NBVAL_IGNORE_OUTPUT
with MAGICC6() as magicc:
magicc.set_zero_config()
res_zero = magicc.run()
res_zero.filter(
variable=["Surface Temperature", "Radiative Forcing"], region="World"
).line_plot(x="time");
```
## CO$_2$ Emissions and Prescribed Forcing
It is also possible to run MAGICC in a mode which is CO$_2$ emissions driven but also includes a prescribed external forcing.
```
df = zero_emissions.timeseries()
time = zero_emissions["time"]
emms_fossil_co2 = (
np.linspace(0, 3, len(time))
- (1 + (np.arange(len(time)) - 500) / 500) ** 2
)
df.loc[
(
df.index.get_level_values("variable")
== "Emissions|CO2|MAGICC Fossil and Industrial"
),
:,
] = emms_fossil_co2
scen = MAGICCData(df)
scen.filter(variable="Em*CO2*Fossil*").line_plot(x="time", hue="variable")
scen.filter(variable="Em*CO2*Fossil*", keep=False).line_plot(
x="time", label="Everything Else"
)
forcing_external = 3.0 * np.arange(0, len(time)) / len(time)
forcing_ext = MAGICCData(
forcing_external,
index=time,
columns={
"scenario": ["idealised"],
"model": ["unspecified"],
"climate_model": ["unspecified"],
"variable": ["Radiative Forcing|Extra"],
"unit": ["W / m^2"],
"todo": ["SET"],
"region": ["World"],
},
)
forcing_ext.metadata = {
"header": "External radiative forcing with linear increase"
}
forcing_ext.line_plot(x="time", hue="variable");
# NBVAL_IGNORE_OUTPUT
scen.timeseries()
with MAGICC6() as magicc:
magicc.set_zero_config() # very important, try commenting this out and see what happens
forcing_ext_filename = "CUSTOM_EXTRA_RF.IN"
forcing_ext.write(
join(magicc.run_dir, forcing_ext_filename), magicc.version
)
co2_emms_ext_forc_res = magicc.run(
scen,
endyear=scen["time"].max().year,
co2_switchfromconc2emis_year=min(scen["time"]).year,
rf_extra_read=1,
file_extra_rf=forcing_ext_filename,
rf_total_runmodus="ALL", # default but just in case
rf_initialization_method="ZEROSTARTSHIFT", # this is default but to be sure
rf_total_constantafteryr=5000,
)
plt.figure()
co2_emms_ext_forc_res.filter(variable="Emis*CO2*", region="World").line_plot(
x="time", hue="variable"
)
plt.figure()
co2_emms_ext_forc_res.filter(
variable="Atmos*Conc*CO2", region="World"
).line_plot(x="time")
plt.figure()
co2_emms_ext_forc_res.filter(
variable="Radiative Forcing", region="World"
).line_plot(x="time")
plt.figure()
co2_emms_ext_forc_res.filter(
variable="Surface Temperature", region="World"
).line_plot(x="time");
```
If we adjust MAGICC's CO$_2$ temperature feedback start year, it is easier to see what is going on.
```
with MAGICC6() as magicc:
magicc.set_zero_config()
forcing_ext_filename = "CUSTOM_EXTRA_RF.IN"
forcing_ext.write(
join(magicc.run_dir, forcing_ext_filename), magicc.version
)
for temp_feedback_year in [2000, 2100, 3000]:
scen["scenario"] = "idealised_{}_CO2_temperature_feedback".format(
temp_feedback_year
)
co2_emms_ext_forc_res.append(
magicc.run(
scen,
endyear=scen["time"].max().year,
co2_switchfromconc2emis_year=min(scen["time"]).year,
rf_extra_read=1,
file_extra_rf=forcing_ext_filename,
rf_total_runmodus="ALL",
rf_initialization_method="ZEROSTARTSHIFT",
rf_total_constantafteryr=5000,
co2_tempfeedback_yrstart=temp_feedback_year,
)
)
co2_emms_ext_forc_res.filter(variable="Emis*CO2*", region="World").line_plot(
x="time", hue="variable"
)
plt.figure()
co2_emms_ext_forc_res.filter(
variable="Atmos*Conc*CO2", region="World"
).line_plot(x="time")
plt.figure()
co2_emms_ext_forc_res.filter(
variable="Radiative Forcing", region="World"
).line_plot(x="time")
plt.figure()
co2_emms_ext_forc_res.filter(
variable="Surface Temperature", region="World"
).line_plot(x="time");
```
## CO$_2$ Concentrations Driven
```
time = zero_emissions["time"]
co2_concs = 278 * np.ones_like(time)
co2_concs[105:] = 278 * 1.01 ** (np.arange(0, len(time[105:])))
co2_concs = MAGICCData(
co2_concs,
index=time,
columns={
"scenario": ["1%/yr CO2"],
"model": ["unspecified"],
"climate_model": ["unspecified"],
"variable": ["Atmospheric Concentrations|CO2"],
"unit": ["ppm"],
"todo": ["SET"],
"region": ["World"],
},
)
co2_concs = co2_concs.filter(year=range(1700, 2001))
time = co2_concs["time"]
co2_concs.metadata = {"header": "1%/yr atmospheric CO2 concentration increase"}
co2_concs.line_plot(x="time");
with MAGICC6() as magicc:
co2_conc_filename = "1PCT_CO2_CONC.IN"
co2_concs.write(join(magicc.run_dir, co2_conc_filename), magicc.version)
co2_conc_driven_res = magicc.run(
file_co2_conc=co2_conc_filename,
co2_switchfromconc2emis_year=max(time).year,
co2_tempfeedback_switch=1,
co2_tempfeedback_yrstart=1870,
co2_fertilization_yrstart=1870,
rf_total_runmodus="CO2",
rf_total_constantafteryr=max(time).year,
endyear=max(time).year,
out_inverseemis=1,
)
plt.figure()
co2_conc_driven_res.filter(
variable="Inverse Emis*CO2*", region="World"
).line_plot()
plt.figure()
co2_conc_driven_res.filter(
variable="Atmos*Conc*CO2", region="World"
).line_plot()
plt.figure()
co2_conc_driven_res.filter(
variable="Radiative Forcing", region="World"
).line_plot()
plt.figure()
co2_conc_driven_res.filter(
variable="Surface Temperature", region="World"
).line_plot();
```
<img src="https://raw.githubusercontent.com/EdsonAvelar/auc/master/molecular_banner.png" width=1900px height=400px />
# Predicting Molecular Properties
<h3 style="color:red">If this kernel helps you, upvote it to keep me motivated 😁<br>Thanks!</h3>
<h3> Can you measure the magnetic interactions between a pair of atoms? </h3>
This kernel is a combination of multiple kernels. The goal is to organize and explain the code for beginner competitors like me.<br>
This kernel creates lots of new features and uses LightGBM as the model.
> Update: Using Bond Calculation
# Table of Contents:
**1. [Problem Definition](#id1)** <br>
**2. [Get the Data (Collect / Obtain)](#id2)** <br>
**3. [Load the Dataset](#id3)** <br>
**4. [Data Pre-processing](#id4)** <br>
**5. [Model](#id5)** <br>
**6. [Visualization and Analysis of Results](#id6)** <br>
**7. [Submission](#id7)** <br>
**8. [References](#ref)** <br>
<a id="id1"></a> <br>
# **1. Problem Definition:**
This challenge aims to predict interactions between atoms. The main task is to develop an algorithm that can predict the magnetic interaction between two atoms in a molecule (i.e., the scalar coupling constant).<br>
In this competition, you will be predicting the scalar_coupling_constant between atom pairs in molecules, given the two atom types (e.g., C and H), the coupling type (e.g., 2JHC), and any features you are able to create from the molecule structure (xyz) files.
**Data**
* **train.csv** - the training set, where the first column (molecule_name) is the name of the molecule the coupling constant originates from, the second (atom_index_0) and third (atom_index_1) columns are the indices of the atom pair creating the coupling, and the fourth column (**scalar_coupling_constant**) is the scalar coupling constant that we want to be able to predict
* **test.csv** - the test set; same info as train, without the target variable
* **sample_submission.csv** - a sample submission file in the correct format
* **structures.csv** - this file contains the same information as the individual xyz structure files, but in a single file
**Additional Data**<br>
*NOTE: additional data is provided for the molecules in Train only!*
* **scalar_coupling_contributions.csv** - The scalar coupling constants in train.csv are a sum of four terms. The first column (**molecule_name**) is the name of the molecule, the second (**atom_index_0**) and third (**atom_index_1**) columns are the atom indices of the atom pair, the fourth column indicates the **type** of coupling, the fifth column (**fc**) is the Fermi contact contribution, the sixth (**sd**) is the spin-dipolar contribution, the seventh (**pso**) is the paramagnetic spin-orbit contribution and the eighth (**dso**) is the diamagnetic spin-orbit contribution.
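In other words, for each row `scalar_coupling_constant ≈ fc + sd + pso + dso`. A sketch of that check on a synthetic stand-in frame (in the competition you would read `scalar_coupling_contributions.csv` and merge it with `train.csv` on the molecule and atom-index columns — the values below are made up for illustration):

```
import numpy as np
import pandas as pd

# Synthetic stand-in for scalar_coupling_contributions.csv
contrib = pd.DataFrame({
    "molecule_name": ["m1", "m1"],
    "atom_index_0": [0, 0],
    "atom_index_1": [1, 2],
    "type": ["1JHC", "2JHC"],
    "fc": [80.0, 1.2], "sd": [0.3, 0.1], "pso": [0.2, 0.05], "dso": [0.1, 0.02],
})
# the four contributions should sum to the target coupling constant
total = contrib[["fc", "sd", "pso", "dso"]].sum(axis=1)
print(total.tolist())
```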
<a id="id2"></a> <br>
# **2. Get the Data (Collect / Obtain):**
## All imports used in this kernel
```
import numpy as np
import pandas as pd
import os
import matplotlib.pyplot as plt
%matplotlib inline
from tqdm import tqdm_notebook
from sklearn.preprocessing import StandardScaler
from sklearn.svm import NuSVR, SVR
from sklearn.metrics import mean_absolute_error
pd.options.display.precision = 15
import lightgbm as lgb
import xgboost as xgb
import time
import datetime
from catboost import CatBoostRegressor
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import StratifiedKFold, KFold, RepeatedKFold
from sklearn import metrics
from sklearn import linear_model
import gc
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
from IPython.display import HTML
import json
import altair as alt
import networkx as nx
from numba import jit
from catboost import CatBoostClassifier
from itertools import product
from altair.vega import v3
alt.renderers.enable('notebook')
```
## All function used in this kernel
```
# using ideas from this kernel: https://www.kaggle.com/notslush/altair-visualization-2018-stackoverflow-survey
def prepare_altair():
"""
Helper function to prepare altair for working.
"""
vega_url = 'https://cdn.jsdelivr.net/npm/vega@' + v3.SCHEMA_VERSION
vega_lib_url = 'https://cdn.jsdelivr.net/npm/vega-lib'
vega_lite_url = 'https://cdn.jsdelivr.net/npm/vega-lite@' + alt.SCHEMA_VERSION
vega_embed_url = 'https://cdn.jsdelivr.net/npm/vega-embed@3'
noext = "?noext"
paths = {
'vega': vega_url + noext,
'vega-lib': vega_lib_url + noext,
'vega-lite': vega_lite_url + noext,
'vega-embed': vega_embed_url + noext
}
workaround = f""" requirejs.config({{
baseUrl: 'https://cdn.jsdelivr.net/npm/',
paths: {paths}
}});
"""
return workaround
def add_autoincrement(render_func):
# Keep track of unique <div/> IDs
cache = {}
def wrapped(chart, id="vega-chart", autoincrement=True):
if autoincrement:
if id in cache:
counter = 1 + cache[id]
cache[id] = counter
else:
cache[id] = 0
actual_id = id if cache[id] == 0 else id + '-' + str(cache[id])
else:
if id not in cache:
cache[id] = 0
actual_id = id
return render_func(chart, id=actual_id)
# Cache will stay outside and
return wrapped
@add_autoincrement
def render(chart, id="vega-chart"):
"""
Helper function to plot altair visualizations.
"""
chart_str = """
<div id="{id}"></div><script>
require(["vega-embed"], function(vg_embed) {{
const spec = {chart};
vg_embed("#{id}", spec, {{defaultStyle: true}}).catch(console.warn);
console.log("anything?");
}});
console.log("really...anything?");
</script>
"""
return HTML(
chart_str.format(
id=id,
chart=json.dumps(chart) if isinstance(chart, dict) else chart.to_json(indent=None)
)
)
def reduce_mem_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
return df
@jit
def fast_auc(y_true, y_prob):
"""
fast roc_auc computation: https://www.kaggle.com/c/microsoft-malware-prediction/discussion/76013
"""
y_true = np.asarray(y_true)
y_true = y_true[np.argsort(y_prob)]
nfalse = 0
auc = 0
n = len(y_true)
for i in range(n):
y_i = y_true[i]
nfalse += (1 - y_i)
auc += y_i * nfalse
auc /= (nfalse * (n - nfalse))
return auc
def eval_auc(y_true, y_pred):
"""
Fast auc eval function for lgb.
"""
return 'auc', fast_auc(y_true, y_pred), True
def group_mean_log_mae(y_true, y_pred, types, floor=1e-9):
"""
Fast metric computation for this competition: https://www.kaggle.com/c/champs-scalar-coupling
Code is from this kernel: https://www.kaggle.com/uberkinder/efficient-metric
"""
maes = (y_true-y_pred).abs().groupby(types).mean()
return np.log(maes.map(lambda x: max(x, floor))).mean()
def train_model_regression(X, X_test, y, params, folds, model_type='lgb', eval_metric='mae', columns=None, plot_feature_importance=False, model=None,
verbose=10000, early_stopping_rounds=200, n_estimators=50000):
"""
A function to train a variety of regression models.
Returns dictionary with oof predictions, test predictions, scores and, if necessary, feature importances.
:params: X - training data, can be pd.DataFrame or np.ndarray (after normalizing)
:params: X_test - test data, can be pd.DataFrame or np.ndarray (after normalizing)
:params: y - target
:params: folds - folds to split data
:params: model_type - type of model to use
:params: eval_metric - metric to use
:params: columns - columns to use. If None - use all columns
:params: plot_feature_importance - whether to plot feature importance of LGB
:params: model - sklearn model, works only for "sklearn" model type
"""
columns = X.columns if columns is None else columns
X_test = X_test[columns]
# to set up scoring parameters
metrics_dict = {'mae': {'lgb_metric_name': 'mae',
'catboost_metric_name': 'MAE',
'sklearn_scoring_function': metrics.mean_absolute_error},
'group_mae': {'lgb_metric_name': 'mae',
'catboost_metric_name': 'MAE',
'scoring_function': group_mean_log_mae},
'mse': {'lgb_metric_name': 'mse',
'catboost_metric_name': 'MSE',
'sklearn_scoring_function': metrics.mean_squared_error}
}
result_dict = {}
# out-of-fold predictions on train data
oof = np.zeros(len(X))
# averaged predictions on train data
prediction = np.zeros(len(X_test))
# list of scores on folds
scores = []
feature_importance = pd.DataFrame()
# split and train on folds
for fold_n, (train_index, valid_index) in enumerate(folds.split(X)):
print(f'Fold {fold_n + 1} started at {time.ctime()}')
if type(X) == np.ndarray:
X_train, X_valid = X[columns][train_index], X[columns][valid_index]
y_train, y_valid = y[train_index], y[valid_index]
else:
X_train, X_valid = X[columns].iloc[train_index], X[columns].iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
if model_type == 'lgb':
model = lgb.LGBMRegressor(**params, n_estimators = n_estimators, n_jobs = -1)
model.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_valid, y_valid)], eval_metric=metrics_dict[eval_metric]['lgb_metric_name'],
verbose=verbose, early_stopping_rounds=early_stopping_rounds)
y_pred_valid = model.predict(X_valid)
y_pred = model.predict(X_test, num_iteration=model.best_iteration_)
if model_type == 'xgb':
train_data = xgb.DMatrix(data=X_train, label=y_train, feature_names=X.columns)
valid_data = xgb.DMatrix(data=X_valid, label=y_valid, feature_names=X.columns)
watchlist = [(train_data, 'train'), (valid_data, 'valid_data')]
model = xgb.train(dtrain=train_data, num_boost_round=20000, evals=watchlist, early_stopping_rounds=200, verbose_eval=verbose, params=params)
y_pred_valid = model.predict(xgb.DMatrix(X_valid, feature_names=X.columns), ntree_limit=model.best_ntree_limit)
y_pred = model.predict(xgb.DMatrix(X_test, feature_names=X.columns), ntree_limit=model.best_ntree_limit)
if model_type == 'sklearn':
model = model
model.fit(X_train, y_train)
y_pred_valid = model.predict(X_valid).reshape(-1,)
score = metrics_dict[eval_metric]['sklearn_scoring_function'](y_valid, y_pred_valid)
print(f'Fold {fold_n + 1}. {eval_metric}: {score:.4f}.')
print('')
y_pred = model.predict(X_test).reshape(-1,)
if model_type == 'cat':
model = CatBoostRegressor(iterations=20000, eval_metric=metrics_dict[eval_metric]['catboost_metric_name'], **params,
loss_function=metrics_dict[eval_metric]['catboost_metric_name'])
model.fit(X_train, y_train, eval_set=(X_valid, y_valid), cat_features=[], use_best_model=True, verbose=False)
y_pred_valid = model.predict(X_valid)
y_pred = model.predict(X_test)
oof[valid_index] = y_pred_valid.reshape(-1,)
if eval_metric != 'group_mae':
scores.append(metrics_dict[eval_metric]['sklearn_scoring_function'](y_valid, y_pred_valid))
else:
scores.append(metrics_dict[eval_metric]['scoring_function'](y_valid, y_pred_valid, X_valid['type']))
prediction += y_pred
if model_type == 'lgb' and plot_feature_importance:
# feature importance
fold_importance = pd.DataFrame()
fold_importance["feature"] = columns
fold_importance["importance"] = model.feature_importances_
fold_importance["fold"] = fold_n + 1
feature_importance = pd.concat([feature_importance, fold_importance], axis=0)
prediction /= folds.n_splits
print('CV mean score: {0:.4f}, std: {1:.4f}.'.format(np.mean(scores), np.std(scores)))
result_dict['oof'] = oof
result_dict['prediction'] = prediction
result_dict['scores'] = scores
if model_type == 'lgb':
if plot_feature_importance:
feature_importance["importance"] /= folds.n_splits
cols = feature_importance[["feature", "importance"]].groupby("feature").mean().sort_values(
by="importance", ascending=False)[:50].index
best_features = feature_importance.loc[feature_importance.feature.isin(cols)]
plt.figure(figsize=(16, 12));
sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance", ascending=False));
plt.title('LGB Features (avg over folds)');
result_dict['feature_importance'] = feature_importance
return result_dict
def train_model_classification(X, X_test, y, params, folds, model_type='lgb', eval_metric='auc', columns=None, plot_feature_importance=False, model=None,
verbose=10000, early_stopping_rounds=200, n_estimators=50000):
"""
A function to train a variety of classification models.
Returns dictionary with oof predictions, test predictions, scores and, if necessary, feature importances.
:params: X - training data, can be pd.DataFrame or np.ndarray (after normalizing)
:params: X_test - test data, can be pd.DataFrame or np.ndarray (after normalizing)
:params: y - target
:params: folds - folds to split data
:params: model_type - type of model to use
:params: eval_metric - metric to use
:params: columns - columns to use. If None - use all columns
:params: plot_feature_importance - whether to plot feature importance of LGB
:params: model - sklearn model, works only for "sklearn" model type
"""
columns = X.columns if columns is None else columns
X_test = X_test[columns]
# to set up scoring parameters
metrics_dict = {'auc': {'lgb_metric_name': eval_auc,
'catboost_metric_name': 'AUC',
'sklearn_scoring_function': metrics.roc_auc_score},
}
result_dict = {}
# out-of-fold predictions on train data
oof = np.zeros((len(X), len(set(y.values))))
# averaged predictions on train data
prediction = np.zeros((len(X_test), oof.shape[1]))
# list of scores on folds
scores = []
feature_importance = pd.DataFrame()
# split and train on folds
for fold_n, (train_index, valid_index) in enumerate(folds.split(X)):
print(f'Fold {fold_n + 1} started at {time.ctime()}')
if type(X) == np.ndarray:
X_train, X_valid = X[columns][train_index], X[columns][valid_index]
y_train, y_valid = y[train_index], y[valid_index]
else:
X_train, X_valid = X[columns].iloc[train_index], X[columns].iloc[valid_index]
y_train, y_valid = y.iloc[train_index], y.iloc[valid_index]
if model_type == 'lgb':
model = lgb.LGBMClassifier(**params, n_estimators=n_estimators, n_jobs = -1)
model.fit(X_train, y_train,
eval_set=[(X_train, y_train), (X_valid, y_valid)], eval_metric=metrics_dict[eval_metric]['lgb_metric_name'],
verbose=verbose, early_stopping_rounds=early_stopping_rounds)
y_pred_valid = model.predict_proba(X_valid)
y_pred = model.predict_proba(X_test, num_iteration=model.best_iteration_)
if model_type == 'xgb':
train_data = xgb.DMatrix(data=X_train, label=y_train, feature_names=X.columns)
valid_data = xgb.DMatrix(data=X_valid, label=y_valid, feature_names=X.columns)
watchlist = [(train_data, 'train'), (valid_data, 'valid_data')]
model = xgb.train(dtrain=train_data, num_boost_round=n_estimators, evals=watchlist, early_stopping_rounds=early_stopping_rounds, verbose_eval=verbose, params=params)
y_pred_valid = model.predict(xgb.DMatrix(X_valid, feature_names=X.columns), ntree_limit=model.best_ntree_limit)
y_pred = model.predict(xgb.DMatrix(X_test, feature_names=X.columns), ntree_limit=model.best_ntree_limit)
if model_type == 'sklearn':
model = model
model.fit(X_train, y_train)
y_pred_valid = model.predict(X_valid).reshape(-1,)
score = metrics_dict[eval_metric]['sklearn_scoring_function'](y_valid, y_pred_valid)
print(f'Fold {fold_n}. {eval_metric}: {score:.4f}.')
print('')
y_pred = model.predict_proba(X_test)
if model_type == 'cat':
model = CatBoostClassifier(iterations=n_estimators, eval_metric=metrics_dict[eval_metric]['catboost_metric_name'], **params,
loss_function=metrics_dict[eval_metric]['catboost_metric_name'])
model.fit(X_train, y_train, eval_set=(X_valid, y_valid), cat_features=[], use_best_model=True, verbose=False)
y_pred_valid = model.predict(X_valid)
y_pred = model.predict(X_test)
oof[valid_index] = y_pred_valid
scores.append(metrics_dict[eval_metric]['sklearn_scoring_function'](y_valid, y_pred_valid[:, 1]))
prediction += y_pred
if model_type == 'lgb' and plot_feature_importance:
# feature importance
fold_importance = pd.DataFrame()
fold_importance["feature"] = columns
fold_importance["importance"] = model.feature_importances_
fold_importance["fold"] = fold_n + 1
feature_importance = pd.concat([feature_importance, fold_importance], axis=0)
prediction /= folds.n_splits
print('CV mean score: {0:.4f}, std: {1:.4f}.'.format(np.mean(scores), np.std(scores)))
result_dict['oof'] = oof
result_dict['prediction'] = prediction
result_dict['scores'] = scores
if model_type == 'lgb':
if plot_feature_importance:
feature_importance["importance"] /= folds.n_splits
cols = feature_importance[["feature", "importance"]].groupby("feature").mean().sort_values(
by="importance", ascending=False)[:50].index
best_features = feature_importance.loc[feature_importance.feature.isin(cols)]
plt.figure(figsize=(16, 12));
sns.barplot(x="importance", y="feature", data=best_features.sort_values(by="importance", ascending=False));
plt.title('LGB Features (avg over folds)');
result_dict['feature_importance'] = feature_importance
return result_dict
# setting up altair
workaround = prepare_altair()
HTML("".join((
"<script>",
workaround,
"</script>",
)))
```
<a id="id3"></a> <br>
# **3. Load the Dataset**
Let's load all necessary datasets
```
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
sub = pd.read_csv('../input/sample_submission.csv')
structures = pd.read_csv('../input/structures.csv')
scalar_coupling_contributions = pd.read_csv('../input/scalar_coupling_contributions.csv')
print('Train dataset shape is -> rows: {} cols:{}'.format(train.shape[0],train.shape[1]))
print('Test dataset shape is -> rows: {} cols:{}'.format(test.shape[0],test.shape[1]))
print('Sub dataset shape is -> rows: {} cols:{}'.format(sub.shape[0],sub.shape[1]))
print('Structures dataset shape is -> rows: {} cols:{}'.format(structures.shape[0],structures.shape[1]))
print('Scalar_coupling_contributions dataset shape is -> rows: {} cols:{}'.format(scalar_coupling_contributions.shape[0],
scalar_coupling_contributions.shape[1]))
```
For fast model/feature evaluation, use only 10% of the dataset. For the final submission, this code must be removed or kept commented out.
```
n_estimators_default = 3000
'''
size = round(0.10*train.shape[0])
train = train[:size]
test = test[:size]
sub = sub[:size]
structures = structures[:size]
scalar_coupling_contributions = scalar_coupling_contributions[:size]
print('Train dataset shape is now rows: {} cols:{}'.format(train.shape[0],train.shape[1]))
print('Test dataset shape is now rows: {} cols:{}'.format(test.shape[0],test.shape[1]))
print('Sub dataset shape is now rows: {} cols:{}'.format(sub.shape[0],sub.shape[1]))
print('Structures dataset shape is now rows: {} cols:{}'.format(structures.shape[0],structures.shape[1]))
print('Scalar_coupling_contributions dataset shape is now rows: {} cols:{}'.format(scalar_coupling_contributions.shape[0],
scalar_coupling_contributions.shape[1]))
'''
```
The important thing to know is that the scalar coupling constants in train.csv are the sum of four terms.
* fc is the Fermi Contact contribution
* sd is the Spin-dipolar contribution
* pso is the Paramagnetic spin-orbit contribution
* dso is the Diamagnetic spin-orbit contribution
Let's merge these contributions into the train set.
```
train = pd.merge(train, scalar_coupling_contributions, how = 'left',
left_on = ['molecule_name', 'atom_index_0', 'atom_index_1', 'type'],
right_on = ['molecule_name', 'atom_index_0', 'atom_index_1', 'type'])
train.head(10)
test.head(10)
scalar_coupling_contributions.head(5)
```
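As a sanity check on the decomposition above, the four contribution columns should sum to the target. A minimal sketch on hypothetical rows (the values below are illustrative, not taken from the actual dataset):

```python
import pandas as pd

# Toy rows mimicking the merged train frame: the target should equal
# the sum of the four contribution terms (fc + sd + pso + dso).
df = pd.DataFrame({
    'scalar_coupling_constant': [84.8, -11.3],
    'fc': [83.0, -10.9],
    'sd': [0.3, -0.1],
    'pso': [1.2, -0.2],
    'dso': [0.3, -0.1],
})
residual = df['scalar_coupling_constant'] - df[['fc', 'sd', 'pso', 'dso']].sum(axis=1)
print(residual.abs().max())  # ~0 if the decomposition holds
```

On the real merged frame, the same residual can be computed to confirm that the four-term decomposition holds to numerical precision.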
`train['scalar_coupling_constant']` and `scalar_coupling_contributions['fc']` are quite similar:
```
pd.concat(objs=[train['scalar_coupling_constant'],scalar_coupling_contributions['fc'] ],axis=1)[:10]
```
Based on other ideas, we can:<br>
- train a model to predict the `fc` feature;
- add this feature to train and test, and train the same model to compare performance;
- train a better model.
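The out-of-fold idea in the first two bullets can be sketched on toy data. Everything below is illustrative (a plain least-squares stand-in replaces the LightGBM model actually used later): each training row receives its auxiliary prediction from a fold model that never saw that row, so the resulting meta-feature is leakage-free.

```python
import numpy as np

# Synthetic auxiliary target standing in for `fc`
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
aux = X @ np.array([1.0, -2.0, 0.5, 0.0]) + rng.normal(scale=0.1, size=300)

n_folds = 3
fold_id = np.arange(len(X)) % n_folds
oof = np.zeros(len(X))
for k in range(n_folds):
    tr, va = fold_id != k, fold_id == k
    # Fit only on the other folds, predict the held-out fold
    coef, *_ = np.linalg.lstsq(X[tr], aux[tr], rcond=None)
    oof[va] = X[va] @ coef

# `oof` can now be appended to the training frame as a leakage-free feature
print(np.corrcoef(oof, aux)[0, 1])
```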
<a id="id4"></a> <br>
# **4. Data Pre-processing**
## Feature generation
I used this great kernel to get the x, y, z positions: https://www.kaggle.com/seriousran/just-speed-up-calculate-distance-from-benchmark
```
from tqdm import tqdm_notebook as tqdm
atomic_radius = {'H':0.38, 'C':0.77, 'N':0.75, 'O':0.73, 'F':0.71} # Without fudge factor
fudge_factor = 0.05
atomic_radius = {k:v + fudge_factor for k,v in atomic_radius.items()}
print(atomic_radius)
electronegativity = {'H':2.2, 'C':2.55, 'N':3.04, 'O':3.44, 'F':3.98}
#structures = pd.read_csv(structures, dtype={'atom_index':np.int8})
atoms = structures['atom'].values
atoms_en = [electronegativity[x] for x in tqdm(atoms)]
atoms_rad = [atomic_radius[x] for x in tqdm(atoms)]
structures['EN'] = atoms_en
structures['rad'] = atoms_rad
display(structures.head())
```
### Chemical Bond Calculation
```
i_atom = structures['atom_index'].values
p = structures[['x', 'y', 'z']].values
p_compare = p
m = structures['molecule_name'].values
m_compare = m
r = structures['rad'].values
r_compare = r
source_row = np.arange(len(structures))
max_atoms = 28
bonds = np.zeros((len(structures)+1, max_atoms+1), dtype=np.int8)
bond_dists = np.zeros((len(structures)+1, max_atoms+1), dtype=np.float32)
print('Calculating bonds')
for i in tqdm(range(max_atoms-1)):
p_compare = np.roll(p_compare, -1, axis=0)
m_compare = np.roll(m_compare, -1, axis=0)
r_compare = np.roll(r_compare, -1, axis=0)
mask = np.where(m == m_compare, 1, 0) #Are we still comparing atoms in the same molecule?
dists = np.linalg.norm(p - p_compare, axis=1) * mask
r_bond = r + r_compare
bond = np.where(np.logical_and(dists > 0.0001, dists < r_bond), 1, 0)
source_row = source_row
target_row = source_row + i + 1 #Note: Will be out of bounds of bonds array for some values of i
target_row = np.where(np.logical_or(target_row > len(structures), mask==0), len(structures), target_row) #If invalid target, write to dummy row
source_atom = i_atom
target_atom = i_atom + i + 1 #Note: Will be out of bounds of bonds array for some values of i
target_atom = np.where(np.logical_or(target_atom > max_atoms, mask==0), max_atoms, target_atom) #If invalid target, write to dummy col
bonds[(source_row, target_atom)] = bond
bonds[(target_row, source_atom)] = bond
bond_dists[(source_row, target_atom)] = dists
bond_dists[(target_row, source_atom)] = dists
bonds = np.delete(bonds, axis=0, obj=-1) #Delete dummy row
bonds = np.delete(bonds, axis=1, obj=-1) #Delete dummy col
bond_dists = np.delete(bond_dists, axis=0, obj=-1) #Delete dummy row
bond_dists = np.delete(bond_dists, axis=1, obj=-1) #Delete dummy col
print('Counting and condensing bonds')
bonds_numeric = [[i for i,x in enumerate(row) if x] for row in tqdm(bonds)]
bond_lengths = [[dist for i,dist in enumerate(row) if i in bonds_numeric[j]] for j,row in enumerate(tqdm(bond_dists))]
bond_lengths_mean = [ np.mean(x) for x in bond_lengths]
n_bonds = [len(x) for x in bonds_numeric]
#bond_data = {'bond_' + str(i):col for i, col in enumerate(np.transpose(bonds))}
#bond_data.update({'bonds_numeric':bonds_numeric, 'n_bonds':n_bonds})
bond_data = {'n_bonds':n_bonds, 'bond_lengths_mean': bond_lengths_mean }
bond_df = pd.DataFrame(bond_data)
structures = structures.join(bond_df)
display(structures.head(20))
def map_atom_info(df, atom_idx):
df = pd.merge(df, structures, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
#df = df.drop('atom_index', axis=1)
df = df.rename(columns={'atom': f'atom_{atom_idx}',
'x': f'x_{atom_idx}',
'y': f'y_{atom_idx}',
'z': f'z_{atom_idx}'})
return df
train = map_atom_info(train, 0)
train = map_atom_info(train, 1)
test = map_atom_info(test, 0)
test = map_atom_info(test, 1)
```
Let's get the distance between atoms first.
```
train_p_0 = train[['x_0', 'y_0', 'z_0']].values
train_p_1 = train[['x_1', 'y_1', 'z_1']].values
test_p_0 = test[['x_0', 'y_0', 'z_0']].values
test_p_1 = test[['x_1', 'y_1', 'z_1']].values
train['dist'] = np.linalg.norm(train_p_0 - train_p_1, axis=1)
test['dist'] = np.linalg.norm(test_p_0 - test_p_1, axis=1)
train['dist_x'] = (train['x_0'] - train['x_1']) ** 2
test['dist_x'] = (test['x_0'] - test['x_1']) ** 2
train['dist_y'] = (train['y_0'] - train['y_1']) ** 2
test['dist_y'] = (test['y_0'] - test['y_1']) ** 2
train['dist_z'] = (train['z_0'] - train['z_1']) ** 2
test['dist_z'] = (test['z_0'] - test['z_1']) ** 2
train['type_0'] = train['type'].apply(lambda x: x[0])
test['type_0'] = test['type'].apply(lambda x: x[0])
def create_features(df):
df['molecule_couples'] = df.groupby('molecule_name')['id'].transform('count')
df['molecule_dist_mean'] = df.groupby('molecule_name')['dist'].transform('mean')
df['molecule_dist_min'] = df.groupby('molecule_name')['dist'].transform('min')
df['molecule_dist_max'] = df.groupby('molecule_name')['dist'].transform('max')
df['atom_0_couples_count'] = df.groupby(['molecule_name', 'atom_index_0'])['id'].transform('count')
df['atom_1_couples_count'] = df.groupby(['molecule_name', 'atom_index_1'])['id'].transform('count')
df[f'molecule_atom_index_0_x_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['x_1'].transform('std')
df[f'molecule_atom_index_0_y_1_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('mean')
df[f'molecule_atom_index_0_y_1_mean_diff'] = df[f'molecule_atom_index_0_y_1_mean'] - df['y_1']
df[f'molecule_atom_index_0_y_1_mean_div'] = df[f'molecule_atom_index_0_y_1_mean'] / df['y_1']
df[f'molecule_atom_index_0_y_1_max'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('max')
df[f'molecule_atom_index_0_y_1_max_diff'] = df[f'molecule_atom_index_0_y_1_max'] - df['y_1']
df[f'molecule_atom_index_0_y_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['y_1'].transform('std')
df[f'molecule_atom_index_0_z_1_std'] = df.groupby(['molecule_name', 'atom_index_0'])['z_1'].transform('std')
df[f'molecule_atom_index_0_dist_mean'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('mean')
df[f'molecule_atom_index_0_dist_mean_diff'] = df[f'molecule_atom_index_0_dist_mean'] - df['dist']
df[f'molecule_atom_index_0_dist_mean_div'] = df[f'molecule_atom_index_0_dist_mean'] / df['dist']
df[f'molecule_atom_index_0_dist_max'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('max')
df[f'molecule_atom_index_0_dist_max_diff'] = df[f'molecule_atom_index_0_dist_max'] - df['dist']
df[f'molecule_atom_index_0_dist_max_div'] = df[f'molecule_atom_index_0_dist_max'] / df['dist']
df[f'molecule_atom_index_0_dist_min'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min')
df[f'molecule_atom_index_0_dist_min_diff'] = df[f'molecule_atom_index_0_dist_min'] - df['dist']
df[f'molecule_atom_index_0_dist_min_div'] = df[f'molecule_atom_index_0_dist_min'] / df['dist']
df[f'molecule_atom_index_0_dist_std'] = df.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('std')
df[f'molecule_atom_index_0_dist_std_diff'] = df[f'molecule_atom_index_0_dist_std'] - df['dist']
df[f'molecule_atom_index_0_dist_std_div'] = df[f'molecule_atom_index_0_dist_std'] / df['dist']
df[f'molecule_atom_index_1_dist_mean'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('mean')
df[f'molecule_atom_index_1_dist_mean_diff'] = df[f'molecule_atom_index_1_dist_mean'] - df['dist']
df[f'molecule_atom_index_1_dist_mean_div'] = df[f'molecule_atom_index_1_dist_mean'] / df['dist']
df[f'molecule_atom_index_1_dist_max'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('max')
df[f'molecule_atom_index_1_dist_max_diff'] = df[f'molecule_atom_index_1_dist_max'] - df['dist']
df[f'molecule_atom_index_1_dist_max_div'] = df[f'molecule_atom_index_1_dist_max'] / df['dist']
df[f'molecule_atom_index_1_dist_min'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('min')
df[f'molecule_atom_index_1_dist_min_diff'] = df[f'molecule_atom_index_1_dist_min'] - df['dist']
df[f'molecule_atom_index_1_dist_min_div'] = df[f'molecule_atom_index_1_dist_min'] / df['dist']
df[f'molecule_atom_index_1_dist_std'] = df.groupby(['molecule_name', 'atom_index_1'])['dist'].transform('std')
df[f'molecule_atom_index_1_dist_std_diff'] = df[f'molecule_atom_index_1_dist_std'] - df['dist']
df[f'molecule_atom_index_1_dist_std_div'] = df[f'molecule_atom_index_1_dist_std'] / df['dist']
df[f'molecule_atom_1_dist_mean'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('mean')
df[f'molecule_atom_1_dist_min'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('min')
df[f'molecule_atom_1_dist_min_diff'] = df[f'molecule_atom_1_dist_min'] - df['dist']
df[f'molecule_atom_1_dist_min_div'] = df[f'molecule_atom_1_dist_min'] / df['dist']
df[f'molecule_atom_1_dist_std'] = df.groupby(['molecule_name', 'atom_1'])['dist'].transform('std')
df[f'molecule_atom_1_dist_std_diff'] = df[f'molecule_atom_1_dist_std'] - df['dist']
df[f'molecule_type_0_dist_std'] = df.groupby(['molecule_name', 'type_0'])['dist'].transform('std')
df[f'molecule_type_0_dist_std_diff'] = df[f'molecule_type_0_dist_std'] - df['dist']
df[f'molecule_type_dist_mean'] = df.groupby(['molecule_name', 'type'])['dist'].transform('mean')
df[f'molecule_type_dist_mean_diff'] = df[f'molecule_type_dist_mean'] - df['dist']
df[f'molecule_type_dist_mean_div'] = df[f'molecule_type_dist_mean'] / df['dist']
df[f'molecule_type_dist_max'] = df.groupby(['molecule_name', 'type'])['dist'].transform('max')
df[f'molecule_type_dist_min'] = df.groupby(['molecule_name', 'type'])['dist'].transform('min')
df[f'molecule_type_dist_std'] = df.groupby(['molecule_name', 'type'])['dist'].transform('std')
df[f'molecule_type_dist_std_diff'] = df[f'molecule_type_dist_std'] - df['dist']
df = reduce_mem_usage(df)
return df
train = create_features(train)
test = create_features(test)
def map_atom_info(df_1,df_2, atom_idx):
df = pd.merge(df_1, df_2, how = 'left',
left_on = ['molecule_name', f'atom_index_{atom_idx}'],
right_on = ['molecule_name', 'atom_index'])
df = df.drop('atom_index', axis=1)
return df
def create_closest(df_train):
#I apologize for my poor coding skill. Please make the better one.
df_temp=df_train.loc[:,["molecule_name","atom_index_0","atom_index_1","dist","x_0","y_0","z_0","x_1","y_1","z_1"]].copy()
df_temp_=df_temp.copy()
df_temp_= df_temp_.rename(columns={'atom_index_0': 'atom_index_1',
'atom_index_1': 'atom_index_0',
'x_0': 'x_1',
'y_0': 'y_1',
'z_0': 'z_1',
'x_1': 'x_0',
'y_1': 'y_0',
'z_1': 'z_0'})
df_temp=pd.concat(objs=[df_temp,df_temp_],axis=0)
df_temp["min_distance"]=df_temp.groupby(['molecule_name', 'atom_index_0'])['dist'].transform('min')
df_temp= df_temp[df_temp["min_distance"]==df_temp["dist"]]
df_temp=df_temp.drop(['x_0','y_0','z_0','min_distance'], axis=1)
df_temp= df_temp.rename(columns={'atom_index_0': 'atom_index',
'atom_index_1': 'atom_index_closest',
'distance': 'distance_closest',
'x_1': 'x_closest',
'y_1': 'y_closest',
'z_1': 'z_closest'})
for atom_idx in [0,1]:
df_train = map_atom_info(df_train,df_temp, atom_idx)
df_train = df_train.rename(columns={'atom_index_closest': f'atom_index_closest_{atom_idx}',
'distance_closest': f'distance_closest_{atom_idx}',
'x_closest': f'x_closest_{atom_idx}',
'y_closest': f'y_closest_{atom_idx}',
'z_closest': f'z_closest_{atom_idx}'})
return df_train
#dtrain = create_closest(train)
#dtest = create_closest(test)
#print('dtrain size',dtrain.shape)
#print('dtest size',dtest.shape)
```
### Cosine Angles Calculation
```
def add_cos_features(df):
df["distance_0"]=((df['x_0']-df['x_closest_0'])**2+(df['y_0']-df['y_closest_0'])**2+(df['z_0']-df['z_closest_0'])**2)**(1/2)
df["distance_1"]=((df['x_1']-df['x_closest_1'])**2+(df['y_1']-df['y_closest_1'])**2+(df['z_1']-df['z_closest_1'])**2)**(1/2)
df["vec_0_x"]=(df['x_0']-df['x_closest_0'])/df["distance_0"]
df["vec_0_y"]=(df['y_0']-df['y_closest_0'])/df["distance_0"]
df["vec_0_z"]=(df['z_0']-df['z_closest_0'])/df["distance_0"]
df["vec_1_x"]=(df['x_1']-df['x_closest_1'])/df["distance_1"]
df["vec_1_y"]=(df['y_1']-df['y_closest_1'])/df["distance_1"]
df["vec_1_z"]=(df['z_1']-df['z_closest_1'])/df["distance_1"]
df["vec_x"]=(df['x_1']-df['x_0'])/df["dist"]
df["vec_y"]=(df['y_1']-df['y_0'])/df["dist"]
df["vec_z"]=(df['z_1']-df['z_0'])/df["dist"]
df["cos_0_1"]=df["vec_0_x"]*df["vec_1_x"]+df["vec_0_y"]*df["vec_1_y"]+df["vec_0_z"]*df["vec_1_z"]
df["cos_0"]=df["vec_0_x"]*df["vec_x"]+df["vec_0_y"]*df["vec_y"]+df["vec_0_z"]*df["vec_z"]
df["cos_1"]=df["vec_1_x"]*df["vec_x"]+df["vec_1_y"]*df["vec_y"]+df["vec_1_z"]*df["vec_z"]
df=df.drop(['vec_0_x','vec_0_y','vec_0_z','vec_1_x','vec_1_y','vec_1_z','vec_x','vec_y','vec_z'], axis=1)
return df
#train = add_cos_features(train)
#test = add_cos_features(test)
#print('train size',train.shape)
#print('test size',test.shape)
```
Drop `molecule_name` and encode `atom_0`, `atom_1` and `type_0`.<br>
**@TODO:** Try other encoders
```
del_cols_list = ['id','molecule_name','sd','pso','dso']
def del_cols(df, cols):
del_cols_list_ = [l for l in del_cols_list if l in df]
df = df.drop(del_cols_list_,axis=1)
return df
train = del_cols(train,del_cols_list)
test = del_cols(test,del_cols_list)
def encode_categoric_single(df):
lbl = LabelEncoder()
cat_cols=[]
try:
cat_cols = df.describe(include=['O']).columns.tolist()
for cat in cat_cols:
df[cat] = lbl.fit_transform(list(df[cat].values))
except Exception as e:
print('error: ', str(e) )
return df
def encode_categoric(dtrain,dtest):
lbl = LabelEncoder()
objs_n = len(dtrain)
dfmerge = pd.concat(objs=[dtrain,dtest],axis=0)
cat_cols=[]
try:
cat_cols = dfmerge.describe(include=['O']).columns.tolist()
for cat in cat_cols:
dfmerge[cat] = lbl.fit_transform(list(dfmerge[cat].values))
except Exception as e:
print('error: ', str(e) )
dtrain = dfmerge[:objs_n]
dtest = dfmerge[objs_n:]
return dtrain,dtest
train = encode_categoric_single(train)
test = encode_categoric_single(test)
y_fc = train['fc']
X = train.drop(['scalar_coupling_constant','fc'],axis=1)
y = train['scalar_coupling_constant']
X_test = test.copy()
print('X size',X.shape)
print('X_test size',X_test.shape)
print('dtest size',test.shape)
print('y_fc size',y_fc.shape)
del train, test
gc.collect()
good_columns = ['bond_lengths_mean_y',
'molecule_atom_index_0_dist_max',
'bond_lengths_mean_x',
'molecule_atom_index_0_dist_mean',
'molecule_atom_index_0_dist_std',
'molecule_couples',
'molecule_atom_index_0_y_1_std',
'molecule_dist_mean',
'molecule_dist_max',
'dist_y',
'molecule_atom_index_0_z_1_std',
'molecule_atom_index_1_dist_max',
'molecule_atom_index_1_dist_min',
'molecule_atom_index_0_x_1_std',
'molecule_atom_index_1_dist_std',
'molecule_atom_index_0_y_1_mean_div',
'y_0',
'molecule_atom_index_1_dist_mean',
'molecule_atom_1_dist_mean',
'x_0',
'dist_x',
'molecule_type_dist_std',
'dist_z',
'molecule_atom_index_1_dist_std_diff',
'molecule_type_dist_mean_diff',
'molecule_atom_index_0_dist_max_div',
'molecule_atom_1_dist_std',
'molecule_type_0_dist_std',
'z_0',
'molecule_type_dist_std_diff',
'molecule_atom_index_0_y_1_mean_diff',
'molecule_atom_index_0_dist_std_diff',
'molecule_atom_index_0_dist_mean_div',
'molecule_atom_index_0_dist_max_diff',
'x_1',
'molecule_type_dist_max',
'molecule_atom_index_0_dist_std_div',
'molecule_atom_index_0_dist_mean_diff',
'molecule_atom_1_dist_std_diff',
'molecule_atom_index_0_y_1_max_diff',
'z_1',
'molecule_atom_index_0_y_1_max',
'molecule_atom_index_0_y_1_mean',
'y_1',
'molecule_type_0_dist_std_diff',
'molecule_dist_min',
'molecule_atom_index_1_dist_std_div',
'molecule_atom_1_dist_min',
'molecule_atom_index_1_dist_max_diff','type']
X = X[good_columns].copy()
X_test = X_test[good_columns].copy()
```
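As one candidate for the @TODO above, here is a hedged sketch of frequency encoding on toy data (the column values are illustrative): each category is replaced by its relative frequency, which, unlike label encoding, carries information about how common the category is.

```python
import pandas as pd

# Frequency encoding: map each category to its share of the column
df = pd.DataFrame({'type': ['1JHC', '2JHH', '1JHC', '3JHC', '1JHC']})
freq = df['type'].value_counts(normalize=True)
df['type_freq'] = df['type'].map(freq)
print(df['type_freq'].tolist())  # [0.6, 0.2, 0.6, 0.2, 0.6]
```

Applied to `train` and `test` jointly (as `encode_categoric` does), this would avoid assigning arbitrary ordinal ranks to the coupling types.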
<a id="id5"></a> <br>
# **5. Model**
```
n_fold = 3
folds = KFold(n_splits=n_fold, shuffle=True, random_state=11)
```
## Create out-of-fold feature
```
params = {'num_leaves': 50,
'min_child_samples': 79,
'min_data_in_leaf' : 100,
'objective': 'regression',
'max_depth': 9,
'learning_rate': 0.2,
"boosting_type": "gbdt",
"subsample_freq": 1,
"subsample": 0.9,
"bagging_seed": 11,
"metric": 'mae',
"verbosity": -1,
'reg_alpha': 0.1,
'reg_lambda': 0.3,
'colsample_bytree': 1.0
}
result_dict_lgb_oof = train_model_regression(X=X, X_test=X_test, y=y_fc, params=params, folds=folds, model_type='lgb', eval_metric='group_mae', plot_feature_importance=False,
verbose=500, early_stopping_rounds=200, n_estimators=n_estimators_default)
X['oof_fc'] = result_dict_lgb_oof['oof']
X_test['oof_fc'] = result_dict_lgb_oof['prediction']
good_columns = ['oof_fc',
'bond_lengths_mean_y',
'molecule_atom_index_0_dist_max',
'bond_lengths_mean_x',
'molecule_atom_index_0_dist_mean',
'molecule_atom_index_0_dist_std',
'molecule_couples',
'molecule_atom_index_0_y_1_std',
'molecule_dist_mean',
'molecule_dist_max',
'dist_y',
'molecule_atom_index_0_z_1_std',
'molecule_atom_index_1_dist_max',
'molecule_atom_index_1_dist_min',
'molecule_atom_index_0_x_1_std',
'molecule_atom_index_1_dist_std',
'molecule_atom_index_0_y_1_mean_div',
'y_0',
'molecule_atom_index_1_dist_mean',
'molecule_atom_1_dist_mean',
'x_0',
'dist_x',
'molecule_type_dist_std',
'dist_z',
'molecule_atom_index_1_dist_std_diff',
'molecule_type_dist_mean_diff',
'molecule_atom_index_0_dist_max_div',
'molecule_atom_1_dist_std',
'molecule_type_0_dist_std',
'z_0',
'molecule_type_dist_std_diff',
'molecule_atom_index_0_y_1_mean_diff',
'molecule_atom_index_0_dist_std_diff',
'molecule_atom_index_0_dist_mean_div',
'molecule_atom_index_0_dist_max_diff',
'x_1',
'molecule_type_dist_max',
'molecule_atom_index_0_dist_std_div',
'molecule_atom_index_0_dist_mean_diff',
'molecule_atom_1_dist_std_diff',
'molecule_atom_index_0_y_1_max_diff',
'z_1',
'molecule_atom_index_0_y_1_max',
'molecule_atom_index_0_y_1_mean',
'y_1',
'molecule_type_0_dist_std_diff',
'molecule_dist_min',
'molecule_atom_index_1_dist_std_div',
'molecule_atom_1_dist_min',
'molecule_atom_index_1_dist_max_diff','type']
X = X[good_columns].copy()
X_test = X_test[good_columns].copy()
def create_bunch_of_features(dtrain,dtest,cat_features):
n_new_features = 0
train_objs_num = len(dtrain)
df_merge = pd.concat(objs=[dtrain, dtest], axis=0)
for feature in cat_features:
#Log Transform
df_merge[feature+'_log'] = np.log (1 + df_merge[feature])
n_new_features = n_new_features +1
dtrain = df_merge[:train_objs_num]
dtest = df_merge[train_objs_num:]
del df_merge
gc.collect()
print('Features Created: {} \nTotal Features {}'.format(n_new_features,len(dtrain.columns)))
return dtrain,dtest
#features = list(X.columns)
#X, X_test = create_bunch_of_features(X,X_test,features)
```
# Checking Best Features for the Final Model
```
params = {'num_leaves': 128,
'min_child_samples': 79,
'objective': 'regression',
'max_depth': 9,
'learning_rate': 0.2,
"boosting_type": "gbdt",
"subsample_freq": 1,
"subsample": 0.9,
"bagging_seed": 11,
"metric": 'mae',
"verbosity": -1,
'reg_alpha': 0.1,
'reg_lambda': 0.3,
'colsample_bytree': 1.0
}
#result_dict_lgb2 = train_model_regression(X=X, X_test=X_test, y=y, params=params, folds=folds, model_type='lgb', eval_metric='group_mae', plot_feature_importance=True,
# verbose=500, early_stopping_rounds=200, n_estimators=n_estimators_default)
#Best Features?
'''
feature_importance = result_dict_lgb2['feature_importance']
best_features = feature_importance[['feature','importance']].groupby(['feature']).mean().sort_values(
by='importance',ascending=False).iloc[:50,0:0].index.tolist()
best_features'''
```
<a id="id6"></a> <br>
# **6. Final Model**
## Training models for each type
```
X_short = pd.DataFrame({'ind': list(X.index), 'type': X['type'].values, 'oof': [0] * len(X), 'target': y.values})
X_short_test = pd.DataFrame({'ind': list(X_test.index), 'type': X_test['type'].values, 'prediction': [0] * len(X_test)})
for t in X['type'].unique():
print(f'Training of type {t}')
X_t = X.loc[X['type'] == t]
X_test_t = X_test.loc[X_test['type'] == t]
y_t = X_short.loc[X_short['type'] == t, 'target']
result_dict_lgb3 = train_model_regression(X=X_t, X_test=X_test_t, y=y_t, params=params, folds=folds, model_type='lgb', eval_metric='group_mae', plot_feature_importance=False,
verbose=500, early_stopping_rounds=200, n_estimators=n_estimators_default)
X_short.loc[X_short['type'] == t, 'oof'] = result_dict_lgb3['oof']
X_short_test.loc[X_short_test['type'] == t, 'prediction'] = result_dict_lgb3['prediction']
```
<a id="id7"></a> <br>
# **7. Submission**
```
#Training models for type
sub['scalar_coupling_constant'] = X_short_test['prediction']
sub.to_csv('submission_type.csv', index=False)
sub.head()
```
<a id="ref"></a> <br>
# **8. References**
[1] OOF Model: https://www.kaggle.com/adarshchavakula/out-of-fold-oof-model-cross-validation<br>
[2] Using Meta Features: https://www.kaggle.com/artgor/using-meta-features-to-improve-model<br>
[3] Lot of Features: https://towardsdatascience.com/understanding-feature-engineering-part-1-continuous-numeric-data-da4e47099a7b <br>
[4] Angle Feature: https://www.kaggle.com/kmat2019/effective-feature <br>
[5] Recovering bonds from structure: https://www.kaggle.com/aekoch95/bonds-from-structure-data <br>
<h3 style="color:red">If this kernel helps you, up vote to keep me motivated 😁<br>Thanks!</h3>
```
import pandas as pd
import numpy as np
import os
from matplotlib.pyplot import *
from IPython.display import display, HTML
import glob
import scanpy as sc
import seaborn as sns
import scipy.stats
%matplotlib inline
file = '/nfs/leia/research/stegle/dseaton/hipsci/singlecell_neuroseq/data/ipsc_singlecell_analysis/sarkar2019_yoruba_ipsc/version0/sarkar2019_yoruba_ipsc.scanpy.dimreduction.harmonyPCA.clustered.h5'
adata_clustered = sc.read(file)
file = '/nfs/leia/research/stegle/dseaton/hipsci/singlecell_neuroseq/data/ipsc_singlecell_analysis/sarkar2019_yoruba_ipsc/version0/sarkar2019_yoruba_ipsc.scanpy.h5'
adatafull = sc.read(file)
in_dir = os.path.dirname(file)
adatafull.obs['cluster_id'] = adata_clustered.obs['louvain'].astype(str)
adatafull.obsm['X_umap'] = adata_clustered.obsm['X_umap']
adatafull.obs['day'] = 'day0'
adatafull.obs['donor_long_id'] = adatafull.obs['chip_id']
adatafull.obs.head()
#subsample
fraction = 1.0
adata = sc.pp.subsample(adatafull, fraction, copy=True)
adata.raw = adata
fig_format = 'png'
# fig_format = 'pdf'
sc.set_figure_params(dpi_save=200,format=fig_format)
#rcParams['figure.figsize'] = 5,4
rcParams['figure.figsize'] = 5,4
plotting_fcn = sc.pl.umap
plotting_fcn(adata, color='cluster_id',size=10)
adata.var
# gene_list = ['NANOG','SOX2','POU5F1','UTF1','SP8']
# ensembl gene ids correspoinding
# gene_list = ['ENSG00000111704','ENSG00000181449','ENSG00000204531','ENSG00000171794','ENSG00000164651']
gene_list = ['ENSG00000111704','ENSG00000181449','ENSG00000204531','ENSG00000171794','ENSG00000166863']
sc.pl.stacked_violin(adata, gene_list, groupby='cluster_id', figsize=(5,4))
df = adata.obs.groupby(['donor_long_id','experiment','cluster_id'])[['day']].count().fillna(0.0).rename(columns={'day':'count'})
total_counts = adata.obs.groupby(['donor_long_id','experiment'])[['day']].count().rename(columns={'day':'total_count'})
df = df.reset_index()
#.join(donor_total_counts)
df['f_cells'] = df.apply(lambda x: x['count']/total_counts.loc[(x['donor_long_id'],x['experiment']),'total_count'], axis=1)
df = df.dropna()
df.head()
mydir = "/hps/nobackup/stegle/users/acuomo/all_scripts/sc_neuroseq/iPSC_scanpy/"
filename = mydir + 'Sarkar_cluster_cell_fractions_by_donor_experiment.csv'
df.to_csv(filename)
sc.tl.rank_genes_groups(adata, groupby='cluster_id', n_genes=1e6)
# group_names = pval_df.columns
group_names = [str(x) for x in range(4)]
df_list = []
for group_name in group_names:
column_names = ['names','pvals','pvals_adj','logfoldchanges','scores']
data = [pd.DataFrame(adata.uns['rank_genes_groups'][col])[group_name] for col in column_names]
temp_df = pd.DataFrame(data, index=column_names).transpose()
temp_df['cluster_id'] = group_name
df_list.append(temp_df)
diff_expression_df = pd.concat(df_list)
diff_expression_df.head()
diff_exp_file = mydir + 'Sarkar2019' + '.cluster_expression_markers.tsv'
diff_expression_df.to_csv(diff_exp_file, sep='\t', index=False)
diff_expression_df.query('cluster_id=="0"').to_csv(diff_exp_file.replace('.tsv','.cluster0.tsv'), sep='\t', index=False)
diff_expression_df.query('cluster_id=="1"').to_csv(diff_exp_file.replace('.tsv','.cluster1.tsv'), sep='\t', index=False)
diff_expression_df.query('cluster_id=="2"').to_csv(diff_exp_file.replace('.tsv','.cluster2.tsv'), sep='\t', index=False)
diff_expression_df.query('cluster_id=="3"').to_csv(diff_exp_file.replace('.tsv','.cluster3.tsv'), sep='\t', index=False)
```
<a href="https://colab.research.google.com/github/hnishi/jupyterbook-hnishi/blob/colab-dev/pca.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Principal Component Analysis (PCA)
## Overview
- Principal component analysis is an unsupervised linear transformation technique
- It rotates the dataset's coordinate axes toward the directions of maximum variance, creating a new feature subspace with the same or a lower number of dimensions than the original
- Main task
  - Dimensionality reduction
- Dimensionality reduction serves the following purposes
  - Feature extraction
  - Data visualization
- Benefits of dimensionality reduction
  - Lower computational cost (runtime, memory usage)
  - Keeps the information lost by dropping features as small as possible
  - Simplifies the model (fewer parameters), which helps prevent overfitting
  - Projects the data into a space humans can understand (a very high-dimensional space can be brought down to the familiar 2 or 3 dimensions)
## Example applications
- Dimensionality reduction and visualization of the conformational space of protein 3D-structure models
- The 3D structure of an all-atom protein model can be represented by the coordinates of the atoms it contains (a vector of dimension number-of-atoms × 3 (x, y, z))
Below is one example of a model used in molecular simulations of proteins.
(The ribbon models in purple and orange are the protein, surrounded by water and ions.)
(In this case, 3547 atoms --> 10641 dimensions)
<img src="https://github.com/hnishi/hnishi_da_handson/blob/master/images/cdr-h3-pbc.png?raw=true" width="50%">
Principal component analysis can project this conformational space onto a 2-dimensional space.
Below is a figure of the free energy computed over that projection.

Each point in the 2D space represents one 3D structure.
In other words, this example reduces a space that originally had 10641 dimensions down to 2 dimensions.
Ref) [Nishigami, H., Kamiya, N., & Nakamura, H. (2016). Revisiting antibody modeling assessment for CDR-H3 loop. Protein Engineering, Design and Selection, 29(11), 477-484.](https://academic.oup.com/peds/article/29/11/477/2462452)
## How PCA's coordinate transformation looks
Below is an example of the coordinate transformation PCA performs.
$x_1$ , $x_2$ are the dataset's original coordinate axes, and
PC1, PC2 are the new axes obtained after the transformation: principal components 1 and 2 (Principal Components).
<img src="https://github.com/rasbt/python-machine-learning-book-2nd-edition/blob/master/code/ch05/images/05_01.png?raw=true" width="50%">
- PCA finds the directions of maximum variance in high-dimensional data and transforms the coordinates accordingly (in other words, the transformation makes every principal component uncorrelated with, i.e. orthogonal to, the other principal components)
- The first principal component (PC1) has the largest variance
## Main steps of principal component analysis
To reduce d-dimensional data to k dimensions:
1. Standardize the d-dimensional data (only when the features are on different scales)
1. Build the covariance matrix
1. Compute the eigenvalues and eigenvectors of the covariance matrix
1. Sort the eigenvalues in descending order and rank the eigenvectors accordingly
1. Select the k eigenvectors corresponding to the k largest eigenvalues (k ≦ d)
1. Build the projection (transformation) matrix W from the k eigenvectors
1. Apply the projection (transformation) matrix to the d-dimensional input dataset to obtain the new k-dimensional feature subspace
---
Solving the eigenvalue problem yields linearly independent basis vectors.
See a linear algebra textbook for the details (they are not covered in depth here).
Reference)
https://dora.bk.tsukuba.ac.jp/~takeuchi/?%E7%B7%9A%E5%BD%A2%E4%BB%A3%E6%95%B0II%2F%E5%9B%BA%E6%9C%89%E5%80%A4%E5%95%8F%E9%A1%8C%E3%83%BB%E5%9B%BA%E6%9C%89%E7%A9%BA%E9%96%93%E3%83%BB%E3%82%B9%E3%83%9A%E3%82%AF%E3%83%88%E3%83%AB%E5%88%86%E8%A7%A3
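As a compact illustration, the steps above can be sketched directly with NumPy. This is a minimal sketch, not the Wine walkthrough that follows; the function name `pca_transform` and the random test data are our own.

```
import numpy as np

def pca_transform(X, k):
    """Reduce (n_samples, d) data X to k dimensions via the steps above."""
    # 1. Standardize each feature (zero mean, unit variance)
    X_std = (X - X.mean(axis=0)) / X.std(axis=0)
    # 2. Covariance matrix (d x d)
    cov = np.cov(X_std.T)
    # 3. Eigenvalues and eigenvectors (eigh: symmetric input, real output)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # 4.-5. Rank eigenvectors by descending eigenvalue, keep the top k
    order = np.argsort(eigvals)[::-1][:k]
    # 6. Projection matrix W (d x k)
    W = eigvecs[:, order]
    # 7. Map the d-dimensional input into the k-dimensional subspace
    return X_std @ W

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
print(pca_transform(X, 2).shape)  # (100, 2)
```

The projected columns come out mutually uncorrelated, which is exactly what the heatmap later in this notebook confirms.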
## Running PCA in Python
Below, we run PCA step by step in Python.
We then look at the simpler and more efficient implementation available in the scikit-learn library.
### Dataset
- We use the open-source [Wine](https://archive.ics.uci.edu/ml/datasets/Wine) dataset.
- It consists of 178 wine samples and 13 feature columns describing their chemical properties.
- Each sample is labeled with class 1, 2, or 3,
representing different grape varieties grown in the same region of Italy
(PCA is unsupervised learning, so the labels are not used during fitting).
```
from IPython.display import Image
%matplotlib inline
import pandas as pd
# df_wine = pd.read_csv('https://archive.ics.uci.edu/ml/'
# 'machine-learning-databases/wine/wine.data',
# header=None)
# if the Wine dataset is temporarily unavailable from the
# UCI machine learning repository, un-comment the following line
# of code to load the dataset from a local path:
df_wine = pd.read_csv('https://github.com/rasbt/python-machine-learning-book-2nd-edition'
'/raw/master/code/ch05/wine.data',
header=None)
df_wine.columns = ['Class label', 'Alcohol', 'Malic acid', 'Ash',
'Alcalinity of ash', 'Magnesium', 'Total phenols',
'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins',
'Color intensity', 'Hue',
'OD280/OD315 of diluted wines', 'Proline']
df_wine.head()
```
The first 5 rows of the Wine dataset are shown above.
```
for i_label in df_wine['Class label'].unique():
print('label:', i_label)
print('shape:', df_wine[df_wine['Class label'] == i_label].shape)
```
The class counts are roughly balanced.
Next, let's look at the distribution of the data per class.
```
import numpy as np
import matplotlib.pyplot as plt
for i_feature in df_wine.columns:
if i_feature == 'Class label': continue
print('feature: ' + str(i_feature))
# Plot a histogram for each class
plt.hist(df_wine[df_wine['Class label'] == 1][i_feature], alpha=0.5, bins=20, label="1")
plt.hist(df_wine[df_wine['Class label'] == 2][i_feature], alpha=0.3, bins=20, label="2", color='r')
plt.hist(df_wine[df_wine['Class label'] == 3][i_feature], alpha=0.1, bins=20, label="3", color='g')
plt.legend(loc="upper left", fontsize=13) # show the legend
plt.show()
```
Split the data into 70% training and 30% test subsets.
```
from sklearn.model_selection import train_test_split
X, y = df_wine.iloc[:, 1:].values, df_wine.iloc[:, 0].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3,
stratify=y,
random_state=0)
```
Standardize the data.
```
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train_std = sc.fit_transform(X_train) # standardize using the training set's mean and standard deviation
X_test_std = sc.transform(X_test) # standardize using the *training* set's mean and standard deviation
# Check that every feature now lies roughly within the range -1 to +1.
print('standardize train', X_train_std[0:2])
print('standardize test', X_test_std[0:2])
```
---
**Note**
When standardizing the test data, do not use the test data's own standard deviation and mean (use those of the training data).
Also, record the standard deviation and mean computed here, since they will be reused to standardize future unseen data.
(This notebook is self-contained, so there is no need to save them to an external file here.)
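If one did want to record the training statistics, the fitted scaler can be persisted and reloaded with the standard library's `pickle`. This is a sketch; the file name `scaler.pkl` and the toy data are our own, not part of the Wine workflow.

```
import pickle
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
sc = StandardScaler().fit(X_train)  # records mean_ and scale_

# Persist the fitted scaler so the same statistics can be reused later
with open('scaler.pkl', 'wb') as f:
    pickle.dump(sc, f)

# Later / elsewhere: standardize unseen data with the *training* statistics
with open('scaler.pkl', 'rb') as f:
    sc_loaded = pickle.load(f)
X_new_std = sc_loaded.transform(np.array([[2.0, 20.0]]))
print(X_new_std)  # the training-set mean maps to exactly 0
```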
- Build the covariance matrix
- Solve the eigenvalue problem to obtain the eigenvalues and eigenvectors
The eigenvalue problem is the problem of finding the eigenvectors $v$ and the scalar eigenvalues $\lambda$ that satisfy the condition below
(see a linear algebra textbook for details).
$$\Sigma v=\lambda v$$
$\Sigma$ is the covariance matrix (note that it is not the summation symbol).
For the covariance matrix, see [the previous material](https://github.com/hnishi/hnishi_da_handson/blob/master/da_handson_basic_statistic_values.ipynb).
```
import numpy as np
import seaborn as sns
cov_mat = np.cov(X_train_std.T)
# Heatmap of the covariance matrix
df = pd.DataFrame(cov_mat, index=df_wine.columns[1:], columns=df_wine.columns[1:])
ax = sns.heatmap(df, cmap="YlGnBu")
# Solve the eigenvalue problem (eigendecomposition)
eigen_vals, eigen_vecs = np.linalg.eigh(cov_mat)
print('\nEigenvalues \n%s' % eigen_vals)
print('\nShape of eigen vectors\n', eigen_vecs.shape)
```
**Note**:
NumPy has two functions for eigendecomposition (also called eigenvalue decomposition):
- [`numpy.linalg.eig`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eig.html)
- [`numpy.linalg.eigh`](http://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.eigh.html)
`numpy.linalg.eig` handles symmetric and non-symmetric square matrices; it may return complex eigenvalues.
`numpy.linalg.eigh` handles Hermitian matrices (complex matrices whose conjugate transpose equals the matrix itself); it always returns real eigenvalues.
A covariance matrix is a symmetric square matrix, which is also a Hermitian matrix with zero imaginary part.
For operations on symmetric matrices, `numpy.linalg.eigh` is reportedly more numerically stable.
Ref) *Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017.
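A quick check of the difference, using a toy symmetric matrix of our own:

```
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric, covariance-like

# eigh: for symmetric/Hermitian input; real eigenvalues, ascending order
vals_h, vecs_h = np.linalg.eigh(A)
print(vals_h)  # [1. 3.]

# eig: general square matrices; dtype may be complex
vals, vecs = np.linalg.eig(A)
print(np.sort(vals.real))  # same spectrum: [1. 3.]
```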
## Total and explained variance
- The magnitude of an eigenvalue corresponds to the amount of information (variance) contained in the data
- The explained variance ratio (also called the contribution ratio/proportion) of the eigenvalue $\lambda_j$ for the j-th principal component (PCj) is defined as:
$$\dfrac {\lambda _{j}}{\sum ^{d}_{j=1}\lambda _{j}}$$
where $\lambda_j$ is the j-th eigenvalue and d is the total number of eigenvalues (the original number of features/dimensions).
The explained variance ratio shows how much of the information carried by the full feature set a given principal component captures.
Below, we plot the explained variance ratios and their cumulative sum.
```
# Sum of the eigenvalues
tot = sum(eigen_vals)
# Build the array of explained variance ratios
var_exp = [(i / tot) for i in sorted(eigen_vals, reverse=True)]
# Cumulative sum of the explained variance ratios
cum_var_exp = np.cumsum(var_exp)
import matplotlib.pyplot as plt
plt.bar(range(1, 14), var_exp, alpha=0.5, align='center',
label='individual explained variance')
plt.step(range(1, 14), cum_var_exp, where='mid',
label='cumulative explained variance')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal component index')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('images/05_02.png', dpi=300)
plt.show()
```
The graph shows the following.
- The first principal component alone accounts for roughly 40% of the total variance
- Using just the first two principal components already explains roughly 60% of the original feature set
## Factor loading
To interpret what each principal-component axis means, look at the factor loadings.
A factor loading is the correlation between the values along a principal-component axis and the values along an original axis.
The larger this correlation, the more that feature contributes to that principal component.
Ref: https://statistics.co.jp/reference/software_R/statR_9_principal.pdf
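As a sketch of the definition (with synthetic data of our own, not the Wine dataset): the loading of a feature on PC1 is its correlation with the PC1 scores, so two strongly correlated features that dominate PC1 should both show loadings near 1.

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)  # features 0 and 1 correlate

# Standardize and project onto the first principal component
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(X_std.T))
pc1 = X_std @ eigvecs[:, np.argmax(eigvals)]

# Factor loadings: correlation of PC1 scores with each original feature
loadings = np.array([np.corrcoef(pc1, X_std[:, j])[0, 1]
                     for j in range(X.shape[1])])
print(np.round(np.abs(loadings), 2))  # features 0 and 1 near 1.0
```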
## Feature transformation
We obtain the projection (transformation) matrix and apply it to transform the features.
---
$X' = XW$
$X'$ : coordinates (matrix) after the projection (transformation)
$X$ : original coordinates (matrix)
$W$ : projection (transformation) matrix
$W$ is composed of as many eigenvectors as there are dimensions after the reduction.
$W = [v_1 v_2 ... v_k] \in \mathbb{R} ^{n\times k}$
```
# Make a list of (eigenvalue, eigenvector) tuples
eigen_pairs = [(np.abs(eigen_vals[i]), eigen_vecs[:, i])
for i in range(len(eigen_vals))]
# Sort the (eigenvalue, eigenvector) tuples from high to low
eigen_pairs.sort(key=lambda k: k[0], reverse=True)
```
### First, a 13-dimension --> 13-dimension coordinate transformation, without dimensionality reduction
$X' = XW$
$W = [v_1 v_2 ... v_{13}] \in \mathbb{R} ^{13\times 13}$
$x \in \mathbb{R} ^{13}$
$x' \in \mathbb{R} ^{13}$
```
# Build the transformation matrix w
w = eigen_pairs[0][1][:, np.newaxis]
for i in range(1, len(eigen_pairs)):
# print(i)
w = np.hstack((w, eigen_pairs[i][1][:, np.newaxis]))
w.shape
# Coordinate transformation
X_train_pca = X_train_std.dot(w)
# print(X_train_pca.shape)
cov_mat = np.cov(X_train_pca.T)
# Heatmap of the covariance matrix
df = pd.DataFrame(cov_mat)
ax = sns.heatmap(df, cmap="YlGnBu")
```
After the transformation into the principal-component space, the features have no correlation with one another (they are mutually linearly independent).
The diagonal entries are the variances, arranged in decreasing order starting from the first principal component.
### Restoring the original space from the transformed coordinates
$X = X'W^T$
$X'$ : coordinates (matrix) after the coordinate transformation
$X$ : coordinates (matrix) restored to the original space
$W^T \in \mathbb{R} ^{n\times n}$ : transposed projection (transformation) matrix
$x' \in \mathbb{R} ^{n}$
$x_{approx} \in \mathbb{R} ^{n}$
```
# Apply the projection matrix to the first sample (via the dot product)
x0 = X_train_std[0]
print('Original features:', x0)
z0 = x0.dot(w)
print('Transformed features:', z0)
x0_reconstructed = z0.dot(w.T)
print('Reconstructed features:', x0_reconstructed)
```
The features are reconstructed exactly.
### Reducing 13 dimensions --> 2 dimensions
$X' = XW$
$W = [v_1 v_2] \in \mathbb{R} ^{13\times 2}$
$x \in \mathbb{R} ^{13}$
$x' \in \mathbb{R} ^{2}$
```
w = np.hstack((eigen_pairs[0][1][:, np.newaxis],
eigen_pairs[1][1][:, np.newaxis]))
print('Matrix W:\n', w)
```
**Note**
Depending on the versions of NumPy and LAPACK, the projection matrix w may be produced with its signs flipped relative to the example above, but this is not a problem,
because the following identity holds.
For a matrix $\Sigma$ with eigenvector $v$ and eigenvalue $\lambda$,
$$\Sigma v = \lambda v,$$
then $-v$ is also an eigenvector with the same eigenvalue:
$$\Sigma \cdot (-v) = -\Sigma v = -\lambda v = \lambda \cdot (-v).$$
(Only the direction of the principal-axis vector differs.)
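This can be verified numerically with a toy matrix of our own:

```
import numpy as np

S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(S)
v, lam = eigvecs[:, -1], eigvals[-1]

# Both v and -v satisfy S v = lambda v, so a sign-flipped
# projection matrix spans the same principal axes
print(np.allclose(S @ v, lam * v))        # True
print(np.allclose(S @ (-v), lam * (-v)))  # True
```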
```
# Applying the projection matrix (dot product) to each sample gives its transformed coordinates (features).
X_train_std[0].dot(w)
```
### Plot all data projected into 2D, colored by class label
```
X_train_pca = X_train_std.dot(w)
colors = ['r', 'b', 'g']
markers = ['s', 'x', 'o']
for l, c, m in zip(np.unique(y_train), colors, markers):
plt.scatter(X_train_pca[y_train == l, 0],
X_train_pca[y_train == l, 1],
c=c, label=l, marker=m)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_03.png', dpi=300)
plt.show()
```
Along the PC1 axis the data are spread more widely than along the PC2 axis, so PC1 separates the data better.
## Restoring the original space from the reduced space
$X_{approx} = X'W^T$
$X'$ : coordinates (matrix) after the projection
$X_{approx}$ : coordinates (matrix) approximately restored to the original space
$W^T \in \mathbb{R} ^{k\times n}$ : transposed projection (transformation) matrix
$x' \in \mathbb{R} ^{k}$
$x_{approx} \in \mathbb{R} ^{n}$
When $k = n$, $X = X_{approx}$ holds (as shown above).
```
# Apply the projection matrix to the first sample (via the dot product)
x0 = X_train_std[0]
print('Original features:', x0)
z0 = x0.dot(w)
print('Transformed features:', z0)
x0_reconstructed = z0.dot(w.T)
print('Reconstructed features:', x0_reconstructed)
```
The reconstruction is not exact; the features are restored only approximately.
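The reconstruction error as a function of the number of kept components k can be sketched as follows (with synthetic data of our own): it shrinks as k grows and vanishes at k = d.

```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

eigvals, eigvecs = np.linalg.eigh(np.cov(X_std.T))
order = np.argsort(eigvals)[::-1]

def reconstruction_error(k):
    W = eigvecs[:, order[:k]]      # d x k projection matrix
    X_approx = (X_std @ W) @ W.T   # project down, then back up
    return np.mean((X_std - X_approx) ** 2)

errors = [reconstruction_error(k) for k in range(1, 6)]
print([round(e, 3) for e in errors])  # decreasing; last entry ~0
```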
## Principal component analysis in scikit-learn
The PCA we implemented above can be done easily with scikit-learn.
The implementation is shown below.
```
from sklearn.decomposition import PCA
pca = PCA()
# Run the principal component analysis
X_train_pca = pca.fit_transform(X_train_std)
# Show the explained variance ratios
pca.explained_variance_ratio_
# Plot the explained variance ratios and their cumulative sum
plt.bar(range(1, 14), pca.explained_variance_ratio_, alpha=0.5, align='center')
plt.step(range(1, 14), np.cumsum(pca.explained_variance_ratio_), where='mid')
plt.ylabel('Explained variance ratio')
plt.xlabel('Principal components')
plt.show()
# Reduce to 2 dimensions
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
# Plot in the 2D space
plt.scatter(X_train_pca[:, 0], X_train_pca[:, 1])
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.show()
```
### Inspecting the factor loadings
The factor loadings can be inspected via `pca.components_`, as shown below.
```
pd.DataFrame(pca.components_.T,index=df_wine.columns[1:],columns=['PC1','PC2']).sort_values('PC1')
```
Look for the features with the largest absolute values.
That is, the features best represented by the first principal component (PC1) are "Flavanoids" and "Total phenols".
Meanwhile, the features best represented by the second principal component (PC2) are "Color intensity" and "Alcohol".
## Logistic regression on the features reduced to 2 dimensions
```
from matplotlib.colors import ListedColormap
def plot_decision_regions(X, y, classifier, resolution=0.02):
# setup marker generator and color map
markers = ('s', 'x', 'o', '^', 'v')
colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
# plot the decision surface
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution))
Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
# plot class samples
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x=X[y == cl, 0],
y=X[y == cl, 1],
alpha=0.6,
c=cmap(idx),
edgecolor='black',
marker=markers[idx],
label=cl)
```
Training logistic regression classifier using the first 2 principal components.
```
from sklearn.linear_model import LogisticRegression
pca = PCA(n_components=2)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression(penalty='l2', C=1.0)
# lr = LogisticRegression(penalty='none')
lr = lr.fit(X_train_pca, y_train)
print(X_train_pca.shape)
print('Cumulative explained variance ratio:', sum(pca.explained_variance_ratio_))
```
### Measuring the training time
```
%timeit lr.fit(X_train_pca, y_train)
from sklearn.metrics import plot_confusion_matrix
# Accuracy
print('accuracy', lr.score(X_train_pca, y_train))
# confusion matrix
plot_confusion_matrix(lr, X_train_pca, y_train)
```
### Prediction results on the training dataset
```
plot_decision_regions(X_train_pca, y_train, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_04.png', dpi=300)
plt.show()
```
### Prediction results on the test data
```
from sklearn.metrics import plot_confusion_matrix
# Accuracy
print('accuracy', lr.score(X_test_pca, y_test))
# confusion matrix
plot_confusion_matrix(lr, X_test_pca, y_test)
plot_decision_regions(X_test_pca, y_test, classifier=lr)
plt.xlabel('PC 1')
plt.ylabel('PC 2')
plt.legend(loc='lower left')
plt.tight_layout()
# plt.savefig('images/05_05.png', dpi=300)
plt.show()
```
To keep all principal components without dimensionality reduction, set `n_components=None`.
```
pca = PCA(n_components=None)
X_train_pca = pca.fit_transform(X_train_std)
pca.explained_variance_ratio_
```
## Logistic regression on the features reduced to 3 dimensions
```
from sklearn.linear_model import LogisticRegression
k = 3
pca = PCA(n_components=3)
X_train_pca = pca.fit_transform(X_train_std)
X_test_pca = pca.transform(X_test_std)
lr = LogisticRegression(penalty='l2', C=1.0)
# lr = LogisticRegression(penalty='none')
lr = lr.fit(X_train_pca, y_train)
print(X_train_pca.shape)
print('Cumulative explained variance ratio:', sum(pca.explained_variance_ratio_))
%timeit lr.fit(X_train_pca, y_train)
from sklearn.metrics import plot_confusion_matrix
# Accuracy
print('accuracy', lr.score(X_train_pca, y_train))
# confusion matrix
plot_confusion_matrix(lr, X_train_pca, y_train)
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# ax.scatter(X_train_pca[:,0], X_train_pca[:,1], X_train_pca[:,2], c='r', marker='o')
ax.scatter(X_train_pca[:,0], X_train_pca[:,1], X_train_pca[:,2], c=y_train, marker='o')
ax.set_xlabel('PC1')
ax.set_ylabel('PC2')
ax.set_zlabel('PC3')
plt.show()
# Interactive 3D scatter plot using plotly
import plotly.express as px
df = pd.DataFrame(X_train_pca, columns=['PC1', 'PC2', 'PC3'])
df['label'] = y_train
fig = px.scatter_3d(df, x='PC1', y='PC2', z='PC3',
color='label', opacity=0.7, )
fig.show()
```
Three dimensions is the limit of what the human eye can inspect.
## Logistic regression without dimensionality reduction
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(penalty='l2', C=1.0)
# lr = LogisticRegression(penalty='none')
lr = lr.fit(X_train_std, y_train)
# Training time
%timeit lr.fit(X_train_std, y_train)
from sklearn.metrics import plot_confusion_matrix
print('Evaluation of training dataset')
# Accuracy
print('accuracy', lr.score(X_train_std, y_train))
# confusion matrix
plot_confusion_matrix(lr, X_train_std, y_train)
print('Evaluation of test dataset')
# Accuracy
print('accuracy', lr.score(X_test_std, y_test))
# confusion matrix
plot_confusion_matrix(lr, X_test_std, y_test)
```
Training on all of the original features produced higher accuracy.
Training time was slightly shorter with dimensionality reduction
(4.9 ms when training on 2 principal components vs. 5.64 ms with all original features).
In conclusion, for this task PCA should not be applied; it is better to use all the features.
Dimensionality reduction becomes effective with much larger datasets or models with many more parameters.
### Logistic regression with only 2 of the original features
```
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(penalty='l2', C=1.0)
# lr = LogisticRegression(penalty='none')
lr = lr.fit(X_train_std[:,:2], y_train)
%timeit lr.fit(X_train_std[:,:2], y_train)
from sklearn.metrics import plot_confusion_matrix
print('Evaluation of training dataset')
# Accuracy
print('accuracy', lr.score(X_train_std[:,:2], y_train))
# confusion matrix
plot_confusion_matrix(lr, X_train_std[:,:2], y_train)
print('Evaluation of test dataset')
# Accuracy
print('accuracy', lr.score(X_test_std[:,:2], y_test))
# confusion matrix
plot_confusion_matrix(lr, X_test_std[:,:2], y_test)
```
Using only 2 of the original features, accuracy drops considerably.
By comparison, the 2 principal components extracted by PCA achieve much higher accuracy.
## Summary
Principal component analysis enables the following tasks.
- Dimensionality reduction
  - Reduces the memory and disk space needed to store the data
  - Speeds up learning algorithms
- Visualization
  - Data with many features (dimensions) can be projected into an easy-to-understand space such as 2D for discussion and interpretation.
However, when using it as a preprocessing step for machine learning, keep the following in mind.
- Dimensionality reduction always loses some information
- First, try training with all the features
- Dimensionality reduction can help prevent overfitting, but try regularization before reaching for dimensionality reduction
- Only if the above still does not give the desired results, use dimensionality reduction
- For machine-learning training, the reduced dimensionality is usually chosen so that the cumulative explained variance ratio reaches 99%
Reference) [Andrew Ng's course](https://www.coursera.org/learn/machine-learning)
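In scikit-learn this rule of thumb is directly supported: passing a float in (0, 1) to `n_components` keeps the smallest number of components whose cumulative explained variance ratio reaches that fraction. A sketch with synthetic low-rank data of our own:

```
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 20 observed features, but only 5 underlying directions of variance
X = rng.normal(size=(200, 5)) @ rng.normal(size=(5, 20))
X += 0.01 * rng.normal(size=(200, 20))  # small noise

pca = PCA(n_components=0.99)  # keep components until 99% cumulative ratio
X_reduced = pca.fit_transform(X)
print(X_reduced.shape[1])  # far fewer than 20 components
print(pca.explained_variance_ratio_.sum() >= 0.99)  # True
```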
## References
1. *Python Machine Learning 2nd Edition* by [Sebastian Raschka](https://sebastianraschka.com), Packt Publishing Ltd. 2017. Code Repository: https://github.com/rasbt/python-machine-learning-book-2nd-edition
1. [Andrew Ng's course](https://www.coursera.org/learn/machine-learning)
```
%load_ext autoreload
%autoreload 2
```
# Generate images
```
from pathlib import Path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
SMALL_SIZE = 15
MEDIUM_SIZE = 20
BIGGER_SIZE = 25
plt.rc("font", size=SMALL_SIZE)
plt.rc("axes", titlesize=SMALL_SIZE)
plt.rc("axes", labelsize=MEDIUM_SIZE)
plt.rc("xtick", labelsize=SMALL_SIZE)
plt.rc("ytick", labelsize=SMALL_SIZE)
plt.rc("legend", fontsize=SMALL_SIZE)
plt.rc("figure", titlesize=BIGGER_SIZE)
DATA_PATH = Path("../thesis/img/")
```
# DTW
```
from fastdtw import fastdtw
ts_0 = np.sin(np.logspace(0, np.log10(2 * np.pi), 30))
ts_1 = np.sin(np.linspace(1, 2 * np.pi, 30))
distance, warping_path = fastdtw(ts_0, ts_1)
fig, axs = plt.subplots(2, 1, figsize=(8, 8), sharex=True)
for name, ax in zip(["Euclidean distance", "Dynamic Time Warping"], axs):
ax.plot(ts_0 + 1, "o-", linewidth=3)
ax.plot(ts_1, "o-", linewidth=3)
ax.set_yticks([])
ax.set_xticks([])
ax.set_title(name)
for x, y in zip(zip(np.arange(30), np.arange(30)), zip(ts_0 + 1, ts_1)):
axs[0].plot(x, y, "r--", linewidth=2, alpha=0.5)
for x_0, x_1 in warping_path:
axs[1].plot([x_0, x_1], [ts_0[x_0] + 1, ts_1[x_1]], "r--", linewidth=2, alpha=0.5)
plt.savefig(DATA_PATH / "dtw_vs_euclid.svg")
plt.tight_layout()
plt.show()
matrix = (ts_0.reshape(-1, 1) - ts_1) ** 2
x = [x for x, _ in warping_path]
y = [y for _, y in warping_path]
# plt.close('all')
fig = plt.figure(figsize=(8, 8))
gs = fig.add_gridspec(
2,
2,
width_ratios=(1, 8),
height_ratios=(8, 1),
left=0.1,
right=0.9,
bottom=0.1,
top=0.9,
wspace=0.01,
hspace=0.01,
)
fig.tight_layout()
ax_ts_x = fig.add_subplot(gs[0, 0])
ax_ts_y = fig.add_subplot(gs[1, 1])
ax = fig.add_subplot(gs[0, 1], sharex=ax_ts_y, sharey=ax_ts_x)
ax.set_xticks([])
ax.set_yticks([])
ax.tick_params(axis="x", labelbottom=False)
ax.tick_params(axis="y", labelleft=False)
fig.suptitle("DTW calculated optimal warping path")
im = ax.imshow(np.log1p(matrix), origin="lower", cmap="bone_r")
ax.plot(y, x, "r", linewidth=4, label="Optimal warping path")
ax.plot(
[0, 29], [0, 29], "--", linewidth=3, color="black", label="Default warping path"
)
ax.legend()
ax_ts_x.plot(ts_0 * -1, np.arange(30), linewidth=4, color="#1f77b4")
# ax_ts_x.set_yticks(np.arange(30))
ax_ts_x.set_ylim(-0.5, 29.5)
ax_ts_x.set_xlim(-1.5, 1.5)
ax_ts_x.set_xticks([])
ax_ts_y.plot(ts_1, linewidth=4, color="#ff7f0e")
# ax_ts_y.set_xticks(np.arange(30))
ax_ts_y.set_xlim(-0.5, 29.5)
ax_ts_y.set_ylim(-1.5, 1.5)
ax_ts_y.set_yticks([])
# cbar = plt.colorbar(im, ax=ax, use_gridspec=False, panchor=False)
plt.savefig(DATA_PATH / "dtw_warping_path.svg")
plt.show()
```
# TSNE
```
import mpl_toolkits.mplot3d.axes3d as p3
from sklearn.datasets import make_s_curve, make_swiss_roll
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
n_samples = 1500
X, y = make_swiss_roll(n_samples, noise=0.1)
X, y = make_s_curve(n_samples, random_state=42)
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(projection="3d")  # fig.gca(projection=...) was removed in Matplotlib 3.6
ax.view_init(20, -60)
# ax.set_title("S curve dataset", fontsize=18)
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=y)
ax.set_yticklabels([])
ax.set_xticklabels([])
ax.set_zticklabels([])
fig.tight_layout()
plt.savefig(DATA_PATH / "s_dataset.svg", bbox_inches=0)
plt.show()
X_pca = PCA(n_components=2, random_state=42).fit_transform(X)
X_tsne = TSNE(n_components=2, perplexity=30, init="pca", random_state=42).fit_transform(
X
)
fig = plt.figure(figsize=(10, 10))
# plt.title("PCA transformation", fontsize=18)
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
plt.xticks([])
plt.yticks([])
plt.savefig(DATA_PATH / "s_dataset_pca.svg")
plt.show()
fig = plt.figure(figsize=(10, 10))
# plt.title("t-SNE transformation", fontsize=18)
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y)
plt.xticks([])
plt.yticks([])
plt.savefig(DATA_PATH / "s_dataset_tsne.svg")
plt.show()
```
# Datashader
```
import datashader as ds
import datashader.transfer_functions as tf
import matplotlib.patches as mpatches
from lttb import downsample
np.random.seed(42)
signal = np.random.normal(0, 10, size=10 ** 6).cumsum() + np.sin(
np.linspace(0, 100 * np.pi, 10 ** 6)
) * np.random.normal(0, 1, size=10 ** 6)
s_frame = pd.DataFrame(signal, columns=["signal"]).reset_index()
x = 1500
y = 500
cvs = ds.Canvas(plot_height=y, plot_width=x)
line = cvs.line(s_frame, "index", "signal")
img = tf.shade(line).to_pil()
trans = downsample(s_frame.values, 100)
trans[:, 0] /= trans[:, 0].max()
trans[:, 0] *= x
trans[:, 1] *= -1
trans[:, 1] -= trans[:, 1].min()
trans[:, 1] /= trans[:, 1].max()
trans[:, 1] *= y
fig, ax = plt.subplots(figsize=(x / 60, y / 60))
plt.imshow(img, origin="upper")
plt.plot(*trans.T, "r", alpha=0.6, linewidth=2)
plt.legend(
handles=[
mpatches.Patch(color="blue", label="Datashader (10^6 points)"),
mpatches.Patch(color="red", label="LTTB (10^3 points)"),
],
prop={"size": 25},
)
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.ylabel("Value", fontsize=25)
plt.xlabel("Time", fontsize=25)
plt.tight_layout()
plt.savefig(DATA_PATH / "datashader.png")
plt.show()
```
# LTTB
```
from matplotlib import cm
from matplotlib.colors import Normalize
from matplotlib.patches import Polygon
np.random.seed(42)
ns = np.random.normal(0, 1, size=26).cumsum()
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
plt.plot(ns, "-o", linewidth=2)
mapper = cm.ScalarMappable(Normalize(vmin=0, vmax=15, clip=True), cmap="autumn_r")
areas = []
for i, data in enumerate(ns[:-2], 1):
cors = [[i + ui, ns[i + ui]] for ui in range(-1, 2)]
x = [m[0] for m in cors]
y = [m[1] for m in cors]
ea = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))) * 10
areas.append(ea)
color = mapper.to_rgba(ea)
plt.plot([i], [ns[i]], "o", color=color)
ax.add_patch(
Polygon(
cors,
closed=True,
fill=True,
alpha=0.3,
color=color,
)
)
cbar = plt.colorbar(mapper, alpha=0.3)
cbar.set_label("Effective Area Size")
fig.suptitle("Effective Area of Data Points")
plt.ylabel("Value")
plt.xlabel("Time")
plt.tight_layout()
plt.savefig(DATA_PATH / "effective-area.svg")
plt.savefig(DATA_PATH / "effective-area.png")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
plt.plot(ns, "--o", linewidth=2, label="Original time series")
mapper = cm.ScalarMappable(Normalize(vmin=0, vmax=15, clip=True), cmap="autumn_r")
lotb = np.concatenate(
[[0], np.arange(1, 25, 3) + np.array(areas).reshape(-1, 3).argmax(axis=1), [25]]
)
for i, data in enumerate(ns[:-2], 1):
cors = [[i + ui, ns[i + ui]] for ui in range(-1, 2)]
x = [m[0] for m in cors]
y = [m[1] for m in cors]
ea = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1))) * 10
color = mapper.to_rgba(ea) # cm.tab10.colors[i % 5 + 1]
plt.plot([i], [ns[i]], "o", color=color)
ax.add_patch(
Polygon(
cors,
closed=True,
fill=True,
alpha=0.3,
color=color,
)
)
plt.plot(
lotb, ns[lotb], "-x", linewidth=2, color="tab:purple", label="LTOB approximation"
)
cbar = plt.colorbar(mapper, alpha=0.3)
cbar.set_label("Effective Area Size")
plt.vlines(np.linspace(0.5, 24.5, 9), ns.min(), ns.max(), "black", "--", alpha=0.5)
plt.ylabel("Value")
plt.xlabel("Time")
fig.suptitle("LTOB downsampling")
plt.legend()
plt.tight_layout()
plt.savefig(DATA_PATH / "ltob.svg")
plt.savefig(DATA_PATH / "ltob.png")
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
plt.plot(ns, "--o", linewidth=2, label="Original time series")
ds = downsample(np.vstack([np.arange(26), ns]).T, 10)
plt.plot(*ds.T, "-x", linewidth=2, label="LTTB approximation")
# plt.plot(ns, "x")
plt.vlines(np.linspace(0.5, 24.5, 9), ns.min(), ns.max(), "black", "--", alpha=0.5)
plt.ylabel("Value")
plt.xlabel("Time")
fig.suptitle("LTTB downsampling")
plt.legend()
plt.tight_layout()
plt.savefig(DATA_PATH / "lttb.svg")
plt.savefig(DATA_PATH / "lttb.png")
plt.show()
```
# Predict when statistics need to be collected
## Connect to Vantage
```
#import the teradataml package for Vantage access
from teradataml import *
import getpass
from teradataml import display
#display.print_sqlmr_query=True
from sqlalchemy.sql.expression import select, case as case_when, func
from sqlalchemy import TypeDecorator, Integer, String
import warnings
warnings.filterwarnings('ignore')
Vantage = 'tdap1627t2.labs.teradata.com'
User = 'alice'
Pass = 'alice'
print(Vantage,User)
con = create_context(Vantage, User, Pass)
```
## Get the Sentiment from the explains
```
dbqlog = DataFrame.from_table(in_schema("dbc", "dbqlogtbl")).drop("ZoneId", axis = 1)
dbqlexplain = DataFrame.from_table(in_schema("dbc", "dbqlexplaintbl")).drop("ZoneID", axis = 1)
dbqldata = dbqlog.join(other = dbqlexplain, on = ["QueryID"], lsuffix = "t1", rsuffix = "t2") \
.select(['t1_QueryID','ExplainText','QueryBand','QueryText'])
dbqldata
# Workaround until ELE-2072.
dbqldata.to_sql('prediction_sentiment', if_exists="replace")
dbqldata = DataFrame.from_table('prediction_sentiment')
df_select_query_column_projection = [
dbqldata.t1_QueryID.expression.label("queryid"),
dbqldata.ExplainText.expression.label("explaintext"),
dbqldata.QueryBand.expression.label("queryband"),
func.REGEXP_SUBSTR(dbqldata.QueryBand.expression,
'(collected_statistics|no_statistics)', 1, 1, 'i').label("training"),
func.REGEXP_SUBSTR(dbqldata.QueryText.expression,
'SELECT', 1, 1, 'i').label("select_info"),
func.REGEXP_SUBSTR(func.REGEXP_SUBSTR(dbqldata.ExplainText.expression,
'(joined using a *[A-z \-]+ join,)', 1, 1, 'i'),
'[A-z]+', 15, 1, 'i').label("join_condition")]
prediction_data = DataFrame.from_query(str(select(df_select_query_column_projection)
#.where(Column('join_condition') != None)
#.where(Column('training') != None)
.compile(compile_kwargs={"literal_binds": True})))
data_set = (prediction_data.join_condition != None) & (prediction_data.training != None)
prediction_set = prediction_data[data_set]
prediction_data.select(['queryid', 'join_condition', 'explaintext', 'training'])
# Workaround until ELE-2072.
#prediction_set.to_sql('prediction_sentiment')
#prediction_set = DataFrame.from_table('prediction_sentiment')
prediction_set
dictionary = DataFrame.from_table('dbql_sentiment')
td_sentiment_extractor_out = SentimentExtractor(
dict_data = dictionary,
newdata = prediction_set,
level = "document",
text_column = "explaintext",
accumulate = ['queryid','join_condition','training']
)
predict = td_sentiment_extractor_out.result #.to_sql('holdit4')
predict
try:
con.execute("drop table target_collection")
except:
pass
stats_model = DataFrame.from_table(in_schema("alice", "stats_model"))
```
# Why does it need formula?
```
# Predict from queries columns needing collected statistics
target_collection = NaiveBayesPredict(newdata=predict,
modeldata = stats_model,
formula="training ~ out_polarity + join_condition",
id_col = "queryid",
responses = ["collected_statistics","no_statistics"]
).result
target_collection.result.to_sql('acc1', if_exists="replace")
target_collection.result
dbqlobj = DataFrame.from_table('dbc.dbqlobjtbl')
# Obtain query's join information
target_names = target_collection.result.join(other = dbqlobj, on = ["queryid"], lsuffix = "t1",
rsuffix = "t2").select('objectdatabasename', 'objecttablename', 'objectcolumnname')
# Collect statistics on each column
for index, row in target_collection.result.to_pandas().iterrows():
con.execute('collect statistics column '+row['ObjectTableName']+" on "+ \
row['ObjectDatabaseName']+'.'+row['ObjectTableName'])
## how to test if table is still there, no help table.
```
# Processing Milwaukee Label (~3K labels)
Building on `2020-03-24-EDA-Size.ipynb`
Goal is to prep a standard CSV that we can update and populate
```
import pandas as pd
import numpy as np
import os
import s3fs # for reading from S3FileSystem
import json # for working with JSON files
import matplotlib.pyplot as plt
pd.set_option('display.max_colwidth', None)  # -1 is deprecated in recent pandas
# Import custom modules
import sys
SWKE_PATH = r'/home/ec2-user/SageMaker/classify-streetview/swke'
sys.path.append(SWKE_PATH)
import labelcrops
SAGEMAKER_PATH = r'/home/ec2-user/SageMaker'
SPLIT_PATH = os.path.join(SAGEMAKER_PATH, 'classify-streetview', 'split-train-test')
```
# Alternative Template - row for ~3K labels x # crops appeared in
* img_id
* heading
* crop_id
* label
* dist_x_left
* dist_x_right
* dist_y_top
* dist_y_bottom
```
df_labels = pd.read_csv(os.path.join(SPLIT_PATH, 'restructure_single_labels.csv'))
print(df_labels.shape)
df_labels.head()
df_coor = pd.read_csv('crop_coor.csv')
df_coor
df_outer = pd.merge(left=df_labels, right=df_coor, how='outer')
df_outer.shape
df_outer = pd.concat([df_labels, df_coor], axis = 1)
df_outer.head(10)
# Let's just use a for loop and join back together
list_dfs = []
coor_cols = list(df_coor.columns)
for index, row in df_coor.iterrows():
    df_temp_labels = df_labels  # bug: no .copy(), so df_labels itself is mutated (corrected in the next cell)
for col in coor_cols:
df_temp_labels[col] = row[col]
list_dfs.append(df_temp_labels)
print(df_temp_labels.shape)
# Let's just use a for loop and join back together
list_dfs = []
coor_cols = list(df_coor.columns)
for index, row in df_coor.iterrows():
df_temp_labels = df_labels.copy()
for col in coor_cols:
df_temp_labels[col] = row[col]
list_dfs.append(df_temp_labels)
print(df_temp_labels.shape)
df_concat = pd.concat(list_dfs)
df_concat.shape
df_concat['corner_x'].value_counts()
df_concat.head()
df_concat.to_csv('merged_crops_template.csv', index = False)
df_concat.columns
```
## Take the differences
```
df_concat['xpt_minus_xleft'] = df_concat['sv_image_x'] - df_concat['x_crop_left']
df_concat['xright_minus_xpt'] = df_concat['x_crop_right'] - df_concat['sv_image_x']
df_concat['ypt_minus_ytop'] = df_concat['sv_image_y'] - df_concat['y_crop_top']
df_concat['ybottom_minus_ypt'] = df_concat['y_crop_bottom'] - df_concat['sv_image_y']
positive_mask = (df_concat['xpt_minus_xleft'] > 0) & (df_concat['xright_minus_xpt'] > 0) & (df_concat['ypt_minus_ytop'] > 0) & (df_concat['ybottom_minus_ypt'] > 0)
df_concat['label_in_crop'] = positive_mask
df_concat['label_in_crop'].value_counts()
df_incrop = df_concat.loc[df_concat['label_in_crop']]
df_incrop.shape
df_incrop['crop_number'].value_counts()
df_incrop.to_csv('Crops_with_Labels.csv', index = False)
7038 / 2851
```
## Observations
* We have 12919 Null Crops
* We have 7038 Crops with a feature in them
* Three bottom crops (5, 6, 7) have the most points (these are the biggest)
* The 3 middle crops have the most for their row (2, 3, 6)
* Labels appear in an average of 2.47 image crops
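The 2.47 figure follows directly from the counts above (the same division done inline in the notebook):

```python
# Counts taken from the observations above.
crops_with_feature = 7038   # crop rows with a labelled feature inside them
unique_labels = 2851        # distinct labels
avg_crops_per_label = crops_with_feature / unique_labels
print(round(avg_crops_per_label, 2))  # 2.47
```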
# Visualize Label Locations
* xpt_minus_xleft - x location in the crop relative to bottom left (0, 0)
* ybottom_minus_ypt - y location in the crop relative to bottom left (0, 0)
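The differences re-express a point relative to the crop's bottom-left corner, since image pixel coordinates put the origin at the top-left with y growing downward. A minimal sketch with made-up numbers (not from the dataset):

```python
# Toy crop bounds and a labelled point inside them.
x_crop_left, x_crop_right = 100, 320
y_crop_top, y_crop_bottom = 50, 270
sv_image_x, sv_image_y = 150, 90

x_in_crop = sv_image_x - x_crop_left    # 50 px from the crop's left edge
y_in_crop = y_crop_bottom - sv_image_y  # 180 px up from the crop's bottom edge
print(x_in_crop, y_in_crop)  # 50 180
```

This is why the scatter plots below render the "right way up" in matplotlib, whose default axes also put (0, 0) at the bottom-left.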
```
fig = plt.figure(figsize = (12, 3))
colors_list = ['tab:red', 'orange', 'gold', 'forestgreen']
for crop_id in range(1, 5):
ax = fig.add_subplot(1, 4, crop_id)
x = df_incrop['xpt_minus_xleft'].loc[df_incrop['crop_number'] == crop_id]
y = df_incrop['ybottom_minus_ypt'].loc[df_incrop['crop_number'] == crop_id]
ax.plot(x, y, marker = '.', ls = 'none', alpha = 0.4, color = colors_list[int(crop_id -1)])
#ax.plot(x, y, marker = '.', ls = 'none', alpha = 0.4)
plt.ylim(0, 220)
plt.xlim(0, 220)
plt.title(f'Crop: {crop_id}')
ax.set_yticklabels([])
ax.set_xticklabels([])
plt.tight_layout()
fig2 = plt.figure(figsize = (12, 4))
# colors_list = ['forestgreen', 'indigo', 'mediumblue', 'gold', 'tab:red']
colors_list = ['blue', 'indigo', 'fuchsia']
for crop_id in range(5, 8):
plot_num = crop_id - 4
ax2 = fig2.add_subplot(1, 3, plot_num)
x = df_incrop['xpt_minus_xleft'].loc[df_incrop['crop_number'] == crop_id]
y = df_incrop['ybottom_minus_ypt'].loc[df_incrop['crop_number'] == crop_id]
ax2.plot(x, y, marker = '.', ls = 'none', alpha = 0.4, color = colors_list[int(plot_num - 1)])
#ax.plot(x, y, marker = '.', ls = 'none', alpha = 0.4)
plt.ylim(0, 300)
plt.xlim(0, 300)
plt.title(f'Crop: {crop_id}')
ax2.set_yticklabels([])
ax2.set_xticklabels([])
plt.tight_layout()
```
# Deep Dive into df_incrop
```
df_incrop.head()
df_incrop.columns
incrop_keep_cols = ['filename', 'crop_number', 'region_id', 'label_name', 'region_count', 'img_id',
'sv_image_x', 'sv_image_y','sv_image_y_bottom_origin', 'xpt_minus_xleft', 'xright_minus_xpt',
'ypt_minus_ytop', 'ybottom_minus_ypt']
df_incrop_short = df_incrop[incrop_keep_cols].copy()
df_incrop_short.head()
# Make some new ids
df_incrop_short['heading'] = df_incrop_short['filename'].str.extract('(.*)_(.*).jpg', expand = True)[1]
df_incrop_short.dtypes
#df_incrop_short['crop_name_id'] = df_incrop_short[['img_id', 'heading', 'crop_number']].apply(lambda x: '_'.join(str(x)), axis=1)
#df_incrop_short['label_id'] = df_incrop_short[['img_id', 'heading', 'region_id']].apply(lambda x: '_'.join(str(x)), axis=1)
df_incrop_short['crop_name_id'] = df_incrop_short['img_id'].astype(str) + '_' + df_incrop_short['heading'] + '_' + df_incrop_short['crop_number'].astype(str)
df_incrop_short['label_id'] = df_incrop_short['img_id'].astype(str) + '_' + df_incrop_short['heading'] + '_' + df_incrop_short['region_id'].astype(str)
df_incrop_short.head()
df_incrop_short['crop_name_id'].value_counts()
crop_label_counts = df_incrop_short['crop_name_id'].value_counts()
crop_label_counts.value_counts()
label_id_counts = df_incrop_short['label_id'].value_counts()
label_id_counts.value_counts()
label_id_counts.head(20)
506 * 7 * 4
14168 - 5254
df_incrop_short.to_csv('incrop_labels.csv', index = False)
```
# Desired End Template CSV for 506 x 7 x 4 image crops
* img_id
* heading
* crop_id
* combined_id
* primary_label - based on a hierarchy of importance
* 0_missing_count
* 1_null_count
* 2_obstacle_count
* 3_present_count
* 4_surface_prob_count
* 5_nosidewalk_count
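One way to derive `primary_label` from the per-crop count columns is to walk a priority-ordered list and take the first class with a nonzero count. The priority order below is an assumption for illustration, not the notebook's final choice, and `primary_label` is a hypothetical helper:

```python
# Assumed importance hierarchy; adjust to taste.
priority = ['0_missing', '2_obstacle', '4_surface_prob', '5_nosidewalk', '3_present']

def primary_label(row_counts, priority=priority, default='1_null'):
    """Return the highest-priority class with a nonzero count, else the null class."""
    for label in priority:
        if row_counts.get(label, 0) > 0:
            return label
    return default

print(primary_label({'3_present': 2, '2_obstacle': 1}))  # 2_obstacle
print(primary_label({}))                                 # 1_null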
```
unique_labels_list = list(df_incrop_short['label_name'].unique())
folders_list = ['3_present', '4_surface_prob', '2_obstacle', '0_missing', '6_occlusion', '5_nosidewalk']
for label, folder in zip(unique_labels_list, folders_list):
label_mask = (df_incrop_short['label_name'].str.contains(label))
df_incrop_short[folder] = np.where(label_mask, 1, 0)
df_incrop_short.head()
df_group = df_incrop_short.groupby(['img_id', 'heading', 'crop_number'])[folders_list].sum()
df_group.head()
df_group['count_all'] = df_group[folders_list].values.sum(axis = 1)
df_group.head()
df_group.shape
df_group = df_group.reset_index()
df_group.head()
df_group.to_csv('img_heading_crop_labelcounts.csv', index = False)
df_group[folders_list].sum()
(df_group[folders_list] > 0).sum()
df_group[folders_list].sum()
```
# Next Phase
* Grab a couple thousand null crops
* Find out which ones are null by creating a img_id x heading x all crop_numbers list and then doing a join with df_group
* Then fill in the NAs with 0s and add a new column that if count_all == 0, then 1_null = 1
* Then merge with the test/train names by img_id
* Then move those crops into the test folder
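The null-crop step above can be sketched on toy data: build the full img_id x heading x crop_number grid, left-join the label counts, fill the gaps with 0, and flag the empty crops (the frame names mirror the notebook; the values are made up):

```python
from itertools import product

import pandas as pd

# Full grid: 2 images x 2 headings x 7 crops = 28 rows.
grid = pd.DataFrame(list(product([101, 102], [0, 90], range(1, 8))),
                    columns=['img_id', 'heading', 'crop_number'])
# Pretend only one crop has labels.
df_group = pd.DataFrame({'img_id': [101], 'heading': [0],
                         'crop_number': [3], 'count_all': [2]})

merged = grid.merge(df_group, on=['img_id', 'heading', 'crop_number'], how='left')
merged['count_all'] = merged['count_all'].fillna(0)
merged['1_null'] = (merged['count_all'] == 0).astype(int)
print(merged['1_null'].sum())  # 27 of the 28 grid crops have no labels
```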
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import xgboost
```
Data preprocessing
```
#load dataset
data = pd.read_csv("pima-diabetes.csv")
data.head(10)
# mapping True diabetes prediction to 1
# mapping False diabetes prediction to 0
diabetes_map= {True:1, False:0}
data['diabetes']=data['diabetes'].map(diabetes_map)
print(data['diabetes'])
data.isnull().values.any() #no null values
diabetes_true_count=len(data.loc[data['diabetes']==True])
diabetes_false_count=len(data.loc[data['diabetes']==False])
(diabetes_true_count,diabetes_false_count)
from sklearn.model_selection import train_test_split,cross_val_score
feature_columns=['num_preg', 'glucose_conc', 'diastolic_bp', 'thickness', 'insulin', 'bmi', 'diab_pred', 'age', 'skin' ]
predicted_class=['diabetes']
X = data[feature_columns].values
y= data[predicted_class].values
X_train,X_test,y_train,y_test= train_test_split(X,y,test_size=0.30, random_state=10)
print("total number of rows : {0}".format(len(data)))
print("number of rows missing glucose_conc: {0}".format(len(data.loc[data['glucose_conc'] == 0])))
print("number of rows missing diastolic_bp: {0}".format(len(data.loc[data['diastolic_bp'] == 0])))
print("number of rows missing insulin: {0}".format(len(data.loc[data['insulin'] == 0])))
print("number of rows missing bmi: {0}".format(len(data.loc[data['bmi'] == 0])))
print("number of rows missing diab_pred: {0}".format(len(data.loc[data['diab_pred'] == 0])))
print("number of rows missing age: {0}".format(len(data.loc[data['age'] == 0])))
print("number of rows missing skin: {0}".format(len(data.loc[data['skin'] == 0])))
#this is to deal with the zero values
from sklearn.impute import SimpleImputer
fill_values= SimpleImputer(missing_values=0,strategy="mean")
X_train = fill_values.fit_transform(X_train)
X_test = fill_values.transform(X_test)  # transform only: reuse the training-set means to avoid leakage
classifier=xgboost.XGBClassifier()
classifier=xgboost.XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
colsample_bynode=1, colsample_bytree=0.3, gamma=0.4,
learning_rate=0.2, max_delta_step=0, max_depth=15,
min_child_weight=5, missing=None, n_estimators=100, n_jobs=1,
nthread=None, objective='binary:logistic', random_state=0,
reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
silent=None, subsample=1, verbosity=1)
classifier.fit(X_train,y_train.ravel())
y_pred=classifier.predict(X_test)
from sklearn.metrics import accuracy_score, confusion_matrix, classification_report,plot_confusion_matrix
cm= confusion_matrix(y_test,y_pred)
score=accuracy_score(y_test,y_pred)
print(cm)
plot_confusion_matrix(classifier, X_test, y_test,cmap='Greens')
plt.show()
print(classification_report(y_test, y_pred))
score_cross_val=cross_val_score(classifier,X_train,y_train.ravel())
print('Cross validation average score {:.2f}%'.format(score_cross_val.mean()*100))
try:
import alibi
except:
!pip install alibi
import alibi
```
Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
Alibi provides many explainers, such as Anchors.
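Before running Alibi, it helps to know how to read the two numbers an anchor reports. A toy illustration of the definitions follows (definitions only, not Alibi's algorithm; the feature values and predictions are made up):

```python
import numpy as np

# Five samples with one feature, plus the model's predicted classes.
X = np.array([[0.9], [0.8], [0.2], [0.85], [0.1]])
model_pred = np.array([1, 1, 0, 1, 0])

rule = X[:, 0] > 0.5                        # candidate anchor: feature above 0.5
coverage = rule.mean()                      # share of samples the rule covers
precision = (model_pred[rule] == 1).mean()  # agreement with the model inside the rule
print(coverage, precision)  # 0.6 1.0
```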
```
from alibi.explainers import AnchorTabular
#lambda function to predict the instance we want using xgboost classifier
predict_fn = lambda x: classifier.predict_proba(x)
#Create an explainer, give as arguements the prediction function and name of the features
explainer = AnchorTabular(predict_fn, feature_columns)
#train the explainer
explainer.fit(X_train)
class_names= ['Not Diabetic','Diabetic']
idx = 50
#use the explainer.predictor to predict the result
predicted=explainer.predictor(X_test[idx].reshape(1, -1))[0]
print('Prediction: ',class_names[predicted] )
print('True class: ', class_names[y_test[idx,0]])
#now we use the explainer to explain a test instance, with a precision threshold of 95%
explanation = explainer.explain(X_test[idx], threshold=0.95)
print('Anchor: %s' % (' AND '.join(explanation.anchor)))
print('Precision: %.2f' % explanation.precision)
print('Coverage: %.2f' % explanation.coverage)
idx = 14
#use the explainer.predictor to predict the result
predicted=explainer.predictor(X_test[idx].reshape(1, -1))[0]
print('Prediction: ',class_names[predicted] )
print('True class: ', class_names[y_test[idx,0]])
#now we use the explainer to explain a test instance, with a precision threshold of 95%
explanation = explainer.explain(X_test[idx], threshold=0.95)
print('Anchor: %s' % (' AND '.join(explanation.anchor)))
print('Precision: %.2f' % explanation.precision)
print('Coverage: %.2f' % explanation.coverage)
```
```
import plotly.express as px
from plotly import graph_objects as go
import pandas as pd
#import chart_studio.tools as tls
df_gp = pd.read_csv('/Users/muhammad-faaiz.shanawas/Documents/GitHub/SystemHierarchies/data/gp-reg-pat-prac-map.csv')
list_of_ccgs = df_gp['CCG_CODE'].unique()
num_of_ccgs = len(list_of_ccgs)
list_of_pcns = df_gp['PCN_NAME'].unique()
num_of_pcns = len(list_of_pcns)
list_of_stps = df_gp['STP_NAME'].unique()
num_of_stps = len(list_of_stps)
df_gp_cut = df_gp[['PRACTICE_NAME', 'PCN_NAME', 'CCG_NAME', 'STP_NAME', 'COMM_REGION_NAME']]
df_gp
#load in data and retrieve the number of trusts
df_trusts = pd.read_csv('/Users/muhammad-faaiz.shanawas/Documents/GitHub/SystemHierarchies/data/data_DSPTmetric_20220208.csv')
df_trusts = df_trusts.loc[df_trusts["Sector"] == "Trust"].reset_index(drop = True)
df_trusts_cut = df_trusts[["Code", "Name", "CCG20CD"]]
num_of_trusts = len(df_trusts)
#df_trusts_cut = df_trusts_cut.rename(columns = {"CCG20CD" : ""})
df_trusts
#create dict and display funnel chart
data_funnel = dict(number = [7, num_of_stps, num_of_ccgs, num_of_trusts, num_of_pcns, 6528], stage = ["Regions", "ICS", "CCG", "Trusts", "PCN", "GP Practices"])
fig = px.funnel(data_funnel, x= 'number', y = 'stage')
#fig.show()
#create a sunburst chart mapping regions-stps-ccgs-pcn-practices
print(df_gp_cut)
fig2 = px.sunburst(df_gp_cut, path = ['COMM_REGION_NAME', 'STP_NAME', 'CCG_NAME', 'PCN_NAME', 'PRACTICE_NAME'], values = None)
#fig2.show()
#create a medium sunburst chart mapping regions-stps-ccgs-trusts
#note: df_gp_cut has no 'TRUST_NAME' column, so the trust names must be merged into the GP frame before this call will work
fig3 = px.sunburst(df_gp_cut, path = ['COMM_REGION_NAME', 'STP_NAME', 'CCG_NAME', 'TRUST_NAME'], values = None)
fig3.show()
#create a smaller sunburst chart mapping regions-stps-ccgs
fig4 = px.sunburst(df_gp_cut, path = ['COMM_REGION_NAME', 'STP_NAME', 'CCG_NAME'], values = None)
#fig4.show()
#create a treemap mapping regions-stps-ccgs-pcns-practices
fig5= px.treemap(df_gp_cut, path = ['COMM_REGION_NAME', 'STP_NAME', 'CCG_NAME', 'PCN_NAME', 'PRACTICE_NAME'], values = None)
#fig5.show()
#create a smaller treemap mapping regions-stps-ccgs
fig6 = px.treemap(df_gp_cut, path = ['COMM_REGION_NAME', 'STP_NAME', 'CCG_NAME'], values = None)
#fig6.show()
#saving all plotly figures
'''
fig.write_html("/Users/muhammad-faaiz.shanawas/Documents/GitHub/SystemHierarchies/map outputs/funnel_chart.html")
fig2.write_html("/Users/muhammad-faaiz.shanawas/Documents/GitHub/SystemHierarchies/map outputs/sunburst_large.html")
fig3.write_html("/Users/muhammad-faaiz.shanawas/Documents/GitHub/SystemHierarchies/map outputs/sunburst_medium.html")
fig4.write_html("/Users/muhammad-faaiz.shanawas/Documents/GitHub/SystemHierarchies/map outputs/sunburst_small.html")
fig5.write_html("/Users/muhammad-faaiz.shanawas/Documents/GitHub/SystemHierarchies/map outputs/treemap_large.html")
fig6.write_html("/Users/muhammad-faaiz.shanawas/Documents/GitHub/SystemHierarchies/map outputs/treemap_small.html")
'''
```
```
!nvidia-smi
# unrar x "/content/drive/MyDrive/IDC_regular_ps50_idx5.rar" "/content/drive/MyDrive/"
# !unzip "/content/drive/MyDrive/base_dir/train_dir/b_idc.zip" -d "/content/drive/MyDrive/base_dir/train_dir"
import os
! pip install -q kaggle
from google.colab import files
files.upload()
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
! kaggle datasets list
!kaggle datasets download -d paultimothymooney/breast-histopathology-images
! mkdir breast-histopathology-images
! unzip breast-histopathology-images.zip -d breast-histopathology-images
!pip install tensorflow-gpu
import pandas as pd
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, Conv2D, MaxPooling2D, Flatten
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.metrics import categorical_crossentropy
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.models import Model
from tensorflow.keras.callbacks import EarlyStopping, ReduceLROnPlateau, ModelCheckpoint
import os
import cv2
import imageio
import skimage
import skimage.io
import skimage.transform
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
import itertools
import shutil
import matplotlib.pyplot as plt
%matplotlib inline
from google.colab import drive
drive.mount('/content/drive')
SAMPLE_SIZE = 78786
IMAGE_SIZE = 224
os.listdir('/content/breast-histopathology-images')
patient = os.listdir('/content/breast-histopathology-images/IDC_regular_ps50_idx5')
len(patient)
#Copy all images into one directory
#This will make it easier to work with this data.
# Create a new directory to store all available images
all_images = 'all_images'
os.mkdir(all_images)
patient_list = os.listdir('/content/breast-histopathology-images/IDC_regular_ps50_idx5')
print(patient_list)
for patient in patient_list:
# /content/IDC_regular_ps50_idx5/10253/0/10253_idx5_x1001_y1001_class0.png
path_0 = "/content/breast-histopathology-images/IDC_regular_ps50_idx5/"+str(patient)+"/0"
path_1 = "/content/breast-histopathology-images/IDC_regular_ps50_idx5/"+str(patient)+"/1"
#list for 0
file_list_0 = os.listdir(path_0)
#list for 1
file_list_1 = os.listdir(path_1)
#move all 0 related img of a patient to all_image directory
for fname in file_list_0:
#src path of image
src = os.path.join(path_0, fname)
#dst path for image
dst = os.path.join(all_images, fname)
#move the image to directory
shutil.copyfile(src, dst)
#move all 1 related img of a patient to all_image directory
for fname in file_list_1:
#src path of image
src = os.path.join(path_1, fname)
#dst path for image
dst = os.path.join(all_images, fname)
#move the image to directory
shutil.copyfile(src, dst)
len(os.listdir('all_images'))
image_list = os.listdir('all_images')
df_data = pd.DataFrame(image_list, columns=['image_id'])
df_data.head()
# Define Helper Functions
# Each file name has this format:
# '14211_idx5_x2401_y1301_class1.png'
def extract_patient_id(x):
# split into a list
a = x.split('_')
# the id is the first index in the list
patient_id = a[0]
return patient_id
def extract_target(x):
# split into a list
a = x.split('_')
# the target is part of the string in index 4
b = a[4]
    # the target, i.e. 0 or 1, is the 5th index of the string --> class1
target = b[5]
return target
# extract the patient id
# create a new column called 'patient_id'
df_data['patient_id'] = df_data['image_id'].apply(extract_patient_id)
# create a new column called 'target'
df_data['target'] = df_data['image_id'].apply(extract_target)
df_data.head(10)
df_data.shape
def draw_category_images(col_name,figure_cols, df, IMAGE_PATH):
"""
Give a column in a dataframe,
this function takes a sample of each class and displays that
sample on one row. The sample size is the same as figure_cols which
is the number of columns in the figure.
Because this function takes a random sample, each time the function is run it
displays different images.
"""
categories = (df.groupby([col_name])[col_name].nunique()).index
f, ax = plt.subplots(nrows=len(categories),ncols=figure_cols,
figsize=(4*figure_cols,4*len(categories))) # adjust size here
# draw a number of images for each location
for i, cat in enumerate(categories):
sample = df[df[col_name]==cat].sample(figure_cols) # figure_cols is also the sample size
for j in range(0,figure_cols):
file=IMAGE_PATH + sample.iloc[j]['image_id']
im=cv2.imread(file)
ax[i, j].imshow(im, resample=True, cmap='gray')
ax[i, j].set_title(cat, fontsize=16)
plt.tight_layout()
plt.show()
IMAGE_PATH = 'all_images/'
draw_category_images('target',4, df_data, IMAGE_PATH)
# What is the class distribution?
df_data['target'].value_counts()
# take a sample of the majority class 0 (total = 198738)
df_0 = df_data[df_data['target'] == '0'].sample(SAMPLE_SIZE, random_state=101)
# take a sample of class 1 (total = 78786)
df_1 = df_data[df_data['target'] == '1'].sample(SAMPLE_SIZE, random_state=101)
# concat the two dataframes
df_data = pd.concat([df_0, df_1], axis=0).reset_index(drop=True)
# Check the new class distribution
df_data['target'].value_counts()
# train_test_split
# stratify=y creates a balanced validation set.
y = df_data['target']
df_train, df_val = train_test_split(df_data, test_size=0.10, random_state=101, stratify=y)
print(df_train.shape)
print(df_val.shape)
df_train['target'].value_counts()
df_val['target'].value_counts()
# Create a new directory
base_dir = 'base_dir'
os.mkdir(base_dir)
#[CREATE FOLDERS INSIDE THE BASE DIRECTORY]
# now we create 2 folders inside 'base_dir':
# train_dir
# a_no_idc
# b_has_idc
# val_dir
# a_no_idc
# b_has_idc
# create a path to 'base_dir' to which we will join the names of the new folders
# train_dir
train_dir = os.path.join(base_dir, 'train_dir')
os.mkdir(train_dir)
# val_dir
val_dir = os.path.join(base_dir, 'val_dir')
os.mkdir(val_dir)
# [CREATE FOLDERS INSIDE THE TRAIN AND VALIDATION FOLDERS]
# Inside each folder we create seperate folders for each class
# create new folders inside train_dir
a_no_idc = os.path.join(train_dir, 'a_no_idc')
os.mkdir(a_no_idc)
b_has_idc = os.path.join(train_dir, 'b_has_idc')
os.mkdir(b_has_idc)
# create new folders inside val_dir
a_no_idc = os.path.join(val_dir, 'a_no_idc')
os.mkdir(a_no_idc)
b_has_idc = os.path.join(val_dir, 'b_has_idc')
os.mkdir(b_has_idc)
# check that the folders have been created
os.listdir('base_dir/train_dir')
# Set the id as the index in df_data
df_data.set_index('image_id', inplace=True)
# Get a list of train and val images
train_list = list(df_train['image_id'])
val_list = list(df_val['image_id'])
# Transfer the train images
for image in train_list:
# the id in the csv file does not have the .tif extension therefore we add it here
fname = image
# get the label for a certain image
target = df_data.loc[image,'target']
# these must match the folder names
if target == '0':
label = 'a_no_idc'
if target == '1':
label = 'b_has_idc'
# source path to image
src = os.path.join(all_images, fname)
# destination path to image
dst = os.path.join(train_dir, label, fname)
# move the image from the source to the destination
shutil.move(src, dst)
# Transfer the val images
for image in val_list:
# the id in the csv file does not have the .tif extension therefore we add it here
fname = image
# get the label for a certain image
target = df_data.loc[image,'target']
# these must match the folder names
if target == '0':
label = 'a_no_idc'
if target == '1':
label = 'b_has_idc'
# source path to image
src = os.path.join(all_images, fname)
# destination path to image
dst = os.path.join(val_dir, label, fname)
# move the image from the source to the destination
shutil.move(src, dst)
# check how many train images we have in each folder
print(len(os.listdir('base_dir/train_dir/a_no_idc')))
print(len(os.listdir('base_dir/train_dir/b_has_idc')))
# check how many val images we have in each folder
print(len(os.listdir('base_dir/val_dir/a_no_idc')))
print(len(os.listdir('base_dir/val_dir/b_has_idc')))
train_path = 'base_dir/train_dir'
valid_path = 'base_dir/val_dir'
num_train_samples = len(df_train)
num_val_samples = len(df_val)
train_batch_size = 10
val_batch_size = 10
train_steps = np.ceil(num_train_samples / train_batch_size)
val_steps = np.ceil(num_val_samples / val_batch_size)
datagen = ImageDataGenerator(rescale=1.0/255)
train_gen = datagen.flow_from_directory(train_path,
target_size=(IMAGE_SIZE,IMAGE_SIZE),
batch_size=train_batch_size,
class_mode='categorical')
val_gen = datagen.flow_from_directory(valid_path,
target_size=(IMAGE_SIZE,IMAGE_SIZE),
batch_size=val_batch_size,
class_mode='categorical')
# Note: shuffle=False causes the test dataset to not be shuffled
test_gen = datagen.flow_from_directory(valid_path,
target_size=(IMAGE_SIZE,IMAGE_SIZE),
batch_size=1,
class_mode='categorical',
shuffle=False)
from tensorflow.keras.models import *
from sklearn.model_selection import *
from tensorflow.keras.applications import *
from tensorflow.keras.layers import *
base_Neural_Net= InceptionResNetV2(input_shape=(224,224,3), weights='imagenet', include_top=False)
model=Sequential()
model.add(base_Neural_Net)
model.add(Flatten())
model.add(BatchNormalization())
model.add(Dense(256,kernel_initializer='he_uniform'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2,activation='softmax'))
model.summary()
model.compile('adam', loss='binary_crossentropy',
metrics=['accuracy'])
filepath = "model.h5"
checkpoint = ModelCheckpoint(filepath, monitor='val_acc', verbose=1,
save_best_only=True, mode='max')
reduce_lr = ReduceLROnPlateau(monitor='val_acc', factor=0.5, patience=3,
verbose=1, mode='max')
callbacks_list = [checkpoint, reduce_lr]
history = model.fit_generator(train_gen, steps_per_epoch=train_steps,
validation_data=val_gen,
validation_steps=val_steps,
epochs=10, verbose=1,
callbacks=callbacks_list)
# get the metric names so we can interpret the output of evaluate
model.metrics_names
val_loss, val_acc = \
model.evaluate(test_gen,
steps=len(df_val))
print('val_loss:', val_loss)
print('val_acc:', val_acc)
import matplotlib.pyplot as plt
accuracy = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(accuracy) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
plt.figure()
plt.plot(epochs, accuracy, 'bo', label='Training acc')
plt.plot(epochs, val_acc , 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.figure()
predictions = model.predict_generator(test_gen, steps=len(df_val), verbose=1)
predictions.shape
test_gen.class_indices
df_preds = pd.DataFrame(predictions, columns=['no_idc', 'has_idc'])
#df_preds.head()
df_preds
y_true = test_gen.classes
# Get the predicted labels as probabilities
y_pred = df_preds['has_idc']
from sklearn.metrics import roc_auc_score
roc_auc_score(y_true, y_pred)
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.2f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.tight_layout()
test_labels = test_gen.classes
test_labels.shape
cm = confusion_matrix(test_labels, predictions.argmax(axis=1))
test_gen.class_indices
cm_plot_labels = ['no_idc', 'has_idc']
plot_confusion_matrix(cm, cm_plot_labels, title='Confusion Matrix')
from sklearn.metrics import classification_report
# Generate a classification report
# For this to work we need y_pred as binary labels not as probabilities
y_pred_binary = predictions.argmax(axis=1)
report = classification_report(y_true, y_pred_binary, target_names=cm_plot_labels)
print(report)
from tensorflow.keras.models import load_model
from tensorflow.keras.preprocessing import image
model.save("/content/drive/MyDrive/InceptionResNetV2.h5")
yy = model.predict(test_gen)
len(yy)
yy
yy = np.argmax(yy, axis=1)
yy
```
# Usage of py_simple_report
This library is intended for creating the elements of a report.
Why elements? We want titles, legends, and labels, but producing all of them simultaneously is an incredibly fiddly task that requires deeper graphics knowledge. So I take an alternative strategy: generate the elements of a figure separately.
These elements can then be merged manually in a graphical tool (e.g. PowerPoint).
## Currently py_simple_report supports
- Crosstabulation with stratification
- Barplot
- Stacked barplot
- Crosstabulation for multiple binaries with stratification
- Barplot
- Heatmap
## Dataset
py_simple_report assumes that you have two kinds of dataset: the original data and a variable table.
The variable table should include at least 3 columns: a variable name, a description of the variable, and the code-to-string mapping for each item.
Here, we use the Rdataset "plantTraits" via statsmodels. Docs for this dataset can be accessed from [here](https://vincentarelbundock.github.io/Rdatasets/doc/cluster/plantTraits.html).
```
import tqdm
import numpy as np
import pandas as pd
import statsmodels.api as sm
import py_simple_report as sim_repo
sim_repo.__version__
df = sm.datasets.get_rdataset("plantTraits", "cluster").data.reset_index()
n = df.shape[0]
print(df.shape)
# Add missing
np.random.seed(1234)
rnd1 = np.random.randint(n, size=6)
cols = ["mycor", "vegsout"]
df.loc[rnd1, cols] = np.nan
```
Since we do not have a variable table for this dataset, we create one. Note that the items are comma separated, with each code connected to its label via "=" (equals).
```
df_var = pd.DataFrame({
"var_name": ["mycor", "height", "vegsout", "autopoll", "piq", "ros", "semiros"],
"description" : ["Mycorrhizas",
"Plan height",
"underground vegetative propagation",
"selfing pollination",
"thorny",
"rosette",
"semiros"
],
"items" : ["0=never,1=sometimes,2=always",
np.nan,
"0=never,1=present but limited,2=important",
"0=never,1=rare,2=often,3=rule",
"0=non-thorny,1=thorny",
"0=non-rosette,1=rosette",
"0=non-semiros,1=semiros",
],
})
df_var.head()
```
## QuestionDataContainer
A question data container is a container that other functions can access easily.
You can create one from scratch, or build it from a variable table.
```
# Manually
qdc = sim_repo.QuestionDataContainer(
var_name="mycor",
desc="Mycorrhizas",
title="Mycor",
missing="missing", # name of missing
order = ["never", "sometimes", "always"] # used for ordering indexes or columns
)
qdc.show() # can access information of QuestionDataContainer
# From a variable table.
col_var_name = "var_name"
col_item = "items"
col_desc = "description"
qdcs_dic = sim_repo.question_data_containers_from_dataframe(
df_var, col_var_name, col_item, col_desc, missing="missing")
print(qdcs_dic.keys())
qdc = qdcs_dic["mycor"]
qdc.show()
```
## Visualization
Passing two question data containers to a function produces a graph.
From now on, the data is always stratified by "autopoll"; the question data container for "autopoll" is named `qdc_strf`.
```
qdc1 = qdcs_dic["vegsout"]
qdc_strf = qdcs_dic["autopoll"]
qdc_strf.order = ['never', 'rare', 'often', 'rule'] # not to show "missing"
qdc_strf.show()
sim_repo.output_crosstab_cate_barplot(
df,
qdc1,
qdc_strf)
```
#### Got it!!
You now see two tables of cross-tabulated data, and three figure elements:
- a simple figure with a legend (this can be ugly when label names are too long)
- a figure without a legend
- the legend alone
## With parameters available.
Save functions are available. Numbers are appended vertically to the CSV, and figures are saved with "\_only\_label" and "\_no\_label" suffixes.
Several parameters to control the outputs also exist.
```
!mkdir test
dir2save = "./test"
sim_repo.output_crosstab_cate_barplot(
df,
qdc1,
qdc_strf,
skip_miss=True,
save_fig_path = dir2save + "/vegsout_undergr.png",
save_num_path = dir2save + "/number.csv",
decimal = 4
)
```
Using a simple for loop lets you output multiple results.
```
lis = ['mycor', 'vegsout', 'piq', 'ros', 'semiros']
for var_name in tqdm.tqdm(lis):
sim_repo.output_crosstab_cate_barplot(
df,
qdcs_dic[var_name],
qdc_strf,
skip_miss=False,
save_fig_path = dir2save + f"/{var_name}.png",
save_num_path = dir2save + "number.csv", # save the number to the same file.
show=False,
)
```
## Barplot
Simple barplot version
```
sim_repo.output_crosstab_cate_barplot(
df,qdc_strf, qdc1, percentage=False, skip_miss=False, stacked=False, transpose=True
)
```
### Multiple binaries with stratification
Multiple binaries can be summarized in a single figure.
```
lis = ["piq", "ros", "semiros"] # variable names
vis_var = sim_repo.VisVariables(ylim=[0,50], cmap_name="tab20", cmap_type="matplotlib")
sim_repo.output_multi_binaries_with_strat(df, lis, qdcs_dic, qdc_strf, vis_var)
```
## Heatmap
Heatmap of crosstabulation.
```
sim_repo.heatmap_crosstab_from_df(df, qdc_strf, qdc1, xlabel=qdc_strf.var_name, ylabel=qdc1.var_name ,
save_fig_path=dir2save + f"/{qdc_strf.var_name}_{qdc1.var_name}_cnt.png" )
sim_repo.heatmap_crosstab_from_df(df, qdc_strf, qdc1, xlabel=qdc_strf.var_name, ylabel=qdc1.var_name,
normalize="index")
```
## VisVariables
An important class in py_simple_report is VisVariables. It controls the figure settings via its attribute values.
```
vis_var = sim_repo.VisVariables(
figsize=(5,2),
dpi=200,
xlabel="",
ylabel="Kind",
ylabelsize=5,
yticksize=7,
xticksize=7,
)
sim_repo.output_crosstab_cate_barplot(
df,
qdc1,
qdc_strf,
vis_var = vis_var,
save_fig_path = dir2save + "/vegsout_undergr_vis_var.png",
save_num_path = dir2save + "number.csv",
)
```
## Engineered columns
Of course, engineered columns can be used.
```
height_cate = "height_cate"
ser = df["height"]
df[height_cate] = (ser
.mask( ser <= 9, ">5")
.mask( ser <= 5, "3~5")
.mask( ser < 3 , "<3")
)
print(df[height_cate].value_counts())
qdc = sim_repo.QuestionDataContainer(
var_name=height_cate, order=["<3","3~5",">5"], missing="missing", title=height_cate
)
vis_var = sim_repo.VisVariables()
sim_repo.output_crosstab_cate_barplot(
df,
qdc=qdc,
qdc_strf=qdc_strf,
show=True,
vis_var=vis_var,
)
```
# Security Master Analysis
by @marketneutral
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import plotly.plotly as py
from plotly.offline import init_notebook_mode, iplot
import plotly.graph_objs as go
import cufflinks as cf
init_notebook_mode(connected=False)
cf.set_config_file(offline=True, world_readable=True, theme='polar')
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = 12, 7
```
# Why Sec Master Analysis?
Before you do anything exciting in financial data science, you **need to understand the nature of the universe of assets you are working with** and **how the data is presented**; otherwise, garbage in, garbage out. A "security master" refers to reference data about the lifetime of a particular asset, tracking ticker changes, name changes, etc over time. In finance, raw information is typically uninteresting and uninformative and you need to do substantial feature engineering and create either or both of time series and cross-sectional features. However **to do that without error requires that you deeply understand the nature of the asset universe.** This is not exciting fancy data science, but absolutely essential. Kaggle competitions are usually won in the third or fourth decimal place of the score so every detail matters.
### What are some questions we want to answer?
**Is `assetCode` a unique and permanent identifier?**
If you group by `assetCode` and make time-series features, are you assured to be referencing the same instrument? In the real world, the ticker symbol is not guaranteed to refer to the same company over time. Data providers usually provide a "permanent ID" so that you can keep track of this over time. That is not provided here (although in fact both Intrinio and Reuters provide it in the for-sale version of the data used in this competition).
The rules state:
> Each asset is identified by an assetCode (note that a single company may have multiple assetCodes). Depending on what you wish to do, you may use the assetCode, assetName, or time as a way to join the market data to news data.
>assetCode(object) - a unique id of an asset
So is it unique or not? Can we **always** join time-series features over time on `assetCode`?
**What about `assetName`? Is that unique or do names change over time?**
>assetName(category) - the name that corresponds to a group of assetCodes. These may be "Unknown" if the corresponding assetCode does not have any rows in the news data.
**What is the nature of missing data? What does it mean when data is missing?**
Let's explore and see.
```
# Make environment and get data
from kaggle.competitions import twosigmanews
env = twosigmanews.make_env()
(market_train_df, news_train_df) = env.get_training_data()
```
Let's define a valid "has_data" day for each asset as a day with reported trading `volume`.
```
df = market_train_df
df['has_data'] = df.volume.notnull().astype('int')
```
And let's see how long an asset is "alive" via:
- the distance between the first reported data point and the last
- the number of days in that span that actually have data
```
lifetimes_df = df.groupby(
by='assetCode'
).agg(
{'time': [np.min, np.max],
'has_data': 'sum'
}
)
lifetimes_df.columns = lifetimes_df.columns.droplevel()
lifetimes_df.rename(columns={'sum': 'has_data_sum'}, inplace=True)
lifetimes_df['days_alive'] = np.busday_count(
lifetimes_df.amin.values.astype('datetime64[D]'),
lifetimes_df.amax.values.astype('datetime64[D]')
)
#plt.hist(lifetimes_df.days_alive.astype('int'), bins=25);
#plt.title('Histogram of Asset Lifetimes (business days)');
data = [go.Histogram(x=lifetimes_df.days_alive.astype('int'))]
layout = dict(title='Histogram of Asset Lifetimes (business days)',
xaxis=dict(title='Business Days'),
yaxis=dict(title='Asset Count')
)
fig = dict(data = data, layout = layout)
iplot(fig)
```
This was shocking to me. There are very many assets that only exist for, say, 50 days or less. When we look at the amount of data in these spans, it is even more surprising. Let's compare the asset lifetimes with the amount of data in those lifetimes. Here I calculate the difference between the number of business days in each span and the count of valid days, sorted by most "missing data".
```
lifetimes_df['alive_no_data'] = np.maximum(lifetimes_df['days_alive'] - lifetimes_df['has_data_sum'],0)
lifetimes_df.sort_values('alive_no_data', ascending=False ).head(10)
```
For example, ticker VNDA.O has its first data point on 2007-02-23, and its last on 2016-12-22 for a span of 2556 business days. However in that 2556 days, there are only 115 days that actually have data!
```
df.set_index('time').query('assetCode=="VNDA.O"').returnsOpenNextMktres10.iplot(kind='scatter',mode='markers', title='VNDA.O');
```
**It's not the case that VNDA.O didn't exist during those times; we just don't have data.**
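One way to make those gaps visible is to reindex an asset onto a full business-day calendar, so the missing rows show up as explicit `NaN` (a sketch on synthetic data mimicking the VNDA.O span, not the competition frame):

```python
import pandas as pd

# Synthetic single-asset frame with a long gap, mimicking VNDA.O
times = pd.to_datetime(["2007-02-23", "2007-03-20", "2009-06-26"])
asset = pd.DataFrame({"volume": [100.0, 120.0, 90.0]}, index=times)

# Reindex to every business day in the asset's lifetime;
# days we have no row for become explicit NaN
full_calendar = pd.bdate_range(asset.index.min(), asset.index.max())
asset_full = asset.reindex(full_calendar)

print(len(asset_full))                    # business days in the span
print(asset_full["volume"].isna().sum())  # days with no data
```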
Looking across the entire dataset, however, things look a little better.
```
#plt.hist(lifetimes_df['alive_no_data'], bins=25);
#plt.ylabel('Count of Assets');
#plt.xlabel('Count of missing days');
#plt.title('Missing Days in Asset Lifetime Spans');
data = [go.Histogram(x=lifetimes_df['alive_no_data'])]
layout = dict(title='Missing Days in Asset Lifetime Spans',
xaxis=dict(title='Count of missing days'),
yaxis=dict(title='Asset Count')
)
fig = dict(data = data, layout = layout)
iplot(fig)
```
Now let's look at whether tickers change over time. **Is either `assetCode` or `assetName` unique?**
```
df.groupby('assetName')['assetCode'].nunique().sort_values(ascending=False).head(20)
```
**So there are a number of companies that have more than one `assetCode` over their lifetime.** For example, 'T-Mobile US Inc':
```
df[df.assetName=='T-Mobile US Inc'].assetCode.unique()
```
And we can trace the lifetime of this company over multiple `assetCodes`.
```
lifetimes_df.loc[['PCS.N', 'TMUS.N', 'TMUS.O']]
```
The company started its life as PCS.N, was merged with TMUS.N (NYSE-listed) and then became Nasdaq-listed.
In this case, if you want to make long-horizon time-based features, you need to join on `assetName`.
```
(1+df[df.assetName=='T-Mobile US Inc'].set_index('time').returnsClosePrevRaw1).cumprod().plot(title='Time joined cumulative return');
```
**One gotcha I see: don't assume that `assetName` is correct "point-in-time".** This is hard to verify without proper commercial security master data, but:
- I don't think that the actual name of this company in 2007 was **T-Mobile**; it was **MetroPCS**. T-Mobile acquired MetroPCS on May 1, 2013 (google search "when did t-mobile acquire MetroPCS"). You can see this date matches the lifetimes dataframe subset above.
- Therefore, the `assetName` must **not be point-in-time**, rather it looks like `assetName` is the name of the company when this dataset was created for Kaggle recently, and then backfilled.
- However, it would be very odd for the Reuters News Data to **not be point-in-time.** Let's see if we can find any news on this company back in 2007.
```
news_train_df[news_train_df.assetName=='T-Mobile US Inc'].T
```
What's fascinating here is that you can see in the article headlines, that the company is named correctly, point-in-time, as "MetroPCS Communications Inc", however the `assetName` is listed as "T-Mobile US Inc.". So the organizers have also backfilled today's `assetName` into the news history.
This implies that **you cannot use NLP on the `headline` field in any way to join or infer asset clustering.** However, `assetName` continues to look like a consistent choice over time for a perm ID.
What about the other way around? Is `assetName` a unique identifier? In the real world, companies change their names all the time (a hilarious example of this is [here](https://www.businessinsider.com/long-blockchain-company-iced-tea-sec-stock-2018-8)). What about in this dataset?
```
df.groupby('assetCode')['assetName'].nunique().sort_values(ascending=False).head(20)
```
**YES!** We can conclude that, since no `assetCode` has ever been linked to more than one `assetName`, `assetName` could be a good choice for a permanent identifier. It is possible that a company changed its ticker *and* its name on the same day, in which case we would not be able to catch it, but let's assume this doesn't happen.
However, here is **a major gotcha**: dual-class stock. Though not very common, some companies issue more than one class of stock at the same time. Probably the best known is Google (called Alphabet Inc for its full life in this dataset); another is Comcast Corp.
```
df[df.assetName=='Alphabet Inc'].assetCode.unique()
lifetimes_df.loc[['GOOG.O', 'GOOGL.O']]
```
Because of this overlapping data, there is no way to be sure about how to link assets over time. You are stuck with one of two bad choices: link on `assetCode` and miss ticker changes and corporate actions, or link on `assetName` but get bad output in the case of dual-class shares.
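One pragmatic compromise (a sketch, not a definitive fix): link on `assetName` by default, but fall back to `assetCode` for a hand-curated list of known dual-class names. The `dual_class_names` set and the helper below are assumptions you would have to maintain yourself:

```python
import pandas as pd

def add_perm_id(df, dual_class_names):
    """Use assetName as the link key, except for known dual-class
    companies, where assetCode is kept so share classes stay separate."""
    is_dual = df["assetName"].isin(dual_class_names)
    out = df.copy()
    out["perm_id"] = out["assetName"].where(~is_dual, out["assetCode"])
    return out

# Tiny synthetic example: a ticker change (PCS.N -> TMUS.N) and a dual-class name
df = pd.DataFrame({
    "assetCode": ["PCS.N", "TMUS.N", "GOOG.O", "GOOGL.O"],
    "assetName": ["T-Mobile US Inc"] * 2 + ["Alphabet Inc"] * 2,
})
df = add_perm_id(df, dual_class_names={"Alphabet Inc"})
print(df["perm_id"].tolist())
# ['T-Mobile US Inc', 'T-Mobile US Inc', 'GOOG.O', 'GOOGL.O']
```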
## Making time-series features when dates are missing
Let's say you want to make a rolling-window time-series feature, like a moving average of volume. As we saw above, it is not possible to do this 100% without error because we don't know the permanent identifier; we must make a tradeoff between the errors of using `assetCode` or `assetName`. Given that `assetCode` will never overlap on time (and therefore allows using time as an index), I choose that here.
To make a rolling feature, it was my initial inclination to try something like:
```
df = market_train_df.reset_index().sort_values(['assetCode', 'time']).set_index(['assetCode','time'])
grp = df.groupby('assetCode')
df['volume_avg20'] = (
grp.apply(lambda x: x.volume.rolling(20).mean())
.reset_index(0, drop=True)
)
```
Let's see what we got:
```
(df.reset_index().set_index('time')
.query('assetCode=="VNDA.O"').loc['2007-03-15':'2009-06', ['volume', 'volume_avg20']]
)
```
Look at the time index... the result makes no sense: the rolling average of 20 days spans **the missing period between 2007-03-20 and 2009-06-26, which is not right in the context of financial time series.** Instead we need to account for business days when rolling. This will not be 100% accurate because we don't know exchange holidays, but it should be very close. **To do this correctly, you need to roll on business days.** However, pandas doesn't like to roll on business days (freq tag 'B') and will throw: `ValueError: <20 * BusinessDays> is a non-fixed frequency`. The next best thing is to roll on calendar days (freq tag 'D').
It took me a while to get this to work, as pandas complains a lot about multi-indexes (this [issue](https://github.com/pandas-dev/pandas/issues/15584) helped a lot).
```
df = df.reset_index().sort_values(['assetCode', 'time']).reset_index(drop=True)
df['volume_avg20d'] = (df
.groupby('assetCode')
.rolling('20D', on='time') # Note the 'D' and on='time'
.volume
.mean()
.reset_index(drop=True)
)
df.reset_index().set_index('time').query('assetCode=="VNDA.O"').loc['2007-03-15':'2009-06', ['volume', 'volume_avg20', 'volume_avg20d']]
```
This is much better! Note that the default `min_periods` is 1 when you use a freq tag (i.e., '20D') to roll on. So even though we asked for a 20-day window, as long as there is at least 1 data point, we will get a windowed average. The result makes sense: if you look at 2009-06-26, you will see that the rolling average does **not** include any information from the year 2007; rather, it is time-aware, and since the 19+ preceding rows are missing, it gives the 1-day windowed average.
# Takeaways
- Security master issues are critical.
- You have to be very careful with time-based features because of missing data. Long-horizon features like, say, 12m momentum may not produce sufficient asset coverage to be useful because so much data is missing.
- The fact that an asset is missing data *is not informative in itself*; it is an artifact of the data collection and delivery process. For example, you cannot calculate a true asset "age" (e.g., hypothesizing that days since IPO is a valid feature) and use that as a factor. This is unfortunate because you may hypothesize that news impact is a bigger driver of return variance during the early part of an asset's life due to lack of analyst coverage, lack of participation by quants, etc.
- `assetCode` is not consistent across time; the same economic entity can, and in many cases does, have a different `assetCode`; `assetCode` is not a permanent identifier.
- `assetName`, while consistent across time, can refer to more than one stock *at the same time* and therefore cannot be used to make time series features; `assetName` is not a unique permanent identifier.
- Missing time series data does not show up as `NaN` on the trading calendar; rather the rows are just missing. As such, to make time series features, you have to be careful with pandas rolling calculations and roll on calendar days, not naively on the count of rows.
```
#Import Dependencies
import re
import numpy as np
import pandas as pd
from sqlalchemy import create_engine
```
```
##Source = https://aca5.accela.com/bcc/customization/bcc/cap/licenseSearch.aspx
#California_Cannabis_Distributor_Data
california_data = "../ETL_project/california_data.csv"
california_data_df = pd.read_csv(california_data, encoding="utf-8")
california_data_df.head()
#Individual column names in california_data_df
list(california_data_df)
```
```
#Note that there are multiple delimiters: a colon (":"), a dash ("-"), a comma (","), and a blank space (" ")
california_data_df["Business Contact Information"].head()
##Extract and separate "Business Name" from the california_data_df["Business Contact Information"] column
# dropping null value columns to avoid errors
california_data_df.dropna(inplace = True)
# new dataframe with split value columns
new = california_data_df["Business Contact Information"].str.split(":", n = 1, expand = True)
# making separate "Business Name" column from new data frame
california_data_df["Business Name"]= new[0]
# making separate "Contact Information" column from new data frame
california_data_df["Contact Information"]= new[1]
# Dropping old "Business Contact Information" column
california_data_df.drop(columns =["Business Contact Information"], inplace = True)
#california_data_df display with the new columns
## Note: california_data_df["Business Name"] and california_data_df["Contact Information"] BOTH need cleaning
california_data_df.head()
##Extract the occasional extraneous "Business Name" info from the california_data_df["Contact Information"] column
# dropping null value columns to avoid errors
california_data_df.dropna(inplace = True)
# new data frame with split value columns
new = california_data_df["Contact Information"].str.split("Email-", n = 1, expand = True)
# making separate "Extra Business Name Information" column from new data frame that contains the occasional extraneous "Business Name" info.
california_data_df["Extra Business Name Information"]= new[0]
# making separate "Contact Information2"column from new data frame
california_data_df["Contact Information2"]= new[1]
# Dropping old "Contact Information" column
california_data_df.drop(columns =["Contact Information"], inplace = True)
#california_data_df display with the new columns
## Note: we must now combine california_data_df["Business Name"] with california_data_df["Extra Business Name Information"]
## Note: california_data_df["Contact Information2"] still needs cleaning
california_data_df.head()
#Combine california_data_df["Business Name"] with california_data_df["Extra Business Name Information"] and clean
california_data_df['Company Name'] = california_data_df['Business Name'].str.cat(california_data_df['Extra Business Name Information'],sep=" ")
california_data_df["Company Name"] = california_data_df["Company Name"].str.replace(':,?' , '')
# Dropping california_data_df["Business Name]" and california_data_df["Extra Business Name Information"] columns
california_data_df.drop(columns =["Business Name"], inplace = True)
california_data_df.drop(columns =["Extra Business Name Information"], inplace = True)
#california_data_df display with the new column (california_data_df["Company Name"])
##Note: california_data_df["Contact Information2"] still needs cleaning
california_data_df.head()
##Extract and separate "Email" from the california_data_df["Contact Information2"] column
# dropping null value columns to avoid errors
california_data_df.dropna(inplace = True)
# new data frame with split value columns
new = california_data_df["Contact Information2"].str.split(":", n = 1, expand = True)
# making separate "Business Name" column from new data frame
california_data_df["Email"]= new[0]
# making separate "Contact Information" column from new data frame
california_data_df["Contact Information3"]= new[1]
# Dropping california_data_df["Contact Information2"] column
california_data_df.drop(columns =["Contact Information2"], inplace = True)
#california_data_df display with the new columns
##Note: california_data_df["Contact Information3"] still needs cleaning
california_data_df.head()
##Extract and separate "Phone Number" from the california_data_df["Contact Information3"] column.
# dropping null value columns to avoid errors
california_data_df.dropna(inplace = True)
# new data frame with split value columns
new = california_data_df["Contact Information3"].str.split(":", n = 1, expand = True)
# making separate "Business Name" column from new data frame
california_data_df["Phone Number"]= new[0]
# making separate "Contact Information" column from new data frame
california_data_df["Contact Information4"]= new[1]
# Dropping california_data_df["Contact Information"] column
california_data_df.drop(columns =["Contact Information3"], inplace = True)
#california_data_df display with the new columns
##Note: california_data_df["Phone Number"] needs to contain only the digits of the phone number
##Note: california_data_df["Contact Information4"] still needs cleaning
california_data_df.head()
#Clean up california_data_df["Phone Number"] so that it shows only the digits of phone number
#(ie. Remove the string ("Phone-") from the column
# dropping null value columns to avoid errors
california_data_df.dropna(inplace = True)
# new data frame with split value columns
new = california_data_df["Phone Number"].str.split("-", n = 1, expand = True)
# making separate "Phone str" column from new data frame to extract the unwanted string
california_data_df["Phone str"]= new[0]
# making separate "Telephone Number" column from new data frame
california_data_df["Telephone Number"]= new[1]
# Dropping california_data_df["Phone str"] and california_data_df["Phone Number"] columns
california_data_df.drop(columns =["Phone Number"], inplace = True)
california_data_df.drop(columns =["Phone str"], inplace = True)
#california_data_df display with the new columns
##Note: california_data_df["Contact Information4"] still needs cleaning
california_data_df.head()
#Clean up the california_data_df["Contact Information4"] column so that it shows only the actual website address
#(ie. Remove the string ("Website-") from the column
# dropping null value columns to avoid errors
california_data_df.dropna(inplace = True)
# new data frame with split value columns
new = california_data_df["Contact Information4"].str.split("-", n = 1, expand = True)
# making separate "Website str" column from new data frame to extract the unwanted string
california_data_df["Website str"]= new[0]
# making separate "Website Address" column from new data frame
california_data_df["Website Address"]= new[1]
# Dropping california_data_df["Website str"] and california_data_df["Contact Information4"] columns
california_data_df.drop(columns =["Contact Information4"], inplace = True)
california_data_df.drop(columns =["Website str"], inplace = True)
#california_data_df display with the new columns
california_data_df.head()
##SECTION 2.1 Completed
##################################################################################################################
```
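The repeated split-and-drop pattern above could be condensed into a single regex via `str.extract`. The exact field layout is an assumption inferred from the comments (Business Name, then `Email-…`, `Phone-…`, `Website-…` segments separated by colons), and the sample string below is made up; a sketch:

```python
import pandas as pd

# Hypothetical contact string following the delimiter layout described above
contact = pd.Series([
    "Acme Dispensary LLC: Email-info@acme.com: Phone-5551234567: Website-acme.com"
])

# Named groups become columns of the extracted DataFrame
pattern = (
    r"^(?P<Company_Name>[^:]+):\s*"
    r"Email-(?P<Email>[^:]+):\s*"
    r"Phone-(?P<Telephone_Number>[^:]+):\s*"
    r"Website-(?P<Website_Address>.+)$"
)
parsed = contact.str.extract(pattern)
print(parsed.iloc[0].tolist())
# ['Acme Dispensary LLC', 'info@acme.com', '5551234567', 'acme.com']
```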
```
#Business Contact Information column cleanup:
california_data_df["Premise Address"].head()
#Note that there are multiple delimiters: a colon (":"), a comma (","), and a blank space (" ")
#Note that the zip codes have either 5 or 9 digits
##Extract and separate "County" from the california_data_df["Premise Address"] column
# dropping null value columns to avoid errors
california_data_df.dropna(inplace = True)
# new data frame with split value columns
new = california_data_df["Premise Address"].str.split(":", n = 1, expand = True)
# making separate "Business Name" column from new data frame
california_data_df["Premise Address2"]= new[0]
# making separate "Contact Information" column from new data frame
california_data_df["County"]= new[1]
# Dropping california_data_df["Premise Address"] column
california_data_df.drop(columns =["Premise Address"], inplace = True)
#california_data_df display with the new columns
##Note: california_data_df["County"] still needs cleaning
##Note: california_data_df["Premise Address2"] still needs cleaning
california_data_df.head()
#Clean up california_data_df["County"] -- Problem: all letters in the column are capitalized, and we need to fix this
#Adjust the case structure so that only the first letter in "County" is capitalized while all others are lower case
california_data_df["County"] = california_data_df["County"].str.title()
#california_data_df display with the new columns
##Note: california_data_df["Premise Address3"] still needs cleaning
california_data_df.head()
#Clean up california_data_df["Premise Address2"] so that the superfluous string "County" can be excised
#(ie. Remove the string ("County") now that the actual county has been extracted into its own column
#Drop the 'County' string from the "Premise Address2" column
california_data_df["Premise Address3"] = california_data_df["Premise Address2"].str.replace('County,?' , '')
# Dropping old "Premise Address2" column
california_data_df.drop(columns =["Premise Address2"], inplace = True)
#california_data_df display with the new columns
##Note: california_data_df["Premise Address3"] still needs cleaning
california_data_df.head()
##Extract and separate "Address" from the "Premise Address3" column.
# dropping null value columns to avoid errors
california_data_df.dropna(inplace = True)
# new data frame with split value columns
new = california_data_df["Premise Address3"].str.split(",", n = 1, expand = True)
# making separate "Address" column from new data frame
california_data_df["Address_misc"]= new[0]
# making separate "State/Zip Code" column from new data frame
california_data_df["State/Zip Code"]= new[1]
# Dropping old "Premise Address3" column
california_data_df.drop(columns =["Premise Address3"], inplace = True)
#california_data_df display with the new columns
## Note: california_data_df["Address_misc"] and california_data_df["State/Zip Code"] BOTH still need cleaning
california_data_df.head()
#Drop the 'CA' string from the State/Zip Code column, since the State information is superfluous
california_data_df["Zip Code"] = california_data_df["State/Zip Code"].str.replace('CA,?' ,'')
# Dropping old "State/Zip Code" column
california_data_df.drop(columns =["State/Zip Code"], inplace = True)
#california_data_df display with the new columns
## Note: california_data_df["Address_misc"] and california_data_df["Zip Code"] BOTH still need cleaning
california_data_df.head()
#Note: Some of the data in the "Zip Code" column has 9 digits, while others have 5 digits
california_data_df["Zip Code"].head()
#Need to clean up the "Zip Code" data so that the zip code is the standard 5-digit code
#Clean up "Zip Code" column so that the zip code is the standard 5-digit code, and not the 9-digit code that appears sporadically above
california_data_df['Zip Code'] = california_data_df['Zip Code'].str[:7]
california_data_df.head()
#Choose the most important columns for the next part of the ETL Project
california_data_df = california_data_df[["Company Name","Website Address","County","Zip Code"]]
#Rename column names so that they are SQL friendly
california_data_df.columns=["Company_Name","Website_Address","County","Zip_Code"]
california_data_df.head()
california_data_df.reset_index(drop = True)
#lets load the Latitude and Longitude coordinates from the csv we created from the API
lat_lng= pd.read_csv("../ETL_project/lat_lng.csv")
lat_lng.columns=["A","Latitude", "Longitude"]
lat_lng.reset_index(drop=True)
lat_lng = lat_lng.reset_index(drop=True)
lat_lng.head()
#Merge the Latitude/Longitude data in with california_data_df
california_data_df = pd.merge(california_data_df, lat_lng, left_index=True, right_index=True)
#california_data_df.drop(["A"], axis=1, inplace=True)
california_data_df.head()
```
```
## Source = https://www.irs.gov/statistics/soi-tax-stats-individual-income-tax-statistics-2016-zip-code-data-soi
#California_Census_Data
census_data = "../ETL_project/california_2016_census_data.csv"
census_data_df = pd.read_csv(census_data, encoding="utf-8")
census_data_df.head(12)
#Find the pertinent data and their columns and rename the columns
census_data_df.rename(columns={"CALIFORNIA":"Zip Code"}, inplace=True)
census_data_df.rename(columns={"Unnamed: 1":"Income Bracket"}, inplace=True)
census_data_df.rename(columns={"Unnamed: 65":"Number of Tax Returns"}, inplace=True)
census_data_df.rename(columns={"Unnamed: 66":"Total Income"}, inplace=True)
list(census_data_df)
#Choose the most pertinent columns for the census part of the ETL Project
census_data_df = census_data_df[["Zip Code","Income Bracket","Number of Tax Returns","Total Income"]]
census_data_df.head(19)
```
```
#Read in new California_Census_Data
census_data2 = "../ETL_project/census_clean.csv"
census_data2_df = pd.read_csv(census_data2)
#Rename the column names
census_data2_df.columns = ["Zip Code","Total Income","Number of Tax Returns"]
census_data2_df.head()
#Aggregate the group data per Zip Code via groupby function
aggregate_census_data_df = (census_data2_df.groupby('Zip Code').sum()).reset_index()
aggregate_census_data_df.head()
#Create a new "Zip Code Income" column
aggregate_census_data_df["Zip Code Income"] = aggregate_census_data_df["Total Income"]/aggregate_census_data_df["Number of Tax Returns"]
aggregate_census_data_df["Zip Code Income"] = aggregate_census_data_df["Zip Code Income"].round()
aggregate_census_data_df.head()
#Reformat the "Zip Code Income" column so that it includes comma delimiters per thousand
#This is aesthetically more pleasing on the final product
aggregate_census_data_df["Zip Code Income"] = aggregate_census_data_df["Zip Code Income"].apply("{:,}".format)
aggregate_census_data_df.head()
##Clean the "Zip Code Income" column so that the ".0" end of the string is eliminated
# dropping null value columns to avoid errors
aggregate_census_data_df.dropna(inplace = True)
# new data frame with split value columns
new = aggregate_census_data_df["Zip Code Income"].str.split(".", n = 1, expand = True)
# making separate "Business Name" column from new data frame
aggregate_census_data_df["Per Capita Income"]= new[0]
# making separate "misc" column from new data frame
aggregate_census_data_df["misc"]=new[1]
# Dropping aggregate_census_data["misc"] column
# Dropping aggregate_census_data["Zip Code Income"] column
# Dropping aggregate_census_data["Total Income"] column
# Dropping aggregate_census_data["Number of Tax Returns"] column
aggregate_census_data_df.drop(columns =["misc"], inplace = True)
aggregate_census_data_df.drop(columns =["Zip Code Income"], inplace = True)
aggregate_census_data_df.drop(columns =["Total Income"], inplace = True)
aggregate_census_data_df.drop(columns =["Number of Tax Returns"], inplace = True)
#Rename columns so that they are SQL friendly
aggregate_census_data_df.columns=["Zip_Code","Per_Capita_Income"]
#display with the new columns
aggregate_census_data_df.head()
list(aggregate_census_data_df)
```
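The format-then-strip-".0" dance above (string-formatting the float, then splitting on "." to drop the decimal) can be avoided by rounding to an integer before formatting; a minimal sketch on synthetic incomes:

```python
import pandas as pd

# Hypothetical per-capita income values
income = pd.Series([48500.7, 123456.2])

# Round to int first, then apply the thousands-separator format
per_capita = income.round().astype(int).map("{:,}".format)
print(per_capita.tolist())  # ['48,501', '123,456']
```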
```
#CREATE DATABASE california_data_db;
# USE california_data_db;
# CREATE TABLE california_data(
# id INT PRIMARY KEY,
# Company_Name TEXT,
# Website_Address TEXT,
# COUNTY TEXT,
# Zip_Code TEXT,
# A INT,
# Latitude TEXT,
# Longitude TEXT
# );
# CREATE TABLE census_data(
# id INT PRIMARY KEY,
# Zip_Code TEXT,
# Per_Capita_Income TEXT
# );
```
```
connection_string = "sqlite:///CALIFORNIA_ETL_data_db.sqlite"
engine = create_engine(connection_string)
engine.table_names()
engine.execute("CREATE TABLE california_data(id INT PRIMARY KEY, Company_Name TEXT, Website_Address TEXT, County TEXT, Zip_Code INT, A INT, Latitude INT, Longitude INT);")
engine.execute("CREATE TABLE census_data(id INT PRIMARY KEY, Zip_Code TEXT, Per_Capita_Income TEXT);")
california_data_df.to_sql(name="california_data", con=engine, if_exists="append", index=False)
california_data_df["Zip_Code"]=california_data_df["Zip_Code"].astype("int64")
california_data_df.dtypes
pd.read_sql_query("SELECT * FROM california_data",con=engine).head()
aggregate_census_data_df.to_sql(name = "census_data", con=engine, if_exists="append", index=False)
aggregate_census_data_df.dtypes
pd.read_sql_query("SELECT * FROM census_data",con=engine).head()
## Why are we getting "none" for both id columns ??? --- solved
query = """
SELECT
*
FROM california_data A
INNER JOIN census_data B on A.Zip_Code = B.Zip_Code GROUP BY A.Zip_Code
"""
pd.read_sql_query(query, con=engine).head()
## question: what is the best way to make this join work, since the "id" columns are not working as planned ??
############################################################
california_data_df.head()
aggregate_census_data_df.head()
merged_data_df = pd.merge(california_data_df,aggregate_census_data_df,on="Zip_Code")
merged_data_df.head()
```
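On the "why is `id` None" question above: the DataFrames never carry an `id` column, so `to_sql(..., index=False)` leaves the `PRIMARY KEY` column unpopulated. One fix is to materialize an explicit `id` from the row index before writing. A sketch on a synthetic frame and an in-memory SQLite database (column names mirror the schema above, data is made up):

```python
import sqlite3
import pandas as pd

conn = sqlite3.connect(":memory:")

df = pd.DataFrame({
    "Zip_Code": [90210, 94105],
    "Per_Capita_Income": ["80,000", "95,000"],
})

# Materialize an explicit id column so the PRIMARY KEY gets populated
df = df.reset_index(drop=True)
df.insert(0, "id", df.index)

df.to_sql("census_data", con=conn, index=False, if_exists="replace")
out = pd.read_sql_query("SELECT id, Zip_Code FROM census_data", con=conn)
print(out["id"].tolist())  # [0, 1]
```

With `id` present in both tables, the `INNER JOIN` on `id` (or on `Zip_Code`) behaves as intended.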
**Chapter 5 – Support Vector Machines**
_This notebook contains all the sample code and solutions to the exercises in chapter 5._
# Setup
First, let's make sure this notebook works well in both python 2 and 3, import a few common modules, ensure MatplotLib plots figures inline and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "svm"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# Large margin classification
The next few code cells generate the first figures in chapter 5. The first actual code sample comes after:
```
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
# SVM Classifier model
svm_clf = SVC(kernel="linear", C=float("inf"))
svm_clf.fit(X, y)
# Bad models
x0 = np.linspace(0, 5.5, 200)
pred_1 = 5*x0 - 20
pred_2 = x0 - 1.8
pred_3 = 0.1 * x0 + 0.5
def plot_svc_decision_boundary(svm_clf, xmin, xmax):
w = svm_clf.coef_[0]
b = svm_clf.intercept_[0]
# At the decision boundary, w0*x0 + w1*x1 + b = 0
# => x1 = -w0/w1 * x0 - b/w1
x0 = np.linspace(xmin, xmax, 200)
decision_boundary = -w[0]/w[1] * x0 - b/w[1]
margin = 1/w[1]
gutter_up = decision_boundary + margin
gutter_down = decision_boundary - margin
svs = svm_clf.support_vectors_
plt.scatter(svs[:, 0], svs[:, 1], s=180, facecolors='#FFAAAA')
plt.plot(x0, decision_boundary, "k-", linewidth=2)
plt.plot(x0, gutter_up, "k--", linewidth=2)
plt.plot(x0, gutter_down, "k--", linewidth=2)
plt.figure(figsize=(12,2.7))
plt.subplot(121)
plt.plot(x0, pred_1, "g--", linewidth=2)
plt.plot(x0, pred_2, "m-", linewidth=2)
plt.plot(x0, pred_3, "r-", linewidth=2)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs", label="Iris-Versicolor")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo", label="Iris-Setosa")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plot_svc_decision_boundary(svm_clf, 0, 5.5)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo")
plt.xlabel("Petal length", fontsize=14)
plt.axis([0, 5.5, 0, 2])
save_fig("large_margin_classification_plot")
plt.show()
```
# Sensitivity to feature scales
```
Xs = np.array([[1, 50], [5, 20], [3, 80], [5, 60]]).astype(np.float64)
ys = np.array([0, 0, 1, 1])
svm_clf = SVC(kernel="linear", C=100)
svm_clf.fit(Xs, ys)
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(Xs[:, 0][ys==1], Xs[:, 1][ys==1], "bo")
plt.plot(Xs[:, 0][ys==0], Xs[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, 0, 6)
plt.xlabel("$x_0$", fontsize=20)
plt.ylabel("$x_1$ ", fontsize=20, rotation=0)
plt.title("Unscaled", fontsize=16)
plt.axis([0, 6, 0, 90])
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_scaled = scaler.fit_transform(Xs)
svm_clf.fit(X_scaled, ys)
plt.subplot(122)
plt.plot(X_scaled[:, 0][ys==1], X_scaled[:, 1][ys==1], "bo")
plt.plot(X_scaled[:, 0][ys==0], X_scaled[:, 1][ys==0], "ms")
plot_svc_decision_boundary(svm_clf, -2, 2)
plt.xlabel("$x_0$", fontsize=20)
plt.title("Scaled", fontsize=16)
plt.axis([-2, 2, -2, 2])
save_fig("sensitivity_to_feature_scales_plot")
```
# Sensitivity to outliers
```
X_outliers = np.array([[3.4, 1.3], [3.2, 0.8]])
y_outliers = np.array([0, 0])
Xo1 = np.concatenate([X, X_outliers[:1]], axis=0)
yo1 = np.concatenate([y, y_outliers[:1]], axis=0)
Xo2 = np.concatenate([X, X_outliers[1:]], axis=0)
yo2 = np.concatenate([y, y_outliers[1:]], axis=0)
svm_clf2 = SVC(kernel="linear", C=10**9)
svm_clf2.fit(Xo2, yo2)
plt.figure(figsize=(12,2.7))
plt.subplot(121)
plt.plot(Xo1[:, 0][yo1==1], Xo1[:, 1][yo1==1], "bs")
plt.plot(Xo1[:, 0][yo1==0], Xo1[:, 1][yo1==0], "yo")
plt.text(0.3, 1.0, "Impossible!", fontsize=24, color="red")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[0][0], X_outliers[0][1]),
xytext=(2.5, 1.7),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
plt.subplot(122)
plt.plot(Xo2[:, 0][yo2==1], Xo2[:, 1][yo2==1], "bs")
plt.plot(Xo2[:, 0][yo2==0], Xo2[:, 1][yo2==0], "yo")
plot_svc_decision_boundary(svm_clf2, 0, 5.5)
plt.xlabel("Petal length", fontsize=14)
plt.annotate("Outlier",
xy=(X_outliers[1][0], X_outliers[1][1]),
xytext=(3.2, 0.08),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=16,
)
plt.axis([0, 5.5, 0, 2])
save_fig("sensitivity_to_outliers_plot")
plt.show()
```
# Large margin *vs* margin violations
This is the first code example in chapter 5:
```
import numpy as np
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = Pipeline([
("scaler", StandardScaler()),
("linear_svc", LinearSVC(C=1, loss="hinge", random_state=42)),
])
svm_clf.fit(X, y)
svm_clf.predict([[5.5, 1.7]])
```
Now let's generate the graph comparing different regularization settings:
```
scaler = StandardScaler()
svm_clf1 = LinearSVC(C=1, loss="hinge", random_state=42)
svm_clf2 = LinearSVC(C=100, loss="hinge", random_state=42)
scaled_svm_clf1 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf1),
])
scaled_svm_clf2 = Pipeline([
("scaler", scaler),
("linear_svc", svm_clf2),
])
scaled_svm_clf1.fit(X, y)
scaled_svm_clf2.fit(X, y)
# Convert to unscaled parameters
b1 = svm_clf1.decision_function([-scaler.mean_ / scaler.scale_])
b2 = svm_clf2.decision_function([-scaler.mean_ / scaler.scale_])
w1 = svm_clf1.coef_[0] / scaler.scale_
w2 = svm_clf2.coef_[0] / scaler.scale_
svm_clf1.intercept_ = np.array([b1])
svm_clf2.intercept_ = np.array([b2])
svm_clf1.coef_ = np.array([w1])
svm_clf2.coef_ = np.array([w2])
# Find support vectors (LinearSVC does not do this automatically)
t = y * 2 - 1
support_vectors_idx1 = (t * (X.dot(w1) + b1) < 1).ravel()
support_vectors_idx2 = (t * (X.dot(w2) + b2) < 1).ravel()
svm_clf1.support_vectors_ = X[support_vectors_idx1]
svm_clf2.support_vectors_ = X[support_vectors_idx2]
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs", label="Iris-Versicolor")
plot_svc_decision_boundary(svm_clf1, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.title("$C = {}$".format(svm_clf1.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("$C = {}$".format(svm_clf2.C), fontsize=16)
plt.axis([4, 6, 0.8, 2.8])
save_fig("regularization_plot")
```
# Non-linear classification
```
X1D = np.linspace(-4, 4, 9).reshape(-1, 1)
X2D = np.c_[X1D, X1D**2]
y = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.plot(X1D[:, 0][y==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][y==1], np.zeros(5), "g^")
plt.gca().get_yaxis().set_ticks([])
plt.xlabel(r"$x_1$", fontsize=20)
plt.axis([-4.5, 4.5, -0.2, 0.2])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(X2D[:, 0][y==0], X2D[:, 1][y==0], "bs")
plt.plot(X2D[:, 0][y==1], X2D[:, 1][y==1], "g^")
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plt.gca().get_yaxis().set_ticks([0, 4, 8, 12, 16])
plt.plot([-4.5, 4.5], [6.5, 6.5], "r--", linewidth=3)
plt.axis([-4.5, 4.5, -1, 17])
plt.subplots_adjust(right=1)
save_fig("higher_dimensions_plot", tight_layout=False)
plt.show()
from sklearn.datasets import make_moons
X, y = make_moons(n_samples=100, noise=0.15, random_state=42)
def plot_dataset(X, y, axes):
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
plt.axis(axes)
plt.grid(True, which='both')
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"$x_2$", fontsize=20, rotation=0)
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.show()
from sklearn.datasets import make_moons
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
polynomial_svm_clf = Pipeline([
("poly_features", PolynomialFeatures(degree=3)),
("scaler", StandardScaler()),
("svm_clf", LinearSVC(C=10, loss="hinge", random_state=42))
])
polynomial_svm_clf.fit(X, y)
def plot_predictions(clf, axes):
x0s = np.linspace(axes[0], axes[1], 100)
x1s = np.linspace(axes[2], axes[3], 100)
x0, x1 = np.meshgrid(x0s, x1s)
X = np.c_[x0.ravel(), x1.ravel()]
y_pred = clf.predict(X).reshape(x0.shape)
y_decision = clf.decision_function(X).reshape(x0.shape)
plt.contourf(x0, x1, y_pred, cmap=plt.cm.brg, alpha=0.2)
plt.contourf(x0, x1, y_decision, cmap=plt.cm.brg, alpha=0.1)
plot_predictions(polynomial_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
save_fig("moons_polynomial_svc_plot")
plt.show()
from sklearn.svm import SVC
poly_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=3, coef0=1, C=5))
])
poly_kernel_svm_clf.fit(X, y)
poly100_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="poly", degree=10, coef0=100, C=5))
])
poly100_kernel_svm_clf.fit(X, y)
plt.figure(figsize=(11, 4))
plt.subplot(121)
plot_predictions(poly_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=3, r=1, C=5$", fontsize=18)
plt.subplot(122)
plot_predictions(poly100_kernel_svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
plt.title(r"$d=10, r=100, C=5$", fontsize=18)
save_fig("moons_kernelized_polynomial_svc_plot")
plt.show()
def gaussian_rbf(x, landmark, gamma):
return np.exp(-gamma * np.linalg.norm(x - landmark, axis=1)**2)
gamma = 0.3
x1s = np.linspace(-4.5, 4.5, 200).reshape(-1, 1)
x2s = gaussian_rbf(x1s, -2, gamma)
x3s = gaussian_rbf(x1s, 1, gamma)
XK = np.c_[gaussian_rbf(X1D, -2, gamma), gaussian_rbf(X1D, 1, gamma)]
yk = np.array([0, 0, 1, 1, 1, 1, 1, 0, 0])
plt.figure(figsize=(11, 4))
plt.subplot(121)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.scatter(x=[-2, 1], y=[0, 0], s=150, alpha=0.5, c="red")
plt.plot(X1D[:, 0][yk==0], np.zeros(4), "bs")
plt.plot(X1D[:, 0][yk==1], np.zeros(5), "g^")
plt.plot(x1s, x2s, "g--")
plt.plot(x1s, x3s, "b:")
plt.gca().get_yaxis().set_ticks([0, 0.25, 0.5, 0.75, 1])
plt.xlabel(r"$x_1$", fontsize=20)
plt.ylabel(r"Similarity", fontsize=14)
plt.annotate(r'$\mathbf{x}$',
xy=(X1D[3, 0], 0),
xytext=(-0.5, 0.20),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.text(-2, 0.9, "$x_2$", ha="center", fontsize=20)
plt.text(1, 0.9, "$x_3$", ha="center", fontsize=20)
plt.axis([-4.5, 4.5, -0.1, 1.1])
plt.subplot(122)
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot(XK[:, 0][yk==0], XK[:, 1][yk==0], "bs")
plt.plot(XK[:, 0][yk==1], XK[:, 1][yk==1], "g^")
plt.xlabel(r"$x_2$", fontsize=20)
plt.ylabel(r"$x_3$ ", fontsize=20, rotation=0)
plt.annotate(r'$\phi\left(\mathbf{x}\right)$',
xy=(XK[3, 0], XK[3, 1]),
xytext=(0.65, 0.50),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.1),
fontsize=18,
)
plt.plot([-0.1, 1.1], [0.57, -0.1], "r--", linewidth=3)
plt.axis([-0.1, 1.1, -0.1, 1.1])
plt.subplots_adjust(right=1)
save_fig("kernel_method_plot")
plt.show()
x1_example = X1D[3, 0]
for landmark in (-2, 1):
k = gaussian_rbf(np.array([[x1_example]]), np.array([[landmark]]), gamma)
print("Phi({}, {}) = {}".format(x1_example, landmark, k))
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=5, C=0.001))
])
rbf_kernel_svm_clf.fit(X, y)
from sklearn.svm import SVC
gamma1, gamma2 = 0.1, 5
C1, C2 = 0.001, 1000
hyperparams = (gamma1, C1), (gamma1, C2), (gamma2, C1), (gamma2, C2)
svm_clfs = []
for gamma, C in hyperparams:
rbf_kernel_svm_clf = Pipeline([
("scaler", StandardScaler()),
("svm_clf", SVC(kernel="rbf", gamma=gamma, C=C))
])
rbf_kernel_svm_clf.fit(X, y)
svm_clfs.append(rbf_kernel_svm_clf)
plt.figure(figsize=(11, 7))
for i, svm_clf in enumerate(svm_clfs):
plt.subplot(221 + i)
plot_predictions(svm_clf, [-1.5, 2.5, -1, 1.5])
plot_dataset(X, y, [-1.5, 2.5, -1, 1.5])
gamma, C = hyperparams[i]
plt.title(r"$\gamma = {}, C = {}$".format(gamma, C), fontsize=16)
save_fig("moons_rbf_svc_plot")
plt.show()
```
# Regression
```
np.random.seed(42)
m = 50
X = 2 * np.random.rand(m, 1)
y = (4 + 3 * X + np.random.randn(m, 1)).ravel()
from sklearn.svm import LinearSVR
svm_reg = LinearSVR(epsilon=1.5, random_state=42)
svm_reg.fit(X, y)
svm_reg1 = LinearSVR(epsilon=1.5, random_state=42)
svm_reg2 = LinearSVR(epsilon=0.5, random_state=42)
svm_reg1.fit(X, y)
svm_reg2.fit(X, y)
def find_support_vectors(svm_reg, X, y):
y_pred = svm_reg.predict(X)
off_margin = (np.abs(y - y_pred) >= svm_reg.epsilon)
return np.argwhere(off_margin)
svm_reg1.support_ = find_support_vectors(svm_reg1, X, y)
svm_reg2.support_ = find_support_vectors(svm_reg2, X, y)
eps_x1 = 1
eps_y_pred = svm_reg1.predict([[eps_x1]])
def plot_svm_regression(svm_reg, X, y, axes):
x1s = np.linspace(axes[0], axes[1], 100).reshape(100, 1)
y_pred = svm_reg.predict(x1s)
plt.plot(x1s, y_pred, "k-", linewidth=2, label=r"$\hat{y}$")
plt.plot(x1s, y_pred + svm_reg.epsilon, "k--")
plt.plot(x1s, y_pred - svm_reg.epsilon, "k--")
plt.scatter(X[svm_reg.support_], y[svm_reg.support_], s=180, facecolors='#FFAAAA')
plt.plot(X, y, "bo")
plt.xlabel(r"$x_1$", fontsize=18)
plt.legend(loc="upper left", fontsize=18)
plt.axis(axes)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_reg1, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
#plt.plot([eps_x1, eps_x1], [eps_y_pred, eps_y_pred - svm_reg1.epsilon], "k-", linewidth=2)
plt.annotate(
'', xy=(eps_x1, eps_y_pred), xycoords='data',
xytext=(eps_x1, eps_y_pred - svm_reg1.epsilon),
textcoords='data', arrowprops={'arrowstyle': '<->', 'linewidth': 1.5}
)
plt.text(0.91, 5.6, r"$\epsilon$", fontsize=20)
plt.subplot(122)
plot_svm_regression(svm_reg2, X, y, [0, 2, 3, 11])
plt.title(r"$\epsilon = {}$".format(svm_reg2.epsilon), fontsize=18)
save_fig("svm_regression_plot")
plt.show()
np.random.seed(42)
m = 100
X = 2 * np.random.rand(m, 1) - 1
y = (0.2 + 0.1 * X + 0.5 * X**2 + np.random.randn(m, 1)/10).ravel()
```
**Warning**: the default value of `gamma` will change from `'auto'` to `'scale'` in version 0.22 to better account for unscaled features. To preserve the same results as in the book, we explicitly set it to `'auto'`, but you should probably just use the default in your own code.
```
from sklearn.svm import SVR
svm_poly_reg = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="auto")
svm_poly_reg.fit(X, y)
from sklearn.svm import SVR
svm_poly_reg1 = SVR(kernel="poly", degree=2, C=100, epsilon=0.1, gamma="auto")
svm_poly_reg2 = SVR(kernel="poly", degree=2, C=0.01, epsilon=0.1, gamma="auto")
svm_poly_reg1.fit(X, y)
svm_poly_reg2.fit(X, y)
plt.figure(figsize=(9, 4))
plt.subplot(121)
plot_svm_regression(svm_poly_reg1, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg1.degree, svm_poly_reg1.C, svm_poly_reg1.epsilon), fontsize=18)
plt.ylabel(r"$y$", fontsize=18, rotation=0)
plt.subplot(122)
plot_svm_regression(svm_poly_reg2, X, y, [-1, 1, 0, 1])
plt.title(r"$degree={}, C={}, \epsilon = {}$".format(svm_poly_reg2.degree, svm_poly_reg2.C, svm_poly_reg2.epsilon), fontsize=18)
save_fig("svm_with_polynomial_kernel_plot")
plt.show()
```
# Under the hood
```
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
from mpl_toolkits.mplot3d import Axes3D
def plot_3D_decision_function(ax, w, b, x1_lim=[4, 6], x2_lim=[0.8, 2.8]):
x1_in_bounds = (X[:, 0] > x1_lim[0]) & (X[:, 0] < x1_lim[1])
X_crop = X[x1_in_bounds]
y_crop = y[x1_in_bounds]
x1s = np.linspace(x1_lim[0], x1_lim[1], 20)
x2s = np.linspace(x2_lim[0], x2_lim[1], 20)
x1, x2 = np.meshgrid(x1s, x2s)
xs = np.c_[x1.ravel(), x2.ravel()]
df = (xs.dot(w) + b).reshape(x1.shape)
m = 1 / np.linalg.norm(w)
boundary_x2s = -x1s*(w[0]/w[1])-b/w[1]
margin_x2s_1 = -x1s*(w[0]/w[1])-(b-1)/w[1]
margin_x2s_2 = -x1s*(w[0]/w[1])-(b+1)/w[1]
ax.plot_surface(x1s, x2, np.zeros_like(x1),
color="b", alpha=0.2, cstride=100, rstride=100)
ax.plot(x1s, boundary_x2s, 0, "k-", linewidth=2, label=r"$h=0$")
ax.plot(x1s, margin_x2s_1, 0, "k--", linewidth=2, label=r"$h=\pm 1$")
ax.plot(x1s, margin_x2s_2, 0, "k--", linewidth=2)
ax.plot(X_crop[:, 0][y_crop==1], X_crop[:, 1][y_crop==1], 0, "g^")
ax.plot_wireframe(x1, x2, df, alpha=0.3, color="k")
ax.plot(X_crop[:, 0][y_crop==0], X_crop[:, 1][y_crop==0], 0, "bs")
ax.axis(x1_lim + x2_lim)
ax.text(4.5, 2.5, 3.8, "Decision function $h$", fontsize=15)
ax.set_xlabel(r"Petal length", fontsize=15)
ax.set_ylabel(r"Petal width", fontsize=15)
ax.set_zlabel(r"$h = \mathbf{w}^T \mathbf{x} + b$", fontsize=18)
ax.legend(loc="upper left", fontsize=16)
fig = plt.figure(figsize=(11, 6))
ax1 = fig.add_subplot(111, projection='3d')
plot_3D_decision_function(ax1, w=svm_clf2.coef_[0], b=svm_clf2.intercept_[0])
#save_fig("iris_3D_plot")
plt.show()
```
# Small weight vector results in a large margin
```
def plot_2D_decision_function(w, b, ylabel=True, x1_lim=[-3, 3]):
x1 = np.linspace(x1_lim[0], x1_lim[1], 200)
y = w * x1 + b
m = 1 / w
plt.plot(x1, y)
plt.plot(x1_lim, [1, 1], "k:")
plt.plot(x1_lim, [-1, -1], "k:")
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.plot([m, m], [0, 1], "k--")
plt.plot([-m, -m], [0, -1], "k--")
plt.plot([-m, m], [0, 0], "k-o", linewidth=3)
plt.axis(x1_lim + [-2, 2])
plt.xlabel(r"$x_1$", fontsize=16)
if ylabel:
plt.ylabel(r"$w_1 x_1$ ", rotation=0, fontsize=16)
plt.title(r"$w_1 = {}$".format(w), fontsize=16)
plt.figure(figsize=(12, 3.2))
plt.subplot(121)
plot_2D_decision_function(1, 0)
plt.subplot(122)
plot_2D_decision_function(0.5, 0, ylabel=False)
save_fig("small_w_large_margin_plot")
plt.show()
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64) # Iris-Virginica
svm_clf = SVC(kernel="linear", C=1)
svm_clf.fit(X, y)
svm_clf.predict([[5.3, 1.3]])
```
# Hinge loss
```
t = np.linspace(-2, 4, 200)
h = np.where(1 - t < 0, 0, 1 - t) # max(0, 1-t)
plt.figure(figsize=(5,2.8))
plt.plot(t, h, "b-", linewidth=2, label="$max(0, 1 - t)$")
plt.grid(True, which='both')
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.yticks(np.arange(-1, 2.5, 1))
plt.xlabel("$t$", fontsize=16)
plt.axis([-2, 4, -1, 2.5])
plt.legend(loc="upper right", fontsize=16)
save_fig("hinge_plot")
plt.show()
```
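The `np.where` expression above is one way to write the hinge function; `np.maximum` expresses $max(0, 1-t)$ more directly. A quick sketch checking that the two formulations agree:

```python
import numpy as np

t = np.linspace(-2, 4, 200)
h_where = np.where(1 - t < 0, 0, 1 - t)  # hinge via np.where, as above
h_max = np.maximum(0, 1 - t)             # hinge via np.maximum

print(np.allclose(h_where, h_max))  # → True: identical values everywhere
```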
# Extra material
## Training time
```
X, y = make_moons(n_samples=1000, noise=0.4, random_state=42)
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "bs")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "g^")
import time
tol = 0.1
tols = []
times = []
for i in range(10):
svm_clf = SVC(kernel="poly", gamma=3, C=10, tol=tol, verbose=1)
t1 = time.time()
svm_clf.fit(X, y)
t2 = time.time()
times.append(t2-t1)
tols.append(tol)
print(i, tol, t2-t1)
tol /= 10
plt.semilogx(tols, times)
```
## Linear SVM classifier implementation using Batch Gradient Descent
```
# Training set
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(np.float64).reshape(-1, 1) # Iris-Virginica
from sklearn.base import BaseEstimator
class MyLinearSVC(BaseEstimator):
def __init__(self, C=1, eta0=1, eta_d=10000, n_epochs=1000, random_state=None):
self.C = C
self.eta0 = eta0
self.n_epochs = n_epochs
self.random_state = random_state
self.eta_d = eta_d
def eta(self, epoch):
return self.eta0 / (epoch + self.eta_d)
def fit(self, X, y):
# Random initialization
if self.random_state:
np.random.seed(self.random_state)
w = np.random.randn(X.shape[1], 1) # n feature weights
b = 0
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_t = X * t
self.Js=[]
# Training
for epoch in range(self.n_epochs):
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
X_t_sv = X_t[support_vectors_idx]
t_sv = t[support_vectors_idx]
J = 1/2 * np.sum(w * w) + self.C * (np.sum(1 - X_t_sv.dot(w)) - b * np.sum(t_sv))
self.Js.append(J)
w_gradient_vector = w - self.C * np.sum(X_t_sv, axis=0).reshape(-1, 1)
            b_derivative = -self.C * np.sum(t_sv)  # use the instance's C, not a global
w = w - self.eta(epoch) * w_gradient_vector
b = b - self.eta(epoch) * b_derivative
self.intercept_ = np.array([b])
self.coef_ = np.array([w])
support_vectors_idx = (X_t.dot(w) + t * b < 1).ravel()
self.support_vectors_ = X[support_vectors_idx]
return self
def decision_function(self, X):
return X.dot(self.coef_[0]) + self.intercept_[0]
def predict(self, X):
return (self.decision_function(X) >= 0).astype(np.float64)
C=2
svm_clf = MyLinearSVC(C=C, eta0 = 10, eta_d = 1000, n_epochs=60000, random_state=2)
svm_clf.fit(X, y)
svm_clf.predict(np.array([[5, 2], [4, 1]]))
plt.plot(range(svm_clf.n_epochs), svm_clf.Js)
plt.axis([0, svm_clf.n_epochs, 0, 100])
print(svm_clf.intercept_, svm_clf.coef_)
svm_clf2 = SVC(kernel="linear", C=C)
svm_clf2.fit(X, y.ravel())
print(svm_clf2.intercept_, svm_clf2.coef_)
yr = y.ravel()
plt.figure(figsize=(12,3.2))
plt.subplot(121)
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^", label="Iris-Virginica")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs", label="Not Iris-Virginica")
plot_svc_decision_boundary(svm_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("MyLinearSVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
plt.subplot(122)
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(svm_clf2, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.title("SVC", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(loss="hinge", alpha = 0.017, max_iter = 50, tol=-np.inf, random_state=42)
sgd_clf.fit(X, y.ravel())
m = len(X)
t = y * 2 - 1 # -1 if t==0, +1 if t==1
X_b = np.c_[np.ones((m, 1)), X] # Add bias input x0=1
X_b_t = X_b * t
sgd_theta = np.r_[sgd_clf.intercept_[0], sgd_clf.coef_[0]]
print(sgd_theta)
support_vectors_idx = (X_b_t.dot(sgd_theta) < 1).ravel()
sgd_clf.support_vectors_ = X[support_vectors_idx]
sgd_clf.C = C
plt.figure(figsize=(5.5,3.2))
plt.plot(X[:, 0][yr==1], X[:, 1][yr==1], "g^")
plt.plot(X[:, 0][yr==0], X[:, 1][yr==0], "bs")
plot_svc_decision_boundary(sgd_clf, 4, 6)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.title("SGDClassifier", fontsize=14)
plt.axis([4, 6, 0.8, 2.8])
```
# Exercise solutions
## 1. to 7.
See appendix A.
# 8.
_Exercise: train a `LinearSVC` on a linearly separable dataset. Then train an `SVC` and a `SGDClassifier` on the same dataset. See if you can get them to produce roughly the same model._
Let's use the Iris dataset: the Iris Setosa and Iris Versicolor classes are linearly separable.
```
from sklearn import datasets
iris = datasets.load_iris()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
setosa_or_versicolor = (y == 0) | (y == 1)
X = X[setosa_or_versicolor]
y = y[setosa_or_versicolor]
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.preprocessing import StandardScaler
C = 5
alpha = 1 / (C * len(X))
lin_clf = LinearSVC(loss="hinge", C=C, random_state=42)
svm_clf = SVC(kernel="linear", C=C)
sgd_clf = SGDClassifier(loss="hinge", learning_rate="constant", eta0=0.001, alpha=alpha,
max_iter=100000, tol=-np.inf, random_state=42)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
lin_clf.fit(X_scaled, y)
svm_clf.fit(X_scaled, y)
sgd_clf.fit(X_scaled, y)
print("LinearSVC: ", lin_clf.intercept_, lin_clf.coef_)
print("SVC: ", svm_clf.intercept_, svm_clf.coef_)
print("SGDClassifier(alpha={:.5f}):".format(sgd_clf.alpha), sgd_clf.intercept_, sgd_clf.coef_)
```
Let's plot the decision boundaries of these three models:
```
# Compute the slope and bias of each decision boundary
w1 = -lin_clf.coef_[0, 0]/lin_clf.coef_[0, 1]
b1 = -lin_clf.intercept_[0]/lin_clf.coef_[0, 1]
w2 = -svm_clf.coef_[0, 0]/svm_clf.coef_[0, 1]
b2 = -svm_clf.intercept_[0]/svm_clf.coef_[0, 1]
w3 = -sgd_clf.coef_[0, 0]/sgd_clf.coef_[0, 1]
b3 = -sgd_clf.intercept_[0]/sgd_clf.coef_[0, 1]
# Transform the decision boundary lines back to the original scale
line1 = scaler.inverse_transform([[-10, -10 * w1 + b1], [10, 10 * w1 + b1]])
line2 = scaler.inverse_transform([[-10, -10 * w2 + b2], [10, 10 * w2 + b2]])
line3 = scaler.inverse_transform([[-10, -10 * w3 + b3], [10, 10 * w3 + b3]])
# Plot all three decision boundaries
plt.figure(figsize=(11, 4))
plt.plot(line1[:, 0], line1[:, 1], "k:", label="LinearSVC")
plt.plot(line2[:, 0], line2[:, 1], "b--", linewidth=2, label="SVC")
plt.plot(line3[:, 0], line3[:, 1], "r-", label="SGDClassifier")
plt.plot(X[:, 0][y==1], X[:, 1][y==1], "bs") # label="Iris-Versicolor"
plt.plot(X[:, 0][y==0], X[:, 1][y==0], "yo") # label="Iris-Setosa"
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper center", fontsize=14)
plt.axis([0, 5.5, 0, 2])
plt.show()
```
Close enough!
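Eyeballing the plot is one way to check; the agreement can also be quantified directly. A minimal sketch (using a small synthetic linearly separable dataset so it is self-contained, an assumption standing in for the Setosa/Versicolor data above) comparing the predictions of the three model types:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC
from sklearn.linear_model import SGDClassifier

# Two well-separated clusters: a linearly separable toy dataset
X, y = make_blobs(n_samples=100, centers=[[-3, -3], [3, 3]],
                  cluster_std=1.0, random_state=42)
X = StandardScaler().fit_transform(X)

C = 5
models = [
    LinearSVC(loss="hinge", C=C, random_state=42),
    SVC(kernel="linear", C=C),
    SGDClassifier(loss="hinge", alpha=1 / (C * len(X)), random_state=42),
]
preds = [m.fit(X, y).predict(X) for m in models]

# Fraction of points on which SVC and SGDClassifier agree with LinearSVC
for p in preds[1:]:
    print(np.mean(preds[0] == p))
```

On a cleanly separable dataset all three converge to essentially the same classifier, so the agreement fractions should be at (or extremely close to) 1.0.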
# 9.
_Exercise: train an SVM classifier on the MNIST dataset. Since SVM classifiers are binary classifiers, you will need to use one-versus-all to classify all 10 digits. You may want to tune the hyperparameters using small validation sets to speed up the process. What accuracy can you reach?_
First, let's load the dataset and split it into a training set and a test set. We could use `train_test_split()` but people usually just take the first 60,000 instances for the training set, and the last 10,000 instances for the test set (this makes it possible to compare your model's performance with others):
```
try:
from sklearn.datasets import fetch_openml
mnist = fetch_openml('mnist_784', version=1, cache=True)
except ImportError:
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original')
X = mnist["data"]
y = mnist["target"]
X_train = X[:60000]
y_train = y[:60000]
X_test = X[60000:]
y_test = y[60000:]
```
Many training algorithms are sensitive to the order of the training instances, so it's generally good practice to shuffle them first:
```
np.random.seed(42)
rnd_idx = np.random.permutation(60000)
X_train = X_train[rnd_idx]
y_train = y_train[rnd_idx]
```
Let's start simple, with a linear SVM classifier. It will automatically use the One-vs-All (also called One-vs-the-Rest, OvR) strategy, so there's nothing special we need to do. Easy!
```
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train, y_train)
```
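One way to see the OvR strategy at work is to inspect the fitted model: `LinearSVC` learns one binary classifier per class, i.e. one weight vector and one intercept each. A small sketch on scikit-learn's built-in digits dataset (used here instead of MNIST so it runs in seconds):

```python
from sklearn.datasets import load_digits
from sklearn.svm import LinearSVC

digits = load_digits()  # 10 classes, 64 features, bundled with scikit-learn
clf = LinearSVC(random_state=42, max_iter=10000)
clf.fit(digits.data, digits.target)

# One binary classifier per class: one row of weights and one intercept each
print(clf.coef_.shape)       # → (10, 64)
print(clf.intercept_.shape)  # → (10,)
```

On MNIST the same model would have `coef_` of shape (10, 784), one weight vector per digit.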
Let's make predictions on the training set and measure the accuracy (we don't want to measure it on the test set yet, since we have not selected and trained the final model yet):
```
from sklearn.metrics import accuracy_score
y_pred = lin_clf.predict(X_train)
accuracy_score(y_train, y_pred)
```
Wow, 86% accuracy on MNIST is a really bad performance. This linear model is certainly too simple for MNIST, but perhaps we just needed to scale the data first:
```
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float32))
X_test_scaled = scaler.transform(X_test.astype(np.float32))
lin_clf = LinearSVC(random_state=42)
lin_clf.fit(X_train_scaled, y_train)
y_pred = lin_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
```
That's much better (we cut the error rate roughly in half), but still not great for MNIST. If we want to use an SVM, we will have to use a kernel. Let's try an `SVC` with an RBF kernel (the default).
**Warning**: if you are using Scikit-Learn < 0.19, the `SVC` class will use the One-vs-One (OvO) strategy by default, so you must explicitly set `decision_function_shape="ovr"` if you want to use the OvR strategy instead (OvR has been the default since 0.19).
```
svm_clf = SVC(decision_function_shape="ovr", gamma="auto")
svm_clf.fit(X_train_scaled[:10000], y_train[:10000])
y_pred = svm_clf.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
```
That's promising: we get better performance even though we trained the model on only one sixth of the data. Let's tune the hyperparameters by doing a randomized search with cross validation. We will do this on a small dataset just to speed up the process:
```
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(svm_clf, param_distributions, n_iter=10, verbose=2, cv=3)
rnd_search_cv.fit(X_train_scaled[:1000], y_train[:1000])
rnd_search_cv.best_estimator_
rnd_search_cv.best_score_
```
This looks pretty low but remember we only trained the model on 1,000 instances. Let's retrain the best estimator on the whole training set (run this at night, it will take hours):
```
rnd_search_cv.best_estimator_.fit(X_train_scaled, y_train)
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
accuracy_score(y_train, y_pred)
```
Ah, this looks good! Let's select this model. Now we can test it on the test set:
```
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
accuracy_score(y_test, y_pred)
```
Not too bad, but apparently the model is overfitting slightly. It's tempting to tweak the hyperparameters a bit more (e.g. decreasing `C` and/or `gamma`), but we would run the risk of overfitting the test set. Other people have found that the hyperparameters `C=5` and `gamma=0.005` yield even better performance (over 98% accuracy). By running the randomized search for longer and on a larger part of the training set, you may be able to find this as well.
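The `C=5`, `gamma=0.005` combination mentioned above can be tried directly. Training an RBF `SVC` on all of MNIST takes hours, so the sketch below uses scikit-learn's small built-in digits dataset as a quick stand-in (an assumption: the exact accuracy numbers will differ from MNIST, but the workflow is identical):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

digits = load_digits()
X_tr, X_te, y_tr, y_te = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_tr = scaler.fit_transform(X_tr.astype(float))
X_te = scaler.transform(X_te.astype(float))

# Hyperparameters reported to work well on MNIST, reused here for illustration
clf = SVC(C=5, gamma=0.005)
clf.fit(X_tr, y_tr)
print(accuracy_score(y_te, clf.predict(X_te)))
```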
## 10.
_Exercise: train an SVM regressor on the California housing dataset._
Let's load the dataset using Scikit-Learn's `fetch_california_housing()` function:
```
from sklearn.datasets import fetch_california_housing
housing = fetch_california_housing()
X = housing["data"]
y = housing["target"]
```
Split it into a training set and a test set:
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```
Don't forget to scale the data:
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
```
Let's train a simple `LinearSVR` first:
```
from sklearn.svm import LinearSVR
lin_svr = LinearSVR(random_state=42)
lin_svr.fit(X_train_scaled, y_train)
```
Let's see how it performs on the training set:
```
from sklearn.metrics import mean_squared_error
y_pred = lin_svr.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
mse
```
Let's look at the RMSE:
```
np.sqrt(mse)
```
In this training set, the targets are the median house values expressed in units of $100,000. The RMSE gives a rough idea of the kind of error you should expect (with a higher weight for large errors): multiplying the RMSE by $100,000 gives the typical prediction error in dollars, so the RMSE of roughly 1.0 obtained above corresponds to errors close to $100,000. Not great. Let's see if we can do better with an RBF kernel. We will use randomized search with cross validation to find the appropriate hyperparameter values for `C` and `gamma`:
```
from sklearn.svm import SVR
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(SVR(), param_distributions, n_iter=10, verbose=2, cv=3, random_state=42)
rnd_search_cv.fit(X_train_scaled, y_train)
rnd_search_cv.best_estimator_
```
Now let's measure the RMSE on the training set:
```
y_pred = rnd_search_cv.best_estimator_.predict(X_train_scaled)
mse = mean_squared_error(y_train, y_pred)
np.sqrt(mse)
```
Looks much better than the linear model. Let's select this model and evaluate it on the test set:
```
y_pred = rnd_search_cv.best_estimator_.predict(X_test_scaled)
mse = mean_squared_error(y_test, y_pred)
np.sqrt(mse)
```
```
%pylab inline
# Consolidated imports
import time
import numpy as np
import scipy as sci
import pylab as pl
from numpy import *
from scipy import integrate, interpolate, optimize
from scipy.interpolate import interp1d, barycentric_interpolate
from scipy.optimize import curve_fit
# Local course modules
from gaussElimin import *
from gaussPivot import *
from ridder import *
from newtonRaphson import *
from newtonRaphson2 import *
from printSoln import *
import run_kut4 as runkut
```
$$ Making-Matrices $$
```
from numpy import array
a=array([[2.0,1.0],[3.0,4.0]])
print(a)
b=(numpy.zeros((2,2)))
print (b)
c=(numpy.arange(10,20,2))
print (c)
d = numpy.linspace(0,8,9).reshape(3,3)
print(d)
d[0]=[2,3,5] # Change a row
d[1,1]=6 # Change an element
d[2,0:2]=[8,-3] # Change part of a row
print(d)
```
$$ Creating-matrices-of-functions $$
```
def f(x):
return x**3 # sample function
n = 5 # no of points in [0,1]
dx = 1.0/(n-1) # x spacing
xlist = [i*dx for i in range(n)]
ylist = [f(x) for x in xlist]
import numpy as np
x2 = np.array(xlist)
y2 = np.array(ylist)
print(x2,y2)
n = 5 # number of points
x3 = np.linspace(0, 1, n) # n points in [0, 1]
y3 = np.zeros(n) # n zeros (float data type)
for i in range(n):
y3[i] = f(x3[i])
print(x3,y3)
from numpy.linalg import inv,solve
print(inv(a)) # Matrix inverse
print(solve(a,b)) # Solve the system of equations [A]{x} = {b}
```
$$ Plotting $$
```
import numpy as np
import matplotlib.pyplot as plt
def f(x):
return x**2
dx = 1
x0 = [i*dx for i in range (-5,6)]
y = [f(x) for x in x0]
x1 = np.array(x0)
y1 = np.array(y)
print (x1,y1)
```
$$ Fitting-data-to-graphs $$
```
plt.plot(x1, y1, ':rs', label=r'$y = x^2$')  # ':rs' = dotted line, red squares
plt.xlabel("x")
plt.ylabel("y")
plt.axis([-6, 6, -1, 30])
plt.legend()
plt.show()
```
$$ Plotting: A e^{-kx}\cos(2\pi \nu x) $$
```
#parameters
A, nu, k = 10, 4, 2
#function for creating the data points to be interpolated
def f(x, A, nu, k):
return A * np.exp(-k*x) * np.cos(2*np.pi * nu * x)
#create the data points to be interpolated
xmax, nx = 0.5, 8
x = np.linspace(0, xmax, nx) #(starting point, end point, number of points)
y = f(x, A, nu, k) #X and Y are the data points
#Polynomial Fit
#generate the points where we want to evaluate the interpolating functions
x0 = np.linspace(0, xmax, 100)
#polynomial interpolation - this gives vector y0 where the polynomial is already evaluated
from scipy.interpolate import barycentric_interpolate
y0 = barycentric_interpolate(x, y, x0) #x0 and y0 are the polynomial-fitted data
print(y0)
# splines: linear and cubic
from scipy.interpolate import interp1d
f_linear = interp1d(x, y)
f_cubic = interp1d(x, y, kind='cubic')
#plot all results and the original data
import pylab as pl
pl.plot(x, y, 'o', label='data points')
pl.plot(x0, y0, label='polynomial')
pl.plot(x0, f_linear(x0), label='linear')
pl.plot(x0, f_cubic(x0), label='cubic')
pl.legend()
pl.show()
```
$$ Solving-Equations $$
```
from bisection import *
from ridder import *
# Import the required modules
import numpy as np
import pylab as pl
import scipy as sci
from scipy import optimize
from newtonRaphson import *
# First set up the system of equations - note that it is a vector of equations!
def f(x):
    return np.array([x[0]**2+x[1]**2-3,x[0]*x[1]-1])
# Initial guess for the roots (e.g. from plotting the two functions) - again a vector
x0=np.array([0.5,1.5])
roots_solve=sci.optimize.fsolve(f,x0)
print(roots_solve)
```
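`fsolve` hides the iteration it performs; a minimal hand-rolled Newton–Raphson for the same two-equation system (a sketch using an analytic Jacobian, not the course's `newtonRaphson2` module) looks like:

```python
import numpy as np

def f(x):
    # same system as above: x0^2 + x1^2 = 3, x0*x1 = 1
    return np.array([x[0]**2 + x[1]**2 - 3.0, x[0]*x[1] - 1.0])

def jacobian(x):
    # analytic Jacobian of f
    return np.array([[2.0*x[0], 2.0*x[1]],
                     [x[1],     x[0]]])

x = np.array([0.5, 1.5])   # same initial guess as above
for _ in range(50):
    # each step solves J dx = -f and updates the estimate
    dx = np.linalg.solve(jacobian(x), -f(x))
    x = x + dx
    if np.linalg.norm(dx) < 1e-12:
        break
print(x)
```

The iteration converges quadratically to the same root that `fsolve` finds from this starting point.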
$$ Integrating $$
```
import scipy
from scipy import integrate
from pylab import *
from scipy import interpolate, optimize
from numpy import *
def f(t):
    return -t**(2.0)+(3.0)*t+3.0
from trapezoid import *
from romberg import *
scipy.integrate.romberg(f,-4.0,3.0)
scipy.integrate.quad(f,-4.0,3.0)
#Trapezoid method example
r = zeros(21) # we will be storing the results here
r[1] = trapezoid(f,-4.0,3.0,0.0,1) # first call is special, since no
# result to be refined yet exists
for k in range(2,21):
    r[k] = trapezoid(f,-4.0,3.0,r[k-1],k) # refinements of the answer using ever more points
result=r[20]
print('Trapezoid method result: ',result)
from scipy.integrate import quad as sciquad
sciquad(f,-4.0,3.0) # scipy's general-purpose adaptive quadrature, for comparison
```
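The `trapezoid` module above is course-provided code; assuming the common textbook interface `trapezoid(f, a, b, Iold, k)`, a self-contained sketch of the recursive refinement it implements is:

```python
def trapezoid(f, a, b, Iold, k):
    # Recursive trapezoidal rule: each call doubles the number of panels
    # and refines the previous estimate Iold.
    if k == 1:
        return (f(a) + f(b)) * (b - a) / 2.0
    n = 2**(k - 2)          # number of new interior points
    h = (b - a) / n         # spacing between new points
    x = a + h / 2.0         # first new point
    total = 0.0
    for _ in range(n):
        total += f(x)
        x += h
    return (Iold + h * total) / 2.0

def f(t):
    return -t**2 + 3.0*t + 3.0

I = trapezoid(f, -4.0, 3.0, 0.0, 1)
for k in range(2, 21):
    I = trapezoid(f, -4.0, 3.0, I, k)
print(I)
```

Each refinement reuses the previous estimate and only evaluates `f` at the new midpoints, which is what makes the recursion cheap.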
$$ Solving-Differential-Equations $$
```
from printSoln import *
from run_kut4 import *
import pylab as pl
# First set up the right-hand side (RHS) of the equation
def f(x,y):
    f=zeros(1) # sets up RHS as a vector (here of just one element)
    f[0]=y[0]*(1.0-y[0]) # RHS; note that y is also a vector
    return f
# For solving a first order differential equation
# Example: using Runge-Kutta of 4th order
x = 0.0 #Integration Start Limit
xStop = 5.0 #Integration End Limit
y = array([0.1]) # Initial value of y
h = 0.001 # Step size
freq = 1000 # Printout frequency - print the result every 1000 steps
X,Y = integrate(f,x,y,xStop,h) # call the RK4 solver
printSoln(X,Y,freq) # Print the solution (code on SD)
pl.plot(X,Y[:,0]) # Plot the solution
pl.xlabel('Time')
pl.ylabel('Population')
pl.show()
# For solving a first order differential equation
# Same example equation solved with the internal solver
# First set up the right-hand side (RHS) of the equation
# NOTE THE DIFFERENT ORDER OF THE FUNCTION ARGUMENTS COMPARED TO ABOVE
def g(y,x):
    g=zeros(1) # sets up RHS as a vector
    g[0]=y[0]*(1.0-y[0]) # RHS; note that y is also a vector
    return g
x=np.linspace(0,5,100) # where do we want the solution
y0=array([0.1]) # initial condition
z=scipy.integrate.odeint(g,y0,x) # call the solver
z=z.reshape(np.size(x)) # reformat the answer
pl.plot(x,z) # Plot the solution
pl.xlabel('Time')
pl.ylabel('Population')
pl.show()
# For solving two interlinked differential equations
# Define right-hand sides of equations (into a vector!).
# 'y', containing all functions to be solved for, is also a vector
def F(x,y,a=1.0,b=2.0,c=1.0,d=2.0):
    F = zeros(2)
    F[0] = y[0]*(a-b*y[1])
    F[1] = y[1]*(c*y[0]-d)
    return F
x = 0.0 # Start of integration
xStop = 10.0 # End of integration
y = array([0.1, 0.03]) # Initial values of {y}
h = 0.05 # Step size
freq = 20 # Printout frequency
X,Y = integrate(F,x,y,xStop,h)
printSoln(X,Y,freq)
pl.plot(X,Y[:,0],label='Rabbit population')
pl.plot(X,Y[:,1],label='Fox population')
pl.xlabel('Time')
pl.legend()
pl.show()
# Define the right hand side
def f(y,t):
    return y**2-y**3
# Parameter
delta=0.001
# Where do we want the solution?
x=np.linspace(0,2./delta,100)
# Call the solver
z=scipy.integrate.odeint(f,delta,x)
z=z.reshape(np.size(x)) # reformat the answer
pl.plot(x,z) # Plot the solution
pl.xlabel('Time')
pl.ylabel('Position')
pl.show()
```
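`run_kut4` and `printSoln` are course-provided modules; a minimal self-contained classical RK4 stepper for the same logistic equation (a sketch, checked against the analytic solution) is:

```python
import math

def rk4_step(f, x, y, h):
    # One classical fourth-order Runge-Kutta step
    k1 = h * f(x, y)
    k2 = h * f(x + h/2.0, y + k1/2.0)
    k3 = h * f(x + h/2.0, y + k2/2.0)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2.0*k2 + 2.0*k3 + k4) / 6.0

def f(x, y):
    return y * (1.0 - y)   # logistic growth dy/dx = y(1 - y)

n = 5000                   # number of steps from x = 0 to x = 5
h = 5.0 / n
x, y = 0.0, 0.1            # initial condition y(0) = 0.1
ys = [y]
for _ in range(n):
    y = rk4_step(f, x, y, h)
    x += h
    ys.append(y)

# analytic solution with y(0) = 0.1: y(x) = 1 / (1 + 9 e^{-x})
exact = 1.0 / (1.0 + 9.0 * math.exp(-5.0))
print(ys[-1], exact)
```

With this step size the RK4 result agrees with the analytic solution to many digits, which is the same behaviour the course's `run_kut4.integrate` shows.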
# Introduction to Biomechanics
> Marcos Duarte
> Laboratory of Biomechanics and Motor Control ([http://demotu.org/](http://demotu.org/))
> Federal University of ABC, Brazil
## Biomechanics @ UFABC
```
from IPython.display import IFrame
IFrame('http://demotu.org', width='100%', height=500)
```
## Biomechanics
The origin of the word *Biomechanics* is evident:
$$ Biomechanics := bios \, (life) + mechanics $$
Professor Herbert Hatze, on a letter to the editors of the Journal of Biomechanics in 1974, proposed a (very good) definition for *the science called Biomechanics*:
> "*Biomechanics is the study of the structure and function of biological systems by means of the methods of mechanics.*"
Hatze H (1974) [The meaning of the term biomechanics](https://github.com/demotu/BMC/blob/master/courses/HatzeJB74biomechanics.pdf).
### Biomechanics & Mechanics
And Hatze, advocating for *Biomechanics to be a science of its own*, argues that Biomechanics **is not** simply Mechanics of (applied to) living systems:
> "*It would not be correct to state that 'Biomechanics is the study of the mechanical aspects of the structure and function of biological systems' because biological systems do not have mechanical aspects. They only have biomechanical aspects (otherwise mechanics, as it exists, would be sufficient to describe all phenomena which we now call biomechanical features of biological systems).*" Hatze (1974)
### Biomechanics vs. Mechanics
To support this argument, Hatze illustrates the difference between Biomechanics and the application of Mechanics with an example of a javelin throw: studying the mechanical aspects of the javelin's flight trajectory (using existing knowledge about aerodynamics and ballistics) vs. studying the biomechanical aspects of the phase before the javelin leaves the thrower's hand (there are no established mechanical models for this system).
### Branches of Mechanics
Mechanics is a branch of the physical sciences that is concerned with the state of rest or motion of bodies that are subjected to the action of forces. In general, this subject can be subdivided into three branches: rigid-body mechanics, deformable-body mechanics, and fluid mechanics (Hibbeler, 2012).
In fact, only a subset of Mechanics matters to Biomechanics, the Classical Mechanics subset, the domain of mechanics for bodies with moderate speeds $(\ll 3\times10^8\,m/s!)$ and not very small $(\gg 3\times10^{-9}\,m!)$ as shown in the following diagram (image from [Wikipedia](http://en.wikipedia.org/wiki/Classical_mechanics)):
<figure><img src="http://upload.wikimedia.org/wikipedia/commons/thumb/f/f0/Physicsdomains.svg/500px-Physicsdomains.svg.png" width=300 alt="Domains of mechanics"/></figure>
### Biomechanics & other Sciences I
One last point about the excellent letter from Hatze, already in 1974 he points for the following problem:
> "*The use of the term biomechanics imposes rather severe restrictions on its meaning because of the established definition of the term, mechanics. This is unfortunate, since the synonym Biomechanics, as it is being understood by the majority of biomechanists today, has a much wider meaning.*" Hatze (1974)
### Biomechanics & other Sciences II
Although the term Biomechanics may sound new to you, it's not rare for people to regard the use of methods outside the realm of Mechanics as Biomechanics.
For instance, electromyography and thermography are two methods that, although they may be useful in Biomechanics (particularly the former), clearly don't have any relation to Mechanics; Electromagnetism and Thermodynamics are other [branches of Physics](https://en.wikipedia.org/wiki/Branches_of_physics).
### Biomechanics & Engineering
Even seeing Biomechanics as a field of Science, as argued by Hatze, it's also possible to refer to Engineering Biomechanics considering that Engineering is "*the application of scientific and mathematical principles to practical ends*" [[The Free Dictionary](http://www.thefreedictionary.com/engineering)] and particularly that "*Engineering Mechanics is the application of Mechanics to solve problems involving common engineering elements*" [[Wikibooks]](https://en.wikibooks.org/wiki/Engineering_Mechanics), and, last but not least, that Biomedical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare purposes [[Wikipedia](https://en.wikipedia.org/wiki/Biomedical_engineering)].
### Applications of Biomechanics
Biomechanics matters to fields of science and technology related to biology and health and it's also relevant for the development of synthetic systems inspired on biological systems, as in robotics. To illustrate the variety of applications of Biomechanics, this is the current list of topics covered in the Journal of Biomechanics:
```
from IPython.display import IFrame
IFrame('http://www.jbiomech.com/aims', width='100%', height=500)
```
### On the branches of Mechanics and Biomechanics I
Nowadays, (Classical) Mechanics is typically partitioned into Statics and Dynamics. In turn, Dynamics is divided into Kinematics and Kinetics. This classification is clear: Dynamics is the study of the motion of bodies and Statics is the study of forces in the absence of changes in motion. Kinematics is the study of motion without considering its possible causes (forces) and Kinetics is the study of the possible causes of motion.
### On the branches of Mechanics and Biomechanics II
Nevertheless, it's common in Biomechanics to adopt a slightly different classification: to partition it between Kinematics and Kinetics, and then Kinetics into Statics and Dynamics (David Winter, Nigg & Herzog, and Vladimir Zatsiorsky, among others, use this classification in their books). The rationale is that we first separate the study of motion according to whether or not its causes (forces) are considered. Partitioning (Bio)Mechanics in this way is useful because it is simpler to study and describe (measure) the kinematics of human motion and then move to the more complicated issue of understanding (measuring) the forces related to human motion.
Anyway, these different classifications reveal a certain contradiction between Mechanics (particularly from an engineering point of view) and Biomechanics; some scholars will say that this taxonomy in Biomechanics is simply wrong and should be corrected to align with Mechanics. Be aware.
### The future of Biomechanics
(Human) Movement Science combines many disciplines of science (such as physiology, biomechanics, and psychology) for the study of human movement. Professor Benno Nigg claims that with the growing concern for the well-being of humankind, Movement Science will have an important role:
> Movement science will be one of the most important and most recognized science fields in the twenty-first century... The future discipline of movement science has a unique opportunity to become an important contributor to the well-being of mankind.
Nigg BM (1993) [Sport science in the twenty-first century](http://www.ncbi.nlm.nih.gov/pubmed/8230394). Journal of Sports Sciences, 77, 343-347.
And so Biomechanics will also become an important contributor to the well-being of humankind.
### Biomechanics and the Biomedical Engineering at UFABC (2017) I
At the university level, the study of Mechanics is typically done in the disciplines Statics and Dynamics (rigid-body mechanics), Strength of Materials (deformable-body mechanics), and Mechanics of Fluids (fluid mechanics). Consequently, the study on Biomechanics must also cover these topics for a greater understanding of the structure and function of biological systems.
### Biomechanics and the Biomedical Engineering at UFABC (2017) II
The Biomedical Engineering degree at UFABC covers these topics for the study of biological systems in different courses: Ciência dos Materiais Biocompatíveis, Modelagem e Simulação de Sistemas Biomédicos, Métodos de Elementos Finitos aplicados a Sistemas Biomédicos, Mecânica dos Fluidos, Caracterização de Biomateriais, Sistemas Biológicos, and last but not least, Biomecânica I & Biomecânica II.
How much of biological systems is in fact studied in these disciplines varies a lot. Anyway, none of these courses cover the study of human motion with implications to health, rehabilitation, and sports, except the last course. This is the reason why the courses Biomecânica I & II focus on the analysis of the human movement.
### More on Biomechanics
The Wikipedia page on biomechanics is a good place to read more about Biomechanics:
```
from IPython.display import IFrame
IFrame('http://en.m.wikipedia.org/wiki/Biomechanics', width='100%', height=400)
```
## History of Biomechanics
Biomechanics progressed basically with the advancements in Mechanics and with the invention of instrumentations for measuring mechanical quantities and computing.
The development of Biomechanics was only possible because people became more interested in the understanding of the structure and function of biological systems and to apply these concepts to the progress of the humankind.
## Aristotle (384-322 BC)
Aristotle was the first to have written about the movement of animals in his works *On the Motion of Animals (De Motu Animalium)* and *On the Gait of Animals (De Incessu Animalium)* [[Works by Aristotle]](http://classics.mit.edu/Browse/index-Aristotle.html).
Aristotle clearly already knew what we nowadays refer as Newton's third law of motion:
"*For as the pusher pushes so is the pushed pushed, and with equal force.*" [Part 3, [On the Motion of Animals](http://classics.mit.edu/Aristotle/motion_animals.html)]
### Aristotle & the Scientific Revolution I
Although Aristotle's contributions were invaluable to humankind, to make his discoveries he doesn't seem to have employed anything similar to what we today refer to as the [scientific method](https://en.wikipedia.org/wiki/Scientific_method) (the systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses).
Most of the Physics of Aristotle was ambiguous or incorrect; for example, for him there was no motion without a force. He even deduced that speed was proportional to force and inversely proportional to resistance [[Book VII, Physics](http://classics.mit.edu/Aristotle/physics.7.vii.html)]. Perhaps Aristotle was too influenced by the observation of motion of a body under the action of a friction force, where this notion is not at all unreasonable.
### Aristotle & the Scientific Revolution II
If Aristotle performed any observation/experiment at all in his works, he probably was not good on that as, ironically, evinced in this part of his writing:
> "Males have more teeth than females in the case of men, sheep, goats, and swine; in the case of other animals observations have not yet been made". Aristotle [The History of Animals](http://classics.mit.edu/Aristotle/history_anim.html).
## Leonardo da Vinci (1452-1519)
<figure><img src='https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Da_Vinci_Vitruve_Luc_Viatour.jpg/353px-Da_Vinci_Vitruve_Luc_Viatour.jpg' width="240" alt="Vitruvian Man" style="float:right;margin: 0 0 0 20px;"/></figure>
Contributions of Leonardo to Biomechanics:
- Studies on the proportions of humans and animals
- Anatomy studies of the human body, especially the foot
- Studies on the mechanical function of muscles
<br><br>
*"Le proporzioni del corpo umano secondo Vitruvio", also known as the [Vitruvian Man](https://en.wikipedia.org/wiki/Vitruvian_Man), drawing by [Leonardo da Vinci](https://en.wikipedia.org/wiki/Leonardo_da_Vinci) circa 1490 based on the work of [Marcus Vitruvius Pollio](https://en.wikipedia.org/wiki/Vitruvius) (1st century BC), depicting a man in supposedly ideal human proportions (image from [Wikipedia](https://en.wikipedia.org/wiki/Vitruvian_Man)).*
## Giovanni Alfonso Borelli (1608-1679)
<figure><img src='./../images/borelli.jpg' width="240" alt="Borelli" style="float:right;margin: 0 0 0 20px;"/></figure>
- [The father of biomechanics](https://en.wikipedia.org/wiki/Giovanni_Alfonso_Borelli); the first to apply the modern scientific method to 'Biomechanics' in his book [De Motu Animalium](http://www.e-rara.ch/doi/10.3931/e-rara-28707).
- Proposed that the levers of the musculoskeletal system magnify motion rather than force.
- Calculated the forces required for equilibrium in various joints of the human body before Newton published the laws of motion.
<br><br>
*Excerpt from the book De Motu Animalium*.
## More on the history of Biomechanics
See:
- <a href=http://courses.washington.edu/bioen520/notes/History_of_Biomechanics_(Martin_1999).pdf>http://courses.washington.edu/bioen520/notes/History_of_Biomechanics_(Martin_1999).pdf</a>
- [http://biomechanics.vtheatre.net/doc/history.html](http://biomechanics.vtheatre.net/doc/history.html)
- Chapter 1 of Nigg and Herzog (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678)
### The International Society of Biomechanics
The biomechanics community has an official scientific society, the [International Society of Biomechanics](http://isbweb.org/), with a journal, the [Journal of Biomechanics](http://www.jbiomech.com), and an e-mail list, the [Biomch-L](http://biomch-l.isbweb.org):
```
from IPython.display import IFrame
IFrame('http://biomch-l.isbweb.org/forums/2-General-Discussion', width='100%', height=400)
```
### Examples of Biomechanics Classes around the World
```
from IPython.display import IFrame
IFrame('http://pages.uoregon.edu/karduna/biomechanics/bme.htm', width='100%', height=400)
```
## Problems
1. Go to [Biomechanics Classes on the Web](http://pages.uoregon.edu/karduna/biomechanics/) to visit websites of biomechanics classes around the world and find out how biomechanics is studied in different fields.
2. Find examples of applications of biomechanics in different areas.
3. Watch the video [The Weird World of Eadweard Muybridge](http://youtu.be/5Awo-P3t4Ho) to learn about [Eadweard Muybridge](http://en.wikipedia.org/wiki/Eadweard_Muybridge), an important person to the development of instrumentation for biomechanics.
4. Think about practical problems in nature that can be studied in biomechanics with simple approaches (simple modeling and low-tech methods) or very complicated approaches (complex modeling and high-tech methods).
5. What might the study of the biomechanics of athletes, children, the elderly, persons with disabilities, other animals, and computer animation for the cinema industry have in common, and how might they differ?
6. Visit the website of the Laboratory of Biomechanics and Motor Control at UFABC and find out what we do and if there is anything you are interested in.
7. Is there anything in biomechanics that interests you? How could you pursue this interest?
## References
- [Biomechanics - Wikipedia, the free encyclopedia](http://en.wikipedia.org/wiki/Biomechanics)
- [Mechanics - Wikipedia, the free encyclopedia](http://en.wikipedia.org/wiki/Mechanics)
- [International Society of Biomechanics](http://isbweb.org/)
- [Biomech-l, the biomechanics' e-mail list](http://biomch-l.isbweb.org/)
- [Journal of Biomechanics' aims](http://www.jbiomech.com/aims)
- <a href="http://courses.washington.edu/bioen520/notes/History_of_Biomechanics_(Martin_1999).pdf">A Genealogy of Biomechanics</a>
- Duarte M (2014) A física da bicicleta no futebol. Ciência Hoje, 53, 313, 16-21. [Online](http://www.cienciahoje.org.br/revista/materia/id/824/n/a_fisica_da_bicicleta_no_futebol), [PDF](http://demotu.org/pubs/CH14.pdf). [Biomechanics of the Bicycle Kick website](http://demotu.org/x/pele/)
- Hatze H (1974) [The meaning of the term biomechanics](https://github.com/demotu/BMC/blob/master/courses/HatzeJB74biomechanics.pdf). Journal of Biomechanics, 7, 189–190.
- Hibbeler RC (2012) [Engineering Mechanics: Statics](http://books.google.com.br/books?id=PSEvAAAAQBAJ). Prentice Hall; 13 edition.
- Nigg BM and Herzog W (2006) [Biomechanics of the Musculo-skeletal System](https://books.google.com.br/books?id=hOIeAQAAIAAJ&dq=editions:ISBN0470017678). 3rd Edition. Wiley.
- Winter DA (2009) [Biomechanics and motor control of human movement](http://books.google.com.br/books?id=_bFHL08IWfwC). 4 ed. Hoboken, EUA: Wiley.
- Zatsiorsky VM (1997) [Kinematics of Human Motion](http://books.google.com.br/books/about/Kinematics_of_Human_Motion.html?id=Pql_xXdbrMcC&redir_esc=y). Champaign, Human Kinetics.
- Zatsiorsky VM (2002) [Kinetics of human motion](http://books.google.com.br/books?id=wp3zt7oF8a0C). Human Kinetics.
# Keras Exercise
## Predict political party based on votes
As a fun little example, we'll use a public data set of how US congressmen voted on 17 different issues in the year 1984. Let's see if we can figure out their political party based on their votes alone, using a deep neural network!
For those outside the United States, our two main political parties are "Democrat" and "Republican." In modern times they represent progressive and conservative ideologies, respectively.
Politics in 1984 weren't quite as polarized as they are today, but you should still be able to get over 90% accuracy without much trouble.
Since the point of this exercise is implementing neural networks in Keras, I'll help you to load and prepare the data.
Let's start by importing the raw CSV file using Pandas, and make a DataFrame out of it with nice column labels:
```
import pandas as pd
feature_names = ['party','handicapped-infants', 'water-project-cost-sharing',
'adoption-of-the-budget-resolution', 'physician-fee-freeze',
'el-salvador-aid', 'religious-groups-in-schools',
'anti-satellite-test-ban', 'aid-to-nicaraguan-contras',
'mx-missle', 'immigration', 'synfuels-corporation-cutback',
'education-spending', 'superfund-right-to-sue', 'crime',
'duty-free-exports', 'export-administration-act-south-africa']
voting_data = pd.read_csv('../datasets/house-votes-84.data.txt', na_values=['?'],
names = feature_names)
voting_data.head()
```
We can use describe() to get a feel of how the data looks in aggregate:
```
voting_data.describe()
```
We can see there's some missing data to deal with here; some politicians abstained on some votes, or just weren't present when the vote was taken. We will just drop the rows with missing data to keep it simple, but in practice you'd want to first make sure that doing so didn't introduce any sort of bias into your analysis (if one party abstains more than another, for example, that could be problematic).
```
voting_data.dropna(inplace=True)
voting_data.describe()
```
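Before dropping rows, one quick sanity check for abstention bias is the fraction of missing votes per party; a sketch on a toy frame with the same layout (the vote column names here are invented):

```python
import pandas as pd
import numpy as np

# Toy stand-in for voting_data: a party label plus two vote columns with NaNs
df = pd.DataFrame({
    'party': ['democrat', 'democrat', 'republican', 'republican'],
    'vote-a': [1.0, np.nan, 1.0, 0.0],
    'vote-b': [0.0, 1.0, np.nan, np.nan],
})

# Fraction of missing votes per party - a large gap between parties would
# suggest that dropping rows could skew the class balance
missing_by_party = df.drop(columns='party').isna().groupby(df['party']).mean().mean(axis=1)
print(missing_by_party)
```

On the real `voting_data` frame the same one-liner works unchanged, with `feature_names` minus `'party'` as the vote columns.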
Our neural network needs normalized numbers, not strings, to work. So let's replace all the y's and n's with 1's and 0's, and represent the parties as 1's and 0's as well.
```
voting_data.replace(('y', 'n'), (1, 0), inplace=True)
voting_data.replace(('democrat', 'republican'), (1, 0), inplace=True)
voting_data.head()
```
Finally let's extract the features and labels in the form that Keras will expect:
```
all_features = voting_data[feature_names].drop('party', axis=1).values
all_classes = voting_data['party'].values
```
OK, so have a go at it! You'll want to refer back to the slide on using Keras with binary classification - there are only two parties, so this is a binary problem. This also saves us the hassle of representing classes with "one-hot" format like we had to do with MNIST; our output is just a single 0 or 1 value.
Also refer to the scikit_learn integration slide, and use cross_val_score to evaluate your resulting model with 10-fold cross-validation.
**If you're using tensorflow-gpu on a Windows machine** by the way, you probably *do* want to peek a little bit at my solution - if you run into memory allocation errors, there's a workaround there you can use.
Try out your code here:
## My implementation is below
```
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.models import Sequential
from sklearn.model_selection import cross_val_score
from tensorflow.keras.wrappers.scikit_learn import KerasClassifier
def create_model():
model = Sequential()
#16 feature inputs (votes) going into an 32-unit layer
model.add(Dense(32, input_dim=16, kernel_initializer='normal', activation='relu'))
# Adding Dropout layer to prevent overfitting
model.add(Dropout(0.5))
# Another hidden layer of 16 units
model.add(Dense(16, kernel_initializer='normal', activation='relu'))
#Adding another Dropout layer
model.add(Dropout(0.5))
# Output layer with a binary classification (Democrat or Republican political party)
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
# Compile model
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
return model
# Wrap our Keras model in an estimator compatible with scikit_learn
estimator = KerasClassifier(build_fn=create_model, epochs=100, verbose=0)
# Now we can use scikit_learn's cross_val_score to evaluate this model identically to the others
cv_scores = cross_val_score(estimator, all_features, all_classes, cv=10)
cv_scores.mean()
```
94% without even trying too hard! Did you do better? Maybe more neurons, more layers, or Dropout layers would help even more.
**Adding a Dropout layer between the Dense layers increases accuracy to 96%.**
```
if 'google.colab' in str(get_ipython()):
!pip install -q condacolab
import condacolab
condacolab.install()
"""
You can run either this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
"""
BRANCH = 'r1.7.0'
# If you're using Google Colab and not running locally, run this cell.
# install NeMo
if 'google.colab' in str(get_ipython()):
!python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
if 'google.colab' in str(get_ipython()):
!conda install -c conda-forge pynini=2.1.3
! mkdir images
! wget https://github.com/NVIDIA/NeMo/blob/$BRANCH/tutorials/text_processing/images/deployment.png -O images/deployment.png
! wget https://github.com/NVIDIA/NeMo/blob/$BRANCH/tutorials/text_processing/images/pipeline.png -O images/pipeline.png
import os
import wget
import pynini
import nemo_text_processing
```
# Task Description
Inverse text normalization (ITN) is a part of the Automatic Speech Recognition (ASR) post-processing pipeline.
ITN is the task of converting the raw spoken output of the ASR model into its written form to improve the text readability. For example, `in nineteen seventy five` should be changed to `in 1975` and `one hundred and twenty three dollars` to `$123`.
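To make the task concrete, here is a toy rule-based converter for one narrow pattern, spoken two-word numbers like "twenty three"; it is purely illustrative and unrelated to the WFST grammars NeMo actually uses (the tables and function below are invented for this sketch):

```python
import re

# Minimal lookup tables - invented for illustration only
UNITS = {'one': 1, 'two': 2, 'three': 3, 'four': 4, 'five': 5,
         'six': 6, 'seven': 7, 'eight': 8, 'nine': 9}
TENS = {'twenty': 20, 'thirty': 30, 'forty': 40, 'fifty': 50,
        'sixty': 60, 'seventy': 70, 'eighty': 80, 'ninety': 90}

def toy_itn(text):
    """Convert '<tens> <unit>' spoken numbers to digits, e.g. 'twenty three' -> '23'."""
    def repl(match):
        tens, unit = match.group(1), match.group(2)
        return str(TENS[tens] + UNITS[unit])
    pattern = r'\b(' + '|'.join(TENS) + r') (' + '|'.join(UNITS) + r')\b'
    return re.sub(pattern, repl, text)

print(toy_itn('she is twenty three years old'))
```

Real ITN has to handle vastly more patterns (dates, money, measures, ambiguity between them), which is exactly why NeMo builds it from composable WFST grammars rather than regex rules.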
# NeMo Inverse Text Normalization
NeMo ITN is based on weighted finite-state
transducer (WFST) grammars. The tool uses [`Pynini`](https://github.com/kylebgorman/pynini) to construct WFSTs, and the created grammars can be exported and integrated into [`Sparrowhawk`](https://github.com/google/sparrowhawk) (an open-source version of [The Kestrel TTS text normalization system](https://www.cambridge.org/core/journals/natural-language-engineering/article/abs/kestrel-tts-text-normalization-system/F0C18A3F596B75D83B75C479E23795DA)) for production. The NeMo ITN tool can be seen as a Python extension of `Sparrowhawk`.
Currently, NeMo ITN provides support for English and the following semiotic classes from the [Google Text normalization dataset](https://www.kaggle.com/richardwilliamsproat/text-normalization-for-english-russian-and-polish):
DATE, CARDINAL, MEASURE, DECIMAL, ORDINAL, MONEY, TIME, PLAIN.
We additionally added the class `WHITELIST` for all whitelisted tokens whose verbalizations are directly looked up from a user-defined list.
The toolkit is modular, easily extendable, and can be adapted to other languages and tasks like [text normalization](https://github.com/NVIDIA/NeMo/blob/stable/tutorials/text_processing/Text_Normalization.ipynb). The Python environment enables an easy combination of text covering grammars with NNs.
The rule-based system is divided into a classifier and a verbalizer following [Google's Kestrel](https://www.researchgate.net/profile/Richard_Sproat/publication/277932107_The_Kestrel_TTS_text_normalization_system/links/57308b1108aeaae23f5cc8c4/The-Kestrel-TTS-text-normalization-system.pdf) design: the classifier is responsible for detecting and classifying semiotic classes in the underlying text, and the verbalizer verbalizes the detected text segment.
The overall NeMo ITN pipeline from development in `Pynini` to deployment in `Sparrowhawk` is shown below:

# Quick Start
## Add ITN to your Python ASR post-processing workflow
ITN is a part of the `nemo_text_processing` package which is installed with `nemo_toolkit`. Installation instructions can be found [here](https://github.com/NVIDIA/NeMo/tree/main/README.rst).
```
from nemo_text_processing.inverse_text_normalization.inverse_normalize import InverseNormalizer
inverse_normalizer = InverseNormalizer(lang='en')
raw_text = "we paid one hundred and twenty three dollars for this desk, and this."
inverse_normalizer.inverse_normalize(raw_text, verbose=False)
```
In the above cell, `one hundred and twenty three dollars` would be converted to `$123`, and the rest of the words remain the same.
## Run Inverse Text Normalization on an input from a file
Use `run_predict.py` to convert a spoken text from a file `INPUT_FILE` to a written format and save the output to `OUTPUT_FILE`. Under the hood, `run_predict.py` is calling `inverse_normalize()` (see the above section).
```
# If you're running the notebook locally, update the NEMO_TEXT_PROCESSING_PATH below
# In Colab, a few required scripts will be downloaded from NeMo github
NEMO_TOOLS_PATH = '<UPDATE_PATH_TO_NeMo_root>/nemo_text_processing/inverse_text_normalization'
DATA_DIR = 'data_dir'
os.makedirs(DATA_DIR, exist_ok=True)
if 'google.colab' in str(get_ipython()):
NEMO_TOOLS_PATH = '.'
required_files = ['run_predict.py',
'run_evaluate.py']
for file in required_files:
if not os.path.exists(file):
file_path = 'https://raw.githubusercontent.com/NVIDIA/NeMo/' + BRANCH + '/nemo_text_processing/inverse_text_normalization/' + file
print(file_path)
wget.download(file_path)
elif not os.path.exists(NEMO_TOOLS_PATH):
raise ValueError(f'update path to NeMo root directory')
INPUT_FILE = f'{DATA_DIR}/test.txt'
OUTPUT_FILE = f'{DATA_DIR}/test_itn.txt'
! echo "on march second twenty twenty" > $DATA_DIR/test.txt
! python $NEMO_TOOLS_PATH/run_predict.py --input=$INPUT_FILE --output=$OUTPUT_FILE --language='en'
# check that the raw text was indeed converted to the written form
! cat $OUTPUT_FILE
```
## Run evaluation
[Google Text normalization dataset](https://www.kaggle.com/richardwilliamsproat/text-normalization-for-english-russian-and-polish) consists of 1.1 billion words of English text from Wikipedia, divided across 100 files. The normalized text is obtained with [The Kestrel TTS text normalization system](https://www.cambridge.org/core/journals/natural-language-engineering/article/abs/kestrel-tts-text-normalization-system/F0C18A3F596B75D83B75C479E23795DA).
Although a large fraction of this dataset can be reused for ITN by swapping input with output, the dataset is not bijective.
For example: `1,000 -> one thousand`, `1000 -> one thousand`, `3:00pm -> three p m`, `3 pm -> three p m` are valid data samples for normalization but the inverse does not hold for ITN.
We used regex rules to disambiguate samples where possible, see `nemo_text_processing/inverse_text_normalization/clean_eval_data.py`.
To run evaluation, the input file should follow the Google Text normalization dataset format. That is, every line of the file needs to have the format `<semiotic class>\t<unnormalized text>\t<self>` if it's a trivial class, or `<semiotic class>\t<unnormalized text>\t<normalized text>` in the case of a semiotic class.
Example evaluation run:
`python run_evaluate.py \
--input=./en_with_types/output-00001-of-00100 \
[--language LANGUAGE] \
[--cat CATEGORY] \
[--filter]`
Use `--cat` to specify a `CATEGORY` to run evaluation on (all other categories are going to be excluded from evaluation). With the option `--filter`, the provided data will be cleaned to remove ambiguous samples (use `clean_eval_data.py` to clean up the data upfront).
```
eval_text = """PLAIN\ton\t<self>
DATE\t22 july 2012\tthe twenty second of july twenty twelve
PLAIN\tthey\t<self>
PLAIN\tworked\t<self>
PLAIN\tuntil\t<self>
TIME\t12:00\ttwelve o'clock
<eos>\t<eos>
"""
INPUT_FILE_EVAL = f'{DATA_DIR}/test_eval.txt'
with open(INPUT_FILE_EVAL, 'w') as f:
f.write(eval_text)
! cat $INPUT_FILE_EVAL
! python $NEMO_TOOLS_PATH/run_evaluate.py --input=$INPUT_FILE_EVAL --language='en'
```
The `run_evaluate.py` call will output both **sentence level** and **token level** accuracies.
For our example, the expected output is the following:
```
Loading training data: data_dir/test_eval.txt
Sentence level evaluation...
- Data: 1 sentences
100% 1/1 [00:00<00:00, 58.42it/s]
- Denormalized. Evaluating...
- Accuracy: 1.0
Token level evaluation...
- Token type: PLAIN
- Data: 4 tokens
100% 4/4 [00:00<00:00, 504.73it/s]
- Denormalized. Evaluating...
- Accuracy: 1.0
- Token type: DATE
- Data: 1 tokens
100% 1/1 [00:00<00:00, 118.95it/s]
- Denormalized. Evaluating...
- Accuracy: 1.0
- Token type: TIME
- Data: 1 tokens
100% 1/1 [00:00<00:00, 230.44it/s]
- Denormalized. Evaluating...
- Accuracy: 1.0
- Accuracy: 1.0
- Total: 6
Class | Num Tokens | Denormalization
sent level | 1 | 1.0
PLAIN | 4 | 1.0
DATE | 1 | 1.0
CARDINAL | 0 | 0
LETTERS | 0 | 0
VERBATIM | 0 | 0
MEASURE | 0 | 0
DECIMAL | 0 | 0
ORDINAL | 0 | 0
DIGIT | 0 | 0
MONEY | 0 | 0
TELEPHONE | 0 | 0
ELECTRONIC | 0 | 0
FRACTION | 0 | 0
TIME | 1 | 1.0
ADDRESS | 0 | 0
```
# C++ deployment
The instructions on how to export `Pynini` grammars and run them with `Sparrowhawk` can be found at [NeMo/tools/text_processing_deployment](https://github.com/NVIDIA/NeMo/tree/main/tools/text_processing_deployment).
# WFST and Common Pynini Operations
A finite-state acceptor (FSA) is a finite-state automaton that has a finite number of states and no output. An FSA either accepts a string (when a matching pattern is found) or rejects it (when no match is found).
```
print([byte for byte in bytes('fst', 'utf-8')])
# create an acceptor from a string
pynini.accep('fst')
```
Here `0` is the start node, `1` and `2` are intermediate nodes, and `3` is the final (accepting) state.
By default (`token_type="byte"`), `Pynini` interprets the string as a sequence of bytes, assigning one byte per arc.
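To make the accept/reject behavior concrete, here is a minimal pure-Python sketch of the same byte-level acceptor — an illustration only, not how Pynini represents automata internally:

```python
# A toy acceptor for the string 'fst', mirroring what pynini.accep('fst')
# builds: states 0..3, one character per arc, with 3 the final state.
transitions = {(0, 'f'): 1, (1, 's'): 2, (2, 't'): 3}
final_states = {3}

def accepts(s, start=0):
    state = start
    for ch in s:
        if (state, ch) not in transitions:
            return False  # no matching arc: reject
        state = transitions[(state, ch)]
    return state in final_states  # accept only if we end in a final state

print(accepts('fst'))  # True
print(accepts('fs'))   # False: ends in state 2, which is not final
```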
A finite state transducer (FST) not only matches the pattern but also produces output according to the defined transitions.
```
# create an FST
pynini.cross('fst', 'FST')
```
Pynini supports the following operations:
- `closure` - Computes concatenative closure.
- `compose` - Constructively composes two FSTs.
- `concat` - Computes the concatenation (product) of two FSTs.
- `difference` - Constructively computes the difference of two FSTs.
- `invert` - Inverts the FST's transduction.
- `optimize` - Performs a generic optimization of the FST.
- `project` - Converts the FST to an acceptor using input or output labels.
- `shortestpath` - Construct an FST containing the shortest path(s) in the input FST.
- `union`- Computes the union (sum) of two or more FSTs.
The list of the most commonly used `Pynini` operations can be found at [https://github.com/kylebgorman/pynini/blob/master/CHEATSHEET](https://github.com/kylebgorman/pynini/blob/master/CHEATSHEET).
Pynini examples can be found at [https://github.com/kylebgorman/pynini/tree/master/pynini/examples](https://github.com/kylebgorman/pynini/tree/master/pynini/examples).
Use `help()` to explore the functionality. For example:
```
help(pynini.union)
```
# NeMo ITN API
NeMo ITN defines the following APIs that are called in sequence:
- `find_tags() + select_tag()` - creates a linear automaton from the input string and composes it with the final classification WFST, which transduces numbers and inserts semantic tags.
- `parse()` - parses the tagged string into a list of key-value items representing the different semiotic tokens.
- `generate_permutations()` - takes the parsed tokens and generates string serializations with different reorderings of the key-value items. This is important since WFSTs can only process input linearly, but the word order can change from spoken to written form (e.g., `three dollars -> $3`).
- `find_verbalizer() + select_verbalizer()` - takes the intermediate string representation and composes it with the final verbalization WFST, which removes the tags and returns the written form.
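The reordering step in `generate_permutations()` can be illustrated with `itertools.permutations` over a tagged token's key-value items. This is a toy sketch — the field names and serialization below are invented for illustration, not NeMo's actual tag format:

```python
from itertools import permutations

# Toy key-value items as might be parsed from a 'money' token
# for the spoken phrase "three dollars".
items = [('currency', '$'), ('amount', '3')]

# Enumerate every ordering of the items; the verbalizer WFST later
# accepts only the ordering that yields the valid written form ("$3").
for order in permutations(items):
    print([f'{k}: {v}' for k, v in order])
```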

# References and Further Reading:
- [Zhang, Yang, Bakhturina, Evelina, Gorman, Kyle and Ginsburg, Boris. "NeMo Inverse Text Normalization: From Development To Production." (2021)](https://arxiv.org/abs/2104.05055)
- [Ebden, Peter, and Richard Sproat. "The Kestrel TTS text normalization system." Natural Language Engineering 21.3 (2015): 333.](https://www.cambridge.org/core/journals/natural-language-engineering/article/abs/kestrel-tts-text-normalization-system/F0C18A3F596B75D83B75C479E23795DA)
- [Gorman, Kyle. "Pynini: A Python library for weighted finite-state grammar compilation." Proceedings of the SIGFSM Workshop on Statistical NLP and Weighted Automata. 2016.](https://www.aclweb.org/anthology/W16-2409.pdf)
- [Mohri, Mehryar, Fernando Pereira, and Michael Riley. "Weighted finite-state transducers in speech recognition." Computer Speech & Language 16.1 (2002): 69-88.](https://cs.nyu.edu/~mohri/postscript/csl01.pdf)
# 6. External Libraries
<a href="https://colab.research.google.com/github/chongsoon/intro-to-coding-with-python/blob/main/6-External-Libraries.ipynb" target="_parent">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
Up till now, we have been using whatever is available to us in Python.
Sometimes, we need other people's help to solve our problem. For example, I need help in reading data from a website, or doing specific calculation on the data given to me.
Instead of creating my own functions, I can use libraries/packages developed by other people specifically to solve my problem.
Let's look at some common libraries that I use.
## Installed Libraries/Packages in this Environment
Let's find out what has been installed in this environment by running the following code:
```
!conda list
#If this code block fails, try the next one.
!pip list
```
You can see that a lot of packages have been installed. Let us try some of them.
## Getting data from web pages/api (Requests)
Have you ever used apps, such as bus apps, that tell you what the arrival time is? That information is actually retrieved from the LTA website.
Of course, in this practical, we will use some open and free website APIs to get data.
We can use the Requests package to get data from web pages and process it in Python.
Let's try it out.
First, we have to tell Python that we want to use this library. In order to do that, we have to "import" it into this program.
```
import requests
import json
```
Let us get data from Binance. Binance is a cryptocurrency exchange. Think of it like a stock market for cryptocurrencies like Bitcoin. They have a free public web API that we can get data from. We can start by declaring URL variables.
[Reference to Binance API](https://github.com/binance/binance-spot-api-docs/blob/master/rest-api.md)
```
url = 'https://api.binance.com/'
exchange_info_url = url + 'api/v3/exchangeInfo'
```
Next, we will use `requests.get` with the URL as the parameter and execute the cell.
```
response = requests.get(exchange_info_url)
```
Then we will extract the data from the response into a dictionary.
```
response_data = response.json()
```
Let's explore what the keys in the dictionary are.
```
print(response_data.keys())
```
I wonder what is inside the "symbols" key.
```
print(type(response_data['symbols']))
```
Since it contains a list, let us see what the first 5 items in the list are.
```
print(response_data['symbols'][:5])
```
That is still too much information; let's just inspect the first item.
```
print(response_data['symbols'][0])
```
### Try it yourself: Get the type of data
This is definitely more manageable. It seems like the list contains dictionaries. Are you able to confirm that through code? Print out the **type** of the **first** item in the list.
```
#Type in your code here to print the type of the first item in the list.
```
### Try it yourself: Find the crypto!
How can I find the crypto information in such a long list of items? Do you have any idea?
Find information on Shiba Inu Coin (Symbol: SHIBUSDT), since Elon Musk's [tweet](https://twitter.com/elonmusk/status/1444840184500129797?s=20) increased the price of the coin recently.
```
coin_list = response_data['symbols']
#Type your code below, get information on "SHIBUSDT" coin.
```
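A common pattern for this kind of lookup — shown here on a made-up list so it doesn't give the exercise answer away directly — is a list comprehension that filters on a dictionary key:

```python
# Hypothetical entries shaped like those in response_data['symbols'].
coins = [
    {'symbol': 'BTCUSDT', 'status': 'TRADING'},
    {'symbol': 'SHIBUSDT', 'status': 'TRADING'},
]

# Keep only the entries whose 'symbol' key matches the one we want.
matches = [c for c in coins if c['symbol'] == 'SHIBUSDT']
print(matches[0])  # {'symbol': 'SHIBUSDT', 'status': 'TRADING'}
```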
We can find the crypto, but there is a lot of information. If we only want to find the price of the crypto, we can refer to this [link](https://github.com/binance/binance-spot-api-docs/blob/master/rest-api.md#symbol-price-ticker) to find the price of the crypto.
```
symbol_ticker_price_url = url + 'api/v3/ticker/price'
symbol_ticker_price_url
price_request = requests.get(symbol_ticker_price_url)
price_request.json()
```
Oh no, it is loading everything...Is there a way to just get the Shiba price? According to the documentation, we can add a parameter to find the price of a particular symbol. Let us see how we can do that.
Let's create a param payload.
```
symbol_parameter = {'symbol': 'SHIBUSDT'}
```
Then, use the same request, but add the `symbol_parameter` that we created.
```
price_request = requests.get(symbol_ticker_price_url, params=symbol_parameter)
price_request.json()
```
Cool, now we are able to see the price of the Shiba crypto.
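Under the hood, `requests` serializes the `params` dictionary into a URL query string. The standard library can show the same encoding — this is just to illustrate what gets appended to the URL, not something you need in the request code above:

```python
from urllib.parse import urlencode

# Encode the same parameter dict that requests would serialize for us.
query = urlencode({'symbol': 'SHIBUSDT'})
full_url = 'https://api.binance.com/api/v3/ticker/price?' + query
print(full_url)  # ends with ?symbol=SHIBUSDT
```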
So far, we have used the "requests" package to get data from a website. There are a lot of other packages out there that could solve the problems you encounter. Feel free to explore.
- [Python Package Repository](https://pypi.org/)
- [Conda Package Repository](https://anaconda.org/anaconda/repo)
Proceed to the next tutorial (last one) to learn simple data analysis.
<a href="http://landlab.github.io"><img style="float: left" src="../../../landlab_header.png"></a>
# The Implicit Kinematic Wave Overland Flow Component
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
## Overview
This notebook demonstrates the `KinwaveImplicitOverlandFlow` Landlab component. The component implements a two-dimensional kinematic wave model of overland flow, using a digital elevation model or other source of topography as the surface over which water flows.
### Theory
The kinematic wave equations are a simplified form of the 2D shallow-water equations in which energy slope is assumed to equal bed slope. Conservation of water mass is expressed in terms of the time derivative of the local water depth, $H$, and the spatial derivative (divergence) of the unit discharge vector $\mathbf{q} = UH$ (where $U$ is the 2D depth-averaged velocity vector):
$$\frac{\partial H}{\partial t} = R - \nabla\cdot \mathbf{q}$$
where $R$ is the local runoff rate [L/T] and $\mathbf{q}$ has dimensions of volume flow per time per width [L$^2$/T]. The discharge depends on the local depth, bed-surface gradient $\mathbf{S}=-\nabla\eta$ (this is the kinematic wave approximation; $\eta$ is land surface height), and a roughness factor $C_r$:
$$\mathbf{q} = \frac{1}{C_r} \mathbf{S} H^\alpha |S|^{-1/2}$$
Readers may recognize this as a form of the Manning, Chezy, or Darcy-Weisbach equation. If $\alpha = 5/3$ then we have the Manning equation, and $C_r = n$ is "Manning's n". If $\alpha = 3/2$ then we have the Chezy/Darcy-Weisbach equation, and $C_r = 1/C = (f/8g)^{1/2}$ represents the Chezy roughness factor $C$ and the equivalent Darcy-Weisbach factor $f$.
### Numerical solution
The solution method used by this component is locally implicit, and works as follows. At each time step, we iterate from upstream to downstream over the topography. Because we are working downstream, we can assume that we know the total water inflow to a given cell. We solve the following mass conservation equation at each cell:
$$\frac{H^{t+1} - H^t}{\Delta t }= \frac{Q_{in}}{A} - \frac{Q_{out}}{A} + R$$
where $H$ is water depth at a given grid node, $t$ indicates time step number, $\Delta t$ is time step duration, $Q_{in}$ is total inflow discharge, $Q_{out}$ is total outflow discharge, $A$ is cell area, and $R$ is local runoff rate (precipitation minus infiltration; could be negative if runon infiltration is occurring).
The specific outflow discharge leaving a cell along one of its faces is:
$$q = (1/C_r) H^\alpha S^{1/2}$$
where $S$ is the downhill-positive gradient of the link that crosses this particular face. Outflow discharge is zero for links that are flat or "uphill" from the given node. Total discharge out of a cell is then the sum of (specific discharge x face width) over all outflow faces:
$$Q_{out} = \sum_{i=1}^N (1/C_r) H^\alpha S_i^{1/2} W_i$$
where $N$ is the number of outflow faces (i.e., faces where the ground slopes downhill away from the cell's node), and $W_i$ is the width of face $i$.
We use the depth at the cell's node, so this simplifies to:
$$Q_{out} = (1/C_r) H'^\alpha \sum_{i=1}^N S_i^{1/2} W_i$$
Notice that we know everything here except $H'$. The reason we know $Q_{out}$ is that it equals $Q_{in}$ (which is either zero or we calculated it previously) plus $RA$.
We define $H$ in the above as a weighted sum of the "old" (time step $t$) and "new" (time step $t+1$) depth values:
$$H' = w H^{t+1} + (1-w) H^t$$
If $w=1$, the method is fully implicit. If $w=0$, it is a simple forward explicit method.
When we combine these equations, we have an equation that includes the unknown $H^{t+1}$ and a bunch of terms that are known. If $w\ne 0$, it is a nonlinear equation in $H^{t+1}$, and must be solved iteratively. We do this using a root-finding method in the scipy.optimize library.
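As a rough illustration of this local solve, here is a simplified stand-in: the actual component uses a root finder from `scipy.optimize`, while the sketch below uses plain bisection on the residual (which increases monotonically in $H^{t+1}$), and all variable names are mine:

```python
# Sketch of the implicit depth update at one cell: solve
#   (H_new - H_old)/dt = Q_in/A + R - Q_out(H')/A,
# with H' = w*H_new + (1-w)*H_old, by bisection on the residual.
def new_depth(H_old, dt, Q_in, A, R, C_r, alpha, w, sum_sqrt_slope_width):
    def residual(H_new):
        H_eff = w * H_new + (1.0 - w) * H_old
        Q_out = (1.0 / C_r) * H_eff**alpha * sum_sqrt_slope_width
        return (H_new - H_old) / dt - (Q_in / A + R - Q_out / A)

    lo = 0.0
    hi = H_old + dt * (Q_in / A + R) + 1e-12  # depth if there were no outflow
    for _ in range(80):  # bisect until the bracket is tiny
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# With no outflow faces, the new depth is just H_old plus dt * runoff:
print(new_depth(0.1, 10.0, 0.0, 4.0, 1e-5, 0.01, 5 / 3, 1.0, 0.0))
```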
In order to implement the algorithm, we must already know which of each node's neighbors are lower, and what the slopes between them are. We accomplish this using the `FlowAccumulator` and `FlowDirectorMFD` components. Running the `FlowAccumulator` also generates a sorted list (array) of nodes in drainage order.
### The component
Import the needed libraries, then inspect the component's docstring:
```
import copy
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from landlab import RasterModelGrid, imshow_grid
from landlab.components.overland_flow import KinwaveImplicitOverlandFlow
from landlab.io.esri_ascii import read_esri_ascii
print(KinwaveImplicitOverlandFlow.__doc__)
```
The docstring for the `__init__` method will give us a list of parameters:
```
print(KinwaveImplicitOverlandFlow.__init__.__doc__)
```
## Example 1: downpour on a plane
The first example tests that the component can reproduce the expected steady flow pattern on a sloping plane, with a gradient of $S_p$. We'll adopt the Manning equation. Once the system comes into equilibrium, the discharge should increase with distance down the plane according to $q = Rx$. We can use this fact to solve for the corresponding water depth:
$$(1/n) H^{5/3} S^{1/2} = R x$$
which implies
$$H = \left( \frac{nRx}{S^{1/2}} \right)^{3/5}$$
This is the analytical solution against which to test the model.
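For instance, plugging in the run parameters used below (n = 0.01, S = 0.01, R = 72 mm/hr) gives the equilibrium depth expected 10 m down the plane, after converting $R$ to SI units:

```python
n, S = 0.01, 0.01        # Manning roughness and plane slope
R = 72.0 / 3.6e6         # runoff rate: 72 mm/hr -> m/s
x = 10.0                 # distance downslope, m

# H = (n R x / S^(1/2))^(3/5); note 3/5 = 0.6
H = (n * R * x / S**0.5) ** 0.6
print(f'H = {H:.5f} m')  # a depth on the order of a millimeter or two
```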
Pick the initial and run conditions
```
# Process parameters
n = 0.01 # roughness coefficient, (s/m^(1/3))
dep_exp = 5.0 / 3.0 # depth exponent
S = 0.01 # slope of plane
R = 72.0 # runoff rate, mm/hr
# Run-control parameters
run_time = 240.0 # duration of run, (s)
nrows = 5 # number of node rows
ncols = 11 # number of node columns
dx = 2.0 # node spacing, m
dt = 10.0 # time-step size, s
plot_every = 60.0 # plot interval, s
# Derived parameters
num_steps = int(run_time / dt)
```
Create grid and fields:
```
# create and set up grid
grid = RasterModelGrid((nrows, ncols), xy_spacing=dx)
grid.set_closed_boundaries_at_grid_edges(False, True, True, True) # open only on east
# add required field
elev = grid.add_zeros('topographic__elevation', at='node')
# set topography
elev[grid.core_nodes] = S * (np.amax(grid.x_of_node) - grid.x_of_node[grid.core_nodes])
```
Plot topography, first in plan view...
```
imshow_grid(grid, elev)
```
...then as a cross-section:
```
plt.plot(grid.x_of_node, elev)
plt.xlabel('Distance (m)')
plt.ylabel('Height (m)')
plt.grid(True)
# Instantiate the component
olflow = KinwaveImplicitOverlandFlow(grid,
runoff_rate=R,
roughness=n,
depth_exp=dep_exp
)
# Helpful function to plot the profile
def plot_flow_profile(grid, olflow):
"""Plot the middle row of topography and water surface
for the overland flow model olflow."""
nc = grid.number_of_node_columns
nr = grid.number_of_node_rows
startnode = nc * (nr // 2) + 1
midrow = np.arange(startnode, startnode + nc - 1, dtype=int)
topo = grid.at_node['topographic__elevation']
plt.plot(grid.x_of_node[midrow],
topo[midrow] + grid.at_node['surface_water__depth'][midrow],
'b'
)
plt.plot(grid.x_of_node[midrow],
topo[midrow],
'k'
)
plt.xlabel('Distance (m)')
plt.ylabel('Ground and water surface height (m)')
```
Run the component forward in time, plotting the output in the form of a profile:
```
next_plot = plot_every
for i in range(num_steps):
olflow.run_one_step(dt)
if (i + 1) * dt >= next_plot:
plot_flow_profile(grid, olflow)
next_plot += plot_every
# Compare with analytical solution for depth
Rms = R / 3.6e6 # convert to m/s
nc = grid.number_of_node_columns
x = grid.x_of_node[grid.core_nodes][:nc - 2]
Hpred = (n * Rms * x / (S ** 0.5)) ** 0.6
plt.plot(x, Hpred, 'r', label='Analytical')
plt.plot(x,
grid.at_node['surface_water__depth'][grid.core_nodes][:nc - 2],
'b--',
label='Numerical'
)
plt.xlabel('Distance (m)')
plt.ylabel('Water depth (m)')
plt.grid(True)
plt.legend()
```
## Example 2: overland flow on a DEM
For this example, we'll import a small digital elevation model (DEM) for a site in New Mexico, USA.
```
# Process parameters
n = 0.1 # roughness coefficient, (s/m^(1/3))
dep_exp = 5.0 / 3.0 # depth exponent
R = 72.0 # runoff rate, mm/hr
# Run-control parameters
rain_duration = 240.0 # duration of rainfall, s
run_time = 480.0 # duration of run, s
dt = 10.0 # time-step size, s
dem_filename = '../hugo_site_filled.asc'
# Derived parameters
num_steps = int(run_time / dt)
# set up arrays to hold discharge and time
time_since_storm_start = np.arange(0.0, dt * (2 * num_steps + 1), dt)
discharge = np.zeros(2 * num_steps + 1)
# Read the DEM file as a grid with a 'topographic__elevation' field
(grid, elev) = read_esri_ascii(dem_filename, name='topographic__elevation')
# Configure the boundaries: valid right-edge nodes will be open;
# all NODATA (= -9999) nodes will be closed.
grid.status_at_node[grid.nodes_at_right_edge] = grid.BC_NODE_IS_FIXED_VALUE
grid.status_at_node[np.isclose(elev, -9999.)] = grid.BC_NODE_IS_CLOSED
# display the topography
cmap = copy.copy(mpl.cm.get_cmap('pink'))
imshow_grid(grid, elev, colorbar_label='Elevation (m)', cmap=cmap)
```
It would be nice to track discharge at the watershed outlet, but how do we find the outlet location? We actually have several valid nodes along the right-hand edge, and we'll keep track of the field `surface_water_inflow__discharge` at these nodes. We can identify the nodes by the fact that they are (a) at the right-hand edge of the grid, and (b) have positive elevations (the ones with -9999 are outside of the watershed).
```
indices = np.where(elev[grid.nodes_at_right_edge] > 0.0)[0]
outlet_nodes = grid.nodes_at_right_edge[indices]
print('Outlet nodes:')
print(outlet_nodes)
print('Elevations of the outlet nodes:')
print(elev[outlet_nodes])
# Instantiate the component
olflow = KinwaveImplicitOverlandFlow(grid,
runoff_rate=R,
roughness=n,
depth_exp=dep_exp
)
discharge_field = grid.at_node['surface_water_inflow__discharge']
for i in range(num_steps):
olflow.run_one_step(dt)
discharge[i+1] = np.sum(discharge_field[outlet_nodes])
plt.plot(time_since_storm_start[:num_steps], discharge[:num_steps])
plt.xlabel('Time (s)')
plt.ylabel('Discharge (cms)')
plt.grid(True)
cmap = copy.copy(mpl.cm.get_cmap('Blues'))
imshow_grid(grid,
grid.at_node['surface_water__depth'],
cmap=cmap,
colorbar_label='Water depth (m)'
)
```
Now turn down the rain and run it a bit longer...
```
olflow.runoff_rate = 1.0 # just 1 mm/hr
for i in range(num_steps, 2 * num_steps):
olflow.run_one_step(dt)
discharge[i+1] = np.sum(discharge_field[outlet_nodes])
plt.plot(time_since_storm_start, discharge)
plt.xlabel('Time (s)')
plt.ylabel('Discharge (cms)')
plt.grid(True)
cmap = copy.copy(mpl.cm.get_cmap('Blues'))
imshow_grid(grid,
grid.at_node['surface_water__depth'],
cmap=cmap,
colorbar_label='Water depth (m)'
)
```
Voila! A fine hydrograph, and a water-depth map that shows deeper water in the channels (and highlights depressions in the topography).
### Click here for more <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">Landlab tutorials</a>
# Robot Class
In this project, we'll be localizing a robot in a 2D grid world. The basis for simultaneous localization and mapping (SLAM) is to gather information from a robot's sensors and motions over time, and then use information about measurements and motion to re-construct a map of the world.
### Uncertainty
As you've learned, robot motion and sensors have some uncertainty associated with them. For example, imagine a car driving up hill and down hill; the speedometer reading will likely overestimate the speed of the car going up hill and underestimate the speed of the car going down hill because it cannot perfectly account for gravity. Similarly, we cannot perfectly predict the *motion* of a robot. A robot is likely to slightly overshoot or undershoot a target location.
In this notebook, we'll look at the `robot` class that is *partially* given to you for the upcoming SLAM notebook. First, we'll create a robot and move it around a 2D grid world. Then, **you'll be tasked with defining a `sense` function for this robot that allows it to sense landmarks in a given world**! It's important that you understand how this robot moves, senses, and how it keeps track of different landmarks that it sees in a 2D grid world, so that you can work with its movement and sensor data.
---
Before we start analyzing robot motion, let's load in our resources and define the `robot` class. You can see that this class initializes the robot's position and adds measures of uncertainty for motion. You'll also see a `sense()` function which is not yet implemented, and you will learn more about that later in this notebook.
```
# import some resources
import numpy as np
import matplotlib.pyplot as plt
import random
%matplotlib inline
# the robot class
class robot:
# --------
# init:
# creates a robot with the specified parameters and initializes
# the location (self.x, self.y) to the center of the world
#
def __init__(self, world_size = 100.0, measurement_range = 30.0,
motion_noise = 1.0, measurement_noise = 1.0):
self.measurement_noise = 0.0
self.world_size = world_size
self.measurement_range = measurement_range
self.x = world_size / 2.0
self.y = world_size / 2.0
self.motion_noise = motion_noise
self.measurement_noise = measurement_noise
self.landmarks = []
self.num_landmarks = 0
# returns a positive, random float
def rand(self):
return random.random() * 2.0 - 1.0
# --------
# move: attempts to move robot by dx, dy. If outside world
# boundary, then the move does nothing and instead returns failure
#
def move(self, dx, dy):
x = self.x + dx + self.rand() * self.motion_noise
y = self.y + dy + self.rand() * self.motion_noise
if x < 0.0 or x > self.world_size or y < 0.0 or y > self.world_size:
return False
else:
self.x = x
self.y = y
return True
# --------
# sense: returns x- and y- distances to landmarks within visibility range
# because not all landmarks may be in this range, the list of measurements
# is of variable length. Set measurement_range to -1 if you want all
# landmarks to be visible at all times
#
## TODO: complete the sense function
def sense(self):
''' This function does not take in any parameters, instead it references internal variables
(such as self.landmarks) to measure the distance between the robot and any landmarks
that the robot can see (that are within its measurement range).
This function returns a list of landmark indices, and the measured distances (dx, dy)
between the robot's position and said landmarks.
This function should account for measurement_noise and measurement_range.
One item in the returned list should be in the form: [landmark_index, dx, dy].
'''
measurements = []
## TODO: iterate through all of the landmarks in a world
## TODO: For each landmark
## 1. compute dx and dy, the distances between the robot and the landmark
## 2. account for measurement noise by *adding* a noise component to dx and dy
## - The noise component should be a random value between [-1.0, 1.0)*measurement_noise
## - Feel free to use the function self.rand() to help calculate this noise component
## - It may help to reference the `move` function for noise calculation
## 3. If either of the distances, dx or dy, fall outside of the internal var, measurement_range
## then we cannot record them; if they do fall in the range, then add them to the measurements list
## as list.append([index, dx, dy]), this format is important for data creation done later
## TODO: return the final, complete list of measurements
for i in range(self.num_landmarks):
dx = self.landmarks[i][0] - self.x + self.rand() * self.measurement_noise
dy = self.landmarks[i][1] - self.y + self.rand() * self.measurement_noise
if self.measurement_range < 0.0 or abs(dx) + abs(dy) <= self.measurement_range:
measurements.append([i, dx, dy])
return measurements
# --------
# make_landmarks:
# make random landmarks located in the world
#
def make_landmarks(self, num_landmarks):
self.landmarks = []
for i in range(num_landmarks):
self.landmarks.append([round(random.random() * self.world_size),
round(random.random() * self.world_size)])
self.num_landmarks = num_landmarks
# called when print(robot) is called; prints the robot's location
def __repr__(self):
return 'Robot: [x=%.5f y=%.5f]' % (self.x, self.y)
```
## Define a world and a robot
Next, let's instantiate a robot object. As you can see in `__init__` above, the robot class takes in a number of parameters including a world size and some values that indicate the sensing and movement capabilities of the robot.
In the next example, we define a small 10x10 square world, a measurement range that is half that of the world and small values for motion and measurement noise. These values will typically be about 10 times larger, but we just want to demonstrate this behavior on a small scale. You are also free to change these values and note what happens as your robot moves!
```
world_size = 10.0 # size of world (square)
measurement_range = 5.0 # range at which we can sense landmarks
motion_noise = 0.2 # noise in robot motion
measurement_noise = 0.2 # noise in the measurements
# instantiate a robot, r
r = robot(world_size, measurement_range, motion_noise, measurement_noise)
# print out the location of r
print(r)
```
## Visualizing the World
In the given example, we can see/print out that the robot is in the middle of the 10x10 world at (x, y) = (5.0, 5.0), which is exactly what we expect!
However, it's kind of hard to imagine this robot in the center of a world, without visualizing the grid itself, and so in the next cell we provide a helper visualization function, `display_world`, that will display a grid world in a plot and draw a red `o` at the location of our robot, `r`. The details of how this function works can be found in the `helpers.py` file in the home directory; you do not have to change anything in this `helpers.py` file.
```
# import helper function
from helpers import display_world
# define figure size
plt.rcParams["figure.figsize"] = (5,5)
# call display_world and display the robot in it's grid world
print(r)
display_world(int(world_size), [r.x, r.y])
```
## Movement
Now you can really picture where the robot is in the world! Next, let's call the robot's `move` function. We'll ask it to move some distance `(dx, dy)` and we'll see that this motion is not perfect by the placement of our robot `o` and by the printed out position of `r`.
Try changing the values of `dx` and `dy` and/or running this cell multiple times; see how the robot moves and how the uncertainty in robot motion accumulates over multiple movements.
#### For a `dx` = 1, does the robot move *exactly* one spot to the right? What about `dx` = -1? What happens if you try to move the robot past the boundaries of the world?
```
# choose values of dx and dy (negative works, too)
dx = 1
dy = 2
r.move(dx, dy)
# print out the exact location
print(r)
# display the world after movement; note that this is the same call as before
# the robot tracks its own movement
display_world(int(world_size), [r.x, r.y])
```
## Landmarks
Next, let's create landmarks, which are measurable features in the map. You can think of landmarks as things like notable buildings, or something smaller such as a tree, rock, or other feature.
The robot class has a function `make_landmarks` which randomly generates locations for the number of specified landmarks. Try changing `num_landmarks` or running this cell multiple times to see where these landmarks appear. We have to pass these locations as a third argument to the `display_world` function and the list of landmark locations is accessed similar to how we find the robot position `r.landmarks`.
Each landmark is displayed as a purple `x` in the grid world, and we also print out the exact `[x, y]` locations of these landmarks at the end of this cell.
```
# create any number of landmarks
num_landmarks = 3
r.make_landmarks(num_landmarks)
# print out our robot's exact location
print(r)
# display the world including these landmarks
display_world(int(world_size), [r.x, r.y], r.landmarks)
# print the locations of the landmarks
print('Landmark locations [x,y]: ', r.landmarks)
```
## Sense
Once we have some landmarks to sense, we need to be able to tell our robot to *try* to sense how far away they are. It will be up to you to code the `sense` function in our robot class.
The `sense` function uses only internal class parameters and returns a list of the measured/sensed x and y distances to the landmarks it senses within the specified `measurement_range`.
### TODO: Implement the `sense` function
Follow the `##TODO's` in the class code above to complete the `sense` function for the robot class. Once you have tested out your code, please **copy your complete `sense` code to the `robot_class.py` file in the home directory**. By placing this complete code in the `robot_class` Python file, we will be able to reference this class in a later notebook.
The measurements have the format, `[i, dx, dy]` where `i` is the landmark index (0, 1, 2, ...) and `dx` and `dy` are the measured distance between the robot's location (x, y) and the landmark's location (x, y). This distance will not be perfect since our sense function has some associated `measurement noise`.
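As a concrete, noise-free illustration of that format (the positions here are made up, and the visibility check mirrors the `abs(dx) + abs(dy)` test used in the completed `sense` code above):

```python
# Robot at (5, 5) senses a landmark with index 0 located at (2, 7).
robot_x, robot_y = 5.0, 5.0
landmark = [2.0, 7.0]

dx = landmark[0] - robot_x   # -3.0 (no noise term in this sketch)
dy = landmark[1] - robot_y   #  2.0

measurement_range = 5.0
if abs(dx) + abs(dy) <= measurement_range:
    print([0, dx, dy])       # [0, -3.0, 2.0]
```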
---
In the example in the following cell, we have a given our robot a range of `5.0` so any landmarks that are within that range of our robot's location, should appear in a list of measurements. Not all landmarks are guaranteed to be in our visibility range, so this list will be variable in length.
*Note: the robot's location is often called the **pose** or `[Pxi, Pyi]` and the landmark locations are often written as `[Lxi, Lyi]`. You'll see this notation in the next notebook.*
```
# try to sense any surrounding landmarks
measurements = r.sense()
# this will print out an empty list if `sense` has not been implemented
print(measurements)
```
**Refer back to the grid map above. Do these measurements make sense to you? Are all the landmarks captured in this list (why/why not)?**
---
## Data
#### Putting it all together
To perform SLAM, we'll collect a series of robot sensor measurements and motions, in that order, over a defined period of time. Then we'll use only this data to re-construct the map of the world with the robot and landmark locations. You can think of SLAM as performing what we've done in this notebook, only backwards. Instead of defining a world and robot and creating movement and sensor data, it will be up to you to use movement and sensor measurements to reconstruct the world!
In the next notebook, you'll see this list of movements and measurements (which you'll use to re-construct the world) listed in a structure called `data`. This is an array that holds sensor measurements and movements in a specific order, which will be useful to call upon when you have to extract this data and form constraint matrices and vectors.
`data` is constructed over a series of time steps as follows:
```
data = []
# after a robot first senses, then moves (one time step)
# that data is appended like so:
data.append([measurements, [dx, dy]])
# for our example movement and measurement
print(data)
# in this example, we have only created one time step (0)
time_step = 0
# so you can access robot measurements:
print('Measurements: ', data[time_step][0])
# and its motion for a given time step:
print('Motion: ', data[time_step][1])
```
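As an illustration, `data` for several time steps would be collected with a sense-then-move loop like the hypothetical sketch below; `FakeRobot` is a stand-in for the robot class in this notebook (its `sense` and `move` return canned values) so the snippet runs on its own.

```
# Hypothetical sketch: collect N time steps of (measurements, motion)
# pairs. FakeRobot stands in for the robot class in this notebook.
class FakeRobot:
    def sense(self):
        return [[0, 1.0, 2.0]]   # pretend one landmark is visible
    def move(self, dx, dy):
        return True              # pretend the move always succeeds

N = 3
dx, dy = 1.0, 2.0
robot = FakeRobot()
data = []
for _ in range(N):
    measurements = robot.sense()          # sense first ...
    robot.move(dx, dy)                    # ... then move (one time step)
    data.append([measurements, [dx, dy]])

# measurements and motion for time step 0
print(data[0][0])   # [[0, 1.0, 2.0]]
print(data[0][1])   # [1.0, 2.0]
```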
### Final robot class
Before moving on to the last notebook in this series, please make sure that you have copied your final, completed `sense` function into the `robot_class.py` file in the home directory. We will be using this file in the final implementation of SLAM!
# SageMaker Debugger Profiling Report
SageMaker Debugger auto generated this report. You can generate similar reports on all supported training jobs. The report provides a summary of the training job, system resource usage statistics, framework metrics, a rules summary, and detailed analysis from each rule. The graphs and tables are interactive.
**Legal disclaimer:** This report and any recommendations are provided for informational purposes only and are not definitive. You are responsible for making your own independent assessment of the information.
```
import json
import pandas as pd
import glob
import matplotlib.pyplot as plt
import numpy as np
import datetime
from smdebug.profiler.utils import us_since_epoch_to_human_readable_time, ns_since_epoch_to_human_readable_time
from smdebug.core.utils import setup_profiler_report
import bokeh
from bokeh.io import output_notebook, show
from bokeh.layouts import column, row
from bokeh.plotting import figure
from bokeh.models.widgets import DataTable, DateFormatter, TableColumn
from bokeh.models import ColumnDataSource, PreText
from math import pi
from bokeh.transform import cumsum
import warnings
from bokeh.models.widgets import Paragraph
from bokeh.models import Legend
from bokeh.util.warnings import BokehDeprecationWarning, BokehUserWarning
warnings.simplefilter('ignore', BokehDeprecationWarning)
warnings.simplefilter('ignore', BokehUserWarning)
output_notebook(hide_banner=True)
processing_job_arn = ""
# Parameters
processing_job_arn = "arn:aws:sagemaker:us-east-1:264082167679:processing-job/pytorch-training-2022-01-2-profilerreport-73c47060"
setup_profiler_report(processing_job_arn)
def create_piechart(data_dict, title=None, height=400, width=400, x1=0, x2=0.1, radius=0.4, toolbar_location='right'):
plot = figure(plot_height=height,
plot_width=width,
toolbar_location=toolbar_location,
tools="hover,wheel_zoom,reset,pan",
tooltips="@phase:@value",
title=title,
x_range=(-radius-x1, radius+x2))
data = pd.Series(data_dict).reset_index(name='value').rename(columns={'index':'phase'})
data['angle'] = data['value']/data['value'].sum() * 2*pi
data['color'] = bokeh.palettes.viridis(len(data_dict))
plot.wedge(x=0, y=0., radius=radius,
start_angle=cumsum('angle', include_zero=True),
end_angle=cumsum('angle'),
line_color="white",
source=data,
fill_color='color',
legend='phase'
)
plot.legend.label_text_font_size = "8pt"
plot.legend.location = 'center_right'
plot.axis.axis_label=None
plot.axis.visible=False
plot.grid.grid_line_color = None
plot.outline_line_color = "white"
return plot
from IPython.display import display, HTML, Markdown, Image
def pretty_print(df):
raw_html = df.to_html().replace("\\n","<br>").replace('<tr>','<tr style="text-align: left;">')
return display(HTML(raw_html))
```
## Training job summary
```
def load_report(rule_name):
try:
report = json.load(open('/opt/ml/processing/output/rule/profiler-output/profiler-reports/'+rule_name+'.json'))
return report
except FileNotFoundError:
print (rule_name + ' not triggered')
job_statistics = {}
report = load_report('MaxInitializationTime')
if report:
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
first_step = report['Details']["step_num"]["first"]
last_step = report['Details']["step_num"]["last"]
tmp = us_since_epoch_to_human_readable_time(report['Details']['job_start'] * 1000000)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Start time"] = f"{hour} {day}"
tmp = us_since_epoch_to_human_readable_time(report['Details']['job_end'] * 1000000)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["End time"] = f"{hour} {day}"
job_duration_in_seconds = int(report['Details']['job_end'] - report['Details']['job_start'])
job_statistics["Job duration"] = f"{job_duration_in_seconds} seconds"
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
tmp = us_since_epoch_to_human_readable_time(first_step)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Training loop start"] = f"{hour} {day}"
tmp = us_since_epoch_to_human_readable_time(last_step)
date = datetime.datetime.strptime(tmp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
job_statistics["Training loop end"] = f"{hour} {day}"
training_loop_duration_in_seconds = int((last_step - first_step) / 1000000)
job_statistics["Training loop duration"] = f"{training_loop_duration_in_seconds} seconds"
initialization_in_seconds = int(first_step/1000000 - report['Details']['job_start'])
job_statistics["Initialization time"] = f"{initialization_in_seconds} seconds"
finalization_in_seconds = int(np.abs(report['Details']['job_end'] - last_step/1000000))
job_statistics["Finalization time"] = f"{finalization_in_seconds} seconds"
initialization_perc = int(initialization_in_seconds / job_duration_in_seconds * 100)
job_statistics["Initialization"] = f"{initialization_perc} %"
training_loop_perc = int(training_loop_duration_in_seconds / job_duration_in_seconds * 100)
job_statistics["Training loop"] = f"{training_loop_perc} %"
finalization_perc = int(finalization_in_seconds / job_duration_in_seconds * 100)
job_statistics["Finalization"] = f"{finalization_perc} %"
if report:
text = """The following table gives a summary of the training job. The table includes information about when the training job started and ended, and how much time initialization, the training loop, and finalization took."""
if len(job_statistics) > 0:
df = pd.DataFrame.from_dict(job_statistics, orient='index')
start_time = us_since_epoch_to_human_readable_time(report['Details']['job_start'] * 1000000)
date = datetime.datetime.strptime(start_time, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
duration = job_duration_in_seconds
text = f"""{text} \n Your training job started on {day} at {hour} and ran for {duration} seconds."""
#pretty_print(df)
if "first" in report['Details']["step_num"] and "last" in report['Details']["step_num"]:
if finalization_perc < 0:
job_statistics["Finalization%"] = 0
if training_loop_perc < 0:
job_statistics["Training loop"] = 0
if initialization_perc < 0:
job_statistics["Initialization"] = 0
else:
text = f"""{text} \n Your training job started on {day} at {hour} and ran for {duration} seconds."""
if len(job_statistics) > 0:
df2 = df.reset_index()
df2.columns = ["0", "1"]
source = ColumnDataSource(data=df2)
columns = [TableColumn(field='0', title=""),
TableColumn(field='1', title="Job Statistics"),]
table = DataTable(source=source, columns=columns, width=450, height=380)
plot = None
if "Initialization" in job_statistics:
piechart_data = {}
piechart_data["Initialization"] = initialization_perc
piechart_data["Training loop"] = training_loop_perc
piechart_data["Finalization"] = finalization_perc
plot = create_piechart(piechart_data,
height=350,
width=500,
x1=0.15,
x2=0.15,
radius=0.15,
toolbar_location=None)
if plot != None:
paragraph = Paragraph(text=f"""{text}""", width = 800)
show(column(paragraph, row(table, plot)))
else:
paragraph = Paragraph(text=f"""{text}. No step information was profiled from your training job. The time spent on initialization and finalization cannot be computed.""" , width = 800)
show(column(paragraph, row(table)))
```
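The duration arithmetic in the cell above reduces to a few unit conversions: `job_start` and `job_end` are reported in seconds since the epoch, while the step timestamps are in microseconds. A standalone sketch with made-up timestamps:

```
# Made-up timestamps: job boundaries in seconds, step timestamps in
# microseconds since epoch (as the profiler reports them).
job_start, job_end = 1_000.0, 1_600.0
first_step, last_step = 1_100.0 * 1_000_000, 1_550.0 * 1_000_000

job_duration = int(job_end - job_start)                    # 600 s
training_loop = int((last_step - first_step) / 1_000_000)  # 450 s
initialization = int(first_step / 1_000_000 - job_start)   # 100 s
finalization = int(abs(job_end - last_step / 1_000_000))   # 50 s

for name, secs in [("Initialization", initialization),
                   ("Training loop", training_loop),
                   ("Finalization", finalization)]:
    print(f"{name}: {secs} s ({int(secs / job_duration * 100)} %)")
```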
## System usage statistics
```
report = load_report('OverallSystemUsage')
text1 = ''
if report:
if "GPU" in report["Details"]:
for node_id in report["Details"]["GPU"]:
gpu_p95 = report["Details"]["GPU"][node_id]["p95"]
gpu_p50 = report["Details"]["GPU"][node_id]["p50"]
cpu_p95 = report["Details"]["CPU"][node_id]["p95"]
cpu_p50 = report["Details"]["CPU"][node_id]["p50"]
if gpu_p95 < 70 and cpu_p95 < 70:
text1 = f"""{text1}The 95th percentile of the total GPU utilization on node {node_id} is only {int(gpu_p95)}%.
The 95th percentile of the total CPU utilization is only {int(cpu_p95)}%. Node {node_id} is underutilized.
You may want to consider switching to a smaller instance type."""
elif gpu_p95 < 70 and cpu_p95 > 70:
text1 = f"""{text1}The 95th percentile of the total GPU utilization on node {node_id} is only {int(gpu_p95)}%.
However, the 95th percentile of the total CPU utilization is {int(cpu_p95)}%. GPUs on node {node_id} are underutilized,
likely because of CPU bottlenecks."""
elif gpu_p50 > 70:
text1 = f"""{text1}The median total GPU utilization on node {node_id} is {int(gpu_p50)}%.
GPUs on node {node_id} are well utilized."""
else:
text1 = f"""{text1}The median total GPU utilization on node {node_id} is {int(gpu_p50)}%.
The median total CPU utilization is {int(cpu_p50)}%."""
else:
for node_id in report["Details"]["CPU"]:
cpu_p95 = report["Details"]["CPU"][node_id]["p95"]
if cpu_p95 > 70:
text1 = f"""{text1}The 95th percentile of the total CPU utilization on node {node_id} is {int(cpu_p95)}%. CPUs on node {node_id} are well utilized."""
text1 = Paragraph(text=f"""{text1}""", width=1100)
text2 = Paragraph(text=f"""The following table shows statistics of resource utilization per worker (node),
such as the total CPU and GPU utilization, and the memory utilization on CPU and GPU.
The table also includes the total I/O wait time and the total amount of data sent or received in bytes.
The table shows min and max values as well as p99, p95 and p50 percentiles.""", width=900)
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
units = {"CPU": "percentage", "CPU memory": "percentage", "GPU": "percentage", "Network": "bytes", "GPU memory": "percentage", "I/O": "percentage"}
if report:
for metric in report['Details']:
for node_id in report['Details'][metric]:
values = report['Details'][metric][node_id]
rows.append([node_id, metric, units[metric], values['max'], values['p99'], values['p95'], values['p50'], values['min']])
df = pd.DataFrame(rows)
df.columns = ['Node', 'metric', 'unit', 'max', 'p99', 'p95', 'p50', 'min']
df2 = df.reset_index()
source = ColumnDataSource(data=df2)
columns = [TableColumn(field='Node', title="node"),
TableColumn(field='metric', title="metric"),
TableColumn(field='unit', title="unit"),
TableColumn(field='max', title="max"),
TableColumn(field='p99', title="p99"),
TableColumn(field='p95', title="p95"),
TableColumn(field='p50', title="p50"),
TableColumn(field='min', title="min"),]
table = DataTable(source=source, columns=columns, width=800, height=df2.shape[0]*30)
show(column( text1, text2, row(table)))
report = load_report('OverallFrameworkMetrics')
if report:
if 'Details' in report:
display(Markdown(f"""## Framework metrics summary"""))
plots = []
text = ''
if 'phase' in report['Details']:
text = f"""The following two pie charts show the time spent on the TRAIN phase, the EVAL phase,
and others. The 'others' includes the time spent between steps (after one step has finished and before
the next step has started). Ideally, most of the training time should be spent on the
TRAIN and EVAL phases. If TRAIN/EVAL were not specified in the training script, steps will be recorded as
GLOBAL."""
if 'others' in report['Details']['phase']:
others = float(report['Details']['phase']['others'])
if others > 25:
text = f"""{text} Your training job spent quite a significant amount of time ({round(others,2)}%) in phase "others".
You should check what is happening in between the steps."""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase and others")
plots.append(plot)
if 'forward_backward' in report['Details']:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the time was spent in event "{event}"."""
if perc > 70:
text = f"""There is quite a significant difference between the time spent on forward and backward
pass."""
else:
text = f"""{text} It shows that {int(perc)}% of the training time
was spent on "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plots)))
plots = []
text=''
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following piechart shows a breakdown of the CPU/GPU operators.
It shows that {int(ratio)}% of training time was spent on executing the "{key}" operator."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details']:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General framework operations")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plots)))
plots = []
text = ''
if 'horovod' in report['Details']:
display(Markdown(f"""#### Overview: Horovod metrics"""))
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""{text} The following pie chart shows a detailed breakdown of the Horovod metrics profiled
from your training job. The most expensive function was "{event}" with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Horovod metrics ")
paragraph = Paragraph(text=text, width=1100)
show(column(paragraph, row(plot)))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
values = []
if report:
if 'CPU_total' in report['Details']:
display(Markdown(f"""#### Overview: CPU operators"""))
event = max(report['Details']['CPU'], key=report['Details']['CPU'].get)
perc = report['Details']['CPU'][event]
for function in report['Details']['CPU']:
percentage = round(report['Details']['CPU'][function],2)
time = report['Details']['CPU_total'][function]
rows.append([percentage, time, function])
df = pd.DataFrame(rows)
df.columns = ['percentage', 'time', 'operator']
df = df.sort_values(by=['percentage'], ascending=False)
source = ColumnDataSource(data=df)
columns = [TableColumn(field='percentage', title="Percentage"),
TableColumn(field='time', title="Cumulative time in microseconds"),
TableColumn(field='operator', title="CPU operator"),]
table = DataTable(source=source, columns=columns, width=550, height=350)
text = Paragraph(text=f"""The following table shows a list of operators that ran on the CPUs.
The most expensive operator on the CPUs was "{event}" with {int(perc)} %.""")
plot = create_piechart(report['Details']['CPU'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
)
show(column(text, row(table, plot)))
pd.set_option('display.float_format', lambda x: '%.2f' % x)
rows = []
values = []
if report:
if 'GPU_total' in report['Details']:
display(Markdown(f"""#### Overview: GPU operators"""))
event = max(report['Details']['GPU'], key=report['Details']['GPU'].get)
perc = report['Details']['GPU'][event]
for function in report['Details']['GPU']:
percentage = round(report['Details']['GPU'][function],2)
time = report['Details']['GPU_total'][function]
rows.append([percentage, time, function])
df = pd.DataFrame(rows)
df.columns = ['percentage', 'time', 'operator']
df = df.sort_values(by=['percentage'], ascending=False)
source = ColumnDataSource(data=df)
columns = [TableColumn(field='percentage', title="Percentage"),
TableColumn(field='time', title="Cumulative time in microseconds"),
TableColumn(field='operator', title="GPU operator"),]
table = DataTable(source=source, columns=columns, width=450, height=350)
text = Paragraph(text=f"""The following table shows a list of operators that your training job ran on GPU.
The most expensive operator on GPU was "{event}" with {int(perc)} %""")
plot = create_piechart(report['Details']['GPU'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
)
show(column(text, row(table, plot)))
```
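The min/max/percentile columns in the statistics table can be reproduced from raw utilization samples with `numpy.percentile`; a small sketch with made-up CPU utilization values:

```
import numpy as np

# Made-up per-second CPU utilization samples (percent)
samples = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100])

stats = {
    "max": samples.max(),
    "p99": np.percentile(samples, 99),
    "p95": np.percentile(samples, 95),
    "p50": np.percentile(samples, 50),
    "min": samples.min(),
}
print(stats)
```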
## Rules summary
```
description = {}
description['CPUBottleneck'] = 'Checks if the CPU utilization is high and the GPU utilization is low. \
It might indicate CPU bottlenecks, where the GPUs are waiting for data to arrive \
from the CPUs. The rule evaluates the CPU and GPU utilization rates, and triggers the issue \
if the time spent on the CPU bottlenecks exceeds a threshold percent of the total training time. The default threshold is 50 percent.'
description['IOBottleneck'] = 'Checks if the data I/O wait time is high and the GPU utilization is low. \
It might indicate I/O bottlenecks where the GPU is waiting for data to arrive from storage. \
The rule evaluates the I/O and GPU utilization rates and triggers the issue \
if the time spent on the IO bottlenecks exceeds a threshold percent of the total training time. The default threshold is 50 percent.'
description['Dataloader'] = 'Checks how many data loaders are running in parallel and whether the total number is equal to the number \
of available CPU cores. The rule triggers if number is much smaller or larger than the number of available cores. \
If too small, it might lead to low GPU utilization. If too large, it might impact other compute intensive operations on CPU.'
description['GPUMemoryIncrease'] = 'Measures the average GPU memory footprint and triggers if there is a large increase.'
description['BatchSize'] = 'Checks if GPUs are underutilized because the batch size is too small. \
To detect this problem, the rule analyzes the average GPU memory footprint, \
the CPU and the GPU utilization. '
description['LowGPUUtilization'] = 'Checks if the GPU utilization is low or fluctuating. \
This can happen due to bottlenecks, blocking calls for synchronizations, \
or a small batch size.'
description['MaxInitializationTime'] = 'Checks if the time spent on initialization exceeds a threshold percent of the total training time. \
The rule waits until the first step of training loop starts. The initialization can take longer \
if downloading the entire dataset from Amazon S3 in File mode. The default threshold is 20 minutes.'
description['LoadBalancing'] = 'Detects workload balancing issues across GPUs. \
Workload imbalance can occur in training jobs with data parallelism. \
The gradients are accumulated on a primary GPU, and this GPU might be overused \
with regard to other GPUs, resulting in reducing the efficiency of data parallelization.'
description['StepOutlier'] = 'Detects outliers in step duration. The step duration for forward and backward pass should be \
roughly the same throughout the training. If there are significant outliers, \
it may indicate a system stall or bottleneck issues.'
recommendation = {}
recommendation['CPUBottleneck'] = 'Consider increasing the number of data loaders \
or applying data pre-fetching.'
recommendation['IOBottleneck'] = 'Pre-fetch data or choose different file formats, such as binary formats that \
improve I/O performance.'
recommendation['Dataloader'] = 'Change the number of data loader processes.'
recommendation['GPUMemoryIncrease'] = 'Choose a larger instance type with more memory if footprint is close to maximum available memory.'
recommendation['BatchSize'] = 'The batch size is too small, and GPUs are underutilized. Consider running on a smaller instance type or increasing the batch size.'
recommendation['LowGPUUtilization'] = 'Check if there are bottlenecks, minimize blocking calls, \
change distributed training strategy, or increase the batch size.'
recommendation['MaxInitializationTime'] = 'Initialization takes too long. \
If using File mode, consider switching to Pipe mode if you are using the TensorFlow framework.'
recommendation['LoadBalancing'] = 'Choose a different distributed training strategy or \
a different distributed training framework.'
recommendation['StepOutlier'] = 'Check if there are any bottlenecks (CPU, I/O) correlated to the step outliers.'
files = glob.glob('/opt/ml/processing/output/rule/profiler-output/profiler-reports/*json')
summary = {}
for i in files:
rule_name = i.split('/')[-1].replace('.json','')
if rule_name == "OverallSystemUsage" or rule_name == "OverallFrameworkMetrics":
continue
rule_report = json.load(open(i))
summary[rule_name] = {}
summary[rule_name]['Description'] = description[rule_name]
summary[rule_name]['Recommendation'] = recommendation[rule_name]
summary[rule_name]['Number of times rule triggered'] = rule_report['RuleTriggered']
#summary[rule_name]['Number of violations'] = rule_report['Violations']
summary[rule_name]['Number of datapoints'] = rule_report['Datapoints']
summary[rule_name]['Rule parameters'] = rule_report['RuleParameters']
df = pd.DataFrame.from_dict(summary, orient='index')
df = df.sort_values(by=['Number of times rule triggered'], ascending=False)
display(Markdown(f"""The following table shows a profiling summary of the Debugger built-in rules.
The table is sorted by the rules that triggered the most frequently. During your training job, the {df.index[0]} rule
was the most frequently triggered. It processed {df.values[0,3]} datapoints and was triggered {df.values[0,2]} times."""))
with pd.option_context('display.colheader_justify','left'):
pretty_print(df)
analyse_phase = "training"
if job_statistics and "initialization_in_seconds" in job_statistics:
if job_statistics["initialization_in_seconds"] > job_statistics["training_loop_duration_in_seconds"]:
analyse_phase = "initialization"
time = job_statistics["initialization_in_seconds"]
perc = job_statistics["initialization_%"]
display(Markdown(f"""The initialization phase took {int(time)} seconds, which is {int(perc)}%
of the total training time. Since initialization took the most time,
we dive deep into the events occurring during this phase."""))
display(Markdown("""## Analyzing initialization\n\n"""))
time = job_statistics["training_loop_duration_in_seconds"]
perc = job_statistics["training_loop_%"]
display(Markdown(f"""The training loop lasted for {int(time)} seconds which is {int(perc)}% of the training job time.
Since the training loop has taken the most time, we dive deep into the events that occurred during this phase."""))
if analyse_phase == 'training':
display(Markdown("""## Analyzing the training loop\n\n"""))
if analyse_phase == "initialization":
display(Markdown("""### MaxInitializationTime\n\nThis rule helps to detect if the training initialization is taking too much time. \nThe rule waits until the first step is available. The rule takes the parameter `threshold` that defines how many minutes to wait for the first step to become available. Default is 20 minutes.\nYou can run the rule locally in the following way:
"""))
_ = load_report("MaxInitializationTime")
if analyse_phase == "training":
display(Markdown("""### Step duration analysis"""))
report = load_report('StepOutlier')
if report:
parameters = report['RuleParameters']
params = report['RuleParameters'].split('\n')
stddev = params[3].split(':')[1]
mode = params[1].split(':')[1]
n_outlier = params[2].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text = f"""The StepOutlier rule measures step durations and checks for outliers. The rule
returns True if duration is larger than {stddev} times the standard deviation. The rule
also takes the parameter mode, that specifies whether steps from training or validation phase
should be checked. In your processing job, mode was specified as {mode}.
Typically the first step takes significantly more time, and to avoid the
rule triggering immediately, one can use n_outliers to specify the number of outliers to ignore.
n_outliers was set to {n_outlier}.
The rule analysed {datapoints} datapoints and triggered {triggered} times.
"""
paragraph = Paragraph(text=text, width=900)
show(column(paragraph))
if report and len(report['Details']['step_details']) > 0:
for node_id in report['Details']['step_details']:
tmp = report['RuleParameters'].split('threshold:')
threshold = tmp[1].split('\n')[0]
n_outliers = report['Details']['step_details'][node_id]['number_of_outliers']
mean = report['Details']['step_details'][node_id]['step_stats']['mean']
stddev = report['Details']['step_details'][node_id]['stddev']
phase = report['Details']['step_details'][node_id]['phase']
display(Markdown(f"""**Step durations on node {node_id}:**"""))
display(Markdown(f"""The following table is a summary of the statistics of step durations measured on node {node_id}.
The rule has analyzed the step duration from {phase} phase.
The average step duration on node {node_id} was {round(mean, 2)}s.
The rule detected {n_outliers} outliers, where step duration was larger than {threshold} times the standard deviation of {stddev}s.
\n"""))
step_stats_df = pd.DataFrame.from_dict(report['Details']['step_details'][node_id]['step_stats'], orient='index').T
step_stats_df.index = ['Step Durations in [s]']
pretty_print(step_stats_df)
display(Markdown(f"""The following histogram shows the step durations measured on the different nodes.
You can turn on or turn off the visualization of histograms by selecting or unselecting the labels in the legend."""))
plot = figure(plot_height=450,
plot_width=850,
title=f"""Step durations""")
colors = bokeh.palettes.viridis(len(report['Details']['step_details']))
for index, node_id in enumerate(report['Details']['step_details']):
probs = report['Details']['step_details'][node_id]['probs']
binedges = report['Details']['step_details'][node_id]['binedges']
plot.quad( top=probs,
bottom=0,
left=binedges[:-1],
right=binedges[1:],
line_color="white",
fill_color=colors[index],
fill_alpha=0.7,
legend=node_id)
plot.add_layout(Legend(), 'right')
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Step durations in [s]"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
plot.legend.location = 'center_right'
show(plot)
if report['RuleTriggered'] > 0:
text=f"""To get a better understanding of what may have caused those outliers,
we correlate the timestamps of step outliers with other framework metrics that happened at the same time.
The left chart shows how much time was spent in the different framework
metrics aggregated by event phase. The chart on the right shows the histogram of normal step durations (without
outliers). The following chart shows how much time was spent in the different
framework metrics when step outliers occurred. In this chart, framework metrics are not aggregated by phase."""
plots = []
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether step outliers mainly happened during TRAIN or EVAL phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The Ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators executed during the step outliers.
It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics that have been
recorded when step outliers happened. The most expensive function was {event} with {int(perc)}%"""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### GPU utilization analysis\n\n"""))
display(Markdown("""**Usage per GPU** \n\n"""))
report = load_report('LowGPUUtilization')
if report:
params = report['RuleParameters'].split('\n')
threshold_p95 = params[0].split(':')[1]
threshold_p5 = params[1].split(':')[1]
window = params[2].split(':')[1]
patience = params[3].split(':')[1]
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=Paragraph(text=f"""The LowGPUUtilization rule checks for low and fluctuating GPU usage. If the GPU usage is
consistently low, it might be caused by bottlenecks or a small batch size. If usage is heavily
fluctuating, it can be due to bottlenecks or blocking calls. The rule computed the 95th and 5th
percentile of GPU utilization on {window} continuous datapoints and found {violations} cases where
p95 was above {threshold_p95}% and p5 was below {threshold_p5}%. If p95 is high and p5 is low,
it might indicate that the GPU usage is highly fluctuating. If both values are very low,
it would mean that the machine is underutilized. During initialization, the GPU usage is likely zero,
so the rule skipped the first {patience} data points.
The rule analysed {datapoints} datapoints and triggered {triggered} times.""", width=800)
show(text)
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
text = Paragraph(text=f"""Your training job is underutilizing the instance. You may want to consider
either switching to a smaller instance type or increasing the batch size.
The last time that the LowGPUUtilization rule was triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps.
They show the utilization per GPU (without outliers).
To get a better understanding of the workloads throughout the whole training,
you can check the workload histogram in the next section.""", width=800)
show(text)
del report['Details']['last_timestamp']
for node_id in report['Details']:
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,17),
)
for index, key in enumerate(report['Details'][node_id]):
display(Markdown(f"""**GPU utilization of {key} on node {node_id}:**"""))
text = ""
gpu_max = report['Details'][node_id][key]['gpu_max']
p_95 = report['Details'][node_id][key]['gpu_95']
p_5 = report['Details'][node_id][key]['gpu_5']
text = f"""{text} The max utilization of {key} on node {node_id} was {gpu_max}%"""
if p_95 < int(threshold_p95):
text = f"""{text} and the 95th percentile was only {p_95}%.
{key} on node {node_id} is underutilized."""
if p_5 < int(threshold_p5):
text = f"""{text} and the 5th percentile was only {p_5}%."""
if p_95 - p_5 > 50:
text = f"""{text} The difference between 5th percentile {p_5}% and 95th percentile {p_95}% is quite
significant, which means that utilization on {key} is fluctuating quite a lot.\n"""
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
text=Paragraph(text=f"""{text}""", width=900)
show(text)
plot.yaxis.axis_label = "Utilization in %"
plot.xaxis.ticker = np.arange(index+2)
show(plot)
if analyse_phase == "training":
display(Markdown("""**Workload balancing**\n\n"""))
report = load_report('LoadBalancing')
if report:
params = report['RuleParameters'].split('\n')
threshold = params[0].split(':')[1]
patience = params[1].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
paragraph = Paragraph(text=f"""The LoadBalancing rule helps to detect issues in workload balancing
between multiple GPUs.
It computes a histogram of GPU utilization values for each GPU and then compares the
similarity between histograms. The rule checked if the distance of histograms is larger than the
threshold of {threshold}.
During initialization utilization is likely zero, so the rule skipped the first {patience} data points.
""", width=900)
show(paragraph)
if len(report['Details']) > 0:
for node_id in report['Details']:
text = f"""The following histogram shows the workload per GPU on node {node_id}.
You can enable/disable the visualization of a workload by clicking on the label in the legend.
"""
if len(report['Details']) == 1 and len(report['Details'][node_id]['workloads']) == 1:
text = f"""{text} Your training job only used one GPU so there is no workload balancing issue."""
plot = figure(plot_height=450,
plot_width=850,
x_range=(-1,100),
title=f"""Workloads on node {node_id}""")
colors = bokeh.palettes.viridis(len(report['Details'][node_id]['workloads']))
for index, gpu_id2 in enumerate(report['Details'][node_id]['workloads']):
probs = report['Details'][node_id]['workloads'][gpu_id2]
plot.quad( top=probs,
bottom=0,
left=np.arange(0,98,2),
right=np.arange(2,100,2),
line_color="white",
fill_color=colors[index],
fill_alpha=0.8,
legend=gpu_id2 )
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Utilization"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
paragraph = Paragraph(text=text)
show(column(paragraph, plot))
if "distances" in report['Details'][node_id]:
text = f"""The rule identified workload balancing issues on node {node_id}
where workloads differed by more than the threshold of {threshold}.
"""
for index, gpu_id2 in enumerate(report['Details'][node_id]['distances']):
for gpu_id1 in report['Details'][node_id]['distances'][gpu_id2]:
distance = round(report['Details'][node_id]['distances'][gpu_id2][gpu_id1], 2)
text = f"""{text} The difference of workload between {gpu_id2} and {gpu_id1} is: {distance}."""
paragraph = Paragraph(text=f"""{text}""", width=900)
show(column(paragraph))
if analyse_phase == "training":
display(Markdown("""### Dataloading analysis\n\n"""))
report = load_report('Dataloader')
if report:
params = report['RuleParameters'].split("\n")
min_threshold = params[0].split(':')[1]
max_threshold = params[1].split(':')[1]
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=f"""The number of dataloader workers can greatly affect the overall performance
of your training job. The rule analyzed the number of dataloading processes that have been running in
parallel on the training instance and compared it against the total number of cores.
The rule checked if the number of processes is smaller than {min_threshold}% or larger than
{max_threshold}% of the total number of cores. Having too few dataloader workers can slow down data preprocessing and lead to GPU
underutilization. Having too many dataloader workers may hurt the
overall performance if you are running other compute intensive tasks on the CPU.
The rule analysed {datapoints} datapoints and triggered {triggered} times."""
paragraph = Paragraph(text=f"{text}", width=900)
show(paragraph)
text = ""
if 'cores' in report['Details']:
cores = int(report['Details']['cores'])
dataloaders = report['Details']['dataloaders']
if dataloaders < cores:
text=f"""{text} Your training instance provided {cores} CPU cores, however your training job only
ran on average {dataloaders} dataloader workers in parallel. We recommend increasing the number of
dataloader workers."""
if dataloaders > cores:
text=f"""{text} Your training instance provided {cores} CPU cores, however your training job ran
on average {dataloaders} dataloader workers. We recommend decreasing the number of dataloader
workers."""
if 'pin_memory' in report['Details'] and report['Details']['pin_memory'] == False:
text=f"""{text} Using pinned memory also improves performance because it enables fast data transfer to CUDA-enabled GPUs.
The rule detected that your training job was not using pinned memory.
In case of using PyTorch Dataloader, you can enable this by setting pin_memory=True."""
if 'prefetch' in report['Details'] and report['Details']['prefetch'] == False:
text=f"""{text} It appears that your training job did not perform any data pre-fetching. Pre-fetching can improve your
data input pipeline as it produces the data ahead of time."""
paragraph = Paragraph(text=f"{text}", width=900)
show(paragraph)
colors=bokeh.palettes.viridis(10)
if "dataloading_time" in report['Details']:
median = round(report['Details']["dataloading_time"]['p50'],4)
p95 = round(report['Details']["dataloading_time"]['p95'],4)
p25 = round(report['Details']["dataloading_time"]['p25'],4)
binedges = report['Details']["dataloading_time"]['binedges']
probs = report['Details']["dataloading_time"]['probs']
text=f"""The following histogram shows the distribution of dataloading times that have been measured throughout your training job. The median dataloading time was {median}s.
The 95th percentile was {p95}s and the 25th percentile was {p25}s."""
plot = figure(plot_height=450,
plot_width=850,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
x_range=(binedges[0], binedges[-1])
)
plot.quad( top=probs,
bottom=0,
left=binedges[:-1],
right=binedges[1:],
line_color="white",
fill_color=colors[0],
fill_alpha=0.8,
legend="Dataloading events" )
plot.y_range.start = 0
plot.xaxis.axis_label = f"""Dataloading in [s]"""
plot.yaxis.axis_label = "Occurrences"
plot.grid.grid_line_color = "white"
plot.legend.click_policy="hide"
paragraph = Paragraph(text=f"{text}", width=900)
show(column(paragraph, plot))
if analyse_phase == "training":
display(Markdown(""" ### Batch size"""))
report = load_report('BatchSize')
if report:
params = report['RuleParameters'].split('\n')
cpu_threshold_p95 = int(params[0].split(':')[1])
gpu_threshold_p95 = int(params[1].split(':')[1])
gpu_memory_threshold_p95 = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
window = int(params[4].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text = Paragraph(text=f"""The BatchSize rule helps to detect if GPU is underutilized because of the batch size being
too small. To detect this, the rule analyzes the GPU memory footprint and the CPU and GPU utilization. The rule checked if the 95th percentile of CPU utilization is below cpu_threshold_p95 of
{cpu_threshold_p95}%, the 95th percentile of GPU utilization is below gpu_threshold_p95 of {gpu_threshold_p95}%, and the 95th percentile of the memory footprint is \
below gpu_memory_threshold_p95 of {gpu_memory_threshold_p95}%. In your training job this happened {violations} times. \
The rule skipped the first {patience} datapoints. The rule computed the percentiles over window size of {window} continuous datapoints.\n
The rule analysed {datapoints} datapoints and triggered {triggered} times.
""", width=800)
show(text)
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
del report['Details']['last_timestamp']
text = Paragraph(text=f"""Your training job is underutilizing the instance. You may want to consider
either switching to a smaller instance type or increasing the batch size.
The last time the BatchSize rule triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps. They show the total
CPU utilization, the GPU utilization, and the GPU memory usage per GPU (without outliers).""",
width=800)
show(text)
for node_id in report['Details']:
xmax = max(20, len(report['Details'][node_id]))
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,xmax)
)
for index, key in enumerate(report['Details'][node_id]):
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
plot.xaxis.ticker = np.arange(index+2)
plot.yaxis.axis_label = "Utilization in %"
show(plot)
if analyse_phase == "training":
display(Markdown("""### CPU bottlenecks\n\n"""))
report = load_report('CPUBottleneck')
if report:
params = report['RuleParameters'].split('\n')
threshold = int(params[0].split(':')[1])
cpu_threshold = int(params[1].split(':')[1])
gpu_threshold = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
if report['Violations'] > 0:
perc = int(report['Violations']/report['Datapoints']*100)
else:
perc = 0
if perc < threshold:
string = 'below'
else:
string = 'above'
text = f"""The CPUBottleneck rule checked when the CPU utilization was above cpu_threshold of {cpu_threshold}%
and GPU utilization was below gpu_threshold of {gpu_threshold}%.
During initialization utilization is likely to be zero, so the rule skipped the first {patience} datapoints.
With this configuration the rule found {violations} CPU bottlenecks, which is {perc}% of the total time. This is {string} the threshold of {threshold}%.
The rule analysed {datapoints} data points and triggered {triggered} times."""
paragraph = Paragraph(text=text, width=900)
show(paragraph)
if report:
plots = []
text = ""
if report['RuleTriggered'] > 0:
low_gpu = report['Details']['low_gpu_utilization']
cpu_bottleneck = {}
cpu_bottleneck["GPU usage above threshold"] = report["Datapoints"] - report["Details"]["low_gpu_utilization"]
cpu_bottleneck["GPU usage below threshold"] = report["Details"]["low_gpu_utilization"] - len(report["Details"]["bottlenecks"])
cpu_bottleneck["Low GPU usage due to CPU bottlenecks"] = len(report["Details"]["bottlenecks"])
n_bottlenecks = round(len(report['Details']['bottlenecks'])/datapoints * 100, 2)
text = f"""The following chart (left) shows how many datapoints were below the gpu_threshold of {gpu_threshold}%
and how many of those datapoints were likely caused by a CPU bottleneck. The rule found {low_gpu} out of {datapoints} datapoints which had a GPU utilization
below {gpu_threshold}%. Out of those datapoints {n_bottlenecks}% were likely caused by CPU bottlenecks.
"""
plot = create_piechart(cpu_bottleneck,
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Low GPU usage caused by CPU bottlenecks")
plots.append(plot)
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether CPU bottlenecks mainly
happened during train/validation phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between time spent on TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators that happened during CPU bottlenecks.
It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics
that have been recorded when the CPU bottleneck happened. The most expensive function was
{event} with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### I/O bottlenecks\n\n"""))
report = load_report('IOBottleneck')
if report:
params = report['RuleParameters'].split('\n')
threshold = int(params[0].split(':')[1])
io_threshold = int(params[1].split(':')[1])
gpu_threshold = int(params[2].split(':')[1])
patience = int(params[3].split(':')[1])
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
if report['Violations'] > 0:
perc = int(report['Violations']/report['Datapoints']*100)
else:
perc = 0
if perc < threshold:
string = 'below'
else:
string = 'above'
text = f"""The IOBottleneck rule checked when I/O wait time was above io_threshold of {io_threshold}%
and GPU utilization was below gpu_threshold of {gpu_threshold}%. During initialization utilization is likely to be zero, so the rule skipped the first {patience} datapoints.
With this configuration the rule found {violations} I/O bottlenecks which is {perc}% of the total time. This is {string} the threshold of {threshold}%.
The rule analysed {datapoints} datapoints and triggered {triggered} times."""
paragraph = Paragraph(text=text, width=900)
show(paragraph)
if report:
plots = []
text = ""
if report['RuleTriggered'] > 0:
low_gpu = report['Details']['low_gpu_utilization']
cpu_bottleneck = {}
cpu_bottleneck["GPU usage above threshold"] = report["Datapoints"] - report["Details"]["low_gpu_utilization"]
cpu_bottleneck["GPU usage below threshold"] = report["Details"]["low_gpu_utilization"] - len(report["Details"]["bottlenecks"])
cpu_bottleneck["Low GPU usage due to I/O bottlenecks"] = len(report["Details"]["bottlenecks"])
n_bottlenecks = round(len(report['Details']['bottlenecks'])/datapoints * 100, 2)
text = f"""The following chart (left) shows how many datapoints were below the gpu_threshold of {gpu_threshold}%
and how many of those datapoints were likely caused by an I/O bottleneck. The rule found {low_gpu} out of {datapoints} datapoints which had a GPU utilization
below {gpu_threshold}%. Out of those datapoints {n_bottlenecks}% were likely caused by I/O bottlenecks.
"""
plot = create_piechart(cpu_bottleneck,
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Low GPU usage caused by I/O bottlenecks")
plots.append(plot)
if 'phase' in report['Details']:
text = f"""{text} The chart (in the middle) shows whether I/O bottlenecks mainly happened during the training or validation phase.
"""
plot = create_piechart(report['Details']['phase'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between the time spent on the TRAIN/EVAL phase")
plots.append(plot)
if 'forward_backward' in report['Details'] and len(report['Details']['forward_backward']) > 0:
event = max(report['Details']['forward_backward'], key=report['Details']['forward_backward'].get)
perc = report['Details']['forward_backward'][event]
text = f"""{text} The pie chart on the right shows a more detailed breakdown.
It shows that {int(perc)}% of the training time was spent on event "{event}"."""
plot = create_piechart(report['Details']['forward_backward'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="The ratio between forward and backward pass")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'ratio' in report['Details'] and len(report['Details']['ratio']) > 0:
key = list(report['Details']['ratio'].keys())[0]
ratio = report['Details']['ratio'][key]
text = f"""The following pie chart shows a breakdown of the CPU/GPU operators that happened
during I/O bottlenecks. It shows that {int(ratio)}% of the training time was spent on executing operators in "{key}"."""
plot = create_piechart(report['Details']['ratio'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="Ratio between CPU/GPU operators")
plots.append(plot)
if 'general' in report['Details'] and len(report['Details']['general']) > 0:
event = max(report['Details']['general'], key=report['Details']['general'].get)
perc = report['Details']['general'][event]
plot = create_piechart(report['Details']['general'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
plots.append(plot)
if len(plots) > 0:
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plots)))
plots = []
text = ""
if 'horovod' in report['Details'] and len(report['Details']['horovod']) > 0:
event = max(report['Details']['horovod'], key=report['Details']['horovod'].get)
perc = report['Details']['horovod'][event]
text = f"""The following pie chart shows a detailed breakdown of the Horovod metrics that have been
recorded when the I/O bottleneck happened. The most expensive function was {event} with {int(perc)}%."""
plot = create_piechart(report['Details']['horovod'],
height=350,
width=600,
x1=0.2,
x2=0.6,
radius=0.3,
title="General metrics recorded in framework ")
paragraph = Paragraph(text=text, width=900)
show(column(paragraph, row(plot)))
if analyse_phase == "training":
display(Markdown("""### GPU memory\n\n"""))
report = load_report('GPUMemoryIncrease')
if report:
params = report['RuleParameters'].split('\n')
increase = float(params[0].split(':')[1])
patience = params[1].split(':')[1]
window = params[2].split(':')[1]
violations = report['Violations']
triggered = report['RuleTriggered']
datapoints = report['Datapoints']
text=Paragraph(text=f"""The GPUMemoryIncrease rule helps to detect large increases in memory usage on GPUs.
The rule checked if the moving average of memory increased by more than {increase}%.
So if the moving average increased, for instance, from 10% to {10+increase}%,
the rule would have triggered. During initialization utilization is likely 0, so the rule skipped the first {patience} datapoints.
The moving average was computed on a window size of {window} continuous datapoints. The rule detected {violations} violations
where the moving average between previous and current time window increased by more than {increase}%.
The rule analysed {datapoints} datapoints and triggered {triggered} times.""",
width=900)
show(text)
if len(report['Details']) > 0:
timestamp = us_since_epoch_to_human_readable_time(report['Details']['last_timestamp'])
date = datetime.datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S:%f')
day = date.date().strftime("%m/%d/%Y")
hour = date.time().strftime("%H:%M:%S")
text = Paragraph(text=f"""Your training job triggered memory spikes.
The last time the GPUMemoryIncrease rule triggered in your training job was on {day} at {hour}.
The following boxplots are a snapshot from the timestamps. They show for each node and GPU the corresponding
memory utilization (without outliers).""", width=900)
show(text)
del report['Details']['last_timestamp']
for node_id in report['Details']:
plot = figure(plot_height=350,
plot_width=1000,
toolbar_location='right',
tools="hover,wheel_zoom,reset,pan",
title=f"Node {node_id}",
x_range=(0,17),
)
for index, key in enumerate(report['Details'][node_id]):
display(Markdown(f"""**Memory utilization of {key} on node {node_id}:**"""))
text = ""
gpu_max = report['Details'][node_id][key]['gpu_max']
text = f"""{text} The max memory utilization of {key} on node {node_id} was {gpu_max}%."""
p_95 = int(report['Details'][node_id][key]['p95'])
p_5 = report['Details'][node_id][key]['p05']
if p_95 < 50:
text = f"""{text} The 95th percentile was only {p_95}%."""
if p_5 < 5:
text = f"""{text} The 5th percentile was only {p_5}%."""
if p_95 - p_5 > 50:
text = f"""{text} The difference between 5th percentile {p_5}% and 95th percentile {p_95}% is quite
significant, which means that memory utilization on {key} is fluctuating quite a lot."""
text = Paragraph(text=f"""{text}""", width=900)
show(text)
upper = report['Details'][node_id][key]['upper']
lower = report['Details'][node_id][key]['lower']
p75 = report['Details'][node_id][key]['p75']
p25 = report['Details'][node_id][key]['p25']
p50 = report['Details'][node_id][key]['p50']
plot.segment(index+1, upper, index+1, p75, line_color="black")
plot.segment(index+1, lower, index+1, p25, line_color="black")
plot.vbar(index+1, 0.7, p50, p75, fill_color="#FDE725", line_color="black")
plot.vbar(index+1, 0.7, p25, p50, fill_color="#440154", line_color="black")
plot.rect(index+1, lower, 0.2, 0.01, line_color="black")
plot.rect(index+1, upper, 0.2, 0.01, line_color="black")
plot.xaxis.major_label_overrides[index+1] = key
plot.xgrid.grid_line_color = None
plot.ygrid.grid_line_color = "white"
plot.grid.grid_line_width = 0
plot.xaxis.major_label_text_font_size="10px"
plot.xaxis.ticker = np.arange(index+2)
plot.yaxis.axis_label = "Utilization in %"
show(plot)
```
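The boxplots in this report are drawn from precomputed summary statistics (p5/p25/p50/p75/p95 plus upper/lower whisker values). As a rough sketch of how such statistics could be derived from raw utilization samples — the sample data and variable names below are illustrative, not the rule's actual implementation:

```python
import numpy as np

# Hypothetical raw GPU utilization samples in percent (one value per timestep).
samples = np.array([5, 12, 18, 25, 40, 55, 60, 72, 80, 95], dtype=float)

# The percentiles the report's boxplots are built from.
p5, p25, p50, p75, p95 = np.percentile(samples, [5, 25, 50, 75, 95])

# Tukey-style whiskers: 1.5 * IQR beyond the quartiles, clipped to the data range.
iqr = p75 - p25
lower = max(samples.min(), p25 - 1.5 * iqr)
upper = min(samples.max(), p75 + 1.5 * iqr)

print(p50, lower, upper)  # → 47.5 5.0 95.0
```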
# Overview
This lab has been adapted from the angr [motivating example](https://github.com/angr/angr-doc/tree/master/examples/fauxware). It shows the basic lifecycle and capabilities of the angr framework.
Note this lab (and other notebooks running angr) should be run with the Python 3 kernel!
Look at fauxware.c! This is the source code for a "faux firmware" (@zardus really likes the puns) that's meant to be a simple representation of a firmware that can authenticate users but also has a backdoor - the backdoor is that anybody who provides the string "SOSNEAKY" as their password will be automatically authenticated.
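To make the control flow concrete, here is a toy Python re-implementation of the logic described above. This is a sketch for intuition only — the real check lives in the compiled C binary, and the legitimate-user database here is made up:

```python
BACKDOOR = "SOSNEAKY"

def authenticate(username: str, password: str) -> bool:
    """Toy model of fauxware's check: the backdoor password always wins."""
    if password == BACKDOOR:       # the branch the symbolic executor discovers
        return True
    stored = {"alice": "hunter2"}  # hypothetical stand-in for the real credential store
    return stored.get(username) == password

print(authenticate("anybody", "SOSNEAKY"))  # → True
print(authenticate("alice", "wrong"))       # → False
```

angr's job, in effect, is to find an input that drives execution down the first branch without being told the constant in advance.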
```
# import the python system and angr libraries
import angr
import sys
# We can use this as a basic demonstration of using angr for symbolic execution.
# First, we load the binary into an angr project.
p = angr.Project('/home/pac/Desktop/lab7/fauxware/fauxware')
# Now, we want to construct a representation of symbolic program state.
# SimState objects are what angr manipulates when it symbolically executes
# binary code.
# The entry_state constructor generates a SimState that is a very generic
# representation of the possible program states at the program's entry
# point. There are more constructors, like blank_state, which constructs a
# "blank slate" state that specifies as little concrete data as possible,
# or full_init_state, which performs a slow and pedantic initialization of
# program state as it would execute through the dynamic loader.
state = p.factory.entry_state()
# Now, in order to manage the symbolic execution process from a very high
# level, we have a SimulationManager. SimulationManager is just collections
# of states with various tags attached with a number of convenient
# interfaces for managing them.
sm = p.factory.simulation_manager(state)
# Now, we begin execution. This will symbolically execute the program until
# we reach a branch statement for which both branches are satisfiable.
sm.run(until=lambda sm_: len(sm_.active) > 1)
# If you look at the C code, you see that the first "if" statement that the
# program can come across is comparing the result of the strcmp with the
# backdoor password. So, we have halted execution with two states, each of
# which has taken a different arm of that conditional branch. If you drop
# an IPython shell here and examine sm.active[n].solver.constraints
# you will see the encoding of the condition that was added to the state to
# constrain it to going down this path, instead of the other one. These are
# the constraints that will eventually be passed to our constraint solver
# (z3) to produce a set of concrete inputs satisfying them.
# As a matter of fact, we'll do that now.
input_0 = sm.active[0].posix.dumps(0)
input_1 = sm.active[1].posix.dumps(0)
# We have used a utility function on the state's posix plugin to perform a
# quick and dirty concretization of the content in file descriptor zero,
# stdin. One of these strings should contain the substring "SOSNEAKY"!
if b'SOSNEAKY' in input_0:
analysis_result = input_0
else:
analysis_result = input_1
print("Result: " + str(analysis_result))
with open("/home/pac/Desktop/lab7/fauxware/analysis_result", "wb") as file:
file.write(analysis_result)
# You should be able to run this script and pipe its output to fauxware and
# fauxware will authenticate you!
import os
command = "/home/pac/Desktop/lab7/fauxware/fauxware < /home/pac/Desktop/lab7/fauxware/analysis_result"
print(os.popen(command).read())
```
```
from bs4 import BeautifulSoup as soup
from urllib.request import urlopen as ureq
from selenium import webdriver
import time
import re
url = 'https://programs.usask.ca/engineering/first-year/index.php#Year14144creditunits'
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--ignore-certificate-errors')
chrome_options.add_argument('--incognito')
chrome_options.add_argument('--headless')
driver = webdriver.Chrome("C:\\Users\\jerry\\Downloads\\chromedriver", options=chrome_options)
driver.get(url)
time.sleep(3)
```
# 1. Collect course link texts for webdriver to click on
```
page_html = driver.page_source
link_texts = re.findall(r"[A-Z]+ [0-9]{3}\.[0-9]", page_html)
link_texts = list(dict.fromkeys(link_texts))
link_texts
len(link_texts)
```
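The pattern matches course codes of the form "ABC 123.4" (uppercase letters, a space, three digits, a dot, one digit), and `dict.fromkeys` deduplicates while preserving first-seen order. A quick standalone check — the sample text below is made up:

```python
import re

sample_html = "Take ME 114.3 then PHYS 115.3, and ME 114.3 again in summer."
# Raw string avoids the invalid-escape warning for \. in normal string literals.
codes = re.findall(r"[A-Z]+ [0-9]{3}\.[0-9]", sample_html)
unique_codes = list(dict.fromkeys(codes))  # order-preserving de-duplication
print(unique_codes)  # → ['ME 114.3', 'PHYS 115.3']
```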
# 2. Test run - try to scrape the first course
```
link = driver.find_element_by_link_text(link_texts[0])
link.click()
time.sleep(2)
driver.page_source
page_soup = soup(driver.page_source, 'lxml')
page_soup.find("h1", {"class": "uofs-page-title"}).text.strip()[:-2]
page_soup.find("p", {"class": "lead"}).text.strip()
page_soup.findAll("div", {"class": "uofs-subsection"})[1].find("p").text.strip()
driver.back()
```
# 3. Test run successful. Implement automation script to scrape all courses
```
from selenium.common.exceptions import NoSuchElementException
course_codes = []
course_names = []
course_descs = []
counter = 0
for link_text in link_texts:
#go to course page
try:
link = driver.find_element_by_partial_link_text(link_text)
except NoSuchElementException:
print("no link for {}".format(link_text))
continue
time.sleep(2)
link.click()
time.sleep(2)
page_soup = soup(driver.page_source, 'lxml')
#scrape data
course_codes.append(page_soup.find("h1", {"class": "uofs-page-title"}).text.strip()[:-2])
course_names.append(page_soup.find("p", {"class": "lead"}).text.strip())
course_descs.append(page_soup.findAll("div", {"class": "uofs-subsection"})[1].find("p").text.strip())
print("Scraped ", page_soup.find("h1", {"class": "uofs-page-title"}).text.strip()[:-2])
counter += 1
driver.back()
time.sleep(2)
print("Finished scraping {} courses".format(counter))
```
# 4. Inspect, clean, and write to CSV
```
course_codes
course_names
course_descs
# these three courses are not taken by mechanical engineering students
irrelevant_codes = ["CMPT 146", "CHE 113", "CE 271"]
mech_codes = []
mech_names = []
mech_descs = []
for i in range(len(course_codes)):
if course_codes[i] not in irrelevant_codes:
mech_codes.append(course_codes[i])
mech_names.append(course_names[i])
mech_descs.append(course_descs[i])
mech_codes
mech_names
mech_descs
import pandas as pd
df = pd.DataFrame({
"Course Number": mech_codes,
"Course Name": mech_names,
"Course Description": mech_descs
})
df.to_csv('USaskatchewan_Engineering_Common_First_Year_Courses.csv', index = False)
len(mech_codes)
driver.quit()
```
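For reference, the three parallel filtering loops above can also be expressed as a single pandas operation once the DataFrame is built. A sketch with illustrative rows (the course data below is made up, not scraped):

```python
import pandas as pd

# Illustrative rows standing in for the scraped course data
df = pd.DataFrame({
    "Course Number": ["MATH 110", "CMPT 146", "PHYS 115"],
    "Course Name": ["Calculus I", "Programming", "Physics I"],
})
irrelevant_codes = ["CMPT 146", "CHE 113", "CE 271"]

# Keep only rows whose course number is not in the irrelevant list
mech_df = df[~df["Course Number"].isin(irrelevant_codes)].reset_index(drop=True)
print(mech_df["Course Number"].tolist())  # ['MATH 110', 'PHYS 115']
```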
```
pip install mlxtend --upgrade --no-deps
import mlxtend
print(mlxtend.__version__)
from google.colab import drive
drive.mount('/content/gdrive')
import cv2
import skimage
import keras
import tensorflow
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from mlxtend.evaluate import confusion_matrix
from mlxtend.plotting import plot_confusion_matrix
from keras.models import Sequential, Model, load_model
from keras.preprocessing.image import ImageDataGenerator
from keras.layers import Convolution2D, MaxPooling2D, Dense, Flatten, concatenate, Concatenate#, Dropout
from keras import regularizers
from keras.callbacks import ModelCheckpoint
from sklearn.metrics import classification_report
from skimage import color
# fix random seed for reproducibility
seed = 7
np.random.seed(seed)
img_width, img_height = 224, 224
batch_size = 1
epochs = 100
train_samples = 7200
validation_samples = 2400
test_samples = 2400
train_data_dir = 'path to train data'
validation_data_dir = 'path to validation data'
test_data_dir = 'path to test data'
def scale0to255(image):
converted_image = image
min_1 = np.min(converted_image[:,:,0])
max_1 = np.max(converted_image[:,:,0])
converted_image[:,:,0] = np.round(((converted_image[:,:,0] - min_1) / (max_1 - min_1)) * 255)
min_2 = np.min(converted_image[:,:,1])
max_2 = np.max(converted_image[:,:,1])
converted_image[:,:,1] = np.round(((converted_image[:,:,1] - min_2) / (max_2 - min_2)) * 255)
min_3 = np.min(converted_image[:,:,2])
max_3 = np.max(converted_image[:,:,2])
converted_image[:,:,2] = np.round(((converted_image[:,:,2] - min_3) / (max_3 - min_3)) * 255)
return converted_image
def log(image):
gaus_image = cv2.GaussianBlur(image,(3,3),0)
laplacian_image = cv2.Laplacian(np.uint8(gaus_image), cv2.CV_64F)
sharp_image = np.uint8(image + laplacian_image)
return sharp_image
def lch_colorFunction(image):
log_image = log(image)
lab_image = skimage.color.rgb2lab(log_image)
lch_image = skimage.color.lab2lch(lab_image)
scale_lch_image = scale0to255(lch_image)
return scale_lch_image
def hsv_colorFunction(image):
log_image = log(image)
hsv_image = skimage.color.rgb2hsv(log_image)
np.nan_to_num(hsv_image, copy=False, nan=0.0, posinf=None, neginf=None)
scale_hsv_image = scale0to255(hsv_image)
return scale_hsv_image
datagen_rgb = ImageDataGenerator()
datagen_lch = ImageDataGenerator(preprocessing_function = lch_colorFunction)
datagen_hsv = ImageDataGenerator(preprocessing_function = hsv_colorFunction)
def myGenerator (gen1, gen2, gen3):#
while True:
xy1 = gen1.next()
xy2 = gen2.next()
xy3 = gen3.next()
yield ([xy1[0], xy2[0], xy3[0]], xy1[1]) #
train_generator_rgb = datagen_rgb.flow_from_directory(
train_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
train_generator_lch = datagen_lch.flow_from_directory(
train_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
train_generator_hsv = datagen_hsv.flow_from_directory(
train_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
train_generator = myGenerator(train_generator_rgb, train_generator_lch, train_generator_hsv)#
validation_generator_rgb = datagen_rgb.flow_from_directory(
validation_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
validation_generator_lch = datagen_lch.flow_from_directory(
validation_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
validation_generator_hsv = datagen_hsv.flow_from_directory(
validation_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size=batch_size,
shuffle=False,
class_mode='categorical')
validation_generator = myGenerator(validation_generator_rgb, validation_generator_lch, validation_generator_hsv)#
test_generator_rgb = datagen_rgb.flow_from_directory(
test_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size= 1,
shuffle=False,
class_mode='categorical')
test_generator_lch = datagen_lch.flow_from_directory(
test_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size= 1,
shuffle=False,
class_mode='categorical')
test_generator_hsv = datagen_hsv.flow_from_directory(
test_data_dir,
color_mode="rgb",
target_size=(img_width, img_height),
batch_size= 1,
shuffle=False,
class_mode='categorical')
test_generator = myGenerator(test_generator_rgb, test_generator_lch, test_generator_hsv)#
model = load_model('path to mceffnet2_model.h5')
model.summary()
inp = model.input
out =model.layers[-1].output
model2 = Model(inp, out)
model2.summary()
keras.utils.plot_model(model2, "model.png", show_shapes=True)
train_pred = model2.predict_generator(train_generator,train_samples, verbose=1)
train_pred.shape
train_target = train_generator_rgb.classes
train_target.shape
val_pred = model2.predict_generator(validation_generator,validation_samples, verbose=1)
val_pred.shape
val_target = validation_generator_rgb.classes
val_target.shape
test_pred = model2.predict_generator(test_generator,test_samples, verbose=1)
test_pred.shape
test_target = test_generator_rgb.classes
test_target.shape
X = np.append(train_pred, val_pred, axis=0)
X = np.append(X, test_pred, axis=0)
np.save("path to save mceffnet_features.npy", X)
X.shape
y = np.append(train_target, val_target, axis=0)
y = np.append(y, test_target, axis=0)
np.save("path to save labels.npy", y)
y.shape
list_fams = ['gan', 'graphics', 'real']
list_fams
pip install tsne
import numpy as np
from numpy.random import RandomState
np.random.seed(1)
from tsne import bh_sne
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt
import os
import os.path
import glob
from keras.preprocessing import image
print("Running t-SNE ...")
vis_eff_data = bh_sne(np.float64(X), d=2, perplexity=30., theta=0.5, random_state=RandomState(1))
np.save("path to save mceffnet_tsne_features.npy", vis_eff_data)
vis_eff_data.shape
vis_eff_data = np.load("path to mceffnet_tsne_features.npy")
y = np.load("path to labels.npy")
print("Plotting t-SNE ...")
figure = plt.gcf()
figure.set_size_inches(20, 17)
plt.scatter(vis_eff_data[y.astype(int)==0, 0], vis_eff_data[y.astype(int)==0, 1], c='green', marker='o', edgecolors="black", label="GAN")
plt.scatter(vis_eff_data[y.astype(int)==1, 0], vis_eff_data[y.astype(int)==1, 1], c='white', marker='s', edgecolors="blue", label="Graphics")
plt.scatter(vis_eff_data[y.astype(int)==2, 0], vis_eff_data[y.astype(int)==2, 1], c='red', marker='D', edgecolors="pink", label="Real")
plt.clim(-0.5, len(list_fams)-0.5)
frame1 = plt.gca()
frame1.axes.xaxis.set_ticklabels([])
frame1.axes.yaxis.set_ticklabels([])
frame1.axes.get_xaxis().set_visible(False)
frame1.axes.get_yaxis().set_visible(False)
plt.legend(loc="upper right", prop={'size': 35})
#plt.savefig('TSNE_EfficientNet_features_visualization_color_size_20_17.jpg', format='jpg')
plt.show()
```
<a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D1_RealNeurons/W3D1_Tutorial2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Neuromatch Academy: Week 3, Day 1, Tutorial 2
# Real Neurons: Effects of Input Correlation
__Content creators:__ Qinglong Gu, Songtin Li, John Murray, Richard Naud, Arvind Kumar
__Content reviewers:__ Maryam Vaziri-Pashkam, Ella Batty, Lorenzo Fontolan, Richard Gao, Matthew Krause, Spiros Chavlis, Michael Waskom
---
# Tutorial Objectives
In this tutorial, we will use the leaky integrate-and-fire (LIF) neuron model (see Tutorial 1) to study how neurons transform input correlations into output correlations (the transfer of correlations). In particular, we are going to write a few lines of code to:
- inject correlated GWN in a pair of neurons
- measure correlations between the spiking activity of the two neurons
- study how the transfer of correlation depends on the statistics of the input, i.e. mean and standard deviation.
---
# Setup
```
# Import libraries
import matplotlib.pyplot as plt
import numpy as np
import time
# @title Figure Settings
import ipywidgets as widgets # interactive display
%config InlineBackend.figure_format = 'retina'
# use NMA plot style
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
my_layout = widgets.Layout()
# @title Helper functions
def default_pars(**kwargs):
pars = {}
### typical neuron parameters###
pars['V_th'] = -55. # spike threshold [mV]
pars['V_reset'] = -75. # reset potential [mV]
pars['tau_m'] = 10. # membrane time constant [ms]
pars['g_L'] = 10. # leak conductance [nS]
pars['V_init'] = -75. # initial potential [mV]
pars['V_L'] = -75. # leak reversal potential [mV]
pars['tref'] = 2. # refractory time (ms)
### simulation parameters ###
pars['T'] = 400. # Total duration of simulation [ms]
pars['dt'] = .1 # Simulation time step [ms]
### external parameters if any ###
for k in kwargs:
pars[k] = kwargs[k]
pars['range_t'] = np.arange(0, pars['T'], pars['dt']) # Vector of discretized
# time points [ms]
return pars
def run_LIF(pars, Iinj):
"""
Simulate the LIF dynamics with external input current
Args:
pars : parameter dictionary
Iinj : input current [pA]. The injected current here can be a value or an array
Returns:
v : membrane potential
rec_spikes : spike times
"""
# Set parameters
V_th, V_reset = pars['V_th'], pars['V_reset']
tau_m, g_L = pars['tau_m'], pars['g_L']
V_init, V_L = pars['V_init'], pars['V_L']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
tref = pars['tref']
# Initialize voltage and current
v = np.zeros(Lt)
v[0] = V_init
Iinj = Iinj * np.ones(Lt)
tr = 0.
# simulate the LIF dynamics
rec_spikes = [] # record spike times
for it in range(Lt - 1):
if tr > 0:
v[it] = V_reset
tr = tr - 1
elif v[it] >= V_th: # reset voltage and record spike event
rec_spikes.append(it)
v[it] = V_reset
tr = tref / dt
# calculate the increment of the membrane potential
dv = (-(v[it] - V_L) + Iinj[it] / g_L) * (dt / tau_m)
# update the membrane potential
v[it + 1] = v[it] + dv
rec_spikes = np.array(rec_spikes) * dt
return v, rec_spikes
def my_GWN(pars, sig, myseed=False):
"""
Function that calculates Gaussian white noise inputs
Args:
pars : parameter dictionary
sig : noise amplitude (standard deviation)
myseed : random seed. int or boolean
the same seed will give the same random number sequence
Returns:
I : Gaussian white noise input
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Set random seed. You can fix the seed of the random number generator so
# that the results are reliable however, when you want to generate multiple
# realization make sure that you change the seed for each new realization
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate GWN; the sqrt(tau_m / dt) factor keeps the effective noise
# strength independent of the simulation time step
I_GWN = sig * np.random.randn(Lt) * np.sqrt(pars['tau_m'] / dt)
return I_GWN
def Poisson_generator(pars, rate, n, myseed=False):
"""
Generates Poisson spike trains
Args:
pars : parameter dictionary
rate : rate of the Poisson train [Hz]
n : number of Poisson trains
myseed : random seed. int or boolean
Returns:
pre_spike_train : spike train matrix, ith row represents whether
there is a spike in ith spike train over time
(1 if spike, 0 otherwise)
"""
# Retrieve simulation parameters
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# set random seed
if myseed:
np.random.seed(seed=myseed)
else:
np.random.seed()
# generate uniformly distributed random variables
u_rand = np.random.rand(n, Lt)
# generate Poisson train
poisson_train = 1. * (u_rand < rate * (dt / 1000.))
return poisson_train
def example_plot_myCC():
pars = default_pars(T=50000, dt=.1)
c = np.arange(10) * 0.1
r12 = np.zeros(10)
for i in range(10):
I1gL, I2gL = correlate_input(pars, mu=20.0, sig=7.5, c=c[i])
r12[i] = my_CC(I1gL, I2gL)
plt.figure()
plt.plot(c, r12, 'bo', alpha=0.7, label='Simulation', zorder=2)
plt.plot([-0.05, 0.95], [-0.05, 0.95], 'k--', label='y=x',
dashes=(2, 2), zorder=1)
plt.xlabel('True CC')
plt.ylabel('Sample CC')
plt.legend(loc='best')
def LIF_output_cc(pars, mu, sig, c, bin_size, n_trials=20):
""" Simulates two LIF neurons with correlated input and computes output correlation
Args:
pars : parameter dictionary
mu : noise baseline (mean)
sig : noise amplitude (standard deviation)
c : correlation coefficient ~[0, 1]
bin_size : bin size used for time series
n_trials : total simulation trials
Returns:
r : output corr. coe.
sp_rate : spike rate
sp1 : spike times of neuron 1 in the last trial
sp2 : spike times of neuron 2 in the last trial
"""
r12 = np.zeros(n_trials)
sp_rate = np.zeros(n_trials)
for i_trial in range(n_trials):
I1gL, I2gL = correlate_input(pars, mu, sig, c)
_, sp1 = run_LIF(pars, pars['g_L'] * I1gL)
_, sp2 = run_LIF(pars, pars['g_L'] * I2gL)
my_bin = np.arange(0, pars['T'], bin_size)
sp1_count, _ = np.histogram(sp1, bins=my_bin)
sp2_count, _ = np.histogram(sp2, bins=my_bin)
r12[i_trial] = my_CC(sp1_count[::20], sp2_count[::20])
sp_rate[i_trial] = len(sp1) / pars['T'] * 1000.
return r12.mean(), sp_rate.mean(), sp1, sp2
def plot_c_r_LIF(c, r, mycolor, mylabel):
z = np.polyfit(c, r, deg=1)
c_range = np.array([c.min() - 0.05, c.max() + 0.05])
plt.plot(c, r, 'o', color=mycolor, alpha=0.7, label=mylabel, zorder=2)
plt.plot(c_range, z[0] * c_range + z[1], color=mycolor, zorder=1)
```
The helper cell above contains:
- Parameter dictionary: `default_pars( **kwargs)`
- LIF simulator: `run_LIF`
- Gaussian white noise generator: `my_GWN(pars, sig, myseed=False)`
- Poisson type spike train generator: `Poisson_generator(pars, rate, n, myseed=False)`
- Two LIF neurons with correlated inputs simulator: `LIF_output_cc(pars, mu, sig, c, bin_size, n_trials=20)`
- Some additional plotting utilities
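The `**kwargs` override used by `default_pars` follows a common pattern: fill a dict with defaults, then let caller-supplied keyword arguments win. A minimal standalone sketch (function name and values abbreviated from the real helper):

```python
def make_pars(**kwargs):
    # Defaults first; any keyword argument overrides or extends them
    pars = {'V_th': -55., 'tau_m': 10., 'T': 400., 'dt': 0.1}
    pars.update(kwargs)
    return pars

pars = make_pars(T=10000)
print(pars['T'], pars['tau_m'])  # 10000 10.0
```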
---
# Section 1: Correlations (Synchrony)
Correlation or synchrony in neuronal activity can be described for any readout of brain activity. Here, we are concerned with the spiking activity of neurons.
In the simplest way, correlation/synchrony refers to coincident spiking of neurons, i.e., when two neurons spike together, they are firing in **synchrony** or are **correlated**. Neurons can be synchronous in their instantaneous activity, i.e., they spike together with some probability. However, it is also possible that spiking of a neuron at time $t$ is correlated with the spikes of another neuron with a delay (time-delayed synchrony).
## Origin of synchronous neuronal activity:
- Common inputs, i.e., two neurons are receiving input from the same sources. The output correlation is proportional to the fraction of shared inputs.
- Pooling from the same sources. Neurons do not share the same input neurons but are receiving inputs from neurons which themselves are correlated.
- Neurons are connected to each other (uni- or bi-directionally): This will only give rise to time-delayed synchrony. Neurons could also be connected via gap-junctions.
- Neurons have similar parameters and initial conditions.
## Implications of synchrony
When neurons spike together, they can have a stronger impact on downstream neurons. Synapses in the brain are sensitive to the temporal correlations (i.e., delay) between pre- and postsynaptic activity, and this, in turn, can lead to the formation of functional neuronal networks - the basis of unsupervised learning (we will study some of these concepts in a forthcoming tutorial).
Synchrony implies a reduction in the dimensionality of the system. In addition, correlations, in many cases, can impair the decoding of neuronal activity.
```
# @title Video 1: Input & output correlations
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="nsAYFBcAkes", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
## How to study the emergence of correlations
A simple model to study the emergence of correlations is to inject common inputs to a pair of neurons and measure the output correlation as a function of the fraction of common inputs.
Here, we are going to investigate the transfer of correlations by computing the correlation coefficient of spike trains recorded from two unconnected LIF neurons, which received correlated inputs.
The input current to LIF neuron $i$ $(i=1,2)$ is:
\begin{equation}
\frac{I_i}{g_L} =\mu_i + \sigma_i (\sqrt{1-c}\xi_i + \sqrt{c}\xi_c) \quad (1)
\end{equation}
where $\mu_i$ is the temporal average of the current. The Gaussian white noise $\xi_i$ is independent for each neuron, while $\xi_c$ is common to all neurons. The variable $c$ ($0\le c\le1$) controls the fraction of common and independent inputs. $\sigma_i$ sets the standard deviation of the total input.
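Equation (1) can be sanity-checked in isolation: mixing in a shared noise term with weight $\sqrt{c}$ yields a pair of inputs whose sample correlation approaches $c$. A self-contained sketch, independent of the tutorial's helper functions (parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
c, mu, sig, L = 0.3, 20.0, 7.5, 200_000

# Independent noise for each neuron, plus one common term
xi_1, xi_2, xi_c = rng.normal(size=(3, L))
I1 = mu + sig * (np.sqrt(1 - c) * xi_1 + np.sqrt(c) * xi_c)
I2 = mu + sig * (np.sqrt(1 - c) * xi_2 + np.sqrt(c) * xi_c)

# Sample correlation is close to the target c
print(np.corrcoef(I1, I2)[0, 1])
```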
So, first, we will generate correlated inputs.
```
# @title
#@markdown Execute this cell to get a function for generating correlated GWN inputs
def correlate_input(pars, mu=20., sig=7.5, c=0.3):
"""
Args:
pars : parameter dictionary
mu : noise baseline (mean)
sig : noise amplitude (standard deviation)
c : correlation coefficient ~[0, 1]
Returns:
I1gL, I2gL : two correlated inputs with corr. coe. c
"""
# generate Gaussian white noise xi_1, xi_2, xi_c
xi_1 = my_GWN(pars, sig)
xi_2 = my_GWN(pars, sig)
xi_c = my_GWN(pars, sig)
# Generate two correlated inputs by Equation. (1)
I1gL = mu + np.sqrt(1. - c) * xi_1 + np.sqrt(c) * xi_c
I2gL = mu + np.sqrt(1. - c) * xi_2 + np.sqrt(c) * xi_c
return I1gL, I2gL
print(help(correlate_input))
```
### Exercise 1: Compute the correlation
The _sample correlation coefficient_ between two input currents $I_i$ and $I_j$ is defined as the sample covariance of $I_i$ and $I_j$ divided by the square root of the sample variance of $I_i$ multiplied with the square root of the sample variance of $I_j$. In equation form:
\begin{align}
r_{ij} &= \frac{cov(I_i, I_j)}{\sqrt{var(I_i)} \sqrt{var(I_j)}}\\
cov(I_i, I_j) &= \sum_{k=1}^L (I_i^k -\bar{I}_i)(I_j^k -\bar{I}_j) \\
var(I_i) &= \sum_{k=1}^L (I_i^k -\bar{I}_i)^2
\end{align}
where $\bar{I}_i$ is the sample mean, k is the time bin, and L is the length of $I$. This means that $I_i^k$ is current i at time $k\cdot dt$. Note that the equations above are not accurate for sample covariances and variances as they should be additionally divided by L-1 - we have dropped this term because it cancels out in the sample correlation coefficient formula.
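Since the $1/(L-1)$ factors cancel, the formula can be checked numerically against NumPy's built-in Pearson correlation. A standalone check on synthetic data, separate from the exercise below:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)

# Covariance and variances without the 1/(L-1) factors (they cancel in r)
cov = np.sum((x - x.mean()) * (y - y.mean()))
r = cov / np.sqrt(np.sum((x - x.mean())**2) * np.sum((y - y.mean())**2))

# Matches np.corrcoef to floating-point precision
print(r, np.corrcoef(x, y)[0, 1])
```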
The _sample correlation coefficient_ may also be referred to as the _sample Pearson correlation coefficient_. Here, is a beautiful paper that explains multiple ways to calculate and understand correlations [Rodgers and Nicewander 1988](https://www.stat.berkeley.edu/~rabbee/correlation.pdf).
In this exercise, we will create a function, `my_CC` to compute the sample correlation coefficient between two time series. Note that while we introduced this computation here in the context of input currents, the sample correlation coefficient is used to compute the correlation between any two time series - we will use it later on binned spike trains.
```
def my_CC(i, j):
"""
Args:
i, j : two time series with the same length
Returns:
rij : correlation coefficient
"""
########################################################################
## TODO for students: compute rxy, then remove the NotImplementedError #
# Tip1: array([a1, a2, a3])*array([b1, b2, b3]) = array([a1*b1, a2*b2, a3*b3])
# Tip2: np.sum(array([a1, a2, a3])) = a1+a2+a3
# Tip3: square root, np.sqrt()
# Fill out function and remove
raise NotImplementedError("Student exercise: compute the sample correlation coefficient")
########################################################################
# Calculate the covariance of i and j
cov = ...
# Calculate the variance of i
var_i = ...
# Calculate the variance of j
var_j = ...
# Calculate the correlation coefficient
rij = ...
return rij
# Uncomment the line after completing the my_CC function
# example_plot_myCC()
# to_remove solution
def my_CC(i, j):
"""
Args:
i, j : two time series with the same length
Returns:
rij : correlation coefficient
"""
# Calculate the covariance of i and j
cov = ((i - i.mean()) * (j - j.mean())).sum()
# Calculate the variance of i
var_i = ((i - i.mean()) * (i - i.mean())).sum()
# Calculate the variance of j
var_j = ((j - j.mean()) * (j - j.mean())).sum()
# Calculate the correlation coefficient
rij = cov / np.sqrt(var_i*var_j)
return rij
with plt.xkcd():
example_plot_myCC()
```
### Exercise 2: Measure the correlation between spike trains
After recording the spike times of the two neurons, how can we estimate their correlation coefficient?
In order to find this, we need to bin the spike times and obtain two time series. Each data point in the time series is the number of spikes in the corresponding time bin. You can use `np.histogram()` to bin the spike times.
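As a quick illustration of the binning step (toy spike times, not tutorial data):

```python
import numpy as np

# Toy spike times in ms within a 100 ms window
spike_times = np.array([3.0, 12.5, 14.1, 47.9, 80.2, 81.0, 99.5])

# 20 ms bins; each entry of `counts` is the number of spikes in that bin
bins = np.arange(0, 120, 20)  # edges: 0, 20, ..., 100
counts, _ = np.histogram(spike_times, bins=bins)
print(counts)  # [3 0 1 0 3]
```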
Complete the code below to bin the spike times and calculate the correlation coefficient for two Poisson spike trains. Note that `c` here is the ground-truth correlation coefficient that we define.
```
# @title
# @markdown Execute this cell to get a function for generating correlated Poisson inputs (generate_corr_Poisson)
def generate_corr_Poisson(pars, poi_rate, c, myseed=False):
"""
function to generate correlated Poisson type spike trains
Args:
pars : parameter dictionary
poi_rate : rate of the Poisson train
c : correlation coefficient ~[0, 1]
Returns:
sp1, sp2 : two correlated spike time trains with corr. coe. c
"""
range_t = pars['range_t']
mother_rate = poi_rate / c
mother_spike_train = Poisson_generator(pars, rate=mother_rate,
n=1, myseed=myseed)[0]
sp_mother = range_t[mother_spike_train > 0]
L_sp_mother = len(sp_mother)
sp_mother_id = np.arange(L_sp_mother)
L_sp_corr = int(L_sp_mother * c)
np.random.shuffle(sp_mother_id)
sp1 = np.sort(sp_mother[sp_mother_id[:L_sp_corr]])
np.random.shuffle(sp_mother_id)
sp2 = np.sort(sp_mother[sp_mother_id[:L_sp_corr]])
return sp1, sp2
print(help(generate_corr_Poisson))
def corr_coeff_pairs(pars, rate, c, trials, bins):
"""
Calculate the correlation coefficient of two spike trains, for different
realizations
Args:
pars : parameter dictionary
rate : rate of poisson inputs
c : correlation coefficient ~ [0, 1]
trials : number of realizations
bins : vector with bins for time discretization
Returns:
r12 : correlation coefficient of a pair of inputs
"""
r12 = np.zeros(n_trials)
for i in range(n_trials):
##############################################################
## TODO for students: Use np.histogram to bin the spike time #
## e.g., sp1_count, _= np.histogram(...)
# Use my_CC() compute corr coe, compare with c
# Note that you can run multiple realizations and compute their r_12(diff_trials)
# with the defined function above. The average r_12 over trials can get close to c.
# Note: change seed to generate different input per trial
# Fill out function and remove
raise NotImplementedError("Student exercise: compute the correlation coefficient")
##############################################################
# Generate correlated Poisson inputs
sp1, sp2 = generate_corr_Poisson(pars, ..., ..., myseed=2020+i)
# Bin the spike times of the first input
sp1_count, _ = np.histogram(..., bins=...)
# Bin the spike times of the second input
sp2_count, _ = np.histogram(..., bins=...)
# Calculate the correlation coefficient
r12[i] = my_CC(..., ...)
return r12
poi_rate = 20.
c = 0.2 # set true correlation
pars = default_pars(T=10000)
# bin the spike time
bin_size = 20 # [ms]
my_bin = np.arange(0, pars['T'], bin_size)
n_trials = 100 # 100 realizations
# Uncomment to test your function
# r12 = corr_coeff_pairs(pars, rate=poi_rate, c=c, trials=n_trials, bins=my_bin)
# print(f'True corr coe = {c:.3f}')
# print(f'Simu corr coe = {r12.mean():.3f}')
```
Sample output
```
True corr coe = 0.200
Simu corr coe = 0.197
```
```
# to_remove solution
def corr_coeff_pairs(pars, rate, c, trials, bins):
"""
Calculate the correlation coefficient of two spike trains, for different
realizations
Args:
pars : parameter dictionary
rate : rate of poisson inputs
c : correlation coefficient ~ [0, 1]
trials : number of realizations
bins : vector with bins for time discretization
Returns:
r12 : correlation coefficient of a pair of inputs
"""
r12 = np.zeros(n_trials)
for i in range(n_trials):
# Generate correlated Poisson inputs
sp1, sp2 = generate_corr_Poisson(pars, poi_rate, c, myseed=2020+i)
# Bin the spike times of the first input
sp1_count, _ = np.histogram(sp1, bins=bins)
# Bin the spike times of the second input
sp2_count, _ = np.histogram(sp2, bins=bins)
# Calculate the correlation coefficient
r12[i] = my_CC(sp1_count, sp2_count)
return r12
poi_rate = 20.
c = 0.2 # set true correlation
pars = default_pars(T=10000)
# bin the spike time
bin_size = 20 # [ms]
my_bin = np.arange(0, pars['T'], bin_size)
n_trials = 100 # 100 realizations
r12 = corr_coeff_pairs(pars, rate=poi_rate, c=c, trials=n_trials, bins=my_bin)
print(f'True corr coe = {c:.3f}')
print(f'Simu corr coe = {r12.mean():.3f}')
```
---
# Section 2: Investigate the effect of input correlation on the output correlation
Now let's combine the two procedures above. We first generate correlated inputs $I_1, I_2$ using Equation (1), inject them into a pair of neurons, and record their output spike times. We then measure the correlation between the output spike trains and investigate how it depends on the input correlation.
## Drive a neuron with correlated inputs and visualize its output
In the following, you will inject correlated GWN in two neurons. You need to define the mean (`gwn_mean`), standard deviation (`gwn_std`), and input correlations (`c_in`).
We will simulate $10$ trials to get a better estimate of the output correlation. Change the values in the following cell for the above variables (and then run the next cell) to explore how they impact the output correlation.
```
# Play around with these parameters
pars = default_pars(T=80000, dt=1.) # get the parameters
c_in = 0.3 # set input correlation value
gwn_mean = 10.
gwn_std = 10.
# @title
# @markdown Do not forget to execute this cell to simulate the LIF
bin_size = 10. # ms
starttime = time.perf_counter() # time clock
r12_ss, sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=gwn_mean, sig=gwn_std, c=c_in,
bin_size=bin_size, n_trials=10)
# just the time counter
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
print(f"Input correlation = {c_in}")
print(f"Output correlation = {r12_ss}")
plt.figure(figsize=(12, 6))
plt.plot(sp1, np.ones(len(sp1)) * 1, '|', ms=20, label='neuron 1')
plt.plot(sp2, np.ones(len(sp2)) * 1.1, '|', ms=20, label='neuron 2')
plt.xlabel('time (ms)')
plt.ylabel('neuron id.')
plt.xlim(1000, 8000)
plt.ylim(0.9, 1.2)
plt.legend()
plt.show()
```
## Think!
- Is the output correlation always smaller than the input correlation? If yes, why?
- Should there be a systematic relationship between input and output correlations?
You will explore these questions in the next figure but try to develop your own intuitions first!
Let's vary `c_in` and plot the relationship between `c_in` and the output correlation. This might take some time depending on the number of trials.
```
#@title
#@markdown Don't forget to execute this cell!
pars = default_pars(T=80000, dt=1.) # get the parameters
bin_size = 10.
c_in = np.arange(0, 1.0, 0.1) # set the range for input CC
r12_ss = np.zeros(len(c_in)) # small mu, small sigma
starttime = time.perf_counter() # time clock
for ic in range(len(c_in)):
r12_ss[ic], sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=10)
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
plt.figure(figsize=(7, 6))
plot_c_r_LIF(c_in, r12_ss, mycolor='b', mylabel='Output CC')
plt.plot([c_in.min() - 0.05, c_in.max() + 0.05],
[c_in.min() - 0.05, c_in.max() + 0.05],
'k--', dashes=(2, 2), label='y=x')
plt.xlabel('Input CC')
plt.ylabel('Output CC')
plt.legend(loc='best', fontsize=16)
plt.show()
# to_remove explanation
"""
Discussion: The results above show that
- output correlation is smaller than input correlation
- output correlation varies linearly as a function of input correlation.
While the general result holds, this relationship might change depending on the neuron type.
""";
```
---
# Section 3: Correlation transfer function
The above plot of input correlation vs. output correlation is called the __correlation transfer function__ of the neurons.
## Section 3.1: How do the mean and standard deviation of the GWN affect the correlation transfer function?
The correlation transfer function appears to be linear. The above can be taken as the input/output transfer function of LIF neurons for correlations, instead of the transfer function for input/output firing rates discussed in the previous tutorial (i.e., the F-I curve).
What would you expect to happen to the slope of the correlation transfer function if you vary the mean and/or the standard deviation of the GWN?
```
#@markdown Execute this cell to visualize correlation transfer functions
pars = default_pars(T=80000, dt=1.) # get the parameters
no_trial = 10
bin_size = 10.
c_in = np.arange(0., 1., 0.2) # set the range for input CC
r12_ss = np.zeros(len(c_in)) # small mu, small sigma
r12_ls = np.zeros(len(c_in)) # large mu, small sigma
r12_sl = np.zeros(len(c_in)) # small mu, large sigma
starttime = time.perf_counter() # time clock
for ic in range(len(c_in)):
r12_ss[ic], sp_ss, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
r12_ls[ic], sp_ls, sp1, sp2 = LIF_output_cc(pars, mu=18.0, sig=10.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
r12_sl[ic], sp_sl, sp1, sp2 = LIF_output_cc(pars, mu=10.0, sig=20.,
c=c_in[ic], bin_size=bin_size,
n_trials=no_trial)
endtime = time.perf_counter()
timecost = (endtime - starttime) / 60.
print(f"Simulation time = {timecost:.2f} min")
plt.figure(figsize=(7, 6))
plot_c_r_LIF(c_in, r12_ss, mycolor='b', mylabel=r'Small $\mu$, small $\sigma$')
plot_c_r_LIF(c_in, r12_ls, mycolor='y', mylabel=r'Large $\mu$, small $\sigma$')
plot_c_r_LIF(c_in, r12_sl, mycolor='r', mylabel=r'Small $\mu$, large $\sigma$')
plt.plot([c_in.min() - 0.05, c_in.max() + 0.05],
[c_in.min() - 0.05, c_in.max() + 0.05],
'k--', dashes=(2, 2), label='y=x')
plt.xlabel('Input CC')
plt.ylabel('Output CC')
plt.legend(loc='best', fontsize=14)
plt.show()
```
### Think!
Why do both the mean and the standard deviation of the GWN affect the slope of the correlation transfer function?
```
# to_remove explanation
"""
Discussion: This has to do with which part of the input current distribution
is transferred to the spiking activity.
An intuitive understanding is difficult, but this relationship arises from non-linearities
in the neuron's F-I curve. When the F-I curve is linear, the output correlation is independent
of the mean and standard deviation. However, this dependence arises even in neurons with a
threshold-linear F-I curve.
Please see:
De La Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between
neural spike trains increases with firing rate. Nature. 2007 Aug;448(7155):802-6.
""";
```
## Section 3.2: What is the rationale behind varying $\mu$ and $\sigma$?
The mean and the variance of the synaptic current depend on the spike rate of a Poisson process. We can use [Campbell's theorem](https://en.wikipedia.org/wiki/Campbell%27s_theorem_(probability)) to estimate the mean and the variance of the synaptic current:
\begin{align}
\mu_{\rm syn} &= \lambda J \int P(t) \, dt \\
\sigma_{\rm syn}^2 &= \lambda J^2 \int P(t)^2 \, dt
\end{align}
where $\lambda$ is the firing rate of the Poisson input, $J$ the amplitude of the postsynaptic current and $P(t)$ the shape of the postsynaptic current as a function of time.
Therefore, when we varied $\mu$ and/or $\sigma$ of the GWN, we mimicked a change in the input firing rate. Note that, if we change the firing rate, both $\mu$ and $\sigma$ will change simultaneously, not independently.
Here, since we observe an effect of $\mu$ and $\sigma$ on correlation transfer, this implies that the input rate has an impact on the correlation transfer function.
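As a quick numerical sanity check of Campbell's theorem, we can integrate a simple exponential PSC shape and compare against the closed-form values; all parameter values below are illustrative, not taken from the tutorial:

```python
import numpy as np

# Hypothetical parameters: Poisson rate [events/ms], PSC amplitude, PSC decay time [ms]
lam = 5.0
J = 0.2
tau = 2.0

# Exponential PSC shape P(t) = exp(-t/tau), integrated numerically
dt = 0.001
t = np.arange(0.0, 50.0, dt)
P = np.exp(-t / tau)

# Campbell's theorem: mu = lam*J*int(P) dt,  sigma^2 = lam*J^2*int(P^2) dt
mu_syn = lam * J * np.sum(P) * dt
var_syn = lam * J**2 * np.sum(P**2) * dt

# Closed forms for an exponential kernel: int(P) = tau, int(P^2) = tau/2
print(mu_syn, lam * J * tau)              # both close to 2.0
print(var_syn, lam * J**2 * tau / 2)      # both close to 0.2
```

Note how increasing $\lambda$ raises the mean and the variance together, which is why $\mu$ and $\sigma$ of the GWN cannot be varied independently when they mimic an input rate change.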
### Think!
- What are the factors that would make output correlations smaller than input correlations? (Notice that the colored lines are below the black dashed line)
- What does it mean for the correlation in the network?
- Here we have studied the transfer of correlations by injecting GWN. But in the previous tutorial, we mentioned that GWN is unphysiological. Real neurons receive colored noise (e.g., shot noise or an OU process). How do these results obtained from injecting GWN apply to the case where correlated spiking inputs are injected into the two LIFs? Will the results be the same or different?
References
- De La Rocha J, Doiron B, Shea-Brown E, Josić K, Reyes A. Correlation between neural spike trains increases with firing rate. Nature. 2007 Aug;448(7155):802-6. (https://www.nature.com/articles/nature06028/)
- Bujan AF, Aertsen A, Kumar A. Role of input correlations in shaping the variability and noise correlations of evoked activity in the neocortex. Journal of Neuroscience. 2015 Jun 3;35(22):8611-25. (https://www.jneurosci.org/content/35/22/8611)
```
# to_remove explanation
"""
Discussion:
1. Anything that reduces the mean or variance of the input: e.g., the mean can
be reduced by inhibition, and sigma can be reduced by the membrane time constant.
Obviously, if the two neurons have different parameters, that will decorrelate them.
But more importantly, it is the slope of the neuron's transfer function that affects the
output correlation.
2. These observations pose an interesting problem at the network level. If the
output correlations are smaller than the input correlations, then the network activity
should eventually converge to zero correlation. But that does not happen. So there
is something missing in this model to understand the origin of synchrony in the network.
3. For spike trains we do not have explicit control over mu and sigma;
these two variables are tied to the firing rate of the inputs. So the
results will be qualitatively similar. But when we consider multiple spike inputs,
two different types of correlations arise (see Bujan et al. 2015 for more info)
""";
```
---
# Summary
In this tutorial, we studied how the input correlation of two LIF neurons is mapped to their output correlation. Specifically, we:
- injected correlated GWN in a pair of neurons,
- measured correlations between the spiking activity of the two neurons, and
- studied how the transfer of correlation depends on the statistics of the input, i.e., mean and standard deviation.
Here, we were concerned with zero-time-lag correlation, so we restricted the estimation to instantaneous correlations. If you are interested in time-lagged correlations, you should estimate the cross-correlogram of the spike trains, find the dominant peak, and take the area under the peak as an estimate of output correlation.
We leave this as a future to-do for you, if you are interested.
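As a starting point for that to-do, here is a minimal sketch of a cross-correlogram computed from two binned spike trains; the trains below are synthetic (driven by a shared random input), not the tutorial's simulations:

```python
import numpy as np

# Two hypothetical binned spike trains; in practice these would come
# from np.histogram of the recorded spike times
rng = np.random.default_rng(0)
common = rng.random(1000) < 0.1                       # shared drive creates correlation
spk1 = (common | (rng.random(1000) < 0.05)).astype(float)
spk2 = (common | (rng.random(1000) < 0.05)).astype(float)

# Mean-subtract, then compute the cross-correlogram over +/- 20 bins
x = spk1 - spk1.mean()
y = spk2 - spk2.mean()
max_lag = 20
lags = np.arange(-max_lag, max_lag + 1)
ccg = np.array([np.sum(x[max(0, -l):len(x) - max(0, l)] *
                       y[max(0, l):len(y) - max(0, -l)]) for l in lags])
ccg = ccg / (len(x) * x.std() * y.std())              # normalize to correlation units

# For instantaneously correlated inputs the peak sits at zero lag
print(lags[np.argmax(ccg)])
```

For time-lagged correlations, the dominant peak would shift away from zero, and the area under it would estimate the output correlation.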
---
# Bonus 1: Example of a conductance-based LIF model
Above, we have written code to generate correlated Poisson spike trains. You can write code to stimulate the LIF neuron with such correlated spike trains and study the correlation transfer function for spiking input and compare it to the correlation transfer function obtained by injecting correlated GWNs.
```
# @title Function to simulate conductance-based LIF
def run_LIF_cond(pars, I_inj, pre_spike_train_ex, pre_spike_train_in):
"""
conductance-based LIF dynamics
Args:
pars : parameter dictionary
I_inj : injected current [pA]. The injected current here can
be a value or an array
pre_spike_train_ex : spike train input from presynaptic excitatory neuron
pre_spike_train_in : spike train input from presynaptic inhibitory neuron
Returns:
rec_spikes : spike times
rec_v : membrane potential
gE : postsynaptic excitatory conductance
gI : postsynaptic inhibitory conductance
"""
# Retrieve parameters
V_th, V_reset = pars['V_th'], pars['V_reset']
tau_m, g_L = pars['tau_m'], pars['g_L']
V_init, E_L = pars['V_init'], pars['E_L']
gE_bar, gI_bar = pars['gE_bar'], pars['gI_bar']
VE, VI = pars['VE'], pars['VI']
tau_syn_E, tau_syn_I = pars['tau_syn_E'], pars['tau_syn_I']
tref = pars['tref']
dt, range_t = pars['dt'], pars['range_t']
Lt = range_t.size
# Initialize
tr = 0.
v = np.zeros(Lt)
v[0] = V_init
gE = np.zeros(Lt)
gI = np.zeros(Lt)
Iinj = I_inj * np.ones(Lt) # ensure I has length Lt
if pre_spike_train_ex.max() == 0:
pre_spike_train_ex_total = np.zeros(Lt)
else:
pre_spike_train_ex_total = pre_spike_train_ex * np.ones(Lt)
if pre_spike_train_in.max() == 0:
pre_spike_train_in_total = np.zeros(Lt)
else:
pre_spike_train_in_total = pre_spike_train_in * np.ones(Lt)
# simulation
rec_spikes = [] # recording spike times
for it in range(Lt - 1):
if tr > 0:
v[it] = V_reset
tr = tr - 1
elif v[it] >= V_th: # reset voltage and record spike event
rec_spikes.append(it)
v[it] = V_reset
tr = tref / dt
# update the synaptic conductance
gE[it+1] = gE[it] - (dt / tau_syn_E) * gE[it] + gE_bar * pre_spike_train_ex_total[it + 1]
gI[it+1] = gI[it] - (dt / tau_syn_I) * gI[it] + gI_bar * pre_spike_train_in_total[it + 1]
# calculate the increment of the membrane potential
dv = (-(v[it] - E_L) - (gE[it + 1] / g_L) * (v[it] - VE) - \
(gI[it + 1] / g_L) * (v[it] - VI) + Iinj[it] / g_L) * (dt / tau_m)
# update membrane potential
v[it + 1] = v[it] + dv
rec_spikes = np.array(rec_spikes) * dt
return v, rec_spikes, gE, gI
help(run_LIF_cond)
```
## Interactive Demo: Correlated spike input to an LIF neuron
In the following you can explore what happens when the neurons receive correlated spiking input.
You can vary the correlation between excitatory input spike trains. For simplicity, the correlation between inhibitory spike trains is set to 0.01.
Vary both excitatory rate and correlation and see how the output correlation changes. Check if the results are qualitatively similar to what you observed previously when you varied the $\mu$ and $\sigma$.
```
# @title
# @markdown Make sure you execute this cell to enable the widget!
my_layout.width = '450px'
@widgets.interact(
pwc_ee=widgets.FloatSlider(0.3, min=0.05, max=0.99, step=0.01,
layout=my_layout),
exc_rate=widgets.FloatSlider(1e3, min=500., max=5e3, step=50.,
layout=my_layout),
inh_rate=widgets.FloatSlider(500., min=300., max=5e3, step=5.,
layout=my_layout),
)
def EI_isi_regularity(pwc_ee, exc_rate, inh_rate):
pars = default_pars(T=1000.)
# Add parameters
pars['V_th'] = -55. # spike threshold [mV]
pars['V_reset'] = -75. # reset potential [mV]
pars['tau_m'] = 10. # membrane time constant [ms]
pars['g_L'] = 10. # leak conductance [nS]
pars['V_init'] = -65. # initial potential [mV]
pars['E_L'] = -75. # leak reversal potential [mV]
pars['tref'] = 2. # refractory time (ms)
pars['gE_bar'] = 4.0 # [nS]
pars['VE'] = 0. # [mV] excitatory reversal potential
pars['tau_syn_E'] = 2. # [ms]
pars['gI_bar'] = 2.4 # [nS]
pars['VI'] = -80. # [mV] inhibitory reversal potential
pars['tau_syn_I'] = 5. # [ms]
my_bin = np.arange(0, pars['T']+pars['dt'], .1)  # 0.1 [ms] bin-size
# exc_rate = 1e3
# inh_rate = 0.4e3
# pwc_ee = 0.3
pwc_ii = 0.01
# generate two correlated spike trains for excitatory input
sp1e, sp2e = generate_corr_Poisson(pars, exc_rate, pwc_ee)
sp1_spike_train_ex, _ = np.histogram(sp1e, bins=my_bin)
sp2_spike_train_ex, _ = np.histogram(sp2e, bins=my_bin)
# generate two uncorrelated spike trains for inhibitory input
sp1i, sp2i = generate_corr_Poisson(pars, inh_rate, pwc_ii)
sp1_spike_train_in, _ = np.histogram(sp1i, bins=my_bin)
sp2_spike_train_in, _ = np.histogram(sp2i, bins=my_bin)
v1, rec_spikes1, gE, gI = run_LIF_cond(pars, 0, sp1_spike_train_ex, sp1_spike_train_in)
v2, rec_spikes2, gE, gI = run_LIF_cond(pars, 0, sp2_spike_train_ex, sp2_spike_train_in)
# bin the spike time
bin_size = 20 # [ms]
my_bin = np.arange(0, pars['T'], bin_size)
spk_1, _ = np.histogram(rec_spikes1, bins=my_bin)
spk_2, _ = np.histogram(rec_spikes2, bins=my_bin)
r12 = my_CC(spk_1, spk_2)
print(f"Input correlation = {pwc_ee}")
print(f"Output correlation = {r12}")
plt.figure(figsize=(14, 7))
plt.subplot(211)
plt.plot(sp1e, np.ones(len(sp1e)) * 1, '|', ms=20,
label='Exc. input 1')
plt.plot(sp2e, np.ones(len(sp2e)) * 1.1, '|', ms=20,
label='Exc. input 2')
plt.plot(sp1i, np.ones(len(sp1i)) * 1.3, '|k', ms=20,
label='Inh. input 1')
plt.plot(sp2i, np.ones(len(sp2i)) * 1.4, '|k', ms=20,
label='Inh. input 2')
plt.ylim(0.9, 1.5)
plt.legend()
plt.ylabel('neuron id.')
plt.subplot(212)
plt.plot(pars['range_t'], v1, label='neuron 1')
plt.plot(pars['range_t'], v2, label='neuron 2')
plt.xlabel('time (ms)')
plt.ylabel('membrane voltage $V_{m}$')
plt.tight_layout()
plt.show()
```
Above, we are estimating the output correlation for one trial. You can modify the code to get a trial average of output correlations.
---
# Bonus 2: Ensemble Response
Finally, there is a short BONUS lecture video on the firing response of an ensemble of neurons to time-varying input. There are no associated coding exercises - just enjoy.
```
#@title Video 2 (Bonus): Response of ensemble of neurons to time-varying input
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="78_dWa4VOIo", width=854, height=480, fs=1)
print("Video available at https://youtube.com/watch?v=" + video.id)
video
```
# Fundamental types in Python
# Integers
Integer literals are created by any number without a decimal or complex component.
```
x = 1
print(x)
y=5
print(y)
z="Test"
print(z)
```
# Let's check whether a variable is an integer
```
isinstance(x, int)
```
# Floats
Float literals can be created by adding a decimal component to a number.
```
# No concept of declaring variable types in Python
x = 1.0
y = 5.7
print(y)
y=3
print(y)
y=5.6
print(x)
print(y)
```
# Boolean
Boolean can be defined by typing True/False without quotes
```
# Case Sensitive. True is different from TRUE. Dynamic Typing
b1 = True
print(b1)
b2 = False
b1 = 6
print(b1)
```
# Strings
String literals can be defined with single quotes ('), double quotes (") or triple quotes (''' or """). All produce the same string type, with two important differences.
If you quote with single quotes, you do not have to escape double quotes and vice-versa. If you quote with triple quotes, your string can span multiple lines.
```
a="Test"
b=5
print(type(a))
print(type(b))
# string
name1 = 'your name'
print(name1)
name2 = "He's coming to the party"
print(name2)
name3 = '''XNews quotes : "He's coming to the party"'''
print(name3)
```
# Summary Statistics
```
import pandas as pd
import numpy as np
import csv
data = pd.read_csv("wine.csv", encoding="latin-1")
#byte_string = chr(195) + chr(167) + chr(97)
#unicode_string = byte_string.decode('latin-1')
#print(unicode_string) # prints: ça
```
# Let's have a brief look at the first five rows of the data
```
data.head()
df = pd.DataFrame(data)
print (df)
print (df.describe())
df.describe()
data.columns
```
# Let's find out the mean of the wine score
```
data['points'].mean() #Mean of the dataframe:
```
# Let's find out the column mean of the dataframe
```
df1 = data[['points','price']]
df1.mean(axis=0)
```
# Row Mean of the dataframe:
```
df1.mean(axis=1).head()
```
# Let's calculate the median of a specific column
```
data['points'].median()
```
# Let's calculate the mode of a specific column
```
data['points'].mode()
```
# Let's calculate the standard deviation of a data frame
```
df.std()
```
# Let's calculate the standard deviation of the data frame column wise
```
df.std(axis=0)
```
# WAP - In class exe: Calculate the standard deviation of the data frame row wise?
# Let's calculate the standard deviation of a specific column "points"
```
df.loc[:,"points"].std()
```
# WAP - In class exe: Calculate the standard deviation of a specific column "price"?
```
df.loc[:,"price"].std()
df.var()
```
# WAP - In class exe: Calculate the column and row variance of the data frame?
# Let's calculate the variance of a specific column "points"
```
df.loc[:,"points"].var()
```
# WAP - In class exe: Calculate the variance of a specific column "price"?
# A complete set of dispersion measures for wine price
```
# dispersion measures
print('Min price : {0}'.format(df.price.min())) # minimum
print('Max price : {0}'.format(df.price.max())) # maximum
print('price range : {0}'.format(df.price.max() - df.price.min())) # range
print('25 percentile : {0}'.format(df.price.quantile(.25))) # 25 percentile
print('50 percentile : {0}'.format(df.price.quantile(.5))) # 50 percentile
print('75 percentile : {0}'.format(df.price.quantile(.75))) # 75 percentile
print('Variance price : {0}'.format(df.price.var())) # variance
print('Standard deviation price : {0}'.format(df.price.std())) # standard deviation
```
# Visualization: Seaborn and matplotlib
```
import seaborn as sns
import matplotlib.pyplot as plt
```
# Let's display a Seaborn distplot
```
sns.distplot(df['points'])
plt.show()
```
# Let's display a Seaborn distplot with dark background
```
sns.set_style('dark')
sns.distplot(df['points'])
plt.show()
# Clear the figure
plt.clf()
```
# WAP - In class exe: Display a distplot for "price"?
# Let's display a distplot of "price" in 20 different bins
```
# Create a distplot
sns.distplot(df['price'],
kde=False,
bins=20)
# Display a plot
plt.show()
```
# Let's plot a histogram for points
```
df['points'].plot.hist()
plt.show()
plt.clf()
```
# WAP - In class exe: Plot a histogram for "price"?
# Let's plot the same histogram with the default seaborn style
```
# Set the default seaborn style
sns.set()
# Plot the pandas histogram again
df['points'].plot.hist()
plt.show()
plt.clf()
```
# Let's display the above histogram with whitegrid using seaborn
```
# Plot with a whitegrid style
sns.set()
sns.set_style('whitegrid')
# Plot the pandas histogram again
df['points'].plot.hist()
plt.show()
plt.clf() #clears the graph
```
# Let's create a box plot for points and price
```
#Create a boxplot
sns.boxplot(data=df,
x='points',
y='price')
plt.show()
plt.clf()
```
# Let's create a bar plot for points and price
```
sns.barplot(data=df,
x='points',
y='price')
plt.show()
plt.clf()
```
# Let's create a scatter plot of points against price
```
sns.regplot(data=df,
y='points',
x="price",
fit_reg=False)
plt.show()
plt.clf()
```
# Let's check the skewness of the data
```
df.skew()  # skewness > 0 means the right tail is longer (mass concentrated on the left)
print('skewness for points : {0:.2f}'.format(df.points.skew()))
print('skewness for price : {0:.2f}'.format(df.price.skew()))
```
# Outlier detection and treatment
```
from sklearn.datasets import load_boston  # note: load_boston was removed in scikit-learn >= 1.2
boston = load_boston()
x = boston.data
y = boston.target
columns = boston.feature_names
#create the dataframe
boston_df = pd.DataFrame(boston.data)
boston_df.columns = columns
boston_df.head()
```
# Let's detect the outliers using visualization tools
# 1. Boxplot
```
import seaborn as sns
import matplotlib.pyplot as plt
sns.boxplot(x=boston_df['DIS'])
```
# 2. Scatter plot
```
fig, ax = plt.subplots(figsize=(16,8))
ax.scatter(boston_df['INDUS'], boston_df['TAX'])
ax.set_xlabel('Proportion of non-retail business acres per town')
ax.set_ylabel('Full-value property-tax rate per $10,000')
plt.show()
```
# Let's detect the outliers using mathematical methods
# 1. Z-score
```
from scipy import stats
import numpy as np
z = np.abs(stats.zscore(boston_df))
print(z)
```
# Let's define a threshold for the above z-score to identify outliers.
```
threshold = 3
print(np.where(z > threshold))
```
```
print(z[55][1])
```
# 2. IQR score
```
Q1 = boston_df.quantile(0.25)
Q3 = boston_df.quantile(0.75)
IQR = Q3 - Q1
print(IQR)
print((boston_df < (Q1 - 1.5 * IQR)) | (boston_df > (Q3 + 1.5 * IQR)))
```
# Working with Outliers: Correcting, Removing
# 1. Z-score
```
boston_df_1 = boston_df[(z < 3).all(axis=1)]
print(boston_df_1)
```
# 2. IQR Score
```
boston_df_out = boston_df[~((boston_df < (Q1 - 1.5 * IQR)) |(boston_df > (Q3 + 1.5 * IQR))).any(axis=1)]
boston_df_out.shape
```
# Missing value treatment and detection
# Let's find the total number of NaN values in the data
```
data.isnull().sum()
```
# Let's drop the null or missing values
```
df_clean = df.dropna()  # dropna returns a new DataFrame; df itself is unchanged
df_clean.info()
```
# Let's fill the missing values with the mean value
```
mean_value=df['price'].mean()
df['price']=df['price'].fillna(mean_value) #fill null values
```
# Let's fill the missing values with the median value
```
median_value=df['price'].median()
df['price']=df['price'].fillna(median_value)
```
# Let's fill the missing values using backward fill.
```
df.fillna(method='bfill')
```
# Let's fill the missing values using forward fill.
```
df.fillna(method='ffill')
```
```
%load_ext autoreload
%autoreload 2
import molsysmt as msm
```
# Convert
The meaning of a molecular system 'form', in the context of MolSysMT, was described previously in section XXX. MolSysMT provides a method to convert one form into another: `molsysmt.convert()`. This method is the keystone of the library, the hinge on which all other methods and tools in MolSysMT turn, and the joining piece connecting the pipes of your workflow when you combine different python libraries.
The method `molsysmt.convert()` requires at least two input arguments: the original pre-existing item, in any form accepted by MolSysMT (see XXX), and the name of the output form:
```
molecular_system = msm.convert('pdb_id:1TCD', 'molsysmt.MolSys')
```
The id code `1TCD` from the Protein Data Bank is converted into a native `molsysmt.MolSys` python object. At this point you may think that this operation can also be done with the method `molsysmt.load()`. And you are right: `molsysmt.load()` is nothing but an alias of `molsysmt.convert()`. Although redundant, the loading method was included in MolSysMT for the sake of intuitive usability, but it could be removed from the library since `molsysmt.convert()` offers the same functionality.
The following cells illustrate some conversions you can do with `molsysmt.convert()`:
```
msm.get_form('1sux.pdb')
msm.convert('pdb_id:1SUX', '1sux.pdb') # fetching a pdb file to save it locally
msm.convert('pdb_id:1SUX', '1sux.mmtf') # fetching an mmtf to save it locally
pdb_file = msm.demo['TcTIM']['1tcd.pdb']
molecular_system = msm.convert(pdb_file, 'mdtraj.Trajectory') # loading a pdb file as an mdtraj.Trajectory object
seq_aa1 = msm.convert(molecular_system, 'string:aminoacids1') # converting an mdtraj.Trajectory into a sequence form
```
## How to convert just a selection
The conversion can be done over the entire system or over a part of it. The input argument `selection` works with most MolSysMT methods, including `molsysmt.convert()`. To learn more about how to perform selections, see the section of this documentation entitled "XXX". For now, let's look at some simple selections to see how it operates:
```
pdb_file = msm.demo['TcTIM']['1tcd.pdb']
whole_molecular_system = msm.convert(pdb_file, to_form='openmm.Topology')
msm.info(whole_molecular_system)
aa = msm.convert(pdb_file, to_form='string:pdb_text')
msm.get_form(aa)
molecular_system = msm.convert(pdb_file, to_form='openmm.Topology',
selection='molecule_type=="protein"')
msm.info(molecular_system)
```
## How to combine multiple forms into one
Sometimes the molecular system comes from the combination of more than one form. For example, we can have two files, with topology and coordinates, to be converted into a single molecular form:
```
prmtop_file = msm.demo['pentalanine']['pentalanine.prmtop']
inpcrd_file = msm.demo['pentalanine']['pentalanine.inpcrd']
molecular_system = msm.convert([prmtop_file, inpcrd_file], to_form='molsysmt.MolSys')
msm.info(molecular_system)
```
## How to convert a form into multiple ones at once
The previous section illustrated how to convert multiple forms into one. Let's now see how to produce more than one output form in a single line:
```
h5_file = msm.demo['pentalanine']['traj.h5']
topology, trajectory = msm.convert(h5_file, to_form=['molsysmt.Topology','molsysmt.Trajectory'])
msm.info(topology)
msm.info(trajectory)
msm.info([topology, trajectory])
```
Let's now combine both forms into one to check that they were properly converted:
```
pdb_string = msm.convert([topology, trajectory], to_form='string:pdb_text', frame_indices=0)
print(pdb_string)
```
## Some examples with files
```
PDB_file = msm.demo['TcTIM']['1tcd.pdb']
system_pdbfixer = msm.convert(PDB_file, to_form='pdbfixer.PDBFixer')
system_parmed = msm.convert(PDB_file, to_form='parmed.Structure')
MOL2_file = msm.demo['caffeine']['caffeine.mol2']
system_openmm = msm.convert(MOL2_file, to_form='openmm.Modeller')
system_mdtraj = msm.convert(MOL2_file, to_form='mdtraj.Trajectory')
MMTF_file = msm.demo['TcTIM']['1tcd.mmtf']
system_aminoacids1_seq = msm.convert(MMTF_file, to_form='string:aminoacids1')
system_molsys = msm.convert(MMTF_file)
print('Form of object system_pdbfixer: ', msm.get_form(system_pdbfixer))
print('Form of object system_parmed: ', msm.get_form(system_parmed))
print('Form of object system_openmm: ', msm.get_form(system_openmm))
print('Form of object system_mdtraj: ', msm.get_form(system_mdtraj))
print('Form of object system_aminoacids1_seq: ', msm.get_form(system_aminoacids1_seq))
print('Form of object system_molsys: ', msm.get_form(system_molsys))
```
## Some examples with IDs
```
molecular_system = msm.convert('pdb_id:1SUX', to_form='mdtraj.Trajectory')
```
## Conversions implemented in MolSysMT
```
msm.help.convert(from_form='mdtraj.Trajectory', to_form_type='string')
msm.help.convert(from_form='mdtraj.Trajectory', to_form_type='file', as_rows='to')
from_list=['pytraj.Trajectory','mdanalysis.Universe']
to_list=['mdtraj.Trajectory', 'openmm.Topology']
msm.help.convert(from_form=from_list, to_form=to_list)
```
# Practical assignment for lesson 1 (week 2).
## Linear regression: overfitting and regularization
In this assignment we will see, through examples, how linear models overfit, discuss why this happens, and find out how to diagnose and control overfitting.
In every cell that contains a comment with instructions, write code that follows those instructions. The remaining code cells (without comments) just need to be executed. The assignment also contains questions; write your answers after the highlighted word "__Answer:__".
Remember that you can look up the documentation of any method or function (its arguments and what it does) with Shift+Tab. Pressing Tab after an object's name and a dot shows which methods and attributes the object has.
```
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
```
We will work with the dataset __"bikes_rent.csv"__, which records, for each day, calendar information and weather conditions characterizing automated bike rental stations, as well as the number of rentals on that day. The latter is what we will predict; thus, we will solve a regression problem.
### Getting to know the data
Load the dataset with the __pandas.read_csv__ function into the variable __df__. Print the first 5 rows to make sure the data was read correctly:
```
# (0 points)
# Read the data and print the first 5 rows
df = pd.read_csv("bikes_rent.csv")
df.head()
```
The following features are known for each rental day (as listed in the data source):
* _season_: 1 - spring, 2 - summer, 3 - autumn, 4 - winter
* _yr_: 0 - 2011, 1 - 2012
* _mnth_: from 1 to 12
* _holiday_: 0 - no holiday, 1 - holiday
* _weekday_: from 0 to 6
* _workingday_: 0 - non-working day, 1 - working day
* _weathersit_: weather favorability rating from 1 (clear day) to 4 (downpour, fog)
* _temp_: temperature in degrees Celsius
* _atemp_: "feels like" temperature in degrees Celsius
* _hum_: humidity
* _windspeed(mph)_: wind speed in miles per hour
* _windspeed(ms)_: wind speed in meters per second
* _cnt_: number of rented bikes (this is the target feature, which we will predict)
So we have real-valued, binary and nominal (ordinal) features, and all of them can be treated as real-valued, the ordinal ones because an order is defined on them. Let's look at plots of how the target feature depends on the others.
```
fig, axes = plt.subplots(nrows=3, ncols=4, figsize=(15, 10))
for idx, feature in enumerate(df.columns[:-1]):
df.plot(feature, "cnt", subplots=True, kind="scatter", ax=axes[idx // 4, idx % 4])
```
__Block 1. Answer the questions (0.5 points each):__
1. What is the nature of the dependence of the number of rentals on the month?
* Answer: it looks like a parabola, i.e., the number of rentals increases during the summer months
2. Name one or two features on which the number of rentals most likely depends linearly
* Answer: temp, atemp
Let's assess the level of linear dependence between the features and the target variable more rigorously. A good measure of linear dependence between two vectors is the Pearson correlation. In pandas it can be computed with two DataFrame methods: corr and corrwith. The df.corr method computes the correlation matrix of all features in the dataframe. The df.corrwith method takes another dataframe as an argument and then computes pairwise correlations between the features of df and that dataframe.
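To make the Pearson coefficient concrete, here is a minimal sketch on synthetic data (the variable names are illustrative, not part of the assignment) showing that pandas computes exactly the textbook formula cov(x, y) / (std(x) * std(y)):

```python
import numpy as np
import pandas as pd

# Synthetic example: y depends linearly on x plus noise
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(scale=0.5, size=200)

# Pearson correlation by its definition
r_manual = np.mean((x - x.mean()) * (y - y.mean())) / (x.std() * y.std())

# The same number via pandas
frame = pd.DataFrame({"x": x, "y": y})
r_pandas = frame["x"].corr(frame["y"])
print(abs(r_manual - r_pandas) < 1e-10)  # True
```

The value is close to 1 here because the relationship is almost perfectly linear; weaker linear dependence pushes it toward 0.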
```
# Code 1.1 (0.5 points)
# Compute the correlations of all features except the last one with the last one using corrwith:
df.corrwith(df['cnt'])[:-1]
```
The sample contains features that correlate with the target, so the problem can be tackled with linear methods.
The plots show that some features are similar to each other. So let's also compute the correlations between the real-valued features.
```
# Code 1.2 (0.5 points)
# Compute the pairwise correlations between temp, atemp, hum, windspeed(mph), windspeed(ms) and cnt
# using the corr method:
df[['temp', 'atemp', 'hum', 'windspeed(mph)', 'windspeed(ms)', 'cnt']].corr()
```
On the diagonal we have ones, as expected. However, the matrix contains two more pairs of strongly correlated columns: temp and atemp (correlated by nature) and the two windspeed columns (simply a conversion between units). Later we will see that this fact negatively affects the training of a linear model.
Finally, let's look at the feature means (the mean method) to assess the scale of the features and the proportion of ones among the binary features.
```
# Code 1.3 (0.5 points)
# Print the feature means
df.mean()
```
The features have different scales, so for further work we had better normalize the object-feature matrix.
### Problem one: collinear features
So, in our data one feature duplicates another, and two more are very similar. Of course, we could remove the duplicates right away, but let's see how training would proceed if we had not noticed this problem.
First, let's scale, or standardize, the features: subtract its mean from each feature and divide by its standard deviation. This can be done with the scale method.
In addition, we need to shuffle the sample; this will be required for cross-validation.
```
from sklearn.preprocessing import scale
from sklearn.utils import shuffle
df_shuffled = shuffle(df, random_state=123)
X = scale(df_shuffled[df_shuffled.columns[:-1]])
y = df_shuffled["cnt"]
```
Let's train a linear regression on our data and look at the feature weights.
```
from sklearn.linear_model import LinearRegression
# Code 2.1 (1 point)
# Create a linear regressor object, train it on all the data and print the model weights
# (the weights are stored in the coef_ attribute of the regressor).
# You can print (feature name, weight) pairs using python's built-in zip function
# The feature names are stored in df.columns
linear_regressor = LinearRegression()
linear_regressor.fit(X, y)
list(zip(df.columns, linear_regressor.coef_))
```
We see that the weights of the linearly dependent features are much larger in absolute value than those of the other features.
To understand why this happened, recall the analytical formula for the weights of a linear model in ordinary least squares:
$w = (X^TX)^{-1} X^T y$.
If X has collinear (linearly dependent) columns, the matrix $X^TX$ becomes singular and the formula is no longer valid. The more dependent the features, the smaller the determinant of this matrix and the worse the approximation $Xw \approx y$. This situation is called the _multicollinearity problem_; you discussed it in the lecture.
With the slightly less correlated pair temp-atemp this did not happen, but in practice you should always watch the coefficients of similar features closely.
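The near-singularity of $X^TX$ under collinearity is easy to see numerically; a hedged sketch on synthetic data (all names hypothetical, not the bike dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = 2.0 * x1                     # exact linear copy (like the two windspeed columns)
x3 = rng.normal(size=n)           # an independent feature

X_collinear = np.column_stack([x1, x2, x3])
X_independent = np.column_stack([x1, rng.normal(size=n), x3])

# The condition number of X^T X explodes when columns are linearly dependent,
# so (X^T X)^{-1} in the OLS formula becomes numerically meaningless
print(np.linalg.cond(X_collinear.T @ X_collinear))      # astronomically large
print(np.linalg.cond(X_independent.T @ X_independent))  # modest
```

A huge condition number means tiny perturbations of the data produce wild swings in the fitted weights, which is exactly the inflated-coefficient behavior observed above.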
The __solution__ to the multicollinearity problem is to _regularize_ the linear model. The L1 or L2 norm of the weights, multiplied by a regularization coefficient $\alpha$, is added to the optimized functional. In the first case the method is called Lasso, and in the second Ridge. See the lecture for more details.
Train the Ridge and Lasso regressors with default parameters and make sure the weight problem is solved.
```
from sklearn.linear_model import Lasso, Ridge
# Code 2.2 (0.5 points)
# Train a linear model with L1 regularization and print the weights
lasso_regressor = Lasso()
lasso_regressor.fit(X, y)
list(zip(df.columns, lasso_regressor.coef_))
# Code 2.3 (0.5 points)
# Train a linear model with L2 regularization and print the weights
ridge_regressor = Ridge()
ridge_regressor.fit(X, y)
list(zip(df.columns, ridge_regressor.coef_))
```
### Problem two: uninformative features
Unlike L2 regularization, L1 zeroes out the weights of some features. An explanation of this fact is given in one of the course lectures.
Let's observe how the weights change as the regularization coefficient $\alpha$ increases (in the lecture the coefficient of the regularizer may have been denoted by a different letter).
```
# Code 3.1 (1 point)
alphas = np.arange(1, 500, 50)
coefs_lasso = np.zeros((alphas.shape[0], X.shape[1]))  # weight matrix of size (number of regressors) x (number of features)
coefs_ridge = np.zeros((alphas.shape[0], X.shape[1]))
# For each coefficient value in alphas, train a Lasso regressor
# and write the weights into the corresponding row of coefs_lasso (recall python's built-in enumerate function),
# then train Ridge and write the weights into coefs_ridge.
for index, a in enumerate(alphas):
lasso_regressor = Lasso(alpha=a)
lasso_regressor.fit(X, y)
coefs_lasso[index] = lasso_regressor.coef_
ridge_regressor = Ridge(alpha=a)
ridge_regressor.fit(X, y)
coefs_ridge[index] = ridge_regressor.coef_
```
Let's visualize the dynamics of the weights as the regularization parameter increases:
```
plt.figure(figsize=(8, 5))
for coef, feature in zip(coefs_lasso.T, df.columns):
    plt.plot(alphas, coef, label=feature, color=np.random.rand(3))
plt.legend(loc="upper right", bbox_to_anchor=(1.4, 0.95))
plt.xlabel("alpha")
plt.ylabel("feature weight")
plt.title("Lasso")

plt.figure(figsize=(8, 5))
for coef, feature in zip(coefs_ridge.T, df.columns):
    plt.plot(alphas, coef, label=feature, color=np.random.rand(3))
plt.legend(loc="upper right", bbox_to_anchor=(1.4, 0.95))
plt.xlabel("alpha")
plt.ylabel("feature weight")
plt.title("Ridge")
```
Answers to the following questions can be given by looking at the plots or by printing out the coefficients.
__Block 2. Answer the questions (0.25 points each)__:
1. Which regularizer (Ridge or Lasso) shrinks the weights more aggressively for the same alpha?
* Answer: Lasso
1. What happens to the Lasso weights if alpha is made very large? Explain why this happens.
* Answer: the weights go to 0, because the L1 norm (the sum of absolute values) is used
1. Can we claim that Lasso excludes one of the windspeed features for any alpha > 0? What about Ridge? A regularizer is considered to exclude a feature if the coefficient on it is < 1e-3.
* Answer: Yes. No
1. Which of the regularizers is suitable for filtering out uninformative features?
* Answer: Lasso
From here on we will work with Lasso.
So, we see that as alpha changes, the model selects the feature coefficients differently. We need to choose the best alpha.
For that, first of all, we need a quality metric. As the metric we will use the functional optimized by least squares itself, i.e. Mean Squared Error.
Second, we need to decide which data to compute this metric on. We cannot choose alpha by the MSE on the training set, because then we cannot estimate how the model will predict on data that is new to it. If we pick a single split of the data into a training and a test set (this is called holdout), we tune to those particular "new" data and may overfit once again. Therefore we will make several splits of the dataset, try different alpha values on each one, and then average the MSE. Such splits are most conveniently done via cross-validation: divide the dataset into K parts (folds), and each time take one of them as the test set, composing the training set from the remaining folds.
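The fold-splitting logic described above can be sketched manually as follows (an illustration only; it assumes `X` and `y` are NumPy arrays — use `.values` if they are pandas objects — and the helper name is hypothetical):

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

def cv_mse(X, y, alpha, n_splits=3):
    """Average test-fold MSE of Lasso(alpha) over a K-fold split."""
    errors = []
    for train_idx, test_idx in KFold(n_splits=n_splits).split(X):
        model = Lasso(alpha=alpha).fit(X[train_idx], y[train_idx])
        errors.append(mean_squared_error(y[test_idx], model.predict(X[test_idx])))
    return np.mean(errors)

# e.g. best_alpha = min(alphas, key=lambda a: cv_mse(X, y, a))
```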
Doing cross-validation for regression in sklearn is very simple: there is a dedicated regressor, __LassoCV__, which takes a list of alphas as input and computes the cross-validated MSE for each of them. After training (if you keep the default cv=3), the regressor will contain the attribute __mse\_path\___, a matrix of size len(alpha) x k, k = 3 (the number of folds in the cross-validation), holding the test MSE values for the corresponding runs. In addition, the alpha\_ attribute stores the chosen value of the regularization parameter, and coef\_, as usual, the trained weights corresponding to that alpha\_.
Note that the regressor may change the order in which it iterates over alphas; to match them against the MSE matrix it is better to use the regressor's alphas\_ attribute.
```
from sklearn.linear_model import LassoCV
# Code 3.2 (1 point)
# Train a LassoCV regressor on all the regularization parameters from alphas
# Plot the MSE _averaged_ over the folds as a function of alpha.
# Print the chosen alpha, as well as the "feature-coefficient" pairs for the trained coefficient vector
import numpy as np

alphas = np.arange(1, 100, 5)
lassoCV_regressor = LassoCV(alphas=alphas)
lassoCV_regressor.fit(X, y)

plt.plot(lassoCV_regressor.alphas_, np.mean(lassoCV_regressor.mse_path_, axis=1))
plt.xlabel("alpha")
plt.ylabel("MSE")
plt.title("Cross-validation")

print('alpha = ' + str(lassoCV_regressor.alpha_))
list(zip(df.columns, lassoCV_regressor.coef_))
```
So, we have chosen some regularization parameter. Let's see which alphas we would have chosen had we split the data into a training and a test set only once, i.e. let's look at the MSE trajectories corresponding to the individual folds.
```
# Code 3.3 (1 point)
# Print the alpha values corresponding to the MSE minimum on each split (i.e. per column).
# Visualize the columns of .mse_path_ on three separate plots
for k in range(lassoCV_regressor.mse_path_.shape[1]):
    best = np.argmin(lassoCV_regressor.mse_path_[:, k])
    print('k = ' + str(k + 1) + ', alpha = ' + str(lassoCV_regressor.alphas_[best]))
    plt.plot(lassoCV_regressor.alphas_, lassoCV_regressor.mse_path_[:, k])
    plt.xlabel("alpha")
    plt.ylabel("MSE")
    plt.show()
```
On each split the optimal alpha value is different, and it corresponds to a large MSE on the other splits. It turns out we are tuning to the particular training and validation sets. When choosing alpha on cross-validation, we pick something "average" that gives an acceptable metric value across different splits of the data.
Finally, as is customary in data analysis, let's interpret the result.
__Block 3. Answer the questions (0.5 points each):__
1. In the last trained model, pick the 4 features with the largest (positive) coefficients (and write them out), then look at the visualizations of the dependence of cnt on these features that we drew in the "Getting to know the data" block. Is an increasing linear dependence of cnt on these features visible in the plots? Is it reasonable to claim (from common sense) that the larger the value of these features, the more people will want to rent bikes?
* Answer: yr, season, atemp, temp. It is reasonable that as the air temperature rises, the number of rented bikes increases. With the season and year features things are not so clear-cut. From the fact that there were more rentals in 2012 than in 2011 it does not follow that there will be more in 2013 than in 2012. The same goes for season. Or I messed up somewhere and picked the wrong features :(
1. Pick the 3 features with the largest negative coefficients by absolute value (and write them out), then look at the corresponding visualizations. Is a decreasing linear dependence visible? Is it reasonable to claim that the larger these features, the fewer people will want to rent bikes?
* Answer: weathersit, windspeed(mph), hum. All these features - the weather rating, wind speed, and humidity - characterize unfavorable weather; the higher they are, the worse the conditions for riding. So it is reasonable that the number of rented bikes drops as they increase.
1. Write out the features with coefficients close to zero (< 1e-3). Why do you think the model excluded them (look at the plots again)? Is it true that they have no effect on bike demand?
* Answer: windspeed(ms), since it duplicates windspeed(mph). Essentially it is the same feature; it does affect demand, but the duplicate must be removed to avoid overestimating its influence.
### Conclusion
So, we have seen how to monitor the adequacy of a linear model, how to select features, and how to choose the regularization coefficient properly, avoiding, as far as possible, tuning to any particular portion of the data.
It is worth noting that cross-validation is convenient for tuning only a small number of parameters (1, 2, at most 3), because for each admissible combination we have to train the model several times, and this is a time-consuming process, especially when training on large amounts of data.
# Initialize a game
```
from ConnectN import ConnectN
game_setting = {'size':(6,6), 'N':4, 'pie_rule':True}
game = ConnectN(**game_setting)
%matplotlib notebook
from Play import Play

gameplay = Play(ConnectN(**game_setting),
                player1=None,
                player2=None)
```
# Define our policy
Please go ahead and define your own policy! See if you can train it under 1000 games and with only 1000 steps of exploration in each move.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from math import *
import numpy as np
from ConnectN import ConnectN
game_setting = {'size':(6,6), 'N':4}
game = ConnectN(**game_setting)
class Policy(nn.Module):

    def __init__(self, game):
        super(Policy, self).__init__()
        # input = 6x6 board
        # convert to 5x5x16
        self.conv1 = nn.Conv2d(1, 16, kernel_size=2, stride=1, bias=False)
        # 5x5x16 to 3x3x32
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, stride=1, bias=False)
        self.size = 3*3*32
        # the part for actions
        self.fc_action1 = nn.Linear(self.size, self.size//4)
        self.fc_action2 = nn.Linear(self.size//4, 36)
        # the part for the value function
        self.fc_value1 = nn.Linear(self.size, self.size//6)
        self.fc_value2 = nn.Linear(self.size//6, 1)
        self.tanh_value = nn.Tanh()

    def forward(self, x):
        y = F.leaky_relu(self.conv1(x))
        y = F.leaky_relu(self.conv2(y))
        y = y.view(-1, self.size)
        # action head: mask out occupied squares before normalizing
        a = self.fc_action2(F.leaky_relu(self.fc_action1(y)))
        avail = (torch.abs(x.squeeze()) != 1).type(torch.FloatTensor)
        avail = avail.view(-1, 36)
        maxa = torch.max(a)
        exp = avail * torch.exp(a - maxa)
        prob = exp / torch.sum(exp)
        # value head
        value = self.tanh_value(self.fc_value2(F.leaky_relu(self.fc_value1(y))))
        return prob.view(6, 6), value
policy = Policy(game)
```
# Define a MCTS player for Play
```
import MCTS
from copy import copy
def Policy_Player_MCTS(game):
    mytree = MCTS.Node(copy(game))
    for _ in range(1000):
        mytree.explore(policy)
    mytreenext, (v, nn_v, p, nn_p) = mytree.next(temperature=0.1)
    return mytreenext.game.last_move

import random

def Random_Player(game):
    return random.choice(game.available_moves())
```
# Play a game against a random policy
```
%matplotlib notebook
from Play import Play

gameplay = Play(ConnectN(**game_setting),
                player1=Policy_Player_MCTS,
                player2=None)
```
# Training
```
# initialize our alphazero agent and optimizer
import torch.optim as optim
game=ConnectN(**game_setting)
policy = Policy(game)
optimizer = optim.Adam(policy.parameters(), lr=.01, weight_decay=1.e-5)
! pip install progressbar
```
Beware, training is **VERY VERY** slow!!
```
# train our agent
from collections import deque
import MCTS
# try a higher number
episodes = 2000
import progressbar as pb
widget = ['training loop: ', pb.Percentage(), ' ',
          pb.Bar(), ' ', pb.ETA()]
timer = pb.ProgressBar(widgets=widget, maxval=episodes).start()

outcomes = []
policy_loss = []
Nmax = 1000

for e in range(episodes):
    mytree = MCTS.Node(game)
    logterm = []
    vterm = []

    while mytree.outcome is None:
        for _ in range(Nmax):
            mytree.explore(policy)
            if mytree.N >= Nmax:
                break
        current_player = mytree.game.player
        mytree, (v, nn_v, p, nn_p) = mytree.next()
        mytree.detach_mother()

        loglist = torch.log(nn_p)*p
        constant = torch.where(p > 0, p*torch.log(p), torch.tensor(0.))
        logterm.append(-torch.sum(loglist - constant))
        vterm.append(nn_v*current_player)

    # we compute the "policy_loss" for computing gradient
    outcome = mytree.outcome
    outcomes.append(outcome)
    loss = torch.sum((torch.stack(vterm) - outcome)**2 + torch.stack(logterm))
    optimizer.zero_grad()
    loss.backward()
    policy_loss.append(float(loss))
    optimizer.step()

    if e % 10 == 0:
        print("game: ", e+1, ", mean loss: {:3.2f}".format(np.mean(policy_loss[-20:])),
              ", recent outcomes: ", outcomes[-10:])
    if e % 500 == 0:
        torch.save(policy, '6-6-4-pie-{:d}.mypolicy'.format(e))
    del loss

    timer.update(e+1)

timer.finish()
```
# setup environment to pit your AI against the challenge policy '6-6-4-pie.policy'
```
challenge_policy = torch.load('6-6-4-pie.policy')
def Challenge_Player_MCTS(game):
    mytree = MCTS.Node(copy(game))
    for _ in range(1000):
        mytree.explore(challenge_policy)
    mytreenext, (v, nn_v, p, nn_p) = mytree.next(temperature=0.1)
    return mytreenext.game.last_move
```
# Let the game begin!
```
%matplotlib notebook
gameplay = Play(ConnectN(**game_setting),
                player2=Policy_Player_MCTS,
                player1=Challenge_Player_MCTS)
```
# Interpolation
```
import numpy as np
import matplotlib.pyplot as plt
```
### Linear Interpolation
Suppose we are given a function $f(x)$ at just two points, $x=a$ and $x=b$, and we want to know the function's value at another point in between. The simplest way to estimate this value is linear interpolation, which assumes the function follows a straight line between the two points. The slope of this straight-line approximation is:
$$ m = \frac{f(b) - f(a)}{b - a} $$
Then the value $f(x)$ can be approximated by:
$$ f(x) \approx \frac{f(b) - f(a)}{b-a} (x-a) + f(a) $$
#### Step 1: Define a linear function
Create a linear function $f(x) = ax + b$. Linear interpolation will yield an accurate answer for a linear function, which is how we will test our linear interpolation.
```
def my_function(x):
    # TO DO: Create a linear function
    return 3*x + 2
```
#### Step 2: Implement the linear interpolation
Using the equations given above, implement the linear interpolation function
```
def linear_interpolation(x, a, fa, b, fb):
    """
    Fits a line to points (a, f(a)) and (b, f(b)) and returns an
    approximation for f(x) for some value x between a and b from
    the equation of the line.

    Parameters:
        x (float): the point of interest between a and b
        a (float): known x value
        fa (float): known f(a) value
        b (float): known x value (b > a)
        fb (float): known f(b) value

    Returns:
        (float): an approximation of f(x) using linear interpolation
    """
    # To Do: Implement the linear interpolation function
    slope = (fb - fa) / (b - a)
    return slope * (x - a) + fa
```
#### Step 3: Test your linear interpolation
Using the linear function you created and your linear interpolation function, write at least three assert statements.
```
# To DO: Create at least three assert statements using my_function and linear_interpolation
assert(abs(linear_interpolation(4, 0, 2, 6, 20) - my_function(4)) < 0.01)
assert(abs(linear_interpolation(0, 1, 5, -2, -4) - my_function(0)) < 0.01)
assert(abs(linear_interpolation(-3, -10, -28, 5, 17) - my_function(-3)) < 0.01)
```
#### Step 4: Visualize your results
Plot your function. Using a scatter plot, plot at least three x, y points generated using your linear_interpolation function.
```
# To Do: Plot your function with at least three interpolated values
x = [-5, 12, 17, 22, 23, 30]
y = [linear_interpolation(x[i], -6, -16, 40, 122) for i in range(len(x))]
plt.scatter(x, y)
plt.show()
```
### 2nd Order Lagrangian Interpolation
If we have more than two points, a better way to estimate "in between" points is Lagrangian interpolation, which fits an nth-order polynomial to a number of points. Higher-order polynomials often introduce unnecessary "wiggles" that add error, so using many low-order polynomials often generates a better estimate. For this example, let's use a quadratic (i.e. a 2nd-order polynomial).
$$f(x) = \frac{(x-b)(x-c)}{(a - b)(a-c)}f(a) + \frac{(x-a)(x-c)}{(b-a)(b-c)}f(b) + \frac{(x - a)(x-b)}{(c - a)(c - b)} f(c) $$
#### Step 1: Define a quadratic function
Create a quadratic function $f(x) = ax^2 + bx + c$. 2nd Order Lagrangian Interpolation will yield an accurate answer for a 2nd order polynomial (i.e. a quadratic). This is how we will test our interpolation.
```
def my_function2(x):
    # To Do: Create a quadratic function
    return x*x - 4*x + 4
```
#### Step 2: Implement the 2nd Order Lagrangian Interpolation Function
Using the equations given above, implement the 2nd order lagrangian interpolation function
```
def lagrangian_interpolation(x, a, fa, b, fb, c, fc):
    """
    Fits a quadratic to points (a, f(a)), (b, f(b)), and (c, f(c)) and returns an
    approximation for f(x) for some value x between a and c from the
    equation of a quadratic.

    Parameters:
        x (float): the point of interest between a and c
        a (float): known x value
        fa (float): known f(a) value
        b (float): known x value (b > a)
        fb (float): known f(b) value
        c (float): known x value (c > b)
        fc (float): known f(c) value

    Returns:
        (float): an approximation of f(x) using 2nd order Lagrangian interpolation
    """
    term1 = ((x - b) * (x - c) * fa) / ((a - b) * (a - c))
    term2 = ((x - a) * (x - c) * fb) / ((b - a) * (b - c))
    term3 = ((x - a) * (x - b) * fc) / ((c - a) * (c - b))
    return term1 + term2 + term3
```
#### Step 3: Test your results
Using the quadratic function you created and your 2nd order lagrangian interpolation function, write at least three assert statements.
```
# To Do: Write at least three assert statements
assert(abs(lagrangian_interpolation(-4, -6, 64, 0, 4, 8, 36) - my_function2(-4)) < 0.01)
assert(abs(lagrangian_interpolation(2, -6, 64, 8, 36, 40, 1444) - my_function2(2)) < 0.01)
assert(abs(lagrangian_interpolation(-3, -6, 64, 0, 4, 8, 36) - my_function2(-3)) < 0.01)
```
#### Step 4: Visualize your results
Plot your function; using a scatter plot, plot at least three (x, y) points generated from your lagrangian_interpolation function.
```
# To Do: Plot your function with interpolated values
x = [-16, -12, -7, 3, 16, 25, 30]
y = [lagrangian_interpolation(x[i], -6, 64, 40, 1444, 41, 1521) for i in range(len(x))]
plt.scatter(x, y)
plt.show()
```
### Application
Also included alongside this notebook is a text file called `Partial_Data.txt`, which contains sparse data. In this application section we're going to import the data and approximate the curve using linear and 2nd-order Lagrangian interpolation.
#### Step 1: Import the data
Take a look at the file and see what data it contains. I suggest using `np.loadtxt` to import this data. Using the argument `unpack = True` will allow you to easily assign each column of data to an individual variable. For more information on the `loadtxt` function and its allowed arguments, see: https://numpy.org/doc/stable/reference/generated/numpy.loadtxt.html
```
# To Do: Import the data
xvalues, yvalues = np.loadtxt("./Partial_Data.txt", unpack = True)
# To Do: Scatter plot the data
plt.scatter(xvalues, yvalues)
plt.show()
```
#### Step 2: Linear Interpolation
Using your linear interpolation function above, iterate through the sparse data and generate interpolated values.
Here's one method to get you started:
Starting at the 2nd data point, iterate through the data, using the current value (let this value be $b$) and the previous data point (let this be $a$, where $b > a$). Interpolate 100 points between the values of ($a, b$) and plot these values. Move on to the next data point and repeat.
```
# To Do: Generate and plot interpolated data
xscatter = []
yscatter = []

for i in range(1, len(xvalues)):
    b = xvalues[i]
    a = xvalues[i-1]
    x = a
    for j in range(1, 101):
        x += (b - a) / 100
        fx = linear_interpolation(x, a, yvalues[i-1], b, yvalues[i])
        xscatter.append(x)
        yscatter.append(fx)

plt.scatter(xscatter, yscatter)
plt.show()
```
#### Step 3: 2nd Order Lagrangian Interpolation
Using your 2nd Order Lagrangian Interpolation function above, iterate through the sparse data and generate interpolated values.
Here's one method to get you started:
Starting at the 3rd data point, iterate through the data, using the current value (let this value be $c$) and the previous two (let these be $a$ and $b$, where $b > a$). Interpolate 100 points between the values of ($a, b$) and plot these values. Move on to the next data point and repeat.
```
# To Do: Generate and plot interpolated data
xscatter2 = []
yscatter2 = []

for i in range(2, len(xvalues)):
    c = xvalues[i]
    b = xvalues[i-1]
    a = xvalues[i-2]
    x = a
    for j in range(1, 101):
        x += (b - a) / 100
        fx = lagrangian_interpolation(x, a, yvalues[i-2], b, yvalues[i-1], c, yvalues[i])
        xscatter2.append(x)
        yscatter2.append(fx)

plt.scatter(xscatter2, yscatter2)
plt.show()
```
# Cleaning Your Data
Let's take a web access log, and figure out the most-viewed pages on a website from it! Sounds easy, right?
Let's set up a regex that lets us parse an Apache access log line:
```
import re
format_pat= re.compile(
r"(?P<host>[\d\.]+)\s"
r"(?P<identity>\S*)\s"
r"(?P<user>\S*)\s"
r"\[(?P<time>.*?)\]\s"
r'"(?P<request>.*?)"\s'
r"(?P<status>\d+)\s"
r"(?P<bytes>\S*)\s"
r'"(?P<referer>.*?)"\s'
r'"(?P<user_agent>.*?)"\s*'
)
```
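As a quick sanity check of the pattern, we can run it on one concrete log line (the xmlrpc.php example quoted later in this notebook; the pattern is repeated here so the snippet stands on its own):

```python
import re

format_pat = re.compile(
    r"(?P<host>[\d\.]+)\s"
    r"(?P<identity>\S*)\s"
    r"(?P<user>\S*)\s"
    r"\[(?P<time>.*?)\]\s"
    r'"(?P<request>.*?)"\s'
    r"(?P<status>\d+)\s"
    r"(?P<bytes>\S*)\s"
    r'"(?P<referer>.*?)"\s'
    r'"(?P<user_agent>.*?)"\s*'
)

# A sample line copied from the access log
sample = ('46.166.139.20 - - [05/Dec/2015:05:19:35 +0000] '
          '"POST /xmlrpc.php HTTP/1.0" 200 370 "-" '
          '"Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"')

fields = format_pat.match(sample).groupdict()
print(fields['host'], fields['status'], fields['request'])
```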
Here's the path to the log file I'm analyzing:
```
logPath = "access_log.txt"
```
Now we'll whip up a little script to extract the URL in each access, and use a dictionary to count up the number of times each one appears. Then we'll sort it and print out the top 20 pages. What could go wrong?
```
URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            request = access['request']
            (action, URL, protocol) = request.split()
            if URL in URLCounts:
                URLCounts[URL] = URLCounts[URL] + 1
            else:
                URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))
```
Hm. The 'request' part of the line is supposed to look something like this:
GET /blog/ HTTP/1.1
There should be an HTTP action, the URL, and the protocol. But it seems that's not always happening. Let's print out requests that don't contain three items:
```
URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            request = access['request']
            fields = request.split()
            if (len(fields) != 3):
                print(fields)
```
Huh. In addition to empty fields, there's one that just contains garbage. Well, let's modify our script to check for that case:
```
URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            request = access['request']
            fields = request.split()
            if (len(fields) == 3):
                URL = fields[1]
                if URL in URLCounts:
                    URLCounts[URL] = URLCounts[URL] + 1
                else:
                    URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))
```
It worked! But, the results don't really make sense. What we really want is pages accessed by real humans looking for news from our little news site. What the heck is xmlrpc.php? A look at the log itself turns up a lot of entries like this:
46.166.139.20 - - [05/Dec/2015:05:19:35 +0000] "POST /xmlrpc.php HTTP/1.0" 200 370 "-" "Mozilla/4.0 (compatible: MSIE 7.0; Windows NT 6.0)"
I'm not entirely sure what that script does, but this shows we're not just processing GET actions. We don't want POSTs, so let's filter those out:
```
URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            request = access['request']
            fields = request.split()
            if (len(fields) == 3):
                (action, URL, protocol) = fields
                if (action == 'GET'):
                    if URL in URLCounts:
                        URLCounts[URL] = URLCounts[URL] + 1
                    else:
                        URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))
```
That's starting to look better. But, this is a news site - are people really reading the little blog on it instead of news pages? That doesn't make sense. Let's look at a typical /blog/ entry in the log:
54.165.199.171 - - [05/Dec/2015:09:32:05 +0000] "GET /blog/ HTTP/1.0" 200 31670 "-" "-"
Hm. Why is the user agent blank? Seems like some sort of malicious scraper or something. Let's figure out what user agents we are dealing with:
```
UserAgents = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            agent = access['user_agent']
            if agent in UserAgents:
                UserAgents[agent] = UserAgents[agent] + 1
            else:
                UserAgents[agent] = 1

results = sorted(UserAgents, key=lambda i: int(UserAgents[i]), reverse=True)

for result in results:
    print(result + ": " + str(UserAgents[result]))
```
Yikes! In addition to '-', there are also a million different web robots accessing the site and polluting my data. Filtering out all of them is really hard, but getting rid of the ones significantly polluting my data in this case should be a matter of getting rid of '-', anything containing "bot" or "spider", and W3 Total Cache.
```
URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            agent = access['user_agent']
            if (not('bot' in agent or 'spider' in agent or
                    'Bot' in agent or 'Spider' in agent or
                    'W3 Total Cache' in agent or agent == '-')):
                request = access['request']
                fields = request.split()
                if (len(fields) == 3):
                    (action, URL, protocol) = fields
                    if (action == 'GET'):
                        if URL in URLCounts:
                            URLCounts[URL] = URLCounts[URL] + 1
                        else:
                            URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))
```
Now, our new problem is that we're getting a bunch of hits on things that aren't web pages. We're not interested in those, so let's filter out any URL that doesn't end in / (all of the pages on my site are accessed in that manner - again this is applying knowledge about my data to the analysis!)
```
URLCounts = {}

with open(logPath, "r") as f:
    for line in (l.rstrip() for l in f):
        match = format_pat.match(line)
        if match:
            access = match.groupdict()
            agent = access['user_agent']
            if (not('bot' in agent or 'spider' in agent or
                    'Bot' in agent or 'Spider' in agent or
                    'W3 Total Cache' in agent or agent == '-')):
                request = access['request']
                fields = request.split()
                if (len(fields) == 3):
                    (action, URL, protocol) = fields
                    if (URL.endswith("/")):
                        if (action == 'GET'):
                            if URL in URLCounts:
                                URLCounts[URL] = URLCounts[URL] + 1
                            else:
                                URLCounts[URL] = 1

results = sorted(URLCounts, key=lambda i: int(URLCounts[i]), reverse=True)

for result in results[:20]:
    print(result + ": " + str(URLCounts[result]))
```
This is starting to look more believable! But if you were to dig even deeper, you'd find that the /feed/ pages are suspect, and some robots are still slipping through. However, it is accurate to say that Orlando news, world news, and comics are the most popular pages accessed by a real human on this day.
The moral of the story is - know your data! And always question and scrutinize your results before making decisions based on them. If your business makes a bad decision because you provided an analysis of bad source data, you could get into real trouble.
Be sure the decisions you make while cleaning your data are justifiable too - don't strip out data just because it doesn't support the results you want!
## Activity
These results still aren't perfect; URL's that include "feed" aren't actually pages viewed by humans. Modify this code further to strip out URL's that include "/feed". Even better, extract some log entries for these pages and understand where these views are coming from.
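One possible starting point for the first part of the activity (a sketch, not the only answer; the helper name is hypothetical, and the counting loop itself stays as above):

```python
def is_human_page(url):
    """Heuristic filter: keep page-like URLs and drop feed endpoints."""
    return url.endswith("/") and "/feed" not in url

# In the counting loop, replace the URL.endswith("/") test with is_human_page(URL).
print(is_human_page("/orlando-headlines/"))   # a page a human might read
print(is_human_page("/national-world/feed/")) # a feed fetched by software
```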
# CORDIS FP7
```
import json
import re
import urllib
from titlecase import titlecase
import pandas as pd
pd.set_option('display.max_columns', 50)
```
## Read in Data
```
all_projects = pd.read_excel('input/fp7/cordis-fp7projects.xlsx')
all_projects.shape
all_organizations = pd.read_excel('input/fp7/cordis-fp7organizations.xlsx')
all_organizations.shape
all_briefs = pd.read_excel('input/fp7/cordis-fp7briefs.xlsx')
all_briefs.shape
```
## Count Organisations and Countries
It is useful to know the total number of organisations and the number of countries involved, to deal with cases where the contribution of each organisation is unknown.
```
all_organizations[['projectRcn', 'id', 'country']].count()
[
all_organizations.country.isna().sum(),
(all_organizations.country[~all_organizations.country.isna()] !=
all_organizations.country[~all_organizations.country.isna()].str.strip()).sum(),
(all_organizations.country[~all_organizations.country.isna()] !=
all_organizations.country[~all_organizations.country.isna()].str.upper()).sum(),
]
project_num_organizations = all_organizations.groupby('projectRcn').\
id.nunique().reset_index().rename(columns={'id': 'num_organizations'})
project_num_organizations.shape
project_num_countries = all_organizations.groupby('projectRcn').\
country.nunique().reset_index().rename(columns={'country': 'num_countries'})
project_num_countries.shape
project_num_organizations_and_countries = pd.merge(
project_num_countries, project_num_organizations,
on='projectRcn', validate='1:1'
)
project_num_organizations_and_countries.shape
project_num_organizations_and_countries.head()
```
## Restrict to UK
We are only interested in projects and organizations where the coordinator or at least one participant institution is in the UK.
```
uk_organizations = all_organizations[all_organizations.country == 'UK']
uk_organizations.shape
uk_organizations.head()
uk_projects = all_projects[all_projects.id.isin(uk_organizations.projectID)]
uk_projects.shape
uk_projects.head()
uk_briefs = all_briefs[all_briefs.projectRcn.isin(uk_projects.rcn)]
uk_briefs.shape
uk_briefs.head()
```
## Examples
### Coordinator outside UK
The UK has two participant institutions. It appears that `projects.ecMaxContribution` is the sum of all `organizations.ecContribution`s for all coordinator and participant institutions.
```
uk_projects[uk_projects.rcn == 101244]
uk_organizations[uk_organizations.projectRcn == 101244]
all_organizations[all_organizations.projectRcn == 101244].ecContribution.max()
all_organizations[all_organizations.projectRcn == 101244].ecContribution.sum()
all_briefs[all_briefs.projectRcn == 101244]
```
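To test this observation across every project rather than a single example, one could compare the per-project sums against `ecMaxContribution` directly. This is a sketch: the helper name is an assumption, the frames are passed in explicitly so it can be checked in isolation, and differences up to a euro are tolerated as rounding:

```python
import pandas as pd

def contribution_mismatches(projects, organizations, tol=1.0):
    """Projects whose summed ecContribution differs from ecMaxContribution by > tol."""
    summed = organizations.groupby('projectRcn').ecContribution.sum().reset_index()
    merged = summed.merge(projects[['rcn', 'ecMaxContribution']],
                          left_on='projectRcn', right_on='rcn')
    diff = (merged.ecContribution - merged.ecMaxContribution).abs()
    return merged[diff > tol]

# e.g. contribution_mismatches(all_projects, all_organizations)
```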
### Coordinator in UK
This one is also interesting in that it seems to have a lot of duplicate records that don't have titles, for some reason. We will need to filter those out.
```
uk_projects[uk_projects.rcn == 99464]
uk_organizations[uk_organizations.projectRcn == 99464]
uk_organizations[uk_organizations.projectRcn == 99464].ecContribution.unique().sum()
all_briefs[all_briefs.projectRcn == 99464]
```
## Duplicate Projects
It looks like it's safe to just drop projects without titles; those seem to be the only duplicates.
```
[uk_projects.rcn.nunique(), uk_projects.id.nunique(), uk_projects.shape]
uk_projects[uk_projects.duplicated('rcn', keep=False)]
uk_projects[pd.isnull(uk_projects.title)]
clean_projects = uk_projects[~pd.isnull(uk_projects.title)].copy()
# Could include coordinator and participants... would need some extra cleaning.
clean_projects.drop([
'id', 'programme', 'topics', 'frameworkProgramme', 'call',
'fundingScheme', 'coordinator', 'participants', 'subjects'
], axis=1, inplace=True)
clean_projects.rename(columns={
'startDate': 'start_date',
'endDate': 'end_date',
'projectUrl': 'project_url',
'totalCost': 'total_cost_eur',
'ecMaxContribution': 'max_contribution_eur',
'coordinatorCountry': 'coordinator_country',
'participantCountries': 'participant_countries'
}, inplace=True)
clean_projects.shape
clean_projects.describe()
clean_projects.head()
```
## Check Project Columns
```
clean_projects.count()
```
### Acronym
Just missing one.
```
clean_projects[clean_projects.acronym.isna()]
```
### Status
Some projects are listed as cancelled. It's not clear what this means exactly. Spot checks reveal that some of them apparently received at least partial funding and delivered some results, so it does not seem appropriate to remove them altogether.
- https://cordis.europa.eu/result/rcn/237795_en.html (TORTELLEX)
- https://cordis.europa.eu/result/rcn/196663_en.html (YSCHILLER)
- https://cordis.europa.eu/project/rcn/188111_en.html (MICARTREGEN) - no results
```
clean_projects.status.value_counts()
clean_projects[clean_projects.status == 'CAN'].head()
```
### Title
```
(clean_projects.title.str.strip() != clean_projects.title).sum()
```
### Start and End Dates
Some are missing. Discard for now. There is some overlap with the cancelled projects, but it is not exact.
```
(clean_projects.start_date.isna() | clean_projects.end_date.isna()).sum()
((clean_projects.status == 'CAN') & (clean_projects.start_date.isna() | clean_projects.end_date.isna())).sum()
((clean_projects.status != 'CAN') & (clean_projects.start_date.isna() | clean_projects.end_date.isna())).sum()
clean_projects = clean_projects[
~clean_projects.start_date.isna() | ~clean_projects.end_date.isna()
]
clean_projects.shape
(clean_projects.start_date > clean_projects.end_date).sum()
```
### Project URL
Looks pretty clean.
```
(~clean_projects.project_url.isna()).sum()
def is_valid_url(url):
    result = urllib.parse.urlparse(str(url))
    return bool((result.scheme == 'http' or result.scheme == 'https') and result.netloc)
project_url_bad = ~clean_projects.project_url.isna() & ~clean_projects.project_url.apply(is_valid_url)
project_url_bad.sum()
clean_projects[project_url_bad]
clean_projects.loc[project_url_bad, 'project_url'] = 'http://' + clean_projects.loc[project_url_bad, 'project_url']
(~clean_projects.project_url.isna() & ~clean_projects.project_url.apply(is_valid_url)).sum()
```
### Objective
```
(clean_projects.objective.str.strip() != clean_projects.objective).sum()
clean_projects.objective = clean_projects.objective.str.strip()
```
### Total Cost and EC Max Contribution
```
clean_projects.total_cost_eur.describe()
clean_projects.max_contribution_eur.describe()
(clean_projects.max_contribution_eur > clean_projects.total_cost_eur).sum()
```
## Clean Up Organizations
I notice several issues:
- Some are missing IDs (but do have postcodes)
- Some are missing postcodes
- Some postcodes clearly contain typos (digit substitutions, etc.)
- Some postcodes have been terminated (verified by searching for them on Google)
There are only 2993 unique organization IDs, so this is probably the result of a join.
For now, drop all organizations that don't have both an ID and a valid postcode. (It does look possible to match names to find IDs, and many without postcodes still have addresses, which we could geocode.)
Would be interesting to try this: https://codereview.stackexchange.com/questions/117801/uk-postcode-validation-and-format-correction-tool
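Before the pandas version below, the same normalisation can be sketched on plain strings; the helper `normalise_postcode` and the sample values are illustrative restatements of the chained `str` calls, not part of the pipeline:

```python
# Standalone sketch (assumption: mirrors the pandas chain used below):
# uppercase, drop non-alphanumerics, then re-insert the single space before
# the final "digit + two letters" inward code of a UK postcode.
import re

def normalise_postcode(raw):
    s = re.sub(r'[^A-Z0-9]', '', str(raw).upper().strip())
    return re.sub(r'^(\S+)([0-9][A-Z]{2})$', r'\1 \2', s)

print(normalise_postcode(' sw1a-1aa '))  # SW1A 1AA
print(normalise_postcode('M11AA'))       # M1 1AA
```

Values that do not end in the digit-letter-letter pattern are left unspaced, which is why the remaining unmatched postcodes are inspected afterwards.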
```
[
uk_organizations.shape,
uk_organizations.id.notna().sum(),
uk_organizations.id.isna().sum(),
uk_organizations.id[uk_organizations.id.notna()].nunique(),
uk_organizations.postCode.isna().sum(),
uk_organizations.postCode[uk_organizations.postCode.notna()].nunique()
]
organizations = uk_organizations[uk_organizations.id.notna() & uk_organizations.postCode.notna()].copy()
organizations.id = organizations.id.astype('int64')
organizations.postCode = organizations.postCode.astype('str')
[
organizations.shape,
organizations.id.nunique(),
organizations.postCode.nunique()
]
ukpostcodes = pd.read_csv('../postcodes/input/ukpostcodes.csv.gz')
ukpostcodes.shape
organizations.postCode.isin(ukpostcodes.postcode).sum()
organizations['cleanPostcode'] = organizations.postCode.\
str.upper().\
str.strip().\
str.replace(r'[^A-Z0-9]', '').\
str.replace(r'^(\S+)([0-9][A-Z]{2})$', r'\1 \2')
organizations.cleanPostcode.isin(ukpostcodes.postcode).sum()
organizations.cleanPostcode[~organizations.cleanPostcode.isin(ukpostcodes.postcode)].unique()
organizations = organizations[organizations.cleanPostcode.isin(ukpostcodes.postcode)]
organizations.shape
clean_projects = clean_projects[clean_projects.rcn.isin(organizations.projectRcn)]
clean_projects.shape
```
## Clean Up Duplicate Organizations
I think there is also a join on the contacts, because we get multiple rows for some project-organization pairs. The main thing is that we want the `ecContribution` to be consistent. Otherwise, any row will do.
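As a toy illustration of this strategy (data invented): duplicate rows per (project, organization) pair collapse to one via `groupby().first()`, after confirming the contribution is consistent within each group.

```python
import pandas as pd

orgs = pd.DataFrame({
    'projectRcn': [1, 1, 2],
    'id': [10, 10, 10],
    'ecContribution': [500.0, 500.0, 750.0],
    'contact': ['a', 'b', 'c'],
})

# No group has conflicting contributions...
assert orgs.groupby(['projectRcn', 'id']).ecContribution.nunique().max() == 1

# ...so keeping the first row of each group loses nothing we care about.
deduped = orgs.groupby(['projectRcn', 'id']).first().reset_index()
print(deduped.shape)  # (2, 4)
```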
```
organizations.sort_values(['projectRcn', 'id']).\
groupby(['projectRcn', 'id']).\
filter(lambda x: x.shape[0] > 1)
organizations.groupby(['projectRcn', 'id']).\
filter(lambda x: x.ecContribution.nunique() > 1).shape
clean_organizations = organizations.groupby(['projectRcn', 'id']).first()
clean_organizations.reset_index(inplace=True)
clean_organizations.drop([
'projectID', 'projectAcronym', 'shortName', 'activityType', 'endOfParticipation',
'country', 'street', 'city', 'postCode',
'contactType', 'contactTitle', 'contactFirstNames', 'contactLastNames',
'contactFunction', 'contactTelephoneNumber', 'contactFaxNumber', 'contactEmail'
], axis=1, inplace=True)
clean_organizations.rename({
'projectRcn': 'project_rcn',
'id': 'organization_id',
'ecContribution': 'contribution_eur',
'organizationUrl': 'organization_url',
'cleanPostcode': 'postcode'
}, axis=1, inplace=True)
clean_organizations.name = clean_organizations.name.apply(titlecase)
clean_organizations.shape
clean_organizations.head()
```
## Check Organisations
```
clean_organizations.count()
```
### Role
```
clean_organizations.role.value_counts()
```
### Name
```
(clean_organizations.name.str.strip() != clean_organizations.name).sum()
```
### Contribution EUR
Missing for some organisations.
```
clean_organizations.contribution_eur.describe()
clean_organizations.contribution_eur.isna().sum()
```
### Organisation URL
Mostly clean. Found a couple with a `;` delimiting two URLs, neither of which resolved, so we can get rid of those.
```
(~clean_organizations.organization_url.isna()).sum()
organization_url_bad = ~clean_organizations.organization_url.isna() & \
~clean_organizations.organization_url.apply(is_valid_url)
organization_url_bad.sum()
clean_organizations.loc[organization_url_bad, 'organization_url'] = \
'http://' + clean_organizations.loc[organization_url_bad, 'organization_url']
organization_url_bad = ~clean_organizations.organization_url.isna() & \
~clean_organizations.organization_url.apply(is_valid_url)
organization_url_bad.sum()
clean_organizations[
~clean_organizations.organization_url.isna() & \
clean_organizations.organization_url.str.match('http.*http')].organization_url.unique()
clean_organizations.loc[
~clean_organizations.organization_url.isna() & \
clean_organizations.organization_url.str.match('http.*http'), 'organization_url'] = float('nan')
```
## Briefs
Might as well merge these into the projects where we have them. We have a few duplicates to take care of.
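The duplicate handling in the next cell keeps the most recently updated brief per project; a minimal sketch of that pattern with invented data:

```python
import pandas as pd

briefs = pd.DataFrame({
    'projectRcn': [7, 7, 8],
    'lastUpdateDate': pd.to_datetime(['2017-01-01', '2018-06-01', '2017-03-01']),
    'title': ['old', 'new', 'only'],
})

# Sort so the most recent update comes last, then drop the earlier duplicates
latest = briefs.sort_values('lastUpdateDate')
latest = latest[~latest.projectRcn.duplicated(keep='last')]
print(sorted(latest.title))  # ['new', 'only']
```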
```
clean_briefs = uk_briefs[
uk_briefs.projectRcn.isin(clean_projects.rcn) &\
(uk_briefs.title.notna() | uk_briefs.teaser.notna() | uk_briefs.article.notna())
].copy()
clean_briefs.shape
clean_briefs[clean_briefs.projectRcn.duplicated(keep=False)]
clean_briefs = clean_briefs.sort_values('lastUpdateDate')
clean_briefs = clean_briefs[~clean_briefs.projectRcn.duplicated(keep='last')]
clean_briefs.shape
clean_briefs.drop([
'rcn', 'language', 'lastUpdateDate', 'country', 'projectAcronym',
'programme', 'topics', 'relatedReportRcn'
], axis=1, inplace=True)
clean_briefs.rename({
'projectRcn': 'rcn',
'title': 'brief_title',
'relatedReportTitle': 'related_report_title',
'imageUri': 'image_path'
}, axis=1, inplace=True)
clean_briefs.head()
clean_projects_with_briefs = pd.merge(
clean_projects, clean_briefs, on='rcn', how='left', validate='1:1'
)
clean_projects_with_briefs.head()
```
## Checks
```
clean_organizations[clean_organizations.project_rcn == 101244]
clean_projects_with_briefs[clean_projects_with_briefs.rcn == 101244]
clean_organizations[clean_organizations.project_rcn == 99464]
clean_projects_with_briefs[clean_projects_with_briefs.rcn == 99464]
project_organizations = pd.merge(
clean_projects_with_briefs, clean_organizations,
left_on='rcn', right_on='project_rcn', validate='1:m')
project_organizations.drop(['project_rcn'], axis=1, inplace=True)
project_organizations.shape
project_organizations.head()
uk_contributions = project_organizations.groupby('rcn').aggregate({'contribution_eur': sum})
uk_contributions.reset_index(inplace=True)
uk_contributions.head()
project_uk_contributions = pd.merge(
clean_projects_with_briefs,
uk_contributions,
on='rcn', validate='1:1')
project_uk_contributions.head()
project_uk_contributions[project_uk_contributions.contribution_eur > project_uk_contributions.max_contribution_eur + 0.1].shape
project_organization_uk_contributions = pd.merge(
project_uk_contributions, clean_organizations,
left_on='rcn', right_on='project_rcn', validate='1:m'
)
project_organization_uk_contributions = pd.merge(
project_organization_uk_contributions, ukpostcodes, on='postcode', validate='m:1'
)
project_organization_uk_contributions.shape
project_organization_uk_contributions.head()
(project_uk_contributions.contribution_eur < 1000).value_counts()
```
### Add Numbers of Organisations and Countries
Add these back on and do a sanity check against the `participant_countries` field. They mostly match up, except for a few relatively small discrepancies.
```
clean_projects_with_briefs.shape
clean_projects_with_briefs = pd.merge(
clean_projects_with_briefs, project_num_organizations_and_countries,
left_on='rcn', right_on='projectRcn', validate='1:1')
clean_projects_with_briefs.drop('projectRcn', axis=1, inplace=True)
clean_projects_with_briefs.shape
clean_projects_with_briefs.head()
[
clean_projects_with_briefs.num_countries.isna().sum(),
clean_projects_with_briefs.coordinator_country.isna().sum(),
clean_projects_with_briefs.participant_countries.isna().sum()
]
def check_num_countries():
ccs = clean_projects_with_briefs.coordinator_country
pcs = clean_projects_with_briefs.participant_countries
ncs = clean_projects_with_briefs.num_countries
pcs_isna = pcs.isna()
coordinator_mismatch = clean_projects_with_briefs[pcs_isna][ncs[pcs_isna] != 1].copy()
coordinator_mismatch['check'] = 1
cs = ccs[~pcs_isna] + ';' + pcs[~pcs_isna]
check_ncs = cs.apply(lambda x: len(set(x.split(';'))))
participant_mismatch = clean_projects_with_briefs[~pcs_isna][ncs[~pcs_isna] != check_ncs].copy()
participant_mismatch['check'] = check_ncs
return pd.concat([coordinator_mismatch, participant_mismatch])\
[['rcn', 'coordinator_country', 'participant_countries', 'num_countries', 'check', 'num_organizations']]
check_num_countries()
all_organizations.country[all_organizations.projectRcn == 100467].unique()
all_organizations.country[all_organizations.projectRcn == 203681].unique()
all_organizations.country[all_organizations.projectRcn == 90982].unique()
```
I suspect a problem with the handling of `NA`: it is a valid country code (Namibia), but in some cases it may have been used to mean "not available".
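This ambiguity is easy to reproduce (toy CSV below): pandas' default NA list turns the literal string `NA` into missing on read, so whether the source files were parsed this way upstream would be worth checking.

```python
# Illustration of the suspected 'NA' ambiguity: by default, read_csv treats
# the literal string 'NA' as missing, erasing Namibia's country code.
import io
import pandas as pd

csv = "rcn,country\n1,NA\n2,DE\n"

default = pd.read_csv(io.StringIO(csv))
assert default.country.isna().sum() == 1  # 'NA' became NaN

# Disabling the default NA list (keeping only '' as missing) preserves it.
fixed = pd.read_csv(io.StringIO(csv), keep_default_na=False, na_values=[''])
assert fixed.country.tolist() == ['NA', 'DE']
```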
### Convert to GBP
```
eur_gbp = pd.read_pickle('../exchange_rates/output/exchange_rates.pkl.gz')
eur_gbp.tail()
def find_average_eur_gbp_rate(row):
# create timeseries from start to end
days = pd.date_range(row.start_date, row.end_date, closed='left')
daily = pd.DataFrame({
'month_start': days,
'weight': 1.0 / days.shape[0]
})
monthly = daily.resample('MS', on='month_start').sum()
monthly = pd.merge(monthly, eur_gbp, on='month_start', validate='1:1')
return (monthly.weight * monthly.rate).sum()
clean_projects_with_briefs['eur_gbp'] = \
clean_projects_with_briefs.apply(
find_average_eur_gbp_rate, axis=1, result_type='reduce')
clean_projects_with_briefs.head()
```
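A synthetic check of the duration-weighting idea in `find_average_eur_gbp_rate`: a project spanning 10 days of January and 11 days of February should get the day-count-weighted mean of the two monthly rates. The rates below are invented, and this toy uses `date_range`'s default inclusive endpoints rather than `closed='left'`.

```python
import pandas as pd

rates = pd.DataFrame({
    'month_start': pd.to_datetime(['2020-01-01', '2020-02-01']),
    'rate': [0.85, 0.87],
})

days = pd.date_range('2020-01-22', '2020-02-11')  # 21 days in total
daily = pd.DataFrame({'month_start': days, 'weight': 1.0 / len(days)})

# Aggregate daily weights to month starts, then join the monthly rates
monthly = daily.resample('MS', on='month_start').sum().reset_index()
merged = monthly.merge(rates, on='month_start')
avg = (merged.weight * merged.rate).sum()
print(round(avg, 4))  # 0.8605
```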
## Save Data
```
clean_projects_with_briefs.to_pickle('output/fp7_projects.pkl.gz')
clean_organizations.to_pickle('output/fp7_organizations.pkl.gz')
```
# Plotting
Figures to plot:
1. Merit order plots showing:
1. how emissions-intensive plant move down the merit order as the permit price increases (subplot a), and the net liability faced by different generators if dispatched (subplot b);
2. short-run marginal costs of generators under a REP scheme and a carbon tax.
2. Plot showing baselines that target average wholesale prices for different price targets over different permit price scenarios (subplot a). Plot showing scheme revenue arising from different baseline and permit price combinations (subplot b). Plot showing final average wholesale prices (subplot c).
3. BAU targeting baseline and scheme revenue over a range of permit prices
4. Average emissions intensity as a function of permit price
5. Average regional prices under a BAU average wholesale price targeting REP scheme (subplot a), and a carbon tax (subplot b) for different permit prices.
## Import packages
```
import os
import pickle
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.collections import PatchCollection
from matplotlib.ticker import AutoMinorLocator, MultipleLocator, FormatStrFormatter, FixedLocator, LinearLocator
from mpl_toolkits.axes_grid1 import make_axes_locatable
```
Set text options for plots
```
matplotlib.rcParams['font.family'] = ['sans-serif']
matplotlib.rcParams['font.sans-serif'] = ['Helvetica']
plt.rc('text', usetex=True)
```
## Declare paths to files
```
# Identifier used to update paths depending on the number of scenarios investigated
number_of_scenarios = '100_scenarios'
# Core data directory
data_dir = os.path.join(os.path.curdir, os.path.pardir, os.path.pardir, 'data')
# Operating scenario data
operating_scenarios_dir = os.path.join(os.path.curdir, os.path.pardir, '1_create_scenarios')
# Model output directory
parameter_selector_dir = os.path.join(os.path.curdir, os.path.pardir, '2_parameter_selector', 'output', number_of_scenarios)
# Processed results directory
processed_results_dir = os.path.join(os.path.curdir, os.path.pardir, '3_process_results', 'output', number_of_scenarios)
# Output directory
output_dir = os.path.join(os.path.curdir, 'output', number_of_scenarios)
```
## Import data
```
# Model parameters
# ----------------
# Generator data
with open(os.path.join(parameter_selector_dir, 'df_g.pickle'), 'rb') as f:
df_g = pickle.load(f)
# Node data
with open(os.path.join(parameter_selector_dir, 'df_n.pickle'), 'rb') as f:
df_n = pickle.load(f)
# Scenario data
with open(os.path.join(parameter_selector_dir, 'df_scenarios.pickle'), 'rb') as f:
df_scenarios = pickle.load(f)
# Processed results
# -----------------
# BAU average price
with open(os.path.join(processed_results_dir, 'mppdc_bau_average_price.pickle'), 'rb') as f:
mppdc_bau_average_price = pickle.load(f)
# Average system emissions intensities for different permit prices
with open(os.path.join(processed_results_dir, 'df_average_emissions_intensities.pickle'), 'rb') as f:
df_average_emissions_intensities = pickle.load(f)
# Price targeting baselines for different permit prices
with open(os.path.join(processed_results_dir, 'df_baseline_vs_permit_price.pickle'), 'rb') as f:
df_baseline_vs_permit_price = pickle.load(f)
# Scheme revenue corresponding to different price targets
with open(os.path.join(processed_results_dir, 'df_baseline_vs_revenue.pickle'), 'rb') as f:
df_baseline_vs_revenue = pickle.load(f)
# Average regional and national wholesale electricity prices under a REP scheme
with open(os.path.join(processed_results_dir, 'df_rep_average_prices.pickle'), 'rb') as f:
df_rep_average_prices = pickle.load(f)
# Average regional and national wholesale electricity prices under a carbon tax
with open(os.path.join(processed_results_dir, 'df_carbon_tax_average_prices.pickle'), 'rb') as f:
df_carbon_tax_average_prices = pickle.load(f)
```
Conversion factor used to format figure size.
```
# Millimeters to inches
mmi = 0.0393701
```
### Merit order plots
Represent generators as rectangles. The length of a rectangle corresponds to a generator's capacity relative to total installed capacity. Arrange these rectangles (generators) in order of increasing short-run marginal cost (SRMC), creating a merit order of generation. The colour and shade of each rectangle can be used to denote different generator properties, e.g. emissions intensity, SRMC, or net liability faced under an output-based rebating scheme. Repeat this procedure for different permit price scenarios. Note that different permit prices will change the relative costs of generators, shifting their positions in the merit order.
```
def plot_merit_order():
"Shows how merit order is affected w.r.t emissions intensities, SRMCs, and net liability under a REP scheme"
# Only consider fossil units
df_gp = df_g[df_g['FUEL_CAT']=='Fossil'].copy()
# Permit prices
permit_prices = range(2, 71, 2)
# Number of permit price scenarios (one row of rectangles each)
n = len(permit_prices)
# Gap as a fraction of rectangle height
gap_fraction = 1 / 10
# Rectangle height
rectangle_height = 1 / (gap_fraction * (n - 1) + n)
# Gap between rectangles
y_gap = rectangle_height * gap_fraction
# Initial y offset
y_offset = 0
# Container for rectangle patches
rectangles = []
# Container for colours corresponding to patches
colours_emissions_intensity = []
colours_net_liability = []
colours_srmc_rep = []
colours_srmc_carbon_tax = []
# Construct rectangles to plot for each permit price scenario
for permit_price in permit_prices:
# Baseline corresponding to BAU price targeting scenario
baseline = df_baseline_vs_permit_price.loc[permit_price, 1]
# Net liability faced by generator under REP scheme
df_gp['NET_LIABILITY'] = (df_gp['EMISSIONS'] - baseline) * permit_price
# Compute updated SRMC and sort from least cost to most expensive (merit order)
df_gp['SRMC_REP'] = df_gp['SRMC_2016-17'] + df_gp['NET_LIABILITY']
df_gp.sort_values('SRMC_REP', inplace=True)
# Carbon tax SRMCs (baseline = 0 for all permit price scenarios)
df_gp['SRMC_TAX'] = df_gp['SRMC_2016-17'] + (df_gp['EMISSIONS'] * permit_price)
# Normalising registered capacities
df_gp['REG_CAP_NORM'] = (df_gp['REG_CAP'] / df_gp['REG_CAP'].sum())
x_offset = 0
# Plotting rectangles
for index, row in df_gp.iterrows():
rectangles.append(patches.Rectangle((x_offset, y_offset), row['REG_CAP_NORM'], rectangle_height))
# Colour for emissions intensity plot
colours_emissions_intensity.append(row['EMISSIONS'])
# Colour for net liability under REP scheme for each generator
colours_net_liability.append(row['NET_LIABILITY'])
# Colour for net generator SRMCs under REP scheme
colours_srmc_rep.append(row['SRMC_REP'])
# Colour for SRMCs under carbon tax
colours_srmc_carbon_tax.append(row['SRMC_TAX'])
# Offset for placement of next rectangle
x_offset += row['REG_CAP_NORM']
y_offset += rectangle_height + y_gap
# Merit order emissions intensity patches
patches_emissions_intensity = PatchCollection(rectangles, cmap='Reds')
patches_emissions_intensity.set_array(np.array(colours_emissions_intensity))
# Net liability under REP scheme patches
patches_net_liability = PatchCollection(rectangles, cmap='bwr')
patches_net_liability.set_array(np.array(colours_net_liability))
# SRMCs under REP scheme patches
patches_srmc_rep = PatchCollection(rectangles, cmap='Reds')
patches_srmc_rep.set_array(np.array(colours_srmc_rep))
# SRMCs under carbon tax patches
patches_srmc_carbon_tax = PatchCollection(rectangles, cmap='Reds')
patches_srmc_carbon_tax.set_array(np.array(colours_srmc_carbon_tax))
# Format tick positions
# ---------------------
# y-ticks
# -------
# Minor ticks
yminorticks = []
for counter, permit_price in enumerate(permit_prices):
if counter == 0:
position = rectangle_height / 2
else:
position = yminorticks[-1] + y_gap + rectangle_height
yminorticks.append(position)
yminorlocator = FixedLocator(yminorticks)
# Major ticks
ymajorticks = []
for counter in range(0, 7):
if counter == 0:
position = (4.5 * rectangle_height) + (4 * y_gap)
else:
position = ymajorticks[-1] + (5 * rectangle_height) + (5 * y_gap)
ymajorticks.append(position)
ymajorlocator = FixedLocator(ymajorticks)
# x-ticks
# -------
# Minor locator
xminorlocator = LinearLocator(21)
# Major locator
xmajorlocator = LinearLocator(6)
# Emissions intensity and net liability figure
# --------------------------------------------
plt.clf()
# Initialise figure
fig1 = plt.figure()
# Axes on which to construct plots
ax1 = plt.axes([0.065, 0.185, 0.40, .79])
ax2 = plt.axes([0.57, 0.185, 0.40, .79])
# Add emissions intensity patches
ax1.add_collection(patches_emissions_intensity)
# Add net liability patches
patches_net_liability.set_clim([-35, 35])
ax2.add_collection(patches_net_liability)
# Add colour bars with labels
cbar1 = fig1.colorbar(patches_emissions_intensity, ax=ax1, pad=0.015, aspect=30)
cbar1.set_label('Emissions intensity (tCO${_2}$/MWh)', fontsize=8, fontname='Helvetica')
cbar2 = fig1.colorbar(patches_net_liability, ax=ax2, pad=0.015, aspect=30)
cbar2.set_label('Net liability (\$/MWh)', fontsize=8, fontname='Helvetica')
# Label axes
ax1.set_ylabel('Permit price (\$/tCO$_{2}$)', fontsize=9, fontname='Helvetica')
ax1.set_xlabel('Normalised cumulative capacity\n(a)', fontsize=9, fontname='Helvetica')
ax2.set_ylabel('Permit price (\$/tCO$_{2}$)', fontsize=9, fontname='Helvetica')
ax2.set_xlabel('Normalised cumulative capacity\n(b)', fontsize=9, fontname='Helvetica')
# Format ticks
# ------------
# y-axis
ax1.yaxis.set_minor_locator(yminorlocator)
ax1.yaxis.set_major_locator(ymajorlocator)
ax2.yaxis.set_minor_locator(yminorlocator)
ax2.yaxis.set_major_locator(ymajorlocator)
# y-tick labels
ax1.yaxis.set_ticklabels(['10', '20', '30', '40', '50', '60', '70'])
ax2.yaxis.set_ticklabels(['10', '20', '30', '40', '50', '60', '70'])
# x-axis
ax1.xaxis.set_minor_locator(xminorlocator)
ax1.xaxis.set_major_locator(xmajorlocator)
ax2.xaxis.set_minor_locator(xminorlocator)
ax2.xaxis.set_major_locator(xmajorlocator)
# Format figure size
width = 180 * mmi
height = 75 * mmi
fig1.set_size_inches(width, height)
# Save figure
fig1.savefig(os.path.join(output_dir, 'figures', 'emissions_liability_merit_order.pdf'))
# SRMCs under REP and carbon tax
# ------------------------------
# Initialise figure
fig2 = plt.figure()
# Axes on which to construct plots
ax3 = plt.axes([0.065, 0.185, 0.40, .79])
ax4 = plt.axes([0.57, 0.185, 0.40, .79])
# Add REP SRMCs
patches_srmc_rep.set_clim([25, 200])
ax3.add_collection(patches_srmc_rep)
# Add carbon tax SRMC patches
patches_srmc_carbon_tax.set_clim([25, 200])
ax4.add_collection(patches_srmc_carbon_tax)
# Add colour bars with labels
cbar3 = fig2.colorbar(patches_srmc_rep, ax=ax3, pad=0.015, aspect=30)
cbar3.set_label('SRMC (\$/MWh)', fontsize=8, fontname='Helvetica')
cbar4 = fig2.colorbar(patches_srmc_carbon_tax, ax=ax4, pad=0.015, aspect=30)
cbar4.set_label('SRMC (\$/MWh)', fontsize=8, fontname='Helvetica')
# Label axes
ax3.set_ylabel('Permit price (\$/tCO$_{2}$)', fontsize=9, fontname='Helvetica')
ax3.set_xlabel('Normalised cumulative capacity\n(a)', fontsize=9, fontname='Helvetica')
ax4.set_ylabel('Permit price (\$/tCO$_{2}$)', fontsize=9, fontname='Helvetica')
ax4.set_xlabel('Normalised cumulative capacity\n(b)', fontsize=9, fontname='Helvetica')
# Format ticks
# ------------
# y-axis
ax3.yaxis.set_minor_locator(yminorlocator)
ax3.yaxis.set_major_locator(ymajorlocator)
ax4.yaxis.set_minor_locator(yminorlocator)
ax4.yaxis.set_major_locator(ymajorlocator)
# y-tick labels
ax3.yaxis.set_ticklabels(['10', '20', '30', '40', '50', '60', '70'])
ax4.yaxis.set_ticklabels(['10', '20', '30', '40', '50', '60', '70'])
# x-axis
ax3.xaxis.set_minor_locator(xminorlocator)
ax3.xaxis.set_major_locator(xmajorlocator)
ax4.xaxis.set_minor_locator(xminorlocator)
ax4.xaxis.set_major_locator(xmajorlocator)
# Format figure size
width = 180 * mmi
height = 75 * mmi
fig2.set_size_inches(width, height)
# Save figure
fig2.savefig(os.path.join(output_dir, 'figures', 'srmc_merit_order.pdf'))
plt.show()
# Create figure
plot_merit_order()
```
### Price targeting baselines and corresponding scheme revenue
Plot emissions intensity baselines, scheme revenue, and average wholesale price outcomes for each permit price and wholesale price targeting scenario.
```
def plot_price_targeting_baselines_and_scheme_revenue():
"Plot baselines that target given wholesale prices and scheme revenue that corresponds to these scenarios"
# Initialise figure
plt.clf()
fig = plt.figure()
# Axes on which to construct plots
# ax1 = plt.axes([0.08, 0.175, 0.41, 0.77])
# ax2 = plt.axes([0.585, 0.175, 0.41, 0.77])
ax1 = plt.axes([0.07, 0.21, 0.25, 0.72])
ax2 = plt.axes([0.40, 0.21, 0.25, 0.72])
ax3 = plt.axes([0.74, 0.21, 0.25, 0.72])
# Price targets
price_target_colours = {0.8: '#b50e43', 0.9: '#af92cc', 1: '#45a564', 1.1: '#a59845', 1.2: '#f27b2b'}
# Price targeting baselines
# -------------------------
for col in df_baseline_vs_permit_price.columns:
ax1.plot(df_baseline_vs_permit_price[col], '-x', markersize=1.5, linewidth=0.9, label=col, color=price_target_colours[col])
# Label axes
ax1.set_ylabel('Emissions intensity baseline\nrelative to BAU', fontsize=9)
ax1.set_xlabel('Permit price (\$/tCO${_2}$)\n(a)', fontsize=9)
# Format ticks
ax1.xaxis.set_major_locator(MultipleLocator(10))
ax1.xaxis.set_minor_locator(MultipleLocator(2))
ax1.yaxis.set_minor_locator(MultipleLocator(0.1))
# Scheme revenue
# --------------
for col in df_baseline_vs_revenue.columns:
ax2.plot(df_baseline_vs_revenue[col], '-x', markersize=1.5, linewidth=0.9, label=col, color=price_target_colours[col])
# Label axes
ax2.set_xlabel('Permit price (\$/tCO${_2}$)\n(b)', fontsize=9)
ax2.set_ylabel('Scheme revenue (\$/h)', labelpad=0, fontsize=9)
# Format axes
ax2.ticklabel_format(axis='y', useMathText=True, style='sci', scilimits=(1, 5))
ax2.xaxis.set_major_locator(MultipleLocator(10))
ax2.xaxis.set_minor_locator(MultipleLocator(2))
ax2.yaxis.set_minor_locator(MultipleLocator(20000))
# Average prices
# --------------
# Final average price under different REP scenarios
df_final_prices = df_rep_average_prices.reset_index().pivot(index='FIXED_TAU', columns='TARGET_PRICE_BAU_MULTIPLE', values='NATIONAL').div(mppdc_bau_average_price)
for col in df_final_prices.columns:
ax3.plot(df_final_prices[col], '-x', markersize=1.5, linewidth=0.9, label=col, color=price_target_colours[col])
# Label axes
ax3.set_xlabel('Permit price (\$/tCO${_2}$)\n(c)', fontsize=9)
ax3.set_ylabel('Average price\nrelative to BAU', labelpad=0, fontsize=9)
# Format ticks
ax3.xaxis.set_major_locator(MultipleLocator(10))
ax3.xaxis.set_minor_locator(MultipleLocator(2))
ax3.yaxis.set_minor_locator(MultipleLocator(0.02))
# Create legend
legend = ax2.legend(title='Price target\nrelative to BAU', ncol=1, loc='upper center', bbox_to_anchor=(-0.61, 1.01), fontsize=9)
legend.get_title().set_fontsize('9')
# Format figure size
fig = ax2.get_figure()
width = 180 * mmi
height = 70 * mmi
fig.set_size_inches(width, height)
# Save figure
fig.savefig(os.path.join(output_dir, 'figures', 'baseline_revenue_price_subplot.pdf'))
plt.show()
# Create plot
plot_price_targeting_baselines_and_scheme_revenue()
```
Plot the BAU price targeting scenario. Overlay scheme revenue.
```
def plot_bau_price_target_and_baseline():
"Plot baseline that targets BAU prices and scheme revenue on same figure"
# Initialise figure
plt.clf()
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
# Plot emission intensity baseline and scheme revenue
df_bau_baseline_revenue = df_baseline_vs_permit_price[1].to_frame().rename(columns={1: 'baseline'}).join(df_baseline_vs_revenue[1].to_frame().rename(columns={1: 'revenue'}), how='left')
df_bau_baseline_revenue['baseline'].plot(ax=ax1, color='#dd4949', markersize=1.5, linewidth=1, marker='o', linestyle='-')
df_bau_baseline_revenue['revenue'].plot(ax=ax2, color='#4a63e0', markersize=1.5, linewidth=1, marker='o', linestyle='-')
# Format axes labels
ax1.set_xlabel('Permit price (\$/tCO$_{2}$)', fontsize=9)
ax1.set_ylabel('Emissions intensity baseline\nrelative to BAU', fontsize=9)
ax2.set_ylabel('Scheme revenue (\$/h)', fontsize=9)
# Format ticks
ax1.minorticks_on()
ax2.minorticks_on()
ax2.xaxis.set_major_locator(MultipleLocator(10))
ax2.xaxis.set_minor_locator(MultipleLocator(2))
ax2.ticklabel_format(axis='y', useMathText=True, style='sci', scilimits=(0, 1))
# Format legend
h1, l1 = ax1.get_legend_handles_labels()
h2, l2 = ax2.get_legend_handles_labels()
l1 = ['Baseline']
l2 = ['Revenue']
ax1.legend(h1+h2, l1+l2, loc=0, bbox_to_anchor=(0.405, .25), fontsize=8)
# Format figure size
width = 85 * mmi
height = 65 * mmi
fig.subplots_adjust(left=0.22, bottom=0.16, right=0.8, top=.93)
fig.set_size_inches(width, height)
# Save figure
fig.savefig(os.path.join(output_dir, 'figures', 'bau_price_target_baseline_and_revenue.pdf'))
plt.show()
# Create figure
plot_bau_price_target_and_baseline()
```
### System emissions intensity as a function of permit price
Plot system emissions intensity as a function of permit price.
```
def plot_permit_price_vs_emissions_intensity():
"Plot average emissions intensity as a function of permit price"
# Initialise figure
plt.clf()
fig, ax = plt.subplots()
# Plot figure
df_average_emissions_intensities.div(df_average_emissions_intensities.iloc[0]).plot(linestyle='-', marker='x', markersize=2, linewidth=0.8, color='#c11111', ax=ax)
# Format axis labels
ax.set_xlabel('Permit price (\$/tCO${_2}$)', fontsize=9)
ax.set_ylabel('Emissions intensity\nrelative to BAU', fontsize=9)
# Format ticks
ax.xaxis.set_major_locator(MultipleLocator(10))
ax.xaxis.set_minor_locator(MultipleLocator(2))
ax.yaxis.set_minor_locator(MultipleLocator(0.005))
# Format figure size
width = 85 * mmi
height = width / 1.2
fig.subplots_adjust(left=0.2, bottom=0.14, right=.98, top=0.98)
fig.set_size_inches(width, height)
# Save figure
fig.savefig(os.path.join(output_dir, 'figures', 'permit_price_vs_emissions_intensity_normalised.pdf'))
plt.show()
# Create figure
plot_permit_price_vs_emissions_intensity()
```
### Regional prices under REP scheme and carbon tax
Show regional impacts of the policy and compare with a carbon tax, where rebates are not given to generators.
```
def plot_regional_prices_rep_and_tax():
"Plot average regional prices under REP and carbon tax scenarios"
# Initialise figure
plt.clf()
fig = plt.figure()
ax1 = plt.axes([0.068, 0.18, 0.41, 0.8])
ax2 = plt.axes([0.58, 0.18, 0.41, 0.8])
# Regional prices under REP scheme
df_rep_prices = df_rep_average_prices.loc[(slice(None), 1), :]
df_rep_prices.index = df_rep_prices.index.droplevel(1)
df_rep_prices.drop('NATIONAL', axis=1).plot(marker='o', linestyle='-', markersize=1.5, linewidth=1, cmap='tab10', ax=ax1)
# Format labels
ax1.set_xlabel('Permit price (\$/tCO${_2}$)\n(a)', fontsize=9)
ax1.set_ylabel('Average wholesale price (\$/MWh)', fontsize=9)
# Format axes
ax1.minorticks_on()
ax1.xaxis.set_minor_locator(MultipleLocator(2))
ax1.xaxis.set_major_locator(MultipleLocator(10))
# Add legend
legend1 = ax1.legend()
legend1.remove()
# Plot prices under a carbon tax (baseline=0)
df_carbon_tax_prices = df_carbon_tax_average_prices.copy()
df_carbon_tax_prices.index = df_carbon_tax_prices.index.droplevel(1)
# Rename columns - remove '1' at end of NEM region name
new_column_names = {i: i.split('_')[-1].replace('1','') for i in df_carbon_tax_prices.columns}
df_carbon_tax_prices = df_carbon_tax_prices.rename(columns=new_column_names)
df_carbon_tax_prices.drop('NATIONAL', axis=1).plot(marker='o', linestyle='-', markersize=1.5, linewidth=1, cmap='tab10', ax=ax2)
# Format axes labels
ax2.set_ylabel('Average wholesale price (\$/MWh)', fontsize=9)
ax2.set_xlabel('Permit price (\$/tCO${_2}$)\n(b)', fontsize=9)
# Format ticks
ax2.minorticks_on()
ax2.xaxis.set_minor_locator(MultipleLocator(2))
ax2.xaxis.set_major_locator(MultipleLocator(10))
# Create legend
legend2 = ax2.legend(ncol=2, loc='upper center', bbox_to_anchor=(0.708, 0.28), fontsize=9)
# Format figure size
width = 180 * mmi
height = 80 * mmi
fig.set_size_inches(width, height)
# Save figure
fig.savefig(os.path.join(output_dir, 'figures', 'regional_wholesale_prices.pdf'))
plt.show()
# Create figure
plot_regional_prices_rep_and_tax()
```
# Dataset downloaded from Kaggle: https://www.kaggle.com/jessemostipak/hotel-booking-demand
Three questions that may help the hotel improve its business by reducing the cancellation rate:
1. What is the cancellation rate over the years for different hotel categories?
2. Are we able to predict booking cancellation?
3. For those bookings at risk of cancellation, what can the hotel do to mitigate the risk?
```
# import the needed library to load the hotel booking data downloaded from Kaggle.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
df = pd.read_csv('./Dataset/hotel_bookings.csv')
df.head()
```
# Basic Data Analysis
First, let's look at the data set to check for missing values and any further data processing required.
```
# Look at the data summary - seems OK
df.describe()
# Now look at which columns have missing values
dfMissingValue = pd.DataFrame(df.isnull().mean(), columns=['MissingMean'])
dfMissingValue[dfMissingValue.MissingMean >0]
# There are 112593 records (~95% of the data set) missing the company value.
dfNullAgComp = df[(df.company.isnull())]
dfNullAgComp.shape
# There are 16340 records (~14% of the data set) missing the agent value.
dfNullAgAgent = df[(df.agent.isnull())]
dfNullAgAgent.shape
# There are 9760 records (~8% of the data set) missing both the company and agent values.
dfNullAgComp = df[(df.company.isnull()) & (df.agent.isnull())]
dfNullAgComp.shape
#Evaluate what are the unique values for both Company and Agent
df.company.sort_values(ascending=True).unique()
df.agent.sort_values(ascending=True).unique()
# Next let's review column of Child
dfNullChildren = df[(df.children.isnull())]
dfNullChildren.shape
dfNullChildren
# Next let's review column of Country
dfNullCountry = df[(df.country.isnull())]
dfNullCountry.shape
# check what are the unique values for country
df.country.sort_values(ascending=True).unique()
# analyze is there any pattern of data combinations when country is NaN. Nothing obvious observed based on the data summary
dfNullCountry.describe()
# With the above, it is safe to replace NaN values with:
# i. 0 for the Company and Agent columns, to represent that the respective booking was not filed by either a Company or an Agent
# ii. 0 for Children
# iii. 'OTH' for Country
dfProcess = df
dfProcess.company.fillna(value=0, inplace=True)
dfProcess.agent.fillna(value=0, inplace=True)
dfProcess.children.fillna(value=0, inplace=True)
dfProcess.country.fillna(value='OTH', inplace=True)
dfMissingValue = pd.DataFrame(dfProcess.isnull().mean(), columns=['MissingMean'])
dfMissingValue[dfMissingValue.MissingMean >0]
```
# Now let's start to answer the 3 business questions posted earlier:
# Question 1. What is the cancellation rate over the years for different hotel categories?
```
# First, let's analyze the cancellation rate over the years for different hotel types.
HotelCancellationTransactions = pd.DataFrame(df.groupby(['reservation_status','arrival_date_year','hotel']).count()['arrival_date_week_number'].unstack())
HotelCancellationTransactions
HotelCancellationTransactions.plot(kind='bar', stacked=True)
dfHotelRSBreakdown = pd.DataFrame(columns = ['Hotel', 'Year', 'PcCanceled', 'PcCheckOut', 'PcNoShow'])
for hotelType in df['hotel'].unique():
    for rsYear in df.loc[df['hotel']==hotelType]['arrival_date_year'].unique():
        dfSubset = df.loc[(df['hotel']==hotelType) & (df['arrival_date_year']==rsYear)]
        TotalTrans = dfSubset.shape[0]
        PcCanceled = dfSubset.loc[dfSubset['reservation_status']=='Canceled'].shape[0]
        PcCheckOut = dfSubset.loc[dfSubset['reservation_status']=='Check-Out'].shape[0]
        PcNoShow = dfSubset.loc[dfSubset['reservation_status']=='No-Show'].shape[0]
        # DataFrame.append was removed in pandas 2.0; build a one-row frame and concat instead
        newRow = pd.DataFrame([[hotelType, rsYear, round(PcCanceled/TotalTrans, 2), round(PcCheckOut/TotalTrans, 2), round(PcNoShow/TotalTrans, 2)]], columns=dfHotelRSBreakdown.columns)
        dfHotelRSBreakdown = pd.concat([dfHotelRSBreakdown, newRow], ignore_index=True)
dfHotelRSBreakdown
```
# Analysis - Cancellation rate over the years for different hotel categories:
1. The City Hotel had a higher total number of transactions than the Resort Hotel across 2015-2017.
2. At the same time, the City Hotel had a higher number of cancellations as well as a higher cancellation rate than the Resort Hotel across the 3-year period.
3. The cancellation rate for the City Hotel decreased slightly in 2016 compared to 2015 but increased in 2017. On the other hand, the cancellation rate for the Resort Hotel increased over the years.
4. To answer the subsequent questions, we will focus only on the City Hotel using the Year 2017 data set.
# Question 2. Are we able to predict booking cancellation? (Targeted: City Hotel)
```
# Let's analyze cancellation data for City Hotel in Year 2017
dfCity2017 = dfProcess[(dfProcess.hotel =='City Hotel') & (dfProcess.arrival_date_year ==2017)]
dfCity2017.shape
dfCity2017[dfCity2017.reservation_status == 'No-Show'].deposit_type.unique()
```
# 2.1 Analysis by Country
```
# Define function to plot pie chart
def PiePlot(dfValues, dfLabels):
    # pie plot
    plt.pie(dfValues, labels=dfLabels, autopct='%1.1f%%', shadow=True)
    plt.show()
# Country distribution for all transactions
dfCountry = pd.DataFrame(dfCity2017.groupby(['country']).count()['hotel'])
dfCountry.rename(columns={'hotel':'Number of Bookings'}, inplace=True)
dfCountry.sort_values(by='Number of Bookings', ascending=False, inplace=True)
#dfCountry.head()
dfCountry["Booking Rate %"] = round(dfCountry["Number of Bookings"]/dfCity2017.shape[0],3)
dfCountry["Country"] = dfCountry.index
#dfCountry["Booking Rate %"].sum()
# pie plot
PiePlot(dfCountry["Number of Bookings"], dfCountry["Country"])
# Next lets see the cancellation by country
dfCancelCountry = pd.DataFrame(dfCity2017[dfCity2017.is_canceled ==1].groupby(['country']).count()['hotel'])
dfCancelCountry.rename(columns={'hotel':'Number of Cancel Bookings'}, inplace=True)
dfCancelCountry.sort_values(by='Number of Cancel Bookings', ascending=False, inplace=True)
dfCancelCountry["Booking Cancel Rate %"] = round(dfCancelCountry["Number of Cancel Bookings"]/dfCity2017.shape[0],3)
dfCancelCountry["Country Cancel"] = dfCancelCountry.index
dfCancelCountry["Booking Cancel Rate %"].sum()
# pie plot
PiePlot(dfCancelCountry["Number of Cancel Bookings"], dfCancelCountry["Country Cancel"])
```
# Analysis by Country Summary:
1. The hotels seem to be located in Europe, as the top countries by number of total and cancelled transactions are from the Europe region. Further research over the internet confirms that the hotels are located in Portugal.
2. Local guests from Portugal have the highest cancellation rate, ~48% of total transactions.
3. We may want to consider festive seasons & holidays in Europe in general, and in Portugal specifically, for the subsequent data analysis.
# 2.2 Analysis by Month
```
months = ["January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December"]
```
# 2.2.1 Let's first look at City Hotel monthly cancellation trend for 2015-2017
```
dfCompare = pd.DataFrame()
for yr in [2016, 2015, 2017]:
    dfHotel = dfProcess[(dfProcess.hotel =='City Hotel') & (dfProcess.arrival_date_year == yr)]
    # Monthly distribution for cancelled transactions
    dfCancelTrnx = pd.DataFrame(dfHotel[dfHotel.is_canceled ==1].groupby(['arrival_date_month']).count()['hotel'])
    colCancelYear = 'Cancel{}'.format(yr)
    dfCancelTrnx.rename(columns={'hotel':colCancelYear}, inplace=True)
    dfCancelTrnx["Month"] = pd.Categorical(dfCancelTrnx.index, categories=months, ordered=True)
    dfCancelTrnx.sort_values('Month', inplace=True)
    dfCancelTrnx.reindex()
    if dfCompare.shape[0] == 0:
        dfCompare = dfCancelTrnx
    else:
        dfCompare[colCancelYear] = dfCancelTrnx[colCancelYear]
dfCompare[['Cancel2015','Cancel2016','Cancel2017']].plot(kind='line', figsize=[15,7])
```
# Analysis - City Hotel monthly cancellation for 2015-2017
1. We want to understand whether there are any seasonal elements or feature changes over the years that we need to consider when predicting the City Hotel's booking cancellations.
2. From the chart above,
i. there are significant differences in cancellation counts and trend from Sept-Dec between 2015 & 2016, as well as Jan-Feb and Apr-Aug between 2016 & 2017.
ii. Similar cancellation patterns are observed from July-Sept for 2015 & 2016. The same goes for Feb-Apr for 2016 & 2017.
iii. We may want to investigate special events or festive seasons/holidays for the months with significant changes in cancellation trend as well as high transaction volumes.
# 2.2.2 Next let's look at City Hotel monthly bookings for 2017
```
# Monthly distribution for all transactions
dfMonth = pd.DataFrame(dfCity2017.groupby(['arrival_date_month']).count()['hotel'])
dfMonth.rename(columns={'hotel':'Number of Bookings'}, inplace=True)
dfMonth['Month'] = pd.Categorical(dfMonth.index, categories=months, ordered=True)
dfMonth.sort_values('Month', inplace=True)
dfMonth.reindex()
# Merge with Number of Cancel Bookings 2017 calculated earlier
dfMonth['Number of Cancel Bookings'] = dfCancelTrnx.Cancel2017
dfMonth["Cancel Bookings %"] = round(dfMonth["Number of Cancel Bookings"]/dfMonth["Number of Bookings"],3)
dfMonth.sort_values('Month', inplace=True)
# create figure and axis objects with subplots()
fig,ax = plt.subplots(1,2, figsize=(15,5))
# First Plot the graph of total books vs cancel bookings to see any similar trend
dfMonth[['Number of Bookings','Number of Cancel Bookings']].plot(kind='line', ax=ax[0])
dfMonth[['Cancel Bookings %']].plot(kind='line', ax=ax[1])
ax2=ax[1].twinx()
dfMonth[['Number of Cancel Bookings']].plot(kind='line', ax=ax2, color="red")
dfMonth
```
# Analysis - City Hotel monthly bookings for 2017
1. Similar trends for the number of total bookings vs the number of cancelled bookings throughout the various months.
2. The cancellation rate is relatively high, ranging from ~36%-49%. It is higher in the months of Apr-June.
# 2.3 Feature Extractions
# 2.3.1 Feature Correlations
```
cancel_corr = dfCity2017.corr()["is_canceled"]
cancel_corr.abs().sort_values(ascending=False)[1:]
```
# Analysis - Feature Correlations
1. Only numerical features are taken into account in the correlation analysis above.
2. It shows that total_of_special_requests, lead_time, booking_changes, required_car_parking_spaces & is_repeated_guest are the top 5 features most highly correlated with is_canceled.
3. However, based on the previous analysis,
i. the cancellation rate is higher when total bookings increase, with April to June the highest of all. This does not show up in the feature correlations above because the arrival month is categorical.
ii. Country is another categorical feature to consider when predicting cancellation (e.g. bookings coming from "PRT"); we will evaluate it further in a later part of this analysis.
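Because `.corr()` ignores non-numeric columns, a quick way to gauge a categorical feature's association with cancellation is the per-category cancellation rate. A minimal sketch on a hypothetical mini-frame (the column names mirror the real dataset; the rows are made up):

```python
import pandas as pd

# Hypothetical mini-frame standing in for dfCity2017 (illustration only)
df = pd.DataFrame({
    "arrival_date_month": ["April", "April", "May", "July", "July", "July"],
    "is_canceled":        [1,       1,       0,     0,      1,      0],
})

# Mean of the 0/1 target per category = cancellation rate per category,
# the categorical analogue of a numerical correlation check
rate = df.groupby("arrival_date_month")["is_canceled"].mean()
print(rate.sort_values(ascending=False))
```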
# 2.3.2 Feature Extractions & Modelling
```
# Let's manually select which features to include and exclude for the model prediction
# Features to exclude:
# 1. arrival_date_year & hotel (since we only focus on the City Hotel based on transactions in 2017)
# 2. reservation_status - since it is equivalent to is_canceled, which we want to predict
# 3. reservation_status_date - since the date-part information is already captured in the arrival_* features
numericalFeatures = ["lead_time","arrival_date_week_number","arrival_date_day_of_month",
"stays_in_weekend_nights","stays_in_week_nights","adults","children",
"babies","is_repeated_guest", "previous_cancellations",
"previous_bookings_not_canceled","booking_changes","agent","company","days_in_waiting_list",
"adr","required_car_parking_spaces","total_of_special_requests"]
categoricalFeatures = ["arrival_date_month","meal","market_segment",
"distribution_channel","reserved_room_type","assigned_room_type","deposit_type","customer_type"]
# Split Features & Prediction Label
rawFeatures = numericalFeatures + categoricalFeatures
X = dfCity2017.drop(["is_canceled"], axis=1)[rawFeatures]
y = dfCity2017["is_canceled"]
# One-hot encoding for categorical data
X = pd.get_dummies(X)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Import the classifier from sklearn
#from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
# Train the Model
#model = DecisionTreeClassifier()
model = RandomForestClassifier()
model.fit(X_train, y_train)
# Making predictions
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
```
# 2.3.3 Feature Importance Evaluation
```
featureImportances = model.feature_importances_
std = np.std([tree.feature_importances_ for tree in model.estimators_],
axis=0)
indices = np.argsort(featureImportances)[::-1]
# Print the feature ranking
print("Feature ranking:")
zipped = zip(X_train.columns, featureImportances)
zipped = sorted(zipped, key = lambda t: t[1], reverse=True)
for feat, importance in zipped:
    print('feature: {f}, importance: {i}'.format(f=feat, i=importance))
# Plot the impurity-based feature importances of the forest
plt.figure(figsize=(20,7))
plt.title("Feature importances")
plt.bar(range(X_train.shape[1]), featureImportances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X_train.shape[1]), indices)
plt.xlim([-1, X_train.shape[1]])
plt.show()
```
# 2.3.4 Model Accuracy Precision & Recall
```
# Calculate the accuracy
from sklearn.metrics import accuracy_score
train_accuracy = accuracy_score(y_train, y_train_pred)
test_accuracy = accuracy_score(y_test, y_test_pred)
print("\nModel Scoring:")
print('The training accuracy is', train_accuracy)
print('The test accuracy is', test_accuracy)
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score
print("F1 Score: ", f1_score(y_test, y_test_pred, average="macro"))
print("Precision Score: ", precision_score(y_test, y_test_pred, average="macro"))
print("Recall: ", recall_score(y_test, y_test_pred, average="macro"))
from sklearn.metrics import plot_confusion_matrix
print("\nConfusion Matrix:")
predictLabels = ['Not Cancelled', 'Cancelled']
plot_confusion_matrix(model, X_test, y_test, cmap=plt.cm.Blues, display_labels=predictLabels)
plt.show()
```
# Analysis : Are we able to predict booking cancellation? (Targeted: City Hotel)
1. Based on the City Hotel Year 2017 dataset, we used a RandomForestClassifier model to predict whether a booking will be cancelled.
2. The model correctly predicts about 82% of the historical bookings as cancelled or not.
3. Below are the top 13 features with the highest impact on the booking cancellation probability:
lead_time, deposit_type, adr, arrival_date_day_of_month, arrival_date_week_number, total_of_special_requests,
stays_in_week_nights, agent, stays_in_weekend_nights, market_segment_Groups, booking_changes, adults,
customer_type
4. With the above, the hotel can use the information gained to form a targeted plan to reduce the booking cancellation rate as well as improve the hotel business.
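Since `get_dummies` splits each categorical column into many indicator columns, the forest's importances are scattered across dummies. A sketch of folding them back onto their source feature (the importance values below are hypothetical, for illustration only):

```python
import pandas as pd

def aggregate_importances(importances, categorical_features):
    """Sum one-hot dummy importances back onto their source categorical column.

    `importances` maps post-get_dummies column names to importance scores;
    numerical columns pass through unchanged.
    """
    agg = {}
    for col, imp in importances.items():
        # a dummy column is named "<source>_<level>"; match on that prefix
        src = next((c for c in categorical_features if col.startswith(c + "_")), col)
        agg[src] = agg.get(src, 0.0) + imp
    return pd.Series(agg).sort_values(ascending=False)

# Hypothetical importances for illustration only
imps = {"lead_time": 0.12, "deposit_type_Non Refund": 0.05,
        "deposit_type_No Deposit": 0.02, "market_segment_Groups": 0.03}
print(aggregate_importances(imps, ["deposit_type", "market_segment"]))
```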
# Question 3. For those bookings at risk of cancellation, what can the hotel do to mitigate the risk?
# 3.1 - Let's analyze the data distribution of the top 6 features when a booking is cancelled.
```
# Create a subset of data for cancellation data in 2017 for City Hotel
dfCity2017Cancel = dfCity2017[(dfCity2017.is_canceled == 1)]
#Feature #1 - Lead Time - Number of days that elapsed between the entering date of the booking into the PMS and the arrival date
dfCity2017Cancel.groupby(['lead_time']).count()['hotel'].plot(kind='line', figsize=[30,7])
#Feature #2 - Deposit Type
# - No Deposit – no deposit was made;
# - Non Refund – a deposit was made in the value of the total stay cost;
# - Refundable – a deposit was made with a value under the total cost of stay.
dfCity2017Cancel.groupby(['deposit_type']).count()['hotel'].plot(kind='bar')
dfCity2017Cancel.groupby(['deposit_type']).count()['hotel']
#Feature #4 & #5 - Month & week of arrival date. We can analyze by month only and skip the week, since weeks fall within months.
# We can refer to early same analysis done on 2.2.2
#Feature #6 - Total of special requests - Number of special requests made by the customer (e.g. twin bed or high floor)
dfCity2017Cancel.groupby(['total_of_special_requests']).count()['hotel'].plot(kind='bar')
dfCity2017Cancel.groupby(['total_of_special_requests']).count()['hotel']
```
# Analysis : For those bookings at risk of cancellation, what can the hotel do to mitigate the risk?
1. Based on the subset of important features selected for analysis, the cancellation rate is highest when the lead time is < 100 days, followed by the 100-200 day range, and it starts to decrease after 200 days.
2. For bookings without a deposit, the cancellation rate is twice that of Non Refund bookings.
3. Spring (Mar-June) is the busiest period in Europe for the City Hotel; those months also have a ~40-50% cancellation rate.
4. The majority of the cancelled bookings have minimal to zero special requests.
5. The analysis above does not yet give comprehensive enough information to understand solidly and collectively what else can be done to mitigate the booking cancellation risk effectively. Hence, it is highly recommended to dig deeper into other features and more data sets to gain more insights.
6. That being said, below are a few recommendations to consider:
i. During the months of Mar-Jul, for bookings with a lead time < 100 days,
- send greeting emails to customers to remind them of their upcoming hotel stay
- offer customers at certain loyalty tiers a higher-floor stay
- provide an additional room-rate discount if customers are willing to switch to the Deposit - Non Refundable option.
```
import numpy as np
from numpy import ones
from numpy_sugar import ddot
import os
import sys
import pandas as pd
from pandas_plink import read_plink1_bin
from numpy.linalg import cholesky
from numpy_sugar.linalg import economic_svd
import xarray as xr
from struct_lmm2 import StructLMM2
from limix.qc import quantile_gaussianize
# in the actual script this will be provided as an argument
chrom = 22
input_files_dir = "/hps/nobackup/stegle/users/acuomo/all_scripts/struct_LMM2/sc_endodiff/new/input_files/"
## this file maps cells to donors; it also includes only donors for whom we have single-cell data (a subset of all HipSci donors)
sample_mapping_file = input_files_dir+"sample_mapping_file.csv"
sample_mapping = pd.read_csv(sample_mapping_file, dtype={"genotype_individual_id": str, "phenotype_sample_id": str})
sample_mapping.head()
## extract unique individuals
donors = sample_mapping["genotype_individual_id"].unique()
donors.sort()
print("Number of unique donors: {}".format(len(donors)))
## read in genotype file (plink format)
plink_file = "/hps/nobackup/hipsci/scratch/genotypes/imputed/2017-03-27/Full_Filtered_SNPs_Plink/hipsci.wec.gtarray.HumanCoreExome.imputed_phased.20170327.genotypes.norm.renamed.bed"
G = read_plink1_bin(plink_file)
## read in GRM kinship matrix
kinship_file = "/hps/nobackup/hipsci/scratch/genotypes/imputed/2017-03-27/Full_Filtered_SNPs_Plink-F/hipsci.wec.gtarray.HumanCoreExome.imputed_phased.20170327.genotypes.norm.renamed.kinship"
K = pd.read_csv(kinship_file, sep="\t", index_col=0)
assert all(K.columns == K.index)
K = xr.DataArray(K.values, dims=["sample_0", "sample_1"], coords={"sample_0": K.columns, "sample_1": K.index})
K = K.sortby("sample_0").sortby("sample_1")
donors = sorted(set(list(K.sample_0.values)).intersection(donors))
print("Number of donors after kinship intersection: {}".format(len(donors)))
## subset to relevant donors (from sample mapping file)
K = K.sel(sample_0=donors, sample_1=donors)
assert all(K.sample_0 == donors)
assert all(K.sample_1 == donors)
## and decompose such that K = L @ L.T
L_kinship = cholesky(K.values)
L_kinship = xr.DataArray(L_kinship, dims=["sample", "col"], coords={"sample": K.sample_0.values})
assert all(L_kinship.sample.values == K.sample_0.values)
del K
# number of samples (cells)
print("Sample mapping number of rows BEFORE intersection: {}".format(sample_mapping.shape[0]))
sample_mapping = sample_mapping[sample_mapping["genotype_individual_id"].isin(donors)]
print("Sample mapping number of rows AFTER intersection: {}".format(sample_mapping.shape[0]))
# expand from donors to cells
L_expanded = L_kinship.sel(sample=sample_mapping["genotype_individual_id"].values)
assert all(L_expanded.sample.values == sample_mapping["genotype_individual_id"].values)
# environments
# cells by PCs (20)
E_file = input_files_dir+"20PCs.csv"
E = pd.read_csv(E_file, index_col = 0)
E = xr.DataArray(E.values, dims=["cell", "pc"], coords={"cell": E.index.values, "pc": E.columns.values})
E = E.sel(cell=sample_mapping["phenotype_sample_id"].values)
assert all(E.cell.values == sample_mapping["phenotype_sample_id"].values)
# subselect to only SNPs on right chromosome
G_sel = G.where(G.chrom == str(chrom), drop=True)
G_sel
# select down to relevant individuals
G_exp = G_sel.sel(sample=sample_mapping["genotype_individual_id"].values)
assert all(L_expanded.sample.values == G_exp.sample.values)
# get eigendecomposition of EEt
[U, S, _] = economic_svd(E)
us = U * S
# get decomposition of K*EEt
Ls = [ddot(us[:,i], L_expanded) for i in range(us.shape[1])]
Ls[1].shape
# Phenotype (genes X cells)
phenotype_file = input_files_dir+"phenotype.csv.pkl"
phenotype = pd.read_pickle(phenotype_file)
phenotype.head()
print("Phenotype shape BEFORE selection: {}".format(phenotype.shape))
phenotype = xr.DataArray(phenotype.values, dims=["trait", "cell"], coords={"trait": phenotype.index.values, "cell": phenotype.columns.values})
phenotype = phenotype.sel(cell=sample_mapping["phenotype_sample_id"].values)
print("Phenotype shape AFTER selection: {}".format(phenotype.shape))
assert all(phenotype.cell.values == sample_mapping["phenotype_sample_id"].values)
# Filter on specific gene-SNP pairs
# eQTL from endodiff (ips+mesendo+defendo)
endo_eqtl_file = input_files_dir+"endodiff_eqtl_allconditions_FDR10pct.csv"
endo_eqtl = pd.read_csv(endo_eqtl_file, index_col = False)
endo_eqtl.head()
endo_eqtl["chrom"] = [int(i[:i.find("_")]) for i in endo_eqtl["snp_id"]]
genes = endo_eqtl[endo_eqtl['chrom']==int(chrom)]['feature'].unique()
# genes
len(genes)
# Set up model
n_samples = phenotype.shape[1]
M = ones((n_samples, 1))
# Pick one gene as example
i=0
trait_name = genes[i]
trait_name
# select SNPs for a given gene
leads = endo_eqtl[endo_eqtl['feature']==trait_name]['snp_id'].unique()
G_tmp = G_exp[:,G_exp['snp'].isin(leads)]
print("Running for gene {}".format(trait_name))
# quantile normalise y, E
y = phenotype.sel(trait=trait_name)
y = quantile_gaussianize(y)
E = quantile_gaussianize(E)
# null model
slmm2 = StructLMM2(y.values, M, E, Ls)
# run interaction test for the SNPs
pvals = slmm2.scan_interaction(G_tmp)[0]
pv = pd.DataFrame({"chrom":G_tmp.chrom.values,
"pv":pvals,
"variant":G_tmp.snp.values})
pv.head()
# pv.to_csv(outfilename, sep='\t')
```
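`numpy_sugar`'s `ddot(v, M)` used above is `diag(v) @ M`, i.e. row-wise scaling of `M` by `v`. Under that reading, the list `Ls` built from the SVD factors satisfies `sum_i Ls[i] @ Ls[i].T == (E @ E.T) * (L @ L.T)` (elementwise product), the covariance of the GxE random effect. A self-contained numpy check with random matrices (a sketch, independent of `numpy_sugar`):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
E = rng.normal(size=(n, k))           # environments (cells x PCs)
L = np.tril(rng.normal(size=(n, n)))  # Cholesky-like factor, K = L @ L.T

# economic SVD of E: E = U S Vt, so E @ E.T == us @ us.T with us = U * S
U, S, _ = np.linalg.svd(E, full_matrices=False)
us = U * S

# ddot(us[:, i], L) == diag(us[:, i]) @ L == column-vector * L (row scaling)
Ls = [us[:, i][:, None] * L for i in range(us.shape[1])]

lhs = sum(Li @ Li.T for Li in Ls)
rhs = (E @ E.T) * (L @ L.T)           # Hadamard (elementwise) product
assert np.allclose(lhs, rhs)
```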
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import os
pd.set_option('display.max_colwidth', 1000)
pd.set_option('display.max_rows', 500)
from sklearn import svm
import re
from shutil import copyfile
import matplotlib.pyplot as plt
import pydicom
from tqdm import tqdm
import nibabel as nib
import dicom2nifti
# define text files and path
#### CHANGE FOR SPECIFIC DOWNLOAD PACKAGE ####
# for 2274:
path = '../../physcosis/Package_1187332/'
## for 2126:
# path = '../../physcosis/Package_1187335/'
# should be standard
test_file = 'panss01.txt'
subject_file = 'ndar_subject01.txt'
image_file = 'image03.txt'
image03 = pd.read_csv(os.path.join(path, image_file), delimiter = '\t')
print(image03.shape)
# image03[image03['image_description'] == 'ADNI Double_TSE SENSE']
# image03
data_dir = os.path.join(path, 'image03')
#### CHANGE BASED ON WHAT 'image_file' COLUMN LOOKS LIKE IN image03 (corrects the file path to be on local machine) ####
# for 2274:
regex ='s3://NDAR_.*/submission_.[0-9]*/'
rows_with_bad_path = []
for index, row in image03.iterrows():
    image_path = re.sub(regex, '', row['image_file'])
    full_path = os.path.join(data_dir, image_path)
    # print(full_path)
    if not os.path.exists(full_path):
        print(image_path, index, row['image_file'])
        rows_with_bad_path.append(index)
rows_with_bad_path
# drops rows based on rows with bad path and replaces old paths with local paths
image03 = image03.drop(rows_with_bad_path)
l = []
for index, row in image03.iterrows():
    image_path = re.sub(regex, '', row['image_file'])
    full_path = os.path.join(data_dir, image_path)
    l.append(full_path)
image03['image_file'] = l
# shows the different types of images available for the data package
pivot_table = image03.pivot_table(columns = ['image_description'], aggfunc = 'size')
pivot_table
len(image03['image_description'].unique())
test_dir = os.path.join(data_dir, 'TEST')
if not os.path.exists(test_dir):
    os.mkdir(test_dir)
test_dir
image03 = image03.drop_duplicates(subset = ['image_description'], keep='first')
image03.shape
for i, row in image03.iterrows():
    file_name = row['image_file'].split('/')[-1]
    folder = os.path.join(test_dir, file_name.split('.')[0])
    if not os.path.exists(folder):
        os.system('mkdir ' + folder)
    os.system('unzip ' + row['image_file'] + ' -d ' + folder)
    print('Unzipped and moved ', file_name)
new_path = []
for i,row in image03.iterrows():
folder = row['image_file'].split('/')[-1].split('.')[0]
p = os.path.join(test_dir, folder)
nifti = os.path.join(p, '{}.nii'.format(folder))
if not os.path.exists(nifti):
print('Converting ', folder, ' from dicom to niftii.')
try:
dicom2nifti.dicom_series_to_nifti(p, nifti, reorient_nifti=True)
new_path.append(nifti)
except:
new_path.append('FAILED')
print(folder, ' Failed.')
else:
new_path.append(nifti)
image03['newpath'] = new_path
image03 = image03[image03.newpath != 'FAILED']
image03 = image03.reset_index()
image03
index = 18
path = image03.iloc[index]['newpath']
print(image03.iloc[index]['image_description'])
img = nib.load(path)
img = img.get_fdata()
img.shape
img_t1 = img[:,:,:]
import imageio
# img_t1 = img[:,:,:,0]
num_img = len(list(range(0,img_t1.shape[2],1)))
fig = plt.figure(figsize=(10, 2), dpi=100)
images = []
for i, p in enumerate(range(0, img_t1.shape[2], 1)):
    images.append(img_t1[:, :, p])
imageio.mimsave('./movie.gif', images)
# ax1 = fig.add_subplot(1,num_img,i+1)
# plt.imshow(img_t1[p,:,:], cmap = 'gray')
# num_img = len(list(range(0,img.shape[1],40)))
# fig = plt.figure(figsize=(20, 2), dpi=100)
# for i,p in enumerate(range(0,img.shape[0],40)):
# ax1 = fig.add_subplot(1,num_img,i+1)
# plt.imshow(img_t1[:,p,:], cmap = 'gray')
# num_img = len(list(range(0,img.shape[2],10)))
# fig = plt.figure(figsize=(20, 2), dpi=100)
# for i,p in enumerate(range(0,img.shape[0],10)):
# ax1 = fig.add_subplot(1,num_img,i+1)
# plt.imshow(img_t1[:,:,p], cmap = 'gray')
image03['image_description']
```
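The loops above shell out to `mkdir` and `unzip`, which assumes a Unix environment with those utilities installed. The standard-library `zipfile` module does the same job portably; a sketch (the function name is my own, not from the notebook):

```python
import os
import zipfile

def extract_archive(zip_path, dest_dir):
    """Portable replacement for the os.system('unzip ...') calls above."""
    os.makedirs(dest_dir, exist_ok=True)  # also covers the mkdir step
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(dest_dir)
    return dest_dir
```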
# Anomaly detection using Facebook Prophet:
**Medical background:**
In the last decades, the miniaturization of wearable sensors and the development of data transmission technologies have allowed the collection of medically relevant data called digital biomarkers. These data are revolutionising modern Medicine by offering new perspectives to better understand human physiology, and the possibility to identify, and even predict, disease progression.
Yet, the data collected by wearable sensors are, for technical reasons, heterogeneous and cannot be directly translated into a meaningful clinical status. Proper data analysis is extremely critical, as a wrong analysis may miss critical steps of disease progression or lead to the wrong diagnosis. Therefore, the overall goal of this project is to integrate patients' sensor data into one or several outcome measures which meaningfully recapitulate the clinical status of the patient.
[Stress](https://en.wikipedia.org/wiki/Stress_(biology)) is a natural and physiological response to a threat, challenge or physical and psychological barrier. In humans, the two major systems responding to stress are the autonomic nervous system and the hypothalamic-pituitary-adrenal axis. The sympathetic nervous system, the stress-related part of the autonomic nervous system, aims to distribute energy to the most relevant body parts, for instance to react to the stressor by fighting or escaping. The hypothalamic-pituitary-adrenal axis regulates metabolic, psychological and immunological functions.
The adrenaline alters the following: motion rate, electrocardiogram (ECG), electrodermal activity (EDA), electromyogram (EMG), respiration and body temperature.
**Goal of the script:**
Here, we aim to leverage the power of artificial intelligence to reach a medical insight. Specifically, we want to detect the stress-induced biological changes from the wearable device measurements with the highest sensitivity.
**Motivations to use a forecasting method to detect activity:**
Previous works demonstrated the ability to relate self-labeled stress status to sensor data acquired by wearable sensors.
Here we try a different approach, assuming that physiological rhythms are altered by stress. We investigate whether a time series forecasting method coupled with anomaly detection provides a more sensitive way to detect stress-related changes.
**Data format**
The reader may read the original source of data here:
- [UCI website](https://archive.ics.uci.edu/ml/datasets/WESAD+%28Wearable+Stress+and+Affect+Detection%29) (check the website shown below) to download the WESAD dataset
- [wesad_readme file](wesad_readme.pdf) and [WESAD poster](<WESAD poster.pdf>), both located together with the WESAD dataset
**Structure of the code**
1 - Read the data
2 - Data preparation: segmentation per task, quality control
3 - Prediction of sensor data in the absence of change of stress status
4 - Detection of anomaly in the sensor's data, indicating a change in the stress status
5 - Cross-validation and bootstrapping to assess the robustness of each candidate model and generate estimates of variability to facilitate model selection
**Credit**
- The data extraction part was modified from a script written by [jaganjag](https://github.com/jaganjag) and is available [here](https://github.com/jaganjag/stress_affect_detection/blob/master/prototype.ipynb)
- The implementation of Prophet for Time Series Forecasting was based on this [tutorial](https://medium.com/analytics-vidhya/time-series-forecast-anomaly-detection-with-facebook-prophet-558136be4b8d) written by Paul Lo. It makes use of the open-source project [Prophet](https://facebook.github.io/prophet/), a forecasting procedure implemented in R and Python, based on the paper of [Taylor and Letham, 2017](https://peerj.com/preprints/3190/).
> Questions:
> Contact Guillaume Azarias at guillaume.azarias@hotmail.com
## Import the relevant library
```
import os
import pickle
import numpy as np
import seaborn as sns
sns.set()
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.dates import DateFormatter
import datetime
from datetime import timedelta
import qgrid
# Note that the interactive plot may not work in Jupyter lab, but only in Jupyter Notebook (conflict of javascripts)
%matplotlib widget
import fbprophet
from fbprophet import Prophet
from fbprophet.diagnostics import cross_validation, performance_metrics
from fbprophet.plot import plot_cross_validation_metric
fbprophet.__version__
from sklearn.model_selection import ParameterGrid
import itertools
from random import sample
# Import the functions from the helper.py
from helper import load_ds, df_dev_formater, find_index, df_generator, prophet_fit, prophet_plot, get_outliers, prophet, GridSearch_Prophet
```
## Read the WESAD data
The dimensions of the dataset depend on both the device and parameters:
| Device | Location|Parameter|Acq. frequency|Number of dimensions|Data points (S5)| Duration (S5)|
|:---------------|:-------:|:-------:|:------------:|:------------------:|:--------------:|:------------:|
|**RespiBAN Pro**|chest | ACC |700Hz |**3** |4496100 |6'423sec |
| | | ECG |" |1 | | |
| | | EDA |" |1 | | |
| | | EMG |" |1 | | |
| | | RESP |" |1 | | |
| | | TEMP |" |1 | | |
| | | | | | | |
|**Empatica E4** |wrist | ACC |32Hz |**3** |200256 |6'258sec |
| | | BVP |64Hz |1 |400512 | |
| | | EDA |4Hz |1 |25032 | |
| | | TEMP |4Hz |1 |25032 | |
*Note that ACC is a matrix of 3 dimensions for the 3 spatial dimensions.*
*'ECG', 'EDA', 'EMG', 'Resp', 'Temp' have each 1 dimension.*
```
freq = np.array([4, 700, 700, 700, 700, 700, 700, 32, 64, 4, 4, 700])
freq_df = pd.Series(freq, index= ['working_freq', 'ACC_chest', 'ECG_chest', 'EDA_chest', 'EMG_chest', 'Resp_chest', 'Temp_chest', 'ACC_wrist', 'BVP_wrist', 'EDA_wrist', 'TEMP_wrist', 'label'])
freq_df
# Define the working frequency, eg the frequency to adjust all data
working_freq = str(int(1000/freq_df.loc['working_freq'])) + 'L'
working_freq
```
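With `working_freq` in hand, the heterogeneous channels can be brought to a common 4 Hz grid with pandas resampling. A sketch on a synthetic one-second 700 Hz signal (note that the `'250L'` string built above is the legacy pandas alias for `'250ms'`):

```python
import numpy as np
import pandas as pd

# One second of a hypothetical 700 Hz chest signal (synthetic, for illustration)
idx = pd.to_datetime(np.arange(700) / 700, unit="s")
signal = pd.Series(np.sin(np.linspace(0, 2 * np.pi, 700)), index=idx)

# Downsample to the 4 Hz working frequency: 250 ms bins, averaged
downsampled = signal.resample("250ms").mean()
print(len(downsampled))  # 4 bins covering the one-second window
```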
*Note:* The class read_data_of_one_subject was originally written by [jaganjag](https://github.com/jaganjag/stress_affect_detection/blob/master/prototype.ipynb).
```
# obj_data[subject] = read_data_one_subject(data_set_path, subject)
class read_data_of_one_subject:
"""Read data from WESAD dataset"""
def __init__(self, path, subject):
self.keys = ['label', 'subject', 'signal']
self.signal_keys = ['wrist', 'chest']
self.chest_sensor_keys = ['ACC', 'ECG', 'EDA', 'EMG', 'Resp', 'Temp']
self.wrist_sensor_keys = ['ACC', 'BVP', 'EDA', 'TEMP']
os.chdir(path) # Change the current working directory to the dataset path
os.chdir(subject) # Then change into the subject's sub-directory (relative to path, so data_set_path is not needed here)
with open(subject + '.pkl', 'rb') as file: # with will automatically close the file after the nested block of code
data = pickle.load(file, encoding='latin1')
self.data = data
def get_labels(self):
return self.data[self.keys[0]]
def get_wrist_data(self):
""""""
#label = self.data[self.keys[0]]
assert subject == self.data[self.keys[1]], 'WARNING: Mixing up the data from different persons'
signal = self.data[self.keys[2]]
wrist_data = signal[self.signal_keys[0]]
#wrist_ACC = wrist_data[self.wrist_sensor_keys[0]]
#wrist_ECG = wrist_data[self.wrist_sensor_keys[1]]
return wrist_data
def get_chest_data(self):
""""""
assert subject == self.data[self.keys[1]], 'WARNING: Mixing up the data from different persons'
signal = self.data[self.keys[2]]
chest_data = signal[self.signal_keys[1]]
return chest_data
data_set_path = "../../Data/WESAD"
subject = 'S5'
obj_data = {}
obj_data[subject] = read_data_of_one_subject(data_set_path, subject)
```
*Workplan:*
**A) Exploratory data analysis**
1) Discard for now the ACC data. Preliminary results on other parameters may guide the ways to investigate the accelerometer data
2) Get the study protocol
3) Use rolling.mean() to synchronise the data at the same frequency
4) Synchronise data
5) Include label data if possible
6) Plot data
7) Segmentation per task
8) Quality control
**B) Perform time series forecasting**
1) ADF (Augmented Dickey-Fuller) test
2) Prophet
3) ARIMA
## Get the study protocol
*From the wesad_readme.pdf:*
The order of the different conditions is defined on the second line in SX_quest.csv. Please refer to [1] for further details on each of the conditions (see Section 3.3 there). Please ignore the elements “bRead”, “fRead”, and “sRead”: these are not relevant for this dataset.
The time interval of each condition is defined as start and end time, see lines 3 and 4 in SX_quest.csv. Time is given in the format [minutes.seconds], counted from the start of the RespiBAN device's recording.
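The [minutes.seconds] format deserves care: in a value such as `4.30`, the fractional part denotes 30 seconds, not 0.30 of a second. A hedged helper sketch for that conversion (the function name is mine, not from the dataset tooling):

```python
from datetime import timedelta

def quest_time_to_timedelta(t):
    """Convert a _quest.csv time such as 4.30 ([minutes.seconds]) to a timedelta."""
    minutes = int(t)
    seconds = round((t - minutes) * 100)  # .30 -> 30 seconds
    return timedelta(minutes=minutes, seconds=seconds)

print(quest_time_to_timedelta(4.30))  # 0:04:30
```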
### Study protocol from the _quest.csv file
```
print(os.getcwd())
SX_quest_filename = os.getcwd() + '/' + subject + '_quest.csv'
print(SX_quest_filename)
# bp_data = pd.read_csv("/Users/guillaume/Documents/Projects/Data/WESAD/S2/S2_quest.csv", header=1, delimiter=';')
study_protocol_raw = pd.read_csv(SX_quest_filename, delimiter=';')
# study_protocol_raw.head()
# Create a table with the interval of every steps
study_protocol = study_protocol_raw.iloc[1:3, 1:6]
study_protocol = study_protocol.transpose().astype(float)
study_protocol.columns = ['start', 'end']
study_protocol['task'] = study_protocol_raw.iloc[0, 1:6].transpose()
study_protocol = study_protocol.reset_index(drop=True)
study_protocol
# study_protocol.dtypes
# Create a dataframe with the time formatted as datetime
# Note that the frequency chosen was 4Hz to match the lowest frequency of acquisition (250ms)
total_duration = study_protocol.end.max()
data = pd.DataFrame()
begin_df = datetime.datetime(2020, 1, 1) # For reading convenience
end_df = begin_df + timedelta(minutes=int(total_duration)) + timedelta(seconds=total_duration-int(total_duration))
data['time'] = pd.date_range(start=begin_df, end=end_df, freq=working_freq).to_pydatetime().tolist()
data['task'] = np.nan
# data
# Annotate the task in the data['task']
for row in range(study_protocol.shape[0]):
# Datetime index of the beginning of the task
begin_state = study_protocol.iloc[row, 0]
begin = begin_df + timedelta(minutes=int(begin_state)) + timedelta(seconds=begin_state-int(begin_state))
# Datetime index of the end of the task
end_state = study_protocol.iloc[row, 1]
end = begin_df + timedelta(minutes=int(end_state)) + timedelta(seconds=end_state-int(end_state))
# Fill the task column according to the begin and end of task
data.loc[(data['time'] >= begin) & (data['time'] <= end), 'task'] = study_protocol.iloc[row, 2]
# Show data
qgrid_widget = qgrid.show_grid(data, show_toolbar=True)
qgrid.show_grid(data)
```
### Graphical representation of the study protocol
```
# Attribute an arbitrary value to a task for graphical display of the study protocol
data_graph = data.copy()  # copy so the plotting columns do not mutate data
data_graph['arbitrary_index'] = 0  # default value; task-specific indices are set below
data_graph.loc[data_graph['task'] == 'Base', 'arbitrary_index'] = 1
data_graph.loc[data_graph['task'] == 'Fun', 'arbitrary_index'] = 3
data_graph.loc[data_graph['task'] == 'Medi 1', 'arbitrary_index'] = 4
data_graph.loc[data_graph['task'] == 'TSST', 'arbitrary_index'] = 2
data_graph.loc[data_graph['task'] == 'Medi 2', 'arbitrary_index'] = 4
data_graph['arbitrary_index'] = pd.to_numeric(data_graph['arbitrary_index'], errors='coerce')
# # Show data
# qgrid_widget = qgrid.show_grid(data_graph, show_toolbar=True)
# qgrid.show_grid(data_graph)
# Plot
fig_sp, ax = plt.subplots(figsize=(8, 4))
plt.plot('time', 'arbitrary_index', data=data_graph, color='darkblue', marker='o',linestyle='dashed', linewidth=0.5, markersize=2)
plt.gcf().autofmt_xdate()
myFmt = DateFormatter("%H:%M")
ax.xaxis.set_major_formatter(myFmt)
plt.xlabel('Time elapsed (hh:mm)', fontsize=15)
plt.ylim(0,6)
plt.ylabel('Arbitrary index', fontsize=15)
name = 'Study protocol for the subject ' + subject
plt.title(name, fontsize=20)
# Graph annotation
for row in range(study_protocol.shape[0]):
# Datetime index of the beginning of the task
begin_state = study_protocol.iloc[row, 0]
begin = begin_df + timedelta(minutes=int(begin_state)) + timedelta(seconds=begin_state-int(begin_state))
# Datetime index of the end of the task
end_state = study_protocol.iloc[row, 1]
end = begin_df + timedelta(minutes=int(end_state)) + timedelta(seconds=end_state-int(end_state))
# Draw a rectangle and annotate the graph
ax.axvspan(begin, end, facecolor='b', alpha=0.2)
text_location = begin+((end-begin)/2)*1/2
ax.annotate(study_protocol.iloc[row, 2], xy=(begin, 5), xytext=(text_location, 5.5), fontsize=10)
plt.show()
```
## Get the results of the self-assessment
```
study_protocol_raw
# Show data
# qgrid_widget = qgrid.show_grid(study_protocol_raw, show_toolbar=True)
# qgrid.show_grid(study_protocol_raw)
```
**The order of result of the self-assessment is listed below:**
- line 0: Condition
- lines 1-2: Start and end of the condition
- line 3: *NaN line for data separation*
- lines 4-8: PANAS results with the 26 different feelings in columns (columns 1-27) and scores (1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Very much, 5 = Extremely) for the conditions: Base (line 4), Fun (line 5), Medi 1 (line 6), TSST (line 7) and Medi 2 (line 8). *Note that there are 2 more features for the Stress condition only.*
- line 9: *NaN line for data separation*
- lines 10-14: STAI result with the 6 different feelings in columns and scores (1 = Not at all, 2 = Somewhat, 3 = Moderately so, 4 = Very much so) for the conditions: Base (line 10) Fun (line 11) Medi 1 (line 12) TSST (line 13) and Medi 2 (line 14).
- line 15: *NaN line for data separation*
- lines 16-20: SAM (Self-Assessment Manikins) results with the 2 different feelings (valence and arousal) in columns for the conditions: Base (line 16) Fun (line 17) Medi 1 (line 18) TSST (line 19) and Medi 2 (line 20).
- line 21: *NaN line for data separation*
- line 22: SSSQ result with the 6 different feelings and scores (1 = Not at all, 2 = A little bit, 3 = Somewhat, 4 = Very much, 5 = Extremely) for the stress condition only.
*TO DO*: pool, transpose and normalise data. Verify SSSQ information
```
self_assessment = pd.DataFrame()
self_assessment = study_protocol_raw.iloc[0, 1:5].T
# Base results
self_assessment.iloc[0, 1:27] = study_protocol_raw.iloc[4, 1:27]
self_assessment
```
## Get the wrist data and adjust to the working frequency (4Hz)
| Device | Location|Parameter|Acq. frequency|Number of dimensions|Data points (S5)| Duration (S5)|
|:---------------|:-------:|:-------:|:------------:|:------------------:|:--------------:|:------------:|
|**Empatica E4** |wrist | ACC |32Hz |**3** |200256 |6'258sec |
| | | BVP |64Hz |1 |400512 | |
| | | EDA |4Hz |1 |25032 | |
| | | TEMP |4Hz |1 |25032 | |
```
freq_df
wrist_data_dict = obj_data[subject].get_wrist_data()
# Extraction of numbers of data
wrist_dict_length = {key: len(value) for key, value in wrist_data_dict.items()}
print('Original numbers of data per parameter: ' + str(wrist_dict_length))
wrist_ser_length = pd.Series(wrist_dict_length)
df_wrist = pd.DataFrame()
# Adjust all data to the same frequence
for wrist_param, param_length in wrist_ser_length.items():
# Generate the frequence in microseconds (U) from the acquisition frequency
freq = str(int(1000000/freq_df.loc[wrist_param + '_wrist'])) + 'U'
# Generate temporary dataset
index = pd.date_range(start='1/1/2020', periods=param_length, freq=freq)
df_temp_raw = pd.DataFrame(wrist_data_dict[wrist_param], index=index)
if wrist_param == 'ACC':
df_temp_raw.columns = ['ACC_wrist_x', 'ACC_wrist_y', 'ACC_wrist_z']
else:
df_temp_raw.columns = [wrist_param + '_wrist']
# Resampling
df_temp = df_temp_raw.resample(working_freq).ffill()  # .pad() is a deprecated alias of .ffill()
# Append the wrist data
if df_wrist.shape[1]==0:
df_wrist = df_temp
else:
df_wrist = pd.concat([df_wrist, df_temp], axis=1)
print('Resampled data adjusted to ' + str(freq_df['working_freq']) + 'Hz in the pandas DataFrame df_wrist:')
df_wrist
```
## Chest data from the .pkl file adjusted to 4Hz
| Device | Location|Parameter|Acq. frequency|Number of dimensions|Data points (S5)| Duration (S5)|
|:---------------|:-------:|:-------:|:------------:|:------------------:|:--------------:|:------------:|
|**RespiBAN Pro**|chest | ACC |700Hz |**3** |4496100 |6'423sec |
| | | ECG |" |1 | | |
| | | EDA |" |1 | | |
| | | EMG |" |1 | | |
| | | RESP |" |1 | | |
| | | TEMP |" |1 | | |
```
chest_data_dict = obj_data[subject].get_chest_data()
# Extraction of numbers of data
chest_dict_length = {key: len(value) for key, value in chest_data_dict.items()}
print('Original numbers of data per parameter: ' + str(chest_dict_length))
chest_ser_length = pd.Series(chest_dict_length)
df_chest = pd.DataFrame()
# Adjust all data to the same frequence
for chest_param, param_length in chest_ser_length.items():
# Generate the frequence in microseconds (U) from the acquisition frequency
freq = str(int(1000000/freq_df.loc[chest_param + '_chest'])) + 'U'
# Generate temporary dataset
index = pd.date_range(start='1/1/2020', periods=param_length, freq=freq)
df_temp_raw = pd.DataFrame(chest_data_dict[chest_param], index=index)
if chest_param == 'ACC':
df_temp_raw.columns = ['ACC_chest_x', 'ACC_chest_y', 'ACC_chest_z']
else:
df_temp_raw.columns = [chest_param + '_chest']
# Resampling
df_temp = df_temp_raw.resample(working_freq).ffill()  # .pad() is a deprecated alias of .ffill()
# Append the chest data
if df_chest.shape[1]==0:
df_chest = df_temp
else:
df_chest = pd.concat([df_chest, df_temp], axis=1)
print('Resampled data adjusted to ' + str(freq_df['working_freq']) + 'Hz in the pandas DataFrame df_chest:')
df_chest
```
## Get the label data
‘label’: ID of the respective study protocol condition, sampled at 700 Hz.
The following IDs are provided:
- 0 = not defined / transient
- 1 = baseline
- 2 = stress
- 3 = amusement
- 4 = meditation
- 5/6/7 = should be ignored in this dataset
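A short sketch of how these IDs can be folded and mapped to readable condition names (the dictionary below is assembled from the list above; the sample label values are illustrative):

```python
import pandas as pd

LABEL_NAMES = {0: 'transient', 1: 'baseline', 2: 'stress', 3: 'amusement', 4: 'meditation'}

labels = pd.Series([0, 1, 1, 2, 5, 7, 3, 4])
labels[labels > 4] = 0              # IDs 5/6/7 are to be ignored, folded into 'transient'
named = labels.map(LABEL_NAMES)
print(named.tolist())
```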
```
labels = {}
labels[subject] = obj_data[subject].get_labels()
labels_dict = obj_data[subject].get_labels()
freq = str(int(1000000/freq_df.loc['label'])) + 'U' # U means microseconds
index = pd.date_range(start='1/1/2020', periods=len(labels_dict), freq=freq)
df_label = pd.DataFrame(labels_dict, index=index)
df_label = df_label.resample(working_freq).ffill()  # .pad() is a deprecated alias of .ffill()
df_label['time'] = df_label.index
df_label.columns = ['label', 'time']
# Ignore 5/6/7
df_label.loc[df_label['label'] > 4, 'label'] = 0
df_label
# Plot
fig_sp, ax = plt.subplots(figsize=(8, 4))
plt.plot('time', 'label', data=df_label, color='darkblue', marker='o',linestyle='dashed', linewidth=0.5, markersize=2)
plt.gcf().autofmt_xdate()
myFmt = DateFormatter("%H:%M")
ax.xaxis.set_major_formatter(myFmt)
plt.xlabel('Time elapsed (hh:mm)', fontsize=15)
plt.ylim(0,6)
plt.ylabel('Label', fontsize=15)
name = 'Label data for the subject ' + subject
plt.title(name, fontsize=20)
# Graph annotation
for row in range(study_protocol.shape[0]):
# Datetime index of the beginning of the task
begin_state = study_protocol.iloc[row, 0]
begin = begin_df + timedelta(minutes=int(begin_state)) + timedelta(seconds=begin_state-int(begin_state))
# Datetime index of the end of the task
end_state = study_protocol.iloc[row, 1]
end = begin_df + timedelta(minutes=int(end_state)) + timedelta(seconds=end_state-int(end_state))
# Draw a rectangle and annotate the graph
ax.axvspan(begin, end, facecolor='b', alpha=0.2)
text_location = begin+((end-begin)/2)*1/2
ax.annotate(study_protocol.iloc[row, 2], xy=(begin, 5), xytext=(text_location, 5.5), fontsize=10)
plt.show()
```
# There is a lag between the data extracted from the _quest.csv file and the synchronized data:
- Check if there is any synchronisation data in the pkl file
- Look at how to properly merge datasets that don't have the same dimensions!
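One standard option for that merge problem is `pd.merge_asof`, which aligns two time-indexed frames of different lengths on nearest-preceding timestamps. A toy sketch (column names and frequencies are illustrative):

```python
import pandas as pd

# A fast channel (4 Hz) and a slow one (1 Hz) of different lengths
left = pd.DataFrame({'time': pd.date_range('2020-01-01', periods=5, freq='250L'),
                     'signal': range(5)})
right = pd.DataFrame({'time': pd.date_range('2020-01-01', periods=3, freq='1S'),
                      'label': range(3)})
# Each left row receives the most recent right row at or before its timestamp
merged = pd.merge_asof(left, right, on='time', direction='backward')
print(merged['label'].tolist())  # [0, 0, 0, 0, 1]
```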
```
def extract_one(chest_data_dict, idx, l_condition=0):
ecg_data = chest_data_dict["ECG"][idx].flatten()
ecg_features = extract_mean_std_features(ecg_data, label=l_condition)
#print(ecg_features.shape)
eda_data = chest_data_dict["EDA"][idx].flatten()
eda_features = extract_mean_std_features(eda_data, label=l_condition)
#print(eda_features.shape)
emg_data = chest_data_dict["EMG"][idx].flatten()
emg_features = extract_mean_std_features(emg_data, label=l_condition)
#print(emg_features.shape)
temp_data = chest_data_dict["Temp"][idx].flatten()
temp_features = extract_mean_std_features(temp_data, label=l_condition)
#print(temp_features.shape)
baseline_data = np.hstack((eda_features, temp_features, ecg_features, emg_features))
#print(len(baseline_data))
label_array = np.full(len(baseline_data), l_condition)
#print(label_array.shape)
#print(baseline_data.shape)
baseline_data = np.column_stack((baseline_data, label_array))
#print(baseline_data.shape)
return baseline_data
def execute():
# data_set_path = "/media/jac/New Volume/Datasets/WESAD"
data_set_path = "../../../Data/WESAD"
file_path = "ecg.txt"
subject = 'S3' # Why defining subject here since it is defined 6 lines later in a loop ?
obj_data = {}
labels = {}
all_data = {}
subs = [2, 3, 4, 5, 6]
for i in subs:
subject = 'S' + str(i)
print("Reading data", subject)
obj_data[subject] = read_data_of_one_subject(data_set_path, subject)
labels[subject] = obj_data[subject].get_labels()
wrist_data_dict = obj_data[subject].get_wrist_data()
wrist_dict_length = {key: len(value) for key, value in wrist_data_dict.items()}
chest_data_dict = obj_data[subject].get_chest_data()
chest_dict_length = {key: len(value) for key, value in chest_data_dict.items()}
print(chest_dict_length)
chest_data = np.concatenate((chest_data_dict['ACC'], chest_data_dict['ECG'], chest_data_dict['EDA'],
chest_data_dict['EMG'], chest_data_dict['Resp'], chest_data_dict['Temp']), axis=1)
# Get labels
# 'ACC': 3, 'ECG': 1, 'EDA': 1, 'EMG': 1, 'RESP': 1, 'Temp': 1 ===> Total dimensions: 8
# No. of Labels ==> 8 ; 0 = not defined / transient, 1 = baseline, 2 = stress, 3 = amusement,
# 4 = meditation, 5/6/7 = should be ignored in this dataset
# Do for each subject
baseline = np.asarray([idx for idx, val in enumerate(labels[subject]) if val == 1])
# print("Baseline:", chest_data_dict['ECG'][baseline].shape)
# print(baseline.shape)
stress = np.asarray([idx for idx, val in enumerate(labels[subject]) if val == 2])
# print(stress.shape)
amusement = np.asarray([idx for idx, val in enumerate(labels[subject]) if val == 3])
# print(amusement.shape)
baseline_data = extract_one(chest_data_dict, baseline, l_condition=1)
stress_data = extract_one(chest_data_dict, stress, l_condition=2)
amusement_data = extract_one(chest_data_dict, amusement, l_condition=3)
full_data = np.vstack((baseline_data, stress_data, amusement_data))
print("One subject data", full_data.shape)
all_data[subject] = full_data
i = 0
for k, v in all_data.items():
if i == 0:
data = all_data[k]
i += 1
print(all_data[k].shape)
data = np.vstack((data, all_data[k]))
print(data.shape)
return data
# """
ecg, eda = chest_data_dict['ECG'], chest_data_dict['EDA']
x = [i for i in range(len(baseline))]
for one in baseline:
x = [i for i in range(99)]
plt.plot(x, ecg[one:100])
break
# """
x = [i for i in range(10000)]
plt.plot(x, chest_data_dict['ECG'][:10000])
plt.show()
# BASELINE
# [ecg_features[k] for k in ecg_features.keys()]
ecg = nk.ecg_process(ecg=ecg_data, rsp=chest_data_dict['Resp'][baseline].flatten(), sampling_rate=700)
print(os.getcwd())
# """
# recur_print
print(type(ecg))
print(ecg.keys())
for k in ecg.keys():
print(k)
for i in ecg[k].keys():
print(i)
resp = nk.eda_process(eda=chest_data_dict['EDA'][baseline].flatten(), sampling_rate=700)
resp = nk.rsp_process(chest_data_dict['Resp'][baseline].flatten(), sampling_rate=700)
for k in resp.keys():
print(k)
for i in resp[k].keys():
print(i)
# For baseline, compute mean, std, for each 700 samples. (1 second values)
file_path = os.path.join(os.getcwd(), 'ecg_features.txt')  # hypothetical file name; open() needs a file path, not the bare directory
with open(file_path, "w") as file:
#file.write(str(ecg['df']))
file.write(str(ecg['ECG']['HRV']['RR_Intervals']))
file.write("...")
file.write(str(ecg['RSP']))
#file.write("RESP................")
#file.write(str(resp['RSP']))
#file.write(str(resp['df']))
#print(type(ecg['ECG']['HRV']['RR_Intervals']))
#file.write(str(ecg['ECG']['Cardiac_Cycles']))
#print(type(ecg['ECG']['Cardiac_Cycles']))
#file.write(ecg['ECG']['Cardiac_Cycles'].to_csv())
# Plot the processed dataframe, normalizing all variables for viewing purpose
# """
# """
bio = nk.bio_process(ecg=chest_data_dict["ECG"][baseline].flatten(), rsp=chest_data_dict['Resp'][baseline].flatten(), eda=chest_data_dict["EDA"][baseline].flatten(), sampling_rate=700)
nk.z_score(bio["df"]).plot()
print(bio["ECG"].keys())
print(bio["EDA"].keys())
print(bio["RSP"].keys())
#ECG
print(bio["ECG"]["HRV"])
print(bio["ECG"]["R_Peaks"])
#EDA
print(bio["EDA"]["SCR_Peaks_Amplitudes"])
print(bio["EDA"]["SCR_Onsets"])
#RSP
print(bio["RSP"]["Cycles_Onsets"])
print(bio["RSP"]["Cycles_Length"])
# """
print("Read data file")
#Flow: Read data for all subjects -> Extract features (Preprocessing) -> Train the model
data_set_path = "../../../Data/WESAD"
subject = 'S4'
obj_data = {}
obj_data[subject] = read_data_of_one_subject(data_set_path, subject)
chest_data_dict = obj_data[subject].get_chest_data()
chest_dict_length = {key: len(value) for key, value in chest_data_dict.items()}
print(chest_dict_length)
# Get labels
labels = obj_data[subject].get_labels()
baseline = np.asarray([idx for idx,val in enumerate(labels) if val == 1])
#print(baseline)
print("Baseline:", chest_data_dict['ECG'][baseline].shape)
labels.shape
import joblib  # sklearn.externals.joblib has been removed from recent scikit-learn
bio = nk.bio_process(ecg=chest_data_dict["ECG"][baseline].flatten(), rsp=chest_data_dict['Resp'][baseline].flatten(), eda=chest_data_dict["EDA"][baseline].flatten(), sampling_rate=700)
nk.z_score(bio["df"]).plot()
"""print(bio["ECG"].keys())
print(bio["EDA"].keys())
print(bio["RSP"].keys())
#ECG
print(bio["ECG"]["HRV"])
print(bio["ECG"]["R_Peaks"])
#EDA
print(bio["EDA"]["SCR_Peaks_Amplitudes"])
print(bio["EDA"]["SCR_Onsets"])
#RSP
print(bio["RSP"]["Cycles_Onsets"])
print(bio["RSP"]["Cycles_Length"])
"""
```
### Try to display the dataframe with qgrid:
Check the [quantopian link](https://github.com/quantopian/qgrid).
```python
import qgrid
qgrid_widget = qgrid.show_grid(df, show_toolbar=True)
qgrid_widget
```
# Descriptive statistics in Time Series Modelling
https://towardsdatascience.com/descriptive-statistics-in-time-series-modelling-db6ec569c0b8
## Stationarity
A time series is said to be stationary if it doesn't increase or decrease with time linearly or exponentially (no trend), and if it doesn't show any kind of repeating pattern (no seasonality). Mathematically, this is described as having constant mean and constant variance over time; the autocovariance should also not be a function of time. As a reminder: the mean is the average of the data, and the variance is the average squared distance from the mean.
Sometimes it is difficult to interpret even the rolling mean visually, so we turn to statistical tests, one such being the Augmented Dickey-Fuller (ADF) test. The ADF test is implemented in statsmodels in Python; it performs a classic null-hypothesis test and returns a p-value.
Interpretation of the null-hypothesis test: if the p-value is less than 0.05 (low), we reject the null hypothesis and consider the data stationary. If the p-value is more than 0.05 (high), we fail to reject the null hypothesis and consider the data non-stationary.
## 1.2 Random Forest Classifier (from jaganjag Github)
*Not sure if it would be relevant but keeping the code for completeness of the repo*
```
from read_data import *
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.mixture import GaussianMixture
if __name__ == '__main__':
data = execute()
print(data.shape)
X = data[:, :16] # 16 features
y = data[:, 16]
print(X.shape)
print(y.shape)
print(y)
train_features, test_features, train_labels, test_labels = train_test_split(X, y,
test_size=0.25)
print('Training Features Shape:', train_features.shape)
print('Training Labels Shape:', train_labels.shape)
print('Testing Features Shape:', test_features.shape)
print('Testing Labels Shape:', test_labels.shape)
clf = RandomForestClassifier(n_estimators=100, max_depth=5, oob_score=True)
clf.fit(train_features, train_labels)  # fit on the training split only, so the held-out test set stays unseen
print(clf.feature_importances_)
# print(clf.oob_decision_function_)
print(clf.oob_score_)
predictions = clf.predict(test_features)
errors = abs(predictions - test_labels)
print("M A E: ", np.mean(errors))
print(np.count_nonzero(errors), len(test_labels))
print("Accuracy:", 1 - np.count_nonzero(errors)/len(test_labels))  # fraction of correct predictions
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.mixture import GaussianMixture
import numpy as np
X, y = make_classification(n_samples=10000, n_features=6,
n_informative=3, n_redundant=0,
random_state=0, shuffle=True)
print(X.shape) # 10000x6
print(y.shape) # 10000
# TODO: Feature extraction using sliding window
train_features, test_features, train_labels, test_labels = train_test_split(X, y,
test_size=0.25, random_state=42)
# TODO: K-fold cross validation
print('Training Features Shape:', train_features.shape)
print('Training Labels Shape:', train_labels.shape)
print('Testing Features Shape:', test_features.shape)
print('Testing Labels Shape:', test_labels.shape)
clf = RandomForestClassifier(n_estimators=100, max_depth=3, oob_score=True
)
clf.fit(train_features, train_labels)  # fit on the training split only, so the held-out test set stays unseen
print(clf.feature_importances_)
#print(clf.oob_decision_function_)
print(clf.oob_score_)
predictions = clf.predict(test_features)
errors = abs(predictions - test_labels)
print("M A E: ", round(np.mean(errors), 2))
# Visualization
feature_list = [1, 2, 3, 4, 5, 6]
from sklearn.tree import export_graphviz
import pydot
# Pull out one tree from the forest
tree = clf.estimators_[5]
# Export the image to a dot file
export_graphviz(tree, out_file='tree.dot', feature_names=feature_list, rounded=True, precision=1)
# Use dot file to create a graph
(graph, ) = pydot.graph_from_dot_file('tree.dot')
# Write graph to a png file
#graph.write_png('tree_.png')
# TODO: Confusion matrix, Accuracy
# GMM
gmm = GaussianMixture(n_components=3, covariance_type='full')
gmm.fit(X, y)
```
## Function to downsample the dataset, run a GridSearch, sort the best model according to the mean average percentage error
* Downsampling of the dataset: pick 10 days in a device-specific dataset on which to run the GridSearch. This keeps the runtime manageable while still covering all device-specific dataframes.
* GridSearch trying Prophet with different training periods (8, 10 or 12 training days). This was the most critical parameter affecting the mean absolute percentage error (mape).
* Sort the Prophet models according to the mape. Save the best model together with its graph and a dataframe containing the predicted and actual data.
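The grid expansion below leans on sklearn's `ParameterGrid`; a tiny sketch of how a dict of candidate lists becomes the list of parameter combinations to run (the keys shown are illustrative):

```python
from sklearn.model_selection import ParameterGrid

grid = {'changepoint_prior_scale': [0.01, 0.005], 'daily_fo': [3]}
combos = list(ParameterGrid(grid))
print(len(combos))  # 2 combinations: each prior scale paired with daily_fo=3
```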
```
n_samples = 10 # Limit to 10 predictions per device.
pred_duration = 12 # 12-day prediction
for dev_nb in range(1,52):
device_nb = str('{:02d}'.format(dev_nb))
# Load the device-specific dataframe.
assert isinstance(device_nb, str) and len(device_nb)==2 and sum(d.isdigit() for d in device_nb)==2, 'WARNING: device_nb must be a string of 2-digits!'
assert int(device_nb)>=1 and int(device_nb)<=51, 'This device does not belong to the dataframe'
device, df_dev = load_ds(device_nb)
# Convert the variable device from a np.array to a string
regex = re.compile('[^A-Za-z0-9]')
device = regex.sub('', str(device))
# Create a dataframe with the dates to use
dates = pd.DataFrame(columns=['date_minus_12', 'date_minus_10', 'date_minus_8', 'date_predict'])  # a list keeps the column order deterministic (a set does not)
# List of unique dates in the dataframe
dates['date_minus_12'] = pd.to_datetime(df_dev['ds'].unique()).strftime('%Y-%m-%d')  # wrap in to_datetime: a bare ndarray has no strftime
dates = dates.drop_duplicates(subset=['date_minus_12'])
dates = dates.reset_index(drop=True)
# Fill the other columns and drop the 12 last columns
dates['date_minus_10'] = dates.iloc[2:, 0].reset_index(drop=True)
dates['date_minus_8'] = dates.iloc[4:, 0].reset_index(drop=True)
dates['date_predict'] = dates.iloc[12:, 0].reset_index(drop=True)
dates = dates[:-pred_duration] # Drop the 12 last rows
# Keep only the dates with at least 12 training days
dates['Do_It'] = 'Do not'
dates['dm_12_c'] = np.nan
for r in range(dates.shape[0]):
# Calculate the date_predict - pred_duration
date_predict = dates.iloc[r, 3]
date_predict = datetime.strptime(date_predict, "%Y-%m-%d")
date_minus_12_check = date_predict + timedelta(days=-pred_duration)
date_minus_12_check = datetime.strftime(date_minus_12_check, "%Y-%m-%d")
# Tag the date_predict that have at least 12 training days
if date_minus_12_check in dates.date_predict.values or r<=11:
dates.iloc[r, 4] = 'yes'
dates = dates[dates.Do_It == 'yes']
dates = dates.drop(['Do_It', 'dm_12_c'], axis=1)  # drop() returns a copy; assign it back
# Downsampling
if dates.shape[0]>n_samples:
dates = dates.sample(n=n_samples, replace=False)
# GridSearch over the (down-sampled) dataset:
start_time = time.time()
mape_table_full = pd.DataFrame()
for r in range(dates.shape[0]):
# Parameters of the Grid
prophet_grid = {'df_dev' : [df_dev],
'device' : [device],
'parameter' : ['co2'],
'begin' : dates.iloc[r, :3].tolist(),
'end' : [dates.iloc[r, 3]],
'sampling_period_min' : [1],
'graph' : [1],
'predict_day' : [1],
'interval_width' : [0.6],
'changepoint_prior_scale' : [0.01, 0.005], # list(np.arange(0.01,30,1).tolist()),
'daily_fo' : [3],
# 'holidays_prior_scale' : list((1000,100,10,1,0.1)),
}
# Run GridSearch_Prophet
mape_table = GridSearch_Prophet(list(ParameterGrid(prophet_grid)), metric='mape')
mape_table_full = pd.concat([mape_table_full, mape_table])  # DataFrame.append was removed in pandas 2.0
end_time = time.time()
dur_min = int((end_time - start_time)/60)
print('Time elapsed: '+ str(dur_min) + " minutes.")
# Save the best model
print('Saving the best model')
best_model = {'df_dev' : [df_dev],
'device' : [mape_table.iloc[0, 0]],
'parameter' : [mape_table.iloc[0, 1]],
'begin' : [mape_table.iloc[0, 2]],
'end' : [mape_table.iloc[0, 3]],
'sampling_period_min' : [mape_table.iloc[0, 4]],
'graph' : [1],
'predict_day' : [1],
'interval_width' : [mape_table.iloc[0, 5]],
'changepoint_prior_scale' : [mape_table.iloc[0, 7]], # list(np.arange(0.01,30,1).tolist()),
'daily_fo' : [mape_table.iloc[0, 6]],
# 'holidays_prior_scale' : list((1000,100,10,1,0.1)),
}
# Run GridSearch_Prophet on the best model
mape_table = GridSearch_Prophet(list(ParameterGrid(best_model)), metric='mape')
end_time = time.time()
dur_min = int((end_time - start_time)/60)
print('Full analysis completed in '+ str(dur_min) + ' minutes.')
# Save the full table of mape_table
# Store the complete mape_table if this is the last prediction
folder_name = '/Users/guillaume/Documents/DS2020/Caru/caru/data/processed/'
mape_table_name = folder_name + re.sub("[']", '', str(mape_table.iloc[0, 0])) + '_mape_table_full.csv'
mape_table_full.to_csv(mape_table_name)
```
Shortcuts:
- Move cell down: .
- Move cell up: /
```
import re
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
import spacy
from nltk.tokenize.toktok import ToktokTokenizer
import en_core_web_sm
from pattern.en import suggest
import pandas as pd
#nltk.download('stopwords')
#nltk.download('punkt')
nlp = spacy.load('en_core_web_sm', parse=True, tag=True, entity=True)
tokenizer = ToktokTokenizer()
#file = open("A Boy's Will - Robert Frost/1.txt", "r")
#txt = file.read()
#list_new = init_process(txt)
DOCS_SIZE = 50
stop_words = set(stopwords.words('english'))
def init_process(txt):
no_new = re.sub('\n', ' ', txt)
no_spl = re.sub('ñ', ' ', no_new)
first_parse = re.sub(r'[^\w]', ' ', no_spl)
return first_parse
def stemming(text):
ps = nltk.porter.PorterStemmer()
text = ' '.join([ps.stem(word) for word in text.split()])
return text
def lemmatize_text(text):
text = nlp(text)
text = ' '.join([word.lemma_ if word.lemma_ != '-PRON-' else word.text for word in text])
return text
def reduce_lengthening(text):
pattern = re.compile(r"(.)\1{2,}")
return pattern.sub(r"\1\1", text)
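# Quick self-contained check of the lengthening-reduction regex above: the
# backreference (.)\1{2,} matches any character repeated three or more times
# and collapses the run to exactly two occurrences.
assert re.compile(r"(.)\1{2,}").sub(r"\1\1", "coooool") == "cool"
assert re.compile(r"(.)\1{2,}").sub(r"\1\1", "hello") == "hello"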
def correct_spelling(w):
word_wlf = reduce_lengthening(w)
correct_word = suggest(word_wlf)
return correct_word[0][0]
def spelling_correction(words):
correct = [correct_spelling(w) for w in words]
return correct
def remove_stopwords(text, is_lower_case=False):
tokens = tokenizer.tokenize(text)
tokens = [token.strip() for token in tokens]
if is_lower_case:
filtered_tokens = [token for token in tokens if token not in stop_words]
else:
filtered_tokens = [token for token in tokens if token.lower() not in stop_words]
#filtered_text = ' '.join(filtered_tokens)
return filtered_tokens
def normalise(file_name):
file = open(file_name, "r")
read_txt = file.read()
list_new = init_process(read_txt)
stemmed = stemming(list_new)
lemma = lemmatize_text(stemmed)
new_words = remove_stopwords(lemma)
final = spelling_correction(new_words)
return final
def refine_text():
for i in range(1,51):
new_text_file = str(i)+".txt"
file_name = "poems/"+new_text_file
refined_list = normalise(file_name)
refined_text = ' '.join(refined_list)
text_file = open(("refined/"+new_text_file), "w")
text_file.write(refined_text)
text_file.close()
return
# refine_text()  # Don't refine every time
# dictionary = {}
# for i in range(1,51):
# file_name = ("refined/"+str(i)+".txt")
# text = open(file_name, "r").read()
# tokens = tokenizer.tokenize(text)
# for t in tokens:
# if t in dictionary:
# dictionary.get(t).append(i)
# dictionary[t] = list(set(dictionary.get(t)))
# else:
# dictionary[t] = [i]
dictionary = {}
def biwordindexing():
for i in range(1,51):
file_name = ("refined/"+str(i)+".txt")
text = open(file_name, "r").read()
tokens = tokenizer.tokenize(text)
for j in range(0,len(tokens)-1):
t = tokens[j]+" "+tokens[j+1]
if t in dictionary:
dictionary.get(t).append(i)
dictionary[t] = list(set(dictionary.get(t)))
else:
dictionary[t] = [i]
biwordindexing()
# dictionary = {}
# for i in range(1, (DOCS_SIZE+1)):
# file_name = ("refined/"+str(i)+".txt")
# text = open(file_name, "r").read()
# tokens = tokenizer.tokenize(text)
# for t in tokens:
# if t in dictionary:
# dictionary.get(t).append(i)
# dictionary[t] = list(set(dictionary.get(t)))
# else:
# dictionary[t] = [i]
# Dont do everytime
# text_file = open("inverted_list.csv", "w")
# text_file.write("Words, Inverted Index \n")
# for item in dictionary.keys():
# join_items = ", ".join(str(d) for d in dictionary.get(item))
# text = item+", "+str(join_items)
# text_file.write(text+" \n")
# text_file.close()
def and_intersect(list1, list2):
mer_list = []
i = 0
j = 0
while (i<len(list1) and j<len(list2)):
if (list1[i] == list2[j]):
mer_list.append(list1[i])
i = i+1
j = j+1
else:
if (list1[i] > list2[j]):
j = j+1
else:
i = i+1
return mer_list
def or_intersect(list1, list2):
    mer_list = []
    i = 0
    j = 0
    while (i < len(list1) and j < len(list2)):
        if (list1[i] == list2[j]):
            mer_list.append(list1[i])
            i = i + 1
            j = j + 1
        else:
            if (list1[i] > list2[j]):
                mer_list.append(list2[j])
                j = j + 1
            else:
                mer_list.append(list1[i])
                i = i + 1
    # Flush whatever remains of the longer list; without this the union
    # silently dropped any trailing postings.
    mer_list.extend(list1[i:])
    mer_list.extend(list2[j:])
    return mer_list
DOCS_SIZE = 50  # number of documents in the collection (poems 1..50)

def not_list(list1):
    # Complement of a posting list over all document ids
    complement = []
    for i in range(1, (DOCS_SIZE + 1)):
        if i not in list1:
            complement.append(i)
    return complement
def perform_binary_operations(word1, word2):
list_1 = dictionary.get(word1)
list_2 = dictionary.get(word2)
lists_and = and_intersect(list_1, list_2)
lists_or = or_intersect(list_1, list_2)
list1_not = not_list(list_1)
list2_not = not_list(list_2)
text_file = open(word1+"_"+word2+"_operation.csv", "w")
text_file.write("Words, Inverted Index \n")
join_items = ", ".join(str(d) for d in list_1)
text = word1+", "+str(join_items)
text_file.write(text+" \n")
join_items = ", ".join(str(d) for d in list_2)
text = word2+", "+str(join_items)
text_file.write(text+" \n")
join_items = ", ".join(str(d) for d in lists_and)
text = word1+" AND "+word2+", "+str(join_items)
text_file.write(text+" \n")
join_items = ", ".join(str(d) for d in lists_or)
text = word1+" OR "+word2+", "+str(join_items)
text_file.write(text+" \n")
join_items = ", ".join(str(d) for d in list1_not)
text = "NOT "+word1+", "+str(join_items)
text_file.write(text+" \n")
join_items = ", ".join(str(d) for d in list2_not)
text = "NOT "+word2+", "+str(join_items)
text_file.write(text+" \n")
text_file.close()
perform_binary_operations("know", "far")
def multiple_and(list1):
and_list = and_intersect(list1[0], list1[1])
for i in range(2, len(list1)):
and_list = and_intersect(list1[i], and_list)
return and_list
def multiple_or(list1):
or_list = or_intersect(list1[0], list1[1])
for i in range(2, len(list1)):
or_list = or_intersect(list1[i], or_list)
return or_list
new_list = [dictionary.get('come'), dictionary.get('leave'), dictionary.get('know'), dictionary.get('far')]
multiple_and(new_list)
### Code to make inverted index - not working
# import InvertedIndex
# import InvertedIndexQuery
# i = InvertedIndex.Index()
# filename = '1.txt'
# file_to_index = open(filename).read()
# document_key = filename
# # index the document, using document_key as the document's
# # id.
# i.index(file_to_index, document_key)
# filename = '2.txt'
# file_to_index = open(filename).read()
# document_key = filename
# i.index(file_to_index, document_key)
# search_results = InvertedIndexQuery.query('Python and spam', i)
# search_results.sort()
# cnt = 0
# for document in search_results:
# cnt = cnt + 1
# print ('%d) %s'.format(cnt, document[1]))
dictionary.get("vanish adobe")
```
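As a quick, self-contained sanity check of the AND/OR merge logic above, the two merges can be restated compactly and run on hypothetical posting lists (the document ids below are invented for illustration; note the union flushes the remainder of the longer list at the end):

```python
# Linear-time merges of two sorted posting lists (toy restatement)
def and_merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] > b[j]:
            j += 1
        else:
            i += 1
    return out

def or_merge(a, b):
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i]); i += 1; j += 1
        elif a[i] > b[j]:
            out.append(b[j]); j += 1
        else:
            out.append(a[i]); i += 1
    out.extend(a[i:])  # flush whichever list still has postings left
    out.extend(b[j:])
    return out

# Hypothetical posting lists for two terms
docs_know = [1, 4, 7, 12]
docs_far = [4, 9, 12, 20]
print(and_merge(docs_know, docs_far))  # [4, 12]
print(or_merge(docs_know, docs_far))   # [1, 4, 7, 9, 12, 20]
```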
Version 1.1.0
# Mean encodings
In this programming assignment you will be working with the `1C` dataset from the final competition. You are asked to encode `item_id` in 4 different ways:
1) Via KFold scheme;
2) Via Leave-one-out scheme;
3) Via smoothing scheme;
4) Via expanding mean scheme.
**You will need to submit** the correlation coefficient between resulting encoding and target variable up to 4 decimal places.
### General tips
* Fill NaNs in the encoding with `0.3343`.
* Some encoding schemes depend on sorting order, so in order to avoid confusion, please use the following code snippet to construct the data frame. This snippet also implements mean encoding without regularization.
```
import pandas as pd
import numpy as np
from itertools import product
from grader import Grader
%matplotlib inline
```
# Read data
```
sales = pd.read_csv('../readonly/final_project_data/sales_train.csv.gz')
sales.head()
```
# Aggregate data
Since the competition task is to make a monthly prediction, we need to aggregate the data to monthly level before doing any encodings. The following code cell serves just that purpose.
```
index_cols = ['shop_id', 'item_id', 'date_block_num']
# For every month we create a grid from all shops/items combinations from that month
grid = []
for block_num in sales['date_block_num'].unique():
cur_shops = sales[sales['date_block_num']==block_num]['shop_id'].unique()
cur_items = sales[sales['date_block_num']==block_num]['item_id'].unique()
grid.append(np.array(list(product(*[cur_shops, cur_items, [block_num]])),dtype='int32'))
#turn the grid into pandas dataframe
grid = pd.DataFrame(np.vstack(grid), columns = index_cols,dtype=np.int32)
#get aggregated values for (shop_id, item_id, month)
gb = sales.groupby(index_cols,as_index=False).agg({'item_cnt_day':{'target':'sum'}})
#fix column names
gb.columns = [col[0] if col[-1]=='' else col[-1] for col in gb.columns.values]
#join aggregated data to the grid
all_data = pd.merge(grid,gb,how='left',on=index_cols).fillna(0)
#sort the data
all_data.sort_values(['date_block_num','shop_id','item_id'],inplace=True)
all_data.head()
```
# Mean encodings without regularization
After the technical work is done, we are ready to actually *mean encode* the desired `item_id` variable.
Here are two ways to implement mean encoding features *without* any regularization. You can use this code as a starting point to implement regularized techniques.
#### Method 1
```
# Calculate a mapping: {item_id: target_mean}
item_id_target_mean = all_data.groupby('item_id').target.mean()
# In our non-regularized case we just *map* the computed means to the `item_id`'s
all_data['item_target_enc'] = all_data['item_id'].map(item_id_target_mean)
# Fill NaNs
all_data['item_target_enc'].fillna(0.3343, inplace=True)
# Print correlation
encoded_feature = all_data['item_target_enc'].values
print(np.corrcoef(all_data['target'].values, encoded_feature)[0][1])
```
#### Method 2
```
'''
Unlike the `.target.mean()` call above, `transform`
will return a Series with the same index as `all_data`.
Basically this single line of code is equivalent to the first two lines of Method 1.
'''
all_data['item_target_enc'] = all_data.groupby('item_id')['target'].transform('mean')
# Fill NaNs
all_data['item_target_enc'].fillna(0.3343, inplace=True)
# Print correlation
encoded_feature = all_data['item_target_enc'].values
print(np.corrcoef(all_data['target'].values, encoded_feature)[0][1])
```
See the printed value? It is the correlation coefficient between the target variable and your new encoded feature. You need to **compute the correlation coefficient** between each encoding you implement and the target, and **submit those to Coursera**.
```
grader = Grader()
```
# 1. KFold scheme
Explained starting at 41 sec of [Regularization video](https://www.coursera.org/learn/competitive-data-science/lecture/LGYQ2/regularization).
**Now it's your turn to write the code!**
You may use 'Regularization' video as a reference for all further tasks.
First, implement the KFold scheme with five folds. Use `KFold(5)` from `sklearn.model_selection`.
1. Split your data in 5 folds with `sklearn.model_selection.KFold` with `shuffle=False` argument.
2. Iterate through folds: use all but the current fold to calculate mean target for each level `item_id`, and fill the current fold.
* See **Method 1** in the example implementation. In particular, learn what the built-in `map` and `pd.Series.map` functions do. They are pretty handy in many situations.
```
print(all_data['item_id'].unique())
print('num of unique values: {}'.format(len(all_data['item_id'].unique())))
print('num of samples: {:,}'.format(all_data.shape[0]))
type(all_data.groupby('item_id')['target'].mean())
```
### Plot mean target values per `item_id`
```
all_data.groupby('item_id')['target'].mean().plot()
# YOUR CODE GOES HERE
from sklearn.model_selection import KFold
kf = KFold(n_splits=5, shuffle=False)
for index_train, index_valid in kf.split(all_data):
X_tr, X_val = all_data.iloc[index_train], all_data.iloc[index_valid]
# target coding of valid dataset depends on train dataset
X_tr_group = X_tr.groupby('item_id')['target']
X_val['item_target_enc'] = X_val['item_id'].map(X_tr_group.mean())
# copy target encoding back to all_data
all_data.iloc[index_valid] = X_val
all_data['item_target_enc'].fillna(0.3343, inplace=True)
encoded_feature = all_data['item_target_enc'].values
# You will need to compute correlation like that
corr = np.corrcoef(all_data['target'].values, encoded_feature)[0][1]
print(corr)
grader.submit_tag('KFold_scheme', corr)
```
# 2. Leave-one-out scheme
Now, implement the leave-one-out scheme. Note that if you simply set the number of folds to the number of samples and run the code from the **KFold scheme**, you will probably wait for a very long time.
To implement a faster version, note that to calculate the mean target value using all the objects but one *given object*, you can:
1. Calculate sum of the target values using all the objects.
2. Then subtract the target of the *given object* and divide the resulting value by `n_objects - 1`.
Note that you do not need to perform `1.` for every object. And `2.` can be implemented without any `for` loop.
It is most convenient to use the `.transform` function, as in **Method 2**.
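Before applying the trick to the full data, it can be verified on a toy frame (values invented purely for illustration):

```python
import pandas as pd

# Toy data: per item, each row's encoding should be the mean target of
# the *other* rows of the same item: (item sum - own target) / (n - 1)
toy = pd.DataFrame({'item_id': ['a', 'a', 'a', 'b', 'b'],
                    'target':  [1,   2,   6,   3,   5]})
tot = toy.groupby('item_id')['target'].transform('sum')
cnt = toy.groupby('item_id')['target'].transform('count')
toy['loo_enc'] = (tot - toy['target']) / (cnt - 1)
print(toy['loo_enc'].tolist())  # [4.0, 3.5, 1.5, 5.0, 3.0]
```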
```
%%time
# YOUR CODE GOES HERE
# Calculate sum of the target values using all the objects.
target_sum = all_data.groupby('item_id')['target'].transform('sum')
# Then subtract the target of the given object and divide the resulting value by n_objects - 1.
n_objects = all_data.groupby('item_id')['target'].transform('count')
all_data['item_target_enc'] = (target_sum - all_data['target']) / (n_objects - 1)
all_data['item_target_enc'].fillna(0.3343, inplace=True)
encoded_feature = all_data['item_target_enc'].values
corr = np.corrcoef(all_data['target'].values, encoded_feature)[0][1]
print(corr)
grader.submit_tag('Leave-one-out_scheme', corr)
print()
```
# 3. Smoothing
Explained starting at 4:03 of [Regularization video](https://www.coursera.org/learn/competitive-data-science/lecture/LGYQ2/regularization).
Next, implement the smoothing scheme with $\alpha = 100$. Use the formula from the first slide of the video, with $0.3343$ as `globalmean`. Note that `nrows` is the number of objects that belong to a certain category (not the number of rows in the dataset).
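On hypothetical numbers the formula behaves as intended: the fewer rows a category has, the more its encoding is pulled toward `globalmean`. A minimal check:

```python
# Smoothing: (category mean * nrows + globalmean * alpha) / (nrows + alpha)
globalmean, alpha = 0.3343, 100
nrows, cat_mean = 2, 4.0  # hypothetical rare category: 2 rows, raw mean 4.0
enc = (cat_mean * nrows + globalmean * alpha) / (nrows + alpha)
print(round(enc, 4))  # 0.4062 -- far closer to 0.3343 than to 4.0
```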
```
%%time
# YOUR CODE GOES HERE
alpha = 100
item_id_target_mean = all_data.groupby('item_id')['target'].transform('mean')
n_objects = all_data.groupby('item_id')['target'].transform('count')
all_data['item_target_enc'] = (item_id_target_mean * n_objects + 0.3343*alpha) / (n_objects + alpha)
all_data['item_target_enc'].fillna(0.3343, inplace=True)
encoded_feature = all_data['item_target_enc'].values
corr = np.corrcoef(all_data['target'].values, encoded_feature)[0][1]
print(corr)
grader.submit_tag('Smoothing_scheme', corr)
print()
```
# 4. Expanding mean scheme
Explained starting at 5:50 of [Regularization video](https://www.coursera.org/learn/competitive-data-science/lecture/LGYQ2/regularization).
Finally, implement the *expanding mean* scheme. It is basically already implemented for you in the video, but you can challenge yourself and try to implement it yourself. You will need [`cumsum`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.DataFrameGroupBy.cumsum.html) and [`cumcount`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.core.groupby.GroupBy.cumcount.html) functions from pandas.
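On a toy frame (values invented for illustration), the two primitives yield exactly the "previous rows only" statistics the scheme needs:

```python
import pandas as pd

toy = pd.DataFrame({'item_id': ['a', 'a', 'b', 'a', 'b'],
                    'target':  [1,   3,   2,   5,   4]})
# Sum and count over the *previous* rows of the same item_id
prev_sum = toy.groupby('item_id')['target'].cumsum() - toy['target']
prev_cnt = toy.groupby('item_id').cumcount()
# The first occurrence of each item has no history -> fill with the constant
toy['enc'] = (prev_sum / prev_cnt).fillna(0.3343)
print(toy['enc'].tolist())  # [0.3343, 1.0, 0.3343, 2.0, 2.0]
```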
```
print('shape of cumulative sum: {:,}'.format(all_data.groupby('item_id')['target'].cumsum().shape[0]))
# YOUR CODE GOES HERE
cumsum = all_data.groupby('item_id')['target'].cumsum() - all_data['target']
cumcnt = all_data.groupby('item_id').cumcount()
all_data['item_target_enc'] = cumsum / cumcnt
all_data['item_target_enc'].fillna(0.3343, inplace=True)
encoded_feature = all_data['item_target_enc'].values
corr = np.corrcoef(all_data['target'].values, encoded_feature)[0][1]
print(corr)
grader.submit_tag('Expanding_mean_scheme', corr)
```
## Authorization & Submission
To submit assignment parts to the Coursera platform, please enter your e-mail and token into the variables below. You can generate a token on this programming assignment's page. Note: tokens expire 30 minutes after generation.
```
STUDENT_EMAIL = 'brandon.hy.lin.0@gmail.com' # EMAIL HERE
STUDENT_TOKEN = 'EC3Bgq6dNzp8q92S' # TOKEN HERE
grader.status()
grader.submit(STUDENT_EMAIL, STUDENT_TOKEN)
```
# Step 2: Building GTFS graphs and merging it with a walking graph
We heavily follow Kuan Butts's Calculating Betweenness Centrality with GTFS blog post: https://gist.github.com/kuanb/c54d0ae7ee353cac3d56371d3491cf56
### The peartree (https://github.com/kuanb/peartree) source code was modified. Until code is merged you should use code from this fork: https://github.com/d3netxer/peartree
```
%load_ext autoreload
%autoreload 2
import matplotlib.pyplot as plt
%matplotlib inline
import osmnx as ox
import pandas as pd
import geopandas as gpd
import networkx as nx
import numpy as np
from shapely.geometry import Point
import partridge as ptg
import os, sys
sys.path.append(r"C:\repos\peartree")
import peartree as pt
print(pt.__file__)
path = r'input_folder/cap_haitien_gtfs.zip'
```
### Build a graph from service_0001
service_0001 runs on weekends, so below we choose a date that lands on a weekend
```
# from: http://simplistic.me/playing-with-gtfs.html
import datetime
service_ids_by_date = ptg.read_service_ids_by_date(path)
service_ids = service_ids_by_date[datetime.date(2019, 6, 29)]
print(f"service_ids is {service_ids}")
# view lets you filter before you load the feed. For example, below you are filtering by the service_ids
feed_0001 = ptg.load_feed(path, view={
'trips.txt': {
'service_id': service_ids,
},
})
feed_0001.calendar
```
### give all trips a direction of 0
peartree expects trips to have a direction assigned
```
feed_0001.trips['direction_id'] = 0
```
### Preview the GTFS network
```
# Set a target time period to summarize impedance
start = 0 # 0:00
end = 24*60*60 # 24:00
# Converts feed subset into a directed
# network multigraph
G = pt.load_feed_as_graph(feed_0001, start, end, add_trips_per_edge=True)
fig, ax = ox.plot_graph(G,
figsize=(12,12),
show=False,
close=False,
node_color='#8aedfc',
node_size=5,
edge_color='#e2dede',
edge_alpha=0.25,
bgcolor='black')
# peartree prepends the stop ids with a code that is different each time it loads a graph
list(G.edges)
#list(G.edges(data='True'))
len(G.nodes)
```
### Inspect the edge data: you should see the `length` attribute, which is the time in seconds needed to traverse an edge. The `trips` attribute represents how many trips cross that edge.
```
for edge in list(G.edges):
print(G.get_edge_data(edge[0],edge[1]))
```
### get feed 2
```
service_ids_by_date = ptg.read_service_ids_by_date(path)
service_ids = service_ids_by_date[datetime.date(2019,8,6)]
print(f"service_ids is {service_ids}")
# view lets you filter before you load the feed. For example, below you are filtering by the service_ids
feed_0002 = ptg.load_feed(path, view={
'trips.txt': {
'service_id': service_ids,
},
})
```
### Inspect graph as a shapefile
Used for testing
```
# Get reference to GOSTNets
#sys.path.append(r'C:\repos\GOSTnets')
#import GOSTnets as gn
#gn.save(G,"gtfs_export_cap_haitien_service0001",r"temp")
#gn.save(G,"gtfs_export_cap_haitien_service0002",r"temp")
# Also these saved edges will be used in the optional PostProcessing notebook to compare differences between the two graphs
```
Note: on inspection, the edges have a `length` field. This is the average traversal time per edge, in seconds, based on the GTFS data.
## Merge a walk network
following this blog post: http://kuanbutts.com/2018/12/24/peartree-with-walk-network/
```
# load existing walk/ferry graph from step 1
G = nx.read_gpickle(r"temp\cap_haitien_walk_w_ferries_via_osmnx_origins_adv_snap.pickle")
#G = nx.read_gpickle(r"temp\cap_haitien_walk_w_ferries_via_osmnx_salted.pickle")
print(nx.info(G))
list(G.edges(data=True))[:10]
```
### Assign traversal times in seconds to edges
Since peartree represents edge length (that is, the impedance value associated with the edge) in seconds, we will need to convert the edge values that are in meters into seconds:
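A quick standalone unit check of that conversion (metres divided by speed in km/h gives hours, then multiply out to seconds):

```python
walk_speed = 3.5  # km/h, the walking speed used below
metres = 7000     # a hypothetical 7 km edge
hours = (metres / 1000) / walk_speed  # 7 km at 3.5 km/h -> 2 h
seconds = hours * 60 * 60
print(seconds)  # 7200.0
```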
```
walk_speed = 3.5 # km per hour; about 2.2 miles per hour
ferry_speed = 15 # km per hour
# Make a copy of the graph in case we make a mistake
G_adj = G.copy()
# Iterate through and convert lengths to seconds
for from_node, to_node, edge in G_adj.edges(data=True):
orig_len = edge['length']
# Note that this is a MultiDiGraph so there could
# be multiple indices here, I naively assume this is not the case
G_adj[from_node][to_node][0]['orig_length'] = orig_len
    try:
        # Ferry edges carry a numeric (non-NaN) 'ferry' value; NaN marks a
        # walk edge. A string-valued 'ferry' tag makes np.isnan raise, and
        # the except clause below then treats the edge as walked.
        if 'ferry' in G_adj[from_node][to_node][0]:
            ferry_var = G_adj[from_node][to_node][0]['ferry']
            if not np.isnan(ferry_var):
                print(G_adj[from_node][to_node][0])
                # metres -> hours at ferry speed -> seconds
                hours = (orig_len / 1000) / ferry_speed
                in_seconds = hours * 60 * 60
                G_adj[from_node][to_node][0]['length'] = in_seconds
                # And state the mode, too
                G_adj[from_node][to_node][0]['mode'] = 'ferry'
            else:
                # metres -> hours at walk speed -> seconds
                hours = (orig_len / 1000) / walk_speed
                in_seconds = hours * 60 * 60
                G_adj[from_node][to_node][0]['length'] = in_seconds
                G_adj[from_node][to_node][0]['mode'] = 'walk'
    except:
        # metres -> hours at walk speed -> seconds
        hours = (orig_len / 1000) / walk_speed
        in_seconds = hours * 60 * 60
        G_adj[from_node][to_node][0]['length'] = in_seconds
        G_adj[from_node][to_node][0]['mode'] = 'walk'
G_adj.nodes[330530920]
G_adj.nodes[6770195160]
# So this should be easy - just go through all nodes
# and make them have a 0 cost to board
for i, node in G_adj.nodes(data=True):
G_adj.nodes[i]['boarding_cost'] = 0
# testing
list(G_adj.edges(data=True))[1]
```
### save the graph again to be used for the isochrones notebook
```
sys.path.append(r'C:\repos\GOSTnets')
import GOSTnets as gn
gn.save(G_adj,"cap_haitien_walk_w_ferries_via_osmnx_w_time_adv_snap",r"temp") # save the time-weighted copy, not the original G
```
## Loading the feeds as graphs with the walking graph as the existing graph
Now that the two graphs have the same internal structures, we can load the walk network onto the transit network with the following peartree helper method.
```
# Now that we have a formatted walk network
# it should be easy to reload the peartree graph
# and stack it on the walk network
start = 0 # 0:00
end = 24*60*60 # 24:00
feeds = {'service0001':feed_0001,'service0002':feed_0002}
#feeds = {'service0002':feed_0002}
for feed in feeds.items():
G_adj_copy = G_adj.copy()
# Note this will be a little slow - an optimization here would be
# to have coalesced the walk network
%time G = pt.load_feed_as_graph(feed[1], start, end, existing_graph=G_adj_copy, impute_walk_transfers=True, add_trips_per_edge=True)
# compatible with NetworkX 2.4
list_of_subgraphs = list(G.subgraph(c).copy() for c in nx.weakly_connected_components(G))
max_graph = None
max_edges = 0
for i in list_of_subgraphs:
if i.number_of_edges() > max_edges:
max_edges = i.number_of_edges()
max_graph = i
# set your graph equal to the largest sub-graph
G = max_graph
# save again and inspect
gn.save(G,f"gtfs_export_cap_haitien_merged_impute_walk_adv_snap_{feed[0]}",r"temp")
#gn.save(G,f"gtfs_export_cap_haitien_merged_impute_walk_salted_{feed[0]}",r"temp")
```
## Visualize the last merged feed in the loop
```
G.graph['crs'] = 'epsg:4326'
G.graph
G.nodes[6770195160]
fig, ax = ox.plot_graph(G,
figsize=(12,12),
show=False,
close=False,
node_color='#8aedfc',
node_size=5,
edge_color='#e2dede',
edge_alpha=0.25,
bgcolor='black')
```
<a href="https://colab.research.google.com/github/mengwangk/dl-projects/blob/master/04_02_auto_ml_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Automated ML
```
COLAB = True
if COLAB:
!sudo apt-get install git-lfs && git lfs install
!rm -rf dl-projects
!git clone https://github.com/mengwangk/dl-projects
#!cd dl-projects && ls -l --block-size=M
if COLAB:
!cp dl-projects/utils* .
!cp dl-projects/preprocess* .
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import scipy.stats as ss
import math
import matplotlib
from scipy import stats
from collections import Counter
from pathlib import Path
plt.style.use('fivethirtyeight')
sns.set(style="ticks")
# Automated feature engineering
import featuretools as ft
# Machine learning
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score, precision_recall_curve, roc_curve
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier
from IPython.display import display
from utils import *
from preprocess import *
# The Answer to the Ultimate Question of Life, the Universe, and Everything.
np.random.seed(42)
%aimport
```
## Preparation
```
if COLAB:
from google.colab import drive
drive.mount('/content/gdrive')
GDRIVE_DATASET_FOLDER = Path('gdrive/My Drive/datasets/')
if COLAB:
DATASET_PATH = GDRIVE_DATASET_FOLDER
ORIGIN_DATASET_PATH = Path('dl-projects/datasets')
else:
DATASET_PATH = Path("datasets")
ORIGIN_DATASET_PATH = Path('datasets')
DATASET = DATASET_PATH/"feature_matrix.csv"
ORIGIN_DATASET = ORIGIN_DATASET_PATH/'4D.zip'
if COLAB:
!ls -l gdrive/"My Drive"/datasets/ --block-size=M
!ls -l dl-projects/datasets --block-size=M
data = pd.read_csv(DATASET, header=0, sep=',', quotechar='"', parse_dates=['time'])
origin_data = format_tabular(ORIGIN_DATASET)
data.info()
```
## Preliminary Modeling
```
feature_matrix = data
feature_matrix.columns
feature_matrix.head(4).T
origin_data[origin_data['LuckyNo']==0].head(10)
# feature_matrix.drop(columns=['MODE(Results.PrizeType)_1stPrizeNo',
# 'MODE(Results.PrizeType)_2ndPrizeNo',
# 'MODE(Results.PrizeType)_3rdPrizeNo',
# 'MODE(Results.PrizeType)_ConsolationNo1',
# 'MODE(Results.PrizeType)_ConsolationNo10',
# 'MODE(Results.PrizeType)_ConsolationNo2',
# 'MODE(Results.PrizeType)_ConsolationNo3',
# 'MODE(Results.PrizeType)_ConsolationNo4',
# 'MODE(Results.PrizeType)_ConsolationNo5',
# 'MODE(Results.PrizeType)_ConsolationNo6',
# 'MODE(Results.PrizeType)_ConsolationNo7',
# 'MODE(Results.PrizeType)_ConsolationNo8',
# 'MODE(Results.PrizeType)_ConsolationNo9',
# 'MODE(Results.PrizeType)_SpecialNo1',
# 'MODE(Results.PrizeType)_SpecialNo10',
# 'MODE(Results.PrizeType)_SpecialNo2',
# 'MODE(Results.PrizeType)_SpecialNo3',
# 'MODE(Results.PrizeType)_SpecialNo4',
# 'MODE(Results.PrizeType)_SpecialNo5',
# 'MODE(Results.PrizeType)_SpecialNo6',
# 'MODE(Results.PrizeType)_SpecialNo7',
# 'MODE(Results.PrizeType)_SpecialNo8',
# 'MODE(Results.PrizeType)_SpecialNo9'], inplace=True)
feature_matrix.groupby('time')['COUNT(Results)'].mean().plot()
plt.title('Average Monthly Count of Results')
plt.ylabel('Strike Per Number')
```
## Correlations
```
feature_matrix.shape
corrs = feature_matrix.corr().sort_values('TotalStrike')
corrs['TotalStrike'].head()
corrs['TotalStrike'].dropna().tail()
```
### Random Forest
```
model = RandomForestClassifier(n_estimators = 1000,
random_state = 50,
n_jobs = -1)
def predict_dt(dt, feature_matrix, return_probs = False):
feature_matrix['date'] = feature_matrix['time']
# Subset labels
test_labels = feature_matrix.loc[feature_matrix['date'] == dt, 'Label']
train_labels = feature_matrix.loc[feature_matrix['date'] < dt, 'Label']
print(f"Size of test labels {len(test_labels)}")
print(f"Size of train labels {len(train_labels)}")
# Features
X_train = feature_matrix[feature_matrix['date'] < dt].drop(columns = ['NumberId', 'time',
'date', 'Label', 'TotalStrike', 'month', 'year'])
X_test = feature_matrix[feature_matrix['date'] == dt].drop(columns = ['NumberId', 'time',
'date', 'Label', 'TotalStrike', 'month', 'year'])
print(f"Size of X train {len(X_train)}")
print(f"Size of X test {len(X_test)}")
feature_names = list(X_train.columns)
# Impute and scale features
pipeline = Pipeline([('imputer', SimpleImputer(strategy = 'median')),
('scaler', MinMaxScaler())])
# Fit and transform training data
X_train = pipeline.fit_transform(X_train)
X_test = pipeline.transform(X_test)
# Labels
y_train = np.array(train_labels).reshape((-1, ))
y_test = np.array(test_labels).reshape((-1, ))
print('Training on {} observations.'.format(len(X_train)))
print('Testing on {} observations.\n'.format(len(X_test)))
# Train
model.fit(X_train, y_train)
# Make predictions
predictions = model.predict(X_test)
probs = model.predict_proba(X_test)[:, 1]
# Calculate metrics
p = precision_score(y_test, predictions)
r = recall_score(y_test, predictions)
f = f1_score(y_test, predictions)
auc = roc_auc_score(y_test, probs)
print(f'Precision: {round(p, 5)}')
print(f'Recall: {round(r, 5)}')
print(f'F1 Score: {round(f, 5)}')
print(f'ROC AUC: {round(auc, 5)}')
# Feature importances
fi = pd.DataFrame({'feature': feature_names, 'importance': model.feature_importances_})
if return_probs:
return fi, probs
return fi
# All the months
len(feature_matrix['time'].unique()), feature_matrix['time'].unique()
import datetime
june_2019 = predict_dt(datetime.datetime(2019, 6, 1), feature_matrix)
from utils import plot_feature_importances
norm_june_fi = plot_feature_importances(june_2019)
```
## Comparison to Baseline
# Print Compact Transitivity Tables
```
import qualreas as qr
import os
import json
path = os.path.join(os.getenv('PYPROJ'), 'qualreas')
```
## Algebras from Original Files
## Algebras from Compact Files
```
alg = qr.Algebra(os.path.join(path, "Algebras/Misc/Linear_Interval_Algebra.json"))
alg.summary()
alg.check_composition_identity()
alg.is_associative()
algX = qr.Algebra(os.path.join(path, "Algebras/Misc/Extended_Linear_Interval_Algebra.json"))
algX.summary()
algX.check_composition_identity()
algX.is_associative()
algR = qr.Algebra(os.path.join(path, "Algebras/Misc/Right_Branching_Interval_Algebra.json"))
algR.summary()
algR.check_composition_identity()
algR.is_associative()
algL = qr.Algebra(os.path.join(path, "Algebras/Misc/Left_Branching_Interval_Algebra.json"))
algL.summary()
algL.check_composition_identity()
algL.is_associative()
rcc8 = qr.Algebra(os.path.join(path, "Algebras/Misc/RCC8_Algebra.json"))
rcc8.summary()
rcc8.check_composition_identity()
rcc8.is_associative()
ptalg = qr.Algebra(os.path.join(path, "Algebras/Misc/Linear_Point_Algebra.json"))
ptalg.summary()
ptalg.check_composition_identity()
ptalg.is_associative()
ptalgR = qr.Algebra(os.path.join(path, "Algebras/Misc/Right_Branching_Point_Algebra.json"))
ptalgR.summary()
ptalgR.check_composition_identity()
ptalgR.is_associative()
ptalgL = qr.Algebra(os.path.join(path, "Algebras/Misc/Left_Branching_Point_Algebra.json"))
ptalgL.summary()
ptalgL.check_composition_identity()
ptalgL.is_associative()
```
## Print Compact Tables
The following function definition was added, as a method, to the definition of an Algebra.
```
def print_compact_transitivity_table(alg):
num_elements = len(alg.elements)
print(" \"TransTable\": {")
outer_count = num_elements # Used to avoid printing last comma in outer list
for rel1 in alg.transitivity_table:
outer_count -= 1
print(f" \"{rel1}\": {{")
inner_count = num_elements # Used to avoid printing last comma in inner list
for rel2 in alg.transitivity_table[rel1]:
inner_count -= 1
if inner_count > 0:
print(f" \"{rel2}\": \"{alg.transitivity_table[rel1][rel2]}\",")
else:
print(f" \"{rel2}\": \"{alg.transitivity_table[rel1][rel2]}\"")
if outer_count > 0:
print(f" }},")
else:
print(f" }}")
print(" }")
```
### Linear Point Algebra
```
print_compact_transitivity_table(ptalg)
```
### Right-Branching Point Algebra
```
print_compact_transitivity_table(ptalgR)
```
### Left-Branching Point Algebra
```
print_compact_transitivity_table(ptalgL)
```
### Linear Interval Algebra
```
print_compact_transitivity_table(alg)
```
### Extended Linear Interval Algebra
```
print_compact_transitivity_table(algX)
```
### Right-Branching Linear Interval Algebra
```
print_compact_transitivity_table(algR)
```
### Left-Branching Linear Interval Algebra
```
print_compact_transitivity_table(algL)
```
### Region Connection Calculus 8
```
print_compact_transitivity_table(rcc8)
```

# Python Basics I
Welcome to your first bout with Python. In this notebook you will find the first steps to start getting familiar with this language.
1. [Variables](#1.-Variables)
2. [Print](#2.-Print)
3. [Comments](#3.-Comentarios)
4. [Execution flow](#4.-Flujos-de-ejecución)
5. [Del](#5.-Del)
6. [Data types](#6.-Tipos-de-los-datos)
7. [Type conversion](#7.-Conversión-de-tipos)
8. [Input](#8.-Input)
9. [None](#9.-None)
10. [Syntax and best practices](#10.-Sintaxis-y-best-practices)
11. [Summary](#11.-Resumen)
## 1. Variables
We begin our Python adventure by declaring variables. What is this, and what is it for? It is simply a way of labelling the data in a program. We will start by declaring very simple variables, such as numbers and text, but we will end up building more complex structures.
### Numeric variables
```
ingresos = 1000
gastos = 400
```
**This is an assignment**. We assign a value to the name *ingresos* using `=`.
If we want to see the value of a variable, we simply write its name in a cell.
```
ingresos
```
### Text strings
Text strings are declared with single or double quotes
```
ingresos_texto = "Los ingresos del año han sido altos"
ingresos_texto_2 = 'Los ingresos del año han sido altos'
```
Python is a dynamic language, so we can always update the values of our variables. **Descriptive, but not excessively long, variable names are recommended.** That way we avoid overwriting them by accident.
Also, watch out for the characters ell and one (`l` vs `1`), as well as zero and o (`0` vs `O`). They are easily confused.
fatal vs fata1
clarO vs clar0
We reassign a value to gastos
```
gastos = 200
gastos
```
Now the variable gastos holds 200. If the last line of a cell is a variable, its value will be the cell's *output*: 200.
OK, so what do we gain by storing variables?
We can use these data later, elsewhere in the program. For example, if in a new cell we now want to obtain the profit, we simply subtract the variable names
```
beneficio = ingresos - gastos
```
`print` is used to see a cell's output, printing values to the screen. We will cover it in the next section.
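For example, using the variables from the cells above, we can display the profit with `print`:

```python
ingresos = 1000
gastos = 200
beneficio = ingresos - gastos
print(beneficio)  # 800
```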
If you have programmed in other languages, it may surprise you that in Python you do not have to specify data types when declaring a variable. We do not have to tell Python that *ingresos* is a numeric value or that *ingresos_texto* is a text string. Python interprets this and knows what type each value is. Every variable has its data type, since **Python is a strongly typed language**. We will see this later, in the *Data types* section.
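A quick way to see the type Python has inferred is the built-in `type` function (we will study types in depth later):

```python
ingresos = 1000
ingresos_texto = "Los ingresos del año han sido altos"
print(type(ingresos))        # <class 'int'>
print(type(ingresos_texto))  # <class 'str'>
```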
<table align="left">
<tr><td width="80"><img src="img/error.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>ERRORS in variables</h3>
</td></tr>
</table>
### Misspelling the name
A typical mistake when declaring variables is **to write the name wrong, or to later refer to the variable incorrectly**. In that case you get a `NameError: name 'variable_que_no_existe' is not defined`
```
gstos
```
Notice that it points to the line where the error occurs, the type of error (`NameError`), and a brief description of the error.
### Close your strings properly
Also be careful, when defining a text string, not to forget to close it with the matching quotes. Otherwise we get a `SyntaxError: EOL while scanning string literal` (EOL = End Of Line)
```
gastos_texto = "This year's expenses were low
```
### No spaces in variable names
You will also get an error if there is a space in the variable declaration. Lowercase letters and underscores are recommended to simulate spaces.
**Spaces around the equals sign are perfectly fine.** They are purely cosmetic, to make the code easier to read.
```
nueva variable = "invalid variable"
```
### Numbers in the variable name
Be careful with numbers when declaring variables. Sometimes a number is essential to describe the variable. In that case, always put it at the end of the name; otherwise an error is raised.
```
ingresos_2021 = 900
print(ingresos_2021)
2021_ingresos = 900
print(2021_ingresos)
```
### Case sensitivity
Be very careful with uppercase and lowercase letters, because Python does not ignore them. If all the letters of a variable name are uppercase, we have to use it in uppercase; otherwise we get an error saying the variable cannot be found.
```
ingresos_2021 = 900
print(Ingresos_2021)
```
### Reserved words
In Python, as in other languages, there is a set of reserved words that have a special meaning for the Python interpreter, so we cannot use them to name our variables.
For example, `def` is used to define functions in Python (we will see this in other notebooks), so we cannot use `def` as a variable name
```
var = 9
print(var)
def = 9
```
Check the list of Python's reserved words
```
import keyword
print(keyword.kwlist)
```
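You can also check a single name programmatically with `keyword.iskeyword`, which is handy before reusing a name you are unsure about:

```
import keyword

# Returns True when the name is reserved and cannot be a variable name
print(keyword.iskeyword("def"))
print(keyword.iskeyword("var"))
```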
### Variables summary
In short:
* Use lowercase. Names are case sensitive
* Do not use spaces
* Do not use reserved words
* Do not start a variable name with a number
* Watch out for the characters `l`, `1`, `O`, `0`
<table align="left">
<tr><td width="80"><img src="img/ejercicio.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>Variables exercise</h3>
<ol>
<li>Write a program that stores your first name and your last name in two different variables. </li>
<li>Store your age in another variable</li>
<li>Modify the age variable</li>
<li>Check that every variable holds the value it should</li>
</ol>
</td></tr>
</table>
```
nombre = "Alberto"
apellido = "Romero"
edad = 38
edad = 37
print(nombre)
print(apellido)
print(edad)
```
## 2. Print
So far we have seen two types of Jupyter cells:
* **Markdown**: basically text for the explanations
* **Code**: where we write Python code and execute it
Code cells not only run the code and perform every operation we ask for, they also have an output. We have two ways of seeing the output of the code: either with `print`, or by placing a variable on the last line of the cell. In the latter case, we will see the value of that variable
```
print(ingresos)
print(gastos)
print("The profits were: ")
beneficio
```
When printing a text, we can interpolate the values of variables with `%s`.
```
mes = "June"
print("The profits of %s were %s million" % (mes, beneficio))
```
Another option for printing several strings is to separate them with commas inside the `print`. Notice that a space is added to the output for each new comma-separated value.
```
mes = "August"
print("The profits of", mes, "were", beneficio)
```
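Since Python 3.6 there is a third option, f-strings, which many people find the most readable. A quick sketch (the variable names here are our own examples):

```
month = "August"
profit = 250
# Variables are interpolated inside the braces
print(f"The profits of {month} were {profit} million")
# Expressions and format specifiers are allowed too
print(f"Profit per four-month period: {profit / 3:.2f}")
```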
## 3. Comments
Comments are text that accompanies the code and that the Python interpreter ignores completely. Very useful for documenting and explaining the code
```
# A single-line comment starts with a hash. The interpreter ignores it
# print("This print will be ignored")
print("This one does run") # Another comment here
'''
Multiline
comment
If a cell contains only this, its content is printed
'''
"""
Three double quotes also work
for a multiline comment
"""
print("End of program")
```
**IMPORTANT**. ALWAYS comment your code. You never know who may inherit it. [This link](https://realpython.com/python-comments-guide/) has an interesting guide on how to comment your code.
## 4. Execution flow
Python programs run sequentially, so the order in which you write the operations matters
```
ventas_jun_jul = ventas_junio + ventas_julio
ventas_junio = 100
ventas_julio = 150
```
**This fails**: first we have to declare the sales, then add them.
**When does a line of code end?** Python interprets a line break as a new statement. In many languages, such as Java, you have to mark the end of a statement with `;`. In Python this is not required, although it can still be used.
```
altura = 1.80; peso = 75
print(altura)
print(peso)
x, y, z = 1, 2, 3
print(z)
```
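A nice side effect of multiple assignment is that you can swap two variables without a temporary one:

```
x, y = 1, 2
# The right-hand side is evaluated first, so this swaps the values
x, y = y, x
print(x, y)
```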
## 5. Del
This is the statement we use to delete a variable. In truth, variables are rarely deleted. We usually develop Python programs without worrying about cleaning up the variables we no longer use. Normally this is not a problem, but when handling a large volume of data we can run into performance issues, since **variables take up memory**.
With variables like the ones we have seen so far it does not matter, but with heavier ones, such as image datasets that take up a lot of space, it does pay off to delete the ones we are not using.
```
altura = 1.85
del altura
print(altura)
```
## 6. Data types
Python is a strongly typed language. That means every variable we use belongs to a data type: integer (int), real (float), text (String), or other kinds of objects.
**Why is it important to know the data types well?** Because each data type has its own set of properties and operations. For example, you cannot add 5 to a text. So when we operate on values, we have to make sure they are of the same type so the result is what we expect. `texto + 5` makes no sense and raises an error. It seems obvious, but there are times when the data types are not what you expect.
**How many data types are there?** There is essentially no limit. In this notebook we will cover the most basic ones, but in later notebooks you will see that you can create your own data types with **classes**. Python ships with a set of basic data types that we can use without importing any external module: the so-called [*built-in types*](https://docs.python.org/3/library/stdtypes.html). The most common are:
* **Numeric**: `int`, `float` and `complex`, depending on whether the number is an integer, a real or a complex number.
* **String**: a plain-text string
* **Boolean**: a logical value, `True`/`False`
**How do we find out the data type of a variable?** With `type(nombre_variable)`
### Numeric
```
numero = 22
type(numero)
```
Good, it is an ***int***, an integer. If what I want is a real number, i.e. one with decimals, I add a point
```
numero_real = 22.0
type(numero_real)
```
Even though I did not add actual decimal digits, I am already telling Python that this variable is a real number (***float***).
```
numero_real_decimales = 22.45123
type(numero_real_decimales)
```
Some basic operations we can perform with numbers:
* Add: `+`
* Subtract: `-`
* Multiply: `*`
* Divide: `/`
* Power: `**`
* Integer (floor) division: `//`
* Remainder of the division: `%`
```
print("Additions/subtractions")
print(1 + 2)
print(1.0 + 2) # Note that when either operand is a float, the result is a float
print(1 + 2.0)
print(1 - 2)
print("Multiplications/divisions")
print(2 * 2)
print(2.0 * 2) # Same as with additions when one of the numbers is a float.
print(2/2) # Division always returns a float, even when both operands and the result are integers.
print(1000/10)
print("Remainder of the division")
print(10/3)
print(int(10/3)) # Keep the integer part of the division
print(10 % 3) # Keep the remainder of the division
```
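The floor-division operator `//` from the list above gives the integer quotient directly, and the built-in `divmod` returns quotient and remainder at once:

```
print(10 // 3)   # integer quotient
print(10 % 3)    # remainder
# divmod returns both at once as a tuple
quotient, remainder = divmod(10, 3)
print(quotient, remainder)
# Careful with negatives: // floors toward negative infinity
print(-10 // 3)
```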
### Strings
The third most common data type is the *String*, or text string. There are several ways to declare one:
```
# with double quotes
cadena = "This is a text string"
# with single quotes
cadena = 'single quotes also work'
type(cadena)
```
If the text happens to contain quotes, we can make Python interpret them as part of the text rather than as the opening/closing of the String
```
# double quotes when there are single quotes inside
print("String with single quotes inside ' ' '")
# triple double quotes when there are double quotes inside
print("""String with double quotes inside " " " """)
```
Sometimes we want line breaks or tabs in our prints, or simply inside a String variable. For that we use [*escape characters*](https://www.w3schools.com/python/gloss_python_escape_characters.asp) such as `\n` for line breaks or `\t` for tabs.
```
print("First line\nSecond line\n\tThird line, tabbed")
```
To join two String variables, we simply use `+`
```
nombre = "Bon"
apellido = "Scott"
nombre_apellido = nombre + " " + apellido
print(nombre_apellido)
```
### Boolean
Finally, another truly basic data type is the *boolean*: `True`/`False`. For Python to recognize this type, the first letter must be capitalized
```
ya_se_de_python = True
type(ya_se_de_python)
```
There are many data types; later you will see how to create your own with classes and objects. For now, remember the simplest ones:
* **int**: integer
* **float**: real
* **str**: text string
* **boolean**: true/false
In other programming languages numeric values are usually more fine-grained depending on their size: a `double` is not the same as a `float`. Luckily, in Python we do not have to worry about that :)
Let's see more examples
```
print(type(1))
print(type(1.0))
print(type(-74))
print(type(4/1)) # Even when dividing integers, Python automatically converts the result to float
print(type("Cadena de texto"))
print(type("-74"))
print(type(True))
print(type(False))
```
<table align="left">
<tr><td width="80"><img src="img/error.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>ERRORS with data types</h3>
</td></tr>
</table>
```
# careful with text strings that contain quotes
print("It has "double" quotes. This fails)
```
## 7. Type conversion
As in other languages, Python also lets us convert between data types. So far we have seen only a few types, but as we discover more types and objects, you will see how common these transformations are.
A very typical case is reading a file of numeric data and having Python interpret the numbers as characters. That is not an error in itself, but later, when we operate on our numbers, we will get errors, because they are really text strings. If we force the conversion to numeric, we avoid future problems.
**Be very careful with type conversions.** We have to be sure of what we are doing, since we can lose information or, more likely, get an error when the conversion is not compatible.
Let's see how to change types
```
numero_real = 24.69
print(type(numero_real))
numero_entero = int(numero_real)
print(type(numero_entero))
print(numero_entero)
```
We lose some decimals; usually nothing serious, though it depends on the application.
**NOTE**: if what we want is to round, we can use the `round()` function
```
# This function takes two arguments: the number and how many decimals to keep.
# We will cover functions later.
round(24.69845785,1)
```
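Keep in mind that `int()` truncates while `round()` rounds, and that Python rounds ties to the nearest even number (so-called banker's rounding):

```
print(int(24.69))    # truncates: drops the decimals entirely
print(round(24.69))  # rounds to the nearest integer
# Ties go to the nearest EVEN number
print(round(2.5))
print(round(3.5))
```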
Converting a **number to a string** poses no problem
```
real = 24.69
entero = 5
real_str = str(real)
entero_str = str(entero)
print(real_str)
print(type(real_str))
print(entero_str)
print(type(entero_str))
```
Going from a **String to a number** is usually fine too. Just be very careful with the decimal separator. **Points, NOT commas**
```
print(int("98"))
print(type(int("98")))
print(float("98.25"))
print(type(float("98.25")))
print(float("98"))
print(type(float("98")))
```
Converting between a **number and a boolean**, in either direction, is also quite straightforward. Just keep in mind that 0 is `False`, and every other number counts as `True`
```
print(bool(1))
print(bool(1.87))
print(bool(1000))
print(bool(-75))
print(bool(0))
print(int(True))
print(int(False))
print(float(True))
print(float(False))
print(complex(True))
print(complex(False))
```
When converting a **String to a boolean**, empty strings become `False`, while any non-empty string becomes `True`.
In the opposite direction, the boolean `True` becomes the text string `'True'`, and likewise for `False`.
```
print(bool(""))
print(bool("Cadena de texto"))
verdadero = True
print(verdadero)
print(type(verdadero))
verdadero_str = str(verdadero)
print(verdadero_str)
print(type(verdadero_str))
```
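One gotcha worth remembering here: any non-empty string is truthy, even the text `"False"`, so the string-to-boolean conversion is not the inverse of the boolean-to-string one:

```
# A non-empty string is always True, regardless of its content
print(bool("False"))
print(str(True))
# Round trip: False -> "False" -> True, NOT back to False
print(bool(str(False)))
```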
<table align="left">
<tr><td width="80"><img src="img/error.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>ERRORS in type conversion</h3>
</td></tr>
</table>
Careful when trying to convert a string that looks like a real number straight to an integer. As soon as it contains a point, it has to be treated as a real number, unless we first convert it to float and then to int (`int()`), or use `round()` as we saw earlier
```
print(int("98.25"))
```
If we read decimal data that uses commas instead of points, there will be errors.
```
float("98,25")
```
To work around this, we use functions that replace some characters with others
```
mi_numero = "98,25"
print(mi_numero)
print(type(mi_numero))
mi_numero_punto = mi_numero.replace(",", ".")
print(mi_numero_punto)
print(type(mi_numero_punto))
mi_numero_float = float(mi_numero_punto)
print(mi_numero_float)
print(type(mi_numero_float))
```
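If this comma-to-point replacement comes up often, it can be wrapped in a small helper function. The name `parse_decimal` is our own, not a built-in:

```
def parse_decimal(text):
    """Convert a string that may use a comma as decimal separator to float.
    (Illustrative helper, not part of the standard library.)"""
    return float(text.replace(",", "."))

print(parse_decimal("98,25"))
print(parse_decimal("98.25"))  # strings with a point keep working
```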
It is fundamental to operate on matching data types. Look at what happens when we add text and a number: it raises a `TypeError`, which basically says that you cannot concatenate text with an integer
```
"4" + 6
```
<table align="left">
<tr><td width="80"><img src="img/ejercicio.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>Data types exercise</h3>
<ol>
<li>Create a String variable that includes double quotes </li>
<li>Check its type</li>
<li>Create another string in a second variable, and try adding them</li>
<li>Now declare an integer variable. Print it</li>
<li>Convert the integer variable to float. Print both the new value and its type</li>
</ol>
</td></tr>
</table>
```
mi_var = """text with "" double quotes """
print(type(mi_var))
mi_var2 = " Another text"
mi_var + mi_var2
entera = 4
print(float(entera))
print(type(float(entera)))
```
## 8. Input
This statement is used in Python to collect a value typed by the user of the program. That is how programs usually work: they receive an input, perform some operations, and produce an output for the user.
For example, in the next cell I collect an input that we can then use in later cells.
**CAREFUL**. If you run a cell containing `input()`, the program waits for the user to enter a value. If you run the same cell again while it is still waiting, it may hang, depending on your machine and Jupyter version. If that happens, click on the cell and press the stop button at the top, or go to Kernel -> Restart Kernel...
```
primer_input = input()
print(primer_input)
```
You can enter ints, floats, strings, whatever you want to collect as plain text.
Look how easy it is to build a rather dumb chatbot that asks the user questions and stores the answers.
```
nombre = input("What is your name? ")
print("Nice to meet you, %s " % (nombre))
feedback = input("What do you think of Python so far? ")
print("Agreed")
```
<table align="left">
<tr><td width="80"><img src="img/error.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>ERRORS with input</h3>
</td></tr>
</table>
```
"""
Por defecto, el input del usuario es de tipo texto. Habrá que convertirlo a numerico con int(numero_input)
o con float(numero_input). Lo vemos en el siguiente notebook.
"""
numero_input = input("Introduce un numero ")
numero_input + 4
```
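A common defensive pattern is to attempt the conversion and fall back gracefully when it fails. The helper below is purely illustrative (we will see functions and exception handling in later notebooks):

```
def to_number(raw):
    """Try to convert user input to int, then float; return None if neither works.
    (Illustrative helper; the name is our own.)"""
    try:
        return int(raw)
    except ValueError:
        try:
            return float(raw)
        except ValueError:
            return None

print(to_number("7") + 4)
print(to_number("7.5") + 4)
print(to_number("seven"))
```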
<table align="left">
<tr><td width="80"><img src="img/ejercicio.png" style="width:auto;height:auto"></td>
<td style="text-align:left">
<h3>Input exercise</h3>
In this example we will simulate a chatbot that takes pizza orders.
<ol>
<li>The chatbot greets with: "Good afternoon, welcome to the online ordering service. How many pizzas would you like?"</li>
<li>The user enters a number of pizzas into a variable called 'pizz'</li>
<li>Chatbot reply: "Great, 'pizz' pizzas are being prepared. Tell me your address"</li>
<li>The user enters an address as a String into another variable called 'direcc'</li>
<li>Final chatbot reply: "We will send the 'pizz' pizzas to the address 'direcc'. Thank you very much for your order."</li>
</ol>
</td></tr>
</table>
```
pizz = input("Good afternoon, welcome to the online ordering service. How many pizzas would you like? ")
direcc = input("Great, " + pizz + " pizzas are being prepared. Tell me your address, please ")
print("We will send the", pizz, "pizzas to the address", direcc, ". Thank you very much")
```
## 9. None
A reserved word in Python for the null value. `None` is not 0, nor an empty string, nor `False`; it is simply one more data type, used to represent the absence of a value.
```
print(None)
print(type(None))
```
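The idiomatic way to test for the null value is the `is` operator, since `None` is a singleton:

```
result = None
# Compare against None with 'is', not '=='
if result is None:
    print("No value yet")
print(result is None)
print(result == 0)    # False: None is not 0
print(result == "")   # False: nor an empty string
```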
## 10. Syntax and best practices
When writing Python there are certain rules to keep in mind:
* Everything you open, you have to close: parentheses, braces, brackets...
* Decimals are written with points `.`
* Best practices
* **Characters**: do NOT use Ñ, accents or unusual characters (ª,º,@,ç...) in code. Keep them to the comments only.
* **Spaces**: do NOT use spaces in variable or function names. Underscores are recommended to simulate the space, or joining the words and capitalizing each one to tell them apart, `miVariable`. The usual style is all lowercase with underscores
* That said, spaces between tokens are recommended to ease reading, although this is largely a matter of taste: `mi_variable = 36`.
* Variables are usually declared in lowercase.
* Constants (variables that will never change) in uppercase: `MI_PAIS = "España"`
* **One statement per line**. You can use `;` to declare several variables on one line, but it is not the usual style
* **Comments**: as many as possible. You never know when someone else will pick up your spectacular code, or whether your future *self* will remember why you wrote that while loop instead of a for.
* **Case sensitive**: uppercase and lowercase matter. Be CAREFUL with this when declaring variables or using Strings
* **Line syntax**: for readable code, it is best to split long statements across lines whenever possible
```
# This is what we mean by line syntax
lista_compra = ['Apples',
'Cookies',
'Chicken',
'Cereal']
```
### The Zen of Python
Any advice I could give you so far pales next to the principles that *Tim Peters*, one of the biggest contributors to the creation of this language, laid down in 1999
```
import this
```
## 11. Summary
```
# Declare variables
var = 10
# Reassign values
var = 12
# Print to screen
print("First line")
print("Declared variable is:", var)
print("Declared variable is: %s" % (var))
"""
Multiline
comments
"""
# Delete variables
del var
# Data types
print("\n") # Just to add a line break in the output
print(type(1)) # Int
print(type(1.0)) # Float
print(type("1")) # String
print(type(True)) # Boolean
# Type conversions
print("\n")
var2 = 4
print(type(var2))
var2 = str(var2)
print(type(var2))
var2 = bool(var2)
print(type(var2))
# The null value
print(None)
# Input of variables
var3 = input("Enter a variable ")
```
# Logistic Regression With Linear Boundary Demo
> ☝Before moving on with this demo you might want to take a look at:
> - 📗[Math behind the Logistic Regression](https://github.com/trekhleb/homemade-machine-learning/tree/master/homemade/logistic_regression)
> - ⚙️[Logistic Regression Source Code](https://github.com/trekhleb/homemade-machine-learning/blob/master/homemade/logistic_regression/logistic_regression.py)
**Logistic regression** is the appropriate regression analysis to conduct when the dependent variable is dichotomous (binary). Like all regression analyses, the logistic regression is a predictive analysis. Logistic regression is used to describe data and to explain the relationship between one dependent binary variable and one or more nominal, ordinal, interval or ratio-level independent variables.
Logistic Regression is used when the dependent variable (target) is categorical.
For example:
- To predict whether an email is spam (`1`) or not (`0`).
- Whether online transaction is fraudulent (`1`) or not (`0`).
- Whether the tumor is malignant (`1`) or not (`0`).
> **Demo Project:** In this example we will try to classify Iris flowers into three categories (`Iris setosa`, `Iris virginica` and `Iris versicolor`) based on the `petal_length` and `petal_width` parameters.
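Before diving in, it helps to recall that logistic regression squeezes a linear combination of the features through the sigmoid function to obtain a probability in (0, 1). A minimal sketch of that building block (our own illustration, not the repository's implementation):

```
import numpy as np

def sigmoid(z):
    # Maps any real score to the (0, 1) interval
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0))     # 0.5: the decision boundary
print(sigmoid(10))    # large positive scores saturate toward 1
print(sigmoid(-10))   # large negative scores saturate toward 0
```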
```
# To make debugging of logistic_regression module easier we enable imported modules autoreloading feature.
# By doing this you may change the code of logistic_regression library and all these changes will be available here.
%load_ext autoreload
%autoreload 2
# Add project root folder to module loading paths.
import sys
sys.path.append('../..')
```
### Import Dependencies
- [pandas](https://pandas.pydata.org/) - library that we will use for loading and displaying the data in a table
- [numpy](http://www.numpy.org/) - library that we will use for linear algebra operations
- [matplotlib](https://matplotlib.org/) - library that we will use for plotting the data
- [logistic_regression](https://github.com/trekhleb/homemade-machine-learning/blob/master/homemade/logistic_regression/logistic_regression.py) - custom implementation of logistic regression
```
# Import 3rd party dependencies.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Import custom logistic regression implementation.
from homemade.logistic_regression import LogisticRegression
```
### Load the Data
In this demo we will use [Iris data set](http://archive.ics.uci.edu/ml/datasets/Iris).
The data set consists of several samples from each of three species of Iris (`Iris setosa`, `Iris virginica` and `Iris versicolor`). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Based on the combination of these four features, [Ronald Fisher](https://en.wikipedia.org/wiki/Iris_flower_data_set) developed a linear discriminant model to distinguish the species from each other.
```
# Load the data.
data = pd.read_csv('../../data/iris.csv')
# Print the data table.
data.head(10)
```
### Plot the Data
Let's take two parameters `petal_length` and `petal_width` for each flower into consideration and plot the dependency of the Iris class on these two parameters.
```
# List of supported Iris classes.
iris_types = ['SETOSA', 'VERSICOLOR', 'VIRGINICA']
# Pick the Iris parameters for consideration.
x_axis = 'petal_length'
y_axis = 'petal_width'
# Plot the scatter for every type of Iris.
for iris_type in iris_types:
plt.scatter(
data[x_axis][data['class'] == iris_type],
data[y_axis][data['class'] == iris_type],
label=iris_type
)
# Plot the data.
plt.xlabel(x_axis + ' (cm)')
plt.ylabel(y_axis + ' (cm)')
plt.title('Iris Types')
plt.legend()
plt.show()
```
### Prepare the Data for Training
Let's extract the `petal_length` and `petal_width` data to form a training feature set, and let's also form our training labels set.
```
# Get total number of Iris examples.
num_examples = data.shape[0]
# Get features.
x_train = data[[x_axis, y_axis]].values.reshape((num_examples, 2))
# Get labels.
y_train = data['class'].values.reshape((num_examples, 1))
```
### Init and Train Logistic Regression Model
> ☝🏻This is the place where you might want to play with model configuration.
- `max_iterations` - the maximum number of iterations that the gradient descent algorithm will use to find the minimum of the cost function. Low numbers may prevent gradient descent from reaching the minimum. High numbers will make the algorithm run longer without improving its accuracy.
- `regularization_param` - parameter that helps fight overfitting. The higher the parameter, the simpler the model will be.
- `polynomial_degree` - the degree of additional polynomial features (`x1^2 * x2, x1^2 * x2^2, ...`). More features make the decision boundary more curved.
- `sinusoid_degree` - the degree of sinusoid parameter multipliers of additional features (`sin(x), sin(2*x), ...`). This allows you to curve the predictions by adding a sinusoidal component to the prediction curve.
```
# Set up logistic regression parameters.
max_iterations = 1000 # Max number of gradient descent iterations.
regularization_param = 0 # Helps to fight model overfitting.
polynomial_degree = 0 # The degree of additional polynomial features.
sinusoid_degree = 0 # The degree of sinusoid parameter multipliers of additional features.
# Init logistic regression instance.
logistic_regression = LogisticRegression(x_train, y_train, polynomial_degree, sinusoid_degree)
# Train logistic regression.
(thetas, costs) = logistic_regression.train(regularization_param, max_iterations)
# Print model parameters that have been learned.
pd.DataFrame(thetas, columns=['Theta 1', 'Theta 2', 'Theta 3'], index=['SETOSA', 'VERSICOLOR', 'VIRGINICA'])
```
### Analyze Gradient Descent Progress
The plot below illustrates how the cost function value changes over each iteration. You should see it decreasing.
In case if cost function value increases it may mean that gradient descent missed the cost function minimum and with each step it goes further away from it.
From this plot you may also get an understanding of how many iterations you need to get an optimal value of the cost function.
```
# Draw gradient descent progress for each label.
labels = logistic_regression.unique_labels
plt.plot(range(len(costs[0])), costs[0], label=labels[0])
plt.plot(range(len(costs[1])), costs[1], label=labels[1])
plt.plot(range(len(costs[2])), costs[2], label=labels[2])
plt.xlabel('Gradient Steps')
plt.ylabel('Cost')
plt.legend()
plt.show()
```
### Calculate Model Training Precision
Calculate how many flowers from the training set have been guessed correctly.
```
# Make training set predictions.
y_train_predictions = logistic_regression.predict(x_train)
# Check what percentage of them are actually correct.
precision = np.sum(y_train_predictions == y_train) / y_train.shape[0] * 100
print('Precision: {:5.4f}%'.format(precision))
```
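Beyond the overall percentage, it can be useful to break the accuracy down per class to see which species the model confuses. A small illustrative sketch with synthetic labels (the notebook itself would use `y_train` and `y_train_predictions`):

```
import numpy as np

# Synthetic ground truth and predictions, just for illustration
y_true = np.array(['SETOSA', 'SETOSA', 'VERSICOLOR', 'VIRGINICA'])
y_pred = np.array(['SETOSA', 'VERSICOLOR', 'VERSICOLOR', 'VIRGINICA'])

overall = np.mean(y_true == y_pred) * 100
print('Overall accuracy: {:5.4f}%'.format(overall))

# Per-class accuracy reveals which labels are misclassified
for label in np.unique(y_true):
    mask = y_true == label
    print(label, '{:5.4f}%'.format(np.mean(y_pred[mask] == y_true[mask]) * 100))
```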
### Draw Decision Boundaries
Let's build our decision boundaries. These are the lines that distinguish the classes from each other, which gives us a pretty clear overview of how successful the training process was. You should see a clear separation into three sectors on the data plane.
```
# Get the number of training examples.
num_examples = x_train.shape[0]
# Set up how many calculations we want to do along every axis.
samples = 150
# Generate test ranges for x and y axis.
x_min = np.min(x_train[:, 0])
x_max = np.max(x_train[:, 0])
y_min = np.min(x_train[:, 1])
y_max = np.max(x_train[:, 1])
X = np.linspace(x_min, x_max, samples)
Y = np.linspace(y_min, y_max, samples)
# z axis will contain our predictions. So let's get predictions for every pair of x and y.
Z_setosa = np.zeros((samples, samples))
Z_versicolor = np.zeros((samples, samples))
Z_virginica = np.zeros((samples, samples))
for x_index, x in enumerate(X):
for y_index, y in enumerate(Y):
data = np.array([[x, y]])
prediction = logistic_regression.predict(data)[0][0]
if prediction == 'SETOSA':
Z_setosa[x_index][y_index] = 1
elif prediction == 'VERSICOLOR':
Z_versicolor[x_index][y_index] = 1
elif prediction == 'VIRGINICA':
Z_virginica[x_index][y_index] = 1
# Now that the x, y and z axes are set up and calculated, we can plot the decision boundaries.
for iris_type in iris_types:
plt.scatter(
x_train[(y_train == iris_type).flatten(), 0],
x_train[(y_train == iris_type).flatten(), 1],
label=iris_type
)
plt.contour(X, Y, Z_setosa)
plt.contour(X, Y, Z_versicolor)
plt.contour(X, Y, Z_virginica)
plt.xlabel(x_axis + ' (cm)')
plt.ylabel(y_axis + ' (cm)')
plt.title('Iris Types')
plt.legend()
plt.show()
```
# Working with Tensorforce to Train a Reinforcement-Learning Agent
This notebook serves as an educational introduction to the usage of Tensorforce using a gym-electric-motor (GEM) environment. The goal of this notebook is to give an understanding of what tensorforce is and how to use it to train and evaluate a reinforcement learning agent that can solve a current control problem of the GEM toolbox.
## 1. Installation
Before you can start you need to make sure that you have both gym-electric-motor and tensorforce installed. You can install both easily using pip:
- ```pip install gym-electric-motor```
- ```pip install tensorforce```
Alternatively, you can install their latest developer version directly from GitHub:
- [GitHub Gym-Electric-Motor](https://github.com/upb-lea/gym-electric-motor)
- [GitHub Tensorforce](https://github.com/tensorforce/tensorforce)
For this notebook, the following cell will do the job:
```
!pip install -q git+https://github.com/upb-lea/gym-electric-motor.git tensorforce==0.5.5
%matplotlib notebook
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm
```
## 2. Setting up a GEM Environment
The basic idea behind reinforcement learning is to create a so-called agent, that should learn by itself to solve a specified task in a given environment.
This environment gives the agent feedback on its actions and reinforces the targeted behavior.
In this notebook, the task is to train a controller for the current control of a *permanent magnet synchronous motor* (*PMSM*).
In the following, the used GEM-environment is briefly presented, but this notebook does not focus directly on the detailed usage of GEM. If you are new to the used environment and interested in finding out what it does and how to use it, you should take a look at the [GEM cookbook](https://colab.research.google.com/github/upb-lea/gym-electric-motor/blob/master/examples/example_notebooks/GEM_cookbook.ipynb).
To save some space in this notebook, there is a function defined in an external python file called **getting_environment.py**. If you want to know how the environment's parameters are defined you can take a look at that file. By simply calling the **get_env()** function from the external file, you can set up an environment for a *PMSM* with discrete inputs.
The basic idea of the control setup from the GEM-environment is displayed in the following figure.

The agent controls the converter who converts the supply currents to the currents flowing into the motor - for the *PMSM*: $i_{sq}$ and $i_{sd}$
In the continuous case, the agent's action equals a duty cycle which will be modulated into a corresponding voltage.
In the discrete case, the agent's actions denote switching states of the converter at the given instant. Here, only a discrete amount of options are available. In this notebook, for the PMSM the *discrete B6 bridge converter* with six switches is utilized per default. This converter provides a total of eight possible actions.

The motor schematic is the following:

And the electrical ODEs for that motor are:
<h3 align="center">
$ \frac{\mathrm{d}i_{sd}}{\mathrm{d}t}=\frac{u_{sd} + p\omega_{me}L_q i_{sq} - R_s i_{sd}}{L_d} $ <br><br>
$\frac{\mathrm{d} i_{sq}}{\mathrm{d} t}=\frac{u_{sq} - p \omega_{me} (L_d i_{sd} + \mathit{\Psi}_p) - R_s i_{sq}}{L_q}$ <br><br>
$\frac{\mathrm{d}\epsilon_{el}}{\mathrm{d}t} = p\omega_{me}$
</h3>
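These ODEs can be discretized with a simple forward-Euler step, which is also the integration scheme selected for the environment below (`ode_solver='euler'`). The following sketch uses the motor parameters from the upcoming cell; it is only an illustration of the model, not GEM's internal implementation:

```python
import numpy as np

def euler_step(i_sd, i_sq, eps_el, u_sd, u_sq, omega_me, tau=1e-5,
               p=3, r_s=17.932e-3, l_d=0.37e-3, l_q=1.2e-3, psi_p=65.65e-3):
    """One forward-Euler step of the PMSM current and angle ODEs above."""
    di_sd = (u_sd + p * omega_me * l_q * i_sq - r_s * i_sd) / l_d
    di_sq = (u_sq - p * omega_me * (l_d * i_sd + psi_p) - r_s * i_sq) / l_q
    deps = p * omega_me
    return i_sd + tau * di_sd, i_sq + tau * di_sq, eps_el + tau * deps

# at standstill with zero applied voltage, the state does not change
print(euler_step(0.0, 0.0, 0.0, u_sd=0.0, u_sq=0.0, omega_me=0.0))
```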
The target for the agent is now to learn to control the currents. For this, a reference generator produces a trajectory that the agent has to follow.
Therefore, it has to learn a function (policy) from given states, references and rewards to appropriate actions.
For a deeper understanding of the used models behind the environment see the [documentation](https://upb-lea.github.io/gym-electric-motor/).
Comprehensive learning material on RL is also [freely available](https://github.com/upb-lea/reinforcement_learning_course_materials).
```
import numpy as np
from pathlib import Path
import gym_electric_motor as gem
from gym_electric_motor.reference_generators import \
MultipleReferenceGenerator,\
WienerProcessReferenceGenerator
from gym_electric_motor.visualization import MotorDashboard
from gym_electric_motor.core import Callback
from gym.spaces import Discrete, Box
from gym.wrappers import FlattenObservation, TimeLimit
from gym import ObservationWrapper
# helper functions and classes
class FeatureWrapper(ObservationWrapper):
"""
Wrapper class which wraps the environment to change its observation. Serves
the purpose to improve the agent's learning speed.
It changes epsilon to cos(epsilon) and sin(epsilon). This serves the purpose
to have the angles -pi and pi close to each other numerically without losing
any information on the angle.
Additionally, this wrapper adds a new observation i_sd**2 + i_sq**2. This should
help the agent to easier detect incoming limit violations.
"""
def __init__(self, env, epsilon_idx, i_sd_idx, i_sq_idx):
"""
Changes the observation space to fit the new features
Args:
env(GEM env): GEM environment to wrap
epsilon_idx(integer): Epsilon's index in the observation array
i_sd_idx(integer): I_sd's index in the observation array
i_sq_idx(integer): I_sq's index in the observation array
"""
super(FeatureWrapper, self).__init__(env)
self.EPSILON_IDX = epsilon_idx
self.I_SQ_IDX = i_sq_idx
self.I_SD_IDX = i_sd_idx
new_low = np.concatenate((self.env.observation_space.low[
:self.EPSILON_IDX], np.array([-1.]),
self.env.observation_space.low[
self.EPSILON_IDX:], np.array([0.])))
new_high = np.concatenate((self.env.observation_space.high[
:self.EPSILON_IDX], np.array([1.]),
self.env.observation_space.high[
self.EPSILON_IDX:],np.array([1.])))
self.observation_space = Box(new_low, new_high)
def observation(self, observation):
"""
Gets called at each return of an observation. Adds the new features to the
observation and removes original epsilon.
"""
cos_eps = np.cos(observation[self.EPSILON_IDX] * np.pi)
sin_eps = np.sin(observation[self.EPSILON_IDX] * np.pi)
currents_squared = observation[self.I_SQ_IDX]**2 + observation[self.I_SD_IDX]**2
observation = np.concatenate((observation[:self.EPSILON_IDX],
np.array([cos_eps, sin_eps]),
observation[self.EPSILON_IDX + 1:],
np.array([currents_squared])))
return observation
# define motor arguments
motor_parameter = dict(p=3, # [p] = 1, nb of pole pairs
r_s=17.932e-3, # [r_s] = Ohm, stator resistance
l_d=0.37e-3, # [l_d] = H, d-axis inductance
l_q=1.2e-3, # [l_q] = H, q-axis inductance
psi_p=65.65e-3, # [psi_p] = Vs, magnetic flux of the permanent magnet
)
# supply voltage
u_sup = 350
# nominal and absolute state limitations
nominal_values=dict(omega=4000*2*np.pi/60,
i=230,
u=u_sup
)
limit_values=dict(omega=4000*2*np.pi/60,
i=1.5*230,
u=u_sup
)
# defining reference-generators
q_generator = WienerProcessReferenceGenerator(reference_state='i_sq')
d_generator = WienerProcessReferenceGenerator(reference_state='i_sd')
rg = MultipleReferenceGenerator([q_generator, d_generator])
# defining sampling interval
tau = 1e-5
# defining maximal episode steps
max_eps_steps = 10_000
motor_initializer={'random_init': 'uniform', 'interval': [[-230, 230], [-230, 230], [-np.pi, np.pi]]}
reward_function=gem.reward_functions.WeightedSumOfErrors(
reward_weights={'i_sq': 10, 'i_sd': 10},
gamma=0.99, # discount rate
reward_power=1)
# creating gem environment
env = gem.make( # define a PMSM with discrete action space
"PMSMDisc-v1",
# visualize the results
visualization=MotorDashboard(state_plots=['i_sq', 'i_sd'], reward_plot=True),
# parameterize the PMSM and update limitations
motor_parameter=motor_parameter,
limit_values=limit_values, nominal_values=nominal_values,
# define the random initialisation for load and motor
load='ConstSpeedLoad',
load_initializer={'random_init': 'uniform', },
motor_initializer=motor_initializer,
reward_function=reward_function,
# define the duration of one sampling step
tau=tau, u_sup=u_sup,
# turn off terminations via limit violation, parameterize the rew-fct
reference_generator=rg, ode_solver='euler',
)
# remove one action from the action space to help the agent speed up its training
# this can be done because the switching states (1,1,1) and (-1,-1,-1) - which are encoded
# by actions 0 and 7 - both lead to the same zero voltage vector in alpha/beta-coordinates
env.action_space = Discrete(7)
# applying wrappers
eps_idx = env.physical_system.state_names.index('epsilon')
i_sd_idx = env.physical_system.state_names.index('i_sd')
i_sq_idx = env.physical_system.state_names.index('i_sq')
env = TimeLimit(FeatureWrapper(FlattenObservation(env),
eps_idx, i_sd_idx, i_sq_idx),
max_eps_steps)
```
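The motivation for the angle features in `FeatureWrapper` can be checked numerically. As the multiplication by `np.pi` in the wrapper suggests, epsilon is normalized to $[-1, 1]$ (angle divided by $\pi$), so the raw values $-1$ and $+1$ denote the same rotor position while being numerically far apart; their cos/sin encoding coincides. A standalone check, independent of the environment:

```python
import numpy as np

def angle_features(eps_normalized):
    """Encode a normalized angle (angle / pi, in [-1, 1]) as cos/sin."""
    return np.cos(eps_normalized * np.pi), np.sin(eps_normalized * np.pi)

# -1 and +1 denote the same physical angle; their encodings coincide
print(angle_features(-1.0))  # ~(-1, 0)
print(angle_features(1.0))   # ~(-1, 0)
```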
## 3. Using Tensorforce
To take advantage of some already implemented deep-RL agents, we use the *tensorforce-framework*. It is built on *TensorFlow* and offers agents based on deep Q-networks, policy gradients, or actor-critic algorithms.
For more information about specific agents or the different modules that can be used, good explanations can be found in the corresponding [documentation](https://tensorforce.readthedocs.io/en/latest/).
For the control task with a discrete action space we will use a [deep Q-network (DQN)](https://www.nature.com/articles/nature14236).
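At its core, a DQN regresses a neural Q-function toward bootstrapped targets of the form $r + \gamma \max_a Q_{\mathrm{target}}(s', a)$, with the bootstrap term dropped for terminal transitions. A minimal numpy sketch of that target computation (not tensorforce's implementation):

```python
import numpy as np

def dqn_target(reward, next_q_values, terminal, gamma=0.99):
    """Bootstrapped DQN regression target for a single transition."""
    return reward + gamma * (0.0 if terminal else float(np.max(next_q_values)))

print(dqn_target(1.0, np.array([0.2, 0.5, 0.1]), terminal=False))  # 1.495
print(dqn_target(1.0, np.array([0.2, 0.5, 0.1]), terminal=True))   # 1.0
```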
### 3.1 Defining a Tensorforce-Environment
Tensorforce requires you to define a *tensorforce-environment*. This is done simply by using the ```Environment.create``` interface, which acts as a wrapper around usual [gym](https://github.com/openai/gym) instances.
```
from tensorforce.environments import Environment
# creating tensorforce environment
tf_env = Environment.create(environment=env,
max_episode_timesteps=max_eps_steps)
```
### 3.2 Setting-up a Tensorforce-Agent
The agent is created just like the environment. The agent's parameters can be passed as arguments to the ```create()``` function or via a configuration, either as a dictionary or as a *.json* file.
In the following, the dictionary variant is demonstrated.
With the *tensorforce-framework* it is possible to define your own network architectures, as shown below.
For some parameters, it can be useful to have a decaying value during the training. A possible way to do this is also shown in the following code.
The exact meaning of the parameters used can be found in the aforementioned tensorforce documentation.
```
# using a parameter decay for the exploration
epsilon_decay = {'type': 'decaying',
'decay': 'polynomial',
'decay_steps': 50000,
'unit': 'timesteps',
'initial_value': 1.0,
'decay_rate': 5e-2,
'final_value': 5e-2,
'power': 3.0}
# defining a simple network architecture: 2 dense-layers with 64 nodes each
net = [
dict(type='dense', size=64, activation='relu'),
dict(type='dense', size=64, activation='relu'),
]
# defining the parameters of a dqn-agent
agent_config = {
'agent': 'dqn',
'memory': 200000,
'batch_size': 25,
'network': net,
'update_frequency': 1,
'start_updating': 10000,
'learning_rate': 1e-4,
'discount': 0.99,
'exploration': epsilon_decay,
'target_sync_frequency': 1000,
'target_update_weight': 1.0,
}
from tensorforce.agents import Agent
tau = 1e-5
simulation_time = 2 # seconds
training_steps = int(simulation_time // tau)
# creating agent via dictionary
dqn_agent = Agent.create(agent=agent_config, environment=tf_env)
```
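Under the assumption that the `'polynomial'` decay follows the usual TensorFlow-style schedule, the exploration rate configured above evolves roughly as follows. This is an illustrative sketch, not tensorforce's exact code:

```python
def polynomial_decay(step, initial=1.0, final=5e-2,
                     decay_steps=50_000, power=3.0):
    """Polynomial decay from `initial` to `final` over `decay_steps` steps."""
    t = min(step, decay_steps) / decay_steps
    return final + (initial - final) * (1.0 - t) ** power

# exploration starts at 1.0 and settles at 0.05 after 50k steps
for step in (0, 25_000, 50_000, 100_000):
    print(step, round(polynomial_decay(step), 4))
```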
### 3.3 Training the Agent
Training the agent is executed with the **tensorforce-runner**. The runner stores metrics during the training, like the reward per episode, and can be used to save learned agents. If you just want to experiment a little with an already trained agent, it is possible to skip the next cells and just load a pre-trained agent.
```
from tensorforce.execution import Runner
# create and train the agent
runner = Runner(agent=dqn_agent, environment=tf_env)
runner.run(num_timesteps=training_steps)
```
By accessing the metrics saved by the runner, it is possible to have a look at the mean reward per episode or the corresponding episode length.
```
# accessing the metrics from the runner
rewards = np.asarray(runner.episode_rewards)
episode_length = np.asarray(runner.episode_timesteps)
# calculating the mean-reward per episode
mean_reward = rewards/episode_length
num_episodes = len(mean_reward)
# plotting mean-reward over episodes
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20,10))
ax1.plot(range(num_episodes), mean_reward, linewidth=3)
#plt.xticks(fontsize=15)
ax1.set_ylabel('mean-reward', fontsize=22)
ax1.grid(True)
ax1.tick_params(axis="y", labelsize=15)
# plotting episode length over episodes
ax2.plot(range(num_episodes), episode_length, linewidth=3)
ax2.set_xlabel('# episodes', fontsize=22)
ax2.set_ylabel('episode-length', fontsize=22)
ax2.tick_params(axis="y", labelsize=15)
ax2.tick_params(axis="x", labelsize=15)
ax2.grid(True)
plt.show()
print('number of episodes during training: ', len(rewards))
```
Saving the agent's trained model makes it available for separate evaluation and further usage.
```
agent_path = Path('saved_agents')
agent_path.mkdir(parents=True, exist_ok=True)
agent_name = 'dqn_agent_tensorforce'
runner.agent.save(directory=str(agent_path), filename=agent_name)
print('\n agent saved \n')
runner.close()
```
## 4. Evaluating the Trained Agent
### 4.1 Loading a Model
If a previously saved agent is available, it can be restored with the agent's ```load()``` function. To load the agent, it is necessary to pass the directory, the filename, the environment, and the agent configuration used for training.
```
from tensorforce import Agent
dqn_agent = Agent.load(
directory=str(agent_path),
filename=agent_name,
environment=tf_env,
**agent_config
)
print('\n agent loaded \n')
```
### 4.2 Evaluating the Agent
To use the trained agent as a controller, a typical loop to interact with the environment can be used, which is displayed in the cell below.
Now the agent takes the observations from the environment and reacts with an action, which is used to control the environment. To get an impression of how the trained agent performs, the trajectory of the controlled states can be observed. A live plot will be displayed in a Jupyter notebook. If you are using JupyterLab, the following cell could cause problems with the visualization.
```
%matplotlib notebook
# currently the visualization crashes for values larger than the one defined here
visualization_steps = int(9e4)
obs = env.reset()
for step in range(visualization_steps):
# getting the next action from the agent
actions = dqn_agent.act(obs, evaluation=True)
# the env returns the next state, the reward and the information whether the state is terminal
obs, reward, done, _ = env.step(action=actions)
# activating the visualization
env.render()
if done:
# resetting the env, if a terminal state is reached
obs = env.reset()
```
In the next example, a classic *environment-interaction loop* is extended to access different metrics and values, e.g. the cumulated reward over all steps. The number of evaluation steps can be reduced, but a higher variance in the evaluation result must then be accepted.
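The variance trade-off can be made concrete: the standard error of the estimated mean reward per step shrinks roughly with $1/\sqrt{n}$. A quick synthetic check with made-up stand-in rewards, not values from the environment:

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = rng.normal(loc=-1.0, scale=2.0, size=250_000)  # stand-in rewards

# standard error of the mean reward estimate for increasing step counts
errors = {}
for n in (1_000, 10_000, 250_000):
    errors[n] = rewards[:n].std() / np.sqrt(n)
    print(n, round(rewards[:n].mean(), 3), round(errors[n], 4))
```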
```
# test agent
steps = 250000
rewards = []
episode_lens = []
obs = env.reset()
terminal = False
cumulated_rew = 0
step_counter = 0
episode_rew = 0
for step in (range(steps)):
actions = dqn_agent.act(obs, evaluation=True)
obs, reward, done, _ = env.step(action=actions)
cumulated_rew += reward
episode_rew += reward
step_counter += 1
if done:
rewards.append(episode_rew)
episode_lens.append(step_counter)
episode_rew = 0
step_counter = 0
obs = env.reset()
done = False
print(f' \n Cumulated reward per step is {cumulated_rew/steps} \n')
print(f' \n Number of episodes: {len(episode_lens)} \n')
%matplotlib inline
# accessing the metrics from the evaluation loop
rewards = np.asarray(rewards)
episode_length = np.asarray(episode_lens)
# calculating the mean-reward per episode
mean_reward = rewards/episode_length
num_episodes = len(rewards)
# plotting mean-reward over episodes
f, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(20, 10))
ax1.plot(range(num_episodes), mean_reward, linewidth=3)
#plt.xticks(fontsize=15)
ax1.set_ylabel('reward', fontsize=22)
ax1.grid(True)
ax1.tick_params(axis="y", labelsize=15)
# plotting episode length over episodes
ax2.plot(range(num_episodes), episode_length, linewidth=3)
ax2.set_xlabel('# episodes', fontsize=22)
ax2.set_ylabel('episode-length', fontsize=20)
ax2.tick_params(axis="y", labelsize=15)
ax2.tick_params(axis="x", labelsize=15)
ax2.grid(True)
plt.show()
print('number of episodes during evaluation: ', len(episode_lens))
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
p = plt.rcParams.find_all(pattern='size')
#plt.rcParams['font.size'] = 14
p
def scale_text(scale='bigger',fig_adj=False):
if isinstance(scale,str):
if scale=='bigger':
scale = 1.25
elif scale=='smaller':
scale = 0.8
elif scale=='default':
size_params = plt.rcParams.find_all(pattern='font.size')
size_params.update(plt.rcParams.find_all(pattern='labelsize'))
fig_sizes = plt.rcParams['figure.figsize']
if fig_adj:
plt.rcParams['figure.figsize'] = [fig_sizes[0]*scale,fig_sizes[1]*scale]
for i in size_params:
if isinstance(plt.rcParams[i],float):
plt.rcParams[i]*=scale
scale_text('smaller')
def squarefig(fold_on='x'):
sizes = plt.rcParams['figure.figsize']
if fold_on == 'x':
plt.rcParams['figure.figsize'] = [sizes[0],sizes[0]]
elif fold_on == 'y':
plt.rcParams['figure.figsize'] = [sizes[1],sizes[1]]
squarefig()
plt.show()
def demo():
ews = np.loadtxt('measured_brg_ews.txt',usecols=(1,2,3,4,5))
ews = ews.T
measured_ews = ews[0]
measured_ew_unc = ews[1]
measured_ew_unc_log = 0.434 * (measured_ew_unc / measured_ews)
joel_ews = ews[3]
joel_minus = ews[2]
joel_plus = ews[4]
joel_errs = np.array([joel_ews-joel_minus,joel_plus-joel_ews])
joel_errs_log = 0.434 * (joel_errs / joel_ews)
#plot_bounds = np.array([np.max([measured_ews,joel_plus]),np.max([measured_ews,joel_plus])])
#plt.rcParams.update({'font.size': 15})
fig, ax = plt.subplots()
ax.errorbar(np.log10(joel_ews),np.log10(measured_ews),yerr=measured_ew_unc_log,xerr=joel_errs_log,fmt='None',color='#113166',alpha=0.15)
ax.plot(np.log10(joel_ews),np.log10(measured_ews),'s',color='#006289',ms=7,alpha=0.9)
#ax.plot([0.0001,np.max([measured_ews,joel_plus])],[0.0001,np.max([measured_ews,joel_plus])],color='k',alpha=0.8)
#plt.subplots_adjust(right=0.98,top=0.98)
#ax.set_xscale('log')
#ax.set_yscale('log')
#ax.tick_params(axis='both',which='both',direction='in',top=True,right=True)
ax.set_ylabel('log EW[Br-$\gamma$] (Measured)')
ax.set_xlabel('log EW[Br-$\gamma$] (predicted from photometry)')
ax.set_xlim((-0.3,1.8))
ax.set_ylim((-1.05,2.5))
#ax.set_xticks([-0.5,0.0,0.5,1.0,])
#ax.set_yticks([-0.5,0.0,0.5,1.0,1.5])
ax.plot([-5,5],[-5,5],'k',alpha=0.5)
ax.text(1.6,-0.72,'mean offset = 0.07 dex',horizontalalignment='right')
ax.text(1.6,-0.9,'biweight scatter = 0.27 dex',horizontalalignment='right')
#ax.plot(0.55,-0.48,'v',color='#7c0b0b',alpha=0.8)
plt.show()
#residuals = measured_ews - joel_ews
#std = np.log10(np.std(residuals))
#print(std)
#fig2, ax2 = plt.subplots(figsize=(5,5))
#ax2.errorbar(measured_ews,joel_ews-measured_ews,yerr=joel_errs,xerr=measured_ew_unc,fmt='s',alpha=0.8)
#plt.show()
class Astroplots():
def __init__(self):
self.orig_params = plt.rcParams.copy()
def default_all(self):
for i in plt.rcParams.keys():
plt.rcParams[i] = self.orig_params[i]
def squarefig(self,fold_on='x'):
sizes = plt.rcParams['figure.figsize']
if fold_on == 'x':
plt.rcParams['figure.figsize'] = [sizes[0],sizes[0]]
elif fold_on == 'y':
plt.rcParams['figure.figsize'] = [sizes[1],sizes[1]]
def bigger_onplot_text(self,scale):
        plt.rcParams['font.size'] *= scale  # note: 'figure.fontsize' is not a valid rcParam
def scale_text(self,scale='bigger',fig_adj=False):
if isinstance(scale,str):
if scale=='bigger':
scale = 1.25
elif scale=='smaller':
scale = 0.8
elif scale=='default':
pass
size_params = plt.rcParams.find_all(pattern='font.size')
size_params.update(plt.rcParams.find_all(pattern='labelsize'))
fig_sizes = plt.rcParams['figure.figsize']
if fig_adj:
plt.rcParams['figure.figsize'] = [fig_sizes[0]*scale,fig_sizes[1]*scale]
for i in size_params:
if isinstance(plt.rcParams[i],float):
plt.rcParams[i]*=scale
astroplots = Astroplots()
astroplots.default_all()
astroplots.squarefig()
astroplots.scale_text(1.5,fig_adj=True)
demo()
astroplots.default_all()
demo()
astroplots.default_all()
astroplots.squarefig()
astroplots.scale_text(1.2,fig_adj=True)
demo()
astroplots.default_all()
astroplots.squarefig()
astroplots.scale_text(0.8,fig_adj=False)
demo()
astroplots.default_all()
astroplots.squarefig()
astroplots.scale_text([1,2,3])
demo()
```
<a href="https://colab.research.google.com/github/martin-fabbri/colab-notebooks/blob/master/deeplearning.ai/nlp/c3_w1_03_trax_intro_2.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Trax : Ungraded Lecture Notebook
In this notebook you'll get to know about the Trax framework and learn about some of its basic building blocks.
## Background
### Why Trax and not TensorFlow or PyTorch?
TensorFlow and PyTorch are both extensive frameworks that can do almost anything in deep learning. They offer a lot of flexibility, but that often means verbosity of syntax and extra time to code.
Trax is much more concise. It runs on a TensorFlow backend but allows you to train models with 1 line commands. Trax also runs end to end, allowing you to get data, build a model, and train it, all with single terse statements. This means you can focus on learning, instead of spending hours on the idiosyncrasies of big framework implementation.
### Why not Keras then?
Keras is now part of TensorFlow itself from 2.0 onwards. Also, Trax is well suited for implementing new state-of-the-art algorithms like Transformers, Reformers, and BERT, because it is actively maintained by the Google Brain team for advanced deep learning tasks. It runs smoothly on CPUs, GPUs, and TPUs, with comparatively fewer code modifications.
### How to Code in Trax
Building models in Trax relies on two key concepts: **layers** and **combinators**.
Trax layers are simple objects that process data and perform computations. They can be chained together into composite layers using Trax combinators, allowing you to build layers and models of any complexity.
### Trax, JAX, TensorFlow and Tensor2Tensor
You already know that Trax uses TensorFlow as a backend, but it also uses the JAX library to speed up computation. You can view JAX as an enhanced and optimized version of numpy.
**Watch out for assignments which import `import trax.fastmath.numpy as np`. If you see this line, remember that when calling `np` you are really calling Trax’s version of numpy that is compatible with JAX.**
As a result of this, where you used to encounter the type `numpy.ndarray` now you will find the type `jax.interpreters.xla.DeviceArray`.
Tensor2Tensor is another name you might have heard. It started as an end to end solution much like how Trax is designed, but it grew unwieldy and complicated. So you can view Trax as the new improved version that operates much faster and simpler.
### Resources
- Trax source code can be found on Github: [Trax](https://github.com/google/trax)
- JAX library: [JAX](https://jax.readthedocs.io/en/latest/index.html)
## Installing Trax
Trax depends on JAX and some other libraries that are not yet supported on [Windows](https://github.com/google/jax/blob/1bc5896ee4eab5d7bb4ec6f161d8b2abb30557be/README.md#installation) but work well on Ubuntu and macOS. If you are working on Windows, we suggest installing Trax on WSL2.
Officially maintained documentation: [trax-ml](https://trax-ml.readthedocs.io/en/latest/), not to be confused with this [TraX](https://trax.readthedocs.io/en/latest/index.html).
```
%%capture
!pip install trax==1.3.1
```
## Imports
```
%%capture
import numpy as np # regular ol' numpy
from trax import layers as tl # core building block
from trax import shapes # data signatures: dimensionality and type
from trax import fastmath # uses jax, offers numpy on steroids
# Trax version 1.3.1 or better
!pip list | grep trax
```
## Layers
Layers are the core building blocks in Trax; as mentioned in the lectures, they are the base classes.
They take inputs, compute functions or custom calculations, and return outputs.
You can also inspect layer properties. Let me show you some examples.
### Relu Layer
First I'll show you how to build a relu activation function as a layer. A layer like this is one of the simplest types. Notice there is no object initialization so it works just like a math function.
**Note: Activation functions are also layers in Trax, which might look odd if you have been using other frameworks for a longer time.**
```
# Layers
# Create a relu trax layer
relu = tl.Relu()
# Inspect properties
print("-- Properties --")
print("name :", relu.name)
print("expected inputs :", relu.n_in)
print("promised outputs :", relu.n_out, "\n")
# Inputs
x = np.array([-2, -1, 0, 1, 2])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = relu(x)
print("-- Outputs --")
print("y :", y)
```
### Concatenate Layer
Now I'll show you how to build a layer that takes 2 inputs. Notice the change in the expected inputs property from 1 to 2.
```
# Create a concatenate trax layer
concat = tl.Concatenate()
print("-- Properties --")
print("name :", concat.name)
print("expected inputs :", concat.n_in)
print("promised outputs :", concat.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2, "\n")
# Outputs
y = concat([x1, x2])
print("-- Outputs --")
print("y :", y)
```
## Layers are Configurable
You can change the default settings of layers. For example, you can change the expected inputs for a concatenate layer from 2 to 3 using the optional parameter `n_items`.
```
# Configure a concatenate layer
concat_3 = tl.Concatenate(n_items=3) # configure the layer's expected inputs
print("-- Properties --")
print("name :", concat_3.name)
print("expected inputs :", concat_3.n_in)
print("promised outputs :", concat_3.n_out, "\n")
# Inputs
x1 = np.array([-10, -20, -30])
x2 = x1 / -10
x3 = x2 * 0.99
print("-- Inputs --")
print("x1 :", x1)
print("x2 :", x2)
print("x3 :", x3, "\n")
# Outputs
y = concat_3([x1, x2, x3])
print("-- Outputs --")
print("y :", y)
```
**Note: At any point, if you need help with a function, look up the [documentation](https://trax-ml.readthedocs.io/en/latest/) or use the `help` function.**
```
# help(tl.Concatenate)  # Uncomment this to see the function docstring with explanation
```
## Layers can have Weights
Some layer types include mutable weights and biases that are used in computation and training. Layers of this type require initialization before use.
For example, the `LayerNorm` layer calculates normalized data, which is then scaled by weights and biases. During initialization, you pass the data shape and data type of the inputs so the layer can initialize compatible arrays of weights and biases.
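What `LayerNorm` computes can be written out in plain numpy: normalize to zero mean and unit variance, then scale and shift. This is only a sketch; Trax's actual implementation also uses a small epsilon for numerical stability, assumed here as `1e-6`:

```python
import numpy as np

def layer_norm(x, weights, biases, eps=1e-6):
    """Normalize x to zero mean / unit variance, then scale and shift."""
    mean, var = x.mean(), x.var()
    return (x - mean) / np.sqrt(var + eps) * weights + biases

x = np.array([0.0, 1.0, 2.0, 3.0])
y = layer_norm(x, np.ones(4), np.zeros(4))  # identity scale and shift
print(y)
```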
```
# Uncomment any of them to see information regarding the function
# help(tl.LayerNorm)
# help(shapes.signature)
# Layer initialization
norm = tl.LayerNorm()
# You first must know what the input data will look like
x = np.array([0, 1, 2, 3], dtype="float")
# Use the input data signature to get shape and type for initializing weights and biases
# We need to convert the input datatype from usual tuple to trax ShapeDtype
norm.init(shapes.signature(x))
print("Normal shape:",x.shape, "Data Type:",type(x.shape))
print("Shapes Trax:",shapes.signature(x),"Data Type:",type(shapes.signature(x)))
# Inspect properties
print("-- Properties --")
print("name :", norm.name)
print("expected inputs :", norm.n_in)
print("promised outputs :", norm.n_out)
# Weights and biases
print("weights :", norm.weights[0])
print("biases :", norm.weights[1], "\n")
# Inputs
print("-- Inputs --")
print("x :", x)
# Outputs
y = norm(x)
print("-- Outputs --")
print("y :", y)
```
## Custom Layers
This is where things start getting more interesting!
You can create your own custom layers too and define custom functions for computations by using `tl.Fn`. Let me show you how.
```
#help(tl.Fn)
# Define a custom layer
# In this example you will create a layer to calculate the input times 2
def TimesTwo():
layer_name = "TimesTwo"
def func(x):
return x * 2
return tl.Fn(layer_name, func)
# Test it
times_two = TimesTwo()
# Inspect properties
print("-- Properties --")
print("name :", times_two.name)
print("expected inputs :", times_two.n_in)
print("promised outputs :", times_two.n_out, "\n")
# Inputs
x = np.array([1, 2, 3])
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = times_two(x)
print("-- Outputs --")
print("y :", y)
```
## Combinators
You can combine layers to build more complex layers. Trax provides a set of objects named combinator layers to make this happen. Combinators are themselves layers, so they behave just like any other layer.
### Serial Combinator
This is the most common and easiest to use. For example, you could build a simple neural network by combining layers into a single layer using the `Serial` combinator. This new layer then acts just like a single layer, so you can inspect inputs, outputs, and weights, or even combine it into another layer! Combinators can then be used as trainable models. _Try adding more layers._
**Note: As you may have guessed, if there is a serial combinator, there must be a parallel combinator as well. Do explore combinators and other layers in the Trax documentation, and look at the repo to understand how these layers are written.**
```
# help(tl.Serial)
# help(tl.Parallel)
# Serial combinator
serial = tl.Serial(
tl.LayerNorm(),
tl.Relu(),
times_two,
)
# Initialization
x = np.array([-2, -1, 0, 1, 2])
serial.init(shapes.signature(x))
print("-- Serial Model --")
print(serial,"\n")
print("-- Properties --")
print("name :", serial.name)
print("sublayers :", serial.sublayers)
print("expected inputs :", serial.n_in)
print("promised outputs :", serial.n_out)
print("weights & biases:", serial.weights, "\n")
# Inputs
print("-- Inputs --")
print("x :", x, "\n")
# Outputs
y = serial(x)
print("-- Outputs --")
print("y :", y)
```
## JAX
Just remember to look out for which numpy you are using: regular ol' numpy or Trax's JAX-compatible numpy. Both tend to use the alias `np`, so watch those import blocks.
**Note: Certain things that can be done in numpy are still not possible in fastmath.numpy, so in the assignments you will see us switch between them to get our work done.**
```
# Numpy vs fastmath.numpy have different data types
# Regular ol' numpy
x_numpy = np.array([1, 2, 3])
print("good old numpy : ", type(x_numpy), "\n")
# Fastmath and jax numpy
x_jax = fastmath.numpy.array([1, 2, 3])
print("jax trax numpy : ", type(x_jax))
```
## Summary
Trax is a concise framework, built on TensorFlow, for end-to-end machine learning. The key building blocks are layers and combinators. This notebook is just a taste, but it sets you up with some key intuitions to take forward into the rest of the course and the assignments, where you will build end-to-end models.