# Minatar integration
## The MinAtar wrapper
In **standardized *Reinforcement Learning* (RL) environments and benchmarks**, one usually has:
- a `reset` method with signature `None -> Tensor` (resets and gives the first observation)
- a `step` method with signature `int -> (Tensor, float, bool, dict)` (takes an action, steps the environment, and returns an (observation, reward, done, info) tuple)
- a `render` method with signature `None -> None or Tensor` (renders the current env state, optionally returning an image)
However, **MinAtar does not use this standardized approach**! Rather one has access to:
- a `reset` method with signature `None -> None` (resets only)
- an `act` method with signature `int -> (float, bool)` (steps and gives (reward, done) tuple)
- a `state` method with signature `None -> Tensor` (gives an observation)
- a `display_state` method with signature `int(optional) -> None` (renders only)
To adapt the MinAtar benchmark to standard RL environments,
while changing the original code as little as possible,
we implemented a wrapper class that looks like:
```
class MinatarWrapper(Environment):
def reset(self):
"""
Resets the environment.
Return:
(observation) the first observation.
"""
def step(self, actions):
"""
Steps in the environment.
Args:
actions (int): the action to take.
Return:
(tensor, float, bool, dict) new observation, reward, done signal and complementary information.
"""
def render(self, time=0, done=False):
"""
Renders the environment.
Args:
time (int): the number of milliseconds for each frame. If 0, there will be no live animation.
done (bool): tells if the episode is done.
Return:
(Image) the current image of the game.
"""
def _state(self):
"""
Reduces the dimensions of the raw observation and normalizes it.
"""
```
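The adaptation itself boils down to composing MinAtar's `act` and `state` calls inside `step`. A minimal runnable sketch of the idea, using a dummy stand-in environment (the real class wraps a `minatar.Environment`, so names and shapes here are illustrative assumptions):

```python
import numpy as np

class _DummyMinatarEnv:
    """Hypothetical stand-in for a minatar.Environment (illustration only)."""
    def __init__(self):
        self._t = 0
    def reset(self):
        self._t = 0
    def act(self, action):
        # MinAtar-style act: step and return (reward, done)
        self._t += 1
        return 1.0, self._t >= 3
    def state(self):
        # MinAtar-style state: return the current raw observation
        return np.zeros((10, 10, 4))

class MinatarWrapper:
    """Adapts MinAtar's (reset / act / state) API to the standard (reset / step) one."""
    def __init__(self, env):
        self.env = env
    def reset(self):
        self.env.reset()
        return self._state()
    def step(self, action):
        reward, done = self.env.act(action)
        return self._state(), reward, done, {}
    def _state(self):
        # the real wrapper also reduces and normalizes the observation here
        return self.env.state()

env = MinatarWrapper(_DummyMinatarEnv())
obs = env.reset()
obs, reward, done, info = env.step(0)
```

The wrapper simply forwards to the underlying object, so the original MinAtar code stays untouched.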
In `_state`, **reduction** and **normalization** tricks are applied:
```
# sums the object channels to have a single image.
state = np.sum([state[i] * (i+1) for i in range(state.shape[0])], axis=0)
# normalize the image
m, M = np.min(state), np.max(state)
state = 2 * (state - m) / (M - m) - 1
```
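On a toy two-channel observation, this reduction and normalization can be checked directly:

```python
import numpy as np

# toy observation: 2 object channels of a 2x2 grid
state = np.array([[[1, 0],
                   [0, 0]],
                  [[0, 1],
                   [0, 0]]], dtype=float)

# sums the object channels to have a single image (channel i weighted by i+1)
state = np.sum([state[i] * (i + 1) for i in range(state.shape[0])], axis=0)

# normalize the image to [-1, 1]
m, M = np.min(state), np.max(state)
state = 2 * (state - m) / (M - m) - 1
```

The weighted sum gives `[[1, 2], [0, 0]]`, which normalizes to `[[0, 1], [-1, -1]]`. Note that a completely empty observation (all channels zero) would make `M - m` zero, so the snippet assumes at least one object is present.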
## Environment hyper-definition
### Breakout
```
breakout = Game(env_name="minatar:breakout",
actionSelect="softmax",
input_size=100,
output_size=6,
time_factor=0,
layers=[5, 5],
i_act=np.full(5, 1),
h_act=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
o_act=np.full(1, 1),
weightCap=2.0,
noise_bias=0.0,
output_noise=[False, False, False],
max_episode_length=1000,
in_out_labels=['x', 'x_dot', 'cos(theta)', 'sin(theta)', 'theta_dot',
'force']
)
games["minatar:breakout"] = breakout
```
### Freeway
```
freeway = Game(env_name="minatar:freeway",
actionSelect="softmax",
input_size=100,
output_size=6,
time_factor=0,
layers=[5, 5],
i_act=np.full(5, 1),
h_act=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
o_act=np.full(1, 1),
weightCap=2.0,
noise_bias=0.0,
output_noise=[False, False, False],
max_episode_length=1000,
in_out_labels=['x', 'x_dot', 'cos(theta)', 'sin(theta)', 'theta_dot',
'force']
)
games["minatar:freeway"] = freeway
```
## Automatic hyperparameter search
```
parameters = OrderedDict(
popSize=[64, 200],
prob_addConn=[.025, .1],
prob_addNode=[.015, .06],
prob_crossover=[.7, .9],
prob_enable=[.005, .02],
prob_mutConn=[.7, .9],
prob_initEnable=[.8, 1.],
)
class RunBuilder:
@staticmethod
def get_runs(parameters):
runs = []
for v in product(*parameters.values()):
runs.append(dict(zip(parameters.keys(), v)))
return runs
b_fit, b_run = 0, -1
for run in RunBuilder.get_runs(parameters):
fitness = run_one_hyp(hyp, run)
if fitness > b_fit:
b_fit = fitness
b_run = run
```
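The Cartesian-product expansion performed by `RunBuilder.get_runs` can be checked on a tiny grid (standalone re-implementation for illustration):

```python
from collections import OrderedDict
from itertools import product

def get_runs(parameters):
    # one dict per combination of hyperparameter values (Cartesian product)
    return [dict(zip(parameters.keys(), v)) for v in product(*parameters.values())]

grid = OrderedDict(popSize=[64, 200], prob_crossover=[.7])
runs = get_runs(grid)
# → [{'popSize': 64, 'prob_crossover': 0.7}, {'popSize': 200, 'prob_crossover': 0.7}]
```

With the full 7-parameter grid above (each with 2 values), this yields 2^7 = 128 runs.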
- The runs were run on **Breakout** because it is much faster to evaluate.
- **Fitnesses** were **recorded** for further investigation.
However...
results were *not good* at all!
The best set of hyperparameters was:
| popSize | prob_addConn | prob_addNode | prob_crossover | prob_enable | prob_mutAct | prob_mutConn | prob_initEnable | budget |
| ------- | ------------ | ---- | ---- | ---- | ---- | ---- | ---- | ----- |
| 32 | .025 | .015 | .7 | .02 | .0 | .9 | 1. | 50000 |
and the best fitness seen during the search was 6.0.
So, for the final training, we used the above set of parameters.
## The experiment.
- run the 50000-budget learning process 3 times to obtain statistical results.
- use the parameters of the above search result.
## The results.
- time spent for Breakout: **~ 2 hours**
- final fitness on Breakout: **max of ~ 2.80**
- time spent for Freeway: **~ 21h** (cumulated)
- final fitness on Freeway: **14**
A random breakout agent...
<img src="./log/breakout/gifs/3484933263.gif" width="600" align="center">
performing around 0.50
The final best breakout agent given by ***NEAT***:
<img src="./log/breakout/gifs/3292025515.gif" width="600" align="center">
performing around 2.80. Evaluating such an agent also takes significantly more time!
A random freeway agent...
<img src="./357277723.gif" width="600" align="center">
almost never reaches the top
The final best freeway agent given by ***NEAT***:
<img src="./357277723.gif" width="600" align="center">
performing around 14 but clearly not very efficient...
Same issue for both problems...
<img src="./Graph.png" width="600" align="center">
Local minima prevent innovation and diversity; the species mechanism isn't efficient enough with our hyperparameter set.
Import data from Excel
------
```
import pandas as pd
data_input = pd.read_excel('../data/raw/U bung kNN Klassifizierung Ecoli.xlsx', sheet_name=0)
data_output = pd.read_excel('../data/raw/U bung kNN Klassifizierung Ecoli.xlsx', sheet_name=1)
data_output
```
Create train and test datasets, normalize them, and train the kNN classifier
---------
```
X = data_input.loc[:,data_input.columns != "Target"]
y = data_input["Target"]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20) #, random_state=0)
data_input
# Normalize the data
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_norm = scaler.transform(X_train)
X_test_norm = scaler.transform(X_test)
from sklearn.neighbors import KNeighborsClassifier
knnclf = KNeighborsClassifier(n_neighbors=7)
knnclf.fit(X_train_norm, y_train)
```
Create test datasets and apply the kNN classifier
---------
```
X_output = data_output
# prediction on the raw (un-normalized) data, for comparison
y_pred = knnclf.predict(X_output)
# normalize the data with the scaler fitted on the training set
X_output_norm = scaler.transform(X_output)
y_pred_norm = knnclf.predict(X_output_norm)
print(X_output)
print(y_pred)
print(X_output_norm)
print(y_pred_norm)
```
Determine accuracy
----------------
```
# score on the normalized test set (the classifier was trained on normalized data)
score = knnclf.score(X_test_norm, y_test)
print('knn-Classifier scores with {}% accuracy'.format(score*100))
```
Determine the optimal k value
---------------
```
y_pred = knnclf.predict(X_test_norm)
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))
```
```
import matplotlib.pyplot as plt
import numpy as np
error = []
# Calculating error for K values between 1 and 40
for i in range(1, 40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train_norm, y_train)
pred_i = knn.predict(X_test_norm)
error.append(np.mean(pred_i != y_test))
plt.figure(figsize=(12, 6))
plt.plot(range(1, 40), error, color='red', linestyle='dashed', marker='o',
markerfacecolor='blue', markersize=10)
plt.title('Error Rate K Value')
plt.xlabel('K Value')
plt.ylabel('Mean Error')
```
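Rather than reading the best k off the plot by eye, it can be extracted programmatically. A sketch with a stand-in error list (the real values depend on the random train/test split):

```python
import numpy as np

# stand-in for the `error` list computed in the loop above
error = [0.31, 0.12, 0.18, 0.12, 0.25]

# k ran from 1 upwards, so shift the 0-based argmin by one;
# argmin returns the first minimum, i.e. the smallest such k
best_k = int(np.argmin(error)) + 1
```

Here `best_k` is 2, since index 1 holds the first occurrence of the minimum error 0.12.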
Decision Tree
----------
```
from sklearn.tree import DecisionTreeClassifier
dt = DecisionTreeClassifier(max_depth=3, criterion="entropy", min_samples_split=2)
dt.fit(X_train,y_train)
score = dt.score(X_test,y_test)
print('Decision Tree scores with {}% accuracy'.format(score*100))
from sklearn.model_selection import GridSearchCV
# tree parameters to be tested
tree_para = {'criterion':['gini','entropy'],'max_depth':[i for i in range(1,20)], 'min_samples_split':[i for i in range (2,20)]}
# GridSearchCV object
grd_clf = GridSearchCV(dt, tree_para, cv=5)
# creates different trees with all the different parameter combinations from our data
grd_clf.fit(X_train, y_train)
# best parameters that were found
best_parameters = grd_clf.best_params_
print(best_parameters)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Text classification with an RNN
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/beta/tutorials/text/text_classification_rnn"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r2/tutorials/text/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r2/tutorials/text/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs/site/en/r2/tutorials/text/text_classification_rnn.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This text classification tutorial trains a [recurrent neural network](https://developers.google.com/machine-learning/glossary/#recurrent_neural_network) on the [IMDB large movie review dataset](http://ai.stanford.edu/~amaas/data/sentiment/) for sentiment analysis.
```
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow_datasets as tfds
import tensorflow as tf
```
Import `matplotlib` and create a helper function to plot graphs:
```
import matplotlib.pyplot as plt
def plot_graphs(history, string):
plt.plot(history.history[string])
plt.plot(history.history['val_'+string])
plt.xlabel("Epochs")
plt.ylabel(string)
plt.legend([string, 'val_'+string])
plt.show()
```
## Setup input pipeline
The IMDB large movie review dataset is a *binary classification* dataset—all the reviews have either a *positive* or *negative* sentiment.
Download the dataset using [TFDS](https://www.tensorflow.org/datasets). The dataset comes with an inbuilt subword tokenizer.
```
dataset, info = tfds.load('imdb_reviews/subwords8k', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset['train'], dataset['test']
```
As this is a subword tokenizer, it can be passed any string and it will tokenize it.
```
tokenizer = info.features['text'].encoder
print ('Vocabulary size: {}'.format(tokenizer.vocab_size))
sample_string = 'TensorFlow is cool.'
tokenized_string = tokenizer.encode(sample_string)
print ('Tokenized string is {}'.format(tokenized_string))
original_string = tokenizer.decode(tokenized_string)
print ('The original string: {}'.format(original_string))
assert original_string == sample_string
```
The tokenizer encodes the string by breaking it into subwords if the word is not in its dictionary.
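The greedy longest-match idea behind such a subword encoder can be sketched as follows (with a made-up vocabulary; the actual TFDS encoder is learned from the corpus and also falls back to bytes for unknown characters):

```python
def greedy_subword_encode(text, vocab):
    """Split text into the longest matching vocabulary entries, left to right."""
    tokens = []
    while text:
        for end in range(len(text), 0, -1):  # try the longest prefix first
            if text[:end] in vocab or end == 1:
                tokens.append(text[:end])
                text = text[end:]
                break
    return tokens

vocab = {'ten', 'tensor', 'so', 'flow'}
# greedy_subword_encode('tensorflow', vocab) → ['tensor', 'flow']
```

Words not covered by the vocabulary decompose into single characters, which mirrors how unknown words still get encoded.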
```
for ts in tokenized_string:
print ('{} ----> {}'.format(ts, tokenizer.decode([ts])))
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE)
train_dataset = train_dataset.padded_batch(BATCH_SIZE, train_dataset.output_shapes)
test_dataset = test_dataset.padded_batch(BATCH_SIZE, test_dataset.output_shapes)
```
## Create the model
Build a `tf.keras.Sequential` model and start with an embedding layer. An embedding layer stores one vector per word. When called, it converts the sequences of word indices to sequences of vectors. These vectors are trainable. After training (on enough data), words with similar meanings often have similar vectors.
This index-lookup is much more efficient than the equivalent operation of passing a one-hot encoded vector through a `tf.keras.layers.Dense` layer.
A recurrent neural network (RNN) processes sequence input by iterating through the elements. RNNs pass the outputs from one timestep to their input—and then to the next.
The `tf.keras.layers.Bidirectional` wrapper can also be used with an RNN layer. This propagates the input forward and backwards through the RNN layer and then concatenates the output. This helps the RNN to learn long range dependencies.
```
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
```
Compile the Keras model to configure the training process:
```
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
```
## Train the model
```
history = model.fit(train_dataset, epochs=10,
validation_data=test_dataset)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
```
The above model does not mask the padding applied to the sequences. This can lead to skewness if we train on padded sequences and test on un-padded sequences. Ideally the model would learn to ignore the padding, but as you can see below it does have a small effect on the output.
If the prediction is >= 0.5, it is positive; otherwise it is negative.
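This decision rule is just a threshold on the sigmoid output:

```python
def to_sentiment(score):
    # the model's sigmoid output lies in [0, 1]; threshold it at 0.5
    return 'positive' if score >= 0.5 else 'negative'
```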
```
def pad_to_size(vec, size):
    zeros = [0] * (size - len(vec))
    vec.extend(zeros)
    return vec

def sample_predict(sentence, pad):
    # encode the function argument, not the global `sample_pred_text`
    tokenized_sample_pred_text = tokenizer.encode(sentence)
    if pad:
        tokenized_sample_pred_text = pad_to_size(tokenized_sample_pred_text, 64)
    predictions = model.predict(tf.expand_dims(tokenized_sample_pred_text, 0))
    return predictions
# predict on a sample text without padding.
sample_pred_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=False)
print (predictions)
# predict on a sample text with padding
sample_pred_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=True)
print (predictions)
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
```
## Stack two or more LSTM layers
Keras recurrent layers have two available modes that are controlled by the `return_sequences` constructor argument:
* Return either the full sequences of successive outputs for each timestep (a 3D tensor of shape `(batch_size, timesteps, output_features)`).
* Return only the last output for each input sequence (a 2D tensor of shape `(batch_size, output_features)`).
```
model = tf.keras.Sequential([
tf.keras.layers.Embedding(tokenizer.vocab_size, 64),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(
64, return_sequences=True)),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(train_dataset, epochs=10,
validation_data=test_dataset)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss: {}'.format(test_loss))
print('Test Accuracy: {}'.format(test_acc))
# predict on a sample text without padding.
sample_pred_text = ('The movie was not good. The animation and the graphics '
'were terrible. I would not recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=False)
print (predictions)
# predict on a sample text with padding
sample_pred_text = ('The movie was not good. The animation and the graphics '
'were terrible. I would not recommend this movie.')
predictions = sample_predict(sample_pred_text, pad=True)
print (predictions)
plot_graphs(history, 'accuracy')
plot_graphs(history, 'loss')
```
Check out other existing recurrent layers such as [GRU layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers/GRU).
## MIDI Generator
```
## Uncomment command below to kill current job:
#!neuro kill $(hostname)
import random
import sys
import subprocess
import torch
sys.path.append('../midi-generator')
%load_ext autoreload
%autoreload 2
import IPython.display as ipd
from model.dataset import MidiDataset
from utils.load_model import load_model
from utils.generate_midi import generate_midi
from utils.seed import set_seed
from utils.write_notes import write_notes
```
Each `*.mid` file can be thought of as a sequence in which notes and chords follow each other with specified time offsets between them. Framed this way, the next note can be predicted with a `seq2seq` model. In this work, a simple `GRU`-based model is used.
Note that the number of available notes and chords in the vocabulary is not fixed; it depends on the dataset the model was trained on.
To listen to MIDI files from a Jupyter notebook, let's define a helper function which transforms a `*.mid` file to a `*.wav` file.
```
def mid2wav(mid_path, wav_path):
subprocess.check_output(['timidity', mid_path, '-OwS', '-o', wav_path])
```
The next step is to load the model from the checkpoint. To make experiments reproducible, let's also specify a random seed.
You can also try to use the model, which was trained with label smoothing (see `../results/smoothing.ch`).
```
seed = 1234
set_seed(seed)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
print(device)
model, vocab = load_model(checkpoint_path='../results/test.ch', device=device)
```
Let's also define an additional helper function to avoid code duplication.
```
def dump_result(file_prefix, vocab, note_seq, offset_seq=None):
    note_seq = vocab.decode(note_seq)
    notes = MidiDataset.decode_notes(note_seq, offset_seq=offset_seq)
    mid_path = file_prefix + '.mid'
    wav_path = file_prefix + '.wav'
    write_notes(mid_path, notes)
    mid2wav(mid_path, wav_path)
    return wav_path
```
# MIDI file generation
Let's generate a new file. Note that the parameter `seq_len` specifies the length of the output sequence of notes.
The function `generate_midi` returns the sequence of generated notes and the offsets between them.
## Nucleus (`top-p`) Sampling
Sample from the smallest set of most probable tokens whose cumulative probability reaches `top_p`. If `top_p == 0`, the most probable token is sampled (greedy decoding).
## Temperature
As `temperature` → 0 this approaches greedy decoding, while `temperature` → ∞ asymptotically approaches uniform sampling from the vocabulary.
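A numpy sketch of nucleus sampling with temperature, to make both knobs concrete (an illustration of the idea; `generate_midi`'s actual implementation may differ):

```python
import numpy as np

def sample_token(logits, top_p=0.9, temperature=1.0, rng=None):
    """Nucleus (top-p) sampling with temperature over a logit vector."""
    if rng is None:
        rng = np.random.default_rng()
    # temperature-scaled softmax (stabilized against overflow)
    z = (logits - np.max(logits)) / temperature
    probs = np.exp(z)
    probs /= probs.sum()
    # smallest set of most probable tokens whose cumulative mass reaches top_p
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    keep = order[:max(1, int(np.searchsorted(cum, top_p)) + 1)]
    # renormalize over the nucleus and sample
    p = probs[keep] / probs[keep].sum()
    return int(rng.choice(keep, p=p))

logits = np.array([2.0, 1.0, 0.1])
token = sample_token(logits, top_p=0)  # top_p == 0 keeps only the argmax
```

With `top_p == 0` the nucleus collapses to the single most probable token, matching the greedy behavior described above.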
```
note_seq, offset_seq = generate_midi(model, vocab, seq_len=128, top_p=0, temperature=1, device=device)
```
Let's listen to the resulting MIDI.
```
# midi with constant offsets
ipd.Audio(dump_result('../results/output_without_offsets', vocab, note_seq, offset_seq=None))
# midi with generated offsets
ipd.Audio(dump_result('../results/output_with_offsets', vocab, note_seq, offset_seq))
```
The result with constant offsets sounds better, doesn't it? :)
Feel free to try different generation parameters (`top-p` and `temperature`) to understand their impact on the resulting sound.
You can also train your own model with different specs (e.g. different hidden size) or use label smoothing during training.
# Continue existing file
## Continue sampled notes
To begin with, let's continue a sound consisting of notes sampled from `vocab`.
```
seed = 4321
set_seed(seed)
history_notes = random.choices(range(len(vocab)), k=20)
history_offsets = len(history_notes) * [0.5]
ipd.Audio(dump_result('../results/random_history', vocab, history_notes, history_offsets))
```
It sounds a little bit chaotic. Let's try to continue this with our model.
```
history = [*zip(history_notes, history_offsets)]
note_seq, offset_seq = generate_midi(model, vocab, seq_len=128, top_p=0, temperature=1, device=device,
history=history)
# midi with constant offsets
ipd.Audio(dump_result('../results/random_without_offsets', vocab, note_seq, offset_seq=None))
```
After the sampled part ends, the generated melody starts to sound better.
## Continue an existing melody
```
raw_notes = MidiDataset.load_raw_notes('../data/mining.mid')
org_note_seq, org_offset_seq = MidiDataset.encode_notes(raw_notes)
org_note_seq = vocab.encode(org_note_seq)
```
Let's listen to it
```
ipd.Audio(dump_result('../results/original_sound', vocab, org_note_seq, org_offset_seq))
```
and take the first 20 elements of the sequence as our history sequence.
```
history_notes = org_note_seq[:20]
history_offsets = org_offset_seq[:20]
history = [*zip(history_notes, history_offsets)]
note_seq, offset_seq = generate_midi(model, vocab, seq_len=128, top_p=0, temperature=1, device=device,
history=history)
# result melody without generated offsets
ipd.Audio(dump_result('../results/continue_rand_without_offsets', vocab, note_seq, offset_seq=None))
# result melody with generated offsets
ipd.Audio(dump_result('../results/continue_rand_with_offsets', vocab, note_seq, offset_seq))
```
You can try to overfit your model on one melody to get better results. Otherwise, you can use an already pretrained model (`../results/onemelody.ch`).
# Model overfitted on one melody
Let's try the same thing we did before: continue a melody, but this time with a model
that was overfitted on this melody.
```
seed = 1234
set_seed(seed)
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model, vocab = load_model(checkpoint_path='../results/onemelody.ch', device=device)
raw_notes = MidiDataset.load_raw_notes('../data/Final_Fantasy_Matouyas_Cave_Piano.mid')
org_note_seq, org_offset_seq = MidiDataset.encode_notes(raw_notes)
org_note_seq = vocab.encode(org_note_seq)
```
Let's listen to it.
```
ipd.Audio(dump_result('../results/onemelody_original_sound', vocab, org_note_seq, org_offset_seq))
end = 60
history_notes = org_note_seq[:end]
history_offsets = org_offset_seq[:end]
```
Listen to history part of loaded melody.
```
ipd.Audio(dump_result('../results/onemelody_history', vocab, history_notes, history_offsets))
```
Now we can try to continue the original melody with our model. But first, you can listen to the original tail part of the melody to refresh it in your memory and have a reference to compare with.
```
tail_notes = org_note_seq[end:]
tail_offsets = org_offset_seq[end:]
ipd.Audio(dump_result('../results/onemelody_tail', vocab, tail_notes, tail_offsets))
history = [*zip(history_notes, history_offsets)]
note_seq, offset_seq = generate_midi(model, vocab, seq_len=128, top_p=0, temperature=1, device=device,
history=history)
# delete history part
note_seq = note_seq[end:]
offset_seq = offset_seq[end:]
# result melody without generated offsets
ipd.Audio(dump_result('../results/continue_onemelody_without_offsets', vocab, note_seq, offset_seq=None))
# result melody with generated offsets
ipd.Audio(dump_result('../results/continue_onemelody_with_offsets', vocab, note_seq, offset_seq))
```
As you can hear, this time the model generated better offsets and the resulting melody does not sound as chaotic.
# Sandbox - Tutorial
## Building a fiber bundle
A [fiber bundle](https://github.com/3d-pli/fastpli/wiki/FiberModel) consists of multiple individual nerve fibers.
A fiber bundle is a list of fibers, where fibers are represented as `(n,4)-np.array`.
This makes designing individual fibers of any shape possible.
However, since nerve fibers often run in nerve fiber bundles, this toolbox allows filling fiber bundles from a pattern of fibers.
Additionally, this toolbox can build parallel cubic shapes as well as different kinds of cylindric shapes to allow a faster building experience.
## General imports
First, we prepare all necessary modules and define a function to equalize all three axes of a 3d plot.
You can change the `magic ipython` line from `inline` to `qt`.
This generates separate windows, allowing us to rotate the resulting plots and therefore to investigate the 3d models from different views.
Make sure you have `PyQt5` installed if you use it.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# %matplotlib qt
import fastpli.model.sandbox as sandbox
def set_3d_axes_equal(ax):
x_limits = ax.get_xlim3d()
y_limits = ax.get_ylim3d()
z_limits = ax.get_zlim3d()
x_range = abs(x_limits[1] - x_limits[0])
x_middle = np.mean(x_limits)
y_range = abs(y_limits[1] - y_limits[0])
y_middle = np.mean(y_limits)
z_range = abs(z_limits[1] - z_limits[0])
z_middle = np.mean(z_limits)
plot_radius = 0.5 * max([x_range, y_range, z_range])
ax.set_xlim3d([x_middle - plot_radius, x_middle + plot_radius])
ax.set_ylim3d([y_middle - plot_radius, y_middle + plot_radius])
ax.set_zlim3d([z_middle - plot_radius, z_middle + plot_radius])
```
## Designing a fiber bundle
The idea is to first design a macroscopic structure, i.e. a nerve fiber bundle, which can then be filled with individual nerve fibers at a later step.
We start by defining a fiber bundle as a trajectory of points (similar to fibers).
As an example, we use a helical form.
```
t = np.linspace(0, 4 * np.pi, 50, True)
traj = np.array((42 * np.cos(t), 42 * np.sin(t), 10 * t)).T
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
ax.plot(
traj[:, 0],
traj[:, 1],
traj[:, 2],
)
plt.title("fb trajectory")
set_3d_axes_equal(ax)
plt.show()
```
### Seed points
Seed points are used to initialize the process of populating the fiber bundle with individual fibers.
Seed points are a list of 3d points.
This toolbox provides two methods to build seed point patterns.
The first one is a 2d triangular grid.
It is defined by a `width`, a `height` and a `spacing` between the seed points.
Additionally, one can activate the `center` option so that the seed points are centered around a seed point at `(0,0,0)`.
The second method provides a circular shape instead of a rectangular one.
It can also be achieved with the additional function `crop_circle`, which returns only the seed points that lie within the defined `radius` (measured along the first two dimensions) around the center.
```
seeds = sandbox.seeds.triangular_grid(width=42,
height=42,
spacing=6,
center=True)
radius = 21
circ_seeds = sandbox.seeds.crop_circle(radius=radius, seeds=seeds)
fig, ax = plt.subplots(1, 1)
plt.title("seed points")
plt.scatter(seeds[:, 0], seeds[:, 1])
plt.scatter(circ_seeds[:, 0], circ_seeds[:, 1])
ax.set_aspect('equal', 'box')
# plot circle margin
t = np.linspace(0, 2 * np.pi, 42)
x = radius * np.cos(t)
y = radius * np.sin(t)
plt.plot(x, y)
plt.show()
```
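Conceptually, `crop_circle` just keeps the seed points whose xy-distance from the center is at most `radius`. A numpy sketch of that filter (assuming a center at the origin; the real fastpli function may differ in details):

```python
import numpy as np

def crop_circle(radius, seeds):
    """Keep only seed points whose xy-distance from the origin is within radius."""
    seeds = np.asarray(seeds, dtype=float)
    mask = np.linalg.norm(seeds[:, :2], axis=1) <= radius
    return seeds[mask]

seeds = np.array([[0.0, 0.0], [3.0, 4.0], [5.0, 5.0]])
inside = crop_circle(5.0, seeds)  # keeps (0,0) and (3,4); drops (5,5)
```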
### Generating a fiber bundle from seed points
The next step is to build a fiber bundle from the designed trajectory and seed points.
However, one additional step is necessary.
Since nerve fibers are not lines but 3d objects, they also need a volume for the later `solving` and `simulation` steps of this toolbox.
This toolbox describes nerve fibers as tubes, which are defined by a list of points and radii, i.e. a `(n,4)-np.array`.
The radii `[:,3]` can change along the fiber trajectories `[:,0:3]`, allowing for a change of thickness.
Now we have everything we need to build a fiber bundle from the designed trajectory and seed points.
The function `bundle` provides this functionality.
In addition to the `traj` and `seeds` parameters, `radii` can be a single number if all fibers should have the same radius, or a list of numbers if each fiber should have a different radius.
An additional `scale` parameter allows scaling the seed points along the trajectory, e.g. allowing for a fanning.
```
# populating fiber bundle
fiber_bundle = sandbox.build.bundle(
traj=traj,
seeds=circ_seeds,
radii=np.random.uniform(0.5, 0.8, circ_seeds.shape[0]),
scale=0.25 + 0.5 * np.linspace(0, 1, traj.shape[0]))
# plotting
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
for fiber in fiber_bundle:
ax.plot(fiber[:, 0], fiber[:, 1], fiber[:, 2])
plt.title("helical thinning out fiber bundle")
set_3d_axes_equal(ax)
plt.show()
```
## Additional macroscopic structures
During the development and use of this toolbox, it was found useful to have other patterns than filled fiber bundles to build macroscopic structures.
Depending on the brain section in which the nerve fiber orientation is measured with the 3D-PLI technique, nerve fibers can be visible as different types of patterns.
### Cylindrical shapes
Radially shaped patterns can be quickly built with the following `cylinder` method.
A hollow cylinder is defined by an inner and an outer radius, `r_in` and `r_out`, along two points `p` and `q`.
Additionally, the cylinder can cover only part of its circumference, defined by two angles `alpha` and `beta`.
Again, as for the `bundle` method, one needs seed points to define a pattern.
Filling this cylindric shape can be performed with three different `mode`s:
- radial
- circular
- parallel
```
# plotting
seeds = sandbox.seeds.triangular_grid(width=200,
height=200,
spacing=5,
center=True)
fig, axs = plt.subplots(1, 3, figsize=(15,5), subplot_kw={'projection':'3d'}, constrained_layout=True)
for i, mode in enumerate(['radial', 'circular', 'parallel']):
# ax = fig.add_subplot(1, 1, 1, projection='3d')
fiber_bundle = sandbox.build.cylinder(p=(0, 80, 50),
q=(40, 80, 100),
r_in=20,
r_out=40,
seeds=seeds,
radii=1,
alpha=np.deg2rad(20),
beta=np.deg2rad(160),
mode=mode)
for fiber in fiber_bundle:
axs[i].plot(fiber[:, 0], fiber[:, 1], fiber[:, 2])
set_3d_axes_equal(axs[i])
axs[i].set_title(f'{mode}')
plt.show()
```
### Cubic shapes
The next method allows placing fibers inside a cube with a user-defined direction.
The cube is defined by two 3d points `p` and `q`.
The direction of the fibers inside the cube is defined by the spherical angles `phi` and `theta`.
Seed points again describe the pattern of fibers inside the cube.
The seed points (rotated in their xy-plane according to `phi` and `theta`) are placed at the points `p` and `q`.
Corresponding pairs of seed points then provide the start and end point of each fiber.
```
# define cube corner points
p = np.array([0, 80, 50])
q = np.array([40, 180, 100])
# create seed points which will fill the cube
d = np.max(np.abs(p - q)) * np.sqrt(3)
seeds = sandbox.seeds.triangular_grid(width=d,
height=d,
spacing=10,
center=True)
# fill a cube with (theta, phi) directed fibers
fiber_bundle = sandbox.build.cuboid(p=p,
q=q,
phi=np.deg2rad(45),
theta=np.deg2rad(90),
seeds=seeds,
radii=1)
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1, projection='3d')
for fiber in fiber_bundle:
ax.plot(fiber[:, 0], fiber[:, 1], fiber[:, 2])
plt.title('cubic shape')
set_3d_axes_equal(ax)
plt.show()
```
## Next
From here, further anatomically more interesting examples are presented in the solver tutorial and the `examples/crossing.py` example.
# Effect of House Characteristics on Their Prices
## by Lubomir Straka
## Investigation Overview
In this investigation, I wanted to look at the key characteristics of houses that could be used to predict their prices. The main focus was on three aspects: above grade living area representing space characteristics, overall quality of house's material and finish representing physical characteristics, and neighborhood cluster representing location characteristics.
## Dataset Overview
The data consists of information regarding 1460 houses in Ames, Iowa, including their sale price, physical characteristics, space properties and location within the city as provided by [Kaggle](https://www.kaggle.com/c/house-prices-advanced-regression-techniques/data). The data set contains 1460 observations and a large number of explanatory variables (23 nominal, 23 ordinal, 14 discrete, and 20 continuous) involved in assessing home values. These 79 explanatory variables plus one response variable (sale price) describe almost every aspect of residential homes in Ames, Iowa.
In addition to some basic data type encoding, missing value imputing and cleaning, four outliers were removed from the analysis due to unusual sale conditions.
```
# Import all packages and set plots to be embedded inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Suppress warnings from final output
import warnings
warnings.simplefilter("ignore")
# Load the Ames Housing dataset
path = 'https://raw.githubusercontent.com/lustraka/Data_Analysis_Workouts/main/Communicate_Data_Findings/ames_train_data.csv'
ames = pd.read_csv(path, index_col='Id')
################
# Wrangle data #
################
# The numeric features are already encoded correctly (`float` for
# continuous, `int` for discrete), but the categoricals we'll need to
# do ourselves. Note in particular, that the `MSSubClass` feature is
# read as an `int` type, but is actually a (nominative) categorical.
# The categorical features nominative (unordered)
catn = ["MSSubClass", "MSZoning", "Street", "Alley", "LandContour", "LotConfig",
"Neighborhood", "Condition1", "Condition2", "BldgType", "HouseStyle",
"RoofStyle", "RoofMatl", "Exterior1st", "Exterior2nd", "MasVnrType",
"Foundation", "Heating", "CentralAir", "GarageType", "MiscFeature",
"SaleType", "SaleCondition"]
# The categorical features ordinal (ordered)
# Pandas calls the categories "levels"
five_levels = ["Po", "Fa", "TA", "Gd", "Ex"]
ten_levels = list(range(10))
cato = {
"OverallQual": ten_levels,
"OverallCond": ten_levels,
"ExterQual": five_levels,
"ExterCond": five_levels,
"BsmtQual": five_levels,
"BsmtCond": five_levels,
"HeatingQC": five_levels,
"KitchenQual": five_levels,
"FireplaceQu": five_levels,
"GarageQual": five_levels,
"GarageCond": five_levels,
"PoolQC": five_levels,
"LotShape": ["Reg", "IR1", "IR2", "IR3"],
"LandSlope": ["Sev", "Mod", "Gtl"],
"BsmtExposure": ["No", "Mn", "Av", "Gd"],
"BsmtFinType1": ["Unf", "LwQ", "Rec", "BLQ", "ALQ", "GLQ"],
"BsmtFinType2": ["Unf", "LwQ", "Rec", "BLQ", "ALQ", "GLQ"],
"Functional": ["Sal", "Sev", "Maj1", "Maj2", "Mod", "Min2", "Min1", "Typ"],
"GarageFinish": ["Unf", "RFn", "Fin"],
"PavedDrive": ["N", "P", "Y"],
"Utilities": ["NoSeWa", "NoSewr", "AllPub"],
"CentralAir": ["N", "Y"],
"Electrical": ["Mix", "FuseP", "FuseF", "FuseA", "SBrkr"],
"Fence": ["MnWw", "GdWo", "MnPrv", "GdPrv"],
}
# Add a None level for missing values
cato = {key: ["None"] + value for key, value in
cato.items()}
def encode_dtypes(df):
"""Encode nominal and ordinal categorical variables."""
global catn, cato
# Nominal categories
for name in catn:
df[name] = df[name].astype("category")
# Add a None category for missing values
if "None" not in df[name].cat.categories:
df[name].cat.add_categories("None", inplace=True)
# Ordinal categories
for name, levels in cato.items():
df[name] = df[name].astype(pd.CategoricalDtype(levels,
ordered=True))
return df
def impute_missing(df):
"""Impute zeros to numerical and None to categorical variables."""
for name in df.select_dtypes("number"):
df[name] = df[name].fillna(0)
for name in df.select_dtypes("category"):
df[name] = df[name].fillna("None")
return df
def clean_data(df):
"""Remedy typos and mistakes based on EDA."""
global cato
# YearRemodAdd: Remodel date (same as construction date if no remodeling or additions)
df.YearRemodAdd = np.where(df.YearRemodAdd < df.YearBuilt, df.YearBuilt, df.YearRemodAdd)
assert len(df.loc[df.YearRemodAdd < df.YearBuilt]) == 0, 'Check YearRemodAdd - should be greater than or equal to YearBuilt'
# Check range of years
yr_max = 2022
# Some values of GarageYrBlt are corrupt. Fix them by replacing them with the YearBuilt
df.GarageYrBlt = np.where(df.GarageYrBlt > yr_max, df.YearBuilt, df.GarageYrBlt)
assert df.YearBuilt.max() < yr_max and df.YearBuilt.min() > 1800, 'Check YearBuilt min() and max()'
assert df.YearRemodAdd.max() < yr_max and df.YearRemodAdd.min() > 1900, 'Check YearRemodAdd min() and max()'
assert df.YrSold.max() < yr_max and df.YrSold.min() > 2000, 'Check YrSold min() and max()'
assert df.GarageYrBlt.max() < yr_max and df.GarageYrBlt.min() >= 0, 'Check GarageYrBlt min() and max()'
# Check values of ordinal catagorical variables
for k in cato.keys():
assert set(df[k].unique()).difference(df[k].cat.categories) == set(), f'Check values of {k}'
# Check typos in nominal categorical variables
df['Exterior2nd'] = df['Exterior2nd'].replace({'Brk Cmn':'BrkComm', 'CmentBd':'CemntBd', 'Wd Shng':'WdShing'})
# Renew a data type after replacement
df['Exterior2nd'] = df['Exterior2nd'].astype("category")
if "None" not in df['Exterior2nd'].cat.categories:
df['Exterior2nd'].cat.add_categories("None", inplace=True)
return df
def label_encode(df):
"""Encode categorical variables using their dtype setting."""
X = df.copy()
for colname in X.select_dtypes(["category"]):
X[colname] = X[colname].cat.codes
return X
# Pre-process data
ames = encode_dtypes(ames)
ames = impute_missing(ames)
ames = clean_data(ames)
# Add log transformed SalePrice to the dataset
ames['logSalePrice'] = ames.SalePrice.apply(np.log10)
```
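The `label_encode` helper above relies on `cat.codes`, which maps each ordered category level to its index and missing values to `-1`. A minimal plain-Python sketch of that mapping (the `encode_ordinal` helper and its level list are illustrative, not part of the notebook):

```python
# Plain-Python sketch of what `label_encode` does via `cat.codes`:
# each ordered level maps to its index, and anything outside the known
# levels maps to -1 (the code pandas assigns to missing values).

def encode_ordinal(values, levels):
    """Map ordered category labels to integer codes (unknown -> -1)."""
    codes = {level: i for i, level in enumerate(levels)}
    return [codes.get(v, -1) for v in values]

five_levels = ["None", "Po", "Fa", "TA", "Gd", "Ex"]
print(encode_ordinal(["TA", "Gd", "Missing", "Po"], five_levels))
# -> [3, 4, -1, 1]
```

Because the levels are listed from worst to best, the resulting integer codes preserve the ordering, which is what makes them usable for correlation and regression later.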
## Distribution of House Prices
**Graph on left**
The values of the response variable *SalePrice* range from \$34,900 to \$755,000, with one mode at \$140,000, which is lower than the median of \$163,000, which in turn is lower than the average price of \$180,921. The distribution of *SalePrice* is asymmetric, with relatively few large values, and tails off to the right. It is also relatively peaked.
**Graph on right**
For analyzing relationships with other variables, a log transformation of *SalePrice* is more suitable. The distribution of *logSalePrice* is almost symmetric, with skewness close to zero, although the distribution is still somewhat peaked.
```
def log_trans(x, inverse=False):
"""Get log or tenth power of the argument."""
if not inverse:
return np.log10(x)
else:
return 10**x
# Plot SalePrice with a standard and log scale
fig, axs = plt.subplots(1, 2, figsize=[16,5])
# LEFT plot
sns.histplot(data=ames, x='SalePrice', ax=axs[0])
axs[0].set_title('Distribution of Sale Price')
xticks = np.arange(100000, 800000, 100000)
axs[0].set_xticks(xticks)
axs[0].set_xticklabels([f'{int(xtick/1000)}K' for xtick in xticks])
axs[0].set_xlabel('Price ($)')
# RIGHT plot
sns.histplot(data=ames, x='logSalePrice', ax=axs[1])
axs[1].set_title('Distribution of Sale Price After Log Transformation')
lticks = [50000, 100000, 200000, 500000]
axs[1].set_xticks(log_trans(lticks))
axs[1].set_xticklabels([f'{int(xtick/1000)}K' for xtick in lticks])
axs[1].set_xlabel('Price ($)')
plt.show()
```
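The skewness claim above can be quantified with the sample skewness statistic g1 = m3 / m2^1.5. A standalone sketch, using synthetic log-normal prices in place of *SalePrice* so it runs without the dataset:

```python
# Pure-Python sample skewness; synthetic log-normal "prices" stand in
# for SalePrice so this sketch runs standalone.
import math
import random

def skewness(xs):
    """Sample skewness g1 = m3 / m2**1.5 (biased estimator)."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

random.seed(0)
prices = [random.lognormvariate(12, 0.4) for _ in range(5000)]
log_prices = [math.log10(p) for p in prices]

# Raw prices are strongly right-skewed; the log transform brings
# skewness close to zero, as described for logSalePrice above.
print(round(skewness(prices), 2), round(skewness(log_prices), 2))
```

The same computation on the real *SalePrice* column (e.g. via `scipy.stats.skew`) would show the same pattern: large positive skew before the transform, near-zero skew after.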
## Distribution of Living Area
The distribution of above grade living area (*GrLivArea*) is asymmetric, skewed to the right, and somewhat peaked. There were two partial sales (houses not completed), one abnormal sale (trade, foreclosure, or short sale), and one normal but simply unusual sale (a very large house priced relatively appropriately). These outliers (houses with *GrLivArea* greater than 4000 square feet) have been removed. This is the distribution of the cleaned dataset.
```
# Remove outliers
ames = ames.query('GrLivArea < 4000').copy()
# Plot a distribution of GrLivArea
sns.histplot(data=ames, x='GrLivArea')
plt.title('Distribution of Above Grade Living Area')
plt.xlabel('GrLivArea (sq ft)')
plt.show()
```
## Sale Price vs Overall Quality
Overall Quality (*OverallQual*) represents physical aspects of the building and rates the overall material and finish of the house. The most frequent value of this categorical variable is the average rating (5). The violin plot of sale price versus overall quality shows a positive correlation between these variables: the higher the quality, the higher the price.
Interestingly, it looks like the missing values occur exclusively in observations with the best overall quality.
```
# Set the base color
base_color = sns.color_palette()[0]
# Show violin plot
plt.figure(figsize=(10,4))
sns.violinplot(data=ames, x='OverallQual', y='logSalePrice', color=base_color, inner='quartile')
plt.ylabel('Price ($)')
plt.yticks(log_trans(lticks), [f'{int(xtick/1000)}K' for xtick in lticks])
plt.xlabel('Overall Quality')
plt.title('Sale Price vs Overall Quality')
plt.show()
```
## Sale Price vs Neighborhood
Neighborhood represents the place where the house is located. Most frequent in the dataset are houses from North Ames (15 %), followed by houses from College Creek (10 %) and Old Town (8 %). To get insight into the effect of the nominal variable *Neighborhood*, I clustered neighborhoods according to unit price per square foot. The violin plot of sale price versus the three neighborhood clusters shows a positive correlation between these variables.
```
# Add a variable with total surface area
ames['TotalSF'] = ames['TotalBsmtSF'] + ames['1stFlrSF'] + ames['2ndFlrSF'] + ames['GarageArea']
# Calculate price per square feet
ames['SalePricePerSF'] = ames.SalePrice / ames.TotalSF
# Cluster neighborhoods into three clusters
ngb_mean_df = ames.groupby('Neighborhood')['SalePricePerSF'].mean().dropna()
bins = np.linspace(ngb_mean_df.min(), ngb_mean_df.max(), 4)
clusters = ['LowUnitPriceCluster', 'AvgUnitPriceCluster', 'HighUnitPriceCluster']
# Create a dict 'Neighborhood' : 'Cluster'
ngb_clusters = pd.cut(ngb_mean_df, bins=bins, labels=clusters, include_lowest=True).to_dict()
# Add new feature to the dataset
ames['NgbCluster'] = ames.Neighborhood.apply(lambda c: ngb_clusters.get(c, "")).astype(pd.CategoricalDtype(clusters, ordered=True))
# Plot the new feature
sns.violinplot(data=ames, x='NgbCluster', y='logSalePrice', color=base_color, inner='quartile')
plt.xlabel('Neighborhood Cluster')
plt.ylabel('Price ($)')
plt.yticks(log_trans(lticks), [f'{int(xtick/1000)}K' for xtick in lticks])
plt.title('Sale Price vs Neighborhood Cluster')
plt.show()
```
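The clustering above hinges on `pd.cut` with equal-width bins. The same logic can be sketched in plain Python (the neighborhood means and labels below are illustrative toy values, not taken from the dataset):

```python
# Equal-width binning as performed by pd.cut above: split [min, max]
# into len(labels) equal intervals and label each value by its bin.
def cut_equal_width(values, labels):
    lo, hi = min(values), max(values)
    width = (hi - lo) / len(labels)
    out = []
    for v in values:
        # min(...) keeps the maximum value inside the last bin,
        # mirroring pd.cut's include_lowest / right-closed behaviour
        idx = min(int((v - lo) / width), len(labels) - 1)
        out.append(labels[idx])
    return out

means = [60.0, 75.0, 95.0, 130.0]   # illustrative $/sq ft neighborhood means
labels = ['LowUnitPriceCluster', 'AvgUnitPriceCluster', 'HighUnitPriceCluster']
print(cut_equal_width(means, labels))
```

Note that equal-width bins (what `np.linspace` + `pd.cut` produce) can yield clusters of very different sizes; quantile-based binning (`pd.qcut`) would instead balance the cluster populations.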
## Sale Price vs Living Area and Location
The two previous figures showed a positive correlation of the building and location variables with sale price. Now, let's look at the relation between the space variable *GrLivArea* and price. The scatter plot of sale price versus living area, colored by neighborhood cluster, supports an approximately linear relationship between price on one side and living area and location on the other.
```
# Combine space and location variables
ax = sns.scatterplot(data=ames, x='GrLivArea', y='logSalePrice', hue='NgbCluster')
plt.title('Sale Price vs Living Area with Neighborhood Clusters')
plt.xlabel('Living Area (sq ft)')
plt.ylabel('Price ($)')
plt.yticks(log_trans(lticks), [f'{int(xtick/1000)}K' for xtick in lticks])
handles, _ = ax.get_legend_handles_labels()
ax.legend(handles[::-1], ['High Unit Price', 'Average Unit Price', 'Low Unit Price'], title='Neighborhood Clusters')
plt.show()
```
## Sale Price vs Overall Quality by Location
Finally, let's look at the combination of overall quality and neighborhood clusters. These violin plots of sale price versus overall quality show differences between the neighborhood clusters. The most diverse cluster with regard to overall quality is the one with the lowest unit price per square foot. As expected, houses in the cluster with high unit prices per square foot have better overall quality as well as higher prices.
```
# Combine building and location variables
g = sns.FacetGrid(data=ames, row='NgbCluster', aspect=3)
g.map(sns.violinplot, 'OverallQual', 'logSalePrice', inner='quartile')
g.set_titles('{row_name}')
g.set(yticks=log_trans(lticks), yticklabels=[f'{int(xtick/1000)}K' for xtick in lticks])
g.set_ylabels('Price ($)')
g.fig.suptitle('Sale Price vs OverallQual For Neighborhood Clusters', y=1.01)
plt.show()
```
## Conclusion
The investigation confirmed that the three selected explanatory variables can be used to predict the house sale price (the response variable). These variables are:
- *above grade living area* representing space characteristics,
- *overall quality* of house's material and finish representing physical characteristics, and
- *neighborhood cluster* representing location characteristics; this variable was engineered by clustering neighborhoods according to unit price per square foot.
All these variables have a positive relationship with sale price of the house.
## Thanks For Your Attention
Finished 2021-11-18
```
# !jupyter nbconvert Part_II_slide_deck_ames.ipynb --to slides --post serve --no-input --no-prompt
```
```
import pickle
with open('ldaseq1234.pickle', 'rb') as f:
ldaseq = pickle.load(f)
print(ldaseq.print_topic_times(topic=0))
topicdis = [[0.04461942257217848,
0.08583100499534332,
0.0327237321141309,
0.0378249089831513,
0.08521717043434086,
0.03543307086614173,
0.054356108712217424,
0.04057658115316231,
0.0499745999491999,
0.04468292269917873,
0.028257556515113032,
0.026013885361104057,
0.0668021336042672,
0.07567098467530269,
0.08409533485733638,
0.026966387266107866,
0.04533909067818136,
0.028172889679112693,
0.04121158242316485,
0.06623063246126493],
[0.04375013958058825,
0.07278290193626193,
0.025437166402394087,
0.03566563190923912,
0.0600978180762445,
0.03541997007392188,
0.07258190588918417,
0.0305960649440561,
0.06355941666480559,
0.0459164303102039,
0.0295464189204279,
0.02733546240257275,
0.03622395426223284,
0.12723049780020992,
0.07838845836031891,
0.03517430823860464,
0.0370726042387833,
0.030216405744020368,
0.04841771445161579,
0.06458672979431404],
[0.047832813411448426,
0.07557888863526846,
0.01995992797179741,
0.03816987496512719,
0.13469781125567476,
0.03291993202972431,
0.07570569885109944,
0.030586624058434146,
0.049760328692079435,
0.04080752745441173,
0.02835476425980877,
0.02495625047553831,
0.044434299627177966,
0.08879251312485734,
0.07190139237616983,
0.028405488346141164,
0.034416292576529964,
0.028608384691470746,
0.04149230261989906,
0.06261888457734155],
[0.042617561579875174,
0.07770737052741912,
0.03601886702558483,
0.04750107199009005,
0.06608223355090762,
0.07060841393110677,
0.0826861689456382,
0.0338272428414884,
0.042951069607889844,
0.04888274810615084,
0.04173614750583639,
0.03728143313164038,
0.04371337367192339,
0.04190290151984373,
0.06603458954690553,
0.03363666682548001,
0.045190337795988376,
0.035590070989565965,
0.03327933679546429,
0.0727523941112011],
[0.05211050194283281,
0.022701868313195296,
0.04215681056049396,
0.03776547612710917,
0.06246340554638846,
0.05240325757172513,
0.03425240858040134,
0.060094746367168786,
0.04189066907968276,
0.03837760153297493,
0.031431308883802626,
0.08609676904242296,
0.04383350188960451,
0.11209879171767712,
0.06754670782988237,
0.03071272688561239,
0.0415446851546282,
0.02789162718901368,
0.0347314632458615,
0.07989567253952201],
[0.052505147563486614,
0.03777739647677877,
0.03743422557767102,
0.03311599176389842,
0.11201670098375657,
0.06963509494394875,
0.02916952642415923,
0.043239533287577216,
0.03854953099977122,
0.03260123541523679,
0.03546099290780142,
0.07958705101807367,
0.03165751544269046,
0.1153054221002059,
0.06637497140242507,
0.02304964539007092,
0.03955044612216884,
0.030942576069549303,
0.031457332418210936,
0.06056966369251887],
[0.03823109185601696,
0.0364105636723971,
0.03279255196570954,
0.033691293727243395,
0.15926164907590912,
0.061321841729271326,
0.036203161727427755,
0.03440567820436005,
0.03157118495644559,
0.0335069364428262,
0.03426741024104715,
0.07637000506982532,
0.03892243167258146,
0.11098308521915472,
0.05643637369221551,
0.026086555745033876,
0.036525786975157855,
0.04528275798497488,
0.033046043231783194,
0.04468359681061898],
[0.02800869900733102,
0.048055000175383215,
0.02597425374443158,
0.0358483285979866,
0.1538987688098495,
0.06082289803220036,
0.04098705671893087,
0.035585253779508226,
0.03213020449682556,
0.03448033954189905,
0.03277912238240556,
0.06162966080886738,
0.07131081412887158,
0.11022834894243923,
0.029727454488056405,
0.02874530849907047,
0.04032060051211898,
0.06248903854923007,
0.029253919814795328,
0.03772492896979901],
[0.04004854368932039,
0.019997889404812157,
0.015882228788518363,
0.04353102574926129,
0.10579358379062896,
0.01978682988602786,
0.030656395103419165,
0.02532714225411566,
0.055878007598142675,
0.033241874208526805,
0.02643520472773322,
0.05730265934993668,
0.05566694807935838,
0.20409455466441537,
0.05925495989869143,
0.02543267201350781,
0.0408400168847615,
0.03197551709582102,
0.033030814689742505,
0.07582313212325877],
[0.035613691698095196,
0.026543180407695613,
0.03375184990690791,
0.020337041103738004,
0.10770038669021817,
0.02291497589153578,
0.02597030601040722,
0.11419296319281998,
0.04516159831956843,
0.02897789659617129,
0.023344631689502078,
0.05060390509380818,
0.04430228672363584,
0.21196352699670598,
0.03948059387979186,
0.028643719864419725,
0.0347543801021626,
0.025779347877977754,
0.031078436052895404,
0.048885281901943]]
```
## Fig.1.a.topic proportion over time (bar chart)
```
import matplotlib.pyplot as plt
from pandas.core.frame import DataFrame
import numpy as np
fig, ax = plt.subplots(1, figsize=(32,16))
year = ['19Q1','19Q2','19Q3','19Q4','20Q1','20Q2','20Q3','20Q4','21Q1','21Q2']
col = ['#63b2ee','#76da91','#f8cb7f','#f89588','#7cd6cf','#9192ab','#7898e1','#efa666','#eddd86','#9987ce',
'#95a2ff','#fa8080','#ffc076','#fae768','#87e885','#3cb9fc','#73abf5','#cb9bff','#90ed7d','#f7a35c']
topicdis1 = DataFrame(topicdis)
topic2 = topicdis1.T
topic2 = np.array(topic2)
for i in range(20):
data = topic2[i]
if i == 0:
plt.bar(year,data)
else:
bot = sum(topic2[k] for k in range(i))
plt.bar(year,data,color=col[i],bottom = bot)
# size for xticks and yticks
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
# x and y limits
# plt.xlim(-0.6, 2.5)
# plt.ylim(-0.0, 1)
# remove spines
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
# ax.spines['bottom'].set_visible(False)
#grid
ax.set_axisbelow(True)
ax.yaxis.grid(color='gray', linestyle='dashed', alpha=0.7)
# title and legend
legend_label = ['Topic1', 'Topic2', 'Topic3', 'Topic4','Topic5','Topic6','Topic7','Topic8','Topic9','Topic10','Topic11'
,'Topic12','Topic13','Topic14','Topic15','Topic16','Topic17','Topic18','Topic19','Topic20']
plt.legend(legend_label, ncol = 1, bbox_to_anchor=([1.1, 0.65, 0, 0]), loc = 'right', borderaxespad = 0., frameon = True, fontsize = 18)
plt.rcParams['axes.titley'] = 1.05 # y is in axes-relative coordinates.
plt.title('Topic proportions over 2019 - 2021\n', loc='center', fontsize = 40)
plt.show()
```
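The `bot = sum(topic2[k] for k in range(i))` line above recomputes the running total from scratch for every topic. The same element-wise cumulative bottoms can be built once with `itertools.accumulate` (the toy 3-topics-by-2-periods data below is illustrative):

```python
# Each stacked series sits on the element-wise cumulative sum of the
# series below it - exactly what the bar-chart loop passes as `bottom=`.
from itertools import accumulate

series = [[1, 2], [3, 4], [5, 6]]   # 3 topics x 2 time periods (toy data)

# element-wise running sums across topics; prepend zeros for the first bar
totals = list(accumulate(series, lambda a, b: [x + y for x, y in zip(a, b)]))
bottoms = [[0, 0]] + totals[:-1]
print(bottoms)   # bottoms[i] is the `bottom=` argument for topic i
```

With NumPy the same thing is `np.cumsum(topic2, axis=0)` shifted down one row, which avoids the quadratic re-summing inside the plotting loop.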
## Fig.1.b.topic proportion over time (line graph) - for the best topics
```
# best topics in seed1234
ntop = [0,7,8,14,19]
year = ['19Q1','19Q2','19Q3','19Q4','20Q1','20Q2','20Q3','20Q4','21Q1','21Q2']
col = ['#63b2ee','#76da91','#f8cb7f','#f89588','#7cd6cf','#9192ab','#7898e1','#efa666','#eddd86','#9987ce',
'#95a2ff','#fa8080','#ffc076','#fae768','#87e885','#3cb9fc','#73abf5','#cb9bff','#90ed7d','#fa8080']
fig, ax = plt.subplots(1, figsize=(32,16))
# plot proportions of each topic over years
for i in ntop:
ys = [item[i] for item in topicdis]
ax.plot(year, ys, label='Topic ' + str(i+1),color = col[i],linewidth=3)
# size for xticks and yticks
plt.xticks(fontsize=25)
plt.yticks(fontsize=25)
# remove spines
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
# x and y limits
# plt.xlim(-0.2, 2.2)
plt.ylim(-0.0, 0.125)
# remove spines
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
#grid
ax.set_axisbelow(True)
ax.yaxis.grid(color='gray', linestyle='dashed', alpha=0.7)
# title and legend
legend_label = ['Topic1: Lipstick', 'Topic8: Tom Ford', 'Topic9: Beauty & Face products', 'Topic15: Eye Products','Topic20: Makeup','Topic6','Topic7','Topic8','Topic9','Topic10','Topic11'
,'Topic12','Topic13','Topic14','Topic15','Topic16','Topic17','Topic18','Topic19','Topic20']
plt.rcParams['axes.titley'] = 1.1 # y is in axes-relative coordinates.
plt.legend(legend_label, ncol = 1, bbox_to_anchor=([0.28, 0.975, 0, 0]), frameon = True, fontsize = 25)
plt.title('Topic proportions over 2019 - 2021 (5 topics) \n', loc='center', fontsize = 40)
plt.show()
```
* Drop in 2020 Q4 because of the delayed reveal of purchase patterns: items purchased in Q4 were discussed online in 2021 Q1.
## Fig.2. topic key words over time
```
import matplotlib.pyplot as plt
# best topics are [0,7,8,14,19]
t = 14
topicEvolution0 = ldaseq.print_topic_times(topic=t)
fig, axes = plt.subplots(2,5, figsize=(30, 15), sharex=True)
axes = axes.flatten()
year = ['19Q1','19Q2','19Q3','19Q4','20Q1','20Q2','20Q3','20Q4','21Q1','21Q2']
title = ['Topic1:Lipstick', 'Topic2', 'Topic3', 'Topic4','Topic5','Topic6','Topic7','Topic8: Tom Ford','Topic9: Beauty & Face Products','Topic10','Topic11'
,'Topic12','Topic13','Topic14','Topic15: Eye Products','Topic16','Topic17','Topic18','Topic19','Topic20: Makeup']
for i in range(len(topicEvolution0)):
value = [item[1] for item in topicEvolution0[i]]
index = [item[0] for item in topicEvolution0[i]]
ax = axes[i]
ax.barh(index,value,height = 0.7)
plt.rcParams['axes.titley'] = 1.25 # y is in axes-relative coordinates.
ax.set_title(year[i],
fontdict={'fontsize': 30})
ax.invert_yaxis()
ax.tick_params(axis='both', which='major', labelsize=20)
for k in 'top right left'.split():
ax.spines[k].set_visible(False)
fig.suptitle(title[t], fontsize=40)
plt.subplots_adjust(top=0.90, bottom=0.05, wspace=0.90, hspace=0.3)
plt.show()
```
## Fig.3. Trend of key words under 1 topic over time
```
from ekphrasis.classes.segmenter import Segmenter
seg = Segmenter(corpus="twitter")
ss_keywords = []
for i in range(len(keywords)):  # NOTE: `keywords` is defined in the cell below; run that cell first
s_keywords = [] # legends are keywords with first letter capitalised
for j in range(len(keywords[i])):
s_keyword = seg.segment(keywords[i][j])
s_keyword = s_keyword.capitalize()
s_keywords.append(s_keyword)
ss_keywords.append(s_keywords)
ss_keywords
# CHANGE k!
keyTopics = [0,7,8,14,19]
k = 3 # keyTopics index
topicEvolution_k = ldaseq.print_topic_times(topic=keyTopics[k])
# transfrom topicEvolutiont to dictionary
for i in range(len(topicEvolution_k)):
topicEvolution_k[i] = dict(topicEvolution_k[i])
# our most interested keywords under each topic, pick manually
keywords = [['nars', 'revlon', 'wetnwild', 'dior'], # keywords for topic 0
['tomford', 'tomfordbeauty', 'body', 'japan', 'yuta'], # keywords for topic 7
['ysl', 'yvessaintlaurent', 'nyx', 'charlottetilbury','ctilburymakeup','diormakeup'], # keywords for topic 8
['maybelline', 'makeupforever', 'mua', 'anastasiabeverlyhills',
'morphe', 'morphebrushes', 'hudabeauty', 'fentybeauty', 'jeffreestar'], # keywords for topic 14
['colourpop', 'colorpopcosmetics', 'wetnwildbeauty', 'nyx',
'nyxcosmetics', 'bhcosmetics', 'tarte','tartecosmetics',
'elfcosmetics','benefit', 'makeuprevolution', 'loreal',
'katvond', 'jeffreestarcosmetics', 'urbandecaycosmetics']] # keywords for topic 19
year = ['19Q1','19Q2','19Q3','19Q4','20Q1','20Q2','20Q3','20Q4','21Q1','21Q2']
col = ['#63b2ee','#76da91','#f8cb7f','#f89588','#7cd6cf','#9192ab','#7898e1','#efa666','#eddd86','#9987ce',
'#95a2ff','#fa8080','#ffc076','#fae768','#87e885','#3cb9fc','#73abf5','#cb9bff','#90ed7d','#fa8080']
fig, ax = plt.subplots(1, figsize=(32,16))
# plot the value of keywords
yss = []
for word in keywords[k]: # for each word in keywords
ys = [0] * 10  # one numeric slot per time period
for j in range(len(topicEvolution_k)): # for each top-20-keywords dict (one dict for one time period)
if word in topicEvolution_k[j]: # if keyword is in top 20 keywords dict
ys[j] = topicEvolution_k[j].get(word) # assign the keyword value to jth position
else:
ys[j] = 0 # else assign 0 to jth position
if k == 0 or k == 1 :
ax.plot(year, ys, linewidth=3) # plot keyword values against year
else:
yss.append(ys)
# k = 2
# ['ysl', 'yvessaintlaurent', 'nyx', 'charlottetilbury','ctilburymakeup','diormakeup']
if k == 2:
yss = [[a + b for a, b in zip(yss[0], yss[1])],
yss[2],
[a + b for a, b in zip(yss[3], yss[4])],
yss[5]]
for i in range(len(yss)):
ax.plot(year, yss[i], linewidth = 3)
# k = 3
# ['maybelline', 'makeupforever', 'mua', 'anastasiabeverlyhills',
#  'morphe', 'morphebrushes', 'hudabeauty', 'fentybeauty', 'jeffreestar']
if k == 3:
for i in range(len(keywords[3])):
if i == 4:
ax.plot(year, [a + b for a, b in zip(yss[4], yss[5])])
elif i == 5:
continue
else:
ax.plot(year, yss[i], linewidth = 3)
# k = 4:
# ['colourpop', 'colorpopcosmetics', 'wetnwildbeauty', 'nyx',
# 'nyxcosmetics', 'bhcosmetics', 'tarte','tartecosmetics',
# 'elfcosmetics','benefit', 'makeuprevolution', 'loreal',
# 'katvond', 'jeffreestarcosmetics', 'urbandecaycosmetics']
if k == 4:
for i in range(len(keywords[4])):
if i == 0:
ax.plot(year, [a + b for a, b in zip(yss[0], yss[1])])
elif i == 1:
continue
elif i == 3:
ax.plot(year, [a + b for a, b in zip(yss[3], yss[4])])
elif i == 4:
continue
else:
ax.plot(year, yss[i], linewidth = 3)
# size for xticks and yticks
plt.xticks(fontsize=25)
plt.yticks(fontsize=25)
# remove spines
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
# grid
ax.set_axisbelow(True)
ax.yaxis.grid(color='gray', linestyle='dashed', alpha=0.7)
# legend
legends = [['Nars', 'Revlon', 'wet n wild', 'Dior'],
['Tom Ford', 'Tom Ford Beauty', 'Body', 'Japan', 'Yuta'],
['YSL','NYX', 'Charlotte Tilbury','Dior'],
['Maybelline', 'Make Up For Ever','Mua','Anastasia Beverlyhills',
'Morphe','Huda Beauty','Fenty Beauty','Jeffree Star'],
['Colourpop','wet n wild', 'NYX','BH Cosmetics','Tarte',
'e.l.f','Benefit','Makeup revolution','Loreal',
'Kat Von D','Jeffree Star','Urban Decay']]
plt.legend(legends[k], ncol = 1, bbox_to_anchor=([1, 1.05, 0, 0]), frameon = True, fontsize = 25)
# title
titles = ['Brand occupation over topic "Lipstick" from 2019 - 2021',
'Effect of celebrity collaboration on brand',
'Brand occupation over topic "Beauty & Face Products" from 2019 - 2021',
'Brand occupation over topic "Eye Products" from 2019 - 2021',
'Brand occupation over topic "Makeup" from 2019 - 2021']
plt.title(titles[k], loc='left', fontsize = 40)
plt.show()
year = ['19Q1','19Q2','19Q3','19Q4','20Q1','20Q2','20Q3','20Q4','21Q1','21Q2']
col = ['#63b2ee','#76da91','#f8cb7f','#f89588','#7cd6cf','#9192ab','#7898e1','#efa666','#eddd86','#9987ce',
'#95a2ff','#fa8080','#ffc076','#fae768','#87e885','#3cb9fc','#73abf5','#cb9bff','#90ed7d','#fa8080']
fig, ax = plt.subplots(1, figsize=(32,16))
# plot the value of keywords
for word in keywords[k]: # for each word in keywords
ys = [0] * 10  # one numeric slot per time period
for j in range(len(topicEvolution_k)): # for each top-20-keywords dict (one dict for one time period)
if word in topicEvolution_k[j]: # if keyword is in top 20 keywords dict
ys[j] = topicEvolution_k[j].get(word) # assign the keyword value to jth position
else:
ys[j] = 0 # else assign 0 to jth position:
ax.plot(year, ys, linewidth=3) # plot keyword values against year
# size for xticks and yticks
plt.xticks(fontsize=25)
plt.yticks(fontsize=25)
# remove spines
ax.spines['right'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.spines['top'].set_visible(False)
# grid
ax.set_axisbelow(True)
ax.yaxis.grid(color='gray', linestyle='dashed', alpha=0.7)
# legend
plt.legend(keywords[k], ncol = 1, bbox_to_anchor=([1, 1.05, 0, 0]), frameon = True, fontsize = 25)
# title
titles = ['Brand occupation over topic "Lipstick" from 2019 - 2021',
'Effect of celebrity collaboration on brand',
'Brand occupation over topic "Beauty & Face Products" from 2019 - 2021',
'Brand occupation over topic "Eye Products" from 2019 - 2021',
'Brand occupation over topic "Makeup" from 2019 - 2021']
plt.title(titles[k], loc='left', fontsize = 40)
plt.show()
# ver 1
# for z in keywords[k]:
# print(z)
# ys = [item[z] for item in topicEvolution_k]
# print(ys)
# ax.plot(year, ys, linewidth=3)
```
# Getting Started with Azure ML Notebooks and Microsoft Sentinel
---
# Contents
1. Introduction<br><br>
2. What is a Jupyter notebook?<br>
2.1 Using Azure ML notebooks<br>
2.2 Running code in notebooks<br><br>
3. Initializing the notebook and MSTICPy<br><br>
4. Notebook/MSTICPy configuration<br><br>
4.1 Configure Microsoft Sentinel settings<br>
4.2 Configure external data providers (VirusTotal and Maxmind GeoLite2)<br>
4.3 Configure your Azure Cloud<br>
4.4 Validate your settings<br>
4.5 Loading and reloading your settings<br><br>
5. Query data from Microsoft Sentinel<br>
5.1 Load a QueryProvider<br>
5.2 Authenticate to your Microsoft Sentinel Workspace<br>
5.3 Authenticate with Azure CLI to cache your credentials<br>
5.4 Viewing the Microsoft Sentinel Schema<br>
5.5 Using built-in Microsoft Sentinel queries<br>
5.6 Customizable and custom queries<br><br>
6. Testing external data providers<br>
6.1 Threat intelligence lookup using VirusTotal<br>
6.2 IP geo-location lookup with Maxmind GeoLite2<br><br>
7. Conclusion<br><br>
8. Further Resources<br><br>
9. FAQs - Frequently Asked Questions
---
# 1. Introduction
This notebook takes you through the basics needed to get started with Azure Machine Learning (ML) Notebooks and Microsoft Sentinel.
It focuses on getting things set up and basic steps to query data.
After you've finished running this notebook you can go on to look at the following notebooks:
- **A Tour of Cybersec notebook features** - this takes you through some of the basic
features for CyberSec investigation/hunting available to you in notebooks.
- **Configuring your environment** - this covers all of the configuration options for
accessing external cybersec resources
Each topic includes 'learn more' sections to provide you with the resource to deep
dive into each of these topics. We encourage you to work through the notebook from start
to finish.
<div style="color: Black; background-color: Khaki; padding: 5px; font-size: 20px">
<p>Please run the code cells in sequence. Skipping cells will result in errors.</p>
<div style="font-size: 14px">
<p>If you encounter any unexpected errors or warnings please see the FAQ at the end of this notebook.</p>
</div>
</div>
<hr>
---
# 2. What is a Jupyter notebook?
<div style="color: Black; background-color: Gray; padding: 5px; font-size: 15px">
If you're familiar with notebooks, skip this section and go to "3. Initializing
the notebook and MSTICPy" section.
</div>
<br>
You are currently reading a Jupyter notebook. [Jupyter](http://jupyter.org/) is an interactive
development and data manipulation environment presented in a browser.
A Jupyter notebook is a document
made up of cells that contain interactive code, alongside that code's output,
and other items such as text and images (what you are looking at now is a cell of *Markdown* text).
The name Jupyter comes from the three core programming languages it supports: **Ju**lia, **Pyt**hon, and **R**.
While you can use any of these languages (and others such as Powershell) we are going to use Python in this notebook.
The majority of the notebooks on the [Microsoft Sentinel GitHub repo](https://github.com/Azure/Azure-Sentinel-Notebooks)
are written in Python. Whilst there are pros and cons to each language, Python is a well-established
language that has a large number of materials and libraries well suited for
data analysis and security investigation, making it ideal for our needs.
To use a Jupyter notebook you need a Jupyter server that will render the notebook and execute the code within it.
This can take the form of a local [Jupyter installation](https://pypi.org/project/jupyter/),
or a remotely hosted version such as
[Azure Machine Learning Notebooks](https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks).
<details>
<summary>Learn more...</summary>
<ul>
<li><a href=https://infosecjupyterbook.com/introduction.html>The Infosec Jupyter Book</a>
has more details on the technical working of Jupyter.
</li>
<li><a href=https://jupyter.org/documentation>The Jupyter Project documentation</a></li>
</ul>
</details>
<br>
---
## 2.1 Using Azure Machine Learning (ML) Notebooks
If you launched this notebook from Microsoft Sentinel, you will be running it in an Azure ML workspace.
By default, the notebook is running in the built-in notebook editor. You can also open
and run the notebook in Jupyterlab or Jupyter classic, if these environments are more familiar
to you.
<details>
<summary>Learn more...</summary>
<p>Although you can view a notebook as a static document (GitHub, for example, has a built-in
static notebook renderer), if you want to run the code in a notebook, the notebook
must be attached to a backend process, known as a Jupyter kernel. The kernel
is where your code actually runs and where all of the variables and objects
created in the code are held. The browser is just the viewer for this data.
</p>
<p>In Azure ML, the kernel runs on a virtual machine known as an Azure ML Compute.
The Compute instance can support the running of many notebooks simultaneously.</p>
<p>
Usually, the creation/attaching of a kernel for your notebook happens
seamlessly - you don't need to do anything manually. One thing that you
may need to check (especially if you are getting errors or the notebook
doesn't seem to be executing) is the version and state of the kernel.</p>
<ul>
<li><a href=https://realpython.com/jupyter-notebook-introduction/>
Installing and running a local Jupyter server
</a>
</li>
</ul>
</details>
For this notebook we are going to be using Python 3.8 (you can also choose the Python 3.6 kernel).
You can check which kernel name and version is selected by looking at
the drop down in the top left corner of the Workspace window as shown in
the image below.
<img src="https://github.com/Azure/Azure-Sentinel-Notebooks/raw/master/images/AML_kernel.png" height="200px">
This image also shows the active compute instance (to the right).
<img src="https://github.com/Azure/Azure-Sentinel-Notebooks/raw/master/images/AML_compute_kernel.png" height="200px">
If the selected kernel does not show `Azure ML Python 3.8` you can select the correct kernel by clicking on the Kernel drop-down.
<p style="border: solid; padding: 5pt"><b>Note</b>: the notebook works with Python 3.6 or later.
If you are using this notebook in another
Jupyter environment, you can choose any kernel that supports Python 3.6 or later.
</p>
<p style="border: solid; padding: 5pt; color: white; background-color: DarkOliveGreen"><b>Tip:</b>
Sometimes, your notebook may "hang" or you want to just start over.
To do this you can restart the kernel. Use the "recycle" button in the toolbar
in the upper right of the screen above the notebook.
<br>
You will need to re-run any initialization and authentication cells after doing
this since restarting the kernel wipes all variables and other state.
</p>
<img src="https://github.com/Azure/Azure-Sentinel-Notebooks/raw/master/images/AML_restart_kernel.png" height="200px">
<br>
<details>
<summary>Troubleshooting...</summary>
<p>
If you are having trouble getting the notebook running you should review
<a href=https://docs.microsoft.com/en-us/azure/machine-learning/how-to-run-jupyter-notebooks>How to run Jupyter notebooks</a>.
</p>
</details>
---
## 2.2 Running code in notebooks
The **cell** below is a code cell (note that it looks different from the
cell you are reading). The current cell is known as a *Markdown* cell
and lets you write text (including HTML) and include static images.
Select the code cell (using mouse or cursor keys) below.
Once selected, you can execute the code in it by clicking the "Play" button in the cell, or by pressing Shift+Enter.
<p style="border: solid; padding: 5pt; color: white; background-color: DarkOliveGreen">
<b>Tip</b>: You can identify which cells are code cells by selecting them.<br>
In Azure ML notebooks and VSCode, code cells have a larger border
on the left side with a "Play" button to execute the cell.<br>
In other notebook environments code and markdown cells will have
different styles but it's usually easy to distinguish them.
</p>
```
# This is our first code cell, it contains basic Python code.
# You can run a code cell by selecting it and clicking
# the Run button (to the left of the cell), or by pressing Shift + Enter.
# Any output from the code will be displayed directly below it.
print("Congratulations you just ran this code cell")
y = 2 + 2
print("2 + 2 =", y)
```
Variables set within a code cell persist between cells meaning you can chain cells together.
In this example we're using the value of y from the previous cell.
```
# Note that output from the last line of a cell is automatically
# sent to the output cell, without needing the print() function.
y + 2
```
Now that you understand the basics we can move onto more complex code.
<details>
<summary>Learn more about notebooks...</summary>
<ul>
<li><a href=https://infosecjupyterbook.com/>The Infosec Jupyter Book</a>
provides an infosec-specific intro to Python.
</li>
<li><a href=https://realpython.com/>Real Python</a>
is a comprehensive set of Python learnings and tutorials.</li>
</ul>
</details>
---
# 3. Initializing the notebook and MSTICPy
To avoid having to type (or paste) a lot of complex and repetitive code into
notebook cells, most notebooks rely on third party libraries (known in the Python
world as "packages").
Before you can use a package in your notebook, you need to do two things:
- install the package (although the Azure ML Compute has most common packages pre-installed)
- import the package (or some part of the package - usually a module/file, a function or a class)
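The install-then-import pattern can be sketched with plain Python; this is a minimal illustration using only the standard library (the `ensure_package` helper is hypothetical, and `json` is used only because it ships with Python so nothing is actually installed):

```python
import importlib.util
import subprocess
import sys

def ensure_package(name: str) -> None:
    """Install `name` with pip only if it is not already importable."""
    if importlib.util.find_spec(name) is None:
        subprocess.check_call([sys.executable, "-m", "pip", "install", name])

# Step 1: install (a no-op here, since `json` ships with Python)
ensure_package("json")

# Step 2: import the package (or part of it) before using it
import json
print(json.dumps({"installed": True}))
```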
## MSTICPy
We've created a Python package built for Microsoft Sentinel notebooks - named MSTICPy (pronounced miss-tick-pie).
It is a collection of CyberSecurity tools for data retrieval, analysis, enrichment and visualization.
## Initializing notebooks
At the start of most Microsoft Sentinel notebooks you will see an initialization cell like the one below.
This cell is specific to the MSTICPy initialization:
- it defines the minimum versions for Python and MSTICPy needed for this notebook
- it then imports and runs the `init_notebook` function.
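The minimum-version check can be sketched as plain Python; this is a simplified illustration, not the notebook's actual check:

```python
import sys

# Minimum Python version required by the notebook
REQ_PYTHON_VER = (3, 6)

# Compare the running interpreter's version against the required minimum
if sys.version_info[:2] < REQ_PYTHON_VER:
    raise RuntimeError(
        f"This notebook requires Python {'.'.join(map(str, REQ_PYTHON_VER))} or later"
    )
print("Python version check passed:", sys.version.split()[0])
```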
<details>
<summary>More about <i>init_notebook</i>...</summary>
<p>
`init_notebook` does some of the tedious work of importing other packages,
checking configuration (we'll get to configuration in a moment) and, optionally,
installing other required packages.</p>
</details>
<br>
<div style="border: solid; padding: 5pt">
<b>Note: </b>don't be alarmed if you see configuration warnings (such as "Missing msticpyconfig.yaml").<br>
We haven't configured anything yet, so this is expected.<br>
You may also see some warnings about package version conflicts. It is usually safe
to ignore these.
</div>
The (commented-out) `%pip install` line in the cell below, when uncommented, upgrades msticpy to the latest version.
```
# import some modules needed in this cell
from IPython.display import display, HTML

REQ_PYTHON_VER="3.6"
REQ_MSTICPY_VER="1.4.3"

display(HTML("Checking upgrade to latest msticpy version"))
# %pip install --upgrade --quiet msticpy[azuresentinel]

# initialize msticpy
from msticpy.nbtools import nbinit
nbinit.init_notebook(
    namespace=globals(),
    extra_imports=["urllib.request, urlretrieve"]
)
pd.set_option("display.html.table_schema", False)
```
---
# 4. Notebook/MSTICPy Configuration
Once we've done this basic initialization step,
we need to make sure we have some configuration in place.
Some of the notebook components need additional configuration to connect to our Microsoft Sentinel workspace
as well as to external services (e.g. API keys to retrieve Threat Intelligence data).
The easiest way to manage the configuration data for these services is to store it in a
configuration file (`msticpyconfig.yaml`).<br>
The alternative to this is having to type
configuration details and keys in each time you use a notebook - which is
guaranteed to put you off notebooks forever!
<details>
<summary>Learn more...</summary>
<p>
Although you don't need to know these details now, you can find more information here:
</p>
<ul>
<li><a href=https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html >MSTICPy Package Configuration</a></li>
<li><a href=https://msticpy.readthedocs.io/en/latest/getting_started/SettingsEditor.html >MSTICPy Settings Editor</a></li>
</ul>
<p>If you need a more complete walk-through of configuration, we have a separate notebook to help you:</p>
<ul>
<li><a href=https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb >Configuring Notebook Environment</a></li>
<li>And for the ultimate walk-through of how to configure all your `msticpyconfig.yaml` settings
see the <a href=https://github.com/microsoft/msticpy/blob/master/docs/notebooks/MPSettingsEditor.ipynb >MPSettingsEditor notebook</a></li>
<li>The Azure-Sentinel-Notebooks GitHub repo also contains a template `msticpyconfig.yaml`, with commented-out sections
that may also be helpful in finding your way around the settings if you want to dig into things
by hand.</li>
</ul>
</details>
<br>
---
## 4.1 Configuring Microsoft Sentinel settings
When you launched this notebook from Microsoft Sentinel it copied a basic configuration file - `config.json` -
to your workspace folder.<br>
You should be able to see this file in the file browser to the left.<br>
This file contains details about your Microsoft Sentinel workspace but has
no configuration settings for other external services that we need.
If you didn't have a `msticpyconfig.yaml` file in your workspace folder (which is likely
if this is your first use of notebooks), the `init_notebook` function should have created
one for you and populated it
with the Microsoft Sentinel workspace data taken from your config.json.
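As a rough illustration, `config.json` is a small JSON file along these lines (the keys shown and all ID values here are placeholders, not your real settings):

```python
import json

# A hypothetical config.json, as copied to the workspace folder
config_json = """
{
    "tenant_id": "22222222-2222-2222-2222-222222222222",
    "subscription_id": "33333333-3333-3333-3333-333333333333",
    "resource_group": "MyResourceGroup",
    "workspace_id": "11111111-1111-1111-1111-111111111111",
    "workspace_name": "MyWorkspace"
}
"""
config = json.loads(config_json)
print(config["workspace_name"])  # MyWorkspace
```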
<p style="border: solid; padding: 5pt; color: white; background-color: DarkOliveGreen"><b>Tip:</b>
If you do not see a "msticpyconfig.yaml" file in your user folder, click the refresh button<br>
at the top of the file browser.
</p>
We can check this now by opening the settings editor and viewing the settings.
<div style="color: Black; background-color: Khaki; border: solid; padding: 5pt;"><b>
You should not have to change anything here unless you need to add
one or more additional workspaces.</b></div>
<p/>
<p style="border: solid; padding: 5pt"><b>Note:</b>
In the Azure ML environment, the settings editor may take 10-20 seconds to appear.<br>
This is a known bug that we are working to fix.
</p>
When you have verified that this looks OK, click **Save Settings**.
<details>
<summary>Multiple Microsoft Sentinel workspaces...</summary>
<p>If you have multiple Microsoft Sentinel workspaces, you can add
them in the following configuration cell.</p>
<p>You can choose to keep one as the default or just delete this entry
if you always want to name your workspaces explicitly when you
connect.
</p>
</details>
```
from msticpy.config import MpConfigEdit
import os
from pathlib import Path

mp_conf = "msticpyconfig.yaml"

# check if MSTICPYCONFIG is already an env variable
mp_env = os.environ.get("MSTICPYCONFIG")
mp_conf = mp_env if mp_env and Path(mp_env).is_file() else mp_conf

if not Path(mp_conf).is_file():
    print(
        "No msticpyconfig.yaml was found!",
        "Please check that there is a config.json file in your workspace folder.",
        "If this is not there, go back to the Microsoft Sentinel portal and launch",
        "this notebook from there.",
        sep="\n"
    )
else:
    mpedit = MpConfigEdit(mp_conf)
    mpedit.set_tab("AzureSentinel")
    display(mpedit)
```
At this stage you should only see two entries in the Microsoft Sentinel tab:
- An entry with the name of your Microsoft Sentinel workspace
- An entry named "Default" with the same settings.
MSTICPy lets you store configurations for multiple Microsoft Sentinel workspaces
and switch between them. The "Default" entry lets you authenticate
to a specific workspace without having to name it explicitly.
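The shape of these workspace entries can be sketched as a Python dict mirroring the YAML structure (all IDs are placeholders; the "Default" entry simply repeats the named workspace's details):

```python
# Hypothetical "AzureSentinel" section of msticpyconfig.yaml, as a Python dict
sentinel_settings = {
    "Workspaces": {
        "MyWorkspace": {
            "WorkspaceId": "11111111-1111-1111-1111-111111111111",
            "TenantId": "22222222-2222-2222-2222-222222222222",
        },
        # "Default" points at the same workspace so you can connect
        # without naming it explicitly
        "Default": {
            "WorkspaceId": "11111111-1111-1111-1111-111111111111",
            "TenantId": "22222222-2222-2222-2222-222222222222",
        },
    }
}
print(sorted(sentinel_settings["Workspaces"]))  # ['Default', 'MyWorkspace']
```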
Let's add some additional configuration.
## 4.2 Configure external data providers (VirusTotal and Maxmind GeoLite2)
We are going to use [VirusTotal](https://www.virustotal.com) (VT) as an example of a popular threat intelligence source.
To use VirusTotal threat intel lookups you will need a VirusTotal account and API key.
You can sign up for a free account at the
[VirusTotal getting started page](https://developers.virustotal.com/v3.0/reference#getting-started) website.
If you are already a VirusTotal user, you can, of course, use your existing key.
<p style="border: solid; padding: 5pt; color: black; background-color: Khaki">
<b>Warning</b> If you are using a VT enterprise key we do not recommend storing this
in the msticpyconfig.yaml file.<br>
MSTICPy supports storage of secrets in
Azure Key Vault. You can read more about this
<a href=https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html#specifying-secrets-as-key-vault-secrets >in the MSTICPY docs</a><br>
For the moment, you can sign up for a free account, until you can take the time to
set up Key Vault storage.
</p>
As well as VirusTotal, we also support a range
of other threat intelligence providers: https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html
<br><br>
To add the VirusTotal details, run the following cell.
1. Select "VirusTotal" from the **Add prov** drop down
2. Click the **Add** button
3. In the left-side Details panel select **Text** as the Storage option.
4. Paste the API key in the **Value** text box.
5. Click the **Update** button to confirm your changes.
Your changes are not yet saved to your configuration file. To
do this, click on the **Save Settings** button at the bottom of the dialog.
If you are unclear about what anything in the configuration editor means, use the **Help** drop-down. This
has instructions and links to more detailed documentation.
```
display(mpedit)
mpedit.set_tab("TI Providers")
```
Our notebooks commonly use IP geo-location information.
In order to enable this we are going to set up [MaxMind GeoLite2](https://www.maxmind.com)
to provide geolocation lookup services for IP addresses.
GeoLite2 uses a downloaded database which requires an account key to download.
You can sign up for a free account and a license key at
[The Maxmind signup page - https://www.maxmind.com/en/geolite2/signup](https://www.maxmind.com/en/geolite2/signup).
<br>
<details>
<summary>Using IPStack as an alternative to GeoLite2...</summary>
<p>
For more details see the
<a href=https://msticpy.readthedocs.io/en/latest/data_acquisition/GeoIPLookups.html >
MSTICPy GeoIP Providers documentation</a>
</p>
</details>
<br>
Once you have an account, run the following cell to add the Maxmind GeoIP Lite details to your configuration.
The procedure is similar to the one we used for VirusTotal:
1. Select the "GeoIPLite" provider from the **Add prov** drop-down
2. Click **Add**
3. Select **Text** Storage and paste the license (API/Auth) key into the text box
4. Click **Update**
5. Click **Save Settings** to write your settings to your configuration.
```
display(mpedit)
mpedit.set_tab("GeoIP Providers")
```
## 4.3 Configure your Azure Cloud
If you are running in a sovereign or government cloud (i.e. not the Azure global cloud)
you must set up Azure functions to use the correct authentication and
resource management authorities.
<p style="border: solid; padding: 5pt"><b>Note:</b>
This is not required if using the Azure Global cloud (most common).<br>
If the domain of your Microsoft Sentinel or Azure Machine learning does
not end with '.azure.com' you should set the appropriate cloud
for your organization
</p>
```
display(mpedit)
mpedit.set_tab("Azure")
```
## 4.4 Validate your settings
- click on the **Validate settings** button.
You may see some warnings about missing sections, but there should be none relating to the Microsoft Sentinel, TI Providers or GeoIP Providers settings.
Click on the **Close** button to hide the validation output.
If you need to make any changes as a result of the Validation,
remember to save your changes by clicking the **Save File** button.
## 4.5 Loading and re-loading your saved settings
<h3 style="color: Black; background-color: Khaki; padding: 5px">
Although your settings are saved they are not yet loaded into MSTICPy. Please run the next cell.</h3>
<hr>
You have saved your settings to a "msticpyconfig.yaml" file on disk. This file<br>
is a YAML file so is easy to read with a text editor.<br>
However, MSTICPy does not automatically reload these settings.
Run the following cell to force it to reload from the new configuration file.
```
import msticpy
msticpy.settings.refresh_config()
```
<details>
<summary>Optional - set a MSTICPYCONFIG environment variable...</summary>
Setting a `MSTICPYCONFIG` environment variable to point to the location of your
`msticpyconfig.yaml` configuration file lets MSTICPy find this file. By
default, it looks in the current directory only.
Having a MSTICPYCONFIG variable set is especially convenient if you are
using notebooks locally or have a deep or complex folder structure.
<b>You may not need this on Azure ML</b> - the
`nb_check` script (in the initialization cell) will automatically set the MSTICPYCONFIG
environment variable if it is not already set. The script will try to find a
`msticpyconfig.yaml` in the current directory or in the root of your AML user folder.
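Setting the variable from within a notebook cell can be sketched as follows (the path shown is a placeholder; substitute the real location of your file):

```python
import os
from pathlib import Path

# Hypothetical location - point this at your actual msticpyconfig.yaml
config_path = Path.home() / "msticpyconfig.yaml"
os.environ["MSTICPYCONFIG"] = str(config_path)
print(os.environ["MSTICPYCONFIG"])
```

Note that a variable set this way lasts only for the current kernel session; to make it permanent, set it in your shell profile or startup configuration.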
For more detailed instructions on this, please see the *Setting the path to your msticpyconfig.yaml* section
in the <a href=https://github.com/Azure/Azure-Sentinel-Notebooks/blob/master/ConfiguringNotebookEnvironment.ipynb >Configuring Notebook Environment</a>.
<p style="border: solid; padding: 5pt; color: black; background-color: #AA4000">
<b>Warning</b>: If you are storing secrets such as API keys in the file you should<br>
probably opt either to store this file on the compute instance or to use<br>
Azure Key Vault to store the secrets.<br>
Read more about using KeyVault
<a href=https://msticpy.readthedocs.io/en/latest/getting_started/msticpyconfig.html#specifying-secrets-as-key-vault-secrets >in the MSTICPY docs</a>
</p>
</details>
---
# 5. Querying Data from Microsoft Sentinel
Now that we have configured the details for your Microsoft Sentinel workspace,
we can go ahead and test it. We will do this with `QueryProvider` class from MSTICPy.
The QueryProvider class has one main function:
- querying data from a data source to make it available to view and analyze in the notebook.
Query results are always returned as *pandas* DataFrames. If you are new
to using *pandas*, look at the **Introduction to Pandas** section in
the **A Tour of Cybersec notebook features** notebook.
<details>
<summary>Other data sources...</summary>
<p>
The query provider supports several different data sources - the one we'll use
here is "AzureSentinel".</p>
<p>
Other data sources supported by the `QueryProvider` class include Microsoft Defender for Endpoint,
Splunk, Microsoft Graph API, Azure Resource Graph but these are not covered here.
</p>
</details>
<br>
Most query providers come with a range of built-in queries
for common data operations. You can also use a query provider to run custom queries against
Microsoft Sentinel data.
Once you've loaded a QueryProvider you'll normally need to authenticate
to the data source (in this case Microsoft Sentinel).
<details>
<summary><b>Learn more...</b></summary>
<p>
More details on configuring and using QueryProviders:
</p>
<ul>
<li>
<a href=https://msticpy.readthedocs.io/en/latest/data_acquisition/DataProviders.html#instantiating-a-query-provider >MSTICPy Documentation</a>.</li>
</ul>
</details>
<br>
## 5.1 Load a QueryProvider
To start, we are going to load up a QueryProvider
for Microsoft Sentinel, pass it the
details for our workspace that we just stored in the msticpyconfig file, and connect.
<div style="border: solid; padding: 5pt"><b>Note:</b>
If you see a warning "Runtime dependency of PyGObject is missing" when loading the<br>
Microsoft Sentinel driver, please see the FAQ section at the end of this notebook.<br>
The warning does not impact any functionality of the notebooks.
</div>
```
# Initialize a QueryProvider for Microsoft Sentinel
qry_prov = QueryProvider("AzureSentinel")
```
## 5.2 Authenticate to the Microsoft Sentinel workspace
Next we need to authenticate. The code cell immediately following this section
will start the authentication process. When you run the cell it will trigger an Azure authentication sequence.
The connection process will ask us to authenticate to our Microsoft Sentinel workspace using
[device authorization](https://docs.microsoft.com/en-us/azure/active-directory/develop/v2-oauth2-device-code)
with our Azure credentials.
Device authorization adds an additional factor to the authentication by generating a
one-time device code that you supply as part of the authentication process.
The process is as follows:
1. generate and display a device code
<img src="https://github.com/Azure/Azure-Sentinel-Notebooks/raw/master/images/device_auth_code.png" height="300px">
In some Jupyter notebook interfaces, step 1 has a clickable button that will copy the code to the clipboard
and then open a browser window to the devicelogin URI.<br>The default AML notebook interface does not
do this. You need to manually select and copy the code to the clipboard, then click on the devicelogin link
where you can paste the code in to start the authentication process.
2. go to http://microsoft.com/devicelogin and paste in the code
3. authenticate with your Azure credentials
<p style="border: solid; padding: 5pt"><b>Note:</b><br>
1. The devicelogin URL may be different for government or national Azure clouds.<br>
2. You must use an account that has permissions to the Microsoft Sentinel workspace<br>
(<b>Microsoft Sentinel Reader</b> or <b>Log Analytics Reader</b>).</p>
<br>
<img src="https://github.com/Azure/Azure-Sentinel-Notebooks/raw/master/images/device_authn.png" height="300px">
4. Once the last box in the previous image is shown you can close that tab and return to the notebook.
You should see an image like the one shown below.
<img src="https://github.com/Azure/Azure-Sentinel-Notebooks/raw/master/images/device_auth_complete.png" height="300px">
<br>
```
# Get the Microsoft Sentinel workspace details from msticpyconfig
# Loading WorkspaceConfig with no parameters will use the details
# of your "Default" workspace (see the Configuring Microsoft Sentinel settings section earlier)
# If you want to connect to a specific workspace use this syntax:
# ws_config = WorkspaceConfig(workspace="WorkspaceName")
# ('WorkspaceName' should be one of the workspaces defined in msticpyconfig.yaml)
ws_config = WorkspaceConfig()
# Connect to Microsoft Sentinel with our QueryProvider and config details
qry_prov.connect(ws_config)
```
## 5.3 Authenticating with Azure CLI to cache your logon credentials
Normally, you need to re-authenticate if you restart the kernel in the current notebook.
You also have to authenticate again if you open another notebook.
You can avoid this by using Azure CLI to create a cached logon on the AML compute (this is described
in the FAQ section at the end of the notebook).<br>
<div style="border: solid; padding: 5pt; color: white; background-color: DarkOliveGreen"><b>Tip:</b>
This also works with other types of Jupyter hub: for example, if you are running the
Jupyter server on your local computer or using another remote Jupyter server.<br>
</div>
<p>
The Azure CLI
logon has to be performed on the system where your Jupyter server is running. If
you are using a remote Jupyter server, you can do this by opening a terminal
and running <pre>az login</pre> on the remote machine.</p>
More simply, you can create an empty cell in a notebook and run
<pre>!az login</pre><br>
Azure CLI must be installed on the system where your Jupyter server is running.
This information is also available in a
[Wiki article](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/Caching-credentials-with-Azure-CLI)
The Azure CLI logon will cache the token even if you restart the kernel.<br>
This <i>refresh token</i> is cached on the compute (Jupyter server)
and can be re-used until it times out.<br>
You will need to re-authenticate if you restart your compute instance/
Jupyter server or switch to a different compute/server.
You can try this by uncommenting the line in the cell below and running it.
```
#!az login
```
## 5.4 Viewing the Microsoft Sentinel workspace data schema
You can use the schema to help understand what data is available to query.<br>
This also works as a quick test to ensure that you've authenticated successfully
to your Microsoft Sentinel workspace.
The AzureSentinel QueryProvider has a `schema_tables` property that lets us get a list of tables.
It also has the `schema` property which returns a dictionary of \<table_name, column_schema\>.
For each table the column names and data types are shown.
<p style="border: solid; padding: 5pt"><b>Note:</b>
If you see an error here, make sure that you ran the previous cells in this
section and that your authentication succeeded.
</p>
```
# Get list of tables in our Workspace with the 'schema_tables' property
print("Sample of first 10 tables in the schema")
qry_prov.schema_tables[:10] # We are outputting only a sample of tables for brevity
# remove the "[:10]" to see the whole list
```
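To illustrate the shape of the `schema` property's return value, here is a hypothetical fragment (the table and column names are examples only, not your workspace's actual schema):

```python
# `schema` maps each table name to a {column_name: data_type} dict
schema_fragment = {
    "SigninLogs": {"TimeGenerated": "datetime", "UserPrincipalName": "string"},
    "OfficeActivity": {"TimeGenerated": "datetime", "Operation": "string"},
}
for table, columns in schema_fragment.items():
    print(table, "->", ", ".join(columns))
```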
## 5.5 Using built-in Microsoft Sentinel queries in MSTICPy
MSTICPy includes a number of built-in queries that you can run.
Each query is a function (or method) that's a member of the
`QueryProvider` object that you loaded earlier. Notice, in
the list of queries below, each query has a two-part name, separated
by a dot. The first part is a container name that helps to group
the queries in related sets. The second part (after the dot) is the
name of the query function itself.
You run a
query by specifying the name of the `QueryProvider` instance,
followed by the fully-qualified name (i.e. container_name + "." + query_name),
followed by a set of parentheses. For example:
```Python
qry_prov.Azure.get_vmcomputer_for_host(host_name="my-computer")
```
Inside the parentheses is where you specify parameters to the query
(nearly all queries require one or more parameters).
List available queries with the QueryProvider `.list_queries()` function<br>
and get specific details about a query, and its required and optional
parameters by running the query function with "?" as the only parameter.
```
# Get a sample of available queries
print("Sample of queries")
print("=================")
print(qry_prov.list_queries()[::5]) # showing a sample - remove "[::5]" for whole list
# Get help about a query by passing "?" as a parameter
print("\nHelp for 'list_all_signins_geo' query")
print("=====================================")
qry_prov.Azure.list_all_signins_geo("?")
```
### MSTICPy Query browser
The query browser combines both of the previous functions in a scrollable<br>
and filterable list. For the selected query, it shows the required and<br>
optional parameters, together with the full text of the query.<br>
You cannot execute queries from the browser but you can copy and paste
the example shown below the help for each query.
```
qry_prov.browse_queries()
```
### Run some queries
### Most queries require time parameters!
Datetime strings are **painful** to type in and keep track of.
We can use MSTICPy's `nbwidgets.QueryTime` class to define "start" and "end" for queries.
Each query provider has its own `QueryTime` instance built-in. If the query
needs "start" and "end" parameters and you do not supply them, the query
will take the time from this built-in timerange control.
```
# Open the query time control for our query provider
qry_prov.query_time
```
### Run a query using this time range.
Query results are returned as a [Pandas DataFrame](https://pandas.pydata.org/).
A dataframe is a tabular data structure much like a spreadsheet or
database table.
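A small, self-contained example of a DataFrame (made-up data, not real query output) shows the rows-and-columns structure:

```python
import pandas as pd

# A tiny DataFrame with made-up sign-in-like data
df = pd.DataFrame(
    {
        "TimeGenerated": pd.to_datetime(["2021-09-01 10:00", "2021-09-01 11:30"]),
        "Account": ["alice@contoso.com", "bob@contoso.com"],
        "ResultType": [0, 50126],
    }
)
print(df.shape)           # (rows, columns)
print(list(df.columns))
```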
<details>
<summary>Learn more about DataFrames...</summary>
<p>
<ul>
<li>
The <a href=https://pandas.pydata.org/ >Pandas website</a> has an extensive user guide.</li>
<li>The <i>A Tour of Cybersec notebook features</i> notebook has a brief introduction to common pandas operations.</li>
</ul>
</p>
</details>
```
# The time parameters are taken from the qry_prov time settings
# but you can override this by supplying explicit "start" and "end" datetimes
signins_df = qry_prov.Azure.list_all_signins_geo()
if signins_df.empty:
    md("The query returned no rows for this time range. You might want to increase the time range")
# display first 5 rows of any results
signins_df.head() # If you have no data you will just see the column headings displayed
```
## 5.6 Customizable and custom queries
### Customizing built-in queries
Most built-in queries support the "add_query_items" parameter.
You can use this to append additional filters or other operations to the built-in queries.
Microsoft Sentinel queries use the Kusto Query Language (KQL).
<div style="border: solid; padding: 5pt"><b>Note:</b>
If the query in following cell returns too many or too few results you can change the "28"
in the query below to a smaller or larger number of days.
</div>
<br>
<details>
<summary>Learn more about KQL query syntax...</summary>
<p>
<a href=https://aka.ms/kql >Kusto Query Language reference</a><br>
</p>
</details>
<br>
```
from datetime import datetime, timedelta
qry_prov.SecurityAlert.list_alerts(
    start=datetime.utcnow() - timedelta(28),
    end=datetime.utcnow(),
    add_query_items="| summarize NumAlerts=count() by AlertName"
)
```
### Custom queries
Another way to run queries is to pass a full KQL query string to the query provider.
This will run the query against the workspace connected to above, and will return the data
as a DataFrame.
```
# Define our query
test_query = """
OfficeActivity
| where TimeGenerated > ago(1d)
| take 10
"""
# Pass that query to our QueryProvider
office_events_df = qry_prov.exec_query(test_query)
display(office_events_df.head())
```
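Because the query is just a Python string, you can parameterize it with f-strings before passing it to `exec_query`; a sketch (the table name, lookback, and row limit are illustrative values):

```python
# Build a KQL query string from Python variables
table = "OfficeActivity"
lookback_days = 1
row_limit = 10

custom_query = f"""
{table}
| where TimeGenerated > ago({lookback_days}d)
| take {row_limit}
"""
print(custom_query.strip())
# The resulting string could then be run with:
# qry_prov.exec_query(custom_query)
```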
<details>
<summary>Learn more about MSTICPy pre-defined queries...</summary>
<ul>
<li>
<a href=https://msticpy.readthedocs.io/en/latest/data_acquisition/DataProviders.html#running-an-pre-defined-query >Running MSTICPy pre-defined queries</a>
</li>
<li>
<a href=https://msticpy.readthedocs.io/en/latest/data_acquisition/DataQueries.html>MSTICPy query reference</a>
</li>
</ul>
</details>
<br>
---
# 6. Testing external data providers
Threat intelligence and IP location are two common enrichments that you might apply to queried data.
Let's test the VirusTotal provider with a known bad IP Address.
<details>
<summary>Learn more...</summary>
<p>
</p>
<ul>
<li>More details are shown in the <i>A Tour of Cybersec notebook features</i> notebook</li>
<li><a href=https://msticpy.readthedocs.io/en/latest/data_acquisition/TIProviders.html >Threat Intel Lookups in MSTICPy</a></li>
</ul>
</details>
<br>
## 6.1 Threat intelligence lookup using VirusTotal
```
# Create our TI provider
ti = TILookup()
# Lookup an IP Address
ti_resp = ti.lookup_ioc("85.214.149.236", providers=["VirusTotal"])
ti_df = ti.result_to_df(ti_resp)
ti.browse_results(ti_df, severities="all")
```
## 6.2 IP geolocation lookup with Maxmind GeoLite2
<div style="border: solid; padding: 5pt"><b>Note:</b>
You may see the GeoLite driver downloading its database the first time you run this.
</div>
<br>
<details>
<summary>Learn more about MSTICPy GeoIP providers...</summary>
<p>
<a href=https://msticpy.readthedocs.io/en/latest/data_acquisition/GeoIPLookups.html >MSTICPy GeoIP Providers</a>
</p>
</details>
<br>
```
geo_ip = GeoLiteLookup()
raw_res, ip_entity = geo_ip.lookup_ip("85.214.149.236")
display(ip_entity[0])
```
---
# 7. Conclusion
In this notebook, we've gone through the basics of installing MSTICPy and setting up configuration.
We also briefly introduced:
- QueryProviders and querying data from Microsoft Sentinel
- Threat Intelligence lookups using VirusTotal
- Geo-location lookups using MaxMind GeoLite2
We encourage you to run through the **A Tour of Cybersec notebook features** notebook
to get a better feel for some more of the capabilities of notebooks and MSTICPy.<br>
This notebook includes:
- more examples of queries
- visualizing your data
- a brief introduction to using pandas to manipulate your data.
Also try out any of the notebooks in the [Microsoft Sentinel Notebooks GitHub repo](https://github.com/Azure/Azure-Sentinel-Notebooks)
---
# 8. Further resources
- [Jupyter Notebooks: An Introduction](https://realpython.com/jupyter-notebook-introduction/)
- [Threat Hunting in the cloud with Azure Notebooks](https://medium.com/@maarten.goet/threat-hunting-in-the-cloud-with-azure-notebooks-supercharge-your-hunting-skills-using-jupyter-8d69218e7ca0)
- [MSTICPy documentation](https://msticpy.readthedocs.io/)
- [Microsoft Sentinel Notebooks documentation](https://docs.microsoft.com/en-us/azure/sentinel/notebooks)
- [The Infosec Jupyterbook](https://infosecjupyterbook.com/introduction.html)
- [Linux Host Explorer Notebook walkthrough](https://techcommunity.microsoft.com/t5/azure-sentinel/explorer-notebook-series-the-linux-host-explorer/ba-p/1138273)
- [Why use Jupyter for Security Investigations](https://techcommunity.microsoft.com/t5/azure-sentinel/why-use-jupyter-for-security-investigations/ba-p/475729)
- [Security Investigations with Microsoft Sentinel & Notebooks](https://techcommunity.microsoft.com/t5/azure-sentinel/security-investigation-with-azure-sentinel-and-jupyter-notebooks/ba-p/432921)
- [Pandas Documentation](https://pandas.pydata.org/pandas-docs/stable/user_guide/index.html)
- [Bokeh Documentation](https://docs.bokeh.org/en/latest/)
---
# 9. FAQs
The following links take you to short articles in the Azure-Sentinel-Notebooks Wiki
that answer common questions.
## [How can I download all Azure-Sentinel-Notebooks notebooks to my Azure ML workspace?](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/How-can-I-download-all-Azure-Sentinel-Notebooks-notebooks-to-my-Azure-ML-workspace%3F)
## [Can I install MSTICPy by default on a new AML compute?](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/Can-I-install-MSTICPy-by-default-on-a-new-AML-compute%3F)
## [I see error "Runtime dependency of PyGObject is missing" when I load a query provider](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/%22Runtime-dependency-of-PyGObject-is-missing%22-error)
## [MSTICPy and other packages do not install properly when switching between the Python 3.6 or 3.8 Kernels](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/MSTICPy-and-other-packages-do-not-install-properly-when-switching-between-the-Python-3.6-or-3.8-Kernels)
## [My user account/credentials do not get cached between notebook runs - using Azure CLI](https://github.com/Azure/Azure-Sentinel-Notebooks/wiki/Caching-credentials-with-Azure-CLI)
```
######## snakemake preamble start (automatically inserted, do not edit) ########
# (binary pickle payload omitted: the preamble injects the `snakemake` object,
#  which exposes .input, .output, .wildcards, .log and .config to this script)
######## snakemake preamble end #########
import os
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from matplotlib import patches, gridspec
import seaborn as sns
from IPython.display import display_markdown, Markdown
from gene_track_utils import plot_yanocomp_miclip
%matplotlib inline
sns.set(font='Arial')
plt.rcParams['svg.fonttype'] = 'none'
style = sns.axes_style('white')
style.update(sns.axes_style('ticks'))
style['xtick.major.size'] = 1
style['ytick.major.size'] = 1
sns.set(font_scale=1.2, style=style)
pal = sns.color_palette(['#0072b2', '#d55e00', '#009e73', '#f0e442', '#cc79a7', '#56b4e9', '#e69f00'])
cmap = ListedColormap(pal.as_hex())
sns.set_palette(pal)
sns.palplot(pal)
plt.show()
display_markdown(Markdown(f'# {snakemake.wildcards.gene_id} m6A gene track'))
plot_yanocomp_miclip(
snakemake.wildcards.gene_id,
snakemake.input.miclip_cov,
snakemake.input.miclip_peaks,
snakemake.input.der_sites,
{'FIP37-dependent only Yanocomp sites': 'yanocomp/fip37_vs_col0__not__fio1_vs_col0.posthoc.bed',
'FIP37/FIO1-dependent Yanocomp sites': 'yanocomp/fio1_vs_col0__and__fip37_vs_col0.posthoc.bed',
'FIO1-dependent only Yanocomp sites': 'yanocomp/fio1_vs_col0__not__fip37_vs_col0.posthoc.bed'},
snakemake.input.gtf
)
plt.savefig(snakemake.output.gene_track)
plt.show()
```
```
%load_ext notexbook
%texify
```
# PyTorch `nn` package
### `torch.nn`
Computational graphs and autograd are a very powerful paradigm for defining complex operators and automatically taking derivatives; however for large neural networks raw autograd can be a bit too low-level.
When building neural networks we frequently think of arranging the computation into layers, some of which
have learnable parameters which will be optimized during learning.
In TensorFlow, packages like **Keras** (and the older **TensorFlow-Slim** and **TFLearn**) provide higher-level abstractions over raw computational graphs that are useful for building neural networks.
In PyTorch, the `nn` package serves this same purpose.
The `nn` package defines a set of `Module`s, which are roughly equivalent to neural network layers.
A `Module` receives input `Tensor`s and computes output `Tensor`s, but may also hold internal state such as `Tensor`s containing learnable parameters.
The `nn` package also defines a set of useful `loss` functions that are commonly used when
training neural networks.
In this example we use the `nn` package to implement our two-layer network:
```
import torch
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H), # xW+b
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
[p.shape for p in model.parameters()]
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
mseloss = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(500):
# Forward pass: compute predicted y by passing x to the model. Module objects
# override the __call__ operator so you can call them like functions. When
# doing so you pass a Tensor of input data to the Module and it produces
# a Tensor of output data.
y_pred = model(x)
# Compute and print loss. We pass Tensors containing the predicted and true
# values of y, and the loss function returns a Tensor containing the
# loss.
loss = mseloss(y_pred, y)
if t % 50 == 0:
print(t, loss.item())
# Zero the gradients before running the backward pass.
model.zero_grad()
# Backward pass: compute gradient of the loss with respect to all the learnable
# parameters of the model. Internally, the parameters of each Module are stored
# in Tensors with requires_grad=True, so this call will compute gradients for
# all learnable parameters in the model.
loss.backward()
# Update the weights using gradient descent. Each parameter is a Tensor, so
# we can access its gradients like we did before.
with torch.no_grad():
for param in model.parameters():
param -= learning_rate * param.grad
```
---
### `torch.optim`
Up to this point we have updated the weights of our models by manually mutating the Tensors holding learnable parameters (**using `torch.no_grad()` or `.data` to avoid tracking history in autograd**).
This is not a huge burden for simple optimization algorithms like stochastic gradient descent, but in practice we often train neural networks using more sophisticated optimizers like `AdaGrad`, `RMSProp`, or `Adam`.
The optim package in PyTorch abstracts the idea of an optimization algorithm and provides implementations of commonly used optimization algorithms.
Let's finally modify the previous example in order to use `torch.optim` and the `Adam` algorithm:
```
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Use the nn package to define our model and loss function.
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
loss_fn = torch.nn.MSELoss(reduction='sum')
```
##### Model and Optimiser (w/ Parameters) at a glance

<span class="fn"><i>Source:</i> [1] - _Deep Learning with PyTorch_ </span>
```
# Use the optim package to define an Optimizer that will update the weights of
# the model for us. Here we will use Adam; the optim package contains many other
# optimization algorithms. The first argument to the Adam constructor tells the
# optimizer which Tensors it should update.
learning_rate = 1e-4
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
for t in range(500):
# Forward pass: compute predicted y by passing x to the model.
y_pred = model(x)
loss = loss_fn(y_pred, y)
if t % 50 == 0:
print(t, loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
### Can we do better?
---
##### The Learning Process

<span class="fn"><i>Source:</i> [1] - _Deep Learning with PyTorch_ </span>
Possible scenarios:
- Specify models that are more complex than a sequence of existing (pre-defined) modules;
- Customise the learning procedure (e.g. _weight sharing_ ?)
- ?
For these cases, **PyTorch** allows us to define our own custom modules by subclassing `nn.Module` and defining a `forward` method which receives the input data (i.e. a `Tensor`) and returns the output (i.e. a `Tensor`).
It is in the `forward` method that **all** the _magic_ of Dynamic Graph and `autograd` operations happen!
### PyTorch: Custom Modules
Let's implement our **two-layers** model as a custom `nn.Module` subclass
```
class TwoLayerNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
"""
In the constructor we instantiate two nn.Linear modules and assign them as
member variables.
"""
super(TwoLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.hidden_activation = torch.nn.ReLU()
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
"""
In the forward function we accept a Tensor of input data and we must return
a Tensor of output data. We can use Modules defined in the constructor as
well as arbitrary operators on Tensors.
"""
l1 = self.linear1(x)
h_relu = self.hidden_activation(l1)
y_pred = self.linear2(h_relu)
return y_pred
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Construct our model by instantiating the class defined above
model = TwoLayerNet(D_in, H, D_out)
# Construct our loss function and an Optimizer. The call to model.parameters()
# in the SGD constructor will contain the learnable parameters of the two
# nn.Linear modules which are members of the model.
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
# Forward pass: Compute predicted y by passing x to the model
y_pred = model(x)
# Compute and print loss
loss = criterion(y_pred, y)
if t % 50 == 0:
print(t, loss.item())
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
#### What happened really? Let's have a closer look
```python
>>> model = TwoLayerNet(D_in, H, D_out)
```
This calls `TwoLayerNet.__init__` **constructor** method (_implementation reported below_ ):
```python
def __init__(self, D_in, H, D_out):
"""
In the constructor we instantiate two nn.Linear modules and assign them as
member variables.
"""
super(TwoLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.hidden_activation = torch.nn.ReLU()
self.linear2 = torch.nn.Linear(H, D_out)
```
1. First thing, we call the `nn.Module` constructor which sets up the housekeeping
- If you forget to do that, you will get an error message reminding you to call it before using any `nn.Module` capabilities
2. We create a class attribute for each layer (`OP/Tensor`) that we intend to include in our model
- These can be also `Sequential` as in _Submodules_ or *Block of Layers*
- **Note**: We are **not** defining the Graph yet, just the layer!
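A minimal check of this registration mechanism (the class and attribute names below are illustrative): assigning an `nn.Module` as an attribute in `__init__` is enough for the parent to track it as a submodule, while a plain attribute is ignored.

```python
import torch

class Tiny(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Assigning a Module attribute registers it as a submodule...
        self.linear1 = torch.nn.Linear(4, 3)
        # ...while a plain (non-Module) attribute is NOT registered.
        self.note = "not a module"

tiny = Tiny()
print([name for name, _ in tiny.named_modules() if name])  # → ['linear1']
```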
```python
>>> y_pred = model(x)
```
1. First thing to notice: the `model` object is **callable**
- It means `nn.Module` is implementing a `__call__` method
- We **don't** need to re-implement that!
2. (in fact) The `nn.Module` class will call `self.forward` - in a [Template Method Pattern](https://en.wikipedia.org/wiki/Template_method_pattern) fashion
- for this reason, we have to define the `forward` method
- (needless to say) the `forward` method implements the **forward** pass of our model
From `torch/nn/modules/module.py`:
```python
class Module(object):
# [...] omissis
def __call__(self, *input, **kwargs):
for hook in self._forward_pre_hooks.values():
result = hook(self, input)
if result is not None:
if not isinstance(result, tuple):
result = (result,)
input = result
if torch._C._get_tracing_state():
result = self._slow_forward(*input, **kwargs)
else:
result = self.forward(*input, **kwargs)
for hook in self._forward_hooks.values():
hook_result = hook(self, input, result)
if hook_result is not None:
result = hook_result
if len(self._backward_hooks) > 0:
var = result
while not isinstance(var, torch.Tensor):
if isinstance(var, dict):
var = next((v for v in var.values() if isinstance(v, torch.Tensor)))
else:
var = var[0]
grad_fn = var.grad_fn
if grad_fn is not None:
for hook in self._backward_hooks.values():
wrapper = functools.partial(hook, self)
functools.update_wrapper(wrapper, hook)
grad_fn.register_hook(wrapper)
return result
# [...] omissis
def forward(self, *input):
r"""Defines the computation performed at every call.
Should be overridden by all subclasses.
.. note::
Although the recipe for forward pass needs to be defined within
this function, one should call the :class:`Module` instance afterwards
instead of this since the former takes care of running the
registered hooks while the latter silently ignores them.
"""
raise NotImplementedError
```
**Take away messages** :
1. We don't need to implement the `__call__` method at all in our custom model subclass
2. We don't need to call the `forward` method directly.
- We could, but we would miss the flexibility of _forward_ and _backward_ hooks
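A small sketch of why this matters (the hook and variable names are ours): a forward hook fires when the module is *called*, but is silently skipped when `forward` is invoked directly.

```python
import torch

model = torch.nn.Linear(4, 2)
calls = []

def shape_hook(module, inputs, output):
    # Record the input/output shapes seen by the module.
    calls.append((tuple(inputs[0].shape), tuple(output.shape)))

handle = model.register_forward_hook(shape_hook)
_ = model(torch.randn(3, 4))          # __call__ runs the hook
_ = model.forward(torch.randn(3, 4))  # direct forward() skips all hooks
handle.remove()
print(calls)  # → [((3, 4), (3, 2))]
```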
##### Last but not least
```python
>>> optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
```
Since `model` is a subclass of `nn.Module`, `model.parameters()` will automatically collect all the `Layers/OP/Tensors/Parameters` that require gradient computation, so they can be handed to the optimizer and updated via the `autograd` engine during the *backward* (optimisation) step.
###### `model.named_parameters`
```
for name_str, param in model.named_parameters():
print("{:21} {:19} {}".format(name_str, str(param.shape), param.numel()))
```
**WAIT**: What happened to `hidden_activation`?
```python
self.hidden_activation = torch.nn.ReLU()
```
So, it looks like we are registering in the constructor a submodule (`torch.nn.ReLU`) that has no parameters.
Generalising, if we had **more** hidden layers, we would have needed to define one of these submodules for each pair of layers (at least).
Looking back at the implementation of the `TwoLayerNet` class as a whole, it looks like a bit of a waste.
**Can we do any better here?** 🤔
---
Well, in this particular case, we could implement the `ReLU` activation _manually_; it is not that difficult, is it?
$\rightarrow$ As we already did before, we could use the [`torch.clamp`](https://pytorch.org/docs/stable/torch.html?highlight=clamp#torch.clamp) function
> `torch.clamp`: Clamp all elements in input into the range [ min, max ] and return a resulting tensor
`t.clamp(min=0)` is **exactly** the ReLU that we want.
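A quick sanity check that the two are interchangeable:

```python
import torch

t = torch.tensor([-2.0, -0.5, 0.0, 1.5])
# clamp(min=0) zeroes out the negatives, exactly like ReLU.
assert torch.equal(t.clamp(min=0), torch.relu(t))
print(t.clamp(min=0))  # → tensor([0.0000, 0.0000, 0.0000, 1.5000])
```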
```
class TwoLayerNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
"""
In the constructor we instantiate two nn.Linear modules and assign them as
member variables.
"""
super(TwoLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
"""
In the forward function we accept a Tensor of input data and we must return
a Tensor of output data. We can use Modules defined in the constructor as
well as arbitrary operators on Tensors.
"""
h_relu = self.linear1(x).clamp(min=0)
y_pred = self.linear2(h_relu)
return y_pred
```
###### Sorted!
That was easy, wasn't it? **However**, what if we wanted *other* activation functions (e.g. `tanh`,
`sigmoid`, `LeakyReLU`)?
### Introducing the Functional API
PyTorch has functional counterparts of every `nn` module.
By _functional_ here we mean "having no internal state", or, in other words, "whose output value is solely and fully determined by the value of its input arguments".
Indeed, `torch.nn.functional` provides many of the same operations we find in `nn`, but with any parameters passed as arguments to the function call.
For instance, the functional counterpart of `nn.Linear` is `nn.functional.linear`, which is a function that has signature `linear(input, weight, bias=None)`.
The `weight` and `bias` parameters are **arguments** to the function.
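To make the correspondence concrete, the stateful module and its functional counterpart produce the same result when fed the same parameters:

```python
import torch
import torch.nn.functional as F

lin = torch.nn.Linear(5, 3)
x = torch.randn(2, 5)

# The module holds weight/bias internally; the functional form takes them as arguments.
out_module = lin(x)
out_functional = F.linear(x, lin.weight, lin.bias)
assert torch.allclose(out_module, out_functional)
```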
Back to our `TwoLayerNet` model, it makes sense to keep using nn modules for `nn.Linear`, so that our model will be able to manage all of its `Parameter` instances during training.
However, we can safely switch to the functional counterpart of `nn.ReLU`, since it has no parameters.
```
class TwoLayerNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
"""
In the constructor we instantiate two nn.Linear modules and assign them as
member variables.
"""
super(TwoLayerNet, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
"""
In the forward function we accept a Tensor of input data and we must return
a Tensor of output data. We can use Modules defined in the constructor as
well as arbitrary operators on Tensors.
"""
h_relu = torch.nn.functional.relu(self.linear1(x)) # torch.relu would do as well
y_pred = self.linear2(h_relu)
return y_pred
model = TwoLayerNet(D_in, H, D_out)
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
for t in range(500):
y_pred = model(x)
loss = criterion(y_pred, y)
if t % 50 == 0:
print(t, loss.item())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
$\rightarrow$ For the curious minds: [The difference and connection between torch.nn and torch.nn.function from relu's various implementations](https://programmer.group/5d5a404b257d7.html)
#### Clever advice and Rule of thumb
> With **quantization**, stateless bits like activations suddenly become stateful because information on the quantization needs to be captured. This means that if we aim to quantize our model, it might be worthwhile to stick with the modular API if we go for non-JITed quantization. There is one style matter that will help you avoid surprises with (originally unforeseen) uses: if you need several applications of stateless modules (like `nn.HardTanh` or `nn.ReLU`), it is likely a good idea to have a separate instance for each. Re-using the same module appears to be clever and will give correct results with our standard Python usage here, but tools analysing your model might trip over it.
<span class="fn"><i>Source:</i> [1] - _Deep Learning with PyTorch_ </span>
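Following that advice, the sketch below gives each use-site its own `nn.ReLU` instance (the model shape is illustrative); the two activations are then distinct graph nodes that tools such as quantization passes can treat independently.

```python
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(8, 8),
    torch.nn.ReLU(),   # one instance per use-site...
    torch.nn.Linear(8, 4),
    torch.nn.ReLU(),   # ...rather than re-using a single shared ReLU
)

relus = [m for m in model if isinstance(m, torch.nn.ReLU)]
print(relus[0] is relus[1])  # → False
```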
### Custom Graph flow: Example of Weight Sharing
As we already discussed, defining a custom `nn.Module` in PyTorch requires defining the layers (i.e. Parameters) in the constructor (`__init__`), and implementing the `forward` method, in which the dynamic graph is built by the calls to each of those layers/parameters.
As an example of **dynamic graphs**, we are going to implement a scenario that requires parameter (i.e. _weight_) sharing between layers.
To do so, we will implement a very odd model: a fully-connected ReLU network that on each `forward` call chooses a random number of hidden layers (between 0 and 3), reusing the same weights multiple times to compute the innermost hidden layers.
We achieve this _weight sharing_ among the innermost layers by simply reusing the same `Module` multiple times when defining the forward pass.
```
import torch
import random
class DynamicNet(torch.nn.Module):
def __init__(self, D_in, H, D_out):
"""
In the constructor we construct three nn.Linear instances that we will use
in the forward pass.
"""
super(DynamicNet, self).__init__()
self.input_linear = torch.nn.Linear(D_in, H)
self.middle_linear = torch.nn.Linear(H, H)
self.output_linear = torch.nn.Linear(H, D_out)
def forward(self, x):
"""
For the forward pass of the model, we randomly choose either 0, 1, 2, or 3
and reuse the middle_linear Module that many times to compute hidden layer
representations.
Since each forward pass builds a dynamic computation graph, we can use normal
Python control-flow operators like loops or conditional statements when
defining the forward pass of the model.
Here we also see that it is perfectly safe to reuse the same Module many
times when defining a computational graph. This is a big improvement from Lua
Torch, where each Module could be used only once.
"""
h_relu = torch.relu(self.input_linear(x))
hidden_layers = random.randint(0, 3)
for _ in range(hidden_layers):
h_relu = torch.relu(self.middle_linear(h_relu))
y_pred = self.output_linear(h_relu)
return y_pred
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Construct our model by instantiating the class defined above
model = DynamicNet(D_in, H, D_out)
# Construct our loss function and an Optimizer. Training this strange model with
# vanilla stochastic gradient descent is tough, so we use momentum
criterion = torch.nn.MSELoss(reduction='sum')
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
for t in range(500):
    for i in range(2):
        start, end = int((N/2)*i), int((N/2)*(i+1))
        # Slice a minibatch; do NOT overwrite x and y in place,
        # or the dataset would shrink on every iteration.
        xb = x[start:end, ...]
        yb = y[start:end, ...]
        # Forward pass: compute predicted y by passing the minibatch to the model
        y_pred = model(xb)
        # Compute and print loss
        loss = criterion(y_pred, yb)
if t % 50 == 0:
print(t, loss.item())
# Zero gradients, perform a backward pass, and update the weights.
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
### Latest from the `torch` ecosystem
* $\rightarrow$: [Migration from Chainer to PyTorch](https://medium.com/pytorch/migration-from-chainer-to-pytorch-8ed92c12c8)
* $\rightarrow$: [PyTorch Lightning](https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html)
- [fast.ai](https://docs.fast.ai/)
---
### References and Further Reading:
1. [Deep Learning with PyTorch, Luca Antiga et. al.](https://www.manning.com/books/deep-learning-with-pytorch)
2. [(**Terrific**) PyTorch Examples Repo](https://github.com/jcjohnson/pytorch-examples) (*where most of the examples in this notebook have been adapted from*)
## 13.2 KOSPI (Main Board) 12-Month Momentum
Find the dates for the 12-month momentum calculation, relative to the most recent investment period.
```
from pykrx import stock
import FinanceDataReader as fdr
df = fdr.DataReader(symbol='KS11', start="2019-11")
start = df.loc["2019-11"]
end = df.loc["2020-09"]
df.loc["2020-11"].head()
start
start_date = start.index[0]
end_date = end.index[-1]
print(start_date, end_date)
```
Compute the rate of change from the start date of the price-momentum window.
```
df1 = stock.get_market_ohlcv_by_ticker("20191101")
df2 = stock.get_market_ohlcv_by_ticker("20200929")
kospi = df1.join(df2, lsuffix="_l", rsuffix="_r")
kospi
```
Let's fetch the ticker codes of the top 20 stocks by 12-month return (price momentum, excluding the most recent month).
```
kospi['모멘텀'] = 100 * (kospi['종가_r'] - kospi['종가_l']) / kospi['종가_l']
kospi = kospi[['종가_l', '종가_r', '모멘텀']]
kospi.sort_values(by='모멘텀', ascending=False)[:20]
kospi_momentum20 = kospi.sort_values(by='모멘텀', ascending=False)[:20]
kospi_momentum20.rename(columns={"종가_l": "매수가", "종가_r": "매도가"}, inplace=True)
kospi_momentum20
df3 = stock.get_market_ohlcv_by_ticker("20201102")
df4 = stock.get_market_ohlcv_by_ticker("20210430")
pct_df = df3.join(df4, lsuffix="_l", rsuffix="_r")
pct_df
pct_df = pct_df[['종가_l', '종가_r']]
kospi_momentum20_result = kospi_momentum20.join(pct_df)
kospi_momentum20_result
kospi_momentum20_result['수익률'] = (kospi_momentum20_result['종가_r'] /
kospi_momentum20_result['종가_l'])
kospi_momentum20_result
수익률평균 = kospi_momentum20_result['수익률'].fillna(0).mean()
수익률평균
mom20_cagr = 수익률평균 ** (1/0.5) - 1  # annualize the 6-month mean return
mom20_cagr * 100
df_ref = fdr.DataReader(
symbol='KS11',
    start="2020-11-02",  # first trading day
end="2021-04-30"
)
df_ref
CAGR = ((df_ref['Close'].iloc[-1] / df_ref['Close'].iloc[0]) ** (1/0.5)) -1
CAGR * 100
```
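The annualization used above generalizes to `CAGR = total_return ** (1 / years) - 1`; a small helper (the function name is ours, not from the book) makes the exponent explicit:

```python
def cagr(total_return: float, years: float) -> float:
    # Convert a cumulative return multiplier over `years` years
    # into a compound annual growth rate.
    return total_return ** (1 / years) - 1

# A 10% gain over six months annualizes to 21%: 1.1 ** 2 - 1.
print(round(cagr(1.10, 0.5), 4))  # → 0.21
```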
## 13.3 Large-Cap 12-Month Momentum
A strategy that selects the 20 stocks with the highest relative momentum among large caps (the top 200 by market capitalization).
```
df1 = stock.get_market_ohlcv_by_ticker("20191101", market="ALL")
df2 = stock.get_market_ohlcv_by_ticker("20200929", market="ALL")
all = df1.join(df2, lsuffix="_l", rsuffix="_r")
all
# Basic filtering
# Exclude preferred shares (common-share tickers end in "0")
all2 = all.filter(regex="0$", axis=0).copy()
all2
all2['모멘텀'] = 100 * (all2['종가_r'] - all2['종가_l']) / all2['종가_l']
all2 = all2[['모멘텀']]
all2
cap = stock.get_market_cap_by_ticker(date="20200929", market="ALL")
cap = cap[['시가총액']]
cap
all3 = all2.join(other=cap)
all3
# Large-cap filter: top 200 by market cap
big = all3.sort_values(by='시가총액', ascending=False)[:200]
big
big.sort_values(by='모멘텀', ascending=False)
big_pct20 = big.sort_values(by='모멘텀', ascending=False)[:20]
big_pct20
df3 = stock.get_market_ohlcv_by_ticker("20201102", market="ALL")
df4 = stock.get_market_ohlcv_by_ticker("20211015", market="ALL")
pct_df = df3.join(df4, lsuffix="_l", rsuffix="_r")
pct_df['수익률'] = pct_df['종가_r'] / pct_df['종가_l']
pct_df = pct_df[['종가_l', '종가_r', '수익률']]
pct_df
big_mom_result = big_pct20.join(pct_df)
big_mom_result
평균수익률 = big_mom_result['수익률'].mean()
big_mom_cagr = 평균수익률 ** (1/1) - 1 # 1-year holding period
big_mom_cagr * 100
```
## 13.4 Long-Term Backtesting
```
import pandas as pd
import datetime
from dateutil.relativedelta import relativedelta
year = 2010
month = 11
period = 6
inv_start = f"{year}-{month}-01"
inv_start = datetime.datetime.strptime(inv_start, "%Y-%m-%d")
inv_end = inv_start + relativedelta(months=period-1)
mom_start = inv_start - relativedelta(months=12)
mom_end = inv_start - relativedelta(months=2)
print(mom_start.strftime("%Y-%m"), mom_end.strftime("%Y-%m"), "=>",
inv_start.strftime("%Y-%m"), inv_end.strftime("%Y-%m"))
df = fdr.DataReader(symbol='KS11')
df
def get_business_day(df, year, month, index=0):
str_month = f"{year}-{month}"
return df.loc[str_month].index[index]
df = fdr.DataReader(symbol='KS11')
get_business_day(df, 2010, 1, 0)
def momentum(df, year=2010, month=11, period=12):
    # investment start and end dates
str_day = f"{year}-{month}-01"
start = datetime.datetime.strptime(str_day, "%Y-%m-%d")
end = start + relativedelta(months=period-1)
    inv_start = get_business_day(df, start.year, start.month, 0) # first trading day
inv_end = get_business_day(df, end.year, end.month, -1)
inv_start = inv_start.strftime("%Y%m%d")
inv_end = inv_end.strftime("%Y%m%d")
#print(inv_start, inv_end)
    # momentum calculation start and end dates
    end = start - relativedelta(months=2) # exclude the most recent month (short-term reversal)
start = start - relativedelta(months=period)
    mom_start = get_business_day(df, start.year, start.month, 0) # first trading day
mom_end = get_business_day(df, end.year, end.month, -1)
mom_start = mom_start.strftime("%Y%m%d")
mom_end = mom_end.strftime("%Y%m%d")
print(mom_start, mom_end, " | ", inv_start, inv_end)
    # compute momentum
df1 = stock.get_market_ohlcv_by_ticker(mom_start)
df2 = stock.get_market_ohlcv_by_ticker(mom_end)
mon_df = df1.join(df2, lsuffix="l", rsuffix="r")
mon_df['등락률'] = (mon_df['종가r'] - mon_df['종가l'])/mon_df['종가l']*100
    # exclude preferred stocks
mon_df = mon_df.filter(regex="0$", axis=0)
mon20 = mon_df.sort_values(by="등락률", ascending=False)[:20]
mon20 = mon20[['등락률']]
#print(mon20)
    # returns over the investment period
df3 = stock.get_market_ohlcv_by_ticker(inv_start)
df4 = stock.get_market_ohlcv_by_ticker(inv_end)
inv_df = df3.join(df4, lsuffix="l", rsuffix="r")
    inv_df['수익률'] = inv_df['종가r'] / inv_df['종가l'] # return = sell price / buy price
inv_df = inv_df[['수익률']]
# join
result_df = mon20.join(inv_df)
result = result_df['수익률'].fillna(0).mean()
return year, result
import time
data = []
for year in range(2010, 2021):
ret = momentum(df, year, month=11, period=6)
data.append(ret)
time.sleep(1)
import pandas as pd
ret_df = pd.DataFrame(data=data, columns=['year', 'yield'])
ret_df.set_index('year', inplace=True)
ret_df
cum_yield = ret_df['yield'].cumprod()
cum_yield
CAGR = cum_yield.iloc[-1] ** (1/11) - 1
CAGR * 100
buy_price = df.loc["2010-11"].iloc[0, 0]
sell_price = df.loc["2021-04"].iloc[-1, 0]
kospi_yield = sell_price / buy_price
kospi_cagr = kospi_yield ** (1/11)-1
kospi_cagr * 100
```
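The final CAGR step compounds the yearly gross yields with a cumulative product and then annualizes over the number of periods. The arithmetic can be sanity-checked with toy yields:

```python
# Toy check of compounding gross yields and annualizing; numbers are made up.
yields = [1.10, 0.95, 1.20]  # three hypothetical yearly gross returns

cum = 1.0
for y in yields:
    cum *= y  # same as pandas' cumprod, keeping only the final value

cagr = cum ** (1.0 / len(yields)) - 1.0
print(round(cum, 4), round(cagr, 4))
```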
```
import numpy
import pandas as pd
import sqlite3
import os
from pandas.io import sql
from tables import *
import re
import pysam
import matplotlib
import matplotlib.image as mpimg
import seaborn
import matplotlib.pyplot
%matplotlib inline
def vectorizeSequence(seq):
# the order of the letters is not arbitrary:
# flipping the matrix up-down and left-right gives the reverse complement.
ltrdict = {'a':[1,0,0,0],'c':[0,1,0,0],'g':[0,0,1,0],'t':[0,0,0,1], 'n':[0,0,0,0]}
return numpy.array([ltrdict[x] for x in seq])
def Generate_training_and_test_datasets(Gem_events_file_path,ARF_label):
#Make Maize genome
from Bio import SeqIO
# Load chromosomes 1-10 of the maize genome into a dict keyed 'chr1'..'chr10'
wholegenome = {}
for record in SeqIO.parse(open('/mnt/Data_DapSeq_Maize/MaizeGenome.fa'),'fasta'):
if record.id in [str(i) for i in range(1, 11)]:
wholegenome['chr' + record.id] = str(record.seq)
rawdata = open(Gem_events_file_path)
GEM_events=rawdata.read()
GEM_events=re.split(',|\t|\n',GEM_events)
GEM_events=GEM_events[0:(len(GEM_events)-1)] # drop the trailing empty field so the reshape below works
GEM_events= numpy.reshape(GEM_events,(-1,10))
#Build Negative dataset
import random
Bound_Sequences = []
for i in range(0,len(GEM_events)):
Bound_Sequences.append(wholegenome[GEM_events[i][0]][int(GEM_events[i][1]):int(GEM_events[i][2])])
Un_Bound_Sequences = []
count=0
while count<len(Bound_Sequences):
chro = numpy.random.choice(['chr1','chr2','chr3','chr4','chr5','chr6','chr7','chr8','chr9','chr10'])
index = random.randint(1,len(wholegenome[chro]))
absent=True
for i in range(len(GEM_events)):
if chro == GEM_events[i][0]:
if index>int(GEM_events[i][1]) and index<int(GEM_events[i][2]):
absent = False
if absent:
seq = wholegenome[chro][index:(index+201)].upper()
# keep only unambiguous A/C/G/T sequences (reject IUPAC ambiguity codes and N)
if all(seq.count(c) == 0 for c in 'RWMSKYBDHVZN'):
Un_Bound_Sequences.append(wholegenome[chro][index:(index+201)])
count=count+1
response = [0]*(len(Un_Bound_Sequences))
temp3 = numpy.array(Un_Bound_Sequences)
temp2 = numpy.array(response)
neg = pd.DataFrame({'sequence':temp3,'response':temp2})
#Build Positive dataset labeled with signal value
Bound_Sequences = []
Responses=[]
for i in range(0,len(GEM_events)):
Bound_Sequences.append(wholegenome[GEM_events[i][0]][int(GEM_events[i][1]):int(GEM_events[i][2])])
Responses.append(float(GEM_events[i][6]))
d = {'sequence' : pd.Series(Bound_Sequences, index=range(len(Bound_Sequences))),
'response' : pd.Series(Responses, index=range(len(Bound_Sequences)))}
pos = pd.DataFrame(d)
#Put positive and negative datasets together
LearningData = neg.append(pos)
LearningData = LearningData.reindex()
#one hot encode sequence data
counter2=0
LearningData_seq_OneHotEncoded =numpy.empty([len(LearningData),201,4])
for counter1 in LearningData['sequence']:
LearningData_seq_OneHotEncoded[counter2]=vectorizeSequence(counter1.lower())
counter2=counter2+1
#Create training and test datasets
from sklearn.cross_validation import train_test_split
sequence_train, sequence_test, response_train, response_test = train_test_split(LearningData_seq_OneHotEncoded, LearningData['response'], test_size=0.2, random_state=42)
#Saving datasets
numpy.save('/mnt/Data_DapSeq_Maize/'+ARF_label+'_seq_train.npy',sequence_train)
numpy.save('/mnt/Data_DapSeq_Maize/'+ARF_label+'_res_train.npy',response_train)
numpy.save('/mnt/Data_DapSeq_Maize/'+ARF_label+'_seq_test.npy',sequence_test)
numpy.save('/mnt/Data_DapSeq_Maize/'+ARF_label+'_res_test.npy',response_test)
def Generate_training_and_test_datasets_no_negative(Gem_events_file_path,ARF_label):
#Make Maize genome
from Bio import SeqIO
# Load chromosomes 1-10 of the maize genome into a dict keyed 'chr1'..'chr10'
wholegenome = {}
for record in SeqIO.parse(open('/mnt/Data_DapSeq_Maize/MaizeGenome.fa'),'fasta'):
if record.id in [str(i) for i in range(1, 11)]:
wholegenome['chr' + record.id] = str(record.seq)
rawdata = open(Gem_events_file_path)
GEM_events=rawdata.read()
GEM_events=re.split(',|\t|\n',GEM_events)
GEM_events=GEM_events[0:(len(GEM_events)-1)] # drop the trailing empty field so the reshape below works
GEM_events= numpy.reshape(GEM_events,(-1,10))
#Build Positive dataset labeled with signal value
Bound_Sequences = []
Responses=[]
for i in range(0,len(GEM_events)):
Bound_Sequences.append(wholegenome[GEM_events[i][0]][int(GEM_events[i][1]):int(GEM_events[i][2])])
Responses.append(float(GEM_events[i][6]))
d = {'sequence' : pd.Series(Bound_Sequences, index=range(len(Bound_Sequences))),
'response' : pd.Series(Responses, index=range(len(Bound_Sequences)))}
pos = pd.DataFrame(d)
LearningData = pos
#one hot encode sequence data
counter2=0
LearningData_seq_OneHotEncoded =numpy.empty([len(LearningData),201,4])
for counter1 in LearningData['sequence']:
LearningData_seq_OneHotEncoded[counter2]=vectorizeSequence(counter1.lower())
counter2=counter2+1
#Create training and test datasets
from sklearn.cross_validation import train_test_split
sequence_train, sequence_test, response_train, response_test = train_test_split(LearningData_seq_OneHotEncoded, LearningData['response'], test_size=0.2, random_state=42)
#Saving datasets
numpy.save('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_seq_train.npy',sequence_train)
numpy.save('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_res_train.npy',response_train)
numpy.save('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_seq_test.npy',sequence_test)
numpy.save('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_res_test.npy',response_test)
def Train_and_save_DanQ_model(ARF_label,number_backpropagation_cycles):
#Loading the data
sequence_train=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'_seq_train.npy')
response_train=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'_res_train.npy')
sequence_test=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'_seq_test.npy')
response_test=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'_res_test.npy')
#Setting up the model
import keras
import numpy as np
from keras import backend
backend._BACKEND="theano"
#DanQ model
from keras.preprocessing import sequence
from keras.optimizers import RMSprop
from keras.models import Sequential
from keras.layers.core import Dense
from keras.layers.core import Merge
from keras.layers.core import Dropout
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.convolutional import Convolution1D, MaxPooling1D
from keras.regularizers import l2, activity_l1
from keras.constraints import maxnorm
from keras.layers.recurrent import LSTM, GRU
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.layers import Bidirectional
model = Sequential()
model.add(Convolution1D(nb_filter=20,filter_length=26,input_dim=4,input_length=201,border_mode="valid"))
model.add(Activation('relu'))
model.add(MaxPooling1D(pool_length=6, stride=6))
model.add(Dropout(0.2))
model.add(Bidirectional(LSTM(5)))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('relu'))
model.add(Dense(1))
#compile the model
model.compile(loss='mean_squared_error', optimizer='rmsprop')
model.fit(sequence_train, response_train, validation_split=0.2,batch_size=100, nb_epoch=number_backpropagation_cycles, verbose=1)
#evaluating the correlation between the model and the test data
import scipy
correlation = scipy.stats.pearsonr(response_test,model.predict(sequence_test).flatten())
correlation_2 = (correlation[0]**2)*100
print('Percent of variability explained by model: '+str(correlation_2))
# saving the model
model.save('/mnt/Data_DapSeq_Maize/TrainedModel_DanQ_' +ARF_label+'.h5')
def Train_and_save_DanQ_model_no_negative(ARF_label,number_backpropagation_cycles,train_size):
#Loading the data
sequence_train=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_seq_train.npy')
response_train=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_res_train.npy')
sequence_train=sequence_train[0:train_size]
response_train=response_train[0:train_size]
sequence_test=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_seq_test.npy')
response_test=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_res_test.npy')
#Setting up the model
import keras
import numpy as np
from keras import backend
backend._BACKEND="theano"
#DanQ model
from keras.preprocessing import sequence
from keras.optimizers import RMSprop
from keras.models import Sequential
from keras.layers.core import Dense
from keras.layers.core import Merge
from keras.layers.core import Dropout
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.convolutional import Convolution1D, MaxPooling1D
from keras.regularizers import l2, activity_l1
from keras.constraints import maxnorm
from keras.layers.recurrent import LSTM, GRU
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras.layers import Bidirectional
model = Sequential()
model.add(Convolution1D(nb_filter=20,filter_length=26,input_dim=4,input_length=201,border_mode="valid"))
model.add(Activation('relu'))
model.add(MaxPooling1D(pool_length=6, stride=6))
model.add(Dropout(0.2))
model.add(Bidirectional(LSTM(5)))
model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation('relu'))
model.add(Dense(1))
#compile the model
model.compile(loss='mean_squared_error', optimizer='rmsprop')
model.fit(sequence_train, response_train, validation_split=0.2,batch_size=100, nb_epoch=number_backpropagation_cycles, verbose=1)
#evaluating the correlation between the model and the test data
import scipy
correlation = scipy.stats.pearsonr(response_test,model.predict(sequence_test).flatten())
correlation_2 = (correlation[0]**2)*100
print('Percent of variability explained by model: '+str(correlation_2))
# saving the model
model.save('/mnt/Data_DapSeq_Maize/TrainedModel_DanQ_no_negative_' +ARF_label+'.h5')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF27_smaller_GEM_events.txt','ARF27_smaller')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF34_smaller_GEM_events.txt','ARF34_smaller')
Train_and_save_DanQ_model('ARF27_smaller',35)
Train_and_save_DanQ_model('ARF34_smaller',35)
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF16_GEM_events.txt','ARF16')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF4_GEM_events.txt','ARF4')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF4_rep2_GEM_events.txt','ARF4_rep2')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF4_rep3_GEM_events.txt','ARF4_rep3')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF10_GEM_events.txt','ARF10')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF13_GEM_events.txt','ARF13')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF18_GEM_events.txt','ARF18')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF27_GEM_events.txt','ARF27')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF29_GEM_events.txt','ARF29')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF34_GEM_events.txt','ARF34')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF35_GEM_events.txt','ARF35')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF39_GEM_events.txt','ARF39')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF10_rep1_ear_GEM_events.txt','ARF10_rep1_ear')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF10_rep2_ear_GEM_events.txt','ARF10_rep2_ear')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF10_rep1_tassel_GEM_events.txt','ARF10_rep1_tassel')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF10_rep2_tassel_GEM_events.txt','ARF10_rep2_tassel')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF7_GEM_events.txt','ARF7')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF14_GEM_events.txt','ARF14')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF24_GEM_events.txt','ARF24')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF25_GEM_events.txt','ARF25')
Generate_training_and_test_datasets('/mnt/Data_DapSeq_Maize/ARF36_GEM_events.txt','ARF36')
Train_and_save_DanQ_model('ARF7',35)
Train_and_save_DanQ_model('ARF14',35)
Train_and_save_DanQ_model('ARF24',35)
Train_and_save_DanQ_model('ARF25',35)
Train_and_save_DanQ_model('ARF36',35)
Train_and_save_DanQ_model('ARF10_rep1_ear',35)
Train_and_save_DanQ_model('ARF10_rep2_ear',35)
Train_and_save_DanQ_model('ARF10_rep1_tassel',35)
Train_and_save_DanQ_model('ARF10_rep2_tassel',35)
Train_and_save_DanQ_model('ARF4',35)
Train_and_save_DanQ_model('ARF4_rep2',35)
Train_and_save_DanQ_model('ARF4_rep3',35)
Train_and_save_DanQ_model('ARF10',35)
Train_and_save_DanQ_model('ARF13',35)
Train_and_save_DanQ_model('ARF16',35)
Train_and_save_DanQ_model('ARF18',35)
Train_and_save_DanQ_model('ARF27',35)
Train_and_save_DanQ_model('ARF29',35)
Train_and_save_DanQ_model('ARF34',35)
Train_and_save_DanQ_model('ARF35',35)
Train_and_save_DanQ_model('ARF39',35)
```
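The comment in `vectorizeSequence` about flipping for the reverse complement can be verified directly: with columns ordered a, c, g, t, flipping the one-hot matrix both up-down and left-right equals encoding the reverse-complemented sequence. A self-contained sketch (re-declaring the encoder locally):

```python
import numpy as np

# Re-declared here so the sketch is self-contained.
ltrdict = {'a': [1,0,0,0], 'c': [0,1,0,0], 'g': [0,0,1,0], 't': [0,0,0,1], 'n': [0,0,0,0]}

def vectorize(seq):
    return np.array([ltrdict[x] for x in seq])

comp = {'a': 't', 'c': 'g', 'g': 'c', 't': 'a'}
seq = 'acgta'
revcomp = ''.join(comp[b] for b in reversed(seq))  # 'tacgt'

# Up-down flip reverses the sequence axis; left-right flip swaps a<->t and c<->g.
flipped = np.flipud(np.fliplr(vectorize(seq)))
assert (flipped == vectorize(revcomp)).all()
```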
# Creating dataset without a negative set
```
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF4_GEM_events.txt','ARF4')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF39_GEM_events.txt','ARF39')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF35_GEM_events.txt','ARF35')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF34_GEM_events.txt','ARF34')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF10_GEM_events.txt','ARF10')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF13_GEM_events.txt','ARF13')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF16_GEM_events.txt','ARF16')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF18_GEM_events.txt','ARF18')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF27_GEM_events.txt','ARF27')
Generate_training_and_test_datasets_no_negative('/mnt/Data_DapSeq_Maize/ARF29_GEM_events.txt','ARF29')
#finding the min length of the test set
List_of_ARFs =['ARF4','ARF10','ARF13','ARF16','ARF18','ARF27','ARF29','ARF34','ARF35','ARF39']
seq_test_sets = [None]*len(List_of_ARFs)
counter1=0
for ARF_label in List_of_ARFs:
seq_test_sets[counter1]=numpy.load('/mnt/Data_DapSeq_Maize/'+ARF_label+'no_negative_seq_test.npy')
print(len(seq_test_sets[counter1]))
counter1=counter1+1
#based on this, the smallest of these sets has 5960 examples, so each run is truncated to 5960
Train_and_save_DanQ_model_no_negative('ARF4',35,5960)
Train_and_save_DanQ_model_no_negative('ARF39',35,5960)
Train_and_save_DanQ_model_no_negative('ARF35',35,5960)
Train_and_save_DanQ_model_no_negative('ARF34',35,5960)
Train_and_save_DanQ_model_no_negative('ARF10',35,5960)
Train_and_save_DanQ_model_no_negative('ARF13',35,5960)
Train_and_save_DanQ_model_no_negative('ARF16',35,5960)
Train_and_save_DanQ_model_no_negative('ARF18',35,5960)
Train_and_save_DanQ_model_no_negative('ARF27',35,5960)
Train_and_save_DanQ_model_no_negative('ARF29',35,5960)
```
The following latitude and longitude formats are supported by the `output_format` parameter:
* Decimal degrees (dd): 41.5
* Decimal degrees hemisphere (ddh): "41.5° N"
* Degrees minutes (dm): "41° 30′ N"
* Degrees minutes seconds (dms): "41° 30′ 0″ N"
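The arithmetic behind the dms form can be sketched in a few lines. This is only an illustration of the conversion for a latitude value, not dataprep's own implementation, and the exact output formatting may differ from the library's.

```python
# Convert a decimal-degrees latitude to a degrees-minutes-seconds string.
# Sketch of the arithmetic only; dataprep's formatting may differ.
def dd_to_dms_lat(dd: float) -> str:
    hemisphere = 'N' if dd >= 0 else 'S'
    dd = abs(dd)
    degrees = int(dd)
    minutes = int((dd - degrees) * 60)
    seconds = round((dd - degrees - minutes / 60) * 3600, 4)
    return f"{degrees}\u00b0 {minutes}\u2032 {seconds:g}\u2033 {hemisphere}"

print(dd_to_dms_lat(41.5))  # 41° 30′ 0″ N
```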
You can split a column of geographic coordinates into one column for latitude and another for longitude by setting the `split` parameter to True.
Invalid parsing is handled with the `errors` parameter:
* "coerce" (default): invalid parsing will be set to NaN
* "ignore": invalid parsing will return the input
* "raise": invalid parsing will raise an exception
After cleaning, a **report** is printed that provides the following information:
* How many values were cleaned (the value must have been transformed).
* How many values could not be parsed.
* A summary of the cleaned data: how many values are in the correct format, and how many values are NaN.
The following sections demonstrate the functionality of `clean_lat_long()` and `validate_lat_long()`.
### An example dataset with geographic coordinates
```
import pandas as pd
import numpy as np
df = pd.DataFrame({
"lat_long":
[(41.5, -81.0), "41.5;-81.0", "41.5,-81.0", "41.5 -81.0",
"41.5° N, 81.0° W", "41.5 S;81.0 E", "-41.5 S;81.0 E",
"23 26m 22s N 23 27m 30s E", "23 26' 22\" N 23 27' 30\" E",
"UT: N 39°20' 0'' / W 74°35' 0''", "hello", np.nan, "NULL"]
})
df
```
## 1. Default `clean_lat_long()`
By default, the `output_format` parameter is set to "dd" (decimal degrees) and the `errors` parameter is set to "coerce" (set to NaN when parsing is invalid).
```
from dataprep.clean import clean_lat_long
clean_lat_long(df, "lat_long")
```
Note that (41.5, -81.0) is considered not cleaned in the report since its resulting format is the same as the input. Also, "-41.5 S;81.0 E" is invalid because a coordinate with a hemisphere designation cannot contain a negative decimal value.
## 2. Output formats
This section demonstrates the supported latitudinal and longitudinal formats.
### decimal degrees hemisphere (ddh)
```
clean_lat_long(df, "lat_long", output_format="ddh")
```
### degrees minutes (dm)
```
clean_lat_long(df, "lat_long", output_format="dm")
```
### degrees minutes seconds (dms)
```
clean_lat_long(df, "lat_long", output_format="dms")
```
## 3. `split` parameter
The split parameter adds individual columns containing the cleaned latitude and longitude values to the given DataFrame.
```
clean_lat_long(df, "lat_long", split=True)
```
Split can be used along with different output formats.
```
clean_lat_long(df, "lat_long", split=True, output_format="dm")
```
## 4. `inplace` parameter
Setting `inplace` to True deletes the given column from the returned DataFrame.
A new column containing the cleaned coordinates is added with a title in the format `"{original title}_clean"`.
```
clean_lat_long(df, "lat_long", inplace=True)
```
### `inplace` and `split`
```
clean_lat_long(df, "lat_long", split=True, inplace=True)
```
## 5. Latitude and longitude coordinates in separate columns
### Clean latitude or longitude coordinates individually
```
df = pd.DataFrame({"lat": [" 30′ 0″ E", "41° 30′ N", "41 S", "80", "hello", "NA"]})
clean_lat_long(df, lat_col="lat")
```
### Combine and clean separate columns
Latitude and longitude values are counted separately in the report.
```
df = pd.DataFrame({"lat": ["30° E", "41° 30′ N", "41 S", "80", "hello", "NA"],
"long": ["30° E", "41° 30′ N", "41 W", "80", "hello", "NA"]})
clean_lat_long(df, lat_col="lat", long_col="long")
```
### Clean separate columns and split the output
```
clean_lat_long(df, lat_col="lat", long_col="long", split=True)
```
## 6. `validate_lat_long()`
`validate_lat_long()` returns True when the input is a valid latitude or longitude value; otherwise, it returns False.
Valid types are the same as `clean_lat_long()`.
```
from dataprep.clean import validate_lat_long
print(validate_lat_long("41° 30′ 0″ N"))
print(validate_lat_long("41.5 S;81.0 E"))
print(validate_lat_long("-41.5 S;81.0 E"))
print(validate_lat_long((41.5, 81)))
print(validate_lat_long(41.5, lat_long=False, lat=True))
df = pd.DataFrame({"lat_long":
[(41.5, -81.0), "41.5;-81.0", "41.5,-81.0", "41.5 -81.0",
"41.5° N, 81.0° W", "-41.5 S;81.0 E",
"23 26m 22s N 23 27m 30s E", "23 26' 22\" N 23 27' 30\" E",
"UT: N 39°20' 0'' / W 74°35' 0''", "hello", np.nan, "NULL"]
})
validate_lat_long(df["lat_long"])
```
### Validate only one coordinate
```
df = pd.DataFrame({"lat":
[41.5, "41.5", "41.5 ",
"41.5° N", "-41.5 S",
"23 26m 22s N", "23 26' 22\" N",
"UT: N 39°20' 0''", "hello", np.nan, "NULL"]
})
validate_lat_long(df["lat"], lat_long=False, lat=True)
```
```
from oda_api.api import DispatcherAPI
from oda_api.plot_tools import OdaImage,OdaLightCurve
from oda_api.data_products import BinaryData
import os
from astropy.io import fits
import numpy as np
from numpy import sqrt
import matplotlib.pyplot as plt
%matplotlib inline
source_name='3C 279'
ra=194.046527
dec=-5.789314
radius=10.
Tstart='2003-03-15T00:00:00'
Tstop='2018-03-15T00:00:00'
E1_keV=30.
E2_keV=100.
host='www.astro.unige.ch/cdci/astrooda/dispatch-data'
rebin=10 # minimal significance in energy bin, for spectral plotting
#try: input = raw_input
#except NameError: pass
#token=input() # token for restricted access server
#cookies=dict(_oauth2_proxy=token)
#disp=DispatcherAPI(host=host)
disp=DispatcherAPI(host=host)
import requests
url="https://www.astro.unige.ch/cdci/astrooda/dispatch-data/gw/timesystem/api/v1.0/scwlist/cons/"
def queryxtime(**args):
params=Tstart+'/'+Tstop+'?&ra='+str(ra)+'&dec='+str(dec)+'&radius='+str(radius)+'&min_good_isgri=1000'
print(url+params)
return requests.get(url+params).json()
scwlist=queryxtime()
m=len(scwlist)
pointings_osa10=[]
pointings_osa11=[]
for i in range(m):
if scwlist[i][-2:]=='10':
if(int(scwlist[i][:4])<1626):
pointings_osa10.append(scwlist[i]+'.001')
else:
pointings_osa11.append(scwlist[i]+'.001')
#else:
# pointings=np.genfromtxt('scws_3C279_isgri_10deg.txt', dtype='str')
m_osa10=len(pointings_osa10)
m_osa11=len(pointings_osa11)
def chunk_swc_list(lst, size):
_l = [lst[x:x+size] for x in range(0, len(lst), size)]
for ID,_ in enumerate(_l):
_l[ID]=','.join(_)
return _l
scw_lists_osa10=chunk_swc_list(pointings_osa10, 50)
scw_lists_osa11=chunk_swc_list(pointings_osa11, 50)
data=disp.get_product(instrument='isgri',
product='isgri_image',
scw_list=scw_lists_osa10[0],
E1_keV=E1_keV,
E2_keV=E2_keV,
osa_version='OSA10.2',
RA=ra,
DEC=dec,
detection_threshold=3.5,
product_type='Real')
data.dispatcher_catalog_1.table
FLAG=0
torm=[]
for ID,n in enumerate(data.dispatcher_catalog_1.table['src_names']):
if(n[0:3]=='NEW'):
torm.append(ID)
if(n==source_name):
FLAG=1
data.dispatcher_catalog_1.table.remove_rows(torm)
nrows=len(data.dispatcher_catalog_1.table['src_names'])
if FLAG==0:
data.dispatcher_catalog_1.table.add_row((0,'3C 279',0,ra,dec,0,2,0,0))
api_cat=data.dispatcher_catalog_1.get_api_dictionary()
spectrum_results=[]
for i in range(len(scw_lists_osa10)):
print(i)
data=disp.get_product(instrument='isgri',
product='isgri_spectrum',
scw_list=scw_lists_osa10[i],
query_type='Real',
osa_version='OSA10.2',
RA=ra,
DEC=dec,
product_type='Real',
selected_catalog=api_cat)
spectrum_results.append(data)
d=spectrum_results[0]
for ID,s in enumerate(d._p_list):
if (s.meta_data['src_name']==source_name):
if(s.meta_data['product']=='isgri_spectrum'):
ID_spec=ID
if(s.meta_data['product']=='isgri_arf'):
ID_arf=ID
if(s.meta_data['product']=='isgri_rmf'):
ID_rmf=ID
print(ID_spec, ID_arf, ID_rmf)
d=spectrum_results[0]
spec=d._p_list[ID_spec].data_unit[1].data
arf=d._p_list[ID_arf].data_unit[1].data
rmf=d._p_list[ID_rmf].data_unit[2].data
ch=spec['CHANNEL']
rate=spec['RATE']*0.
err=spec['STAT_ERR']*0.
syst=spec['SYS_ERR']*0.
rate.fill(0)
err.fill(0)
syst.fill(0)
qual=spec['QUALITY']
matrix=rmf['MATRIX']*0.
specresp=arf['SPECRESP']*0.
tot_expos=0.
corr_expos=np.zeros(len(rate))
print(len(rate))
for k in range(len(scw_lists_osa10)):
d=spectrum_results[k]
spec=d._p_list[ID_spec].data_unit[1].data
arf=d._p_list[ID_arf].data_unit[1].data
rmf=d._p_list[ID_rmf].data_unit[2].data
expos=d._p_list[0].data_unit[1].header['EXPOSURE']
tot_expos=tot_expos+expos
print(k,expos)
for j in range(len(rate)):
if(spec['QUALITY'][j]==0):
rate[j]=rate[j]+spec['RATE'][j]/(spec['STAT_ERR'][j])**2
err[j]=err[j]+1./(spec['STAT_ERR'][j])**2
syst[j]=syst[j]+(spec['SYS_ERR'][j])**2*expos
corr_expos[j]=corr_expos[j]+expos
matrix=matrix+rmf['MATRIX']*expos
specresp=specresp+arf['SPECRESP']*expos
for i in range(len(rate)):
if err[i]>0.:
rate[i]=rate[i]/err[i]
err[i]=1./sqrt(err[i])
matrix=matrix/tot_expos
specresp=specresp/tot_expos
syst=sqrt(syst/(corr_expos+1.))
print('Total exposure:',tot_expos)
print(rate)
print(err)
d._p_list[ID_spec].data_unit[1].data['RATE']=rate
d._p_list[ID_spec].data_unit[1].data['STAT_ERR']=err
d._p_list[ID_rmf].data_unit[2].data['MATRIX']=matrix
d._p_list[ID_arf].data_unit[1].data['SPECRESP']=specresp
name=source_name.replace(" ", "")
specname=name+'_spectrum_osa10.fits'
arfname=name+'_arf_osa10.fits.gz'
rmfname=name+'_rmf_osa10.fits.gz'
data._p_list[ID_spec].write_fits_file(specname)
data._p_list[ID_arf].write_fits_file(arfname)
data._p_list[ID_rmf].write_fits_file(rmfname)
hdul = fits.open(specname, mode='update')
hdr=hdul[1].header
hdr.set('EXPOSURE', tot_expos)
hdul.close()
!./spectrum_fit_osa10.sh $name $rebin
spectrum_results1=[]
for i in range(len(scw_lists_osa11)):
print(i)
data=disp.get_product(instrument='isgri',
product='isgri_spectrum',
scw_list=scw_lists_osa11[i],
query_type='Real',
osa_version='OSA11.0',
RA=ra,
DEC=dec,
product_type='Real',
selected_catalog=api_cat)
spectrum_results1.append(data)
d=spectrum_results1[0]
for ID,s in enumerate(d._p_list):
if (s.meta_data['src_name']==source_name):
if(s.meta_data['product']=='isgri_spectrum'):
ID_spec=ID
if(s.meta_data['product']=='isgri_arf'):
ID_arf=ID
if(s.meta_data['product']=='isgri_rmf'):
ID_rmf=ID
print(ID_spec, ID_arf, ID_rmf)
d=spectrum_results1[0]
spec=d._p_list[ID_spec].data_unit[1].data
arf=d._p_list[ID_arf].data_unit[1].data
rmf=d._p_list[ID_rmf].data_unit[2].data
ch=spec['CHANNEL']
rate=spec['RATE']*0.
err=spec['STAT_ERR']*0.
syst=spec['SYS_ERR']*0.
rate.fill(0)
err.fill(0)
syst.fill(0)
qual=spec['QUALITY']
matrix=rmf['MATRIX']*0.
specresp=arf['SPECRESP']*0.
tot_expos=0.
corr_expos=np.zeros(len(rate))
print(len(rate))
for k in range(len(scw_lists_osa11)):
d=spectrum_results1[k]
spec=d._p_list[ID_spec].data_unit[1].data
arf=d._p_list[ID_arf].data_unit[1].data
rmf=d._p_list[ID_rmf].data_unit[2].data
expos=d._p_list[0].data_unit[1].header['EXPOSURE']
tot_expos=tot_expos+expos
print(k,expos)
for j in range(len(rate)):
if(spec['QUALITY'][j]==0):
rate[j]=rate[j]+spec['RATE'][j]/(spec['STAT_ERR'][j])**2
err[j]=err[j]+1./(spec['STAT_ERR'][j])**2
syst[j]=syst[j]+(spec['SYS_ERR'][j])**2*expos
corr_expos[j]=corr_expos[j]+expos
matrix=matrix+rmf['MATRIX']*expos
specresp=specresp+arf['SPECRESP']*expos
for i in range(len(rate)):
if err[i]>0.:
rate[i]=rate[i]/err[i]
err[i]=1./sqrt(err[i])
matrix=matrix/tot_expos
specresp=specresp/tot_expos
syst=sqrt(syst/(corr_expos+1.))
print('Total exposure:',tot_expos)
d._p_list[ID_spec].data_unit[1].data['RATE']=rate
d._p_list[ID_spec].data_unit[1].data['STAT_ERR']=err
d._p_list[ID_rmf].data_unit[2].data['MATRIX']=matrix
d._p_list[ID_arf].data_unit[1].data['SPECRESP']=specresp
name=source_name.replace(" ", "")
specname=name+'_spectrum_osa11.fits'
arfname=name+'_arf_osa11.fits.gz'
rmfname=name+'_rmf_osa11.fits.gz'
data._p_list[ID_spec].write_fits_file(specname)
data._p_list[ID_arf].write_fits_file(arfname)
data._p_list[ID_rmf].write_fits_file(rmfname)
hdul = fits.open(specname, mode='update')
hdr=hdul[1].header
hdr.set('EXPOSURE', tot_expos)
hdul.close()
!./spectrum_fit_osa11.sh $name $rebin
data=disp.get_product(instrument='isgri',
product='isgri_spectrum',
T1='2015-06-15T15:56:45',
T2='2015-06-16T06:13:10',
query_type='Real',
osa_version='OSA10.2',
RA=ra,
DEC=dec,
#detection_threshold=5.0,
radius=15.,
product_type='Real',
selected_catalog=api_cat)
data._p_list[0].write_fits_file(name+'_flare_spectrum_osa10.fits')
data._p_list[1].write_fits_file(name+'_flare_arf_osa10.fits.gz')
data._p_list[2].write_fits_file(name+'_flare_rmf_osa10.fits.gz')
name1=name+'_flare'
!./spectrum_fit_osa10.sh $name1 5
golden_ratio=0.5*(1.+np.sqrt(5))
width=4.
plt.figure(figsize=(golden_ratio*width,width))
alpha=1
linewidth=4
fontsize=11
spectrum=np.genfromtxt(name+'_spectrum_osa10.txt',skip_header=3)
en=spectrum[:,0]
en_err=spectrum[:,1]
fl=spectrum[:,2]
fl_err=spectrum[:,3]
mo=spectrum[:,4]
plt.errorbar(en,fl,xerr=en_err,yerr=fl_err,linestyle='none',linewidth=linewidth,color='black',alpha=alpha,
label='Average')
plt.plot(en,mo,color='black',linewidth=linewidth/2)
spectrum=np.genfromtxt(name+'_flare_spectrum_osa10.txt',skip_header=3)
en=spectrum[:,0]
en_err=spectrum[:,1]
fl=spectrum[:,2]
fl_err=spectrum[:,3]
mo=spectrum[:,4]
uplims=fl<0
fl[uplims]=3*fl_err[uplims]
fl_err[uplims] = fl[uplims]/3.
plt.errorbar(en,fl,xerr=en_err,yerr=fl_err,linestyle='none',linewidth=linewidth,color='blue',alpha=alpha,
label='Flare', uplims=uplims)
plt.plot(en,mo,color='blue',linewidth=linewidth/2,alpha=alpha)
plt.tick_params(axis='both', which='major', labelsize=fontsize)
plt.tick_params(axis='both', which='minor', labelsize=fontsize)
plt.tick_params(axis='x', which='minor', length=4)
plt.xscale('log')
plt.yscale('log')
plt.ylim(2.e-3,0.2)
plt.xlim(18,110)
plt.xlabel('$E$ [keV]',fontsize=fontsize)
plt.ylabel(r'$E^2F_E$ [keV$\,$cm$^{-2}\,\mathrm{s}^{-1}$]',fontsize=fontsize)
plt.title('3C 273')
plt.legend()
from matplotlib.ticker import ScalarFormatter
my_formatter=ScalarFormatter()
my_formatter.set_scientific(False)
ax=plt.gca()
ax.get_xaxis().set_major_formatter(my_formatter)
ax.get_xaxis().set_minor_formatter(my_formatter)
plt.savefig(name+'_spectra.pdf',format='pdf',dpi=100)
dir(my_formatter)
spectrum_3C279=name+'_spectra.pdf'
```
<h2> 25ppm - somehow more features detected than at 4ppm... I guess because features are more likely to pass the minimum number of scans needed to define a feature </h2>
Enough retcor groups, but loads of peak insertion problems (thousands). Does that mean the data isn't centroided...?
```
import time
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.cross_validation import StratifiedShuffleSplit
from sklearn.cross_validation import cross_val_score
#from sklearn.model_selection import StratifiedShuffleSplit
#from sklearn.model_selection import cross_val_score
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_curve, auc
from sklearn.utils import shuffle
from scipy import interp
%matplotlib inline
def remove_zero_columns(X, threshold=1e-20):
# convert zeros to nan, drop all-nan columns, then replace leftover nan with zeros
X_non_zero_colum = X.replace(0, np.nan).dropna(how='all', axis=1).replace(np.nan, 0)
#.dropna(how='all', axis=0).replace(np.nan,0)
return X_non_zero_colum
def zero_fill_half_min(X, threshold=1e-20):
# Fill zeros with 1/2 the minimum value of that column
# input dataframe. Add only to zero values
# Get a vector of 1/2 minimum values
half_min = X[X > threshold].min(axis=0)*0.5
# Add the half_min values to a dataframe where everything that isn't zero is NaN.
# then convert NaN's to 0
fill_vals = (X[X < threshold] + half_min).fillna(value=0)
# Add the original dataframe to the dataframe of zeros and fill-values
X_zeros_filled = X + fill_vals
return X_zeros_filled
toy = pd.DataFrame([[1,2,3,0],
[0,0,0,0],
[0.5,1,0,0]], dtype=float)
toy_no_zeros = remove_zero_columns(toy)
toy_filled_zeros = zero_fill_half_min(toy_no_zeros)
print toy
print toy_no_zeros
print toy_filled_zeros
```
<h2> Import the dataframe and remove any features that are all zero </h2>
```
### Subdivide the data into a feature table
data_path = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/processed/MTBLS315/'\
'uhplc_pos/xcms_result_25.csv'
## Import the data and remove extraneous columns
df = pd.read_csv(data_path, index_col=0)
df.shape
df.head()
# Make a new index of mz:rt
mz = df.loc[:,"mz"].astype('str')
rt = df.loc[:,"rt"].astype('str')
idx = mz+':'+rt
df.index = idx
df
# separate samples from xcms/camera things to make feature table
not_samples = ['mz', 'mzmin', 'mzmax', 'rt', 'rtmin', 'rtmax',
'npeaks', 'uhplc_pos',
]
samples_list = df.columns.difference(not_samples)
mz_rt_df = df[not_samples]
# convert to samples x features
X_df_raw = df[samples_list].T
# Remove zero-full columns and fill zeroes with 1/2 minimum values
X_df = remove_zero_columns(X_df_raw)
X_df_zero_filled = zero_fill_half_min(X_df)
print "original shape: %s \n# zeros: %f\n" % (X_df_raw.shape, (X_df_raw < 1e-20).sum().sum())
print "zero-columns replaced? shape: %s \n# zeros: %f\n" % (X_df.shape,
(X_df < 1e-20).sum().sum())
print "zeros filled shape: %s \n#zeros: %f\n" % (X_df_zero_filled.shape,
(X_df_zero_filled < 1e-20).sum().sum())
# Convert to numpy matrix to play nicely with sklearn
X = X_df.as_matrix()
print X.shape
```
<h2> Get mappings between sample names, file names, and sample classes </h2>
```
# Get mapping between sample name and assay names
path_sample_name_map = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/raw/'\
'MTBLS315/metadata/a_UPLC_POS_nmfi_and_bsi_diagnosis.txt'
# Index is the sample name
sample_df = pd.read_csv(path_sample_name_map,
sep='\t', index_col=0)
sample_df = sample_df['MS Assay Name']
sample_df.shape
print sample_df.head(10)
# get mapping between sample name and sample class
path_sample_class_map = '/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revo_healthcare/data/raw/'\
'MTBLS315/metadata/s_NMFI and BSI diagnosis.txt'
class_df = pd.read_csv(path_sample_class_map,
sep='\t')
# Set index as sample name
class_df.set_index('Sample Name', inplace=True)
class_df = class_df['Factor Value[patient group]']
print class_df.head(10)
# convert all non-malarial classes into a single classes
# (collapse non-malarial febrile illness and bacteremia together)
class_map_df = pd.concat([sample_df, class_df], axis=1)
class_map_df.rename(columns={'Factor Value[patient group]': 'class'}, inplace=True)
class_map_df
binary_class_map = class_map_df.replace(to_replace=['non-malarial febrile illness', 'bacterial bloodstream infection' ],
value='non-malarial fever')
binary_class_map
# convert classes to numbers
le = preprocessing.LabelEncoder()
le.fit(binary_class_map['class'])
y = le.transform(binary_class_map['class'])
```
<h2> Plot the distribution of classification accuracy across multiple cross-validation splits - Kinda Dumb</h2>
Turns out doing this is kind of dumb, because you're not taking into account the prediction score your classifier assigned. Use AUCs instead: a classifier should be penalized more when it is confident and wrong than when it is unsure and wrong.
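To make that concrete, here is a small sketch (not part of the original analysis) showing that accuracy ignores the predicted scores, while `roc_auc_score` rewards the classifier whose mistake comes with low confidence:

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.array([0, 0, 1, 1])
# Same hard predictions, different confidence on the one mistaken negative
confident_wrong = np.array([0.1, 0.95, 0.9, 0.8])  # very sure about the mistake
barely_wrong = np.array([0.1, 0.55, 0.9, 0.8])     # unsure about the same mistake

hard_labels = (confident_wrong > 0.5).astype(int)  # identical for both score vectors
acc = accuracy_score(y_true, hard_labels)          # 0.75 either way
auc_confident = roc_auc_score(y_true, confident_wrong)  # 0.5 - confident mistake is punished
auc_barely = roc_auc_score(y_true, barely_wrong)        # 1.0 - ranking is still perfect
```

Accuracy cannot tell these two classifiers apart; the AUC can.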
```
def rf_violinplot(X, y, n_iter=25, test_size=0.3, random_state=1,
n_estimators=1000):
cross_val_skf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size,
random_state=random_state)
clf = RandomForestClassifier(n_estimators=n_estimators, random_state=random_state)
scores = cross_val_score(clf, X, y, cv=cross_val_skf)
sns.violinplot(scores,inner='stick')
rf_violinplot(X,y)
# TODO - Switch to using caret for this bs..?
# Do multi-fold cross validation for adaboost classifier
def adaboost_violinplot(X, y, n_iter=25, test_size=0.3, random_state=1,
n_estimators=200):
cross_val_skf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf = AdaBoostClassifier(n_estimators=n_estimators, random_state=random_state)
scores = cross_val_score(clf, X, y, cv=cross_val_skf)
sns.violinplot(scores,inner='stick')
adaboost_violinplot(X,y)
# TODO PQN normalization, and log-transformation,
# and some feature selection (above a certain threshold of intensity, use principal components), etc.
def pqn_normalize(X, integral_first=False, plot=False):
'''
Take a feature table and run PQN normalization on it
'''
# normalize by sum of intensities in each sample first. Not necessary
if integral_first:
sample_sums = np.sum(X, axis=1)
X = (X / sample_sums[:,np.newaxis])
# Get the median value of each feature across all samples
median_intensities = np.median(X, axis=0)
# Divide each feature by its median value across samples -
# these are the quotients for each feature
X_quotients = (X / median_intensities[np.newaxis,:])
if plot: # plot the distribution of quotients from one sample
for i in range(1,len(X_quotients[:,1])):
print 'allquotients reshaped!\n\n',
#all_quotients = X_quotients.reshape(np.prod(X_quotients.shape))
all_quotients = X_quotients[i,:]
print all_quotients.shape
x = np.random.normal(loc=0, scale=1, size=len(all_quotients))
sns.violinplot(all_quotients)
plt.title("median val: %f\nMax val=%f" % (np.median(all_quotients), np.max(all_quotients)))
#plt.plot(title="median val: %f" % np.median(all_quotients))
plt.xlim([-0.5, 5])
plt.show()
# Define a quotient for each sample as the median of the feature-specific quotients
# in that sample
sample_quotients = np.median(X_quotients, axis=1)
# Quotient normalize each samples
X_pqn = X / sample_quotients[:,np.newaxis]
return X_pqn
# Make a fake sample, with 2 samples at 1x and 2x dilutions
X_toy = np.array([[1,1,1,],
[2,2,2],
[3,6,9],
[6,12,18]], dtype=float)
print X_toy
print X_toy.reshape(1, np.prod(X_toy.shape))
X_toy_pqn_int = pqn_normalize(X_toy, integral_first=True, plot=True)
print X_toy_pqn_int
print '\n\n\n'
X_toy_pqn = pqn_normalize(X_toy)
print X_toy_pqn
```
<h2> pqn normalize your features </h2>
```
X_pqn = pqn_normalize(X)
print X_pqn
```
<h2>Random Forest & adaBoost with PQN-normalized data</h2>
```
rf_violinplot(X_pqn, y)
# Do multi-fold cross validation for adaboost classifier
adaboost_violinplot(X_pqn, y)
```
<h2> RF & adaBoost with PQN-normalized, log-transformed data </h2>
Turns out a monotonic transformation doesn't really affect any of these things.
Tree-based classifiers like random forests and boosted stumps split on the ordering of feature values, so a monotonic transform leaves the splits (and the scores) unchanged.
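A quick sketch (not from the original notebook) of that invariance: fitting a single decision tree on raw and on log-transformed features gives identical predictions, because a strictly monotonic transform preserves every candidate split.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(1)
X = rng.uniform(1.0, 10.0, size=(200, 5))  # strictly positive features
y = (X[:, 0] + X[:, 1] > 11).astype(int)

tree_raw = DecisionTreeClassifier(random_state=0).fit(X, y)
tree_log = DecisionTreeClassifier(random_state=0).fit(np.log(X), y)

# Same partitions of the data, hence identical predictions
same = (tree_raw.predict(X) == tree_log.predict(np.log(X))).all()
```

The same argument applies to each tree in a random forest or AdaBoost ensemble, which is why the violin plots barely move after the log transform.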
```
X_pqn_nlog = np.log(X_pqn)
rf_violinplot(X_pqn_nlog, y)
adaboost_violinplot(X_pqn_nlog, y)
def roc_curve_cv(X, y, clf, cross_val,
path='/home/irockafe/Desktop/roc.pdf',
save=False, plot=True):
t1 = time.time()
# collect vals for the ROC curves
tpr_list = []
mean_fpr = np.linspace(0,1,100)
auc_list = []
# Get the false-positive and true-positive rate
for i, (train, test) in enumerate(cross_val):
clf.fit(X[train], y[train])
y_pred = clf.predict_proba(X[test])[:,1]
# get fpr, tpr
fpr, tpr, thresholds = roc_curve(y[test], y_pred)
roc_auc = auc(fpr, tpr)
#print 'AUC', roc_auc
#sns.plt.plot(fpr, tpr, lw=10, alpha=0.6, label='ROC - AUC = %0.2f' % roc_auc,)
#sns.plt.show()
tpr_list.append(interp(mean_fpr, fpr, tpr))
tpr_list[-1][0] = 0.0
auc_list.append(roc_auc)
if (i % 10 == 0):
print '{perc}% done! {time}s elapsed'.format(perc=100*float(i)/cross_val.n_iter, time=(time.time() - t1))
# get mean tpr and fpr
mean_tpr = np.mean(tpr_list, axis=0)
# make sure it ends up at 1.0
mean_tpr[-1] = 1.0
mean_auc = auc(mean_fpr, mean_tpr)
std_auc = np.std(auc_list)
if plot:
# plot mean auc
plt.plot(mean_fpr, mean_tpr, label='Mean ROC - AUC = %0.2f $\pm$ %0.2f' % (mean_auc,
std_auc),
lw=5, color='b')
# plot luck-line
plt.plot([0,1], [0,1], linestyle = '--', lw=2, color='r',
label='Luck', alpha=0.5)
# plot 1-std
std_tpr = np.std(tpr_list, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=0.2,
label=r'$\pm$ 1 stdev')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC curve, {iters} iterations of {cv} cross validation'.format(
iters=cross_val.n_iter, cv='{train}:{test}'.format(test=cross_val.test_size, train=(1-cross_val.test_size)))
)
plt.legend(loc="lower right")
if save:
plt.savefig(path, format='pdf')
plt.show()
return tpr_list, auc_list, mean_fpr
rf_estimators = 1000
n_iter = 3
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)
rf_graph_path = '''/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revolutionizing_healthcare/data/MTBLS315/\
isaac_feature_tables/uhplc_pos/rf_roc_{trees}trees_{cv}cviter.pdf'''.format(trees=rf_estimators, cv=n_iter)
print cross_val_rf.n_iter
print cross_val_rf.test_size
tpr_vals, auc_vals, mean_fpr = roc_curve_cv(X_pqn, y, clf_rf, cross_val_rf,
path=rf_graph_path, save=False)
# For adaboosted
n_iter = 3
test_size = 0.3
random_state = 1
adaboost_estimators = 200
adaboost_path = '''/home/irockafe/Dropbox (MIT)/Alm_Lab/projects/revolutionizing_healthcare/data/MTBLS315/\
isaac_feature_tables/uhplc_pos/adaboost_roc_{trees}trees_{cv}cviter.pdf'''.format(trees=adaboost_estimators,
cv=n_iter)
cross_val_adaboost = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf = AdaBoostClassifier(n_estimators=adaboost_estimators, random_state=random_state)
adaboost_tpr, adaboost_auc, adaboost_fpr = roc_curve_cv(X_pqn, y, clf, cross_val_adaboost,
path=adaboost_path)
```
<h2> Great, you can classify things. But build null models and do a sanity check to make
sure you aren't just classifying garbage </h2>
```
# Make a null model AUC curve
def make_null_model(X, y, clf, cross_val, random_state=1, num_shuffles=5, plot=True):
'''
Runs the true model, then sanity-checks by:
Shuffles class labels and then builds cross-validated ROC curves from them.
Compares true AUC vs. shuffled auc by t-test (assumes normality of AUC curve)
'''
null_aucs = []
print y.shape
print X.shape
tpr_true, auc_true, fpr_true = roc_curve_cv(X, y, clf, cross_val)
# shuffle y lots of times
for i in range(0, num_shuffles):
#Iterate through the shuffled y vals and repeat with appropriate params
# Retain the auc vals for final plotting of distribution
y_shuffle = shuffle(y)
cross_val.y = y_shuffle
cross_val.y_indices = y_shuffle
        print 'Number of labels changed by the shuffle: %s' % (y != y_shuffle).sum()
# Get auc values for number of iterations
tpr, auc, fpr = roc_curve_cv(X, y_shuffle, clf, cross_val, plot=False)
null_aucs.append(auc)
#plot the outcome
if plot:
flattened_aucs = [j for i in null_aucs for j in i]
my_dict = {'true_auc': auc_true, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy)
# Plot distribution of AUC vals
plt.title("Distribution of aucs")
#sns.plt.ylabel('count')
plt.xlabel('AUC')
#sns.plt.plot(auc_true, 0, color='red', markersize=10)
plt.show()
# Do a quick t-test to see if odds of randomly getting an AUC that good
return auc_true, null_aucs
# Make a null model AUC curve & compare it to null-model
# Random forest magic!
rf_estimators = 1000
n_iter = 50
test_size = 0.3
random_state = 1
cross_val_rf = StratifiedShuffleSplit(y, n_iter=n_iter, test_size=test_size, random_state=random_state)
clf_rf = RandomForestClassifier(n_estimators=rf_estimators, random_state=random_state)
true_auc, all_aucs = make_null_model(X_pqn, y, clf_rf, cross_val_rf, num_shuffles=5)
# make dataframe from true and false aucs
flattened_aucs = [j for i in all_aucs for j in i]
my_dict = {'true_auc': true_auc, 'null_auc': flattened_aucs}
df_poop = pd.DataFrame.from_dict(my_dict, orient='index').T
df_tidy = pd.melt(df_poop, value_vars=['true_auc', 'null_auc'],
value_name='auc', var_name='AUC_type')
print df_tidy.head()
#print flattened_aucs
sns.violinplot(x='AUC_type', y='auc',
inner='points', data=df_tidy, bw=0.7)
plt.show()
```
<h2> Let's check out some PCA plots </h2>
```
from sklearn.decomposition import PCA
# Check PCA of things
def PCA_plot(X, y, n_components, plot_color, class_nums, class_names, title='PCA'):
pca = PCA(n_components=n_components)
X_pca = pca.fit(X).transform(X)
print zip(plot_color, class_nums, class_names)
for color, i, target_name in zip(plot_color, class_nums, class_names):
# plot one class at a time, first plot all classes y == 0
#print color
#print y == i
xvals = X_pca[y == i, 0]
print xvals.shape
yvals = X_pca[y == i, 1]
plt.scatter(xvals, yvals, color=color, alpha=0.8, label=target_name)
plt.legend(bbox_to_anchor=(1.01,1), loc='upper left', shadow=False)#, scatterpoints=1)
plt.title('PCA of Malaria data')
plt.show()
PCA_plot(X_pqn, y, 2, ['red', 'blue'], [0,1], ['malaria', 'non-malaria fever'])
PCA_plot(X, y, 2, ['red', 'blue'], [0,1], ['malaria', 'non-malaria fever'])
```
<h2> What about with all three classes? </h2>
```
# convert classes to numbers
le = preprocessing.LabelEncoder()
le.fit(class_map_df['class'])
y_three_class = le.transform(class_map_df['class'])
print class_map_df.head(10)
print y_three_class
print X.shape
print y_three_class.shape
y_labels = np.sort(class_map_df['class'].unique())
print y_labels
colors = ['green', 'red', 'blue']
print np.unique(y_three_class)
PCA_plot(X_pqn, y_three_class, 2, colors, np.unique(y_three_class), y_labels)
PCA_plot(X, y_three_class, 2, colors, np.unique(y_three_class), y_labels)
```
# 0.0. IMPORTS
```
import pandas as pd
import inflection
import math
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import datetime
from IPython.display import Image
```
## 0.1. Helper Functions
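This section is left empty in the notebook; a typical helper at this point (hypothetical, not from the original) just fixes plot defaults once so the EDA cells below don't have to repeat them:

```python
import matplotlib.pyplot as plt

def jupyter_settings():
    # Hypothetical helper: larger default figure size and font for the plots below
    plt.rcParams['figure.figsize'] = (24, 9)
    plt.rcParams['font.size'] = 14

jupyter_settings()
```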
## 0.2. Loading Data
```
df_sales_raw = pd.read_csv('data/train.csv', low_memory=False)
df_store_raw = pd.read_csv('data/store.csv', low_memory=False)
# merge
df_raw = pd.merge(df_sales_raw, df_store_raw, how='left', on='Store')
```
# 1.0. DATA DESCRIPTION
```
df1 = df_raw.copy()
```
## 1.1. Rename Columns
```
cols_old = ['Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo',
'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
'CompetitionDistance', 'CompetitionOpenSinceMonth',
'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
'Promo2SinceYear', 'PromoInterval']
snakecase = lambda x: inflection.underscore(x)
cols_new = list(map(snakecase, cols_old))
#Rename
df1.columns = cols_new
```
## 1.2. Data Dimensions
```
print('Number of rows: {}'.format(df1.shape[0]))
print('Number of columns: {}'.format(df1.shape[1]))
```
## 1.3. Data Types
```
df1['date'] = pd.to_datetime(df1['date'])
df1.dtypes
```
## 1.4. Check NA
```
df1.isna().sum()
```
## 1.5. Fillout NA
```
# competition_distance
## df1['competition_distance'].max() # Checking the maximum distance / ==75860.0
df1['competition_distance'] = df1['competition_distance'].apply(lambda x: 200000.0 if math.isnan( x ) else x)
# competition_open_since_month
## Using the month of the date column as reference if there isn't any value in competition_open_since_month column
df1['competition_open_since_month'] = df1.apply(lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis=1)
# competition_open_since_year
## Using the year inside date column as reference if there isn't any value in competition_open_since_year column
df1['competition_open_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis=1)
# promo2_since_week
## Using the week inside date column as reference if there isn't any value in promo2_since_week column
df1['promo2_since_week'] = df1.apply(lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis=1)
# promo2_since_year
## Using the year inside date column as reference if there isn't any value in promo2_since_year column
df1['promo2_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis=1)
# promo_interval
month_map = {1: 'Jan', 2: 'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug', 9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec'}
df1['promo_interval'].fillna(0, inplace=True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval', 'month_map']].apply(lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis=1)
```
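The `is_promo` lambda above is easy to misread; a toy check (illustrative only, not part of the original notebook) confirms that a row counts as in-promo only when the current month's abbreviation appears in its `promo_interval` string:

```python
import pandas as pd

toy = pd.DataFrame({
    'promo_interval': [0, 'Jan,Apr,Jul,Oct', 'Feb,May,Aug,Nov'],
    'month_map': ['Apr', 'Apr', 'Apr'],
})
# Same logic as the notebook cell above
toy['is_promo'] = toy.apply(
    lambda x: 0 if x['promo_interval'] == 0
    else 1 if x['month_map'] in x['promo_interval'].split(',') else 0,
    axis=1)
print(toy['is_promo'].tolist())  # [0, 1, 0]
```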
## 1.6. Change Types
```
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype(np.int64)
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype(np.int64)
df1['promo2_since_week'] = df1['promo2_since_week'].astype(np.int64)
df1['promo2_since_year'] = df1['promo2_since_year'].astype(np.int64)
```
## 1.7. Descriptive Statistics
### 1.7.1 Numerical Attributes
```
# Splitting the dataframe between numeric and categorical
num_attributes = df1.select_dtypes(include = ['int64', 'float64'])
cat_attributes = df1.select_dtypes(exclude = ['int64', 'float64', 'datetime64[ns]'])
# Central Tendency - mean, median
ct1 = pd.DataFrame(num_attributes.apply(np.mean)).T
ct2 = pd.DataFrame(num_attributes.apply(np.median)).T
# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame(num_attributes.apply(np.std)).T
d2 = pd.DataFrame(num_attributes.apply(min)).T
d3 = pd.DataFrame(num_attributes.apply(max)).T
d4 = pd.DataFrame(num_attributes.apply(lambda x: x.max() - x.min())).T
d5 = pd.DataFrame(num_attributes.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(num_attributes.apply(lambda x: x.kurtosis())).T
m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
m
```
### 1.7.2 Categorical Attributes
```
cat_attributes.apply(lambda x: x.unique().shape[0])
aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]
plt.figure(figsize=(15, 6))
plt.subplot(1, 3, 1)
sns.boxplot(x='state_holiday', y='sales', data=aux1)
plt.subplot(1, 3, 2)
sns.boxplot(x='store_type', y='sales', data=aux1)
plt.subplot(1, 3, 3)
sns.boxplot(x='assortment', y='sales', data=aux1)
plt.tight_layout()
```
# 2.0. Feature Engineering
```
df2 = df1.copy()
```
## 2.1. Mind Map Hypothesis
```
Image('img/MindMapHypothesis.png')
```
## 2.2. Hypothesis Creation
### 2.2.1. Store Hypotheses
**1.** Stores with more employees should sell more.
**2.** Stores with larger stock capacity should sell more.
**3.** Larger stores should sell more.
**4.** Stores with a larger assortment should sell more.
**5.** Stores with closer competitors should sell less.
**6.** Stores with longer-established competitors should sell more.
### 2.2.2. Product Hypotheses
**1.** Stores that invest more in marketing should sell more.
**2.** Stores that display more products in the shop window should sell more.
**3.** Stores with lower-priced products should sell more.
**4.** Stores with more aggressive promotions (bigger discounts) should sell more.
**5.** Stores with promotions active for longer should sell more.
**6.** Stores with more promotion days should sell more.
**7.** Stores with more consecutive promotions should sell more.
### 2.2.3. Time Hypotheses
**1.** Stores open during the Christmas holiday should sell more.
**2.** Stores should sell more over the years.
**3.** Stores should sell more in the second half of the year.
**4.** Stores should sell more after the 10th of each month.
**5.** Stores should sell less on weekends.
**6.** Stores should sell less during school holidays.
## 2.3. Final List of Hypotheses
**1.** Stores with a larger assortment should sell more.
**2.** Stores with closer competitors should sell less.
**3.** Stores with longer-established competitors should sell more.
**4.** Stores with promotions active for longer should sell more.
**5.** Stores with more promotion days should sell more.
**6.** Stores with more consecutive promotions should sell more.
**7.** Stores open during the Christmas holiday should sell more.
**8.** Stores should sell more over the years.
**9.** Stores should sell more in the second half of the year.
**10.** Stores should sell more after the 10th of each month.
**11.** Stores should sell less on weekends.
**12.** Stores should sell less during school holidays.
## 2.4. Feature Engineering
```
# year
df2['year'] = df2['date'].dt.year
# month
df2['month'] = df2['date'].dt.month
# day
df2['day'] = df2['date'].dt.day
# week of year
df2['week_of_year'] = df2['date'].dt.weekofyear
# year week
df2['year_week'] = df2['date'].dt.strftime('%Y-%W')
# competition since
df2['competition_since'] = df2.apply(lambda x: datetime.datetime(year=x['competition_open_since_year'], month=x['competition_open_since_month'], day=1), axis=1)
df2['competition_time_month'] = ((df2['date'] - df2['competition_since'])/30).apply(lambda x: x.days).astype(np.int64) # months the competition has been open
# promo since
df2['promo_since'] = df2['promo2_since_year'].astype(str) + '-' + df2['promo2_since_week'].astype(str)
df2['promo_since'] = df2['promo_since'].apply(lambda x: datetime.datetime.strptime(x + '-1', '%Y-%W-%w') - datetime.timedelta(days=7))
df2['promo_time_week'] = ((df2['date'] - df2['promo_since'])/7).apply(lambda x: x.days).astype(np.int64) # weeks the promotion has been active
# assortment
df2['assortment'] = df2['assortment'].apply(lambda x: 'basic' if x == 'a' else 'extra' if x == 'b' else 'extended')
# state holiday
df2['state_holiday'] = df2['state_holiday'].apply(lambda x: 'public_holiday' if x == 'a' else 'easter_holiday' if x == 'b' else 'christmas' if x == 'c' else 'regular_day')
df2.head().T
```
<h1>CAMPANIA REGION</h1>
Comparison of the deaths recorded by ISTAT and the COVID-19 deaths recorded by the Italian Civil Protection with the deaths forecast by the SARIMA predictive model.
<h2>ISTAT MONTHLY DEATHS, CAMPANIA REGION</h2>
The DataFrame contains the monthly deaths of the <b>Campania</b> region from <b>2015</b> to <b>30 September 2020</b>.
```
import matplotlib.pyplot as plt
import pandas as pd
decessi_istat = pd.read_csv('../../csv/regioni/campania.csv')
decessi_istat.head()
decessi_istat['DATA'] = pd.to_datetime(decessi_istat['DATA'])
decessi_istat.TOTALE = pd.to_numeric(decessi_istat.TOTALE)
```
<h3>Extracting the data for the COVID-19 period</h3>
```
decessi_istat = decessi_istat[decessi_istat['DATA'] > '2020-02-29']
decessi_istat.head()
```
<h3>Building the time series of ISTAT deaths</h3>
```
decessi_istat = decessi_istat.set_index('DATA')
decessi_istat = decessi_istat.TOTALE
decessi_istat
```
<h2>MONTHLY COVID DEATHS, CAMPANIA REGION</h2>
The DataFrame contains the Civil Protection data on the monthly deaths of the <b>Campania</b> region from <b>March 2020</b> to <b>30 September 2020</b>.
```
covid = pd.read_csv('../../csv/regioni_covid/campania.csv')
covid.head()
covid['data'] = pd.to_datetime(covid['data'])
covid.deceduti = pd.to_numeric(covid.deceduti)
covid = covid.set_index('data')
covid.head()
```
<h3>Building the time series of COVID-19 deaths</h3>
```
covid = covid.deceduti
```
<h2>MONTHLY DEATHS PREDICTED BY THE SARIMA MODEL</h2>
The DataFrame contains the monthly deaths of the <b>Campania</b> region as predicted by the fitted SARIMA model.
```
predictions = pd.read_csv('../../csv/pred/predictions_SARIMA_campania.csv')
predictions.head()
predictions.rename(columns={'Unnamed: 0': 'Data', 'predicted_mean':'Totale'}, inplace=True)
predictions.head()
predictions['Data'] = pd.to_datetime(predictions['Data'])
predictions.Totale = pd.to_numeric(predictions.Totale)
```
<h3>Extracting the data for the COVID-19 period</h3>
```
predictions = predictions[predictions['Data'] > '2020-02-29']
predictions.head()
predictions = predictions.set_index('Data')
predictions.head()
```
<h3>Building the time series of the deaths predicted by the model</h3>
```
predictions = predictions.Totale
```
<h1>CONFIDENCE INTERVALS</h1>
<h3>Upper bound</h3>
```
upper = pd.read_csv('../../csv/upper/predictions_SARIMA_campania_upper.csv')
upper.head()
upper.rename(columns={'Unnamed: 0': 'Data', 'upper TOTALE':'Totale'}, inplace=True)
upper['Data'] = pd.to_datetime(upper['Data'])
upper.Totale = pd.to_numeric(upper.Totale)
upper.head()
upper = upper[upper['Data'] > '2020-02-29']
upper = upper.set_index('Data')
upper.head()
upper = upper.Totale
```
<h3>Lower bound</h3>
```
lower = pd.read_csv('../../csv/lower/predictions_SARIMA_campania_lower.csv')
lower.head()
lower.rename(columns={'Unnamed: 0': 'Data', 'lower TOTALE':'Totale'}, inplace=True)
lower['Data'] = pd.to_datetime(lower['Data'])
lower.Totale = pd.to_numeric(lower.Totale)
lower.head()
lower = lower[lower['Data'] > '2020-02-29']
lower = lower.set_index('Data')
lower.head()
lower = lower.Totale
```
<h1> COMPARISON OF THE TIME SERIES </h1>
Below is the graphical comparison between the time series of <b>total monthly deaths</b>, <b>COVID-19 deaths</b> and <b>deaths predicted by the SARIMA model</b> for the <b>Campania</b> region.
<br />
The reference months are: <b>March</b>, <b>April</b>, <b>May</b>, <b>June</b>, <b>July</b>, <b>August</b> and <b>September</b>.
```
plt.figure(figsize=(15,4))
plt.title('CAMPANIA - Confronto decessi totali, decessi causa covid e decessi del modello predittivo', size=18)
plt.plot(covid, label='decessi causa covid')
plt.plot(decessi_istat, label='decessi totali')
plt.plot(predictions, label='predizione modello')
plt.legend(prop={'size': 12})
plt.show()
plt.figure(figsize=(15,4))
plt.title("CAMPANIA - Confronto decessi totali ISTAT con decessi previsti dal modello", size=18)
plt.plot(predictions, label='predizione modello')
plt.plot(upper, label='limite massimo')
plt.plot(lower, label='limite minimo')
plt.plot(decessi_istat, label='decessi totali')
plt.legend(prop={'size': 12})
plt.show()
```
<h3>Computing COVID-19 deaths according to the predictive model</h3>
Difference between the total deaths released by ISTAT and the deaths predicted by the SARIMA model.
```
n = decessi_istat - predictions
n_upper = decessi_istat - lower
n_lower = decessi_istat - upper
plt.figure(figsize=(15,4))
plt.title("CAMPANIA - Confronto decessi accertati covid con decessi covid previsti dal modello", size=18)
plt.plot(covid, label='decessi covid accertati - Protezione Civile')
plt.plot(n, label='decessi covid previsti - modello SARIMA')
plt.plot(n_upper, label='limite massimo - modello SARIMA')
plt.plot(n_lower, label='limite minimo - modello SARIMA')
plt.legend(prop={'size': 12})
plt.show()
d = decessi_istat.sum()
print("Decessi 2020:", d)
d_m = predictions.sum()
print("Decessi attesi dal modello 2020:", d_m)
d_lower = lower.sum()
print("Decessi attesi dal modello 2020 - livello mimino:", d_lower)
```
<h3>Total confirmed COVID-19 deaths, Campania region</h3>
```
m = covid.sum()
print(int(m))
```
<h3>Total COVID-19 deaths predicted by the model for the Campania region</h3>
<h4>Mean value</h4>
```
total = n.sum()
print(int(total))
```
<h4>Maximum value</h4>
```
total_upper = n_upper.sum()
print(int(total_upper))
```
<h4>Minimum value</h4>
```
total_lower = n_lower.sum()
print(int(total_lower))
```
<h3>Computing the number of COVID-19 deaths not recorded, according to the SARIMA predictive model, for the Campania region</h3>
<h4>Mean value</h4>
```
x = decessi_istat - predictions - covid
x = x.sum()
print(int(x))
```
<h4>Maximum value</h4>
```
x_upper = decessi_istat - lower - covid
x_upper = x_upper.sum()
print(int(x_upper))
```
<h4>Minimum value</h4>
```
x_lower = decessi_istat - upper - covid
x_lower = x_lower.sum()
print(int(x_lower))
```
```
import pickle
import numpy as np
import awkward
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
import uproot
import boost_histogram as bh
import mplhep
mplhep.style.use("CMS")
CMS_PF_CLASS_NAMES = ["none", "charged hadron", "neutral hadron", "hfem", "hfhad", "photon", "electron", "muon"]
ELEM_LABELS_CMS = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
ELEM_NAMES_CMS = ["NONE", "TRACK", "PS1", "PS2", "ECAL", "HCAL", "GSF", "BREM", "HFEM", "HFHAD", "SC", "HO"]
CLASS_LABELS_CMS = [0, 211, 130, 1, 2, 22, 11, 13]
CLASS_NAMES_CMS = ["none", "ch.had", "n.had", "HFEM", "HFHAD", r"$\gamma$", r"$e^\pm$", r"$\mu^\pm$"]
class_names = {k: v for k, v in zip(CLASS_LABELS_CMS, CLASS_NAMES_CMS)}
physics_process = "qcd" #"ttbar", "qcd"
if physics_process == "qcd":
data_baseline = awkward.Array(pickle.load(open("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11843.0/out.pkl", "rb")))
data_mlpf = awkward.Array(pickle.load(open("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11843.13/out.pkl", "rb")))
fi1 = uproot.open("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11843.0/DQM_V0001_R000000001__Global__CMSSW_X_Y_Z__RECO.root")
fi2 = uproot.open("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11843.13/DQM_V0001_R000000001__Global__CMSSW_X_Y_Z__RECO.root")
elif physics_process == "ttbar":
data_mlpf = awkward.Array(pickle.load(open("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11834.13/out.pkl", "rb")))
data_baseline = awkward.Array(pickle.load(open("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11834.0/out.pkl", "rb")))
fi1 = uproot.open("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11834.0/DQM_V0001_R000000001__Global__CMSSW_X_Y_Z__RECO.root")
fi2 = uproot.open("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11834.13/DQM_V0001_R000000001__Global__CMSSW_X_Y_Z__RECO.root")
def sample_label(ax, physics_process=physics_process, additional_text="", x=0.01, y=0.93):
plt.text(x, y,
physics_process_str[physics_process]+additional_text,
ha="left", size=20,
transform=ax.transAxes
)
physics_process_str = {
"ttbar": "$t\\bar{t}$ events",
"singlepi": "single $\pi^{\pm}$ events",
"qcd": "QCD events",
}
def ratio_unc(h0, h1):
ratio = h0/h1
c0 = h0.values()
c1 = h1.values()
v0 = np.sqrt(c0)
v1 = np.sqrt(c1)
unc_ratio = ratio.values()*np.sqrt((v0/c0)**2 + (v1/c1)**2)
return ratio, unc_ratio
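The `ratio_unc` helper above propagates Poisson uncertainties in quadrature: for bin counts `c0` and `c1`, the ratio error is `(c0/c1) * sqrt(1/c0 + 1/c1)`. The same computation can be sketched with plain NumPy, independent of `boost_histogram` (the helper name `ratio_with_poisson_unc` is illustrative, not part of the analysis code):

```python
import numpy as np

def ratio_with_poisson_unc(c0, c1):
    """Ratio of two arrays of Poisson bin counts, with the propagated uncertainty."""
    c0 = np.asarray(c0, dtype=float)
    c1 = np.asarray(c1, dtype=float)
    ratio = c0 / c1
    # Relative Poisson errors are sqrt(c)/c = 1/sqrt(c); add them in quadrature
    unc = ratio * np.sqrt(1.0 / c0 + 1.0 / c1)
    return ratio, unc

# Example: equal counts give ratio 1 with ~14% uncertainty at 100 events/bin
r, u = ratio_with_poisson_unc([100, 400], [100, 100])
```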
def plot_candidates_pf_vs_mlpf(variable, varname, bins):
plt.figure(figsize=(16,16))
ax = plt.axes()
hists_baseline = []
hists_mlpf = []
iplot = 1
for pid in [13,11,22,1,2,130,211]:
msk1 = np.abs(data_baseline["particleFlow"]["pdgId"]) == pid
msk2 = np.abs(data_mlpf["particleFlow"]["pdgId"]) == pid
d1 = awkward.flatten(data_baseline["particleFlow"][variable][msk1])
d2 = awkward.flatten(data_mlpf["particleFlow"][variable][msk2])
h1 = bh.Histogram(bh.axis.Variable(bins))
h1.fill(d1)
h2 = bh.Histogram(bh.axis.Variable(bins))
h2.fill(d2)
ax = plt.subplot(3,3,iplot)
plt.sca(ax)
mplhep.histplot(h1, histtype="step", lw=2, label="PF");
mplhep.histplot(h2, histtype="step", lw=2, label="MLPF");
if variable!="eta":
plt.yscale("log")
plt.legend(loc="best", frameon=False, title=class_names[pid])
plt.xlabel(varname)
plt.ylabel("Number of particles / bin")
sample_label(ax, x=0.08)
iplot += 1
hists_baseline.append(h1)
hists_mlpf.append(h2)
plt.tight_layout()
return hists_baseline, hists_mlpf
def plot_candidates_pf_vs_mlpf_single(hists, ncol=1):
#plt.figure(figsize=(10, 13))
f, (a0, a1) = plt.subplots(2, 1, gridspec_kw={'height_ratios': [3, 1]}, sharex=True)
plt.sca(a0)
mplhep.cms.label("Preliminary", data=False, loc=0, rlabel="Run 3 (14 TeV)")
v1 = mplhep.histplot([h[bh.rebin(2)] for h in hists[0]], stack=True, label=[class_names[k] for k in [13,11,22,1,2,130,211]], lw=1)
v2 = mplhep.histplot([h[bh.rebin(2)] for h in hists[1]], stack=True, color=[x.stairs.get_edgecolor() for x in v1][::-1], lw=2, histtype="errorbar")
plt.yscale("log")
plt.ylim(top=1e8)
sample_label(a0)
if ncol==1:
legend1 = plt.legend(v1, [x.legend_artist.get_label() for x in v1], loc=(0.40, 0.25), title="PF", ncol=1)
legend2 = plt.legend(v2, [x.legend_artist.get_label() for x in v1], loc=(0.65, 0.25), title="MLPF", ncol=1)
elif ncol==2:
legend1 = plt.legend(v1, [x.legend_artist.get_label() for x in v1], loc=(0.05, 0.50), title="PF", ncol=2)
legend2 = plt.legend(v2, [x.legend_artist.get_label() for x in v1], loc=(0.50, 0.50), title="MLPF", ncol=2)
plt.gca().add_artist(legend1)
plt.ylabel("Total number of particles / bin")
plt.sca(a1)
sum_h0 = sum(hists[0])
sum_h1 = sum(hists[1])
ratio, unc_ratio = ratio_unc(sum_h0, sum_h1)
mplhep.histplot(ratio, histtype="errorbar", color="black", yerr=unc_ratio)
plt.ylim(0,2)
plt.axhline(1.0, color="black", ls="--")
plt.ylabel("PF / MLPF")
#lt.tight_layout()
#cms_label(ax)
#sample_label(ax)
return a0, a1
hists = plot_candidates_pf_vs_mlpf("pt", "PFCandidate $p_T$ [GeV]", np.linspace(0,200,101))
plt.savefig("candidates_pt_{}.pdf".format(physics_process), bbox_inches="tight")
a0, a1 = plot_candidates_pf_vs_mlpf_single(hists)
plt.xlabel("PFCandidate $p_T$ [GeV]")
plt.savefig("candidates_pt_single_{}.pdf".format(physics_process), bbox_inches="tight")
plt.savefig("candidates_pt_single_{}.png".format(physics_process), dpi=400, bbox_inches="tight")
hists = plot_candidates_pf_vs_mlpf("eta", "PFCandidate $\eta$", np.linspace(-6, 6,101))
plt.savefig("candidates_eta_{}.pdf".format(physics_process), bbox_inches="tight")
a0, a1 = plot_candidates_pf_vs_mlpf_single(hists, ncol=2)
a0.set_yscale("log")
a0.set_ylim(10,1e10)
plt.xlabel("PFCandidate $\eta$")
plt.savefig("candidates_eta_single_{}.pdf".format(physics_process), bbox_inches="tight")
plt.savefig("candidates_eta_single_{}.png".format(physics_process), dpi=400, bbox_inches="tight")
def plot_pf_vs_mlpf_jet(jetcoll, variable, bins, jetcoll_name, cumulative=False, binwnorm=False):
f, (a0, a1) = plt.subplots(2, 1, gridspec_kw={'height_ratios': [3, 1]}, sharex=True)
plt.sca(a0)
mplhep.cms.label("Preliminary", data=False, loc=0, rlabel="Run 3 (14 TeV)")
h1 = bh.Histogram(bh.axis.Variable(bins))
h1.fill(awkward.flatten(data_baseline[jetcoll][variable]))
if cumulative:
h1[:] = np.sum(h1.values()) - np.cumsum(h1)
h2 = bh.Histogram(bh.axis.Variable(bins))
h2.fill(awkward.flatten(data_mlpf[jetcoll][variable]))
if cumulative:
h2[:] = np.sum(h2.values()) - np.cumsum(h2)
mplhep.histplot(h1, histtype="step", lw=2, label="PF", binwnorm=binwnorm);
mplhep.histplot(h2, histtype="step", lw=2, label="MLPF", binwnorm=binwnorm);
#cms_label(ax)
sample_label(a0, additional_text=jetcoll_name)
plt.ylabel("Number of jets / GeV")
plt.legend(loc=1, frameon=False)
plt.sca(a1)
plt.axhline(1.0, color="black", ls="--")
plt.ylim(0,2)
ratio, unc_ratio = ratio_unc(h1, h2)
mplhep.histplot(ratio, histtype="errorbar", color="black", yerr=unc_ratio)
return a0, a1
def varbins(b0, b1):
return np.concatenate([b0[:-1], b1])
jet_bins = varbins(np.linspace(0,100,21), np.linspace(100,200,5))
a0, a1 = plot_pf_vs_mlpf_jet("ak4PFJetsCHS", "pt", jet_bins, ", AK4 CHS jets", cumulative=False, binwnorm=1)
a0.set_yscale("log")
a0.set_ylim(1, 1e5)
a1.set_ylabel("PF / MLPF")
#plt.ylim(top=1e6)
plt.xlabel("jet $p_T$ [GeV]")
plt.savefig("ak4jet_chs_pt_{}.pdf".format(physics_process), bbox_inches="tight")
a0, a1 = plot_pf_vs_mlpf_jet("ak4PFJetsPuppi", "pt", jet_bins, ", AK4 PUPPI jets", cumulative=False, binwnorm=1)
a0.set_yscale("log")
a1.set_ylabel("PF / MLPF")
a0.set_ylim(1, 1e4)
plt.xlabel("jet $p_T$ [GeV]")
plt.savefig("ak4jet_puppi_pt_{}.pdf".format(physics_process), bbox_inches="tight")
a0, a1 = plot_pf_vs_mlpf_jet("ak4PFJetsCHS", "eta", np.linspace(-6, 6, 61), ", AK4 CHS jets", cumulative=False, binwnorm=None)
#a0.set_yscale("log")
a0.set_ylim(0, 8000)
a1.set_ylabel("PF / MLPF")
a0.set_ylabel("Number of jets / 0.2")
plt.xlabel("jet $\eta$")
plt.savefig("ak4jet_chs_eta_{}.pdf".format(physics_process), bbox_inches="tight")
a0, a1 = plot_pf_vs_mlpf_jet("ak4PFJetsPuppi", "eta", np.linspace(-6, 6, 61), ", AK4 PUPPI jets", cumulative=False, binwnorm=None)
a0.set_ylim(0,2000)
a1.set_ylabel("PF / MLPF")
plt.xlabel("jet $\eta$")
a0.set_ylabel("Number of jets / 0.2")
plt.savefig("ak4jet_puppi_eta_{}.pdf".format(physics_process), bbox_inches="tight")
met_bins = varbins(np.linspace(0,150,21), np.linspace(150,450,5))
a0, a1 = plot_pf_vs_mlpf_jet("pfMet", "pt", met_bins, ", PF MET", cumulative=False, binwnorm=1)
a0.set_yscale("log")
a1.set_ylabel("PF / MLPF")
a0.set_ylim(top=1e3)
a0.set_ylabel("Number of events / GeV")
plt.xlabel("MET $p_T$ [GeV]")
plt.savefig("pfmet_pt_{}.pdf".format(physics_process), bbox_inches="tight")
a0, a1 = plot_pf_vs_mlpf_jet("pfMet", "pt", met_bins, ", PF MET", cumulative=True, binwnorm=None)
a0.set_yscale("log")
a1.set_ylabel("PF / MLPF")
a0.set_ylim(top=4000)
plt.xlabel("MET $p_T$ [GeV]")
a0.set_ylabel("Cumulative events")
plt.savefig("pfmet_c_pt_{}.pdf".format(physics_process), bbox_inches="tight")
a0, a1 = plot_pf_vs_mlpf_jet("pfMetPuppi", "pt", met_bins, ", PUPPI MET", cumulative=False, binwnorm=1)
a0.set_yscale("log")
a0.set_ylim(top=1e3)
plt.xlabel("MET $p_T$ [GeV]")
a0.set_ylabel("Number of events / GeV")
plt.savefig("pfmet_puppi_pt_{}.pdf".format(physics_process), bbox_inches="tight")
a0, a1 = plot_pf_vs_mlpf_jet("pfMetPuppi", "pt", met_bins, ", PUPPI MET", cumulative=True, binwnorm=None)
a0.set_yscale("log")
a1.set_ylabel("PF / MLPF")
a0.set_ylim(top=4000)
plt.xlabel("MET $p_T$ [GeV]")
a0.set_ylabel("Cumulative events")
plt.savefig("pfmet_puppi_c_pt_{}.pdf".format(physics_process), bbox_inches="tight")
timing_output = """
Nelem=1600 mean_time=4.66 ms stddev_time=2.55 ms mem_used=711 MB
Nelem=1920 mean_time=4.74 ms stddev_time=0.52 ms mem_used=711 MB
Nelem=2240 mean_time=5.53 ms stddev_time=0.63 ms mem_used=711 MB
Nelem=2560 mean_time=5.88 ms stddev_time=0.52 ms mem_used=711 MB
Nelem=2880 mean_time=6.22 ms stddev_time=0.63 ms mem_used=745 MB
Nelem=3200 mean_time=6.50 ms stddev_time=0.64 ms mem_used=745 MB
Nelem=3520 mean_time=7.07 ms stddev_time=0.61 ms mem_used=745 MB
Nelem=3840 mean_time=7.53 ms stddev_time=0.68 ms mem_used=745 MB
Nelem=4160 mean_time=7.76 ms stddev_time=0.69 ms mem_used=745 MB
Nelem=4480 mean_time=8.66 ms stddev_time=0.72 ms mem_used=745 MB
Nelem=4800 mean_time=9.00 ms stddev_time=0.57 ms mem_used=745 MB
Nelem=5120 mean_time=9.22 ms stddev_time=0.84 ms mem_used=745 MB
Nelem=5440 mean_time=9.64 ms stddev_time=0.73 ms mem_used=812 MB
Nelem=5760 mean_time=10.39 ms stddev_time=1.06 ms mem_used=812 MB
Nelem=6080 mean_time=10.77 ms stddev_time=0.69 ms mem_used=812 MB
Nelem=6400 mean_time=11.33 ms stddev_time=0.75 ms mem_used=812 MB
Nelem=6720 mean_time=12.19 ms stddev_time=0.77 ms mem_used=812 MB
Nelem=7040 mean_time=12.54 ms stddev_time=0.72 ms mem_used=812 MB
Nelem=7360 mean_time=13.08 ms stddev_time=0.78 ms mem_used=812 MB
Nelem=7680 mean_time=13.71 ms stddev_time=0.81 ms mem_used=812 MB
Nelem=8000 mean_time=14.11 ms stddev_time=0.74 ms mem_used=812 MB
Nelem=8320 mean_time=14.85 ms stddev_time=0.86 ms mem_used=812 MB
Nelem=8640 mean_time=15.36 ms stddev_time=0.79 ms mem_used=812 MB
Nelem=8960 mean_time=16.76 ms stddev_time=1.06 ms mem_used=812 MB
Nelem=9280 mean_time=17.27 ms stddev_time=0.71 ms mem_used=812 MB
Nelem=9600 mean_time=17.97 ms stddev_time=0.85 ms mem_used=812 MB
Nelem=9920 mean_time=18.73 ms stddev_time=0.94 ms mem_used=812 MB
Nelem=10240 mean_time=19.26 ms stddev_time=0.89 ms mem_used=812 MB
Nelem=10560 mean_time=19.91 ms stddev_time=0.90 ms mem_used=946 MB
Nelem=10880 mean_time=20.55 ms stddev_time=0.87 ms mem_used=946 MB
Nelem=11200 mean_time=21.82 ms stddev_time=0.78 ms mem_used=940 MB
Nelem=11520 mean_time=22.48 ms stddev_time=0.75 ms mem_used=940 MB
Nelem=11840 mean_time=23.33 ms stddev_time=0.98 ms mem_used=940 MB
Nelem=12160 mean_time=24.28 ms stddev_time=0.85 ms mem_used=940 MB
Nelem=12480 mean_time=24.85 ms stddev_time=0.67 ms mem_used=940 MB
Nelem=12800 mean_time=25.58 ms stddev_time=0.68 ms mem_used=940 MB
Nelem=13120 mean_time=26.58 ms stddev_time=0.78 ms mem_used=940 MB
Nelem=13440 mean_time=27.15 ms stddev_time=0.63 ms mem_used=940 MB
Nelem=13760 mean_time=27.72 ms stddev_time=0.85 ms mem_used=940 MB
Nelem=14080 mean_time=28.08 ms stddev_time=0.66 ms mem_used=940 MB
Nelem=14400 mean_time=28.70 ms stddev_time=0.73 ms mem_used=940 MB
Nelem=14720 mean_time=29.22 ms stddev_time=0.66 ms mem_used=940 MB
Nelem=15040 mean_time=29.73 ms stddev_time=0.80 ms mem_used=940 MB
Nelem=15360 mean_time=30.71 ms stddev_time=0.85 ms mem_used=940 MB
Nelem=15680 mean_time=31.15 ms stddev_time=0.74 ms mem_used=940 MB
Nelem=16000 mean_time=31.74 ms stddev_time=0.80 ms mem_used=940 MB
Nelem=16320 mean_time=32.27 ms stddev_time=0.77 ms mem_used=940 MB
Nelem=16640 mean_time=33.07 ms stddev_time=1.08 ms mem_used=940 MB
Nelem=16960 mean_time=33.60 ms stddev_time=0.69 ms mem_used=940 MB
Nelem=17280 mean_time=34.43 ms stddev_time=0.64 ms mem_used=940 MB
Nelem=17600 mean_time=35.34 ms stddev_time=0.75 ms mem_used=940 MB
Nelem=17920 mean_time=35.84 ms stddev_time=0.68 ms mem_used=940 MB
Nelem=18240 mean_time=36.51 ms stddev_time=0.85 ms mem_used=940 MB
Nelem=18560 mean_time=37.23 ms stddev_time=0.87 ms mem_used=940 MB
Nelem=18880 mean_time=37.72 ms stddev_time=0.78 ms mem_used=940 MB
Nelem=19200 mean_time=38.33 ms stddev_time=0.87 ms mem_used=940 MB
Nelem=19520 mean_time=38.95 ms stddev_time=0.87 ms mem_used=940 MB
Nelem=19840 mean_time=39.73 ms stddev_time=0.74 ms mem_used=940 MB
Nelem=20160 mean_time=40.27 ms stddev_time=0.81 ms mem_used=940 MB
Nelem=20480 mean_time=40.86 ms stddev_time=0.74 ms mem_used=940 MB
Nelem=20800 mean_time=41.71 ms stddev_time=0.94 ms mem_used=940 MB
Nelem=21120 mean_time=42.35 ms stddev_time=1.38 ms mem_used=1209 MB
Nelem=21440 mean_time=42.91 ms stddev_time=1.18 ms mem_used=1209 MB
Nelem=21760 mean_time=43.40 ms stddev_time=0.98 ms mem_used=1184 MB
Nelem=22080 mean_time=44.43 ms stddev_time=1.04 ms mem_used=1184 MB
Nelem=22400 mean_time=45.22 ms stddev_time=1.02 ms mem_used=1184 MB
Nelem=22720 mean_time=45.57 ms stddev_time=0.94 ms mem_used=1184 MB
Nelem=23040 mean_time=46.21 ms stddev_time=0.86 ms mem_used=1184 MB
Nelem=23360 mean_time=46.85 ms stddev_time=0.95 ms mem_used=1184 MB
Nelem=23680 mean_time=47.52 ms stddev_time=1.57 ms mem_used=1184 MB
Nelem=24000 mean_time=48.31 ms stddev_time=0.74 ms mem_used=1184 MB
Nelem=24320 mean_time=48.92 ms stddev_time=0.75 ms mem_used=1184 MB
Nelem=24640 mean_time=49.70 ms stddev_time=0.92 ms mem_used=1184 MB
Nelem=24960 mean_time=50.26 ms stddev_time=0.93 ms mem_used=1184 MB
Nelem=25280 mean_time=50.98 ms stddev_time=0.89 ms mem_used=1184 MB
"""
time_x = []
time_y = []
time_y_err = []
gpu_mem_use = []
for line in timing_output.split("\n"):
if len(line)>0:
spl = line.split()
time_x.append(int(spl[0].split("=")[1]))
time_y.append(float(spl[1].split("=")[1]))
time_y_err.append(float(spl[3].split("=")[1]))
gpu_mem_use.append(float(spl[5].split("=")[1]))
import glob
nelem = []
for fi in glob.glob("../data/TTbar_14TeV_TuneCUETP8M1_cfi/raw/*.pkl"):
d = pickle.load(open(fi, "rb"))
for elem in d:
X = elem["Xelem"][(elem["Xelem"]["typ"]!=2)&(elem["Xelem"]["typ"]!=3)]
nelem.append(X.shape[0])
plt.figure(figsize=(7, 7))
ax = plt.axes()
plt.hist(nelem, bins=np.linspace(2000,6000,100));
plt.ylabel("Number of events / bin")
plt.xlabel("PFElements per event")
mplhep.cms.label("Preliminary", data=False, loc=0, rlabel="Run 3 (14 TeV)")
sample_label(ax, physics_process="ttbar")
plt.figure(figsize=(10, 3))
ax = plt.axes()
mplhep.cms.label("Preliminary", data=False, loc=0, rlabel="Run 3 (14 TeV)")
plt.errorbar(time_x, time_y, yerr=time_y_err, marker=".")
plt.axvline(np.mean(nelem)-np.std(nelem), color="black", ls="--", lw=1.0)
plt.axvline(np.mean(nelem)+np.std(nelem), color="black", ls="--", lw=1.0)
#plt.xticks(time_x, time_x);
plt.xlim(0,30000)
plt.ylim(0,100)
plt.ylabel("runtime [ms/ev]")
plt.xlabel("PFElements per event")
#plt.legend(loc=4, frameon=False)
#cms_label(ax, y=0.93, x1=0.07, x2=0.99)
plt.text(4000, 20, "typical Run3 range", rotation=90, fontsize=10)
plt.text(6000, 50, "Inference with ONNXRuntime in a single CPU thread,\nsingle GPU stream on NVIDIA RTX2060S 8GB.\nNot a production-like setup. Synthetic inputs.\nModel throughput only, no data preparation.\nPerformance varies depending on the chosen\noptimizations and hyperparameters.", fontsize=10)
plt.savefig("runtime_scaling.pdf", bbox_inches="tight")
plt.savefig("runtime_scaling.png", bbox_inches="tight", dpi=300)
plt.figure(figsize=(10, 3))
ax = plt.axes()
mplhep.cms.label("Preliminary", data=False, loc=0, rlabel="Run 3 (14 TeV)")
plt.plot(time_x, gpu_mem_use, marker=".")
plt.axvline(np.mean(nelem)-np.std(nelem), color="black", ls="--", lw=1.0)
plt.axvline(np.mean(nelem)+np.std(nelem), color="black", ls="--", lw=1.0)
#plt.xticks(time_x, time_x);
plt.xlim(0,30000)
plt.ylim(0,3000)
plt.ylabel("GPU RSS [MB]")
plt.xlabel("PFElements per event")
#cms_label(ax, y=0.93, x1=0.07, x2=0.99)
plt.text(4000, 1000, "typical Run3 range", rotation=90, fontsize=10)
plt.text(6000, 1400, "Inference with ONNXRuntime in a single CPU thread,\nsingle GPU stream on NVIDIA RTX2060S 8GB.\nNot a production-like setup. Synthetic inputs.\nModel throughput only, no data preparation.\nPerformance varies depending on the chosen\noptimizations and hyperparameters.", fontsize=10)
plt.savefig("memory_scaling.pdf", bbox_inches="tight")
plt.savefig("memory_scaling.png", bbox_inches="tight", dpi=300)
def get_cputime(infile):
times = {}
for line in open(infile).readlines():
if "TimeModule" in line:
module = line.split()[4]
time = float(line.split()[5])
if not module in times:
times[module] = []
times[module].append(time)
for k in times.keys():
times[k] = 1000.0*np.array(times[k])
return times
cputime_baseline = get_cputime("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11834.0/times.txt")
cputime_mlpf = get_cputime("/home/joosep/reco/mlpf/CMSSW_12_1_0_pre3/11834.13/times.txt")
np.mean(cputime_baseline["PFBlockProducer"]+ cputime_baseline["PFProducer"])
np.std(cputime_baseline["PFBlockProducer"]+ cputime_baseline["PFProducer"])
np.mean(cputime_mlpf["MLPFProducer"])
np.std(cputime_mlpf["MLPFProducer"])
plt.hist(cputime_baseline["PFBlockProducer"] + cputime_baseline["PFProducer"], bins=np.linspace(0,500,100), label="baseline PF")
plt.hist(cputime_mlpf["MLPFProducer"], bins=np.linspace(0,500,100), label="MLPF")
plt.legend()
```
## 1. Inspecting transfusion.data file
<p><img src="https://assets.datacamp.com/production/project_646/img/blood_donation.png" style="float: right;" alt="A pictogram of a blood bag with blood donation written in it" width="200"></p>
<p>Blood transfusion saves lives - from replacing lost blood during major surgery or a serious injury to treating various illnesses and blood disorders. Ensuring that there's enough blood in supply whenever needed is a serious challenge for the health professionals. According to <a href="https://www.webmd.com/a-to-z-guides/blood-transfusion-what-to-know#1">WebMD</a>, "about 5 million Americans need a blood transfusion every year".</p>
<p>Our dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive. We want to predict whether or not a donor will give blood the next time the vehicle comes to campus.</p>
<p>The data is stored in <code>datasets/transfusion.data</code> and it is structured according to the RFMTC marketing model (a variation of RFM). We'll explore what that means later in this notebook. First, let's inspect the data.</p>
```
# Print out the first 5 lines from the transfusion.data file
!head -n 5 datasets/transfusion.data
```
## 2. Loading the blood donations data
<p>We now know that we are working with a typical CSV file (i.e., the delimiter is <code>,</code>, etc.). We proceed to loading the data into memory.</p>
```
# Import pandas
import pandas as pd
# Read in dataset
transfusion = pd.read_csv('datasets/transfusion.data')
# Print out the first rows of our dataset
transfusion.head()
```
## 3. Inspecting transfusion DataFrame
<p>Let's briefly return to our discussion of RFM model. RFM stands for Recency, Frequency and Monetary Value and it is commonly used in marketing for identifying your best customers. In our case, our customers are blood donors.</p>
<p>RFMTC is a variation of the RFM model. Below is a description of what each column means in our dataset:</p>
<ul>
<li>R (Recency - months since the last donation)</li>
<li>F (Frequency - total number of donations)</li>
<li>M (Monetary - total blood donated in c.c.)</li>
<li>T (Time - months since the first donation)</li>
<li>a binary variable representing whether he/she donated blood in March 2007 (1 stands for donating blood; 0 stands for not donating blood)</li>
</ul>
<p>It looks like every column in our DataFrame has the numeric type, which is exactly what we want when building a machine learning model. Let's verify our hypothesis.</p>
```
# Print a concise summary of transfusion DataFrame
transfusion.info()
```
## 4. Creating target column
<p>We are aiming to predict the value in the <code>whether he/she donated blood in March 2007</code> column. Let's rename it to <code>target</code> so that it's more convenient to work with.</p>
```
# Rename target column as 'target' for brevity
transfusion.rename(
columns={'whether he/she donated blood in March 2007':'target'},
inplace=True
)
# Print out the first 2 rows
transfusion.head(2)
```
## 5. Checking target incidence
<p>We want to predict whether or not the same donor will give blood the next time the vehicle comes to campus. The model for this is a binary classifier, meaning that there are only 2 possible outcomes:</p>
<ul>
<li><code>0</code> - the donor will not give blood</li>
<li><code>1</code> - the donor will give blood</li>
</ul>
<p>Target incidence is defined as the number of cases of each individual target value in a dataset. That is, how many 0s are in the target column compared to how many 1s? Target incidence gives us an idea of how balanced (or imbalanced) our dataset is.</p>
```
# Print target incidence proportions, rounding output to 3 decimal places
transfusion.target.value_counts(normalize=True).round(3)
```
## 6. Splitting transfusion into train and test datasets
<p>We'll now use <code>train_test_split()</code> method to split <code>transfusion</code> DataFrame.</p>
<p>Target incidence informed us that in our dataset <code>0</code>s appear 76% of the time. We want to keep the same structure in the train and test datasets, i.e., both datasets must have a target incidence of 76% for <code>0</code>. This is very easy to do using the <code>train_test_split()</code> method from the <code>scikit-learn</code> library - all we need to do is specify the <code>stratify</code> parameter. In our case, we'll stratify on the <code>target</code> column.</p>
```
# Import train_test_split method
from sklearn.model_selection import train_test_split
# Split transfusion DataFrame into
# X_train, X_test, y_train and y_test datasets,
# stratifying on the `target` column
X_train, X_test, y_train, y_test = train_test_split(
transfusion.drop(columns='target'),
transfusion.target,
test_size=0.25,
random_state=42,
stratify=transfusion.target
)
# Print out the first 2 rows of X_train
X_train.head(2)
```
## 7. Selecting model using TPOT
<p><a href="https://github.com/EpistasisLab/tpot">TPOT</a> is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.</p>
<p><img src="https://assets.datacamp.com/production/project_646/img/tpot-ml-pipeline.png" alt="TPOT Machine Learning Pipeline"></p>
<p>TPOT will automatically explore hundreds of possible pipelines to find the best one for our dataset. Note, the outcome of this search will be a <a href="https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html">scikit-learn pipeline</a>, meaning it will include any pre-processing steps as well as the model.</p>
<p>We are using TPOT to help us zero in on one model that we can then explore and optimize further.</p>
```
# Import TPOTClassifier and roc_auc_score
from tpot import TPOTClassifier
from sklearn.metrics import roc_auc_score
# Instantiate TPOTClassifier
tpot = TPOTClassifier(
generations=5,
population_size=20,
verbosity=2,
scoring='roc_auc',
random_state=42,
disable_update_check=True,
config_dict='TPOT light'
)
tpot.fit(X_train, y_train)
# AUC score for tpot model
tpot_auc_score = roc_auc_score(y_test, tpot.predict_proba(X_test)[:, 1])
print(f'\nAUC score: {tpot_auc_score:.4f}')
# Print best pipeline steps
print('\nBest pipeline steps:', end='\n')
for idx, (name, transform) in enumerate(tpot.fitted_pipeline_.steps, start=1):
# Print idx and transform
print(f'{idx}. {transform}')
```
## 8. Checking the variance
<p>TPOT picked <code>LogisticRegression</code> as the best model for our dataset with no pre-processing steps, giving us an AUC score of 0.7850. This is a great starting point. Let's see if we can make it better.</p>
<p>One of the assumptions for linear models is that the data and the features we are giving them are related in a linear fashion, or can be measured with a linear distance metric. If a feature in our dataset has a variance that's an order of magnitude or more greater than the other features, this could impact the model's ability to learn from the other features in the dataset.</p>
<p>Correcting for high variance is called normalization. It is one of the possible transformations you can apply before training a model. Let's check the variance to see if such a transformation is needed.</p>
```
# X_train's variance, rounding the output to 3 decimal places
X_train.var().round(3)
```
## 9. Log normalization
<p><code>Monetary (c.c. blood)</code>'s variance is very high in comparison to any other column in the dataset. This means that, unless accounted for, this feature may get more weight by the model (i.e., be seen as more important) than any other feature.</p>
<p>One way to correct for high variance is to use log normalization.</p>
```
# Import numpy
import numpy as np
# Copy X_train and X_test into X_train_normed and X_test_normed
X_train_normed, X_test_normed = X_train.copy(), X_test.copy()
# Specify which column to normalize
col_to_normalize = 'Monetary (c.c. blood)'
# Log normalization
for df_ in [X_train_normed, X_test_normed]:
# Add log normalized column
df_['monetary_log'] = np.log(df_[col_to_normalize])
# Drop the original column
df_.drop(columns=col_to_normalize, inplace=True)
# Check the variance for X_train_normed
X_train_normed.var().round(3)
```
## 10. Training the logistic regression model
<p>The variance looks much better now. Notice that <code>Time (months)</code> now has the largest variance, but it's not <a href="https://en.wikipedia.org/wiki/Order_of_magnitude">orders of magnitude</a> higher than the rest of the variables, so we'll leave it as is.</p>
<p>We are now ready to train the logistic regression model.</p>
```
# Importing modules
from sklearn.linear_model import LogisticRegression
# Instantiate LogisticRegression
logreg = LogisticRegression(
    solver='liblinear',
    random_state=42
)
# Train the model
logreg.fit(X_train_normed, y_train)
# AUC score for the logreg model
logreg_auc_score = roc_auc_score(y_test, logreg.predict_proba(X_test_normed)[:, 1])
print(f'\nAUC score: {logreg_auc_score:.4f}')
```
## 11. Conclusion
<p>The demand for blood fluctuates throughout the year. As one <a href="https://www.kjrh.com/news/local-news/red-cross-in-blood-donation-crisis">prominent</a> example, blood donations slow down during busy holiday seasons. An accurate forecast of the future supply of blood allows appropriate action to be taken ahead of time, thereby saving more lives.</p>
<p>In this notebook, we explored automatic model selection using TPOT; the AUC score we got was 0.7850. This is better than simply choosing <code>0</code> all the time (the target incidence suggests that such a model would have a 76% success rate). We then log normalized our training data and improved the AUC score by 0.5%. In the field of machine learning, even small improvements in accuracy can be important, depending on the purpose.</p>
<p>Another benefit of using logistic regression model is that it is interpretable. We can analyze how much of the variance in the response variable (<code>target</code>) can be explained by other variables in our dataset.</p>
```
# Importing itemgetter
from operator import itemgetter
# Sort models based on their AUC score from highest to lowest
sorted(
[('tpot', tpot_auc_score), ('logreg', logreg_auc_score)],
key=itemgetter(1),
reverse=True
)
```
# Implementing and traversing a linked list
In this notebook we'll get some practice implementing a basic linked list—something like this:
<img style="float: left;" src="assets/linked_list_head_none.png">
## Key characteristics
First, let's review the overall abstract concepts for this data structure.
## Exercise 1 - Implementing a simple linked list
Now that we've talked about the abstract characteristics that we want our linked list to have, let's look at how we might implement one in Python.
#### Step 1. Give it a try for yourself:
* Create a `Node` class with `value` and `next` attributes
* Use the class to create the `head` node with the value `2`
* Create and link a second node containing the value `1`
* Try printing the values (`1` and `2`) on the nodes you created (to make sure that you can access them!)
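If you get stuck, here is one minimal sketch of what Step 1 could look like (an illustrative implementation, not necessarily the course's official solution):

```python
class Node:
    """A single linked-list node holding a value and a reference to the next node."""
    def __init__(self, value):
        self.value = value
        self.next = None

# Create the head node, then link a second node after it
head = Node(2)
head.next = Node(1)

print(head.value)       # 2
print(head.next.value)  # 1
```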
At this point, our linked list looks like this:
<img style="float: left;" src="assets/linked_list_two_nodes.png">
Our goal is to extend the list until it looks like this:
<img style="float: left;" src="assets/linked_list_head_none.png">
To do this, we need to create three more nodes, and we need to attach each one to the `next` attribute of the node that comes before it. Notice that we don't have a direct reference to any of the nodes other than the `head` node!
See if you can write the code to finish creating the above list:
#### Step 2. Add three more nodes to the list, with the values `4`, `3`, and `5`
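One way Step 2 could look, sketched as a self-contained cell that re-creates the Step 1 nodes (illustrative, not necessarily the official solution):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

# Nodes from Step 1
head = Node(2)
head.next = Node(1)

# Attach three more nodes via the `next` attribute of the preceding node
head.next.next = Node(4)
head.next.next.next = Node(3)
head.next.next.next.next = Node(5)
```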
Let's print the values of all the nodes to check if it worked. If you successfully created (and linked) all the nodes, the following should print out `2`, `1`, `4`, `3`, `5`:
```
print(head.value)
print(head.next.value)
print(head.next.next.value)
print(head.next.next.next.value)
print(head.next.next.next.next.value)
```
## Exercise 2 - Traversing the list
We successfully created a simple linked list. But printing all the values like we did above was pretty tedious. What if we had a list with 1,000 nodes?
Let's see how we might traverse the list and print all the values, no matter how long it might be.
Give it a try for yourself.
#### Step 3. Write a function that loops through the nodes of the list and prints all of the values
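A possible traversal function, again sketched as a self-contained cell (illustrative; your own solution may differ). The key idea is to follow `next` references until reaching `None`:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def print_linked_list(head):
    """Walk the list from head to tail, printing each node's value."""
    current_node = head
    while current_node is not None:
        print(current_node.value)
        current_node = current_node.next

# Build 2 -> 1 -> 4 -> 3 -> 5 and traverse it
head = Node(2)
head.next = Node(1)
head.next.next = Node(4)
head.next.next.next = Node(3)
head.next.next.next.next = Node(5)
print_linked_list(head)
```

This works for a list of any length, since the loop stops only when it runs off the end of the list.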
## Creating a linked list using iteration
Previously, we created a linked list using a very manual and tedious method. We called `next` multiple times on our `head` node.
Now that we know about iterating over or traversing the linked list, is there a way we can use that to create a linked list?
We've provided our solution below—but it might be a good exercise to see what you can come up with first. Here's the goal:
#### Step 4. See if you can write the code for the `create_linked_list` function below
* The function should take a Python list of values as input and return the `head` of a linked list that has those values
* There's some test code, and also a solution, below—give it a try for yourself first, but don't hesitate to look over the solution if you get stuck
```
def create_linked_list(input_list):
"""
Function to create a linked list
@param input_list: a list of integers
@return: head node of the linked list
"""
head = None
return head
```
Test your function by running this cell:
```
### Test Code
def test_function(input_list, head):
try:
if len(input_list) == 0:
if head is not None:
print("Fail")
return
for value in input_list:
if head.value != value:
print("Fail")
return
else:
head = head.next
print("Pass")
except Exception as e:
print("Fail: " + str(e))
input_list = [1, 2, 3, 4, 5, 6]
head = create_linked_list(input_list)
test_function(input_list, head)
input_list = [1]
head = create_linked_list(input_list)
test_function(input_list, head)
input_list = []
head = create_linked_list(input_list)
test_function(input_list, head)
```
Below is one possible solution. Walk through the code and make sure you understand what each part does. Compare it to your own solution—did your code work similarly or did you take a different approach?
<span class="graffiti-highlight graffiti-id_y5x3zt8-id_2txpe86"><i></i><button>Hide Solution</button></span>
```
def create_linked_list(input_list):
head = None
for value in input_list:
if head is None:
head = Node(value)
else:
# Move to the tail (the last node)
current_node = head
while current_node.next:
current_node = current_node.next
current_node.next = Node(value)
return head
```
### A more efficient solution
The above solution works, but it has some shortcomings. In this next walkthrough, we'll demonstrate a different approach and see how its efficiency compares to the solution above.
<span class="graffiti-highlight graffiti-id_ef8aj0i-id_uy9snfg"><i></i><button>Walkthrough</button></span>
#### Step 5. Once you've seen the walkthrough, see if you can implement the more efficient version for yourself
```
def create_linked_list_better(input_list):
head = None
# TODO: Implement the more efficient version that keeps track of the tail
return head
### Test Code
def test_function(input_list, head):
try:
if len(input_list) == 0:
if head is not None:
print("Fail")
return
for value in input_list:
if head.value != value:
print("Fail")
return
else:
head = head.next
print("Pass")
except Exception as e:
print("Fail: " + str(e))
input_list = [1, 2, 3, 4, 5, 6]
head = create_linked_list_better(input_list)
test_function(input_list, head)
input_list = [1]
head = create_linked_list_better(input_list)
test_function(input_list, head)
input_list = []
head = create_linked_list_better(input_list)
test_function(input_list, head)
```
<span class="graffiti-highlight graffiti-id_lkzu4kb-id_vxxcdfn"><i></i><button>Show Solution</button></span>
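As a reference while you work, here is one possible sketch of the more efficient approach: keep a reference to the tail node so each append is $O(1)$ instead of walking the whole list. A minimal `Node` class is included so the snippet is self-contained.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

def create_linked_list_better(input_list):
    head = None
    tail = None
    for value in input_list:
        if head is None:
            head = Node(value)
            tail = head          # a single node is both head and tail
        else:
            tail.next = Node(value)  # append in O(1) at the tail
            tail = tail.next
    return head
```

Because we never re-traverse the list, building a list of $n$ values is $O(n)$ overall, instead of the $O(n^2)$ of the tail-walking version above.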
```
import open3d as o3d
import numpy as np
import os
import sys
# monkey patches visualization and provides helpers to load geometries
sys.path.append('..')
import open3d_tutorial as o3dtut
# change to True if you want to interact with the visualization windows
o3dtut.interactive = "CI" not in os.environ
```
# Point cloud outlier removal
When collecting data from scanning devices, the resulting point cloud tends to contain noise and artifacts that one would like to remove. This tutorial addresses the outlier removal features of Open3D.
## Prepare input data
A point cloud is loaded and downsampled using `voxel_downsample`.
```
print("Load a ply point cloud, print it, and render it")
pcd = o3d.io.read_point_cloud("../../TestData/ICP/cloud_bin_2.pcd")
o3d.visualization.draw_geometries([pcd], zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
print("Downsample the point cloud with a voxel of 0.02")
voxel_down_pcd = pcd.voxel_down_sample(voxel_size=0.02)
o3d.visualization.draw_geometries([voxel_down_pcd], zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
```
Alternatively, use `uniform_down_sample` to downsample the point cloud by keeping every n-th point.
```
print("Every 5th point is selected")
uni_down_pcd = pcd.uniform_down_sample(every_k_points=5)
o3d.visualization.draw_geometries([uni_down_pcd], zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
```
## Select down sample
The following helper function uses `select_by_index`, which takes a binary mask to output only the selected points. The selected points and the non-selected points are visualized.
```
def display_inlier_outlier(cloud, ind):
inlier_cloud = cloud.select_by_index(ind)
outlier_cloud = cloud.select_by_index(ind, invert=True)
print("Showing outliers (red) and inliers (gray): ")
outlier_cloud.paint_uniform_color([1, 0, 0])
inlier_cloud.paint_uniform_color([0.8, 0.8, 0.8])
o3d.visualization.draw_geometries([inlier_cloud, outlier_cloud],
zoom=0.3412,
front=[0.4257, -0.2125, -0.8795],
lookat=[2.6172, 2.0475, 1.532],
up=[-0.0694, -0.9768, 0.2024])
```
## Statistical outlier removal
`statistical_outlier_removal` removes points that are further away from their neighbors compared to the average for the point cloud. It takes two input parameters:
- `nb_neighbors`, which specifies how many neighbors are taken into account in order to calculate the average distance for a given point.
- `std_ratio`, which allows setting the threshold level based on the standard deviation of the average distances across the point cloud. The lower this number the more aggressive the filter will be.
```
print("Statistical outlier removal")
cl, ind = voxel_down_pcd.remove_statistical_outlier(nb_neighbors=20,
std_ratio=2.0)
display_inlier_outlier(voxel_down_pcd, ind)
```
## Radius outlier removal
`radius_outlier_removal` removes points that have few neighbors in a given sphere around them. Two parameters can be used to tune the filter to your data:
- `nb_points`, which lets you pick the minimum number of points that the sphere should contain.
- `radius`, which defines the radius of the sphere that will be used for counting the neighbors.
```
print("Radius outlier removal")
cl, ind = voxel_down_pcd.remove_radius_outlier(nb_points=16, radius=0.05)
display_inlier_outlier(voxel_down_pcd, ind)
```
# Quantum Spins
We are going to consider a magnetic insulator,
a Mott insulator. In this problem we have one valence electron per
atom, with very localized wave-functions. The Coulomb repulsion between
electrons on the same orbital is so strong, that electrons are bound to
their host atom, and cannot move. For this reason, charge disappears
from the equation, and the only remaining degree of freedom is the spin
of the electrons. The corresponding local state can therefore be either
$|\uparrow\rangle$ or $|\downarrow\rangle$. The only interaction taking
place is a process that “flips” two anti-aligned neighboring spins
$|\uparrow\rangle|\downarrow\rangle \rightarrow |\downarrow\rangle|\uparrow\rangle$.
Let us now consider a collection of spins residing on the sites of a
one-dimensional (for simplicity) lattice. An arbitrary state of
$N$-spins can be described by using the $S^z$ projection
($\uparrow,\downarrow$) of each spin as: $|s_1,s_2,..., s_N\rangle$. As
we can easily see, there are $2^N$ of such configurations.
We shall describe the interactions between neighboring spins using the
so-called Heisenberg Hamiltonian:
$$\hat{H}=\sum_{i=1}^{N-1} \hat{\mathbf{S}}_i \cdot \hat{\mathbf{S}}_{i+1}
$$
where $\hat{\mathbf{S}}_i = (\hat{S}^x,\hat{S}^y,\hat{S}^z)$ is the spin
operator acting on the spin on site $i$.
Since we are concerned about spins one-half, $S=1/2$, all these
operators have a $2\times2$ matrix representation, related to the
well-known Pauli matrices:
$$S^z = \left(
\begin{array}{cc}
1/2 & 0 \\
0 & -1/2
\end{array}
\right),
S^x = \left(
\begin{array}{cc}
0 & 1/2 \\
1/2 & 0
\end{array}
\right),
S^y = \left(
\begin{array}{cc}
0 & -i/2 \\
i/2 & 0
\end{array}
\right),
$$
These matrices act on two-dimensional vectors defined by the basis
states $|\uparrow\rangle$ and $|\downarrow\rangle$. It is useful to
introduce the identities:
$$\hat{S}^\pm = \left(\hat{S}^x \pm i \hat{S}^y\right),$$ where $S^+$
and $S^-$ are the spin raising and lowering operators. It is intuitively
easy to see why by looking at how they act on the basis states:
$\hat{S}^+|\downarrow\rangle = |\uparrow\rangle$ and
$\hat{S}^-|\uparrow\rangle = |\downarrow\rangle$. Their corresponding
$2\times2$ matrix representations are: $$S^+ = \left(
\begin{array}{cc}
0 & 1 \\
0 & 0
\end{array}
\right),
S^- = \left(
\begin{array}{cc}
0 & 0 \\
1 & 0
\end{array}
\right),
$$
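As a quick numerical check (a NumPy sketch, not part of the original derivation), we can verify that $\hat{S}^\pm = \hat{S}^x \pm i\hat{S}^y$ reproduce these matrices and act on the basis states as claimed:

```python
import numpy as np

# Spin-1/2 matrices in the {|up>, |down>} basis, as defined above
sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])

splus = sx + 1j * sy    # raising operator S+
sminus = sx - 1j * sy   # lowering operator S-

up = np.array([1.0, 0.0])    # |up>
down = np.array([0.0, 1.0])  # |down>

# S+|down> = |up> and S-|up> = |down>
assert np.allclose(splus @ down, up)
assert np.allclose(sminus @ up, down)
```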
We can now re-write the Heisenberg Hamiltonian as:
$$\hat{H}=\sum_{i=1}^{N-1}
\hat{S}^z_i \hat{S}^z_{i+1} +
\frac{1}{2}\left[
\hat{S}^+_i \hat{S}^-_{i+1} +
\hat{S}^-_i \hat{S}^+_{i+1}
\right]
$$ The first term in this expression is diagonal and
does not flip spins. This is the so-called Ising term. The second term
is off-diagonal, and involves lowering and raising operators on
neighboring spins, and is responsible for flipping anti-aligned spins.
This is the “$XY$” part of the Hamiltonian.
The Heisenberg spin chain is a paradigmatic model in condensed matter.
Not only is it attractive due to its relative simplicity, but it can also
describe real materials that can be studied experimentally. The
Heisenberg chain is also a prototypical integrable system, that can be
solved exactly by the Bethe Ansatz, and can be studied using
bosonization techniques and conformal field theory.
In these lectures, we will be interested in obtaining the ground-state
properties of this model by numerically solving the time-independent
Schrödinger equation: $$\hat{H}|\Psi\rangle = E|\Psi\rangle,$$ where $H$
is the Hamiltonian of the problem, $|\Psi\rangle$ its eigenstates, with
the corresponding eigenvalues, or energies $E$.
Exact diagonalization
=====================

#### Pictorial representation of the Hamiltonian building recursion explained in the text. At each step, the block size is increased by adding one spin at a time.
In this section we introduce a technique that will allow us to calculate
the ground state, and even excited states of small Heisenberg chains.
Exact Diagonalization (ED) is a conceptually simple technique which
basically consists of diagonalizing the Hamiltonian matrix by brute
force. Same as for the spin operators, the Hamiltonian also has a
corresponding matrix representation. In principle, if we are able to
compute all the matrix elements, we can use a linear algebra package to
diagonalize it and obtain all the eigenvalues and eigenvectors @lapack.
In these lectures we are going to follow a quite unconventional
procedure to describe how this technique works. It is important to point out that this
is a quite inefficient and impractical way to diagonalize the
Hamiltonian, and more sophisticated techniques are generally used in
practice.
Two-spin problem
----------------
The Hilbert space for the two-spin problem consists of four possible
configurations of two spins
$$\left\{ |\uparrow\uparrow\rangle,|\uparrow\downarrow\rangle,|\downarrow\uparrow\rangle,|\downarrow\downarrow\rangle \right\}$$
The problem is described by the Hamiltonian: $$\hat{H}=
\hat{S}^z_1 \hat{S}^z_2 +
\frac{1}{2}\left[
\hat{S}^+_1 \hat{S}^-_2 +
\hat{S}^-_1 \hat{S}^+_2
\right]$$
The corresponding matrix will have dimensions $4 \times 4$. In order to
compute this matrix we shall use some simple matrix algebra to first
obtain the single-site operators in the expanded Hilbert space. This is
done by the following simple scheme: an operator $O_1$ acting
on the left spin, will have the following $4 \times 4$ matrix form:
$$\tilde{O}_1 = O_1 \otimes {1}_2
$$ Similarly, for an operator $O_2$ acting on the right
spin: $$\tilde{O}_2 = {1}_2 \otimes O_2
$$ where we introduced the $n \times n$ identity matrix
${1}_n$. The product of two operators acting on different sites
can be obtained as: $$\tilde{O}_{12} = O_1 \otimes O_2$$ It is easy to
see that the Hamiltonian matrix will be given by: $$H_{12}=
S^z \otimes S^z +
\frac{1}{2}\left[
S^+ \otimes S^- +
S^- \otimes S^+
\right]$$ where we used the single spin ($2 \times 2$) matrices $S^z$
and $S^\pm$. We leave as an exercise for the reader to show that the
final form of the matrix is:
$$H_{12} = \left(
\begin{array}{cccc}
1/4 & 0 & 0 & 0 \\
0 & -1/4 & 1/2 & 0 \\
0 & 1/2 & -1/4 & 0 \\
0 & 0 & 0 & 1/4 \\
\end{array}
\right),
$$
Obtaining the eigenvalues and eigenvectors is also a straightforward
exercise: two of the eigenvalues can be read directly off the diagonal,
and the entire problem now reduces to diagonalizing a two-by-two matrix.
We therefore obtain the
well known result: The ground state
$|s\rangle = 1/\sqrt{2}\left[ |\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle \right]$,
has energy $E_s=-3/4$, and the other three eigenstates
$\left\{|\uparrow\uparrow\rangle,|\downarrow\downarrow\rangle,1/\sqrt{2}\left[ |\uparrow\downarrow\rangle + |\downarrow\uparrow\rangle \right] \right\}$
form a multiplet with energy $E_t=1/4$.
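This two-spin result is easy to reproduce numerically. The following NumPy sketch builds $H_{12}$ with Kronecker products, exactly as in the equations above, and recovers the singlet and triplet energies:

```python
import numpy as np

sz = np.array([[0.5, 0.0], [0.0, -0.5]])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])  # S+
sm = sp.T                                # S-

# H12 = Sz (x) Sz + 1/2 [ S+ (x) S- + S- (x) S+ ]
H12 = np.kron(sz, sz) + 0.5 * (np.kron(sp, sm) + np.kron(sm, sp))

w, v = np.linalg.eigh(H12)
print(w)  # singlet at E_s = -3/4, three-fold degenerate triplet at E_t = 1/4
```

Note that `np.linalg.eigh` returns the eigenvalues in ascending order, so the ground-state energy is `w[0]` and the ground-state vector is `v[:, 0]`.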
Many spins
----------
Imagine now that we add a third spin to the right of our two spins. We
can use the previous result to obtain the new $8 \times 8$ Hamiltonian
matrix as: $$H_{3}=
H_{2} \otimes {1}_2 +
\tilde{S}^z_2 \otimes S^z +
\frac{1}{2}\left[
\tilde{S}^+_2 \otimes S^- +
\tilde{S}^-_2 \otimes S^+
\right]$$ Here we used the single-spin matrices $S^z$, $S^\pm$, and the
‘tilde’ matrices defined as:
$$\tilde{S}^z_2 = {1}_2 \otimes S^z,$$ and
$$\tilde{S}^\pm_2 = {1}_2 \otimes S^\pm,$$
It is easy to see that this leads to a recursion scheme to construct the
$2^i \times 2^i$ Hamiltonian matrix at the $i^\mathrm{th}$ step as:
$$H_{i}=
H_{i-1} \otimes {1}_2 +
\tilde{S}^z_{i-1} \otimes S^z +
\frac{1}{2}\left[
\tilde{S}^+_{i-1} \otimes S^- +
\tilde{S}^-_{i-1} \otimes S^+
\right]$$
with $$\tilde{S}^z_{i-1} = {1}_{2^{i-2}} \otimes S^z,$$ and
$$\tilde{S}^\pm_{i-1} = {1}_{2^{i-2}} \otimes S^\pm,$$
This recursion algorithm can be visualized as a left ‘block’, to which
we add new ‘sites’ or spins to the right, one at a time, as shown in
the figure above. The block has a ‘block Hamiltonian’, $H_L$, that is
iteratively built by connecting to the new spins through the
corresponding interaction terms.
```
%matplotlib inline
import numpy as np
from matplotlib import pyplot
# PARAMETERS
nsites = 8
#Single site operators
sz0 = np.zeros(shape=(2,2)) # single site Sz
splus0 = np.zeros(shape=(2,2)) # single site S+
sz0[0,0] = -0.5
sz0[1,1] = 0.5
splus0[1,0] = 1.0
term_szsz = np.zeros(shape=(4,4)) #auxiliary matrix to store Sz.Sz
term_szsz = np.kron(sz0,sz0)
term_spsm = np.zeros(shape=(4,4)) #auxiliary matrix to store 1/2 S+.S-
term_spsm = np.kron(splus0,np.transpose(splus0))*0.5
term_spsm += np.transpose(term_spsm)
h12 = term_szsz+term_spsm
H = np.zeros(shape=(2,2))
for i in range(1,nsites):
diml = 2**(i-1) # 2^(i-1)
dim = diml*2
print ("ADDING SITE ",i," DIML= ",diml)
Ileft = np.eye(diml)
Iright = np.eye(2)
# We add the term for the interaction S_i.S_{i+1}
aux = np.zeros(shape=(dim,dim))
aux = np.kron(H,Iright)
H = aux
H = H + np.kron(Ileft,h12)
w, v = np.linalg.eigh(H) #Diagonalize the matrix
print(w)
from matplotlib import pyplot
pyplot.rcParams['axes.linewidth'] = 2 #set the value globally
%matplotlib inline
Beta = np.arange(0.1,10,0.1)
et = np.copy(Beta)
n = 0
for x in Beta:
p = np.exp(-w*x)
z = np.sum(p)
et[n] = np.dot(w,p)/z
print (x,et[n])
n+=1
pyplot.plot(1/Beta,et,lw=2);
```
#### Challenge 12.1:
Compute the energy and specific heat of the spin chain with $L=12$ as a function of temperature, for $T < 4$.
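One possible route for this challenge (a sketch, not the only approach): given the full spectrum, the specific heat follows from the energy fluctuations, $C=\beta^{2}\left(\langle E^{2}\rangle-\langle E\rangle^{2}\right)$, which avoids taking a numerical derivative of $E(T)$. The helper below is a hypothetical utility, checked here on a two-level system rather than the full chain:

```python
import numpy as np

def thermodynamics(w, beta):
    """Mean energy and specific heat at inverse temperature beta,
    from the full spectrum w, via the fluctuation formula."""
    p = np.exp(-beta * (w - w.min()))  # shift by the ground state for stability
    z = p.sum()
    e_mean = np.dot(w, p) / z
    e2_mean = np.dot(w**2, p) / z
    c = beta**2 * (e2_mean - e_mean**2)
    return e_mean, c

# Sanity check on two levels at +-1/2 (a single spin in a field):
w = np.array([-0.5, 0.5])
e, c = thermodynamics(w, beta=1.0)
```

For the challenge itself, one would pass the eigenvalues of the $L=12$ chain to `thermodynamics` over a grid of $\beta = 1/T$ values with $T < 4$.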
# A practical exact diagonalization algorithm
1. Initialization: Topology of the lattice, neighbors, and signs.
2. Construction of a basis suitable for the problem.
3. Construction of the matrix elements of the Hamiltonian.
4. Diagonalization of the matrix.
5. Calculation of observables or expectation values.
As we shall see below, we are going to need the concept of a “binary
word”. A binary word is the binary representation of an integer (in
powers of two, i.e. $n=\sum_{i}b_{i}2^{i}$), and consists of a
sequence of ‘bits’. A bit $b_{i}$ in the binary basis corresponds to a
digit in the decimal system, and can assume the values ‘zero’ or
‘one’. At this stage we consider it appropriate to introduce the logical
operators `AND, OR, XOR`. These binary operators act between bits and
their multiplication tables are listed below
**AND** | 0 | 1
:---:|:---:|:---:
**0** | 0 | 0
**1** | 0 | 1
**OR** | 0 | 1
:---:|:---:|:---:
**0** | 0 | 1
**1** | 1 | 1
**XOR** | 0 | 1
:---:|:---:|:---:
**0** | 0 | 1
**1** | 1 | 0
The bit manipulation is a very useful tool, and most of the programming
languages provide the needed commands.
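In Python, for instance, these operators act bitwise on whole integers at once, which is exactly what we need for spin configurations:

```python
# Bitwise operators in Python act on every bit of an integer in parallel.
a = 0b0101  # 5
b = 0b0011  # 3
assert (a & b) == 0b0001  # AND
assert (a | b) == 0b0111  # OR
assert (a ^ b) == 0b0110  # XOR

# Extracting bit i of n (the IBITS helper used in the code below):
assert ((a >> 2) & 1) == 1  # bit 2 of 0b0101 is set
```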
## Initialization and definitions
In the program, it is convenient to store in arrays all those quantities
that will be used more frequently. In particular, we must determine the
topology of the cluster, labeling the sites, and storing the components
of the lattice vectors. We must also generate arrays with the nearest
neighbors, and the next-nearest neighbors, according to our needs.
## Construction of the basis
Memory limitations impose severe restrictions on the size of the
clusters that can be studied with this method. To understand this point,
note that although the lowest energy state can be written in the
$\{|\phi_{n}\rangle\}$ basis as
$|\psi _{0}\rangle =\sum_{m}c_{m}|\phi _{m}\rangle $, this expression is
of no practical use unless $|\phi _{m}\rangle $ itself is expressed in a
convenient basis to which the Hamiltonian can be easily applied. A
natural orthonormal basis for fermion systems is the occupation number
representation, describing all the possible distributions of $N_{e}$
electrons over $N$ sites, while for spin systems it is convenient to work
in a basis where the $S_{z}$ is defined at every site, schematically
represented as $|n\rangle =|\uparrow \downarrow \uparrow ...\rangle $.
The size of this type of basis set
grows exponentially with the system size. In practice this problem can be
considerably alleviated by the use of symmetries of the Hamiltonian that
reduces the matrix to a block-form. The most obvious symmetry is the
number of particles in the problem which is usually conserved at least
for fermionic problems. The total projection of the spin
$S_{total}^{z}$, is also a good quantum number. For translationally
invariant problems, the total momentum $\mathbf{k}$ of the system is
also conserved, introducing a reduction of $1/N$ in
the number of states (this does not hold for models with open boundary
conditions or explicit disorder). In addition, several Hamiltonians have
additional symmetries. On a square lattice, rotations in $\pi /2$ about
a given site, spin inversion, and reflections with respect to the
lattice axis are good quantum numbers (although care must be taken in
their implementation since some of these operations are combinations of
others and thus not independent).
In the following we shall consider a spin-1/2 Heisenberg chain as a
practical example. In this model it is useful to represent the spins
pointing in the ‘up’ direction by a digit ‘1’, and the down-spins by a
‘0’. Following this rule, a state in the $S^{z}$ basis can be
represented by a sequence of ones and zeroes, i.e., a “binary word”.
Thus, two Néel configurations in the 4-site chain can be seen as
$$\begin{aligned}
\mid \uparrow \downarrow \uparrow \downarrow \rangle \equiv |1010\rangle , \\
\mid \downarrow \uparrow \downarrow \uparrow \rangle \equiv |0101\rangle .\end{aligned}$$
Once the up-spins have been placed the whole configuration has been
uniquely determined since the remaining sites can only be occupied by
down-spins. The resulting binary number can be easily converted into
integers $i\equiv
\sum_{l}b_{l}.2^{l}$, where the summation is over all the sites of the
lattice, and $b_{l}$ can be $1$ or $0$. For example:
$$\begin{array}{lllll}
2^{3} & 2^{2} & 2^{1} & 2^{0} & \\
1 & 0 & 1 & 0 & \rightarrow 2^{1}+2^{3}=10, \\
0 & 1 & 0 & 1 & \rightarrow 2^{0}+2^{2}=5.
\end{array}$$
Using the above convention, we can construct the whole basis for the
given problem. However, we must consider the memory limitations of our
computer, introducing some symmetries to make the problem more tractable.
The symmetries are operations that commute with the Hamiltonian, allowing
us to divide the basis in subspaces with well defined quantum numbers.
By means of similarity transformations we can generate a sequence of
smaller matrices along the diagonal (i.e. the Hamiltonian matrix is
“block diagonal”), and each of them can be diagonalized independently.
For fermionic systems, the simplest symmetries are associated to the
conservation of the number of particles and the projection
$S_{total}^{z}$ of the total spin in the $z$ direction. In the spin-1/2
Heisenberg model, a matrix with $2^{N}\times 2^{N}$ elements can be
reduced to $2S+1$ smaller matrices, corresponding to the projections
$m=(N_{\uparrow }-N_{\downarrow })/2=-S,-S+1,...,S-1,S$, with $S=N/2$.
The dimension of each of these subspaces is obtained from the
combinatory number $\left(
\begin{array}{c}
N \\
N_{\uparrow }
\end{array}
\right) = \frac{N!}{N_{\uparrow }!N_{\downarrow }!}$.
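These dimensions are quick to check with Python's `math.comb`:

```python
from math import comb

# Dimension of the Sz-subspace with N_up up-spins on N sites
N, N_up = 4, 2
assert comb(N, N_up) == 6  # the m = 0 subspace of the 4-site chain

# All subspaces together recover the full Hilbert space dimension 2^N:
assert sum(comb(N, k) for k in range(N + 1)) == 2**N
```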
#### Example: 4-site Heisenberg chain
The total basis has
$2^{4}=16$ states that can be grouped in
$4+1=5$ subspaces with well defined values of the quantum
number $m=S^z$. The dimensions of these subspaces are:
m | dimension
:--:|:--:
-2 | 1
-1 | 4
0 | 6
1 | 4
2 | 1
Since we know that the ground state of the Heisenberg
chain is a singlet, we are only interested in the subspace with
$S_{total}^{z}=m=0$. For our example, this subspace is
given by (in increasing order):
$$\begin{eqnarray}
\mid \downarrow \downarrow \uparrow \uparrow \rangle & \equiv &|0011\rangle
\equiv |3\rangle , \\
\mid \downarrow \uparrow \downarrow \uparrow \rangle &\equiv &|0101\rangle
\equiv |5\rangle , \\
\mid \downarrow \uparrow \uparrow \downarrow \rangle &\equiv &|0110\rangle
\equiv |6\rangle , \\
\mid \uparrow \downarrow \downarrow \uparrow \rangle &\equiv &|1001\rangle
\equiv |9\rangle , \\
\mid \uparrow \downarrow \uparrow \downarrow \rangle &\equiv &|1010\rangle
\equiv |10\rangle , \\
\mid \uparrow \uparrow \downarrow \downarrow \rangle &\equiv &|1100\rangle
\equiv |12\rangle .
\end{eqnarray}
$$
Generating the possible configurations of $N$ up-spins (or
spinless fermions) in $L$ sites is equivalent to the problem of
generating the corresponding combinations of zeroes and ones in
lexicographic order.
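One simple way to generate such a basis is sketched below using `itertools.combinations` (production codes usually use explicit bit tricks instead; `build_basis` is an illustrative helper, not part of the original notes):

```python
from itertools import combinations

def build_basis(L, n_up):
    """All L-bit integers with exactly n_up set bits, in increasing order."""
    basis = []
    for positions in combinations(range(L), n_up):
        basis.append(sum(1 << p for p in positions))
    return sorted(basis)

print(build_basis(4, 2))  # [3, 5, 6, 9, 10, 12], the m = 0 basis above
```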
## Construction of the Hamiltonian matrix elements
We now have to address the problem of generating all the Hamiltonian
matrix elements. These are obtained by the application of the
Hamiltonian on each state of the basis $|\phi \rangle $, generating all
the values
$H_{\phi ,\phi ^{\prime }}=\langle \phi ^{\prime }|{\hat{H}}|\phi \rangle $.
We illustrate this procedure in the following
pseudocode:
    **for** each term ${\hat{H}_i}$ of the Hamiltonian ${\hat{H}}$
        **for** all the states $|\phi\rangle $ in the basis
            $\hat{H}_i|\phi\rangle = \langle \phi^{\prime }|{\hat{H}_i}|\phi\rangle |\phi^{\prime }\rangle $
            
$H_{\phi,\phi^{\prime }}=H_{\phi,\phi^{\prime }} + \langle \phi^{\prime }|{\hat{H}_i}|\phi\rangle $
        **end for**
    **end for**
As the states are represented by binary words, the spin operators (as
well as the creation and destruction operators) can be easily
implemented using the logical operators `AND, OR, XOR`.
As a practical example let us consider the spin-1/2 Heisenberg
Hamiltonian on the 4-sites chain:
$${\hat{H}}=J\sum_{i=0}^{3}\mathbf{S}_{i}\cdot\mathbf{S}_{i+1}
=J\sum_{i=0}^{3}S_{i}^{z}S_{i+1}^{z}
+\frac{J}{2}\sum_{i=0}^{3}(S_{i}^{+}S_{i+1}^{-}+S_{i}^{-}S_{i+1}^{+}), $$
with $\mathbf{S}_{4}=\mathbf{S}_{0}$, due to the periodic boundary
conditions. To evaluate the matrix elements in this basis we
must apply each term of ${{\hat{H}}}$ on such states. The first term is
called the Ising term, and is diagonal in this basis (also called the Ising
basis). The last terms, or fluctuation terms, give strictly off-diagonal
contributions to the matrix in this representation (when symmetries are
considered, they can also have diagonal contributions). These
fluctuations cause an exchange in the spin orientation between neighbors
with opposite spins, e.g. $\uparrow \downarrow \rightarrow $
$\downarrow \uparrow $. The way to implement spin-flip operations of
this kind on the computer is defining new ‘masks’. A mask for this
operation is a binary number with ‘zeroes’ everywhere, except in the
positions of the spins to be flipped, e.g. 00..0110...00. Then, the
logical operator `XOR` is used between the initial state and the mask to
invert the bits (spins) at the positions indicated by the mask. For
example, (0101)$\mathtt{.}$`XOR.`(1100)=(1001), i.e., 5`.XOR.`12=9. It
is useful to store all the masks for a given geometry in memory,
generating them immediately after the tables for the sites and neighbors.
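The mask trick from the paragraph above, including the example 5`.XOR.`12=9, looks like this in Python:

```python
# A mask with ones at the two sites to flip; XOR inverts exactly those bits.
state = 0b0101               # |down up down up> reading sites 3..0
mask = (1 << 2) | (1 << 3)   # flip sites 2 and 3: mask = 0b1100 = 12
new_state = state ^ mask
assert new_state == 0b1001   # 5 ^ 12 == 9, as in the text

# A spin flip is only allowed when the two spins are anti-aligned:
anti_aligned = ((state >> 2) & 1) != ((state >> 3) & 1)
assert anti_aligned
```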
In the Heisenberg model the spin flips can be implemented using
masks, with ‘zeroes’ everywhere, except in the positions of the spins to
be flipped. To illustrate this let us show the effect of the
off-diagonal terms on one of the Néel configurations:
$$\begin{eqnarray}
S_{0}^{+}S_{1}^{-}+S_{0}^{-}S_{1}^{+}\,|0101\rangle &\equiv &(0011)\mathtt{.XOR.}(0101)=3\mathtt{.XOR.}5=(0110)=6\equiv |0110\rangle , \\
S_{1}^{+}S_{2}^{-}+S_{1}^{-}S_{2}^{+}\,|0101\rangle &\equiv &(0110)\mathtt{.XOR.}(0101)=6\mathtt{.XOR.}5=(0011)=3\equiv |0011\rangle , \\
S_{2}^{+}S_{3}^{-}+S_{2}^{-}S_{3}^{+}\,|0101\rangle &\equiv &(1100)\mathtt{.XOR.}(0101)=12\mathtt{.XOR.}5=(1001)=9\equiv |1001\rangle , \\
S_{3}^{+}S_{0}^{-}+S_{3}^{-}S_{0}^{+}\,|0101\rangle &\equiv &(1001)\mathtt{.XOR.}(0101)=9\mathtt{.XOR.}5=(1100)=12\equiv |1100\rangle .
\end{eqnarray}$$
After applying the Hamiltonian above on our basis states, the
reader can verify as an exercise that we obtain
$$\begin{eqnarray}
{{\hat{H}}}\,|0101\rangle &=&-J\,|0101\rangle +\frac{J}{2}\left[
\,|1100\rangle +\,|1001\rangle +\,|0011\rangle +\,|0110\rangle \right] , \\
{{\hat{H}}}\,|1010\rangle &=&-J\,|1010\rangle +\frac{J}{2}\left[
\,|1100\rangle +\,|1001\rangle +\,|0011\rangle +\,|0110\rangle \right] , \\
{{\hat{H}}}\,|0011\rangle &=&\frac{J}{2}\left[ \,\,|0101\rangle
+\,|1010\rangle \right] , \\
{{\hat{H}}}\,|0110\rangle &=&\frac{J}{2}\left[ \,\,|0101\rangle
+\,|1010\rangle \right] , \\
{{\hat{H}}}\,|1001\rangle &=&\frac{J}{2}\left[ \,|0101\rangle +\,|1010\rangle
\right] , \\
{{\hat{H}}}\,|1100\rangle &=&\frac{J}{2}\left[ \,|0101\rangle +\,|1010\rangle
\right] .
\end{eqnarray}$$
The resulting matrix for the operator $\hat{H}$ in the
$S^{z}$ representation is: $$H=J\left(
\begin{array}{llllll}
0 & 1/2 & 0 & 0 & 1/2 & 0 \\
1/2 & -1 & 1/2 & 1/2 & 0 & 1/2 \\
0 & 1/2 & 0 & 0 & 1/2 & 0 \\
0 & 1/2 & 0 & 0 & 1/2 & 0 \\
1/2 & 0 & 1/2 & 1/2 & -1 & 1/2 \\
0 & 1/2 & 0 & 0 & 1/2 & 0
\end{array}
\right) . $$
```
class BoundaryCondition:
RBC, PBC = range(2)
class Direction:
RIGHT, TOP, LEFT, BOTTOM = range(4)
L = 8
Nup = 2
maxdim = 2**L
bc = BoundaryCondition.PBC
# Tables for energy
hdiag = np.zeros(4) # Diagonal energies
hflip = np.zeros(4) # off-diagonal terms
hdiag[0] = +0.25
hdiag[1] = -0.25
hdiag[2] = -0.25
hdiag[3] = +0.25
hflip[0] = 0.
hflip[1] = 0.5
hflip[2] = 0.5
hflip[3] = 0.
#hflip *= 2
#hdiag *= 0.
# Lattice geometry (1D chain)
nn = np.zeros(shape=(L,4), dtype=np.int16)
for i in range(L):
nn[i, Direction.RIGHT] = i-1
nn[i, Direction.LEFT] = i+1
if(bc == BoundaryCondition.RBC): # Open Boundary Conditions
nn[0, Direction.RIGHT] = -1 # This means error
nn[L-1, Direction.LEFT] = -1
else: # Periodic Boundary Conditions
nn[0, Direction.RIGHT] = L-1 # We close the ring
nn[L-1, Direction.LEFT] = 0
# We build basis
basis = []
dim = 0
for state in range(maxdim):
basis.append(state)
dim += 1
print ("Basis:")
print (basis)
# We build Hamiltonian matrix
H = np.zeros(shape=(dim,dim))
def IBITS(n,i):
return ((n >> i) & 1)
for i in range(dim):
state = basis[i]
# Diagonal term
for site_i in range(L):
site_j = nn[site_i, Direction.RIGHT]
if(site_j != -1): # This would happen for open boundary conditions
two_sites = IBITS(state,site_i) | (IBITS(state,site_j) << 1)
value = hdiag[two_sites]
H[i,i] += value
for i in range(dim):
state = basis[i]
# Off-diagonal term
for site_i in range(L):
site_j = nn[site_i, Direction.RIGHT]
if(site_j != -1):
mask = (1 << site_i) | (1 << site_j)
two_sites = IBITS(state,site_i) | (IBITS(state,site_j) << 1)
value = hflip[two_sites]
if(value != 0.):
new_state = (state ^ mask)
j = new_state
H[i,j] += value
print(H)
d, v = np.linalg.eigh(H) #Diagonalize the matrix
print("===================================================================================================================")
print(d)
# Using quantum number conservation : Sz
basis = []
dim = 0
for state in range(maxdim):
n_ones = 0
for bit in range(L):
n_ones += IBITS(state,bit)
print (state,n_ones)
if(n_ones == Nup):
basis.append(state)
dim += 1
print ("Dim=",dim)
print ("Basis:")
print (basis)
#hflip[:] *= 2
# We build Hamiltonian matrix
H = np.zeros(shape=(dim,dim))
def IBITS(n,i):
return ((n >> i) & 1)
for i in range(dim):
state = basis[i]
# Diagonal term
for site_i in range(L):
site_j = nn[site_i, Direction.RIGHT]
if(site_j != -1): # This would happen for open boundary conditions
two_sites = IBITS(state,site_i) | (IBITS(state,site_j) << 1)
value = hdiag[two_sites]
H[i,i] += value
def bisect(state, basis):
# Binary search, only works in a sorted list of integers
# state : integer we seek
# basis : list of sorted integers in increasing order
# ret_val : return value, position on the list, -1 if not found
ret_val = -1
dim = len(basis)
# for i in range(dim):
# if(state == basis[i]):
# return i
origin = 0
end = dim-1
middle = (origin+end)//2
while(1):
index_old = middle
middle = (origin+end)//2
if(state < basis[middle]):
end = middle
else:
origin = middle
if(basis[middle] == state):
break
if(middle == index_old):
if(middle == end):
end = end - 1
else:
origin = origin + 1
ret_val = middle
return ret_val
for i in range(dim):
state = basis[i]
# Off-diagonal term
for site_i in range(L):
site_j = nn[site_i, Direction.RIGHT]
if(site_j != -1):
mask = (1 << site_i) | (1 << site_j)
two_sites = IBITS(state,site_i) | (IBITS(state,site_j) << 1)
value = hflip[two_sites]
if(value != 0.):
new_state = (state ^ mask)
j = bisect(new_state, basis)
H[i,j] += value
print(H)
d, v = np.linalg.eigh(H) #Diagonalize the matrix
print(d,np.min(d))
print(v[:,0])
```
Obtaining the ground-state: Lanczos diagonalization
---------------------------------------------------
Once we have a superblock matrix, we can apply a library routine to
obtain the ground state of the superblock $|\Psi\rangle$. The two
algorithms widely used for this purpose are the Lanczos and Davidson
diagonalization. Both are explained to great extent in Ref.@noack, so we
refer the reader to this material for further information. In these
notes we will briefly explain the Lanczos procedure.
The basic idea of the Lanczos method is that a special basis can be constructed
where the Hamiltonian has a tridiagonal representation. This is carried
out iteratively as shown below. First, it is necessary to select an
arbitrary seed vector $|\phi _{0}\rangle $ in the Hilbert space of the
model being studied. If we are seeking the ground-state of the model,
then it is necessary that the overlap between the actual ground-state
$|\psi
_{0}\rangle $, and the initial state $|\phi _{0}\rangle $ be nonzero. If
no “a priori” information about the ground state is known, this
requirement is usually easily satisfied by selecting an initial state
with *randomly* chosen coefficients in the working basis that is being
used. If some other information of the ground state is known, like its
total momentum and spin, then it is convenient to initiate the
iterations with a state already belonging to the subspace having those
quantum numbers (and still with random coefficients within this
subspace).
After $|\phi _{0}\rangle $ is selected, define a new vector by applying
the Hamiltonian ${{\hat{H}}}$, over the initial state. Subtracting the
projection over $|\phi _{0}\rangle $, we obtain
$$|\phi _{1}\rangle ={{\hat{H}}}|\phi _{0}\rangle -\frac{{\langle }\phi _{0}|{{%
\hat{H}}}|{\phi }_{0}{\rangle }}{\langle \phi _{0}|\phi _{0}\rangle }|\phi
_{0}\rangle ,$$ that satisfies $\langle \phi _{0}|\phi _{1}\rangle =0$.
Now, we can construct a new state that is orthogonal to the previous two
as,
$$|\phi _{2}\rangle ={{\hat{H}}}|\phi _{1}\rangle -{\frac{{\langle \phi _{1}|{%
\hat{H}}|\phi _{1}\rangle }}{{\langle \phi _{1}|\phi _{1}\rangle }}}|\phi
_{1}\rangle -{\frac{{\langle \phi _{1}|\phi _{1}\rangle }}{{\langle \phi
_{0}|\phi _{0}\rangle }}}|\phi _{0}\rangle .$$ It can be easily checked
that $\langle \phi _{0}|\phi _{2}\rangle
=\langle \phi _{1}|\phi _{2}\rangle =0$. The procedure can be
generalized by defining an orthogonal basis recursively as,
$$|\phi _{n+1}\rangle ={{\hat{H}}}|\phi _{n}\rangle -a_{n}|\phi _{n}\rangle
-b_{n}^{2}|\phi _{n-1}\rangle ,$$ where $n=0,1,2,...$, and the
coefficients are given by
$$a_{n}={\frac{{\langle \phi _{n}|{\hat{H}}|\phi _{n}\rangle }}{{\langle \phi
_{n}|\phi _{n}\rangle }}},\qquad b_{n}^{2}={\frac{{\langle \phi _{n}|\phi
_{n}\rangle }}{{\langle \phi _{n-1}|\phi _{n-1}\rangle }}},$$
supplemented by $b_{0}=0$, $|\phi _{-1}\rangle =0$. In this basis, it
can be shown that the Hamiltonian matrix becomes, $$H=\left(
\begin{array}{lllll}
a_{0} & b_{1} & 0 & 0 & ... \\
b_{1} & a_{1} & b_{2} & 0 & ... \\
0 & b_{2} & a_{2} & b_{3} & ... \\
0 & 0 & b_{3} & a_{3} & ... \\
\vdots
{} &
\vdots
&
\vdots
&
\vdots
&
\end{array}
\right)$$ i.e. it is tridiagonal as expected. Once in this form the
matrix can be diagonalized easily using standard library subroutines.
However, note that to completely diagonalize a Hamiltonian on a finite
cluster, a number of iterations equal to the size of the Hilbert space
(or of the subspace under consideration) is needed. In practice this would
demand a considerable amount of CPU time. However, one of the advantages
of this technique is that accurate enough information about the ground
state of the problem can be obtained after a small number of iterations
(typically of the order of $\sim 100$ or less).
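This fast convergence is easy to check numerically. Below is a minimal sketch of the recursion above (using the normalized variant, where $b_n$ is the norm of the new vector), applied to a small random symmetric matrix standing in for the Hamiltonian; the lowest eigenvalue of the tridiagonal matrix approaches the exact ground-state energy after far fewer iterations than the matrix dimension:

```python
import numpy as np

def lanczos_tridiag(H, n_iter, rng=np.random.default_rng(0)):
    """Build the tridiagonal matrix T from the Lanczos recursion."""
    dim = H.shape[0]
    phi = rng.standard_normal(dim)      # random seed vector |phi_0>
    phi /= np.linalg.norm(phi)
    phi_prev = np.zeros(dim)
    a, b = [], [0.0]
    for n in range(n_iter):
        w = H @ phi                     # one application of H per iteration
        a.append(phi @ w)
        w = w - a[-1] * phi - b[-1] * phi_prev
        phi_prev = phi
        b.append(np.linalg.norm(w))
        if b[-1] < 1e-12:               # exact invariant subspace found
            break
        phi = w / b[-1]
    k = len(a)
    T = np.diag(a) + np.diag(b[1:k], 1) + np.diag(b[1:k], -1)
    return T

# Ground-state energy from 20 iterations vs. full diagonalization.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2                       # a stand-in symmetric "Hamiltonian"
e_lanczos = np.min(np.linalg.eigvalsh(lanczos_tridiag(H, 20)))
e_exact = np.min(np.linalg.eigvalsh(H))
print(abs(e_lanczos - e_exact))         # small after only 20 iterations
```

Note that the matrix here is only 50-dimensional for illustration; the point is that the extremal Ritz value converges long before `n_iter` reaches the matrix dimension.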
Another way to formulate the problem is by obtaining the tridiagonal
form of the Hamiltonian starting from a Krylov basis, which is spanned
by the vectors
$$\left\{|\phi_0\rangle,\hat{H}|\phi_0\rangle,\hat{H}^2|\phi_0\rangle,...,\hat{H}^n|\phi_0\rangle\right\}$$
and asking that each vector be orthogonal to the previous two. Notice
that each new iteration of the process requires one application of the
Hamiltonian. Most of the time this simple procedure works for practical
purposes, but care must be paid to the possibility of losing
orthogonality between the basis vectors. This may happen due to the
finite machine precision. In that case, a re-orthogonalization procedure
may be required.
Notice that the new super-Hamiltonian matrix has dimensions
$D_L D_R d^2 \times D_L D_R d^2$. This could be a large matrix. In
state-of-the-art simulations with a large number of states, one does not
build this matrix in memory explicitly, but applies the operators to the
state directly in the diagonalization routine.
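The matrix-free idea can be sketched with SciPy's `LinearOperator`: we only supply a matrix-vector product, never the dense superblock matrix. The toy "superblock" below, $H = H_L \otimes I + I \otimes H_R$ with random symmetric blocks, is an illustrative stand-in, not the actual DMRG operator:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

rng = np.random.default_rng(0)

def rand_sym(n):
    A = rng.standard_normal((n, n))
    return (A + A.T) / 2

HL, HR = rand_sym(8), rand_sym(8)       # toy left/right block Hamiltonians

def matvec(psi):
    # Reshape the superblock vector into a (left, right) matrix and
    # apply the block operators without forming any Kronecker product.
    P = psi.reshape(8, 8)
    return (HL @ P + P @ HR.T).ravel()

H_op = LinearOperator((64, 64), matvec=matvec)
e0 = eigsh(H_op, k=1, which='SA')[0][0]  # matrix-free ground-state energy

# Check against the explicitly built superblock matrix.
H_dense = np.kron(HL, np.eye(8)) + np.kron(np.eye(8), HR)
e0_dense = np.min(np.linalg.eigvalsh(H_dense))
print(e0, e0_dense)
```

For a $D_L D_R d^2$-dimensional superblock this replaces an $O(N^2)$ memory cost with the cost of a few block-sized matrix products per iteration.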
```
def lanczos(m, seed, maxiter, tol, use_seed, calc_gs, force_maxiter = False):
x1 = seed
x2 = seed
gs = seed
a = np.zeros(100)
b = np.zeros(100)
z = np.zeros((100,100))
lvectors = []
control_max = maxiter;
if(maxiter == -1):
force_maxiter = False
    if(control_max == 0):
        e0 = 0.  # placeholder energy; no Lanczos iterations were requested
        gs = 1
        maxiter = 1
        return(e0,gs)
x1[:] = 0
x2[:] = 0
gs[:] = 0
maxiter = 0
a[:] = 0.0
b[:] = 0.0
if(use_seed):
x1 = seed
else :
x1 = np.random.random(x1.shape[0])*2-1.
b[0] = np.sqrt(np.dot(x1,x1))
x1 = x1 / b[0]
x2[:] = 0
b[0] = 1.
e0 = 9999
nmax = min(99, gs.shape[0])
for iter in range(1,nmax+1):
eini = e0
if(b[iter - 1] != 0.):
aux = x1
x1 = -b[iter-1] * x2
x2 = aux / b[iter-1]
aux = np.dot(m,x2)
x1 = x1 + aux
a[iter] = np.dot(x1, x2)
x1 = x1 - x2*a[iter]
b[iter] = np.sqrt(np.dot(x1, x1))
lvectors.append(x2)
z.resize((iter+1,iter+1))
z[:,:] = 0
for i in range(0,iter-1):
z[i,i+1] = b[i+1]
z[i+1,i] = b[i+1]
z[i,i] = a[i+1]
z[iter-1,iter-1]=a[iter]
d, v = np.linalg.eig(z)
col = 0
n = 0
e0 = 9999
for e in d:
if(e < e0):
e0 = e
col = n
n+=1
e0 = d[col]
print ("Iter = ",iter," Ener = ",e0)
if((force_maxiter and iter >= control_max) or (iter >= gs.shape[0] or iter == 99 or abs(b[iter]) < tol) or \
((not force_maxiter) and abs(eini-e0) <= tol)):
# converged
gs = 0.
for i in range(0,iter):
gs += v[i,col]*lvectors[i]
print ("E0 = ", e0, np.sqrt(np.dot(gs,gs)))
maxiter = iter
            return(e0,gs) # return the ground-state energy and ground-state vector
seed = np.zeros(H.shape[0])
e0, gs = lanczos(H,seed,6,1.e-5,False,False)
print(gs)
print(np.dot(gs,gs))
```
# A Table based Q-Learning Reinforcement Agent in A Grid World
This is a simple example of a Q-Learning agent. The Q function is a table, and each decision is made by sampling the Q-values for a particular state thermally.
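"Thermally" here means Boltzmann (softmax) exploration: each action is drawn with probability proportional to $\exp(Q/T)$, where the temperature $T$ trades off exploration against greed. A minimal sketch of that decision rule, assuming a single row of Q-values:

```python
import numpy as np

def thermal_action(q_row, temperature, rng):
    """Sample an action with probability proportional to exp(Q/T)."""
    logits = q_row / temperature
    logits = logits - logits.max()      # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(q_row), p=probs)

# Low temperature → nearly greedy; high temperature → nearly uniform.
q = np.array([1.0, 2.0, 0.5, 1.5])
rng = np.random.default_rng(0)
greedy_picks = [thermal_action(q, 0.01, rng) for _ in range(100)]
print(greedy_picks.count(1))            # almost always the arg-max action
```

The cumulative-sum loop in the notebook cell below implements exactly this sampling by hand.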
```
import numpy as np
import random
import gym
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from IPython.display import clear_output
from tqdm import tqdm
env = gym.make('FrozenLake-v0')
Q = np.zeros([env.observation_space.n, env.action_space.n])
# Set learning parameters
decision_temperature = 0.01
l_rate = 0.5
y = .99
e = 0.1
num_episodes = 900
# create lists to contain total rewards and steps per episode
epi_length = []
rs = []
for i in tqdm(range(num_episodes)):
s = env.reset()
r_total = 0
done = False
number_jumps = 0
    # limit the number of jumps per episode
while number_jumps < 99:
number_jumps += 1
softmax = np.exp(Q[s]/decision_temperature)
rand_n = np.random.rand() * np.sum(softmax)
# pick the next action randomly
acc = 0
for ind in range(env.action_space.n):
acc += softmax[ind]
if acc >= rand_n:
a = ind
break
#print(a, softmax, rand_n)
# a = np.argmax(Q[s, :] + np.random.randn(1, env.action_space.n) * (1./(i+1)))
s_next, r, done, _ = env.step(a)
Q_next_value = Q[s_next]
max_Q_next = np.max(Q[s_next,:])
# now update Q
Q[s, a] += l_rate * (r + y * max_Q_next \
- Q[s, a])
r_total += r
s = s_next
if done:
# be more conservative as we learn more
e = 1./((i/50) + 10)
break
if i%900 == 899:
clear_output(wait=True)
print("success rate: " + str(sum(rs[-200:])/2) + "%")
plt.figure(figsize=(8, 8))
plt.subplot(211)
plt.title("Jumps Per Episode", fontsize=18)
plt.plot(epi_length[-200:], "#23aaff")
plt.subplot(212)
plt.title('Reward For Each Episode (0/1)', fontsize=18)
plt.plot(rs[-200:], "o", color='#23aaff', alpha=0.1)
plt.figure(figsize=(6, 6))
plt.title('Decision Table', fontsize=18)
plt.xlabel("States", fontsize=15)
plt.ylabel('Actions', fontsize=15)
plt.imshow(Q.T)
plt.show()
epi_length.append(number_jumps)
rs.append(r_total)
def mv_avg(xs, n):
return [sum(xs[i:i+n])/n for i in range(len(xs)-n)]
# plt.plot(mv_avg(rs, 200))
plt.figure(figsize=(8, 8))
plt.subplot(211)
plt.title("Jumps Per Episode", fontsize=18)
plt.plot(epi_length, "#23aaff", linewidth=0.1, alpha=0.7,
label="raw data")
plt.plot(mv_avg(epi_length, 200), color="blue", alpha=0.3, linewidth=4,
label="Moving Average")
plt.legend(loc=(1.05, 0), frameon=False, fontsize=15)
plt.subplot(212)
plt.title('Reward For Each Episode (0/1)', fontsize=18)
#plt.plot(rs, "o", color='#23aaff', alpha=0.2, markersize=0.4, label="Reward")
plt.plot(mv_avg(rs, 200), color="red", alpha=0.5, linewidth=4, label="Moving Average")
plt.ylim(-0.1, 1.1)
plt.legend(loc=(1.05, 0), frameon=False, fontsize=15)
plt.savefig('./figures/Frozen-Lake-v0-thermal-table.png', dpi=300, bbox_inches='tight')
```
# WeatherPy
----
#### Note
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
!pip install citipy
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import time
import seaborn as sns
import urllib
import json
from scipy.stats import linregress
# Import API key
from api_keys import weather_api_key
# Incorporated citipy to determine city based on latitude and longitude
from citipy import citipy
# Output File (CSV)
output_data_file = "output_data/cities.csv"
# Range of latitudes and longitudes
lat_range = (-90, 90)
lng_range = (-180, 180)
```
## Generate Cities List
```
# List for holding lat_lngs and cities
lat_lngs = []
cities = []
# Create a set of random lat and lng combinations
lats = np.random.uniform(low=-90.000, high=90.000, size=1500)
lngs = np.random.uniform(low=-180.000, high=180.000, size=1500)
lat_lngs = zip(lats, lngs)
# Identify nearest city for each lat, lng combination
for lat_lng in lat_lngs:
city = citipy.nearest_city(lat_lng[0], lat_lng[1]).city_name
# If the city is unique, then add it to a our cities list
if city not in cities:
cities.append(city)
# Print the city count to confirm sufficient count
len(cities)
```
### Perform API Calls
* Perform a weather check on each city using a series of successive API calls.
* Include a print log of each city as it's being processed (with the city number and city name).
```
#Build URL
url = "http://api.openweathermap.org/data/2.5/weather?"
units = "imperial"
wkey = weather_api_key  # use the key imported from api_keys instead of a hard-coded placeholder
appid = wkey
settings = {"units": "imperial", "appid": wkey}
url = f"{url}appid={wkey}&units={units}"
url
# List of city data
city_data = []
# Print to logger
print("Beginning Data Retrieval ")
print("-----------------------------")
# Create counters
record_count = 1
set_count = 1
# Loop through all the cities in our list
for i, city in enumerate(cities):
# Group cities in sets of 50 for logging purposes
if (i % 50 == 0 and i >= 50):
set_count += 1
record_count = 0
# Create endpoint URL with each city
city_url = url + "&q=" + urllib.request.pathname2url(city)
# Log the url, record, and set numbers
print("Processing Record %s of Set %s | %s" % (record_count, set_count, city))
print(city_url)
# Add 1 to the record count
record_count += 1
# Run an API request for each of the cities
try:
# Parse the JSON and retrieve data
city_weather = requests.get(city_url).json()
# Parse out the max temp, humidity, and cloudiness
city_latitute = city_weather["coord"]["lat"]
city_longitude = city_weather["coord"]["lon"]
city_max_temperature = city_weather["main"]["temp_max"]
city_humidity = city_weather["main"]["humidity"]
city_clouds = city_weather["clouds"]["all"]
city_wind = city_weather["wind"]["speed"]
city_country = city_weather["sys"]["country"]
city_date = city_weather["dt"]
# Append the City information into city_data list
city_data.append({"City": city,
"Lat": city_latitute,
"Lng": city_longitude,
"Max Temp": city_max_temperature,
"Humidity": city_humidity,
"Cloudiness": city_clouds,
"Wind Speed": city_wind,
"Country": city_country,
"Date": city_date})
# If an error is experienced, skip the city
except:
print("City not found...")
pass
# Indicate that Data Loading is complete
print("-----------------------------")
print("Data Retrieval Complete ")
print("-----------------------------")
```
### Convert Raw Data to DataFrame
* Export the city data into a .csv.
* Display the DataFrame
```
# Convert array of JSONs into Pandas DataFrame
city_data_pd = pd.DataFrame(city_data)
# Export the City_Data into a csv
city_data_pd.to_csv("WeatherPy.csv",encoding="utf-8",index=False)
# Show Record Count
city_data_pd.count()
city_data_pd = pd.read_csv("WeatherPy.csv")
# Display the City Data Frame
city_data_pd.head()
city_data_pd.describe()
```
## Inspect the data and remove the cities where the humidity > 100%.
----
Skip this step if there are no cities that have humidity > 100%.
```
# Get the indices of cities that have humidity over 100%.
# Make a new DataFrame equal to the city data to drop all humidity outliers by index.
# Passing "inplace=False" will make a copy of the city_data DataFrame, which we call "clean_city_data".
```
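The cell above only describes the step in comments; a minimal sketch of the actual filtering, assuming the column layout used in this notebook (shown here on a tiny hypothetical frame so it runs standalone):

```python
import pandas as pd

# Hypothetical mini version of city_data_pd with a "Humidity" column.
city_data_pd = pd.DataFrame({
    "City": ["a", "b", "c"],
    "Humidity": [45, 120, 88],
})

# Get the indices of cities that have humidity over 100%.
humid_idx = city_data_pd[city_data_pd["Humidity"] > 100].index

# Drop those rows into a copy called clean_city_data (inplace=False keeps the original).
clean_city_data = city_data_pd.drop(index=humid_idx, inplace=False).copy()
print(clean_city_data["City"].tolist())  # → ['a', 'c']
```

In the real notebook, the same two lines would be applied to the full `city_data_pd` loaded from the CSV.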
## Plotting the Data
* Use proper labeling of the plots using plot titles (including date of analysis) and axes labels.
* Save the plotted figures as .pngs.
## Latitude vs. Temperature Plot
```
# Build a scatter plot for each data type
plt.scatter(city_data_pd["Lat"], city_data_pd["Max Temp"], marker="o", s=10)
# Incorporate the other graph properties
plt.title("City Latitude vs. Max Temperature")
plt.ylabel("Max. Temperature (F)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
# Show plot
plt.show()
# In the city latitude vs max temperature plot, we can see that areas further away from the equator (between -60 and 80) are much colder
# and places that are closer to the equator (0), are generally warmer.
```
## Latitude vs. Humidity Plot
```
# Build a scatter plot for each data type
plt.scatter(city_data_pd["Lat"], city_data_pd["Humidity"], marker="o", s=10)
# Incorporate the other graph properties
plt.title("City Latitude vs. Humidity")
plt.ylabel("Humidity (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
#plt.savefig("Output_Plots/Humidity_vs_Latitude.png")
# Show plot
plt.show()
# It does not look like there is a correlation between humidity and latitude. Humidity is generally based on location to water.
```
## Latitude vs. Cloudiness Plot
```
# Build a scatter plot for each data type
plt.scatter(city_data_pd["Lat"], city_data_pd["Cloudiness"], marker="o", s=10)
# Incorporate the other graph properties
plt.title("City Latitude vs. Cloudiness")
plt.ylabel("Cloudiness (%)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
# Show plot
plt.show()
# It does not appear there is a correlation between latitude and cloudiness. Weather is dependent upon many factors.
```
## Latitude vs. Wind Speed Plot
```
# Build a scatter plot for each data type
plt.scatter(city_data_pd["Lat"], city_data_pd["Wind Speed"], marker="o", s=10)
# Incorporate the other graph properties
plt.title("City Latitude vs. Wind Speed")
plt.ylabel("Wind Speed (mph)")
plt.xlabel("Latitude")
plt.grid(True)
# Save the figure
# Show plot
plt.show()
# It does not appear there is a correlation between latitude and wind speed.
# Wind speed is based on a variety of factors including weather and geography.
```
## Linear Regression
```
df_NorthernHem = city_data_pd.loc[city_data_pd["Lat"] > 0]
df_SouthernHem = city_data_pd.loc[city_data_pd["Lat"] < 0]
x_values = df_NorthernHem["Lat"]
y_values = df_NorthernHem["Max Temp"]
def regressplot(x_values, y_values):
# Perform a linear regression on temperature vs. latitude
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
# Get regression values
regress_values = x_values * slope + intercept
print(regress_values)
# Create line equation string
line_eq = "y = " + str(round(slope,2)) + "x +" + str(round(intercept,2))
# Build a scatter plot for each data type
plt.scatter(x_values, y_values , marker="o", s=10)
plt.plot(x_values,regress_values,"r-")
# Incorporate the other graph properties
plt.xlabel("Latitude")
plt.grid(True)
plt.annotate(line_eq,(20,15),fontsize=15,color="red")
# Print r value
print(f"The r-value is: {rvalue**2}")
# Save the figure
# Show plot
plt.show()
plt.title("City Latitude vs. Max Temperature")
plt.ylabel("Max. Temperature (F)")
regressplot(x_values, y_values)
```
#### Northern Hemisphere - Max Temp vs. Latitude Linear Regression
```
x_values = df_NorthernHem["Lat"]
y_values = df_NorthernHem["Max Temp"]
plt.title("City Latitude vs. Max Temperature")
plt.ylabel("Max. Temperature (F)")
regressplot(x_values, y_values)
```
#### Southern Hemisphere - Max Temp vs. Latitude Linear Regression
```
x_values = df_SouthernHem["Lat"]
y_values = df_SouthernHem["Max Temp"]
plt.title("City Latitude vs. Max Temperature")
plt.ylabel("Max. Temperature (F)")
regressplot(x_values, y_values)
```
#### Northern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
x_values = df_NorthernHem["Lat"]
y_values = df_NorthernHem["Humidity"]
plt.title("City Latitude vs. Humidity")
plt.ylabel("Humidity (%)")
regressplot(x_values, y_values)
```
#### Southern Hemisphere - Humidity (%) vs. Latitude Linear Regression
```
x_values = df_SouthernHem["Lat"]
y_values = df_SouthernHem["Humidity"]
plt.title("City Latitude vs. Humidity")
plt.ylabel("Humidity (%)")
regressplot(x_values, y_values)
```
#### Northern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
x_values = df_NorthernHem["Lat"]
y_values = df_NorthernHem["Cloudiness"]
plt.title("City Latitude vs. Cloudiness")
plt.ylabel("Cloudiness (%)")
regressplot(x_values, y_values)
```
#### Southern Hemisphere - Cloudiness (%) vs. Latitude Linear Regression
```
x_values = df_SouthernHem["Lat"]
y_values = df_SouthernHem["Cloudiness"]
plt.title("City Latitude vs. Cloudiness")
plt.ylabel("Cloudiness (%)")
regressplot(x_values, y_values)
```
#### Northern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
x_values = df_NorthernHem["Lat"]
y_values = df_NorthernHem["Wind Speed"]
plt.title("City Latitude vs. Wind Speed")
plt.ylabel("Wind Speed (mph)")
regressplot(x_values, y_values)
```
#### Southern Hemisphere - Wind Speed (mph) vs. Latitude Linear Regression
```
x_values = df_SouthernHem["Lat"]
y_values = df_SouthernHem["Wind Speed"]
plt.title("City Latitude vs. Wind Speed")
plt.ylabel("Wind Speed (mph)")
regressplot(x_values, y_values)
```
# Binary classification with Support Vector Machines (SVM)
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import ipywidgets as widgets
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC, SVC
from ipywidgets import interact, interactive, fixed
from numpy.random import default_rng
plt.rcParams['figure.figsize'] = [9.5, 6]
rng = default_rng(seed=42)
```
## Two Gaussian distributions
Let's generate some data, two sets of Normally distributed points...
```
def plot_data():
plt.xlim(0.0, 1.0)
plt.ylim(0.0, 1.0)
plt.plot(x1, y1, 'bs', markersize=6)
plt.plot(x2, y2, 'rx', markersize=6)
s1=0.01
s2=0.01
n1=30
n2=30
x1, y1 = rng.multivariate_normal([0.5, 0.3], [[s1, 0], [0, s1]], n1).T
x2, y2 = rng.multivariate_normal([0.7, 0.7], [[s2, 0], [0, s2]], n2).T
plot_data()
plt.suptitle('generated data points')
plt.show()
```
## Separating hyperplane
Linear classifiers: separate the two distributions with a line (hyperplane)
```
def plot_line(slope, intercept, show_params=False):
x_vals = np.linspace(0.0, 1.0)
y_vals = slope*x_vals +intercept
plt.plot(x_vals, y_vals, '--')
if show_params:
plt.title('slope={:.4f}, intercept={:.4f}'.format(slope, intercept))
```
You can try out different parameters (slope, intercept) for the line. Note that there are many (in fact infinitely many) lines that separate the two classes.
```
#plot_data()
#plot_line(-1.1, 1.1)
#plot_line(-0.23, 0.62)
#plot_line(-0.41, 0.71)
#plt.savefig('just_points2.png')
def do_plot_interactive(slope=-1.0, intercept=1.0):
plot_data()
plot_line(slope, intercept, True)
plt.suptitle('separating hyperplane (line)')
interactive_plot = interactive(do_plot_interactive, slope=(-2.0, 2.0), intercept=(0.5, 1.5))
output = interactive_plot.children[-1]
output.layout.height = '450px'
interactive_plot
```
## Logistic regression
Let's create a training set $\mathbf{X}$ with labels in $\mathbf{y}$ with our points (in shuffled order).
```
X = np.block([[x1, x2], [y1, y2]]).T
y = np.hstack((np.repeat(0, len(x1)), np.repeat(1, len(x2))))
rand_idx = rng.permutation(len(x1) + len(x2))
X = X[rand_idx]
y = y[rand_idx]
print(X.shape, y.shape)
print(X[:10,:])
print(y[:10].reshape(-1,1))
```
The task is now to learn a classification model $\mathbf{y} = f(\mathbf{X})$.
First, let's try [logistic regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html).
```
clf_lr = LogisticRegression(penalty='none')
clf_lr.fit(X, y)
w1 = clf_lr.coef_[0][0]
w2 = clf_lr.coef_[0][1]
b = clf_lr.intercept_[0]
plt.suptitle('Logistic regression')
plot_data()
plot_line(slope=-w1/w2, intercept=-b/w2, show_params=True)
```
## Linear SVM
```
clf_lsvm = SVC(C=1000, kernel='linear')
clf_lsvm.fit(X, y)
w1 = clf_lsvm.coef_[0][0]
w2 = clf_lsvm.coef_[0][1]
b = clf_lsvm.intercept_[0]
plt.suptitle('Linear SVM')
plot_data()
plot_line(slope=-w1/w2, intercept=-b/w2, show_params=True)
def plot_clf(clf):
# plot the decision function
ax = plt.gca()
xlim = ax.get_xlim()
ylim = ax.get_ylim()
# create grid to evaluate model
xx = np.linspace(xlim[0], xlim[1], 30)
yy = np.linspace(ylim[0], ylim[1], 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
# plot decision boundary and margins
ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
# plot support vectors
ax.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1], s=100,
linewidth=1, facecolors='none', edgecolors='k')
```
Let's try different $C$ values. We'll also visualize the margins and support vectors.
```
def do_plot_svm(C=1000.0):
clf = SVC(C=C, kernel='linear')
clf.fit(X, y)
plot_data()
plot_clf(clf)
interactive_plot = interactive(do_plot_svm, C=widgets.FloatLogSlider(value=1000, base=10, min=-0.5, max=4, step=0.2))
output = interactive_plot.children[-1]
output.layout.height = '400px'
interactive_plot
#do_plot_svm()
#plt.savefig('linear-svm.png')
```
## Kernel SVM
```
clf_ksvm = SVC(C=10, kernel='rbf')
clf_ksvm.fit(X, y)
plot_data()
plot_clf(clf_ksvm)
plt.savefig('kernel-svm.png')
def do_plot_svm(C=1000.0):
clf = SVC(C=C, kernel='rbf')
clf.fit(X, y)
plot_data()
plot_clf(clf)
interactive_plot = interactive(do_plot_svm, C=widgets.FloatLogSlider(value=100, base=10, min=-1, max=3, step=0.2))
output = interactive_plot.children[-1]
output.layout.height = '400px'
interactive_plot
```
```
import pandas as pd
from finta import TA
import numpy as np
#yahoo finance stock data (for longer timeframe)
import yfinance as yf
def stock_df(ticker, start, end):
stock = yf.Ticker(ticker)
stock_df = stock.history(start = start, end = end)
return stock_df
start = pd.to_datetime('2015-01-01')
end = pd.to_datetime('today')
spy_df = stock_df('SPY', start, end)
len(spy_df)
# spy_df["Monetary Gain"] = spy_df["Close"].diff()
spy_df['Actual Return'] = spy_df["Close"].pct_change()
spy_df.loc[(spy_df['Actual Return'] >= 0), 'Return Direction'] = 1
spy_df.loc[(spy_df['Actual Return'] < 0), 'Return Direction'] = 0
# spy_df['Trades'] = np.abs(spy_df['Trading Signal'].diff())
# spy_df['Strategy Returns'] = spy_df['Actual Return'] * spy_df['Trading Signal'].shift()
spy_df.dropna(inplace= True)
spy_df.tail(20)
spy_technical_indicators = pd.DataFrame()
#Creating RSI evaluation metrics
spy_technical_indicators["RSI"] = TA.RSI(spy_df, 14)
spy_technical_indicators["RSI Evaluation"] = "No Abnormalities"
spy_technical_indicators.loc[spy_technical_indicators["RSI"] > 70, 'RSI Evaluation'] = "Overvalued"
spy_technical_indicators.loc[spy_technical_indicators["RSI"] < 30, 'RSI Evaluation'] = "Undervalued"
spy_technical_indicators["RSI Lag"] = spy_technical_indicators["RSI Evaluation"].shift(1)
for index, row in spy_technical_indicators.iterrows():
if (spy_technical_indicators.loc[index, "RSI Evaluation"] == "No Abnormalities" and spy_technical_indicators.loc[index, "RSI Lag"] == "Undervalued"):
spy_technical_indicators.loc[index, "RSI Evaluation"] = "RSI Bullish Signal"
if (spy_technical_indicators.loc[index, "RSI Evaluation"] == "No Abnormalities" and spy_technical_indicators.loc[index, "RSI Lag"] == "Overvalued"):
spy_technical_indicators.loc[index, "RSI Evaluation"] = "RSI Bearish Signal"
spy_technical_indicators.drop(columns = ["RSI Lag"], inplace = True)
#Creating CCI evaluation metrics
spy_technical_indicators["CCI"] = TA.CCI(spy_df, 14)
spy_technical_indicators["CCI Lag"] = spy_technical_indicators["CCI"].shift(1)
for index, row in spy_technical_indicators.iterrows():
if (spy_technical_indicators.loc[index, "CCI"] > spy_technical_indicators.loc[index, "CCI Lag"]):
if (spy_technical_indicators.loc[index, "CCI"] > 0 and spy_technical_indicators.loc[index, "CCI Lag"] < 0):
spy_technical_indicators.loc[index, "CCI Evaluation"] = "CCI Uptrend"
else:
spy_technical_indicators.loc[index, "CCI Evaluation"] = "CCI Potential Uptrend"
if (spy_technical_indicators.loc[index, "CCI"] < spy_technical_indicators.loc[index, "CCI Lag"]):
if (spy_technical_indicators.loc[index, "CCI"] < 0 and spy_technical_indicators.loc[index, "CCI Lag"] > 0):
spy_technical_indicators.loc[index, "CCI Evaluation"] = "CCI Downtrend"
else:
spy_technical_indicators.loc[index, "CCI Evaluation"] = "CCI Potential Downtrend"
spy_technical_indicators.drop(columns = ["CCI Lag"], inplace = True)
#Creating Rate of Change evaluation metrics
spy_technical_indicators["ROC"] = TA.ROC(spy_df, 20)
spy_technical_indicators["ROC Lag"] = spy_technical_indicators["ROC"].shift(1)
for index, row in spy_technical_indicators.iterrows():
    if (spy_technical_indicators.loc[index, "ROC"] > 0 and spy_technical_indicators.loc[index, "ROC Lag"] < 0):
        spy_technical_indicators.loc[index, "ROC Evaluation"] = "ROC Rising Momentum"
    elif (spy_technical_indicators.loc[index, "ROC"] < 0 and spy_technical_indicators.loc[index, "ROC Lag"] > 0):
        spy_technical_indicators.loc[index, "ROC Evaluation"] = "ROC Falling Momentum"
    else:
        # elif/else keeps a detected rising signal from being overwritten by the default
        spy_technical_indicators.loc[index, "ROC Evaluation"] = "ROC Uninterpretable"
spy_technical_indicators.drop(columns = ["ROC Lag"], inplace = True)
#Creating Stochasic Indicator evaluation metrics
spy_technical_indicators["STO"] = TA.STOCH(spy_df)
spy_technical_indicators["Returns"] = spy_df["Return Direction"]
spy_technical_indicators.dropna(inplace = True)
spy_technical_indicators
from sklearn.preprocessing import StandardScaler,OneHotEncoder
categorical_variables = list(spy_technical_indicators.dtypes[spy_technical_indicators.dtypes == "object"].index)
categorical_variables
enc = OneHotEncoder(sparse= False)
encoded_data = enc.fit_transform(spy_technical_indicators[categorical_variables])
encoded_df = pd.DataFrame(encoded_data, columns = enc.get_feature_names(categorical_variables), index = spy_technical_indicators.index)
encoded_df = pd.concat([encoded_df, spy_technical_indicators.drop(columns = categorical_variables)], axis = 1)
encoded_df
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
X = encoded_df.drop(columns = ["Returns"])
y = encoded_df["Returns"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
scaler = StandardScaler()
X_scaler = scaler.fit(X_train)
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
neural = Sequential()
number_input_features = len(X.columns)
hidden_nodes_layer1 = (number_input_features + 1) // 2
hidden_nodes_layer2 = (hidden_nodes_layer1 + 1) // 2
neural.add(Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="elu"))
neural.add(Dense(units=hidden_nodes_layer2, activation="elu"))
neural.add(Dense(units=1, activation="sigmoid"))
neural.summary()
neural.compile(loss = "binary_crossentropy", optimizer = "adam", metrics = ["accuracy"])
model = neural.fit(X_train_scaled, y_train, epochs = 200)
Y_prediction = (neural.predict(X_train_scaled) > 0.5).astype("int32")
Y_prediction = Y_prediction.squeeze()
results = pd.DataFrame( {"Predictions": Y_prediction, "Actual": y_train})
display(results)
model_loss, model_accuracy = neural.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
```
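The `iterrows` loops above work, but they are slow on long price histories. The same RSI crossing logic can be sketched vectorized with boolean masks and `shift` (illustrative only, using a tiny hypothetical RSI series and the same label strings as above):

```python
import numpy as np
import pandas as pd

# Hypothetical RSI series that leaves the under- and over-valued zones.
rsi = pd.Series([25, 28, 35, 50, 72, 75, 65, 55])

# Classify each bar, then look at the previous bar's classification.
state = pd.Series(
    np.select([rsi > 70, rsi < 30], ["Overvalued", "Undervalued"],
              default="No Abnormalities"),
    index=rsi.index,
)
lag = state.shift(1)

# A crossing back into the normal zone becomes a bullish/bearish signal.
evaluation = state.copy()
evaluation[(state == "No Abnormalities") & (lag == "Undervalued")] = "RSI Bullish Signal"
evaluation[(state == "No Abnormalities") & (lag == "Overvalued")] = "RSI Bearish Signal"
print(list(evaluation))
```

The CCI and ROC loops above could be vectorized the same way, replacing per-row `.loc` lookups with shifted-column comparisons.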
We saw in this [journal entry](http://wiki.noahbrenowitz.com/doku.php?id=journal:2018-10:day-2018-10-24#run_110) that multiple-step trained neural network gives a very imbalanced estimate, but the two-step trained neural network gives a good answer. Where do these two patterns disagree?
```
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd  # needed for pd.Index in dict_to_dataset below
import xarray as xr
import click
import torch
from uwnet.model import call_with_xr
import holoviews as hv
from holoviews.operation import decimate
hv.extension('bokeh')
def column_integrate(data_array, mass):
return (data_array * mass).sum('z')
def compute_apparent_sources(model_path, ds):
model = torch.load(model_path)
return call_with_xr(model, ds, drop_times=0)
def get_single_location(ds, location=(32,0)):
y, x = location
return ds.isel(y=slice(y,y+1), x=slice(x,x+1))
def dict_to_dataset(datasets, dim='key'):
"""Concatenate a dict of datasets along a new axis"""
keys, values = zip(*datasets.items())
idx = pd.Index(keys, name=dim)
return xr.concat(values, dim=idx)
def dataarray_to_table(dataarray):
return dataarray.to_dataset('key').to_dataframe().reset_index()
def get_apparent_sources(model_paths, data_path):
ds = xr.open_dataset(data_path)
location = get_single_location(ds, location=(32,0))
sources = {training_strategy: compute_apparent_sources(model_path, location)
for training_strategy, model_path in model_paths.items()}
return dict_to_dataset(sources)
model_paths = {
'multi': '../models/113/3.pkl',
'single': '../models/110/3.pkl'
}
data_path = "../data/processed/training.nc"
sources = get_apparent_sources(model_paths, data_path)
```
# Apparent moistening and heating
Here we scatter plot the apparent heating and moistening:
```
%%opts Scatter[width=500, height=500, color_index='z'](cmap='viridis', alpha=.2)
%%opts Curve(color='black')
lims = (-30, 40)
df = dataarray_to_table(sources.QT)
moisture_source = hv.Scatter(df, kdims=["multi", "single"]).groupby('z').redim.range(multi=lims, single=lims) \
*hv.Curve((lims, lims))
lims = (-30, 40)
df = dataarray_to_table(sources.SLI)
heating = hv.Scatter(df, kdims=["multi", "single"]).groupby('z').redim.range(multi=lims, single=lims) \
*hv.Curve((lims, lims))
moisture_source.relabel("Moistening (g/kg/day)") + heating.relabel("Heating (K/day)")
```
The multistep moistening is far too negative in the upper parts of the atmosphere, and the corresponding heating is too positive. Does this **happen because the moisture is negative in those regions**?
# Exercise - Airbnb Accommodation Search.
Suppose we are an [Airbnb](http://www.airbnb.com) agent based in Lisbon and we have to handle requests from several clients. We have a file called `airbnb.csv` (in the data folder) containing information on every Airbnb listing in Lisbon.
```
import pandas as pd
df_airbnb = pd.read_csv("./src/pandas/airbnb.csv")
df_airbnb.head()
df_airbnb.dtypes
```
Specifically, the dataset has the following variables:
- room_id: the identifier of the listing
- host_id: the identifier of the listing's owner
- room_type: type of listing (entire home / shared room / private room)
- neighborhood: the neighborhood of Lisbon
- reviews: the number of reviews
- overall_satisfaction: the average rating of the listing
- accommodates: the number of people the listing can host
- bedrooms: the number of bedrooms
- price: the price (in euros) per night
## Using Pandas
### Case 1.
Alicia is going to Lisbon for a week with her husband and their two children. They are looking for an apartment with separate bedrooms for the parents and the children. They do not care about the location or the price; they simply want a pleasant experience. This means they only accept places with more than 10 reviews and a rating above 4. When selecting rooms for Alicia, we must make sure to sort them from best to worst rating. For rooms with the same rating, we must show the ones with more reviews first. We must give her 3 alternatives.
```
df_airbnb.room_type.unique()
df_airbnb.accommodates.unique()
df_airbnb.bedrooms.unique()
df_airbnb.reviews.unique() # > 10 reviews
df_airbnb.overall_satisfaction.unique() # > 4 rating
condicion = (df_airbnb.room_type == "Private room") & (df_airbnb.accommodates == 4) & (df_airbnb.bedrooms == 2.0) & (df_airbnb.reviews > 10) & (df_airbnb.overall_satisfaction > 4.0)
df_airbnb[condicion].sort_values(by=["overall_satisfaction","reviews"],ascending=False).head(3)
```
### Case 2
Roberto is a host who has a house on Airbnb. From time to time he calls us asking about the reviews of his listing. Today he is particularly angry, since his sister Clara has put a house on Airbnb and Roberto wants to make sure his house has more reviews than Clara's. We have to create a dataframe with both of their properties. The ids of Roberto's and Clara's houses are 97503 and 90387 respectively. Finally, we save this dataframe as an Excel file called "roberto.xlsx".
```
df_airbnb.room_id.unique()
df_airbnb_2 = df_airbnb.set_index("room_id").copy()
df_airbnb_2.head()
df_roberto = df_airbnb_2.loc[[97503,90387]]
df_roberto
df_roberto.to_excel("./roberto.xlsx", sheet_name="datos")  # note: the 'encoding' argument was removed from to_excel in recent pandas
```
### Case 3
Diana is going to Lisbon for 3 nights and wants to meet new people. She has a budget of 50€ for her accommodation. We should find her the 10 cheapest listings, giving preference to shared rooms *(room_type == Shared room)*, and among those shared rooms we should pick the ones with the best rating.
```
df_airbnb_1 = df_airbnb[df_airbnb.price <= 50].sort_values(by="price",ascending=True).head(10)
df_airbnb_1
df_airbnb_1[df_airbnb_1.room_type == "Shared room"].sort_values(by = "overall_satisfaction",ascending=False)
```
## Using Matplotlib
```
import matplotlib.pyplot as plt
%matplotlib inline
```
### Case 1.
Make a pie chart of the counts of each room type (`room_type`).
```
df_airbnb.room_type.value_counts().plot.pie(
#labels=["AA", "BB"],
autopct="%.2f",
fontsize=12)
```
# ACS Download
## ACS TOOL STEP 1 -> SETUP :
#### Uses: csa2tractcrosswalk.csv, VitalSignsCensus_ACS_Tables.xlsx
#### Creates: ./AcsDataRaw/ ./AcsDataClean/
### Import Modules & Construct Path Handlers
```
import os
import sys
import pandas as pd
pd.set_option('display.expand_frame_repr', False)
pd.set_option('display.precision', 2)
```
### Get Vital Signs Reference Table
```
acs_tables = pd.read_csv('https://raw.githubusercontent.com/bniajfi/bniajfi/main/vs_acs_table_ids.csv')
acs_tables.head()
```
### Get Tract/ CSA CrossWalk
```
file = 'https://raw.githubusercontent.com/bniajfi/bniajfi/main/CSA-to-Tract-2010.csv'
crosswalk = pd.read_csv( file )
crosswalk = dict(zip(crosswalk['TRACTCE10'], crosswalk['CSA2010'] ) )
```
### Get retrieve_acs_data function
```
!pip install dataplay geopandas VitalSigns
import VitalSigns as vs
from VitalSigns import acsDownload
help(acsDownload)
help(vs.acsDownload.retrieve_acs_data)
```
### Column Operations
```
import csv  # for csv.QUOTE_ALL ('quote all') when saving
# Keep the geographic id columns as-is; strip the 12-character variable-id prefix from the rest
def fixColNamesForCSV(x): return str(x)[:] if str(x) in ["NAME","state","county","tract", "CSA"] else str(x)[12:]
```
## ACS TOOL STEP 2 -> Execute :
```
acs_tables.head()
```
### Save the ACS Data
```
# Set Index df.set_index("NAME", inplace = True)
# Save raw to '../../data/3_outputs/acs/raw/'+year+'/'+tableId+'_'+description+'_5y'+year+'_est.csv'
# Tract to CSA df['CSA'] = df.apply(lambda row: crosswalk.get(int(row['tract']), "empty"), axis=1)
# Save 4 use '../../data/2_cleaned/acs/'+tableId+'_'+description+'_5y'+year+'_est.csv'
year = '19'
count = 0
startFrom = 0
state = '24'
county = '510'
tract = '*'
tableId = 'B19001'
saveAcs = True
# For each ACS Table
for x, row in acs_tables.iterrows():
count += 1
# Grab its Meta Data
description = str(acs_tables.loc[x, 'shortname'])
tableId = str(acs_tables.loc[x, 'id'])
yearExists = int(acs_tables.loc[x, year+'_exists'])
# If the Indicator is valid for the year
# use startFrom to being at a specific count
if yearExists and count >= startFrom:
print(str(count)+') '+tableId + ' ' + description)
# retrieve the Python ACS indicator
print('sending retrieve_acs_data', year, tableId)
df = vs.acsDownload.retrieve_acs_data(state, county, tract, tableId, year, saveAcs)
df.set_index("NAME", inplace = True)
# Save the Data as Raw
# We do not want the id in the column names
saveThis = df.rename( columns = lambda x : ( fixColNamesForCSV(x) ) )
saveThis.to_csv('./AcsDataRaw/'+tableId+'_'+description+'_5y'+year+'_est.csv', quoting=csv.QUOTE_ALL)
# Match Tract to CSA
df['CSA'] = df.apply(lambda row: crosswalk.get(int(row['tract']), "empty"), axis=1)
# Save the data (again) as Cleaned for me to use in the next scripts
df.to_csv('./AcsDataClean/'+tableId+'_5y'+year+'_est.csv', quoting=csv.QUOTE_ALL)
```
# ACS Create Indicators
## ACS TOOL STEP 1 -> SETUP :
#### Uses: ./AcsDataClean/ VitalSignsCensus_ACS_Tables.xlsx VitalSignsCensus_ACS_compare_data.xlsm
#### Creates: ./VSData/
### Get Vital Signs Reference Table
```
ls
import pandas as pd  # this notebook runs separately from the download step
file = 'VitalSignsCensus_ACS_Tables.xlsx'
xls = pd.ExcelFile(findFile('./', file))  # findFile: helper from the original workflow, not defined in this excerpt
indicators = pd.read_excel(xls, sheet_name='indicators', index_col=0 )
indicators.head(30)
```
## ACS TOOL STEP 2 -> Execute :
### Create ACS Indicators
#### Settings/ Get Data
```
flag = True;
year = '19'
vsTbl = pd.read_excel(xls, sheet_name=str('vs'+year), index_col=0 )
# Prepare the Compare Historic Data
file = 'VitalSignsCensus_ACS_compare_data.xlsm'
compare_table = pd.read_excel(findFile('./', file), None);
comparable = False
if( str('VS'+year) in compare_table.keys() ):
compare_table = compare_table[str('VS'+year)]
comparable = True
columnsNames = compare_table.iloc[0]
compare_table = compare_table.drop(compare_table.index[0])
compare_table.set_index(['CSA2010'], drop = True, inplace = True)
```
#### Create Indicators
```
# For Each Indicator
for x, row in indicators.iterrows():
# Grab its Meta Data
shortSource = str(indicators.loc[x, 'Short Source'])
shortName = str(indicators.loc[x, 'ShortName'])[:-2]
yearExists = int(float(indicators.loc[x, year+'_exists']))
indicator = str(indicators.loc[x, 'Indicator'])
indicator_number = str(indicators.index.tolist().index(x)+1 )
fileLocation = str(findFile( './', shortName+'.py') )
# If the Indicator is valid for the year, and uses ACS Data, and method exists
flag = True if fileLocation != str('None') else False
flag = True if flag and yearExists else False
flag = True if flag and shortSource in ['ACS', 'Census'] else False
if flag:
print(shortSource, shortName, yearExists, indicator, fileLocation, indicator_number )
# retrieve the Python ACS indicator
module = __import__( shortName )
result = getattr( module, shortName )( year )
# Put Baltimore City at the bottom of the list
idx = result.index.tolist()
idx.pop(idx.index('Baltimore City'))
result = result.reindex(idx+['Baltimore City'])
# Write the results back into the XL dataframe
vsTbl[ str(indicator_number + '_' +shortName ) ] = result
# Save the Data
result.to_csv('./VSData/vs'+ str(year)+'_'+shortName+'.csv')
# drop columns with any empty values
vsTbl = vsTbl.dropna(axis=1, how='any')
# Save the Data
file = 'VS'+str(year)+'_indicators.xlsx'
file = findFile( 'VSData', file)
# writer = pd.ExcelWriter(file)
#vsTbl.to_excel(writer, str(year+'New_VS_Values') )
# Save the Data
vsTbl.to_csv('./VSData/vs'+str(year+'_New_VS_Values')+'.csv')
# Include Historic Data if exist
if( comparable ):
# add historic indicator to excel doc
# compare_table.to_excel(writer,sheet_name = str(year+'Original_VS_Values') )
# compare sets
info = pd.DataFrame()
diff = pd.DataFrame()
simi = pd.DataFrame()
for column in vsTbl:
number = ''
plchld = ''
if str(column[0:3]).isdigit(): plchld = 3
elif str(column[0:2]).isdigit(): plchld = 2
else: number = plchld = 1
number = int(column[0:plchld])
if number == 98: twoNotThree = False;
new = pd.to_numeric(vsTbl[column], downcast='float')
old = pd.to_numeric(compare_table[number], downcast='float', errors='coerce')
info[str(number)+'_Error#_'] = old - new
diff[str(number)+'_Error#_'] = old - new
info[str(number)+'_Error%_'] = old / new
simi[str(number)+'_Error%_'] = old / new
info[str(number)+'_new_'+column[plchld:]] = vsTbl[column]
info[str(number)+'_old_'+columnsNames[number]] = compare_table[number]
#info.to_csv('./VSData/vs_comparisons_'+ str(year)+'.csv')
#diff.to_csv('./VSData/vs_differences_'+ str(year)+'.csv')
# Save the info dataframe
#info.to_excel(writer, str(year+'_ExpandedView') )
# Save the diff dataframe
#diff.to_excel(writer,sheet_name = str(year+'_Error') )
# Save the diff dataframe
#simi.to_excel(writer,sheet_name = str(year+'_Similarity_Ratio') )
info.to_csv('./VSData/vs'+str(year+'_ExpandedView')+'.csv')
diff.to_csv('./VSData/vs'+str(year+'_Error')+'.csv')
simi.to_csv('./VSData/vs'+str(year+'_Similarity_Ratio')+'.csv')
# writer.save()
```
#### Compare Historic Indicators
```
ls
# Quick test
shortName = str('hh25inc')
year = 19
# retrieve the Python ACS indicator
module = __import__( shortName )
result = getattr( module, shortName )( year )
result
# Delete Unassigned--Jail
df = df[df.index != 'Unassigned--Jail']
# Move Baltimore to Bottom
bc = df.loc[ 'Baltimore City' ]
df = df.drop( df.index[1] )
df.loc[ 'Baltimore City' ] = bc
vsTbl['18_fam']
```
| github_jupyter |
# Logarithmic Regularization: Dataset 1
```
# Import libraries and modules
import numpy as np
import pandas as pd
import xgboost as xgb
from xgboost import plot_tree
from sklearn.metrics import r2_score, classification_report, confusion_matrix, \
roc_curve, roc_auc_score, plot_confusion_matrix, f1_score, \
balanced_accuracy_score, accuracy_score, mean_squared_error, \
log_loss
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.linear_model import LogisticRegression, LinearRegression, SGDClassifier, \
Lasso, lasso_path
from sklearn.preprocessing import StandardScaler, LabelBinarizer
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn_pandas import DataFrameMapper
import scipy
from scipy import stats
import os
import shutil
from pathlib import Path
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import albumentations as A
from albumentations.pytorch import ToTensorV2
import cv2
import itertools
import time
import tqdm
import copy
import warnings
import torch
import torchvision
import torchvision.transforms as transforms
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision.models as models
from torch.utils.data import Dataset
import PIL
import joblib
import json
# import mysgd
# Import user-defined modules
import sys
import imp
sys.path.append('/Users/arbelogonzalezw/Documents/ML_WORK/LIBS/Lockdown')
import tools_general as tg
import tools_pytorch as tp
import lockdown as ld
imp.reload(tg)
imp.reload(tp)
imp.reload(ld)
```
## Read, clean, and save data
```
# Read X and y
X = pd.read_csv('/Users/arbelogonzalezw/Documents/ML_WORK/Project_Jerry_Lockdown/dataset_10LungCarcinoma/GDS3837_gene_profile.csv', index_col=0)
dfy = pd.read_csv('/Users/arbelogonzalezw/Documents/ML_WORK/Project_Jerry_Lockdown/dataset_10LungCarcinoma/GDS3837_output.csv', index_col=0)
# Change column names
cols = X.columns.tolist()
for i in range(len(cols)):
cols[i] = cols[i].lower()
cols[i] = cols[i].replace('-', '_')
cols[i] = cols[i].replace('.', '_')
cols[i] = cols[i].strip()
X.columns = cols
cols = dfy.columns.tolist()
for i in range(len(cols)):
cols[i] = cols[i].lower()
cols[i] = cols[i].replace('-', '_')
cols[i] = cols[i].replace('.', '_')
cols[i] = cols[i].strip()
dfy.columns = cols
# Set target
dfy['disease_state'] = dfy['disease_state'].str.replace(' ', '_')
dfy.replace({'disease_state': {"lung_cancer": 1, "control": 0}}, inplace=True)
Y = pd.DataFrame(dfy['disease_state'])
# Split and save data set
xtrain, xvalid, xtest, ytrain, yvalid, ytest = tg.split_data(X, Y)
tg.save_data(X, xtrain, xvalid, xtest, Y, ytrain, yvalid, ytest, 'dataset/')
tg.save_list(X.columns.to_list(), 'dataset/X.columns')
tg.save_list(Y.columns.to_list(), 'dataset/Y.columns')
#
print("- X size: {}\n".format(X.shape))
print("- xtrain size: {}".format(xtrain.shape))
print("- xvalid size: {}".format(xvalid.shape))
print("- xtest size: {}".format(xtest.shape))
```
## Load Data
```
# Select type of processor to be used
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if device == torch.device('cuda'):
print("-Type of processor to be used: 'gpu'")
!nvidia-smi
else:
print("-Type of processor to be used: 'cpu'")
# Choose device
# torch.cuda.set_device(6)
# Read data
X, x_train, x_valid, x_test, Y, ytrain, yvalid, ytest = tp.load_data_clf('dataset/')
cols_X = tg.read_list('dataset/X.columns')
cols_Y = tg.read_list('dataset/Y.columns')
# Normalize data
xtrain, xvalid, xtest = tp.normalize_x(x_train, x_valid, x_test)
# Create dataloaders
dl_train, dl_valid, dl_test = tp.make_DataLoaders(xtrain, xvalid, xtest, ytrain, yvalid, ytest,
tp.dataset_tabular, batch_size=10000)
# NN architecture with its corresponding forward method
class MyNet(nn.Module):
# .Network architecture
def __init__(self, features, layer_sizes):
super(MyNet, self).__init__()
self.classifier = nn.Sequential(
nn.Linear(features, layer_sizes[0], bias=True),
nn.ReLU(inplace=True),
nn.Linear(layer_sizes[0], layer_sizes[1], bias=True)
)
# .Forward function
def forward(self, x):
x = self.classifier(x)
return x
```
## Lockout (Log, beta=0.7)
```
# TRAIN WITH LOCKDOWN
model = MyNet(n_features, n_layers)
model.load_state_dict(torch.load('./model_forward_valid_min.pth'))
model.eval()
regul_type = [('classifier.0.weight', 2), ('classifier.2.weight', 2)]
regul_path = [('classifier.0.weight', True), ('classifier.2.weight', False)]
lockout_s = ld.lockdown(model, lr=1e-2,
regul_type=regul_type,
regul_path=regul_path,
loss_type=2, tol_grads=1e-2)
lockout_s.train(dl_train, dl_valid, dl_test, epochs=5000, early_stop=15, tol_loss=1e-5, epochs2=100000,
train_how="decrease_t0")
# Save model, data
tp.save_model(lockout_s.model_best_valid, 'model_lockout_valid_min_log7_path.pth')
tp.save_model(lockout_s.model_last, 'model_lockout_last_log7_path.pth')
lockout_s.path_data.to_csv('data_lockout_log7_path.csv')
# Relevant plots
df = pd.read_csv('data_lockout_log7_path.csv')
df.plot('iteration', y=['t0_calc__classifier.0.weight', 't0_used__classifier.0.weight'],
figsize=(8,6))
plt.show()
# L1
nn = int(1e2)
data_tmp = pd.read_csv('data_lockout_l1.csv', index_col=0)
xgrid, step = np.linspace(0., 1., num=nn, endpoint=True, retstep=True)
rows = []
for x in xgrid:
    msk = (data_tmp['sparcity__classifier.0.weight'] >= x) & \
          (data_tmp['sparcity__classifier.0.weight'] < x+step)
    rows.append({'sparcity': x,
                 'train_accu': data_tmp.loc[msk, 'train_accu'].mean(),
                 'valid_accu': data_tmp.loc[msk, 'valid_accu'].mean(),
                 'test_accu': data_tmp.loc[msk, 'test_accu'].mean(),
                 't0_used': data_tmp.loc[msk, 't0_used__classifier.0.weight'].mean()})
# DataFrame.append was removed in pandas 2.0; build the frame from a list of rows instead
data_lockout_l1 = pd.DataFrame(rows)
data_lockout_l1.dropna(axis='index', how='any', inplace=True)
# Log, beta=0.7
nn = int(1e2)
data_tmp = pd.read_csv('data_lockout_log7_path.csv', index_col=0)
xgrid, step = np.linspace(0., 1., num=nn, endpoint=True, retstep=True)
rows = []
for x in xgrid:
    msk = (data_tmp['sparcity__classifier.0.weight'] >= x) & \
          (data_tmp['sparcity__classifier.0.weight'] < x+step)
    rows.append({'sparcity': x,
                 'train_accu': data_tmp.loc[msk, 'train_accu'].mean(),
                 'valid_accu': data_tmp.loc[msk, 'valid_accu'].mean(),
                 'test_accu': data_tmp.loc[msk, 'test_accu'].mean(),
                 't0_used': data_tmp.loc[msk, 't0_used__classifier.0.weight'].mean()})
# DataFrame.append was removed in pandas 2.0; build the frame from a list of rows instead
data_lockout_log7 = pd.DataFrame(rows)
data_lockout_log7.dropna(axis='index', how='any', inplace=True)
# Plot
fig, axes = plt.subplots(figsize=(9,6))
axes.plot(n_features*data_lockout_l1.loc[2:, 'sparcity'],
1.0 - data_lockout_l1.loc[2:, 'valid_accu'],
"-", linewidth=4, markersize=10, label="Lockout(L1)",
color="tab:orange")
axes.plot(n_features*data_lockout_log7.loc[3:,'sparcity'],
1.0 - data_lockout_log7.loc[3:, 'valid_accu'],
"-", linewidth=4, markersize=10, label=r"Lockout(Log, $\beta$=0.7)",
color="tab:green")
axes.grid(True, zorder=2)
axes.set_xlabel("number of selected features", fontsize=16)
axes.set_ylabel("Validation Error", fontsize=16)
axes.tick_params(axis='both', which='major', labelsize=14)
axes.set_yticks(np.linspace(5e-3, 4.5e-2, 5, endpoint=True))
# axes.ticklabel_format(axis='y', style='sci', scilimits=(0,0))
axes.set_xlim(0, 54800)
axes.legend(fontsize=16)
plt.tight_layout()
plt.savefig('error_vs_features_log_dataset10.pdf', bbox_inches='tight')
plt.show()
```
<img src="logos/Icos_cp_Logo_RGB.svg" align="right" width="400"> <br clear="all" />
# Visualization of average footprints
For questions and feedback contact ida.storm@nateko.lu.se
To use the tool, <span style="background-color: #FFFF00">run all the Notebook cells</span> (see image below).
<img src="network_characterization/screenshots_for_into_texts/how_to_run.PNG" align="left"> <br clear="all" />
#### STILT footprints
STILT is implemented as an <a href="https://www.icos-cp.eu/data-services/tools/stilt-footprint" target="blank">online tool</a> at the ICOS Carbon Portal. Output footprints are presented on a grid with 1/12 × 1/8 degree cells (approximately 10 km x 10 km), where each cell value represents the cell area’s estimated surface influence (“sensitivity”), in ppm / (μmol/(m²s)), on the atmospheric tracer concentration at the station. Individual footprints are generated every three hours (between 0:00 and 21:00 UTC) and are based on a 10-day backward simulation.
On the ICOS Carbon Portal JupyterHub there are Notebook tools that use STILT footprints, such as <a href="https://exploredata.icos-cp.eu/user/jupyter/notebooks/icos_jupyter_notebooks/station_characterization.ipynb">station characterization</a> and <a href="https://exploredata.icos-cp.eu/user/jupyter/notebooks/icos_jupyter_notebooks/network_characterization.ipynb">network characterization</a>. The station characterization tool includes a visualization method that aggregates the surface influence into bins for different directions and distance intervals from the station. Without aggregation of the surface influence it can be difficult to understand to what degree different regions' fluxes can be expected to influence the concentration at the station. This tool presents an additional method to aggregate and visualize the surface influence.
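The direction-and-distance binning mentioned above can be sketched roughly as follows. This is a toy grid and bin layout of my own, not the tool's actual implementation; the real tool works on the 1/12 × 1/8 degree STILT grid, and the 10 km spacing and quadrant/ring choices here are assumptions for illustration:

```python
import numpy as np

# Toy 5x5 sensitivity grid centred on the station
rng = np.random.default_rng(0)
sens = rng.random((5, 5))
y, x = np.mgrid[-2:3, -2:3] * 10.0            # cell-centre offsets in km, 10 km spacing

dist = np.hypot(x, y)                          # distance of each cell from the station
angle = (np.degrees(np.arctan2(x, y)) + 360) % 360   # bearing, 0 deg = north

sectors = (angle // 90).astype(int)            # four direction sectors (quadrants)
rings = (dist > 15).astype(int)                # two distance intervals: inner / outer

# Sum the surface influence falling into each (sector, ring) bin
agg = np.zeros((4, 2))
for s in range(4):
    for r in range(2):
        agg[s, r] = sens[(sectors == s) & (rings == r)].sum()
```

Every cell lands in exactly one bin, so the binned totals preserve the overall sensitivity while making the directional structure much easier to read.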
Example of an average footprint for Hyltemossa (a tall-tower atmospheric station in Sweden, sampling at 150 meters), displayed on a logarithmic scale:
<img src="network_characterization/screenshots_for_into_texts/hyltemossa_2018_log.PNG" align="left" width="400"> <br clear="all" />
#### Percent of the footprint sensitivity
This tool generates a map with decreasing color intensity from 10% to 90% of the footprint sensitivity. This is achieved by including the footprint cells' sensitivity values in descending order until 10%, 20%, 30%, etc. of the total is reached. Within the Carbon Portal tools, this matters for understanding results from the network characterization tool, where the user decides what percent of the footprint should be used for the analysis: because of the 10-day backward simulation, a footprint can have a very large extent, and when averaging many footprints almost the entire STILT model domain (Europe) will have influenced the concentration at the station at some point in time. Choosing a percent threshold limits the footprint coverage to the areas with the most significant influence. This tool shows how that choice affects the result.
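The descending-order accumulation described above can be sketched like this (a toy 4×4 grid of made-up sensitivities, not the actual tool code):

```python
import numpy as np

# Toy average-footprint grid of sensitivities
fp = np.array([[0.0, 1.0, 0.5, 0.1],
               [2.0, 8.0, 3.0, 0.2],
               [0.5, 4.0, 1.5, 0.1],
               [0.1, 0.2, 0.3, 0.0]])

def percent_mask(fp, percent):
    """Mask of the highest-sensitivity cells that together hold `percent` of the total."""
    flat = fp.ravel()
    order = np.argsort(flat)[::-1]                 # cells in descending sensitivity
    csum = np.cumsum(flat[order])
    k = np.searchsorted(csum, percent / 100.0 * flat.sum()) + 1
    mask = np.zeros(flat.size, dtype=bool)
    mask[order[:k]] = True
    return mask.reshape(fp.shape)

mask50 = percent_mask(fp, 50)                      # cells covering 50% of the sensitivity
```

Plotting such masks for 10%, 20%, …, 90% gives the decreasing-intensity map: raising the threshold grows the covered area outward from the highest-influence cells.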
Example using the 2018 average footprint for Hyltemossa:
<img src="network_characterization/screenshots_for_into_texts/hyltemossa_2018.PNG" align="left" width="400"> <br clear="all" />
```
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
import sys
sys.path.append('./network_characterization')
import gui_percent_aggregate_footprints
```
<a href="https://colab.research.google.com/github/strangelycutlemon/DS-Unit-1-Sprint-1-Dealing-With-Data/blob/master/module4-makefeatures/LS_DS_114_Make_Features_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
<img align="left" src="https://lever-client-logos.s3.amazonaws.com/864372b1-534c-480e-acd5-9711f850815c-1524247202159.png" width=200>
# ASSIGNMENT
- Replicate the lesson code.
- This means that if you haven't followed along already, type out the things that we did in class. Forcing your fingers to hit each key will help you internalize the syntax of what we're doing.
- [Lambda Learning Method for DS - By Ryan Herr](https://docs.google.com/document/d/1ubOw9B3Hfip27hF2ZFnW3a3z9xAgrUDRReOEo-FHCVs/edit?usp=sharing)
- Convert the `term` column from string to integer.
- Make a column named `loan_status_is_great`. It should contain the integer 1 if `loan_status` is "Current" or "Fully Paid." Else it should contain the integer 0.
- Make `last_pymnt_d_month` and `last_pymnt_d_year` columns.
```
##### Begin Working Here #####
# Get data
!wget https://resources.lendingclub.com/LoanStats_2018Q4.csv.zip
!unzip LoanStats_2018Q4.csv.zip
!head LoanStats_2018Q4.csv
!tail LoanStats_2018Q4.csv
import pandas as pd
# Set Display options
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
df = pd.read_csv('LoanStats_2018Q4.csv', header=1, skipfooter=2, engine='python')
print(df.shape)
df.head()
df.tail()
df.isnull().sum().sort_values(ascending=False)
df = df.drop(columns=['id', 'member_id', 'desc', 'url'], axis='columns')
df.dtypes
# remove percent signs and convert to floats
type(df['int_rate'])
df['int_rate']
int_rate = '15.02%'
int_rate[:-1]
int_list = ['15.02%', '13.56%', '16.91%']
int_list[:2]
int_rate.strip('%')
type(int_rate.strip('%'))
float(int_rate.strip('%'))
type(float(int_rate.strip('%')))
def remove_percent_to_float(string):
return float(string.strip('%'))
int_list = ['15.02%', '13.56%', '16.91%']
[remove_percent_to_float(item) for item in int_list]
df['int_rate'] = df['int_rate'].apply(remove_percent_to_float)
df.head()
df['emp_title'].value_counts(dropna=False).head(20)
df['emp_title'].value_counts(dropna=False).reset_index().shape
df.describe(exclude='number')
df['emp_title'].isnull().sum()
import numpy as np
type(np.NaN)
examples = ['owner', 'Supervisor', 'Project Manager', np.NaN]
def clean_title(item):
if isinstance(item, str):
return item.strip().title()
else:
return "Unknown"
[clean_title(item) for item in examples]
df['emp_title'] = df['emp_title'].apply(clean_title)
df.head()
df['emp_title'].value_counts(dropna=False).head(20)
df['emp_title'].describe(exclude='number')
df['emp_title'].nunique()
df['emp_title_manager'] = True
print(df['emp_title_manager'])
df['emp_title_manager'] = df['emp_title'].str.contains("Manager")
df.head()
condition = (df['emp_title_manager'] == True)
managers = df[condition]
print(managers.shape)
managers.head()
managers = df[df['emp_title'].str.contains('Manager')]
print(managers.shape)
managers.head()
plebians = df[df['emp_title_manager'] == False]
print(plebians.shape)
plebians.head()
managers['int_rate'].hist(bins=20);
plebians['int_rate'].hist(bins=20);
managers['int_rate'].plot.density()
```
```
plebians['int_rate'].plot.density()
managers['int_rate'].plot.density()
managers['int_rate'].plot.density()
managers['int_rate'].mean()
plebians['int_rate'].mean()
df['issue_d']
df['issue_d'].describe()
df['issue_d'].value_counts()
df.dtypes
df['issue_d'] = pd.to_datetime(df['issue_d'], infer_datetime_format=True)
df['issue_d'].head().values
df.dtypes
df['issue_d'].dt.year
df['issue_d'].dt.month
df['issue_year'] = df['issue_d'].dt.year
df['issue_month'] = df['issue_d'].dt.month
df.head()
[col for col in df if col.endswith('_d')]
df['earliest_cr_line'].head()
df['earliest_cr_line'] = pd.to_datetime(df['earliest_cr_line'],
infer_datetime_format=True)
df['days_from_earliest_credit_to_issue'] = (df['issue_d'] - df['earliest_cr_line']).dt.days
df['days_from_earliest_credit_to_issue'].describe()
# convert 'term' column to int dtype
def remove_months(string):
    return int(string.strip(' months'))
df['term'] = df['term'].apply(remove_months)
df['term'].head()
# make loan_status_is_great column
# def great_or_not(stringy):
# if stringy.str.contains('Current|Fully Paid', regex=True):
# return True
# else:
# return False
# df["loan_status_is_great"] = df["loan_status"].str.contains("Current|Fully Paid")
df['loan_status_is_great'] = [1 if x in ['Current','Fully Paid'] else 0 for x in df['loan_status']]
# lambda x: True if df['loan_status'].str.contains('Current|Fully Paid', regex=True)
df['loan_status_is_great'].head()
df['last_pymnt_d'] = pd.to_datetime(df['last_pymnt_d'], infer_datetime_format=True)
df['last_pymnt_d_month'] = df['last_pymnt_d'].dt.month
df['last_pymnt_d_year'] = df['last_pymnt_d'].dt.year
df.dtypes
df.head()
```
# STRETCH OPTIONS
You can do more with the LendingClub or Instacart datasets.
LendingClub options:
- There's one other column in the dataframe with percent signs. Remove them and convert to floats. You'll need to handle missing values.
- Modify the `emp_title` column to replace titles with 'Other' if the title is not in the top 20.
- Take initiative and work on your own ideas!
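For the first LendingClub option, a hedged sketch of the pattern on a stand-in Series — the column name `revol_util` is an assumption on my part, so check the dataframe for the column that actually contains percent signs:

```python
import numpy as np
import pandas as pd

# Stand-in for a column like revol_util, which mixes percent strings and NaN
s = pd.Series(['29.3%', np.nan, '0.0%', '87.5%'])

def percent_to_float(x):
    # Leave missing values missing instead of crashing on .strip()
    return float(str(x).strip('%')) if pd.notnull(x) else np.nan

cleaned = s.apply(percent_to_float)
```

The only difference from `remove_percent_to_float` earlier in the notebook is the `pd.notnull` guard, which is what the "handle missing values" hint is about.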
Instacart options:
- Read [Instacart Market Basket Analysis, Winner's Interview: 2nd place, Kazuki Onodera](http://blog.kaggle.com/2017/09/21/instacart-market-basket-analysis-winners-interview-2nd-place-kazuki-onodera/), especially the **Feature Engineering** section. (Can you choose one feature from his bulleted lists, and try to engineer it with pandas code?)
- Read and replicate parts of [Simple Exploration Notebook - Instacart](https://www.kaggle.com/sudalairajkumar/simple-exploration-notebook-instacart). (It's the Python Notebook with the most upvotes for this Kaggle competition.)
- Take initiative and work on your own ideas!
You can uncomment and run the cells below to re-download and extract the Instacart data
```
# !wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
# !tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
# %cd instacart_2017_05_01
```
Foursquare test
```
import requests # library to handle requests
import pandas as pd # library for data analysis
import numpy as np # library to handle data in a vectorized manner
import random # library for random number generation
!conda install -c conda-forge geopy --yes
from geopy.geocoders import Nominatim # module to convert an address into latitude and longitude values
# libraries for displaying images
from IPython.display import Image
from IPython.core.display import HTML
# tranforming json file into a pandas dataframe library
from pandas.io.json import json_normalize
!conda install -c conda-forge folium=0.5.0 --yes
import folium # plotting library
import json
import os
import branca
import io
#import plotly.graph_objs as go
#from plotly.offline import plot
print('Folium installed')
print('Libraries imported.')
#CLIENT_ID = 'T1F14ZSMPK4DKKGTZASK3MLJU0RMKYNKQXTIYYW3SSTJHY4W' # your Foursquare ID
#CLIENT_SECRET = 'ATJZAHQ1BW4ODDIPDNVPTR1D3ZTLPVNQAGBTJX1JMUIMKQLE' # your Foursquare Secret
VERSION = '20180604'
#print('Your credentails:')
#print('CLIENT_ID: ' + CLIENT_ID)
#print('CLIENT_SECRET:' + CLIENT_SECRET)
url ='https://api.foursquare.com/v2/venues/search?client_id=T1F14ZSMPK4DKKGTZASK3MLJU0RMKYNKQXTIYYW3SSTJHY4W&client_secret=ATJZAHQ1BW4ODDIPDNVPTR1D3ZTLPVNQAGBTJX1JMUIMKQLE&ll=-8.0615379,-34.895044&&v=20180604&categoryId=4bf58dd8d48988d104941735'
results = requests.get(url).json()
results
# assign relevant part of JSON to venues
venues = results['response']['venues']
# tranform venues into a dataframe
dataframe = json_normalize(venues)
dataframe.head()
dataframe.info()
# keep only columns that include venue name, and anything that is associated with location
filtered_columns = ['name', 'categories'] + [col for col in dataframe.columns if col.startswith('location.')] + ['id']
dataframe_filtered = dataframe.loc[:, filtered_columns]
# function that extracts the category of the venue
def get_category_type(row):
try:
categories_list = row['categories']
except:
categories_list = row['venue.categories']
if len(categories_list) == 0:
return None
else:
return categories_list[0]['name']
# filter the category for each row
dataframe_filtered['categories'] = dataframe_filtered.apply(get_category_type, axis=1)
# clean column names by keeping only last term
dataframe_filtered.columns = [column.split('.')[-1] for column in dataframe_filtered.columns]
dataframe_filtered
dataframe_filtered.head()
dataframe_filtered.name
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Regression: predict fuel efficiency
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/keras/basic_regression"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/keras/basic_regression.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
In a *regression* problem, we aim to predict the output of a continuous value, like a price or a probability. Contrast this with a *classification* problem, where we aim to select a class from a list of classes (for example, where a picture contains an apple or an orange, recognizing which fruit is in the picture).
This notebook uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset and builds a model to predict the fuel efficiency of late-1970s and early-1980s automobiles. To do this, we'll provide the model with a description of many automobiles from that period. This description includes attributes like cylinders, displacement, horsepower, and weight.
This example uses the `tf.keras` API, see [this guide](https://www.tensorflow.org/guide/keras) for details.
```
# Use seaborn for pairplot
!pip install seaborn
from __future__ import absolute_import, division, print_function
import pathlib
import pandas as pd
import seaborn as sns
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## The Auto MPG dataset
The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).
### Get the data
First download the dataset.
```
dataset_path = keras.utils.get_file("auto-mpg.data", "https://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
```
Import it using pandas
```
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
```
### Clean the data
The dataset contains a few unknown values.
```
dataset.isna().sum()
```
To keep this initial tutorial simple, drop those rows.
```
dataset = dataset.dropna()
```
The `"Origin"` column is really categorical, not numeric. So convert that to a one-hot:
```
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
```
### Split the data into train and test
Now split the dataset into a training set and a test set.
We will use the test set in the final evaluation of our model.
```
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
### Inspect the data
Have a quick look at the joint distribution of a few pairs of columns from the training set.
```
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Also look at the overall statistics:
```
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
### Split features from labels
Separate the target value, or "label", from the features. This label is the value that you will train the model to predict.
```
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
### Normalize the data
Look again at the `train_stats` block above and note how different the ranges of each feature are.
It is good practice to normalize features that use different scales and ranges. Although the model *might* converge without feature normalization, it makes training more difficult, and it makes the resulting model dependent on the choice of units used in the input.
Note: Although we intentionally generate these statistics from only the training dataset, these statistics will also be used to normalize the test dataset. We need to do that to project the test dataset into the same distribution that the model has been trained on.
```
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
This normalized data is what we will use to train the model.
Caution: The statistics used to normalize the inputs here (mean and standard deviation) need to be applied to any other data that is fed to the model, along with the one-hot encoding that we did earlier. That includes the test set as well as live data when the model is used in production.
## The model
### Build the model
Let's build our model. Here, we'll use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model-building steps are wrapped in a function, `build_model`, since we'll create a second model later on.
```
def build_model():
model = keras.Sequential([
layers.Dense(64, activation=tf.nn.relu, input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation=tf.nn.relu),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = build_model()
```
### Inspect the model
Use the `.summary` method to print a simple description of the model.
```
model.summary()
```
Now try out the model. Take a batch of `10` examples from the training data and call `model.predict` on it.
```
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
It seems to be working, and it produces a result of the expected shape and type.
### Train the model
Train the model for 1000 epochs, and record the training and validation metrics in the `history` object.
```
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
```
Visualize the model's training progress using the stats stored in the `history` object.
```
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
import matplotlib.pyplot as plt
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mean_absolute_error'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_absolute_error'],
label = 'Val Error')
plt.legend()
plt.ylim([0,5])
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mean_squared_error'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mean_squared_error'],
label = 'Val Error')
plt.legend()
plt.ylim([0,20])
plot_history(history)
```
This graph shows little improvement, or even degradation, in the validation error after about 100 epochs. Let's update the `model.fit` call to automatically stop training when the validation score doesn't improve. We'll use an *EarlyStopping callback* that tests a training condition after every epoch. If a set number of epochs elapses without improvement, training stops automatically.
You can learn more about this callback [here](https://www.tensorflow.org/versions/master/api_docs/python/tf/keras/callbacks/EarlyStopping).
```
model = build_model()
# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
```
The graph shows that on the validation set, the average error is usually around +/- 2 MPG. Is this good? We'll leave that decision up to you.
Let's see how well the model generalizes by using the **test** set, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world.
```
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=0)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
```
### Make predictions
Finally, predict MPG values using data in the testing set:
```
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
```
It looks like our model predicts reasonably well. Let's take a look at the error distribution.
```
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
It's not quite Gaussian, but we might expect that because the number of samples is very small.
## Conclusion
This notebook introduced a few techniques to handle a regression problem.
* Mean Squared Error (MSE) is a common loss function used for regression problems (different loss functions are used for classification problems).
* Similarly, evaluation metrics used for regression differ from classification. A common regression metric is Mean Absolute Error (MAE).
* When numeric input data features have values with different ranges, each feature should be scaled independently to the same range.
* If there is not much training data, one technique is to prefer a small network with few hidden layers to avoid overfitting.
* Early stopping is a useful technique to prevent overfitting.
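As a quick illustration of the first two points, both metrics can be computed directly from predictions and labels; the numbers below are made up for the example:

```python
import numpy as np

# hypothetical true labels and model predictions, in MPG
y_true = np.array([22.0, 31.0, 18.5, 27.0])
y_pred = np.array([21.0, 33.0, 17.0, 27.5])

mae = np.mean(np.abs(y_pred - y_true))   # mean absolute error
mse = np.mean((y_pred - y_true) ** 2)    # mean squared error
print(mae, mse)  # 1.25 1.875
```

This is exactly what `model.compile(loss='mse', metrics=['mae', 'mse'])` tracks for us during training.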
# Overview
Networks (a.k.a. graphs) are widely used mathematical objects for representing and analysing social systems.
This week is about getting familiar with networks, and we'll focus on four main aspects:
* Basic mathematical description of networks
* The `NetworkX` library.
* Building the network of GME redditors.
* Basic analysis of the network of GME redditors.
```
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
import networkx as nx
import scipy
```
# Part 1: Basic mathematical description of networks
This week, let's start with some lecturing. You will watch some videos made by Sune for his course _Social Graphs and Interactions_, where he covers networks in detail.
> **_Video Lecture_**. Start by watching the ["History of Networks"](https://youtu.be/qjM9yMarl70).
```
from IPython.display import YouTubeVideo
YouTubeVideo("qjM9yMarl70",width=800, height=450)
```
> **_Video Lecture_**. Then check out a few comments on ["Network Notation"](https://youtu.be/MMziC5xktHs).
```
YouTubeVideo("MMziC5xktHs",width=800, height=450)
```
> _Reading_. We'll be reading the textbook _Network Science_ (NS) by Laszlo Barabasi. You can read the whole
> thing for free [**here**](http://barabasi.com/networksciencebook/).
>
> * Read chapter 1\.
> * Read chapter 2\.
>
> _Exercises_
> _Chapter 1_ (Don't forget that you should be answering these in a Jupyter notebook.)
>
> * List three different real networks and state the nodes and links for each of them.
>
><b> Answer: </b> Facebook (nodes = people, links = friendships), bus routes (nodes = stops, links = buses running between stops), the power grid (nodes = power plants, links = cables)
>
> * Tell us of the network you are personally most interested in. Address the following questions:
> * What are its nodes and links?
> * How large is it?
> * Can it be mapped out?
> * Why do you care about it?
><b> Answer: </b> A network of interest could be the courses at DTU, where the nodes are the courses and the links are prerequisites (required courses have a directed link to the course). Its size is the number of courses at DTU, and it can be mapped out. I care about it because it shows which courses give access to the most new courses, and which courses are needed for a specific course
> * In your view what would be the area where network science could have the biggest impact in the next decade? Explain your answer - and base it on the text in the book.
>
> _Chapter 2_
>
> * Section 2.5 states that real networks are sparse. Can you think of a real network where each node has _many_ connections? Is that network still sparse? If yes, can you explain why?
><b> Answer: </b> A network of people who know each other: each node is a person with many connections (often hundreds), yet the network is still sparse, because each person knows only a tiny fraction of all the people in the world
> There are more questions on Chapter 2 below.
>
# Part 2: Exercises using the `NetworkX` library
We will analyse networks in Python using the [NetworkX](https://networkx.org/) library. The cool thing about networkx is that it includes a lot of algorithms and metrics for analysing networks, so you don't have to code things from scratch. Get started by running the magic ``pip install networkx`` command. Then, get familiar with the library through the following exercises:
> *Exercises*:
> * Go to the NetworkX project's [tutorial page](https://networkx.org/documentation/stable/tutorial.html). The goal of this exercise is to create your own notebook that contains the entire tutorial. You're free to add your own (e.g. shorter) comments in place of the ones in the official tutorial - and change the code to make it your own wherever it makes sense.
> * Go to Section 2.12: [Homework](http://networksciencebook.com/chapter/2#homework2), then
> * Write the solution for exercise 2.1 (the 'Königsberg Problem') from NS in your notebook.
> * Solve exercise 2.3 ('Graph representation') from NS using NetworkX in your notebook. (You don't have to solve the last sub-question about cycles of length 4 ... but I'll be impressed if you do it).
> * Solve exercise 2.5 ('Bipartite Networks') from NS using NetworkX in your notebook.
### NetworkX tutorial
See the separate notebook.
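As a warm-up, here is a minimal sketch of the core NetworkX workflow the tutorial walks through (creating a graph, adding nodes and edges, and basic queries):

```python
import networkx as nx

G = nx.Graph()                      # an empty undirected graph
G.add_nodes_from([1, 2, 3])
G.add_edges_from([(1, 2), (2, 3)])

print(G.number_of_nodes())          # 3
print(G.number_of_edges())          # 2
print(sorted(G.neighbors(2)))       # [1, 3]
print(G.degree(2))                  # 2
```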
### Königsberg Problem
Which of the icons in Image 2.19 can be drawn without raising your pencil from the paper, and without drawing any line more than once? Why?
> a) only two nodes have an odd number of edges, so they must be the start and finish <br>
> c) all nodes have an even number of edges <br>
> d) only two nodes have an odd number of edges
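These parity arguments can also be checked mechanically: NetworkX provides `has_eulerian_path`, which (for a connected graph) is true exactly when at most two nodes have odd degree. A sketch, using the classic Königsberg bridges plus a hypothetical drawable figure:

```python
import networkx as nx

# the seven bridges of Königsberg as a multigraph (4 land masses, 7 bridges)
K = nx.MultiGraph([("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
                   ("A", "D"), ("B", "D"), ("C", "D")])
print(nx.has_eulerian_path(K))  # False: all four nodes have odd degree

# a square with one diagonal: exactly two odd-degree nodes, so it is drawable
S = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])
print(nx.has_eulerian_path(S))  # True
```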
### Graph Representation
The adjacency matrix is a useful graph representation for many analytical calculations. However, when we need to store a network in a computer, we can save memory by storing the list of links in an L×2 matrix, whose rows contain the starting point i and end point j of each link. Construct these for networks (a) and (b) in Image 2.20:
```
nodes = np.arange(1,7)
edges = [(1,2), (2, 3), (2,4), (3,1), (3,2), (4,1), (6,1), (6,3)]
G1 = nx.DiGraph()
G1.add_nodes_from(nodes)
G1.add_edges_from(edges)
G2 = nx.Graph(G1)
plt.figure(figsize = (10, 5))
plt.subplot(121)
nx.draw_shell(G2, with_labels = True)
plt.subplot(122)
nx.draw_shell(G1, with_labels = True)
bold = "\033[1m"
end = '\033[0m'
print(bold,"Adjacency matrix for undirected: \n", end, nx.linalg.graphmatrix.adjacency_matrix(G2).todense(), "\n")
print(bold,"Adjacency matrix for directed: \n", end, nx.linalg.graphmatrix.adjacency_matrix(G1).todense(), "\n")
print(bold, "Linked list for undirected: ", end, G2.edges, "\n")
print(bold, "Linked list for directed: ", end, G1.edges, "\n")
print(bold, "Average clustering coefficient for undirected: ", end, nx.average_clustering(G2), "\n")
print(bold, "Swapping 5 and 6: ", end)
print("Swapping the labels of nodes 5 and 6 in the undirected graph will swap the rows and columns of 5 and 6 in the adjacency matrix, and in the linked list all 5's are replaced with 6 and vice versa\n")
print(bold, "Adjacency matrix vs linked list: ", end)
print("In a linked list we will not have information about node 5 being present, as it has no edges, but it will appear in the adjacency matrix\n")
print(bold,"Paths of length 3", end)
A1 = nx.to_numpy_matrix(G2)
A2 = nx.to_numpy_matrix(G1)
# see for explanation of matrix stuff: https://quickmathintuitions.org/finding-paths-length-n-graph/
print(f"There are {(A1@A1@A1)[0,2]} paths of length 3 from 1 to 3 in the undirected graph")
print(f"There are {(A2@A2@A2)[0,2]} paths of length 3 from 1 to 3 in the directed graph\n")
cycles = [x for x in list(nx.simple_cycles(G2.to_directed())) if len(x) == 4]
print(bold, "Number of cycles with length 4: ", end, len(cycles))
```
### Bipartite networks
```
nodes = np.arange(1, 12)
edges = [(1,7), (2,9), (3,7), (3,8), (3,9), (4,9), (4,10), (5,9), (5,11), (6,11)]
G = nx.Graph()
G.add_nodes_from(nodes)
G.add_edges_from(edges)
color_map = ["green" if x >=7 else "purple" for x in range(1, 12)]
X, Y = nx.bipartite.sets(G)
pos = dict()
pos.update( (n, (1, i)) for i, n in enumerate(X) ) # put nodes from X at x=1
pos.update( (n, (2, i)) for i, n in enumerate(Y) ) # put nodes from Y at x=2
nx.draw(G, pos=pos, with_labels = True, node_color = color_map)
plt.show()
bold = "\033[1m"
end = '\033[0m'
print(bold,"Adjacency matrix:\n", end, nx.linalg.graphmatrix.adjacency_matrix(G).todense(), "\n")
print("Block diagonal, as nodes < 7 are not connected with each other and nodes > 6 are not connected with each other\n")
purp_proj = nx.algorithms.bipartite.projected_graph(G, list(G.nodes)[:6])
green_proj = nx.algorithms.bipartite.projected_graph(G, list(G.nodes)[6:]) # green nodes 7-11
print(bold, "Projections", end)
print("Adjacency matrix of purple projection:\n", nx.to_numpy_matrix(purp_proj))
print("Adjacency matrix of green projection:\n", nx.to_numpy_matrix(green_proj), "\n")
print(bold, "Average degree", end)
print("Average degree of purple nodes: ", sum([G.degree[i] for i in range(1, 7)])/6)
print("Average degree of green nodes: ", sum([G.degree[i] for i in range(7, 12)])/5, "\n")
print(bold, "Average degree in projections", end)
print("Average degree in purple projection: ", sum([purp_proj.degree[i] for i in range(1, 7)])/6)
print("Average degree in green projection: ", sum([green_proj.degree[i] for i in range(7, 12)])/5)
```
# Part 3: Building the GME redditors network
Ok, enough with theory :) It is time to go back to the cool dataset that took us so much pain to download! And guess what? We will build the network of GME redditors. Then, we will use some network science to study its properties.
>
> *Exercise*: Build the network of redditors discussing GME on r\wallstreetbets. In this network, nodes correspond to authors of comments, and a directed link from node _A_ to node _B_ exists if _A_ ever answered a submission or a comment by _B_. The weight on the link corresponds to the number of times _A_ answered _B_. You can build the network as follows:
>
> 1. Open the _comments dataset_ and the _submissions dataset_ (the first contains all the comments and the second contains all the submissions) and store them in two Pandas DataFrames.
> 2. Create three dictionaries, using the command ``dict(zip(keys,values))``, where keys and values are columns in your dataframes. The three dictionaries are the following:
> * __comment_authors__: (_comment id_, _comment author_)
> * __parent__: (_comment id_ , _parent id_)
> * __submission_authors__: (_submission id_, _submission author_)
>
> where above I indicated the (key, value) tuples contained in each dictionary.
>
> 3. Create a function that takes as input a _comment id_ and outputs the author of its parent. The function does two things:
> * First, it calls the dictionary __parent__, to find the _parent id_ of the comment identified by a given _comment id_.
> * Then, it finds the author of _parent id_.
> * if the _parent id_ starts with "t1_", call the __comment_authors__ dictionary (for key=parent_id[3:])
> * if the _parent id_ starts with "t3_", call the __submission_authors__ dictionary (for key=parent_id[3:])
>
> where by parent_id[3:], I mean that the first three characters of the _parent id_ (either "t1_" or "t3_") should be ignored.
>
> 4. Apply the function you created in step 3. to all the comment ids in your comments dataframe. Store the output in a new column, _"parent author"_, of the comments dataframe.
> 5. For now, we will focus on the genesis of the GME community on Reddit, before all the hype started and many new redditors jumped on board. For this reason, __filter all the comments written before Dec 31st, 2020__. Also, remove deleted users by filtering all comments whose author or parent author is equal to "[deleted]".
> 6. Create the weighted edge-list of your network as follows: consider all comments (after applying the filtering step above), groupby ("_author_", _"parent author"_) and count.
> 7. Create a [``DiGraph``](https://networkx.org/documentation/stable//reference/classes/digraph.html) using networkx. Then, use the networkx function [``add_weighted_edges_from``](https://networkx.org/documentation/networkx-1.9/reference/generated/networkx.DiGraph.add_weighted_edges_from.html) to create a weighted, directed, graph starting from the edgelist you created in step 5.
```
# data
comments_org = pd.read_csv("Data/week1/gme_reddit_comments.csv", parse_dates = ["creation_date"])
submissions_org = pd.read_csv("Data/week1/gme_reddit_submissions.csv", parse_dates = ["creation_date"])
# dictionaries
comment_authors = dict(zip(comments_org["id"], comments_org["author"]))
parent = dict(zip(comments_org["id"], comments_org["parent_id"]))
submissions_authors = dict(zip(submissions_org["id"], submissions_org["author"]))
# function for getting author of parent id
def get_parent_author(comment_id):
parent_id = parent[comment_id]
t_parent_id = parent_id[:3]
parent_id = parent_id[3:]
try:
if t_parent_id == "t1_":
return comment_authors[parent_id]# if parent_id in comment_authors.keys else None
elif t_parent_id == "t3_":
return submissions_authors[parent_id]
else:
return -1
except KeyError:
return -1
# create parent_author column in comments dataframe
comments = comments_org
comments["parent_author"] = list(map(get_parent_author, comments["id"])) #get_parent_author(comments.id)
# remove unwanted authors
comments = comments[comments.parent_author != -1] # remove rows with keyerror (around 14k rows)
comments = comments[comments.creation_date <= "2020-12-31"] # remove comments from after 31/12-2020
comments = comments[(comments.author != "[deleted]") & (comments.parent_author != "[deleted]")]# remove deleted users
#Create the weighted edge-list of your network
comments_network = comments.groupby(["author", "parent_author"]).size()
comments_network = comments_network.reset_index()
comments_network.columns = ["author", "parent_author","weight"]
comments_network.to_csv("Data/week4/comments_network.csv", index = False)
# plot a subset of the users to view graph
comments1 = comments_network[:10]
G_subset = nx.from_pandas_edgelist(comments1, "author", "parent_author", "weight", create_using = nx.DiGraph())
nx.draw_shell(G_subset, with_labels = True)
plt.show()
# create the complete graph - is the direction correct? should it go from parent_author to author following a tree structure?
G = nx.from_pandas_edgelist(comments_network, "author", "parent_author", "weight", create_using = nx.DiGraph())
```
# Part 4: Preliminary analysis of the GME redditors network
We begin with a preliminary analysis of the network.
>
> *Exercise: Basic Analysis of the Redditors Network*
> * Why do you think I want you guys to use a _directed_ graph? Could we have used an undirected graph instead?
> * What is the total number of nodes in the network? What is the total number of links? What is the density of the network (the total number of links over the maximum number of links)?
> * What are the average, median, mode, minimum and maximum value of the in-degree (number of incoming edges per redditor)? And of the out-degree (number of outgoing edges per redditor)? How do you interpret the results?
> * List the top 5 Redditors by in-degree and out-degree. What is their average score over time? At which point in time did they join the discussion on GME? When did they leave it?
> * Plot the distribution of in-degrees and out-degrees, using a logarithmic binning (see last week's exercise 4).
> * Plot a scatter plot of the in- versus out-degree for all redditors. Comment on the relation between the two.
> * Plot a scatter plot of the in-degree versus average score for all redditors. Comment on the relation between the two.
<b> Answers </b>
> * The graph has a direction to indicate which users commented on which other users' posts. All children of a node are the users who have replied to that node's posts or comments
## Stats
```
# answers to question 2 and 3
bold = "\033[1m"
end = '\033[0m'
N_nodes = len(G.nodes)
N_links = len(G.edges)
max_links = N_nodes*(N_nodes-1) # directed graph: ordered pairs of distinct nodes
network_density = N_links/max_links
in_degrees = list(dict(G.in_degree()).values())
out_degrees = list(dict(G.out_degree()).values())
print(bold, "Number of nodes in the network: ", end, N_nodes)
print(bold, "The total number of links: ", end, N_links)
print(bold, "Max/potential number of links: ", end, max_links)
print(bold, "Density of the network: ", end, network_density)
# stats for degrees
print(bold, "Stats for in degree of network:",end)
print("\tMean = ", np.mean(in_degrees))
print("\tMedian = ", np.median(in_degrees))
print("\tMode = ", max(in_degrees, key = in_degrees.count))
print("\tMin = ", min(in_degrees))
print("\tMax = ", max(in_degrees))
print(bold, "Stats for out degree of network:",end)
print("\tMean = ", np.mean(out_degrees))
print("\tMedian = ", np.median(out_degrees))
print("\tMode = ", max(out_degrees, key = out_degrees.count))
print("\tMin = ", min(out_degrees))
print("\tMax = ", max(out_degrees))
```
## Top redditors
```
top = 5 #number of user to rank
top_redditors = comments.groupby(["author"]).agg({'score' : ['mean'], 'creation_date':['min', 'max']})
top_redditors.columns = ["avg_score", "date_joined", "date_left"]
top_redditors["days_active"] = top_redditors["date_left"] - top_redditors["date_joined"]
top_redditors = top_redditors.join(pd.DataFrame(G.in_degree(), columns = ["author", "in_degree"]).set_index("author"), how = "left")
top_redditors = top_redditors.join(pd.DataFrame(G.out_degree(), columns = ["author", "out_degree"]).set_index("author"), how = "left")
top_redditors = top_redditors.reset_index()
display(f"Top {top} redditors by in-degree: ", top_redditors.sort_values("in_degree", ascending=False)[:top])
display(f"Top {top} redditors by out-degree: ", top_redditors.sort_values("out_degree", ascending=False)[:top])
```
## Plots
```
fig, ax = plt.subplots(2, 2, figsize = (15,10))
# fig.tight_layout(h_pad=5, v_pad = 3)
fig.subplots_adjust(left=None, bottom=None, right=None, top=None, wspace=.2, hspace=.3)
# distribution of in degree
# min(top_redditors.in_degree), max(top_redditors.in_degree) # 1, 787
bins = np.logspace(0, np.log10(787), 50)
hist, edges = np.histogram(top_redditors["in_degree"], bins = bins)
x = (edges[1:] + edges[:-1])/2.
# remove 0 entries
xx, yy = zip(*[(i,j) for (i,j) in zip(x, hist) if j > 0])
ax = plt.subplot(2,2,1)
ax.plot(xx, yy, marker = ".")
ax.set_xlabel("In degree of redditors [log10]")
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_title("In degree distribution")
# distribution of out degree
# min(top_redditors.in_degree), max(top_redditors.in_degree) # 0, 2826 - log(0) set to 0
bins = np.logspace(0, np.log10(2826), 50)
hist, edges = np.histogram(top_redditors["out_degree"], bins = bins)
x = (edges[1:] + edges[:-1])/2.
# remove 0 entries
xx, yy = zip(*[(i,j) for (i,j) in zip(x, hist) if j > 0])
ax = plt.subplot(2,2,2)
ax.plot(xx, yy, marker = ".")
ax.set_xlabel("Out degree of redditors [log10]")
ax.set_xscale("log")
ax.set_yscale("log")
ax.set_title("Out-degree distribution")
# scatter plot for in- versus out degree
ax = plt.subplot(2, 2, 3)
ax.scatter(top_redditors.in_degree, top_redditors.out_degree)
ax.set_xlabel("in-degree of redditor")
ax.set_ylabel("out-degree of redditor")
ax.set_yscale("log")
ax.set_xscale("log")
ax.set_title("In-degree vs out-degree")
ax.set_ylim(1e-1, 1e4)
# scatter plot for in-degree vs average score
ax = plt.subplot(2, 2, 4)
ax.scatter(top_redditors.in_degree, top_redditors.avg_score)
ax.set_xlabel("in-degree of redditor")
ax.set_ylabel("average score of redditor")
ax.set_title("In-degree vs average score")
ax.set_yscale("log")
ax.set_xscale("log")
ax.set_ylim(1e-2, 1e4)
plt.show()
```
<a href="https://colab.research.google.com/github/ibzan79/daa_2021_1/blob/master/28Octubre.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
h1 = 0
h2 = 0
m1 = 0
m2 = 0 # 1440 + 24 *6
contador = 0 # 5 + (1440 + ?) * 2 + 144 + 24 + 2= 3057
operaciones = 5
while [h1, h2, m1, m2] != [2,3,5,9]:
if [h1, h2] == [m2, m1]:
print(h1, h2,":", m1, m2)
m2 = m2 + 1
operaciones += 1
if m2 == 10:
m2 = 0
m1 = m1 + 1
operaciones += 2
if m1 == 6:
h2 = h2 + 1
m2 = 0
operaciones += 2
contador = contador + 1
operaciones += 1
m2 = m2 + 1
operaciones += 1
if m2 == 10:
m2 = 0
m1 = m1 + 1
operaciones += 2
if m1 == 6:
m1 = 0
h2 = h2 +1
operaciones += 2
if h2 == 10:
h2 = 0
h1 = h1 +1
operaciones += 2
print("Number of palindromes: ", contador)
print(f"Operaciones: {operaciones}")
horario="0000"
contador=0
operaciones = 2
while horario!="2359":
inv=horario[::-1]
if horario==inv:
contador+=1
operaciones += 1
print(horario[0:2],":",horario[2:4])
new=int(horario)
new+=1
horario=str(new).zfill(4)
operaciones += 4
print("There are", contador, "palindromes")
print(f"Operaciones: {operaciones}")
# 2 + (2360 * 4 ) + 24
lista=[]
for i in range(0,24,1): # 24
for j in range(0,60,1): # 60 1440
if i<10:
if j<10:
lista.append("0"+str(i)+":"+"0"+str(j))
elif j>=10:
lista.append("0"+str(i)+":"+str(j))
else:
if i>=10:
if j<10:
lista.append(str(i)+":"+"0"+str(j))
elif j>=10:
lista.append(str(i)+":"+str(j))
# 1440 + 2 + 1440 + 16 * 2 = 2900
lista2=[]
contador=0
for i in range(len(lista)): # 1440
x=lista[i][::-1]
if x==lista[i]:
lista2.append(x)
contador=contador+1
print(contador)
for j in (lista2):
print(j)
for x in range (0,24,1):
for y in range(0,60,1): #1440 * 3 +13 = 4333
        hora = str(x).zfill(2) + ":" + str(y).zfill(2) # zero-pad both fields to get hh:mm
p=hora[::-1]
if p==hora:
            print(f"{hora} is a palindrome")
# solution:
total = 0 # counter for the number of palindromes
for hor in range(0,24): # nested for loops advance the hours and minutes together
    for min in range(0,60):
        hor_n = str(hor) # string versions of the hour and minute
        min_n = str(min)
        if (hor<10): # zero-pad so hours and minutes keep a two-digit format
            hor_n = ("0"+hor_n)
        if (min<10):
            min_n = ("0"+ min_n)
        if (hor_n[::-1] == min_n): # slicing reverses the hour so it is compared from the right
            print("{}:{}".format(hor_n,min_n))
            total += 1
#1 + 1440 * 5 =7201
palindronum= int(0)
for hor in range(0,24):
for min in range(0,60): # 1440
principio= str(hor)
final= str(min)
if (hor<10):
principio=("0"+principio)
if (min<10):
final=("0"+final)
if (principio[::-1]==final):
print(principio +":"+final)
palindronum= palindronum+1
print(palindronum)
# 1 + 1440 * 5 = 7201
```
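For comparison, a compact sketch (not part of the original operation-counting exercise) that formats every minute of the day as `hh:mm` and keeps the times whose digits read the same reversed:

```python
# every minute of the day, kept when the four digits form a palindrome
palindromes = [f"{h:02d}:{m:02d}"
               for h in range(24) for m in range(60)
               if f"{h:02d}{m:02d}" == f"{h:02d}{m:02d}"[::-1]]
print(len(palindromes))  # 16 over the full day 00:00-23:59
```

Counting over the whole day gives 16, since a palindromic time `h1h2:m1m2` requires `m2 = h1` and `m1 = h2 < 6`.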
# Networks: structure, evolution & processes
**Internet Analytics - Lab 2**
---
**Group:** W
**Names:**
* Olivier Cloux
* Thibault Urien
* Saskia Reiss
---
#### Instructions
*This is a template for part 2 of the lab. Clearly write your answers, comments and interpretations in Markdown cells. Don't forget that you can add $\LaTeX$ equations in these cells. Feel free to add or remove any cell.*
*Please properly comment your code. Code readability will be considered for grading. To avoid long cells of codes in the notebook, you can also embed long python functions and classes in a separate module. Don’t forget to hand in your module if that is the case. In multiple exercises, you are required to come up with your own method to solve various problems. Be creative and clearly motivate and explain your methods. Creativity and clarity will be considered for grading.*
---
## 2.2 Network sampling
#### Exercise 2.7: Random walk on the Facebook network
```
import requests
import random
import numpy as np
URL_TEMPLATE = 'http://iccluster118.iccluster.epfl.ch:{p}/v1.0/facebook?user={user_id}';
def json_format(uid):
port = 5050
url = URL_TEMPLATE.format(user_id=uid, p=port)
response = requests.get(url)
return response.json()
def crawler(maximum, TP_prob, seed):
"""Crawls facebook from a seed node.
Keyword arguments:
    maximum -- number of nodes to crawl
TP_prob -- probability to keep crawling.
seed -- first node
    Crawls Facebook users to find their ages. With probability TP_prob, the crawler keeps going
    from user to user. Otherwise, it jumps to a random node that was already seen (a friend of
    a visited node) but not yet visited.
    Returns a list containing the ages of the visited nodes.
"""
#quick check of parameters
if(maximum < 1 or TP_prob < 0 or TP_prob > 1 or not isinstance(seed, str)):
print("Error, bad parameter")
return [-1]
#needed global variables
vis = set() #keep track of visited nodes
age = [] #list of ages, to be returned
    buffer_max = 100
buffer = set([]) #buffer of nodes to visit when TP
user_id = seed #UID of starting node
i = 0
while i < maximum:
data = json_format(user_id)
#retrieve list of friends not yet visited
friends_list = set(data['friends']).difference(vis)
age.append(data['age'])
#creates buffer of possible nodes to teleport to.
#each node adds few of its friends.
if(len(buffer) > buffer_max): #If buffer full, remove 10 random and add 10 new.
to_remove = random.sample(buffer, 10)
buffer = buffer.difference(to_remove)
#take at most 10 friends uid from current node.
some_random_friends = random.sample(friends_list, min(len(friends_list), 10))
buffer = buffer.union(set(some_random_friends))
buffer.discard(user_id) #ensure current uid not in buffer, to avoid duplicate visit
        #actual crawling. Teleport if current node has no unvisited friend or
        #if the continuation probability is not met, otherwise keep crawling
        if random.random() < TP_prob and len(friends_list) != 0: #continue crawling
            user_id = random.sample(friends_list, 1).pop()
else: #Teleport to random node in the buffer
user_id = random.sample(buffer, 1).pop()
vis.add(user_id)
i += 1
return age
N = 5000
seed = 'f30ff3966f16ed62f5165a229a19b319'
theta = 0.85 #random jump with prob 0.15
age = crawler(N, theta, seed)
print('The mean age is',np.mean(age))
```
#### Exercise 2.8
We obtain a mean age of about 20-22 (with small variations depending on the teleport probability theta and on the randomness of the jumps). Since the real mean age is 43, this is a huge deviation.
This can be explained by the *friendship paradox*. Young people have more (young) friends, so the younger population is more densely connected than their elders, while older people have fewer friends. We can also imagine that older people have a majority of younger Facebook friends, because with friends of their own age they may prefer other media to keep in touch. Older people therefore act as dead ends with few, mostly young, friends, so the crawler finds young nodes far more easily.
A solution to that would be to weight the nodes according to their number of friends. Another possibility would be to favour nodes with few friends; that is, when choosing a node, pick the ones with few friends with higher probability.
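As a sketch of the re-weighting idea (the `reweighted_mean` helper and the toy numbers below are ours, not part of the lab): weighting each sampled age by the inverse of the node's degree counteracts the random walk's bias towards high-degree, younger users.

```python
import numpy as np

def reweighted_mean(ages, degrees):
    # each visited node's age is weighted by 1/degree, so rarely-reached
    # low-degree (older) nodes count more, as in re-weighted random-walk estimators
    ages = np.asarray(ages, dtype=float)
    weights = 1.0 / np.asarray(degrees, dtype=float)
    return np.sum(weights * ages) / np.sum(weights)

# toy data: young nodes have many friends, the old node has few
ages = [20, 20, 20, 60]
degrees = [100, 100, 100, 5]
print(np.mean(ages))                  # naive mean: 30.0
print(reweighted_mean(ages, degrees))  # reweighted mean: ~54.8
```

On this toy example the naive estimate (30.0) is pulled towards the well-connected young nodes, while the inverse-degree weighting recovers a much higher mean.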
```
from bs4 import BeautifulSoup
from urllib.request import urlopen
URLPRI = "http://www.losmundialesdefutbol.com"
url = "http://www.losmundialesdefutbol.com/mundiales.php"
def returnWebObject(url):
return BeautifulSoup(urlopen(url) , "lxml")
class Player:
    def __init__(self, name, position, wasCaptain):
        self.name = name
        self.position = position
        self.wasCaptain = wasCaptain  # was hard-coded to False, ignoring the argument
    def __str__(self):
        return self.name
    def __repr__(self):
        return self.name
class Team:
def __init__(self , name , goal ):
self.name = name
self.goal = goal
self.formation = None
self.players = list()
def getStringFromFormation(self):
# Example 4-4-2
pass
class Match:
def __init__(self , firstTeam , secondTeam ,goals ,info):
self.firstTeam = Team(firstTeam , goals[0])
self.secondTeam = Team(secondTeam , goals[1])
self.info = info
class WorldCup:
def __init__(self , year , location , champion) :
self.champion = champion
self.location = location
self.year = year
self.countries = set()
self.matches = list()
    def __str__(self):
        return "World Cup " + str(self.year) + " / Champion: " + self.champion + " / Location: " + self.location
    def __repr__(self):
        return self.__str__()
pageListWorldSoccerCups = returnWebObject(URLPRI)
listWorldCups = list()
tableWordChampions = pageListWorldSoccerCups.find("div", {"class": "rd-100-40 rd-pad-0-l2"})
for x in tableWordChampions.find_all("tr"):
if x.find("div", {"class" : "left" , "style":"width: 50% ; min-width: 100px"}):
auxSplit = x.text.split()
listWorldCups.append(WorldCup(int(auxSplit[0]) , auxSplit[1] ,auxSplit[2]))
listWorldCups = listWorldCups[::-1]
listWorldCups
pageListWorldSoccerCups = returnWebObject(url)
linksResultsWorlCupsMatches = []
for link in pageListWorldSoccerCups.findAll("a",href=True):
if "resultados" in link["href"]:
linksResultsWorlCupsMatches.append(link["href"])
linksResultsWorlCupsMatches
webPages = list()
for links in linksResultsWorlCupsMatches:
webPages.append(returnWebObject(URLPRI+"/"+links))
def getGoals(result):
goals = result.split("-")
return int(goals[0]),int(goals[1])
def cleanWords(firstWord, secondWord=None):
    if secondWord is None:
        return firstWord.strip()
    else:
        return firstWord.strip(), secondWord.strip()
for i , worldCup in enumerate(webPages):
    for row in worldCup.findAll("div",{"class": "margen-t3 clearfix"}): # one result row of the page
infoMatch = row.find("div",{"class":"game margen-b3 clearfix"})
firstTeam = infoMatch.find("div",{"class":"left margen-b1 clearfix"})
secondTeam = infoMatch.find("div",{"class": "left a-left margen-b1 clearfix"})
firstTeam , secondTeam = cleanWords(firstTeam.text , secondTeam.text)
listWorldCups[i].countries.add(secondTeam)
listWorldCups[i].countries.add(firstTeam)
if i != 0:
for result in infoMatch.find("div",{"class" : "left a-center margen-b3 clearfix"}).findAll("a",href=True):
continue
listWorldCups[i].matches.append(Match(firstTeam , secondTeam , getGoals(result.text), result["href"]))
else :
listWorldCups[i].matches.append(Match(firstTeam , secondTeam , (0,0), ""))
listWorldCups = listWorldCups[1:] # the 2018 World Cup is not considered
def getLinkForMatch(info):
    # build the absolute URL for a match page
if ".." in info:
return URLPRI + info[2:]
else :
return URLPRI + info
def getPlayerInfo(row):
name = ""
wasCapitan = False
pos = cleanWords(str(row.find("td").text))
for x in row.find("div",{"style":"float: left ; "}).text.split():
if x == "(C)":
wasCapitan = True
else:
name = x + " " + name
return Player(name, pos , wasCapitan)
for worldCups in listWorldCups[0:1]:
for match in worldCups.matches[0:1]:
contain = returnWebObject(getLinkForMatch(match.info))
firstTable = contain.find("div",{"class":"rd-100-50 rd-pad-0-r2"})
formation = dict()
for i , row in enumerate(firstTable.findAll("tr",{"style":"vertical-align: top"})):
if i > 10:
break
player = getPlayerInfo(row)
try:
formation[player.position] = formation[player.position] +1
except:
formation[player.position] = 1
match.firstTeam.players.append(player)
match.firstTeam.formation = formation
secondTable = contain.find("div",{"class":"rd-100-50 rd-pad-0-l2"})
formation = dict()
for i , row in enumerate(secondTable.findAll("tr",{"style":"vertical-align: top"})):
if i > 10:
break
player = getPlayerInfo(row)
try:
formation[player.position] = formation[player.position] +1
except:
formation[player.position] = 1
match.secondTeam.players.append(player)
match.secondTeam.formation = formation
listWorldCups[0].matches[0].secondTeam.players = list()
listWorldCups[0].matches[1].firstTeam.players = list()
listWorldCups[0].matches[0].secondTeam.players
```
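A possible way to finish the `getStringFromFormation` stub left as `pass` in the `Team` class above. This is only a sketch: it assumes the `formation` dict maps position labels to player counts, and the `order` labels are hypothetical — they would need to match the position names actually scraped from the site.

```python
def formation_to_string(formation, order=("DF", "MF", "FW")):
    # join per-position player counts into a "4-4-2"-style string;
    # `order` is an assumed set of position labels (goalkeeper excluded)
    return "-".join(str(formation.get(pos, 0)) for pos in order)

print(formation_to_string({"DF": 4, "MF": 4, "FW": 2}))  # 4-4-2
```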
```
Copyright 2021 IBM Corporation
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
# Random Forest on Allstate Dataset
## Background
The goal of this competition is to predict Bodily Injury Liability Insurance claim payments based on the characteristics of the insured’s vehicle.
## Source
The raw dataset can be obtained directly from the [Allstate Claim Prediction Challenge](https://www.kaggle.com/c/ClaimPredictionChallenge).
In this example, we download the dataset directly from Kaggle using their API. In order for this to work, you must:
1. Login into Kaggle and accept the [competition rules](https://www.kaggle.com/c/ClaimPredictionChallenge/rules).
2. Follow [these instructions](https://www.kaggle.com/docs/api) to install your API token on your machine.
## Goal
The goal of this notebook is to illustrate how Snap ML can accelerate training of a random forest model on this dataset.
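Since the dataset is highly imbalanced, the code below uses scikit-learn's `'balanced'` sample weighting, which gives each example the weight `n_samples / (n_classes * count_of_its_class)`. A minimal pure-Python illustration of that rule (our own sketch, not the scikit-learn implementation):

```python
from collections import Counter

def balanced_weights(y):
    # sklearn's 'balanced' heuristic: n_samples / (n_classes * class_count)
    counts = Counter(y)
    n, k = len(y), len(counts)
    return [n / (k * counts[label]) for label in y]

print(balanced_weights([0, 0, 0, 1]))  # [0.666..., 0.666..., 0.666..., 2.0]
```

The minority-class example receives three times the weight of each majority-class example, so both classes contribute equally to the loss.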
## Code
```
cd ../../
CACHE_DIR='cache-dir'
import numpy as np
import time
from datasets import Allstate
from sklearn.ensemble import RandomForestClassifier
from snapml import RandomForestClassifier as SnapRandomForestClassifier
from sklearn.metrics import roc_auc_score as score
dataset = Allstate(cache_dir=CACHE_DIR)
X_train, X_test, y_train, y_test = dataset.get_train_test_split()
print("Number of examples: %d" % (X_train.shape[0]))
print("Number of features: %d" % (X_train.shape[1]))
print("Number of classes: %d" % (len(np.unique(y_train))))
# the dataset is highly imbalanced
labels, sizes = np.unique(y_train, return_counts=True)
print("%6.2f %% of the training transactions belong to class 0" % (sizes[0]*100.0/(sizes[0]+sizes[1])))
print("%6.2f %% of the training transactions belong to class 1" % (sizes[1]*100.0/(sizes[0]+sizes[1])))
from sklearn.utils.class_weight import compute_sample_weight
w_train = compute_sample_weight('balanced', y_train)
w_test = compute_sample_weight('balanced', y_test)
model = RandomForestClassifier(max_depth=6, n_estimators=100, n_jobs=4, random_state=42)
t0 = time.time()
model.fit(X_train, y_train, sample_weight=w_train)
t_fit_sklearn = time.time()-t0
score_sklearn = score(y_test, model.predict_proba(X_test)[:,1], sample_weight=w_test)
print("Training time (sklearn): %6.2f seconds" % (t_fit_sklearn))
print("ROC AUC score (sklearn): %.4f" % (score_sklearn))
model = SnapRandomForestClassifier(max_depth=6, n_estimators=100, n_jobs=4, random_state=42, use_histograms=True)
t0 = time.time()
model.fit(X_train, y_train, sample_weight=w_train)
t_fit_snapml = time.time()-t0
score_snapml = score(y_test, model.predict_proba(X_test)[:,1], sample_weight=w_test)
print("Training time (snapml): %6.2f seconds" % (t_fit_snapml))
print("ROC AUC score (snapml): %.4f" % (score_snapml))
speed_up = t_fit_sklearn/t_fit_snapml
score_diff = (score_snapml-score_sklearn)/score_sklearn
print("Speed-up: %.1f x" % (speed_up))
print("Relative diff. in score: %.4f" % (score_diff))
```
## Disclaimer
Performance results always depend on the hardware and software environment.
Information regarding the environment that was used to run this notebook are provided below:
```
import utils
environment = utils.get_environment()
for k,v in environment.items():
print("%15s: %s" % (k, v))
```
## Record Statistics
Finally, we record the environment and performance statistics for analysis outside of this standalone notebook.
```
import scrapbook as sb
sb.glue("result", {
'dataset': dataset.name,
'n_examples_train': X_train.shape[0],
'n_examples_test': X_test.shape[0],
'n_features': X_train.shape[1],
'n_classes': len(np.unique(y_train)),
'model': type(model).__name__,
'score': score.__name__,
't_fit_sklearn': t_fit_sklearn,
'score_sklearn': score_sklearn,
't_fit_snapml': t_fit_snapml,
'score_snapml': score_snapml,
'score_diff': score_diff,
'speed_up': speed_up,
**environment,
})
```
# Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
```
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
```
## Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
<img src='assets/convolutional_autoencoder.png' width=500px>
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
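As a quick arithmetic check of the roughly 16% figure quoted above:

```python
encoded_size = 4 * 4 * 8    # final encoder layer (4x4x8)
original_size = 28 * 28     # flattened MNIST image
print(f"{encoded_size / original_size:.1%}")  # 16.3%
```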
### What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers *aren't*. Usually, you'll see **transposed convolution** layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the transposed convolution layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a transposed convolution layer. The TensorFlow API provides us with an easy way to create the layers, [`tf.nn.conv2d_transpose`](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d_transpose).
However, transposed convolution layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In [this Distill article](http://distill.pub/2016/deconv-checkerboard/) from Augustus Odena, *et al*, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with [`tf.image.resize_images`](https://www.tensorflow.org/versions/r1.1/api_docs/python/tf/image/resize_images), followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
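To make the upsampling step concrete outside of TensorFlow: nearest-neighbor resizing by an integer factor is just repeating rows and columns. A NumPy sketch of the idea (not the TensorFlow implementation):

```python
import numpy as np

def upsample_nearest(x, factor=2):
    # repeat rows, then columns: (H, W, C) -> (H*factor, W*factor, C)
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

x = np.arange(4, dtype=float).reshape(2, 2, 1)  # [[0, 1], [2, 3]]
y = upsample_nearest(x)
print(y.shape)      # (4, 4, 1)
print(y[:, :, 0])   # each value duplicated into a 2x2 block
```

A convolution applied after this kind of resize smooths the blocky result, which is exactly the resize-then-convolve pattern the Distill article recommends.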
> **Exercise:** Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by a factor of 2. Odena *et al* claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in `tf.image.resize_images` or use [`tf.image.resize_nearest_neighbor`](https://www.tensorflow.org/api_docs/python/tf/image/resize_nearest_neighbor). For convolutional layers, use [`tf.layers.conv2d`](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d). For example, you would write `conv1 = tf.layers.conv2d(inputs, 32, (5,5), padding='same', activation=tf.nn.relu)` for a layer with a depth of 32, a 5x5 kernel, stride of (1,1), padding is 'same', and a ReLU activation. Similarly, for the max-pool layers, use [`tf.layers.max_pooling2d`](https://www.tensorflow.org/api_docs/python/tf/layers/max_pooling2d).
```
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
```
## Training
As before, here we'll train the network. Instead of flattening the images, though, we can pass them in as 28x28x1 arrays.
```
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
```
## Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.

Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
> **Exercise:** Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
```
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Sets how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
```
## Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.
```
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
```
This tutorial trains a 3D DenseNet for lung lesion classification from CT image patches.
The goal is to demonstrate MONAI's class activation mapping functions for visualising the classification models.
For the demo data:
- Please see the `bbox_gen.py` script for generating the patch classification data from MSD task06_lung (available via `monai.apps.DecathlonDataset`).
- Alternatively, the patch dataset (~130MB) is available for direct downloading at: https://drive.google.com/drive/folders/1vl330aJew1NCc31IVYJSAh3y0XdDdydi
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/interpretability/class_lung_lesion.ipynb)
```
!python -c "import monai" || pip install -q "monai-weekly[tqdm]"
import glob
import os
import random
import tempfile
import matplotlib.pyplot as plt
import monai
import numpy as np
import torch
from IPython.display import clear_output
from monai.networks.utils import eval_mode
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
RandFlipd,
RandRotate90d,
RandSpatialCropd,
Resized,
ScaleIntensityRanged,
ToTensord,
)
from monai.visualize import plot_2d_or_3d_image
from sklearn.metrics import (
ConfusionMatrixDisplay,
classification_report,
confusion_matrix,
)
from torch.utils.tensorboard import SummaryWriter
monai.config.print_config()
random_seed = 42
monai.utils.set_determinism(random_seed)
np.random.seed(random_seed)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = (
tempfile.mkdtemp() if directory is None else os.path.expanduser(directory)
)
data_path = os.path.join(root_dir, "patch")
!cd {root_dir} && [ -f "lung_lesion_patches.tar.gz" ] || gdown --id 1Jte6L7B_5q9XMgOCAq1Ldn29F1aaIxjW \
&& mkdir -p {data_path} && tar -xvf "lung_lesion_patches.tar.gz" -C {data_path} > /dev/null
lesion = glob.glob(os.path.join(data_path, "lesion_*"))
non_lesion = glob.glob(os.path.join(data_path, "norm_*"))
# optionally make sure there's 50:50 lesion vs non-lesion
balance_classes = True
if balance_classes:
print(
f"Before balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
num_to_keep = min(len(lesion), len(non_lesion))
lesion = lesion[:num_to_keep]
non_lesion = non_lesion[:num_to_keep]
print(
f"After balance -- Num lesion: {len(lesion)},"
f" num non-lesion: {len(non_lesion)}"
)
labels = np.asarray(
[[0.0, 1.0]] * len(lesion) + [[1.0, 0.0]] * len(non_lesion)
)
all_files = [
{"image": img, "label": label}
for img, label in zip(lesion + non_lesion, labels)
]
random.shuffle(all_files)
print(f"total items: {len(all_files)}")
```
Split the data into 80% training and 20% validation
```
train_frac, val_frac = 0.8, 0.2
n_train = int(train_frac * len(all_files)) + 1
n_val = min(len(all_files) - n_train, int(val_frac * len(all_files)))
train_files, val_files = all_files[:n_train], all_files[-n_val:]
train_labels = [data["label"] for data in train_files]
print(f"total train: {len(train_files)}")
val_labels = [data["label"] for data in val_files]
n_neg, n_pos = np.sum(np.asarray(val_labels) == 0), np.sum(
np.asarray(val_labels) == 1
)
print(f"total valid: {len(val_labels)}")
```
Create the data loaders. These loaders will be used for both training/validation, as well as visualisations.
```
# Define transforms for image
win_size = (196, 196, 144)
train_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
RandFlipd("image", spatial_axis=0, prob=0.5),
RandFlipd("image", spatial_axis=1, prob=0.5),
RandFlipd("image", spatial_axis=2, prob=0.5),
RandSpatialCropd("image", roi_size=(64, 64, 40)),
Resized("image", win_size, "trilinear", True),
RandRotate90d("image", prob=0.5, spatial_axes=[0, 1]),
ToTensord("image"),
]
)
val_transforms = Compose(
[
LoadImaged("image"),
AddChanneld("image"),
ScaleIntensityRanged(
"image",
a_min=-1000.0,
a_max=500.0,
b_min=0.0,
b_max=1.0,
clip=True,
),
Resized("image", win_size, "trilinear", True),
ToTensord(("image", "label")),
]
)
persistent_cache = os.path.join(root_dir, "persistent_cache")
train_ds = monai.data.PersistentDataset(
data=train_files, transform=train_transforms, cache_dir=persistent_cache
)
train_loader = monai.data.DataLoader(
train_ds, batch_size=2, shuffle=True, num_workers=2, pin_memory=True
)
val_ds = monai.data.PersistentDataset(
data=val_files, transform=val_transforms, cache_dir=persistent_cache
)
val_loader = monai.data.DataLoader(
val_ds, batch_size=2, num_workers=2, pin_memory=True
)
```
Start the model, loss function, and optimizer.
```
model = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
bce = torch.nn.BCEWithLogitsLoss()
def criterion(logits, target):
return bce(logits.view(-1), target.view(-1))
optimizer = torch.optim.Adam(model.parameters(), 1e-5)
```
Run training iterations.
```
# start training
val_interval = 1
max_epochs = 100
best_metric = best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
scaler = torch.cuda.amp.GradScaler()
for epoch in range(max_epochs):
clear_output()
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = step = 0
for batch_data in train_loader:
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
with torch.cuda.amp.autocast():
outputs = model(inputs)
loss = criterion(outputs.float(), labels.float())
scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
epoch_loss += loss.item()
epoch_len = len(train_ds) // train_loader.batch_size
if step % 50 == 0:
print(f"{step}/{epoch_len}, train_loss: {loss.item():.4f}")
step += 1
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
with eval_mode(model):
num_correct = 0.0
metric_count = 0
for val_data in val_loader:
val_images, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
val_outputs = model(val_images)
value = torch.eq(
val_outputs.argmax(dim=1), val_labels.argmax(dim=1)
)
metric_count += len(value)
num_correct += value.sum().item()
metric = num_correct / metric_count
metric_values.append(metric)
if metric >= best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(
model.state_dict(),
"best_metric_model_classification3d_array.pth",
)
print(
f"current epoch: {epoch + 1} current accuracy: {metric:.4f}"
f" best accuracy: {best_metric:.4f}"
f" at epoch {best_metric_epoch}"
)
print(
f"train completed, best_metric: {best_metric:.4f} at"
f" epoch: {best_metric_epoch}"
)
plt.plot(epoch_loss_values, label="training loss")
val_epochs = np.linspace(
1, max_epochs, np.floor(max_epochs / val_interval).astype(np.int32)
)
plt.plot(val_epochs, metric_values, label="validation acc")
plt.legend()
plt.xlabel("Epoch")
plt.ylabel("Value")
# Reload the best network and display info
model_3d = monai.networks.nets.DenseNet121(
spatial_dims=3, in_channels=1, out_channels=2
).to(device)
model_3d.load_state_dict(
torch.load("best_metric_model_classification3d_array.pth")
)
model_3d.eval()
y_pred = torch.tensor([], dtype=torch.float32, device=device)
y = torch.tensor([], dtype=torch.long, device=device)
for val_data in val_loader:
val_images = val_data["image"].to(device)
val_labels = val_data["label"].to(device).argmax(dim=1)
outputs = model_3d(val_images)
y_pred = torch.cat([y_pred, outputs.argmax(dim=1)], dim=0)
y = torch.cat([y, val_labels], dim=0)
print(
classification_report(
y.cpu().numpy(),
y_pred.cpu().numpy(),
target_names=["non-lesion", "lesion"],
)
)
cm = confusion_matrix(
y.cpu().numpy(),
y_pred.cpu().numpy(),
normalize="true",
)
disp = ConfusionMatrixDisplay(
confusion_matrix=cm,
display_labels=["non-lesion", "lesion"],
)
disp.plot(ax=plt.subplots(1, 1, facecolor="white")[1])
```
# Interpretability
Use GradCAM and occlusion sensitivity for network interpretability.
The occlusion sensitivity returns two images: the sensitivity image and the most probable class.
* Sensitivity image -- how the probability of an inferred class changes as the corresponding part of the image is occluded.
* Big decreases in the probability imply that that region was important in inferring the given class
* The output is the same as the input, with an extra dimension of size N appended. Here, N is the number of inferred classes. To then see the sensitivity image of the class we're interested in (maybe the true class, maybe the predicted class, maybe anything else), we simply do ``im[...,i]``.
* Most probable class -- if that part of the image is covered up, does the predicted class change, and if so, to what? This feature is not used in this notebook.
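A minimal, framework-agnostic sketch of the occlusion-sensitivity idea (the `occlusion_map` helper and the toy `predict` function below are ours, not MONAI's API): slide a mask over the input, re-run inference, and record how much the score drops.

```python
import numpy as np

def occlusion_map(predict, image, mask_size=2, baseline=0.0):
    # slide a mask_size x mask_size patch of `baseline` over the image and
    # record how much the model score drops when each patch is occluded
    base_score = predict(image)
    sens = np.zeros_like(image)
    h, w = image.shape
    for i in range(0, h, mask_size):
        for j in range(0, w, mask_size):
            occluded = image.copy()
            occluded[i:i + mask_size, j:j + mask_size] = baseline
            # a large drop means this patch mattered for the score
            sens[i:i + mask_size, j:j + mask_size] = base_score - predict(occluded)
    return sens

# toy "classifier": the score is the mean of the top-left quadrant
predict = lambda img: img[:2, :2].mean()
sens = occlusion_map(predict, np.ones((4, 4)))
print(sens)  # only the top-left patch has non-zero sensitivity
```

MONAI's `OcclusionSensitivity` does this per output class (hence the extra dimension described above) and in batches; the toy version only shows the core loop.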
```
# cam = monai.visualize.CAM(nn_module=model_3d, target_layers="class_layers.relu", fc_layers="class_layers.out")
cam = monai.visualize.GradCAM(
nn_module=model_3d, target_layers="class_layers.relu"
)
# cam = monai.visualize.GradCAMpp(nn_module=model_3d, target_layers="class_layers.relu")
print(
"original feature shape",
cam.feature_map_size([1, 1] + list(win_size), device),
)
print("upsampled feature shape", [1, 1] + list(win_size))
occ_sens = monai.visualize.OcclusionSensitivity(
nn_module=model_3d, mask_size=12, n_batch=1, stride=28
)
# For occlusion sensitivity, inference must be run many times. Hence, we can use a
# bounding box to limit it to a 2D plane of interest (z=the_slice) where each of
# the arguments are the min and max for each of the dimensions (in this case CHWD).
the_slice = train_ds[0]["image"].shape[-1] // 2
occ_sens_b_box = [-1, -1, -1, -1, -1, -1, the_slice, the_slice]
train_transforms.set_random_state(42)
n_examples = 5
subplot_shape = [3, n_examples]
fig, axes = plt.subplots(*subplot_shape, figsize=(25, 15), facecolor="white")
items = np.random.choice(len(train_ds), size=len(train_ds), replace=False)
example = 0
for item in items:
data = train_ds[
item
] # this fetches training data with random augmentations
image, label = data["image"].to(device).unsqueeze(0), data["label"][1]
y_pred = model_3d(image)
pred_label = y_pred.argmax(1).item()
# Only display images that are true lesions and were correctly predicted
if label != 1 or label != pred_label:
continue
img = image.detach().cpu().numpy()[..., the_slice]
name = "actual: "
name += "lesion" if label == 1 else "non-lesion"
name += "\npred: "
name += "lesion" if pred_label == 1 else "non-lesion"
name += f"\nlesion: {y_pred[0,1]:.3}"
name += f"\nnon-lesion: {y_pred[0,0]:.3}"
# run CAM
cam_result = cam(x=image, class_idx=None)
cam_result = cam_result[..., the_slice]
# run occlusion
occ_result, _ = occ_sens(x=image, b_box=occ_sens_b_box)
occ_result = occ_result[..., pred_label]
for row, (im, title) in enumerate(
zip(
[img, cam_result, occ_result],
[name, "CAM", "Occ. sens."],
)
):
cmap = "gray" if row == 0 else "jet"
ax = axes[row, example]
if isinstance(im, torch.Tensor):
im = im.cpu().detach()
im_show = ax.imshow(im[0][0], cmap=cmap)
ax.set_title(title, fontsize=25)
ax.axis("off")
fig.colorbar(im_show, ax=ax)
example += 1
if example == n_examples:
break
with SummaryWriter(log_dir="logs") as writer:
plot_2d_or_3d_image(img, step=0, writer=writer, tag="Input")
plot_2d_or_3d_image(cam_result, step=0, writer=writer, tag="CAM")
plot_2d_or_3d_image(occ_result, step=0, writer=writer, tag="OccSens")
%load_ext tensorboard
%tensorboard --logdir logs
```
| github_jupyter |
```
FN = '161103-run-plot'
```
Plot validation accuracy vs. noise level for different experiments.
The results of the experiments are accumulated in:
```
FN1 = '160919-run-plot'
```
Each experiment is run by `run.bash`, which runs the same experiment with different training sizes (`down_sample=.2,.5,1`), and each time it runs (`run_all.bash`) with 5 different seeds.
Each such run of `jacob-reed.py` includes a loop over different `noise_level` values.
```
%%bash -s "{FN1}"
echo $1
# ./run.bash --FN=data/$1
./run.bash --FN=data/$1 --model=simple --beta=0
# ./run.bash --FN=data/$1 --model=complex --beta=0 --pretrain=2
# ./run.bash --FN=data/$1 --model=reed_hard --beta=0.8
# ./run.bash --FN=data/$1 --model=reed_soft --beta=0.95
import fasteners
import time
import os
from collections import defaultdict
import numpy as np
import matplotlib.pyplot as plt
import pickle
import warnings ; warnings.filterwarnings("ignore")
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['axes.facecolor'] = 'white'
with fasteners.InterProcessLock('/tmp/%s.lock_file'%FN1):
with open('data/%s.results.pkl'%FN1,'rb') as fp:
results = pickle.load(fp)
experiments = set([k[0] for k in results.keys()]) # find all unique experiments from experiment,noise keys
experiments = sorted(experiments)
print experiments
```
The different models for which we have results. The codes are composed of the following:
* B baseline, S simple, C complex, R reed soft, r reed hard
* M for MLP, C for CNN
* If the model is not baseline, then beta is less than 1 and its value **after the decimal dot** appears (e.g. beta=0.85 will show as 85). Note that if beta happens to be zero, nothing will appear
* Pre-training using labels generated by: p=baseline, q=simple model, r=use simple weights as a starting point for the bias of the channel matrix instead of rebuilding it from the labels predicted by the simple model
* If the model is complex, the value used to initialize the channel matrix weights appears (usually 0), unless it was 0.1
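As an illustration of this naming scheme, one could decode a model code with a small helper like the following (`decode_model_code` is hypothetical and not part of the original scripts; it only covers the common patterns described above):

```python
import re

# Hypothetical decoder for model codes such as 'RM95p' or 'CMq0'.
def decode_model_code(code):
    m = re.match(r'([BSCRr])([MC])(\d*)([pqrPQ]?)(\d*)$', code)
    if not m:
        return None
    model, arch, beta, pretrain, init = m.groups()
    return {
        'model': {'B': 'baseline', 'S': 'simple', 'C': 'complex',
                  'R': 'reed soft', 'r': 'reed hard'}[model],
        'arch': {'M': 'MLP', 'C': 'CNN'}[arch],
        'beta': float('0.' + beta) if beta else None,
        'pretrain': {'p': 'baseline', 'q': 'simple', 'r': 'simple weights',
                     'P': 'baseline (soft confusion)',
                     'Q': 'simple (soft confusion)', '': None}[pretrain],
        'channel_init': float(init) if init else None,
    }

print(decode_model_code('RM95p'))
print(decode_model_code('CMq0'))
```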
```
print "Models:",', '.join(set(x.split('-')[0] for x in experiments))
```
The noise used:
```
print "Noise:",', '.join(set(x.split('-')[1].split('_')[0] for x in experiments))
for x in experiments:
print x
print x.split('-')[1].split('_')
```
The different seeds used.
The size of the training data used. Multiply by 10 to get a percentage; an `s` at the end indicates that the labels in the training set are stratified.
```
print "Training size:",', '.join(set(x.split('-')[1].split('_')[1] for x in experiments))
```
Count how many experiments (with different seeds) we did for every model/train-size combination
```
e2n = defaultdict(int)
for e in reversed(experiments):
model_noise, seed, train_size = e.split('_')
e2n['_'.join([model_noise, train_size])] += 1
# e2n
#http://people.duke.edu/~ccc14/pcfb/analysis.html
def bootstrap(data, num_samples, statistic, alpha):
"""Returns bootstrap estimate of 100.0*(1-alpha) CI for statistic."""
n = len(data)
idx = np.random.randint(0, n, (num_samples, n))
samples = data[idx]
stat = np.sort(statistic(samples, 1))
return (stat[int((alpha/2.0)*num_samples)],
stat[int((1-alpha/2.0)*num_samples)])
```
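A quick standalone sanity check of the bootstrap confidence interval on synthetic data (the function is repeated here so the snippet is self-contained):

```python
import numpy as np

def bootstrap(data, num_samples, statistic, alpha):
    """Same bootstrap as above, repeated here so the example is self-contained."""
    n = len(data)
    idx = np.random.randint(0, n, (num_samples, n))
    samples = data[idx]
    stat = np.sort(statistic(samples, 1))
    return (stat[int((alpha/2.0)*num_samples)],
            stat[int((1-alpha/2.0)*num_samples)])

np.random.seed(0)
data = np.random.normal(loc=5.0, scale=1.0, size=200)
low, high = bootstrap(data, 10000, np.mean, 0.05)
print(low, high)
# the 95% CI of the mean should bracket the sample mean
assert low <= data.mean() <= high
```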
Convert each model code to the label we will use in the graph's legend:
```
namelookup = {'CMp0':'Complex baseline-labels',
'CMq0':'Complex', # simple-labels',
'CCq0':'Complex CNN', # simple-labels',
'CMr0':'Complex simple-weights',
'BM':'Baseline','BC':'Baseline CNN',
'SMp':'Simple','SMP':'Simple soft confusion',
'SCp':'Simple CNN','SCP':'Simple CNN soft confusion',
'rM8p':'Reed hard','RM95p':'Reed soft',
'rC8p':'Reed hard','RC95p':'Reed soft',
'rMp':'Reed hard beta=0','RMp':'Reed soft beta=0'}
```
The order in which the graphs will appear in the legend (and their colors):
```
model_order = {'SMp':3, 'SCp':3, 'CMp0':0, 'CMq0':1, 'CCq0':1, 'CMr0':2,
'rM8p':4, 'rC8p':4, 'rMp':5,
'RM95p':6, 'RC95p':6, 'RMp':7,
'BM':8,'BC':9, 'SMP':10}
def plot(idx, models=None, perm=None, down_samples=None,xlim=(0.3,0.5),ylim=(0.4,1),title='', stat=np.mean):
"""
models - list of str
take only experiments that start with one of the strs in models.
down_samples - list of float or str
take only results with the specified down sample
title - if None then don't produce any title. If a string, build a title which includes the given str
"""
if down_samples is not None:
if not isinstance(down_samples,list):
down_samples = [down_samples]
down_samples = [down_sample if isinstance(down_sample, basestring)
else '%g'%(down_sample*10)
for down_sample in down_samples]
plt.subplot(idx)
e2xy = defaultdict(lambda : defaultdict(list))
for e in reversed(experiments):
if down_samples is not None:
for down_sample in down_samples:
if e.endswith('_'+down_sample) or e.endswith('_'+down_sample+'s'):
break
else:
continue
if models:
for model in models:
if e.startswith(model):
break
else:
continue
else:
model = e.split('-')[0]
down_sample = e.split('_')[-1]
if perm is not None:
pperm = e.split('-')[1].split('_')[0]
if pperm != perm:
continue
X = [k[1] for k in results.keys() if k[0] == e]
Y = []
for n in X:
d = results[(e,n)][1]
for k in ['baseline_acc', 'acc']:
if k in d:
Y.append(d[k])
break
else:
raise Exception('validation accuracy not found')
X, Y = zip(*sorted(zip(X,Y),key=lambda x: x[0]))
for x,y in zip(X,Y):
e2xy[(model,down_sample)][x] += [y]
Y = np.array(Y)
keys = sorted(e2xy.keys(),key=lambda md: (model_order[md[0].split('-')[0]], (' ' if len(md[1])-int(md[1].endswith('s'))==1 else '') + md[1]))
# determine if all down_sample are stratified (True) or all not stratified (False) or mixed (Nones)
all_stratified = None
for model,down_sample in keys:
if all_stratified is None:
all_stratified = down_sample.endswith('s')
elif all_stratified != down_sample.endswith('s'):
all_stratified = None
break
for c, (model,down_sample) in enumerate(keys):
xy = e2xy[(model,down_sample)]
X = []
Y = []
Ys = []
for x in sorted(xy.keys()):
X.append(x)
Y.append(stat(xy[x]))
Ys.append(xy[x])
error_bars = []
for y,ys in zip(Y,Ys):
ym, yp = bootstrap(np.array(ys), 10000, stat, 0.05)
error_bars.append((y-ym,yp-y))
X = np.array(X)
colors = ['b','g','r','c','m','k']
linestyles=['solid', 'dashed','dashdot','dotted']
color = colors[c % len(colors)]
linestyle = linestyles[(c//len(colors))%len(linestyles)]
plt.errorbar(X + 0.001*np.random.random(len(X)),
Y, yerr=np.array(error_bars).T, ecolor=color, elinewidth=2, alpha=0.2,
fmt='none')
# label of each graph is the model name in English and
# if we have more than one downsample option then also add it to the label
if model in namelookup:
label = namelookup[model]
else:
label = namelookup[model.split('-')[0]] + ' ' + model.split('-')[1]
if down_samples is None or len(down_samples) > 1:
if all_stratified is None:
label += ' '+(down_sample[:-1]+'0% stratified' if down_sample.endswith('s') else down_sample+'0%')
else:
label += ' '+(down_sample[:-1] if down_sample.endswith('s') else down_sample)+'0%'
plt.plot(X,Y, color=color, linestyle=linestyle, label=label)
plt.legend(loc='lower left')
plt.ylabel('test accuracy', fontsize=18)
plt.xlabel('noise fraction', fontsize=18);
if title is not None:
title = 'MNIST '+title
if all_stratified:
title += ' stratified'
plt.title(title)
# The range of X and Y is exactly like in other paper
if xlim is not None:
plt.xlim(*xlim)
if ylim is not None:
plt.ylim(*ylim)
plt.figure(figsize=(12,10))
plot(111, down_samples=1.0, models=['BM','SMp','CMq0','rM8p','RM95p'], title=None, perm='7904213568')
plt.figure(figsize=(12,10))
plot(111, down_samples=.5, models=['BM','SMp','CMq0','rM8p','RM95p'], title=None, perm='7904213568')
plt.figure(figsize=(12,10))
plot(111, down_samples=.2, models=['BM','SMp','CMq0','rM8p','RM95p'], title=None, perm='7904213568')
```
| github_jupyter |
[Shikaku](https://en.wikipedia.org/wiki/Shikaku) is one of the Japanese logic puzzles published by the magazine Nikoli. As usual, it is a board game that can be played on a square or rectangular board.
The task is to divide the board into rectangles that fully cover it and do not overlap each other. Each rectangle is determined by one position on the board with a number giving its area in squares.
A puzzle therefore contains as many numbers on the board as there are required rectangles. The sum of the numbers equals the area of the whole board.
For example, a puzzle could look like this:
```
---------------
| | | | 4 |
---------------
| | 8 | | |
---------------
| | | 4 | |
---------------
| | | | |
---------------
```
So I tried to write a solver for such a game. First using backtracking and then also Constraint Programming. Maybe you will come up with some other solution.
# Source of test data
Since I will be trying two approaches, I first created one common source of test data.
It is a single class `DataSource` in a separate module. It contains several game puzzles and methods for working with them easily.
It will probably be simpler to show how I work with this source of test samples:
```
import numpy as np
from ShikakuSource import DataSource
source = DataSource()
```
The number of puzzle samples in the source:
```
print(len(source))
```
The dimensions of the board for the sample with index 1:
```
print(source.shape(1))
```
The puzzle for one sample in array form (index 1):
```
print(source.board(1))
```
The source data of the sample with index 1; these are the board coordinates and the cell value:
```
print(source.data(1))
```
# Solution using backtracking
First I will try to approach it the way probably everyone would expect. I will simply try combinations of rectangles until I find an acceptable solution.
## Preparatory work
It will be useful to prepare some tools for working with rectangles. So I created a `Rectangle` class and some helper functions for working with them.
```
class Rectangle:
def __init__(self, corner, shape, board_shape):
if not all((0 <= corner[i] < board_shape[0] and 0 < shape[i] <= board_shape[i] for i in (0, 1))):
raise ValueError('coordinates are invalid')
if not all((0 <= corner[i] + shape[i] <= board_shape[i] for i in (0, 1))):
raise ValueError('coordinates are invalid')
self.corner, self.shape, self.board_shape = corner, shape, board_shape
def __iter__(self):
yield self.corner
yield self.shape
def __eq__(self, other):
return isinstance(other, type(self)) and tuple(self) == tuple(other)
def __ne__(self, other):
return not (self == other)
def __lt__(self, other):
return tuple(self) < tuple(other)
def __repr__(self):
return type(self).__name__ + repr(tuple(self))
def inside(self, position):
return all((self.corner[i] <= position[i] < self.corner[i] + self.shape[i] for i in (0, 1)))
def slice(self):
return tuple((slice(self.corner[i], self.corner[i] + self.shape[i]) for i in (0, 1)))
def board(self):
b = np.zeros(self.board_shape, dtype=int)
b[self.slice()] = 1
return b
```
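To visualize what `slice()` (and the intended `board()`) produce, here is a minimal standalone sketch in plain numpy, independent of the class:

```python
import numpy as np

# Projecting a rectangle with corner (1, 1) and shape (2, 3) onto a 4x4
# board -- this is what Rectangle.slice()/board() are meant to compute.
corner, shape, board_shape = (1, 1), (2, 3), (4, 4)
sl = tuple(slice(corner[i], corner[i] + shape[i]) for i in (0, 1))
board = np.zeros(board_shape, dtype=int)
board[sl] = 1
print(board)
```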
First, a function that computes rectangle dimensions so that the rectangles have the required area.
Besides the area, the parameter is also the size of the board, because I do not want to deal with rectangles that would not fit on the board at all.
```
def dimensions_for_area(area, board_shape):
for i in range(1, area + 1):
if area % i == 0:
rows, cols = i, area // i
if rows <= board_shape[0] and cols <= board_shape[1]:
yield rows, cols
print(list(dimensions_for_area(8, (4, 4))))
```
The task of the next function is to find, for a specific position on the board, all acceptable rectangles that have the required dimensions and fit on the board.
The function parameters are:
* the position on the board to which the rectangles relate (each rectangle must contain this position)
* the required dimensions of the rectangle
* the size of the board
The result is a generator of such rectangles.
```
def rectangles_for_position(position, shape, board_shape):
for i in (a for a in range(max(position[0] - (shape[0] - 1), 0), position[0] + 1) if a + shape[0] <= board_shape[0]):
for j in (b for b in range(max(position[1] - (shape[1] - 1), 0), position[1] + 1) if b + shape[1] <= board_shape[1]):
yield Rectangle((i, j), shape, board_shape)
print(list(rectangles_for_position((2, 2), (1, 4), (4, 4))))
print(list(rectangles_for_position((2, 2), (4, 1), (4, 4))))
```
By combining the two functions above, I am now able to write a function that returns all acceptable rectangles for a position with a given required area.
Function parameters:
* the position on the board to which the rectangles relate
* the required area of the rectangle
* the size of the board
The result is again a generator of rectangles that satisfy the requirements.
```
def all_for_position(position, area, board_shape):
for shape in dimensions_for_area(area, board_shape):
for rect in rectangles_for_position(position, shape, board_shape):
yield rect
print(list(all_for_position((2, 2), 4, (4, 4))))
```
The last function in this part selects, from all rectangles acceptable for one position, only those that do not contain the positions of other rectangles (this is the rule that a position with a number may lie in only one rectangle).
Function parameters:
* the position on the board to which the rectangles relate
* the required area of the rectangle
* the source data for one game puzzle
* the size of the board
The result is a generator of such rectangles.
```
def possible_for_position(position, area, data, board_shape):
for rect in all_for_position(position, area, board_shape):
if all((not rect.inside(p) for p, v in data if p != position)):
yield rect
print(list(possible_for_position((2, 2), 4, source.data(1), (4, 4))))
```
## Solution
And now the actual solution using backtracking.
To check whether the rectangles overlap, I will use a numpy array.
I will go through all the puzzles one by one. For each puzzle I then:
1. first print the puzzle
1. create an empty board
1. for each position in the puzzle, build a list of all acceptable rectangles
1. run the `solve()` function, which:
   - iterates over all acceptable rectangles for the position
   - projects the rectangle onto the board
   - if there is no conflict on the board, continues recursively with the next position
5. if I found a solution, print it
You will see the result after running:
```
for sample in range(len(source)):
print(source.board(sample))
print('.' * 40)
board = np.zeros(source.shape(sample), dtype=int)
possible_rectangles = []
for pos, val in source.data(sample):
possible_rectangles.append(list(possible_for_position(pos, val, source.data(sample), source.shape(sample))))
def solve(index=0):
if index >= len(possible_rectangles):
return True, []
for rec in possible_rectangles[index]:
board[rec.slice()] += 1
if (board <= 1).all():
ok, result = solve(index + 1)
if ok:
result.insert(0, rec)
return True, result
board[rec.slice()] -= 1
else:
return False, None
ok, result = solve()
if ok:
b = np.zeros(source.shape(sample), dtype=int)
for val, rec in enumerate(result):
b[rec.slice()] += 1
if (b == 1).all():
b = np.zeros(source.shape(sample), dtype=object)
for val, rec in enumerate(result):
b[rec.slice()] = chr(ord('A') + val)
print(b)
else:
print("SOLUTION IS NOT VALID")
print()
print('=' * 40)
print()
```
# Solution using OR-Tools
The basis of the whole solution is building a model of the game using the ready-made tools from the OR-Tools library.
```
from ortools.sat.python import cp_model
```
The model is a class that is a subclass of `CpSolverSolutionCallback`.
The actual model setup happens in the `__init__` method.
```
class GameBoard(cp_model.CpSolverSolutionCallback):
def __init__(self, data, board_shape):
cp_model.CpSolverSolutionCallback.__init__(self)
self.model = cp_model.CpModel()
self.shape = board_shape
self.rectangles = []
intervals_x, intervals_y = [], []
for i, ((x, y), val) in enumerate(data):
x1, y1 = self.model.NewIntVar(0, x, f"X1:{i}"), self.model.NewIntVar(0, y, f"Y1:{i}")
x2, y2 = self.model.NewIntVar(x, board_shape[0] - 1, f"X2:{i}"), self.model.NewIntVar(y, board_shape[1] - 1, f"Y2:{i}")
self.rectangles.append((x1, y1, x2, y2))
size_x = self.model.NewIntVar(1, board_shape[0], f"dimX:{i}")
size_y = self.model.NewIntVar(1, board_shape[1], f"dimY:{i}")
self.model.Add(x1 + size_x <= board_shape[0])
self.model.Add(y1 + size_y <= board_shape[1])
self.model.Add(x1 + size_x > x)
self.model.Add(y1 + size_y > y)
self.model.AddMultiplicationEquality(val, [size_x, size_y])
intervals_x.append(self.model.NewIntervalVar(x1, size_x, x2 + 1, f"IX:{i}"))
intervals_y.append(self.model.NewIntervalVar(y1, size_y, y2 + 1, f"IY:{i}"))
self.model.AddNoOverlap2D(intervals_x, intervals_y)
def OnSolutionCallback(self):
a = np.zeros(self.shape, dtype=object)
for i, (x1, y1, x2, y2) in enumerate(self.rectangles):
x1, y1, x2, y2 = self.Value(x1), self.Value(y1), self.Value(x2), self.Value(y2)
a[x1:x2+1, y1:y2+1] = chr(ord('A') + i)
print(a)
self.StopSearch()
def solve(self):
cp_model.CpSolver().SearchForAllSolutions(self.model, self)
```
## Model setup
The parameters for setting up the model are the source data of the game puzzle and the size of the board.
While setting up the model variables and constraints, I iterate over the puzzle entries for all rectangles.
### Rectangle variables
In my model, each rectangle is represented by its top-left and bottom-right corners (variables x1, y1, x2 and y2).
Each variable also has bounds on its values ensuring that (x1, y1) <= (x, y) and (x2, y2) >= (x, y).
I also created variables representing the side lengths of the rectangle, size_x and size_y. They are constrained not to exceed the size of the board.
### Rectangle constraints
I added a constraint on the top-left corner and the side length of the rectangle: their sum must not exceed the board boundaries.
There is also a constraint that the sum of the top-left corner and the rectangle size must ensure that the rectangle contains its reference position from the puzzle.
The last constraint related to a single rectangle is the requirement on its area, expressed as the product of its side lengths.
### Rectangle intervals along the axes
To ensure that the rectangles do not overlap, I need to create one more set of variables.
These are so-called intervals, and they represent the relationship between the coordinates of a rectangle and its side lengths.
I collect all the intervals in two arrays, because that is the form the final constraint needs.
### Rectangle overlap constraint
The last thing I need to ensure is that no two rectangles overlap.
I can achieve this with the `AddNoOverlap2D` method, which takes exactly these arrays of intervals along the x and y axes as parameters.
## Working methods
Two more methods are defined in the `GameBoard` class:
The **OnSolutionCallback** method is called when the solver finds a solution. In my case, I collect the corners of all rectangles, project them into a numpy array and print it.
Then I also stop the solver, because I am only interested in the first solution found.
The **solve** method invokes the actual solver on the model. That is the working part of the whole solution.
And that is everything needed as far as building the model and solving it goes. All that remains is to try out how it all works.
## Trying out the game solver
I go through all the game puzzles and run the solver on them. And then we'll see.
```
for sample in range(len(source)):
print(source.board(sample))
print('.' * 40)
board = GameBoard(source.data(sample), source.shape(sample))
board.solve()
print()
print('=' * 40)
print()
```
And that's all.
| github_jupyter |
```
import pandas as pd
import numpy as np
import scanpy as sc
import os
from sklearn.cluster import KMeans
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.cluster import adjusted_rand_score
from sklearn.metrics.cluster import adjusted_mutual_info_score
from sklearn.metrics.cluster import homogeneity_score
import rpy2.robjects as robjects
from rpy2.robjects import pandas2ri
df_metrics = pd.DataFrame(columns=['ARI_Louvain','ARI_kmeans','ARI_HC',
'AMI_Louvain','AMI_kmeans','AMI_HC',
'Homogeneity_Louvain','Homogeneity_kmeans','Homogeneity_HC'])
workdir = './output/'
path_fm = os.path.join(workdir,'feature_matrices/')
path_clusters = os.path.join(workdir,'clusters/')
path_metrics = os.path.join(workdir,'metrics/')
os.system('mkdir -p '+path_clusters)
os.system('mkdir -p '+path_metrics)
metadata = pd.read_csv('./input/metadata.tsv',sep='\t',index_col=0)
num_clusters = len(np.unique(metadata['label']))
print(num_clusters)
files = [x for x in os.listdir(path_fm) if x.startswith('FM')]
len(files)
files
def getNClusters(adata,n_cluster,range_min=0,range_max=3,max_steps=20):
this_step = 0
this_min = float(range_min)
this_max = float(range_max)
while this_step < max_steps:
print('step ' + str(this_step))
this_resolution = this_min + ((this_max-this_min)/2)
sc.tl.louvain(adata,resolution=this_resolution)
this_clusters = adata.obs['louvain'].nunique()
print('got ' + str(this_clusters) + ' at resolution ' + str(this_resolution))
if this_clusters > n_cluster:
this_max = this_resolution
elif this_clusters < n_cluster:
this_min = this_resolution
else:
return(this_resolution, adata)
this_step += 1
print('Cannot find the number of clusters')
print('Clustering solution from last iteration is used:' + str(this_clusters) + ' at resolution ' + str(this_resolution))
for file in files:
file_split = file.split('_')
method = file_split[1]
dataset = file_split[2].split('.')[0]
if(len(file_split)>3):
method = method + '_' + '_'.join(file_split[3:]).split('.')[0]
print(method)
pandas2ri.activate()
readRDS = robjects.r['readRDS']
df_rds = readRDS(os.path.join(path_fm,file))
fm_mat = pandas2ri.ri2py(robjects.r['data.frame'](robjects.r['as.matrix'](df_rds)))
fm_mat.fillna(0,inplace=True)
fm_mat.columns = metadata.index
adata = sc.AnnData(fm_mat.T)
adata.var_names_make_unique()
adata.obs = metadata.loc[adata.obs.index,]
df_metrics.loc[method,] = ""
#Louvain
sc.pp.neighbors(adata, n_neighbors=15,use_rep='X')
# sc.tl.louvain(adata)
getNClusters(adata,n_cluster=num_clusters)
#kmeans
kmeans = KMeans(n_clusters=num_clusters, random_state=2019).fit(adata.X)
adata.obs['kmeans'] = pd.Series(kmeans.labels_,index=adata.obs.index).astype('category')
#hierachical clustering
hc = AgglomerativeClustering(n_clusters=num_clusters).fit(adata.X)
adata.obs['hc'] = pd.Series(hc.labels_,index=adata.obs.index).astype('category')
#clustering metrics
#adjusted rank index
ari_louvain = adjusted_rand_score(adata.obs['label'], adata.obs['louvain'])
ari_kmeans = adjusted_rand_score(adata.obs['label'], adata.obs['kmeans'])
ari_hc = adjusted_rand_score(adata.obs['label'], adata.obs['hc'])
#adjusted mutual information
ami_louvain = adjusted_mutual_info_score(adata.obs['label'], adata.obs['louvain'],average_method='arithmetic')
ami_kmeans = adjusted_mutual_info_score(adata.obs['label'], adata.obs['kmeans'],average_method='arithmetic')
ami_hc = adjusted_mutual_info_score(adata.obs['label'], adata.obs['hc'],average_method='arithmetic')
#homogeneity
homo_louvain = homogeneity_score(adata.obs['label'], adata.obs['louvain'])
homo_kmeans = homogeneity_score(adata.obs['label'], adata.obs['kmeans'])
homo_hc = homogeneity_score(adata.obs['label'], adata.obs['hc'])
df_metrics.loc[method,['ARI_Louvain','ARI_kmeans','ARI_HC']] = [ari_louvain,ari_kmeans,ari_hc]
df_metrics.loc[method,['AMI_Louvain','AMI_kmeans','AMI_HC']] = [ami_louvain,ami_kmeans,ami_hc]
df_metrics.loc[method,['Homogeneity_Louvain','Homogeneity_kmeans','Homogeneity_HC']] = [homo_louvain,homo_kmeans,homo_hc]
adata.obs[['louvain','kmeans','hc']].to_csv(os.path.join(path_clusters ,method + '_clusters.tsv'),sep='\t')
df_metrics.to_csv(path_metrics+'clustering_scores.csv')
df_metrics
```
| github_jupyter |
<img src="http://cfs22.simplicdn.net/ice9/new_logo.svgz"/>
# Assignment 01: Evaluate the Ad Budget Dataset of XYZ Firm
*The comments/sections provided are your cues to perform the assignment. You don't need to limit yourself to the number of rows/cells provided. You can add additional rows in each section to add more lines of code.*
*If at any point in time you need help on solving this assignment, view our demo video to understand the different steps of the code.*
**Happy coding!**
* * *
#### 1: Import the dataset
```
#Import the required libraries
import pandas as pd
#Import the advertising dataset
df_adv_data = pd.read_csv('Advertising Budget and Sales.csv', index_col=0)
```
#### 2: Analyze the dataset
```
#View the initial few records of the dataset
df_adv_data.head()
#Check the total number of elements in the dataset
df_adv_data.size
```
#### 3: Find the features or media channels used by the firm
```
#Check the number of observations (rows) and attributes (columns) in the dataset
df_adv_data.shape
#View the names of each of the attributes
df_adv_data.columns
```
#### 4: Create objects to train and test the model; find the sales figures for each channel
```
#Create a feature object from the columns
X_feature = df_adv_data[['Newspaper Ad Budget ($)','Radio Ad Budget ($)','TV Ad Budget ($)']]
#View the feature object
X_feature.head()
#Create a target object (Hint: use the sales column as it is the response of the dataset)
Y_target = df_adv_data[['Sales ($)']]
#View the target object
Y_target.head()
#Verify if all the observations have been captured in the feature object
X_feature.shape
#Verify if all the observations have been captured in the target object
Y_target.shape
```
#### 5: Split the original dataset into training and testing datasets for the model
```
#Split the dataset (by default, 75% is the training data and 25% is the testing data)
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X_feature,Y_target,random_state=1)
#Verify if the training and testing datasets are split correctly (Hint: use the shape() method)
print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
```
#### 6: Create a model to predict the sales outcome
```
#Create a linear regression model
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(x_train,y_train)
#Print the intercept and coefficients
print(linreg.intercept_)
print(linreg.coef_)
#Predict the outcome for the testing dataset
y_pred = linreg.predict(x_test)
y_pred
```
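As an optional extra check, one could also look at the R² score of such a model. This is a self-contained sketch on synthetic data rather than the assignment's dataset:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic "ad budget" style data: three spend columns and an almost
# linear response with a little noise (illustrative only).
rng = np.random.RandomState(1)
X = rng.rand(200, 3) * 100
y = X @ np.array([0.05, 0.2, 0.5]) + 3 + rng.normal(0, 1, size=200)

x_tr, x_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = LinearRegression().fit(x_tr, y_tr)
print(model.score(x_te, y_te))  # R^2 on the held-out split, close to 1 here
```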
#### 7: Calculate the Mean Square Error (MSE)
```
#Import required libraries for calculating MSE (mean square error)
from sklearn import metrics
import numpy as np
#Calculate the RMSE (the square root of the MSE)
print(np.sqrt(metrics.mean_squared_error(y_test,y_pred)))
print('True' , y_test.values[0:10])
print()
print('Pred' , y_pred[0:10])
```
| github_jupyter |
# Analytic center computation using a infeasible start Newton method
# The set-up
```
import numpy as np
import pandas as pd
import accpm
from IPython.display import display
%load_ext autoreload
%autoreload 1
%aimport accpm
```
$\DeclareMathOperator{\domain}{dom}
\newcommand{\transpose}{\text{T}}
\newcommand{\vec}[1]{\begin{pmatrix}#1\end{pmatrix}}$
# Theory
To test the $\texttt{analytic_center}$ function we consider the following example. Suppose we want to find the analytic center $x_{ac} \in \mathbb{R}^2$ of the inequalities $x_1 \leq c_1, x_1 \geq 0, x_2 \leq c_2, x_2 \geq 0$. This is a rectange with dimensions $c_1 \times c_2$ centered at at $(\frac{c_1}{2}, \frac{c_2}{2})$ so we should have $x_{ac} = (\frac{c_1}{2}, \frac{c_2}{2})$. Now, $x_{ac}$ is the solution of the minimization problem
\begin{equation*}
\min_{\domain \phi} \phi(x) = - \sum_{i=1}^{4}{\log{(b_i - a_i^\transpose x)}}
\end{equation*}
where
\begin{equation*}
\domain \phi = \{x \;|\; a_i^\transpose x < b_i, i = 1, 2, 3, 4\}
\end{equation*}
with
\begin{align*}
&a_1 = \begin{bmatrix}1\\0\end{bmatrix}, &&b_1 = c_1, \\
&a_2 = \begin{bmatrix}-1\\0\end{bmatrix}, &&b_2 = 0, \\
&a_3 = \begin{bmatrix}0\\1\end{bmatrix}, &&b_3 = c_2, \\
&a_4 = \begin{bmatrix}0\\-1\end{bmatrix}, &&b_4 = 0.
\end{align*}
So we solve
\begin{align*}
&\phantom{iff}\nabla \phi(x) = \sum_{i=1}^{4} \frac{1}{b_i - a_i^\transpose x}a_i = 0 \\
&\iff \frac{1}{c_1-x_1}\begin{bmatrix}1\\0\end{bmatrix} + \frac{1}{x_1}\begin{bmatrix}-1\\0\end{bmatrix} + \frac{1}{c_2-x_2}\begin{bmatrix}0\\1\end{bmatrix} + \frac{1}{x_2}\begin{bmatrix}0\\-1\end{bmatrix} = 0 \\
&\iff \frac{1}{c_1-x_1} - \frac{1}{x_1} = 0, \frac{1}{c_2-x_2} - \frac{1}{x_2} = 0 \\
&\iff x_1 = \frac{c_1}{2}, x_2 = \frac{c_2}{2},
\end{align*}
as expected.
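This derivation can also be checked numerically. Here is a small sketch that minimizes $\phi$ directly with scipy (independent of the `accpm` implementation tested below):

```python
import numpy as np
from scipy.optimize import minimize

c1, c2 = 3.0, 5.0

def phi(x):
    # barrier for the rectangle 0 <= x1 <= c1, 0 <= x2 <= c2
    slacks = np.array([c1 - x[0], x[0], c2 - x[1], x[1]])
    if np.any(slacks <= 0):
        return np.inf  # outside dom(phi)
    return -np.log(slacks).sum()

res = minimize(phi, x0=np.array([0.1, 0.1]), method='Nelder-Mead')
print(res.x)  # close to (c1/2, c2/2) = (1.5, 2.5)
```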
# Testing
We test $\texttt{analytic_center}$ for varying values of $c_1, c_2$ and algorithm parameters $\texttt{alpha, beta}$:
```
def get_results(A, test_input, alpha, beta, tol=10e-8):
expected = []
actual = []
result = []
for (c1, c2) in test_input:
b = np.array([c1, 0, c2, 0])
ac_expected = np.asarray((c1/2, c2/2))
ac_actual = accpm.analytic_center(A, b, alpha = alpha, beta = beta)
expected.append(ac_expected)
actual.append(ac_actual)
# if np.array_equal(ac_expected, ac_actual):
if np.linalg.norm(ac_expected - ac_actual) <= tol:
result.append(True)
else:
result.append(False)
results = pd.DataFrame([test_input, expected, actual, result])
results = results.transpose()
results.columns = ['test_input', 'expected', 'actual', 'result']
print('alpha =', alpha, 'beta =', beta)
display(results)
```
Here we have results for squares of varying sizes and for varying values of $\texttt{alpha}$ and $\texttt{beta}$. In general, the algorithm performs worse on large starting polyhedra than on small ones. This seems acceptable given that we are most concerned with smaller polyhedra.
```
A = np.array([[1, 0],[-1,0],[0,1],[0,-1]])
test_input = [(1, 1), (5, 5), (20, 20), (10e2, 10e2), (10e4, 10e4),
(10e6, 10e6), (10e8, 10e8), (10e10, 10e10),
(0.5, 0.5), (0.1, 0.1), (0.01, 0.01),
(0.005, 0.005), (0.001, 0.001),(0.0005, 0.0005), (0.0001, 0.0001),
(0.00005, 0.00005), (0.00001, 0.00001), (0.00001, 0.00001)]
get_results(A, test_input, alpha=0.01, beta=0.7)
get_results(A, test_input, alpha=0.01, beta=0.99)
get_results(A, test_input, alpha=0.49, beta=0.7)
get_results(A, test_input, alpha=0.25, beta=0.7)
```
| github_jupyter |
```
import fluentpy as _
```
These are solutions for the Advent of Code puzzles of 2018 in the hopes that this might inspire the reader how to use the fluentpy api to solve problems.
See https://adventofcode.com/2018/ for the problems.
The goal of this is not to produce minimal code or necessarily to be as clear as possible, but to showcase as many of the features of fluentpy as possible. Pull requests to use more of fluentpy are welcome!
I do hope however that you find the solutions relatively succinct as your understanding of how fluentpy works grows.
# Day 1
https://adventofcode.com/2018/day/1
```
_(open('input/day1.txt')).read().replace('\n','').call(eval)._
day1_input = (
_(open('input/day1.txt'))
.readlines()
.imap(eval)
._
)
seen = set()
def havent_seen(number):
if number in seen:
return False
seen.add(number)
return True
(
_(day1_input)
.icycle()
.iaccumulate()
.idropwhile(havent_seen)
.get(0)
._
)
```
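For readers less familiar with fluentpy, the second pipeline above (cycle, accumulate, drop while unseen) corresponds to this plain-`itertools` equivalent, shown here as a sketch on the worked AoC examples rather than the puzzle input:
```
import itertools

def first_repeated_frequency(changes):
    # cycle through the change list forever, accumulate running totals,
    # and return the first total seen twice
    seen = set()
    for total in itertools.accumulate(itertools.cycle(changes)):
        if total in seen:
            return total
        seen.add(total)

first_repeated_frequency([+1, -2, +3, +1])  # the AoC example reaches 2 twice
```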
# Day 2
https://adventofcode.com/2018/day/2
```
day2 = open('input/day2.txt').readlines()
def has_two_or_three(code):
counts = _.lib.collections.Counter(code).values()
return 2 in counts, 3 in counts
twos, threes = _(day2).map(has_two_or_three).star_call(zip).to(tuple)
sum(twos) * sum(threes)
def is_different_by_only_one_char(codes):
# REFACT consider how to more effectively vectorize this function
# i.e. map ord, elementwise minus, count non zeroes == 1
code1, code2 = codes
diff_count = 0
for index, char in enumerate(code1):
if char != code2[index]:
diff_count += 1
return 1 == diff_count
(
_(day2)
.icombinations(r=2)
.ifilter(is_different_by_only_one_char)
.get(0)
.star_call(zip)
.filter(lambda pair: pair[0] == pair[1])
.star_call(zip)
.get(0)
.join('')
._
)
```
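The `REFACT` note above suggests vectorizing the character comparison; a NumPy sketch of exactly that idea (map to ordinals, compare elementwise, count mismatches) could look like:
```
import numpy as np

def differs_by_one_char(code1, code2):
    # view both equal-length strings as byte arrays and count positions that differ
    a = np.frombuffer(code1.encode(), dtype=np.uint8)
    b = np.frombuffer(code2.encode(), dtype=np.uint8)
    return np.count_nonzero(a != b) == 1

differs_by_one_char('fghij', 'fguij')  # True: they differ only in the third letter
```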
# Day 3
https://adventofcode.com/2018/day/3
```
line_regex = r'#(\d+) @ (\d+),(\d+): (\d+)x(\d+)'
class Entry(_.lib.collections.namedtuple('Entry', ['id', 'left', 'top', 'width', 'height'])._):
def coordinates(self):
return _.lib.itertools.product._(
range(self.left, self.left + self.width),
range(self.top, self.top + self.height)
)
def parse_day3(line):
return _(line).match(line_regex).groups().map(int).star_call(Entry)._
day3 = _(open('input/day3.txt')).read().splitlines().map(parse_day3)._
plane = dict()
for claim in day3:
for coordinate in claim.coordinates():
plane[coordinate] = plane.get(coordinate, 0) + 1
_(plane).values().filter(_.each != 1).len()._
for claim in day3:
if _(claim.coordinates()).imap(lambda each: plane[each] == 1).all()._:
print(claim.id)
```
# Day 4
https://adventofcode.com/2018/day/4
```
day4_lines = _(open('input/day4.txt')).read().splitlines().sort().self._
class Sleep(_.lib.collections.namedtuple('Sleep', ['duty_start', 'sleep_start', 'sleep_end'])._):
def minutes(self):
return (self.sleep_end - self.sleep_start).seconds // 60
class Guard:
def __init__(self, guard_id, sleeps=None):
self.id = guard_id
self.sleeps = sleeps or list()
def minutes_asleep(self):
return _(self.sleeps).map(_.each.minutes()._).sum()._
def minutes_and_sleep_counts(self):
distribution = dict()
for sleep in self.sleeps:
# problematic if the hour wraps, but it never does, see check below
for minute in range(sleep.sleep_start.minute, sleep.sleep_end.minute):
distribution[minute] = distribution.get(minute, 0) + 1
return _(distribution).items().sorted(key=_.each[1]._, reverse=True)._
def minute_most_asleep(self):
return _(self.minutes_and_sleep_counts()).get(0, tuple()).get(0, 0)._
def number_of_most_sleeps(self):
return _(self.minutes_and_sleep_counts()).get(0, tuple()).get(1, 0)._
guards = dict()
current_guard = current_duty_start = current_sleep_start = None
for line in day4_lines:
time = _.lib.datetime.datetime.fromisoformat(line[1:17])._
if 'Guard' in line:
guard_id = _(line[18:]).match(r'.*?(\d+).*?').group(1).call(int)._
current_guard = guards.setdefault(guard_id, Guard(guard_id))
current_duty_start = time
if 'falls asleep' in line:
current_sleep_start = time
if 'wakes up' in line:
current_guard.sleeps.append(Sleep(current_duty_start, current_sleep_start, time))
# confirm that we don't really have to do real date calculations but can just work with simplified values
for guard in guards.values():
for sleep in guard.sleeps:
assert sleep.sleep_start.minute < sleep.sleep_end.minute
assert sleep.sleep_start.hour == 0
assert sleep.sleep_end.hour == 0
guard = (
_(guards)
.values()
.sorted(key=Guard.minutes_asleep, reverse=True)
.get(0)
._
)
guard.id * guard.minute_most_asleep()
guard = (
_(guards)
.values()
.sorted(key=Guard.number_of_most_sleeps, reverse=True)
.get(0)
._
)
guard.id * guard.minute_most_asleep()
```
# Day 5
https://adventofcode.com/2018/day/5
```
day5 = _(open('input/day5.txt')).read().strip()._
def is_reacting(a_polymer, an_index):
if an_index+2 > len(a_polymer):
return False
first, second = a_polymer[an_index:an_index+2]
return first.swapcase() == second
def reduce(a_polymer):
for index in range(len(a_polymer) - 2, -1, -1):
if is_reacting(a_polymer, index):
a_polymer = a_polymer[:index] + a_polymer[index+2:]
return a_polymer
def fully_reduce(a_polymer):
last_polymer = current_polymer = a_polymer
while True:
last_polymer, current_polymer = current_polymer, reduce(current_polymer)
if last_polymer == current_polymer:
break
return current_polymer
len(fully_reduce(day5))
alphabet = _(range(26)).map(_.each + ord('a')).map(chr)._
shortest_length = float('inf')
for char in alphabet:
polymer = day5.replace(char, '').replace(char.swapcase(), '')
length = len(fully_reduce(polymer))
if length < shortest_length:
shortest_length = length
shortest_length
```
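`fully_reduce` above rescans the polymer until a fixpoint, which is quadratic in the worst case. The standard single-pass alternative keeps a stack of surviving units; this sketch is independent of the notebook's helpers:
```
def reduce_polymer(polymer):
    # push each unit; if it reacts with the unit on top (same letter,
    # opposite case), pop instead -- one pass, O(len(polymer))
    stack = []
    for unit in polymer:
        if stack and stack[-1] == unit.swapcase():
            stack.pop()
        else:
            stack.append(unit)
    return ''.join(stack)

reduce_polymer('dabAcCaCBAcCcaDA')  # the AoC example reduces to 'dabCBAcaDA'
```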
# Day 6
https://adventofcode.com/2018/day/6
```
Point = _.lib.collections.namedtuple('Point', ['x', 'y'])._
day6_coordinates = (
_(open('input/day6.txt'))
.read()
.splitlines()
.map(lambda each: _(each).split(', ').map(int).star_call(Point)._)
._
)
def manhatten_distance(first, second):
return abs(first.x - second.x) + abs(first.y - second.y)
def nearest_two_points_and_distances(a_point):
return (
_(day6_coordinates)
.imap(lambda each: (each, manhatten_distance(each, a_point)))
.sorted(key=_.each[1]._)
.slice(2)
._
)
def has_nearest_point(a_point):
(nearest_point, nearest_distance), (second_point, second_distance) \
= nearest_two_points_and_distances(a_point)
return nearest_distance < second_distance
def nearest_point(a_point):
return nearest_two_points_and_distances(a_point)[0][0]
def plane_extent():
all_x, all_y = _(day6_coordinates).imap(lambda each: (each.x, each.y)).star_call(zip).to(tuple)
min_x, min_y = min(all_x) - 1, min(all_y) - 1
max_x, max_y = max(all_x) + 2, max(all_y) + 2
return (
(min_x, min_y),
(max_x, max_y)
)
def compute_bounding_box():
(min_x, min_y), (max_x, max_y) = plane_extent()
return _.lib.itertools.chain(
(Point(x, min_y) for x in range(min_x, max_x)),
(Point(x, max_y) for x in range(min_x, max_x)),
(Point(min_x, y) for y in range(min_y, max_y)),
(Point(max_x, y) for y in range(min_y, max_y)),
).to(tuple)
bounding_box = compute_bounding_box()
def internal_points():
# any point nearest to a cell on the bounding box owns an infinite region, so it is external
external_points = _(bounding_box).map(nearest_point).to(set)
return set(day6_coordinates) - external_points
def points_by_number_of_nearest_points():
plane = dict()
(min_x, min_y), (max_x, max_y) = plane_extent()
for x in range(min_x, max_x):
for y in range(min_y, max_y):
point = Point(x,y)
if has_nearest_point(point):
plane[point] = nearest_point(point)
plane_points = _(plane).values().to(tuple)
counts = dict()
for point in internal_points():
counts[point] = plane_points.count(point)
return counts
points = points_by_number_of_nearest_points()
_(points).items().sorted(key=_.each[1]._, reverse=True).get(0)._
def total_distance(a_point):
return (
_(day6_coordinates)
.imap(lambda each: manhatten_distance(a_point, each))
.sum()
._
)
def number_of_points_with_total_distance_less(a_limit):
plane = dict()
(min_x, min_y), (max_x, max_y) = plane_extent()
for x in range(min_x, max_x):
for y in range(min_y, max_y):
point = Point(x,y)
plane[point] = total_distance(point)
return (
_(plane)
.values()
.ifilter(_.each < a_limit)
.len()
._
)
number_of_points_with_total_distance_less(10000)
```
# Day 7
https://adventofcode.com/2018/day/7
```
import fluentpy as _
day7_input = (
_(open('input/day7.txt'))
.read()
.findall(r'Step (\w) must be finished before step (\w) can begin.', flags=_.lib.re.M._)
._
)
def execute_in_order(dependencies):
prerequisites = dict()
_(dependencies).each(lambda each: prerequisites.setdefault(each[1], []).append(each[0]))
all_jobs = _(dependencies).flatten().call(set)._
ready_jobs = all_jobs - prerequisites.keys()
done_jobs = []
while 0 != len(ready_jobs):
current_knot = _(ready_jobs).sorted()[0]._
ready_jobs.discard(current_knot)
done_jobs.append(current_knot)
for knot in all_jobs.difference(done_jobs):
if set(done_jobs).issuperset(prerequisites.get(knot, [])):
ready_jobs.add(knot)
return _(done_jobs).join('')._
execute_in_order(day7_input)
def cached_property(cache_instance_variable_name):
def outer_wrapper(a_method):
@property
@_.lib.functools.wraps._(a_method)
def wrapper(self):
if not hasattr(self, cache_instance_variable_name):
setattr(self, cache_instance_variable_name, a_method(self))
return getattr(self, cache_instance_variable_name)
return wrapper
return outer_wrapper
class Jobs:
def __init__(self, dependencies, delays):
self.dependencies = dependencies
self.delays = delays
self._ready = self.all.difference(self.prerequisites.keys())
self._done = []
self._in_progress = set()
@cached_property('_prerequisites')
def prerequisites(self):
prerequisites = dict()
for prerequisite, job in self.dependencies:
prerequisites.setdefault(job, []).append(prerequisite)
return prerequisites
@cached_property('_all')
def all(self):
return _(self.dependencies).flatten().call(set)._
def can_start(self, a_job):
return set(self._done).issuperset(self.prerequisites.get(a_job, []))
def has_ready_jobs(self):
return 0 != len(self._ready)
def get_ready_job(self):
assert self.has_ready_jobs()
current_job = _(self._ready).sorted()[0]._
self._ready.remove(current_job)
self._in_progress.add(current_job)
return current_job, self.delays[current_job]
def set_job_done(self, a_job):
assert a_job in self._in_progress
self._done.append(a_job)
self._in_progress.remove(a_job)
for job in self.unstarted():
if self.can_start(job):
self._ready.add(job)
def unstarted(self):
return self.all.difference(self._in_progress.union(self._done))
def is_done(self):
return set(self._done) == self.all
def __repr__(self):
return f'<Jobs(in_progress={self._in_progress}, done={self._done})>'
@_.lib.dataclasses.dataclass._
class Worker:
id: int
delay: int
current_job: str
jobs: Jobs
def work_a_second(self):
self.delay -= 1
if self.delay <= 0:
self.finish_job_if_working()
self.accept_job_if_available()
def finish_job_if_working(self):
if self.current_job is None:
return
self.jobs.set_job_done(self.current_job)
self.current_job = None
def accept_job_if_available(self):
if not self.jobs.has_ready_jobs():
return
self.current_job, self.delay = self.jobs.get_ready_job()
def execute_in_parallel(dependencies, delays, number_of_workers):
jobs = Jobs(dependencies, delays)
workers = _(range(number_of_workers)).map(_(Worker).curry(
id=_,
delay=0, current_job=None, jobs=jobs,
)._)._
seconds = -1
while not jobs.is_done():
seconds += 1
_(workers).each(_.each.work_a_second()._)
return seconds
test_input = (('C', 'A'), ('C', 'F'), ('A', 'B'), ('A', 'D'), ('B', 'E'), ('D', 'E'), ('F', 'E'))
test_delays = _(range(1,27)).map(lambda each: (chr(ord('A') + each - 1), each)).call(dict)._
execute_in_parallel(test_input, test_delays, 2)
day7_delays = _(range(1,27)).map(lambda each: (chr(ord('A') + each - 1), 60 + each)).call(dict)._
assert 1107 == execute_in_parallel(day7_input, day7_delays, 5)
execute_in_parallel(day7_input, day7_delays, 5)
```
# Day 8
https://adventofcode.com/2018/day/8
```
import fluentpy as _
@_.lib.dataclasses.dataclass._
class Node:
children: tuple
metadata: tuple
@classmethod
def parse(cls, number_iterator):
child_count = next(number_iterator)
metadata_count = next(number_iterator)
return cls(
children=_(range(child_count)).map(lambda ignored: Node.parse(number_iterator))._,
metadata=_(range(metadata_count)).map(lambda ignored: next(number_iterator))._,
)
def sum_all_metadata(self):
return sum(self.metadata) + _(self.children).imap(_.each.sum_all_metadata()._).sum()._
def value(self):
if 0 == len(self.children):
return sum(self.metadata)
return (
_(self.metadata)
.imap(_.each - 1) # convert to indexes
.ifilter(_.each >= 0)
.ifilter(_.each < len(self.children))
.imap(self.children.__getitem__)
.imap(Node.value)
.sum()
._
)
test_input = (2,3,0,3,10,11,12,1,1,0,1,99,2,1,1,2)
test_node = Node.parse(iter(test_input))
assert 138 == test_node.sum_all_metadata()
assert 66 == test_node.value()
day8_input = _(open('input/day8.txt')).read().split(' ').map(int)._
node = Node.parse(iter(day8_input))
node.sum_all_metadata()
```
# Day 9
https://adventofcode.com/2018/day/9
```
class Marble:
def __init__(self, value):
self.value = value
self.prev = self.next = self
def insert_after(self, a_marble):
a_marble.next = self.next
a_marble.prev = self
a_marble.next.prev = a_marble.prev.next = a_marble
def remove(self):
self.prev.next = self.next
self.next.prev = self.prev
return self
class Circle:
def __init__(self):
self.current = None
def play_marble(self, marble):
if self.current is None:
self.current = marble
return 0 # normal insert, no points, only happens once at the beginning
elif marble.value % 23 == 0:
removed = self.current.prev.prev.prev.prev.prev.prev.prev.remove()
self.current = removed.next
return marble.value + removed.value
else:
self.current.next.insert_after(marble)
self.current = marble
return 0 # normal insert, no points
def marble_game(player_count, marbles):
player_scores = [0] * player_count
circle = Circle()
for marble_value in range(marbles + 1):
player_scores[marble_value % player_count] += circle.play_marble(Marble(marble_value))
return max(player_scores)
assert 8317 == marble_game(player_count=10, marbles=1618)
assert 146373 == marble_game(player_count=13, marbles=7999)
assert 2764 == marble_game(player_count=17, marbles=1104)
assert 54718 == marble_game(player_count=21, marbles=6111)
assert 37305 == marble_game(player_count=30, marbles=5807)
marble_game(player_count=455, marbles=71223)
marble_game(player_count=455, marbles=71223*100)
```
# Day 10
https://adventofcode.com/2018/day/10
```
@_.lib.dataclasses.dataclass._
class Particle:
x: int
y: int
delta_x: int
delta_y: int
day10_input = (
_(open('input/day10.txt'))
.read()
.findall(r'position=<\s?(-?\d+),\s+(-?\d+)> velocity=<\s*(-?\d+),\s+(-?\d+)>')
.map(lambda each: _(each).map(int)._)
.call(list)
._
)
%matplotlib inline
def evolve(particles):
particles.x += particles.delta_x
particles.y += particles.delta_y
def devolve(particles):
particles.x -= particles.delta_x
particles.y -= particles.delta_y
def show(particles):
particles.y *= -1
particles.plot(x='x', y='y', kind='scatter', s=1)
particles.y *= -1
last_width = last_height = float('inf')
def particles_are_shrinking(particles):
global last_width, last_height
current_width = particles.x.max() - particles.x.min()
current_height = particles.y.max() - particles.y.min()
is_shrinking = current_width < last_width and current_height < last_height
last_width, last_height = current_width, current_height
return is_shrinking
particles = _.lib.pandas.DataFrame.from_records(
data=day10_input,
columns=['x', 'y', 'delta_x', 'delta_y']
)._
last_width = last_height = float('inf')
seconds = 0
while particles_are_shrinking(particles):
evolve(particles)
seconds += 1
devolve(particles) # the loop overshoots by one step past maximum shrinkage, so step back once
show(particles)
seconds - 1
```
# Day 11
https://adventofcode.com/2018/day/11
```
import fluentpy as _
from pyexpect import expect
def power_level(x, y, grid_serial):
rack_id = x + 10
power_level = rack_id * y
power_level += grid_serial
power_level *= rack_id
power_level //= 100
power_level %= 10
return power_level - 5
assert 4 == power_level(x=3, y=5, grid_serial=8)
assert -5 == power_level(122, 79, grid_serial=57)
assert 0 == power_level(217, 196, grid_serial=39)
assert 4 == power_level(101, 153, grid_serial=71)
def power_levels(grid_serial):
return (
_(range(1, 301))
.product(repeat=2)
.star_map(_(power_level).curry(x=_, y=_, grid_serial=grid_serial)._)
.to(_.lib.numpy.array)
._
.reshape(300, -1)
.T
)
def compute_max_power(matrix, subset_size):
expect(matrix.shape[0]) == matrix.shape[1]
expect(subset_size) <= matrix.shape[0]
expect(subset_size) > 0
# +1 because 300 matrix by 300 subset should produce one value
width = matrix.shape[0] - subset_size + 1
height = matrix.shape[1] - subset_size + 1
output = _.lib.numpy.zeros((width, height))._
for x in range(width):
for y in range(height):
output[x,y] = matrix[y:y+subset_size, x:x+subset_size].sum()
return output
def coordinates_with_max_power(matrix, subset_size=3):
output = compute_max_power(matrix, subset_size=subset_size)
np = _.lib.numpy._
index = np.unravel_index(np.argmax(output), output.shape)
return (
_(index).map(_.each + 1)._, # turn back into coordinates
np.amax(output)
)
result = coordinates_with_max_power(power_levels(18))
assert ((33, 45), 29) == result, result
result = coordinates_with_max_power(power_levels(42))
assert ((21, 61), 30) == result, result
coordinates_with_max_power(power_levels(5034))
def find_best_subset(matrix):
best_max_power = best_subset_size = float('-inf')
best_coordinates = None
for subset_size in range(1, matrix.shape[0] + 1):
coordinates, max_power = coordinates_with_max_power(matrix, subset_size=subset_size)
if max_power > best_max_power:
best_max_power = max_power
best_subset_size = subset_size
best_coordinates = coordinates
return (
best_coordinates,
best_subset_size,
best_max_power,
)
result = coordinates_with_max_power(power_levels(18), subset_size=16)
expect(result) == ((90, 269), 113)
result = coordinates_with_max_power(power_levels(42), subset_size=12)
expect(result) == ((232, 251), 119)
result = find_best_subset(power_levels(18))
expect(result) == ((90, 269), 16, 113)
find_best_subset(power_levels(5034))
```
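`compute_max_power` above re-sums every window from scratch, which is what makes `find_best_subset` slow. A summed-area table yields each window sum from four table lookups; this NumPy sketch is independent of the notebook's code:
```
import numpy as np

def window_sums(matrix, k):
    # summed-area table: S[i, j] holds the sum of matrix[:i, :j]
    S = np.zeros((matrix.shape[0] + 1, matrix.shape[1] + 1))
    S[1:, 1:] = matrix.cumsum(axis=0).cumsum(axis=1)
    # every k-by-k window sum from the four corners of the table
    return S[k:, k:] - S[:-k, k:] - S[k:, :-k] + S[:-k, :-k]

window_sums(np.arange(16).reshape(4, 4), 2)  # 3x3 array; the top-left window sums to 10
```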
# Day 12
https://adventofcode.com/2018/day/12
```
import fluentpy as _
def parse(a_string):
is_flower = _.each == '#'
initial_state = (
_(a_string)
.match(r'initial state:\s*([#.]+)')
.group(1)
.map(is_flower)
.enumerate()
.call(dict)
._
)
patterns = dict(
_(a_string)
.findall(r'([#.]{5})\s=>\s([#.])')
.map(lambda each: (_(each[0]).map(is_flower)._, is_flower(each[1])))
._
)
return initial_state, patterns
def print_state(generation, state):
lowest_offset = min(state)
print(f'{generation:>5} {sum_state(state):>5} {lowest_offset:>5}: ', end='')
print(string_from_state(state))
def string_from_state(state):
lowest_offset, highest_offset = min(state), max(state)
return (
_(range(lowest_offset - 2, highest_offset + 3))
.map(lambda each: state.get(each, False))
.map(lambda each: each and '#' or '.')
.join()
._
)
def sum_state(state):
return (
_(state)
.items()
.map(lambda each: each[1] and each[0] or 0)
.sum()
._
)
def evolve(initial_state, patterns, number_of_generations,
show_sums=False, show_progress=False, show_state=False, stop_on_repetition=False):
current_state = dict(initial_state)
next_state = dict()
def surrounding_of(state, index):
return tuple(state.get(each, False) for each in range(index-2, index+3))
def compute_next_generation():
nonlocal current_state, next_state
first_key, last_key = min(current_state), max(current_state)
for index in range(first_key - 2, last_key + 2):
is_flower = patterns.get(surrounding_of(current_state, index), False)
if is_flower:
next_state[index] = is_flower
current_state, next_state = next_state, dict()
return current_state
seen = set()
for generation in range(number_of_generations):
if show_sums:
print(generation, sum_state(current_state))
if show_progress and generation % 1000 == 0: print('.', end='')
if show_state: print_state(generation, current_state)
if stop_on_repetition:
stringified = string_from_state(current_state)
if stringified in seen:
print(f'repetition on generation {generation}')
print(stringified)
return current_state
seen.add(stringified)
compute_next_generation()
return current_state
end_state = evolve(*parse("""initial state: #..#.#..##......###...###
...## => #
..#.. => #
.#... => #
.#.#. => #
.#.## => #
.##.. => #
.#### => #
#.#.# => #
#.### => #
##.#. => #
##.## => #
###.. => #
###.# => #
####. => #
"""), 20, show_state=True)
assert 325 == sum_state(end_state)
day12_input = open('input/day12.txt').read()
sum_state(evolve(*parse(day12_input), 20))
# still very much too slow
number_of_iterations = 50000000000
#number_of_iterations = 200
sum_state(evolve(*parse(day12_input), number_of_iterations, stop_on_repetition=True))
# extrapolate from the repetition: once the pattern repeats (merely shifted), the score
# grows by a constant increment per generation; these constants are read off the printout above
last_score = 11959
increment_per_generation = 11959 - 11873
last_generation = 135
generations_to_go = number_of_iterations - last_generation
end_score = last_score + generations_to_go * increment_per_generation
end_score
import fluentpy as _
# Numpy implementation for comparison
np = _.lib.numpy._
class State:
@classmethod
def parse_string(cls, a_string):
is_flower = lambda each: int(each == '#')
initial_state = (
_(a_string)
.match(r'initial state:\s*([#\.]+)')
.group(1)
.map(is_flower)
._
)
patterns = (
_(a_string)
.findall(r'([#.]{5})\s=>\s([#.])')
.map(lambda each: (_(each[0]).map(is_flower)._, is_flower(each[1])))
._
)
return initial_state, patterns
@classmethod
def from_string(cls, a_string):
return cls(*cls.parse_string(a_string))
def __init__(self, initial_state, patterns):
self.type = np.uint8
self.patterns = self.trie_from_patterns(patterns)
self.state = np.zeros(len(initial_state) * 3, dtype=self.type)
self.zero = self.state.shape[0] // 2
self.state[self.zero:self.zero+len(initial_state)] = initial_state
def trie_from_patterns(self, patterns):
trie = np.zeros((2,) * 5, dtype=self.type)
for pattern, production in patterns:
trie[pattern] = production
return trie
@property
def size(self):
return self.state.shape[0]
def recenter_or_grow_if_neccessary(self):
# check how much empty space there is, and if re-centering the pattern might be good enough
if self.needs_resize() and self.is_region_empty(0, self.zero - self.size // 4):
self.move(- self.size // 4)
if self.needs_resize() and self.is_region_empty(self.zero + self.size // 4, -1):
self.move(self.size // 4)
if self.needs_resize():
self.grow()
def needs_resize(self):
return any(self.state[:4]) or any(self.state[-4:])
def is_region_empty(self, lower_limit, upper_limit):
return not any(self.state[lower_limit:upper_limit])
def move(self, move_by):
assert move_by != 0
new_state = np.zeros_like(self.state)
if move_by < 0:
new_state[:move_by] = self.state[-move_by:]
else:
new_state[move_by:] = self.state[:-move_by]
self.state = new_state
self.zero += move_by
def grow(self):
new_state = np.zeros(self.size * 2, dtype=self.type)
move_by = self.zero - (self.size // 2)
new_state[self.zero : self.zero + self.size] = self.state
self.state = new_state
self.zero -= move_by
def evolve_once(self):
self.state[2:-2] = self.patterns[
self.state[:-4],
self.state[1:-3],
self.state[2:-2],
self.state[3:-1],
self.state[4:]
]
self.recenter_or_grow_if_neccessary()
return self
def evolve(self, number_of_iterations, show_progress=False, show_state=False):
while number_of_iterations:
self.evolve_once()
number_of_iterations -= 1
if show_progress and number_of_iterations % 1000 == 0:
print('.', end='')
if show_state:
self.print()
return self
def __repr__(self):
return (
_(self.state)
.map(lambda each: each and '#' or '.')
.join()
._
)
def print(self):
print(f"{self.zero:>5} {self.sum():>5}", repr(self))
def sum(self):
return (
_(self.state)
.ienumerate()
.imap(lambda each: each[1] and (each[0] - self.zero) or 0)
.sum()
._
)
test = State.from_string("""initial state: #..#.#..##......###...###
...## => #
..#.. => #
.#... => #
.#.#. => #
.#.## => #
.##.. => #
.#### => #
#.#.# => #
#.### => #
##.#. => #
##.## => #
###.. => #
###.# => #
####. => #
""")
assert 325 == test.evolve(20, show_state=True).sum(), test.sum()
# Much faster initially, but it gets linearly slower as the size of the memory increases,
# and since that size grows linearly with execution time, it's still way too slow.
day12_input = open('input/day12.txt').read()
state = State.from_string(day12_input)
#state.evolve(50000000000, show_progress=True)
#state.evolve(10000, show_progress=True).print()
```
# Day 13
https://adventofcode.com/2018/day/13
```
import fluentpy as _
from pyexpect import expect
Location = _.lib.collections.namedtuple('Location', ('x', 'y'))._
UP, RIGHT, DOWN, LEFT, STRAIGHT = '^>v<|'
UPDOWN, LEFTRIGHT, UPRIGHT, RIGHTDOWN, DOWNLEFT, LEFTUP = r'|-\/\/'
MOVEMENT = {
'^' : Location(0, -1),
'>' : Location(1, 0),
'v' : Location(0, 1),
'<' : Location(-1, 0)
}
CURVE = {
'\\': { '^':'<', '<':'^', 'v':'>', '>':'v'},
'/': { '^':'>', '<':'v', 'v':'<', '>':'^'},
}
INTERSECTION = {
'^': { LEFT:'<', STRAIGHT:'^', RIGHT:'>' },
'>': { LEFT:'^', STRAIGHT:'>', RIGHT:'v' },
'v': { LEFT:'>', STRAIGHT:'v', RIGHT:'<' },
'<': { LEFT:'v', STRAIGHT:'<', RIGHT:'^' },
}
@_.lib.dataclasses.dataclass._
class Cart:
location: Location
orientation: str
world: str
program: iter = _.lib.dataclasses.field._(default_factory=lambda: _((LEFT, STRAIGHT, RIGHT)).icycle()._)
def tick(self):
move = MOVEMENT[self.orientation]
self.location = Location(self.location.x + move.x, self.location.y + move.y)
if self.world_at_current_location() in CURVE:
self.orientation = CURVE[self.world_at_current_location()][self.orientation]
if self.world_at_current_location() == '+':
self.orientation = INTERSECTION[self.orientation][next(self.program)]
return self
def world_at_current_location(self):
expect(self.location.y) < len(self.world)
expect(self.location.x) < len(self.world[self.location.y])
return self.world[self.location.y][self.location.x]
def __repr__(self):
return f'<Cart(location={self.location}, orientation={self.orientation})'
def parse_carts(world):
world = world.splitlines()
for line_number, line in enumerate(world):
for line_offset, character in _(line).enumerate():
if character in '<>^v':
yield Cart(location=Location(line_offset, line_number), orientation=character, world=world)
def crashed_carts(cart, carts):
carts = carts[:]
if cart not in carts:
return tuple() # crashed carts already removed
carts.remove(cart)
for first, second in _([cart]).icycle().zip(carts):
if first.location == second.location:
return first, second
def did_crash(cart, carts):
carts = carts[:]
if cart not in carts: # already removed because of crash
return True
carts.remove(cart)
for first, second in _([cart]).icycle().zip(carts):
if first.location == second.location:
return True
return False
def location_of_first_crash(input_string):
carts = list(parse_carts(input_string))
while True:
for cart in _(carts).sorted(key=_.each.location._)._:
cart.tick()
if did_crash(cart, carts):
return cart.location
def location_of_last_cart_after_crashes(input_string):
carts = list(parse_carts(input_string))
while True:
for cart in _(carts).sorted(key=_.each.location._)._:
cart.tick()
if did_crash(cart, carts):
_(crashed_carts(cart, carts)).each(carts.remove)
if 1 == len(carts):
return carts[0].location
expect(Cart(location=Location(0,0), orientation='>', world=['>-']).tick().location) == (1,0)
expect(Cart(location=Location(0,0), orientation='>', world=['>\\']).tick().location) == (1,0)
expect(Cart(location=Location(0,0), orientation='>', world=['>\\']).tick().orientation) == 'v'
expect(Cart(location=Location(0,0), orientation='>', world=['>+']).tick().orientation) == '^'
cart1, cart2 = parse_carts('>--<')
expect(cart1).has_attributes(location=(0,0), orientation='>')
expect(cart2).has_attributes(location=(3,0), orientation='<')
expect(location_of_first_crash('>--<')) == (2,0)
test_input = r"""/->-\
| | /----\
| /-+--+-\ |
| | | | v |
\-+-/ \-+--/
\------/
"""
expect(location_of_first_crash(test_input)) == (7,3)
day13_input = open('input/day13.txt').read()
location_of_first_crash(day13_input)
test_input = r"""/>-<\
| |
| /<+-\
| | | v
\>+</ |
| ^
\<->/
"""
expect(location_of_last_cart_after_crashes(test_input)) == (6,4)
location_of_last_cart_after_crashes(day13_input)
```
# Day 14
https://adventofcode.com/2018/day/14
```
import fluentpy as _
from pyexpect import expect
scores = bytearray([3,7])
elf1 = 0
elf2 = 1
def reset():
global scores, elf1, elf2
scores = bytearray([3,7])
elf1 = 0
elf2 = 1
def generation():
global scores, elf1, elf2
new_recipe = scores[elf1] + scores[elf2]
first_digit, second_digit = divmod(new_recipe, 10)
if first_digit: scores.append(first_digit)
scores.append(second_digit)
elf1 = (elf1 + 1 + scores[elf1]) % len(scores)
elf2 = (elf2 + 1 + scores[elf2]) % len(scores)
def next_10_after(how_many_generations):
reset()
while len(scores) < how_many_generations + 10:
generation()
return _(scores)[how_many_generations:how_many_generations+10].join()._
expect(next_10_after(9)) == '5158916779'
expect(next_10_after(5)) == '0124515891'
expect(next_10_after(18)) == '9251071085'
expect(next_10_after(2018)) == '5941429882'
day14_input = 894501
print(next_10_after(day14_input))
def generations_till_we_generate(a_number):
needle = _(a_number).str().map(int).call(bytearray)._
reset()
while needle not in scores[-len(needle) - 2:]: # at most two numbers get appended
generation()
return scores.rindex(needle)
expect(generations_till_we_generate('51589')) == 9
expect(generations_till_we_generate('01245')) == 5
expect(generations_till_we_generate('92510')) == 18
expect(generations_till_we_generate('59414')) == 2018
print(generations_till_we_generate(day14_input))
```
# Day 15
https://adventofcode.com/2018/day/15
```
expect = _.lib.pyexpect.expect._
np = _.lib.numpy._
inf = _.lib.math.inf._
dataclasses = _.lib.dataclasses._
def tuplify(a_function):
@_.lib.functools.wraps._(a_function)
def wrapper(*args, **kwargs):
return tuple(a_function(*args, **kwargs))
return wrapper
Location = _.lib.collections.namedtuple('Location', ['x', 'y'])._
NO_LOCATION = Location(-1,-1)
@dataclasses.dataclass
class Player:
type: chr
location: Location
level: 'Level' = dataclasses.field(repr=False)
hitpoints: int = 200
attack_power: int = dataclasses.field(default=3, repr=False)
distances: np.ndarray = dataclasses.field(default=None, repr=False)
predecessors: np.ndarray = dataclasses.field(default=None, repr=False)
def turn(self):
self.distances = self.predecessors = None
if self.hitpoints <= 0:
return # we're dead
if not self.is_in_range_of_enemies():
self.move_towards_enemy()
if self.is_in_range_of_enemies():
self.attack_weakest_enemy_in_range()
return self
def is_in_range_of_enemies(self):
adjacent_values = (
_(self).level.adjacent_locations(self.location)
.map(self.level.level.__getitem__)
._
)
return self.enemy_class() in adjacent_values
def enemy_class(self):
if self.type == 'E':
return 'G'
return 'E'
def move_towards_enemy(self):
targets = (
_(self).level.enemies_of(self)
.map(_.each.attack_positions()._).flatten(level=1)
.filter(self.is_location_reachable)
.sorted(key=self.distance_to_location)
.groupby(key=self.distance_to_location)
.get(0, []).get(1, [])
._
)
target = _(targets).sorted().get(0, None)._
if target is None:
return # no targets in range
self.move_to(self.one_step_towards(target))
def move_to(self, new_location):
self.level.level[self.location] = '.'
self.level.level[new_location] = self.type
self.location = new_location
self.distances = self.predecessors = None
def attack_positions(self):
return self.level.reachable_adjacent_locations(self.location)
def is_location_reachable(self, location):
self.ensure_distances()
return inf != self.distances[location]
def distance_to_location(self, location):
self.ensure_distances()
return self.distances[location]
def one_step_towards(self, location):
self.ensure_distances()
assert 2 == len(self.predecessors[location])
while Location(*self.predecessors[location]) != self.location:
location = Location(*self.predecessors[location])
return location
def ensure_distances(self):
if self.distances is not None:
return
self.distances, self.predecessors = self.level.shortest_distances_from(self.location)
def attack_weakest_enemy_in_range(self):
adjacent_locations = _(self).level.adjacent_locations(self.location)._
target = (
_(self).level.enemies_of(self)
.filter(_.each.location.in_(adjacent_locations)._)
.sorted(key=_.each.hitpoints._)
.groupby(key=_.each.hitpoints._)
.get(0).get(1)
.sorted(key=_.each.location._)
.get(0)
._
)
target.damage_by(self.attack_power)
def damage_by(self, ammount):
self.hitpoints -= ammount
# REFACT this should happen on the level object
if self.hitpoints <= 0:
self.level.players = _(self).level.players.filter(_.each != self)._
self.level.level[self.location] = '.'
class Level:
def __init__(self, level_description):
self.level = np.array(_(level_description).strip().split('\n').map(tuple)._)
self.players = self.parse_players()
self.number_of_full_rounds = 0
@tuplify
def parse_players(self):
for row_number, row in enumerate(self.level):
for col_number, char in enumerate(row):
if char in 'GE':
yield Player(char, location=Location(row_number,col_number), level=self)
def enemies_of(self, player):
return _(self).players.filter(_.each.type != player.type)._
def adjacent_locations(self, location):
return (
_([
(location.x-1, location.y),
(location.x, location.y-1),
(location.x, location.y+1),
(location.x+1, location.y),
])
.star_map(Location)
._
)
def reachable_adjacent_locations(self, location):
return (
_(self).adjacent_locations(location)
.filter(self.is_location_in_level)
.filter(self.is_traversible)
._
)
def is_location_in_level(self, location):
x_size, y_size = self.level.shape
return 0 <= location.x < x_size \
and 0 <= location.y < y_size
def is_traversible(self, location):
return '.' == self.level[location]
def shortest_distances_from(self, location):
distances = np.full(fill_value=_.lib.math.inf._, shape=self.level.shape, dtype=float)
distances[location] = 0
predecessors = np.full(fill_value=NO_LOCATION, shape=self.level.shape, dtype=(int, 2))
next_locations = _.lib.collections.deque._([location])
while len(next_locations) > 0:
current_location = next_locations.popleft()
for location in self.reachable_adjacent_locations(current_location):
if distances[location] <= (distances[current_location] + 1):
continue
distances[location] = distances[current_location] + 1
predecessors[location] = current_location
next_locations.append(location)
return distances, predecessors
def __repr__(self):
return '\n'.join(''.join(line) for line in self.level)
def print(self):
print(
repr(self)
+ f'\nrounds: {self.number_of_full_rounds}'
+ '\n' + _(self).players_in_reading_order().join('\n')._
)
def round(self):
for player in self.players_in_reading_order():
if self.did_battle_end():
return self
player.turn()
self.number_of_full_rounds += 1
return self
def players_in_reading_order(self):
return _(self).players.sorted(key=_.each.location._)._
def run_battle(self):
while not self.did_battle_end():
self.round()
return self
def run_rounds(self, number_of_full_rounds):
for ignored in range(number_of_full_rounds):
self.round()
if self.did_battle_end():
break
return self
def did_battle_end(self):
return _(self).players.map(_.each.type._).call(set).len()._ == 1
def battle_summary(self):
number_of_remaining_hitpoints = _(self).players.map(_.each.hitpoints._).sum()._
return self.number_of_full_rounds * number_of_remaining_hitpoints
level = Level("""\
#######
#.G.E.#
#E....#
#######
""")
expect(level.players) == (Player('G',Location(1,2), level), Player('E', Location(1, 4), level), Player('E', Location(2,1), level))
expect(level.enemies_of(level.players[0])) == (Player('E', Location(x=1, y=4), level), Player('E', Location(x=2, y=1), level))
level.players[0].damage_by(200)
expect(level.players) == (Player('E', Location(1, 4), level), Player('E', Location(2,1), level))
expect(repr(level)) == '''\
#######
#...E.#
#E....#
#######'''
inf = _.lib.math.inf._
NO = [-1,-1]
distances, parents = Level('''
###
#G#
#.#
#E#
###
''').shortest_distances_from(Location(1,1))
expect(distances.tolist()) == [
[inf, inf, inf],
[inf, 0, inf],
[inf, 1, inf],
[inf, inf, inf],
[inf, inf, inf],
]
expect(parents.tolist()) == [
[NO, NO, NO],
[NO, NO, NO],
[NO, [1,1], NO],
[NO, NO, NO],
[NO, NO, NO],
]
distances, parents = Level('''
#######
#E..G.#
#...#.#
#.G.#G#
#######
''').shortest_distances_from(Location(1,1))
expect(distances.tolist()) == [
[inf, inf, inf, inf, inf, inf, inf],
[inf, 0, 1, 2, inf, inf, inf],
[inf, 1, 2, 3, inf, inf, inf],
[inf, 2, inf, 4, inf, inf, inf],
[inf, inf, inf, inf, inf, inf, inf]
]
expect(parents.tolist()) == [
[NO, NO, NO, NO, NO, NO, NO],
[NO, NO, [1, 1], [1, 2], NO, NO, NO],
[NO, [1, 1], [1, 2], [1, 3], NO, NO, NO],
[NO, [2, 1], NO, [2, 3], NO, NO, NO],
[NO, NO, NO, NO, NO, NO, NO]
]
distances, parents = Level('''
#######
#E..G.#
#.#...#
#.G.#G#
#######
''').shortest_distances_from(Location(1,1))
expect(distances[1:-1, 1:-1].tolist()) == [
[0,1,2,inf,6],
[1,inf,3,4,5],
[2,inf,4,inf,inf]
]
level = Level("""\
#######
#.G.E.#
#E....#
#######
""")
expect(level.players[0].location) == Location(1,2)
expect(level.players[0].turn().location) == Location(1,1)
expect(level.players[0].is_in_range_of_enemies()).is_true()
level = Level('''\
#######
#..G..#
#...EG#
#.#G#G#
#...#E#
#.....#
#######''')
expect(level.players[0].is_in_range_of_enemies()).is_false()
level = Level('''\
#########
#G..G..G#
#.......#
#.......#
#G..E..G#
#.......#
#.......#
#G..G..G#
#########''')
expect(level.round().__repr__()) == '''\
#########
#.G...G.#
#...G...#
#...E..G#
#.G.....#
#.......#
#G..G..G#
#.......#
#########'''
expect(level.round().__repr__()) == '''\
#########
#..G.G..#
#...G...#
#.G.E.G.#
#.......#
#G..G..G#
#.......#
#.......#
#########'''
expect(level.round().__repr__()) == '''\
#########
#.......#
#..GGG..#
#..GEG..#
#G..G...#
#......G#
#.......#
#.......#
#########'''
level = Level('''\
#######
#.G...#
#...EG#
#.#.#G#
#..G#E#
#.....#
#######''')
expect(level.round().__repr__()) == '''\
#######
#..G..#
#...EG#
#.#G#G#
#...#E#
#.....#
#######'''
expect(level.players[0]).has_attributes(
hitpoints=200, location=Location(1,3)
)
expect(level.round().__repr__()) == '''\
#######
#...G.#
#..GEG#
#.#.#G#
#...#E#
#.....#
#######'''
level.run_rounds(21)
expect(level.number_of_full_rounds) == 23
expect(repr(level)) == '''\
#######
#...G.#
#..G.G#
#.#.#G#
#...#E#
#.....#
#######'''
expect(level.players).has_len(5)
level.run_rounds(47-23)
expect(level.number_of_full_rounds) == 47
expect(repr(level)) == '''\
#######
#G....#
#.G...#
#.#.#G#
#...#.#
#....G#
#######'''
expect(_(level).players.map(_.each.type._).join()._) == 'GGGG'
expect(level.battle_summary()) == 27730
expect(level.did_battle_end()).is_true()
level = Level('''\
#######
#.G...#
#...EG#
#.#.#G#
#..G#E#
#.....#
#######''')
level.run_battle()
expect(level.battle_summary()) == 27730
level = Level('''\
#######
#G..#E#
#E#E.E#
#G.##.#
#...#E#
#...E.#
#######''').run_battle()
expect(repr(level)) == '''\
#######
#...#E#
#E#...#
#.E##.#
#E..#E#
#.....#
#######'''
expect(level.number_of_full_rounds) == 37
expect(level.battle_summary()) == 36334
level = Level('''\
#######
#E..EG#
#.#G.E#
#E.##E#
#G..#.#
#..E#.#
#######''').run_battle()
expect(level.battle_summary()) == 39514
_('input/day15.txt').call(open).read().call(Level).run_battle().battle_summary()._
def number_of_losses_with_attack_power(level_ascii, attack_power):
level = Level(level_ascii)
elves = lambda: _(level).players.filter(_.each.type == 'E')._
    starting_number_of_elves = len(elves())
    for elf in elves():
        elf.attack_power = attack_power
    level.run_battle()
    return starting_number_of_elves - len(elves()), level.battle_summary()
def minimum_attack_power_for_no_losses(level_ascii):
for attack_power in range(4, 100):
number_of_losses, summary = number_of_losses_with_attack_power(level_ascii, attack_power)
if 0 == number_of_losses:
return attack_power, summary
expect(minimum_attack_power_for_no_losses('''\
#######
#.G...#
#...EG#
#.#.#G#
#..G#E#
#.....#
#######''')) == (15, 4988)
expect(minimum_attack_power_for_no_losses('''\
#######
#E..EG#
#.#G.E#
#E.##E#
#G..#.#
#..E#.#
#######''')) == (4, 31284)
expect(minimum_attack_power_for_no_losses('''\
#######
#E.G#.#
#.#G..#
#G.#.G#
#G..#.#
#...E.#
#######''')) == (15, 3478)
expect(minimum_attack_power_for_no_losses('''\
#######
#.E...#
#.#..G#
#.###.#
#E#G#G#
#...#G#
#######''')) == (12, 6474)
expect(minimum_attack_power_for_no_losses('''\
#########
#G......#
#.E.#...#
#..##..G#
#...##..#
#...#...#
#.G...G.#
#.....G.#
#########''')) == (34, 1140)
_('input/day15.txt').call(open).read().call(minimum_attack_power_for_no_losses)._
```
# Day 16
https://adventofcode.com/2018/day/16
## Registers
- four registers 0,1,2,3
- initialized to 0
## Instructions
- 16 opcodes
- 1 opcode, 2 source registers (A, B), 1 output register (C)
- inputs can be register addresses or immediate values
- output is always a register
We only have the opcode numbers, so we need to work out which operations each number could validly correspond to.
```
import fluentpy as _
expect = _.lib.pyexpect.expect._
operator = _.lib.operator._
def identity(*args):
return args[0]
def register(self, address):
return self.registers[address]
def immediate(self, value):
return value
def ignored(self, value):
return None
def make_operation(namespace, name, operation, a_resolver, b_resolver):
def instruction(self, a, b, c):
self.registers[c] = operation(a_resolver(self, a), b_resolver(self, b))
return self
instruction.__name__ = instruction.__qualname__ = name
namespace[name] = instruction
return instruction
class CPU:
def __init__(self, initial_registers=(0,0,0,0)):
self.registers = list(initial_registers)
operations = (
_([
('addr', operator.add, register, register),
('addi', operator.add, register, immediate),
('mulr', operator.mul, register, register),
('muli', operator.mul, register, immediate),
('banr', operator.and_, register, register),
('bani', operator.and_, register, immediate),
('borr', operator.or_, register, register),
('bori', operator.or_, register, immediate),
('setr', identity, register, ignored),
('seti', identity, immediate, ignored),
('gtir', operator.gt, immediate, register),
('gtri', operator.gt, register, immediate),
('gtrr', operator.gt, register, register),
('eqir', operator.eq, immediate, register),
('eqri', operator.eq, register, immediate),
('eqrr', operator.eq, register, register),
])
.star_map(_(make_operation).curry(locals())._)
._
)
def evaluate_program(self, instructions, opcode_map):
for instruction in instructions:
opcode, a,b,c = instruction
operation = opcode_map[opcode]
operation(self, a,b,c)
return self
@classmethod
def number_of_qualifying_instructions(cls, input_registers, instruction, expected_output_registers):
return len(cls.qualifying_instructions(input_registers, instruction, expected_output_registers))
@classmethod
def qualifying_instructions(cls, input_registers, instruction, expected_output_registers):
opcode, a, b, c = instruction
return (
_(cls)
.operations
.filter(lambda operation: operation(CPU(input_registers), a,b,c).registers == expected_output_registers)
._
)
expect(CPU([3, 2, 1, 1]).mulr(2, 1, 2).registers) == [3, 2, 2, 1]
expect(CPU.number_of_qualifying_instructions([3, 2, 1, 1], (9, 2, 1, 2), [3, 2, 2, 1])) == 3
day16_input = _(open('input/day16.txt')).read()._
test_input, test_program_input = day16_input.split('\n\n\n')
def parse_inputs(before, instruction, after):
return (
_(before).split(', ').map(int).to(list),
_(instruction).split(' ').map(int)._,
_(after).split(', ').map(int).to(list),
)
test_inputs = (
_(test_input)
.findall(r'Before: \[(.*)]\n(.*)\nAfter: \[(.*)\]')
.star_map(parse_inputs)
._
)
(
_(test_inputs)
.star_map(CPU.number_of_qualifying_instructions)
.filter(_.each >= 3)
.len()
._
)
def add_operations(mapping, opcode_and_operations):
opcode, operations = opcode_and_operations
mapping[opcode].append(operations)
return mapping
opcode_mapping = (
_(test_inputs)
.map(_.each[1][0]._) # opcodes
.zip(
_(test_inputs).star_map(CPU.qualifying_instructions)._
)
# list[tuple[opcode, list[list[functions]]]]
.reduce(add_operations, _.lib.collections.defaultdict(list)._)
# dict[opcode, list[list[function]]]
.items()
.star_map(lambda opcode, operations: (
opcode,
_(operations).map(set).reduce(set.intersection)._
))
.to(dict)
# dict[opcode, set[functions]]
)
def resolved_operations():
return (
_(opcode_mapping)
.values()
.filter(lambda each: len(each) == 1)
.reduce(set.union)
._
)
def has_unresolved_operations():
return 0 != (
_(opcode_mapping)
.values()
.map(len)
.filter(_.each > 1)
.len()
._
)
while has_unresolved_operations():
for opcode, matching_operations in opcode_mapping.items():
if len(matching_operations) == 1:
continue # already resolved
opcode_mapping[opcode] = matching_operations.difference(resolved_operations())
opcode_mapping = _(opcode_mapping).items().star_map(lambda opcode, operations: (opcode, list(operations)[0])).to(dict)
# dict[opcode, function]
opcode_mapping
test_program = _(test_program_input).strip().split('\n').map(lambda each: _(each).split(' ').map(int)._)._
CPU().evaluate_program(test_program, opcode_mapping).registers[0]
```
# Day 17
https://adventofcode.com/2018/day/17
```
import fluentpy as _
@_.lib.dataclasses.dataclass._
class ClayLine:
x_from: int
x_to: int
y_from: int
y_to: int
@classmethod
def from_string(cls, a_string):
        first_var, first_value, second_var, second_value_start, second_value_end = \
            _(a_string).fullmatch(r'(\w)=(\d+), (\w)=(\d+)\.\.(\d+)').groups()._
first_value, second_value_start, second_value_end = _((first_value, second_value_start, second_value_end)).map(int)._
if 'x' == first_var:
return cls(first_value, first_value, second_value_start, second_value_end)
else:
return cls(second_value_start, second_value_end, first_value, first_value)
@property
def x_range(self):
return range(self.x_from, self.x_to + 1) # last coordinate is included
@property
def y_range(self):
return range(self.y_from, self.y_to + 1) # last coordinate is included
class Underground:
def __init__(self):
self.earth = dict()
self.earth[(500, 0)] = '+' # spring
self.min_x = self.max_x = 500
self.max_y = - _.lib.math.inf._
self.min_y = _.lib.math.inf._
def add_clay_line(self, clay_line):
for x in clay_line.x_range:
for y in clay_line.y_range:
self.set_earth(x,y, '#', should_adapt_depth=True)
return self
def set_earth(self, x,y, to_what, should_adapt_depth=False):
self.earth[(x,y)] = to_what
# whatever is set will expand the looked at area
if x > self.max_x:
self.max_x = x
if x < self.min_x:
self.min_x = x
if should_adapt_depth:
# only clay setting will expand y (depth)
if y > self.max_y:
self.max_y = y
if y < self.min_y:
self.min_y = y
def flood_fill_down_from_spring(self):
return self.flood_fill_down(500,1)
def flood_fill_down(self, x,y):
while self.can_flow_down(x,y):
if y > self.max_y:
return self
if '|' == self.earth.get((x,y), '.'):
# we've already been here
return self
self.set_earth(x,y, '|')
y += 1
while self.is_contained(x,y):
self.fill_container_level(x,y)
y -=1
self.mark_flowing_water_around(x,y)
for overflow_x in self.find_overflows(x, y):
self.flood_fill_down(overflow_x,y+1)
return self
def fill_container_level(self, x,y):
leftmost_free, rightmost_free = self.find_furthest_away_free_spots(x,y)
for mark_x in range(leftmost_free, rightmost_free+1):
self.set_earth(mark_x,y, '~')
def find_overflows(self, x,y):
leftmost_flow_border, rightmost_flow_border = self.find_flow_borders(x,y)
if self.can_flow_down(leftmost_flow_border, y):
yield leftmost_flow_border
if self.can_flow_down(rightmost_flow_border, y):
yield rightmost_flow_border
def is_blocked(self, x,y):
return self.earth.get((x,y), '.') in '#~'
def can_flow_down(self, x,y):
return not self.is_blocked(x, y+1)
def can_flow_left(self, x,y):
return not self.is_blocked(x-1, y)
def can_flow_right(self, x,y):
return not self.is_blocked(x+1, y)
def x_coordinates_towards(self, x, target_x):
if target_x < x:
return range(x, target_x-2, -1)
else:
return range(x, target_x+2)
def coordinates_towards(self, x,y, target_x):
return _(self.x_coordinates_towards(x, target_x)).map(lambda x: (x, y))._
def first_coordinate_that_satisfies(self, coordinates, a_test):
for x, y in coordinates:
if a_test(x,y):
return x
return None
def is_contained(self, x,y):
leftmost_flow_border, rightmost_flow_border = self.find_flow_borders(x,y)
if leftmost_flow_border is None or rightmost_flow_border is None:
return False
return not self.can_flow_down(leftmost_flow_border,y) and not self.can_flow_down(rightmost_flow_border,y)
def find_furthest_away_free_spots(self, x,y):
blocked_right = self.first_coordinate_that_satisfies(
self.coordinates_towards(x, y, self.max_x),
lambda x,y: not self.can_flow_right(x,y)
)
blocked_left = self.first_coordinate_that_satisfies(
self.coordinates_towards(x, y, self.min_x),
lambda x,y: not self.can_flow_left(x,y)
)
return (blocked_left, blocked_right)
def mark_flowing_water_around(self, x,y):
leftmost_free_spot, rightmost_free_spot = self.find_flow_borders(x,y)
for mark_x in range(leftmost_free_spot, rightmost_free_spot+1):
self.set_earth(mark_x, y, '|')
def find_flow_borders(self, x, y):
# REFACT there should be a fluent utility for this? no?
flow_border_right = self.first_coordinate_that_satisfies(
self.coordinates_towards(x,y, self.max_x),
lambda x,y: self.can_flow_down(x,y) or not self.can_flow_right(x,y)
)
flow_border_left = self.first_coordinate_that_satisfies(
self.coordinates_towards(x, y, self.min_x),
lambda x,y: self.can_flow_down(x,y) or not self.can_flow_left(x,y)
)
return (flow_border_left, flow_border_right)
def __str__(self):
return (
_(range(0, self.max_y+1))
.map(lambda y: (
_(range(self.min_x, self.max_x+1))
.map(lambda x: self.earth.get((x,y), '.'))
.join()
._
))
.join('\n')
._
)
def visualize(self):
print('min_x', self.min_x, 'max_x', self.max_x, 'min_y', self.min_y, 'max_y', self.max_y)
print(str(self))
return self
def number_of_water_reachable_tiles(self):
return (
_(self).earth.keys()
.filter(lambda coordinates: self.min_y <= coordinates[1] <= self.max_y)
.map(self.earth.get)
.filter(_.each.in_('~|')._)
.len()
._
)
def number_of_tiles_with_standing_water(self):
return (
_(self).earth.keys()
.filter(lambda coordinates: self.min_y <= coordinates[1] <= self.max_y)
.map(self.earth.get)
.filter(_.each.in_('~')._)
.len()
._
)
test_input = '''\
x=495, y=2..7
y=7, x=495..501
x=501, y=3..7
x=498, y=2..4
x=506, y=1..2
x=498, y=10..13
x=504, y=10..13
y=13, x=498..504'''
underground = _(test_input).splitlines().map(ClayLine.from_string).reduce(Underground.add_clay_line, Underground()).visualize()._
underground.flood_fill_down_from_spring().visualize()
underground.number_of_water_reachable_tiles()
underground = _(open('input/day17.txt')).read().splitlines().map(ClayLine.from_string).reduce(Underground.add_clay_line, Underground())._
underground.flood_fill_down_from_spring()
from IPython.display import display, HTML
display(HTML(f'<pre style="font-size:6px">{underground}</pre>'))
underground.number_of_water_reachable_tiles()
underground.number_of_tiles_with_standing_water()
```
# Day 18
https://adventofcode.com/2018/day/18
```
import fluentpy as _
from pyexpect import expect
class Area:
OPEN = '.'
TREES = '|'
LUMBERYARD = '#'
def __init__(self, area_description):
self.area = _(area_description).strip().splitlines().to(tuple)
self.generation = 0
self.cache = dict()
def evolve_to_generation(self, target_generation):
remaining_generations = target_generation - self.generation # so we can restart
while remaining_generations > 0:
if self.area in self.cache:
# looping pattern detected
last_identical_generation = self.cache[self.area]
generation_difference = self.generation - last_identical_generation
number_of_possible_jumps = remaining_generations // generation_difference
if number_of_possible_jumps > 0:
remaining_generations -= generation_difference * number_of_possible_jumps
continue # jump forward
self.cache[self.area] = self.generation
self.evolve()
self.generation += 1
remaining_generations -= 1
return self
def evolve(self):
new_area = []
for x, line in enumerate(self.area):
new_line = ''
for y, tile in enumerate(line):
new_line += self.next_tile(tile, self.counts_around(x,y))
new_area.append(new_line)
self.area = tuple(new_area)
return self
def next_tile(self, current_tile, counts):
if current_tile == self.OPEN and counts[self.TREES] >= 3:
return self.TREES
elif current_tile == self.TREES and counts[self.LUMBERYARD] >= 3:
return self.LUMBERYARD
elif current_tile == self.LUMBERYARD:
if counts[self.LUMBERYARD] >= 1 and counts[self.TREES] >= 1:
return self.LUMBERYARD
else:
return self.OPEN
else:
return current_tile
def counts_around(self, x,y):
return _.lib.collections.Counter(self.tiles_around(x,y))._
def tiles_around(self, x,y):
if x > 0:
line = self.area[x-1]
yield from line[max(0, y-1):y+2]
line = self.area[x]
if y > 0: yield line[y-1]
if y+1 < len(line): yield line[y+1]
if x+1 < len(self.area):
line = self.area[x+1]
yield from line[max(0, y-1):y+2]
def resource_value(self):
counts = _(self).area.join().call(_.lib.collections.Counter)._
return counts[self.TREES] * counts[self.LUMBERYARD]
test_input = '''\
.#.#...|#.
.....#|##|
.|..|...#.
..|#.....#
#.#|||#|#|
...#.||...
.|....|...
||...#|.#|
|.||||..|.
...#.|..|.
'''
test_area = _(test_input).call(Area).evolve_to_generation(10)._
expect(test_area.area) == _('''\
.||##.....
||###.....
||##......
|##.....##
|##.....##
|##....##|
||##.####|
||#####|||
||||#|||||
||||||||||
''').strip().splitlines().to(tuple)
expect(test_area.resource_value()) == 1147
area = _(open('input/day18.txt')).read().call(Area)._
area.evolve_to_generation(10).resource_value()
area.evolve_to_generation(1000000000).resource_value()
```
# Day 19
https://adventofcode.com/2018/day/19
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#@title MIT License
#
# Copyright (c) 2017 François Chollet
#
# Permission is hereby granted, free of charge, to any person obtaining a
# copy of this software and associated documentation files (the "Software"),
# to deal in the Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Software, and to permit persons to whom the
# Software is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
# THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.
```
# Basic regression: Predict fuel efficiency
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://tensorflow.google.cn/tutorials/keras/regression"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on TensorFlow.google.cn</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/regression.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/keras/regression.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/keras/regression.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Since community translations are best-effort, there is no guarantee that they are accurate and reflect the latest
[official English documentation](https://www.tensorflow.org/?hl=en). If you have suggestions to improve this translation, please submit a pull request to the
[tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository. To volunteer to write or review translations, please join the
[docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn).
In a *regression* problem, we aim to predict the output of a continuous value, like a price or a probability. Contrast this with a *classification* problem, where we aim to select a class from a list of classes (for example, given a picture that contains an apple or an orange, recognizing which fruit is in the picture).
This notebook uses the classic [Auto MPG](https://archive.ics.uci.edu/ml/datasets/auto+mpg) dataset and builds a model to predict the fuel efficiency of late-1970s and early-1980s automobiles. To do this, we provide the model with a description of many automobiles from that time period. This description includes attributes such as cylinders, displacement, horsepower, and weight.
This example uses the `tf.keras` API; see [this guide](https://tensorflow.google.cn/guide/keras) for details.
```
# Use seaborn to draw the pairplot
!pip install seaborn
from __future__ import absolute_import, division, print_function, unicode_literals
import pathlib
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
print(tf.__version__)
```
## The Auto MPG dataset
The dataset is available from the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/).
### Get the data
First download the dataset.
```
dataset_path = keras.utils.get_file("auto-mpg.data", "http://archive.ics.uci.edu/ml/machine-learning-databases/auto-mpg/auto-mpg.data")
dataset_path
```
Import it using pandas.
```
column_names = ['MPG','Cylinders','Displacement','Horsepower','Weight',
'Acceleration', 'Model Year', 'Origin']
raw_dataset = pd.read_csv(dataset_path, names=column_names,
na_values = "?", comment='\t',
sep=" ", skipinitialspace=True)
dataset = raw_dataset.copy()
dataset.tail()
```
### Clean the data
The dataset contains a few unknown values.
```
dataset.isna().sum()
```
To keep this initial example simple, drop those rows.
```
dataset = dataset.dropna()
```
The `"Origin"` column is really categorical, not numeric. So convert it to a one-hot encoding:
```
origin = dataset.pop('Origin')
dataset['USA'] = (origin == 1)*1.0
dataset['Europe'] = (origin == 2)*1.0
dataset['Japan'] = (origin == 3)*1.0
dataset.tail()
```
### Split the data into training and test sets
Now split the dataset into a training set and a test set.
We will use the test set in the final evaluation of our model.
```
train_dataset = dataset.sample(frac=0.8,random_state=0)
test_dataset = dataset.drop(train_dataset.index)
```
### Inspect the data
Have a quick look at the joint distribution of a few pairs of columns from the training set.
```
sns.pairplot(train_dataset[["MPG", "Cylinders", "Displacement", "Weight"]], diag_kind="kde")
```
Also look at the overall statistics:
```
train_stats = train_dataset.describe()
train_stats.pop("MPG")
train_stats = train_stats.transpose()
train_stats
```
### Split features from labels
Separate the target value, or "label", from the features. This label is the value that you will train the model to predict.
```
train_labels = train_dataset.pop('MPG')
test_labels = test_dataset.pop('MPG')
```
### Normalize the data
Look again at the `train_stats` block above and note how different the ranges of each feature are.
It is good practice to normalize features that use different scales and ranges. Although the model *might* converge without feature normalization, it makes training more difficult, and it makes the resulting model dependent on the choice of units used in the input.
Note: Although we intentionally generate these statistics from only the training dataset, they will also be used to normalize the test dataset. We need to do that to project the test dataset into the same distribution that the model has been trained on.
```
def norm(x):
return (x - train_stats['mean']) / train_stats['std']
normed_train_data = norm(train_dataset)
normed_test_data = norm(test_dataset)
```
We will use this normalized data to train the model.
Caution: The statistics used to normalize the inputs here (mean and standard deviation) need to be applied to any other data that is fed to the model, together with the one-hot encoding that we did earlier. That includes the test set as well as live data when the model is used in production.
## The model
### Build the model
Let's build our model. Here we will use a `Sequential` model with two densely connected hidden layers, and an output layer that returns a single, continuous value. The model-building steps are wrapped in a function, `build_model`, since we will create a second model later on.
```
def build_model():
model = keras.Sequential([
layers.Dense(64, activation='relu', input_shape=[len(train_dataset.keys())]),
layers.Dense(64, activation='relu'),
layers.Dense(1)
])
optimizer = tf.keras.optimizers.RMSprop(0.001)
model.compile(loss='mse',
optimizer=optimizer,
metrics=['mae', 'mse'])
return model
model = build_model()
```
### Inspect the model
Use the `.summary` method to print a simple description of the model.
```
model.summary()
```
Now try out the model. Take a batch of 10 examples from the training data and call `model.predict` on them.
```
example_batch = normed_train_data[:10]
example_result = model.predict(example_batch)
example_result
```
It seems to be working, and it produces a result of the expected shape and type.
### Train the model
Train the model for 1000 epochs, and record the training and validation accuracy in the `history` object.
```
# Display training progress by printing a single dot for each completed epoch
class PrintDot(keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs):
if epoch % 100 == 0: print('')
print('.', end='')
EPOCHS = 1000
history = model.fit(
normed_train_data, train_labels,
epochs=EPOCHS, validation_split = 0.2, verbose=0,
callbacks=[PrintDot()])
```
Visualize the model's training progress using the stats stored in the `history` object.
```
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
hist.tail()
def plot_history(history):
hist = pd.DataFrame(history.history)
hist['epoch'] = history.epoch
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Abs Error [MPG]')
plt.plot(hist['epoch'], hist['mae'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mae'],
label = 'Val Error')
plt.ylim([0,5])
plt.legend()
plt.figure()
plt.xlabel('Epoch')
plt.ylabel('Mean Square Error [$MPG^2$]')
plt.plot(hist['epoch'], hist['mse'],
label='Train Error')
plt.plot(hist['epoch'], hist['val_mse'],
label = 'Val Error')
plt.ylim([0,20])
plt.legend()
plt.show()
plot_history(history)
```
This graph shows little improvement, or even degradation, in the validation error after about 100 epochs. Let's update the `model.fit` call to automatically stop training when the validation score does not improve.
We will use an *EarlyStopping callback* that tests a training condition after every epoch. If a set number of epochs elapses without showing improvement, it automatically stops the training.
You can learn more about this callback [here](https://tensorflow.google.cn/versions/master/api_docs/python/tf/keras/callbacks/EarlyStopping).
```
model = build_model()
# The patience parameter is the number of epochs to check for improvement
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)
history = model.fit(normed_train_data, train_labels, epochs=EPOCHS,
validation_split = 0.2, verbose=0, callbacks=[early_stop, PrintDot()])
plot_history(history)
```
The graph shows that on the validation set the average error is usually around +/- 2 MPG. Is this good? We will leave that decision up to you.
Let's see how well the model generalizes by using the **test** set, which we did not use when training the model. This tells us how well we can expect the model to predict when we use it in the real world.
```
loss, mae, mse = model.evaluate(normed_test_data, test_labels, verbose=2)
print("Testing set Mean Abs Error: {:5.2f} MPG".format(mae))
```
### Make predictions
Finally, predict MPG values using data in the test set:
```
test_predictions = model.predict(normed_test_data).flatten()
plt.scatter(test_labels, test_predictions)
plt.xlabel('True Values [MPG]')
plt.ylabel('Predictions [MPG]')
plt.axis('equal')
plt.axis('square')
plt.xlim([0,plt.xlim()[1]])
plt.ylim([0,plt.ylim()[1]])
_ = plt.plot([-100, 100], [-100, 100])
```
It looks like our model predicts reasonably well. Let's take a look at the error distribution.
```
error = test_predictions - test_labels
plt.hist(error, bins = 25)
plt.xlabel("Prediction Error [MPG]")
_ = plt.ylabel("Count")
```
It is not quite Gaussian, but we might expect that because the number of samples is very small.
## Conclusion
This notebook introduced a few techniques for handling a regression problem.
* Mean Squared Error (MSE) is a common loss function used for regression problems (different loss functions are used for classification problems).
* Similarly, the evaluation metrics used for regression differ from classification. A common regression metric is Mean Absolute Error (MAE).
* When numeric input features have values with different ranges, each feature should be scaled independently to the same range.
* If there is not much training data, one technique is to prefer a small network with few hidden layers to avoid overfitting.
* Early stopping is a useful technique to prevent overfitting.
# Color Detect Application
----
<div class="alert alert-box alert-info">
Please use JupyterLab http://<board_ip_address>/lab for this notebook.
</div>
This notebook shows how to download and play with the Color Detect Application
## Aims
* Instantiate the application
* Start the application
* Play with the runtime parameters
* Stop the application
## Table of Contents
* [Download Composable Overlay](#download)
* [Start Application](#start)
* [Play with the Application](#play)
* [Stop Application](#stop)
* [Conclusion](#conclusion)
----
## Revision History
* v1.0 | 30 March 2021 | First notebook revision.
----
## Download Composable Overlay <a class="anchor" id="download"></a>
Download the Composable Overlay using the `ColorDetect` class, which wraps all the functionality needed to run this application
```
from composable_pipeline import ColorDetect
app = ColorDetect("../overlay/cv_dfx_4_pr.bit")
```
## Start Application <a class="anchor" id="start"></a>
Start the application by calling the `.start()` method. This will:
1. Initialize the pipeline
1. Set up initial parameters
1. Display the implemented pipeline
1. Configure HDMI in and out
The output image should be visible on the external screen at this point
<div class="alert alert-heading alert-danger">
<h4 class="alert-heading">Warning:</h4>
Failure to connect HDMI cables to a valid video source and screen may cause the notebook to hang
</div>
```
app.start()
```
## Play with the Application <a class="anchor" id="play"></a>
The `.play` attribute exposes several runtime parameters
### Color Space
This drop-down menu allows you to select between the available color spaces
* [HSV](https://en.wikipedia.org/wiki/HSL_and_HSV)
* [RGB](https://en.wikipedia.org/wiki/RGB_color_space)
$h_{0-2}$, $s_{0-2}$, $v_{0-2}$ represent the thresholding values for the three channels
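As a rough software analogue of what the hardware thresholding stage does (a NumPy sketch under assumed semantics, not the overlay's actual implementation), a pixel is kept when every channel falls inside its band:

```python
import numpy as np

def threshold_channels(img, lo, hi):
    """Keep pixels whose three channels all fall inside [lo, hi] per channel.

    img: (H, W, 3) array (e.g. HSV order); lo, hi: length-3 bound sequences.
    Returns a boolean mask of shape (H, W).
    """
    lo = np.asarray(lo)
    hi = np.asarray(hi)
    return np.all((img >= lo) & (img <= hi), axis=-1)

# Tiny example: a 1x2 "image" with one in-range and one out-of-range pixel.
img = np.array([[[30, 200, 200], [120, 50, 50]]])
mask = threshold_channels(img, lo=(20, 100, 100), hi=(40, 255, 255))
```

The hardware pipeline applies the same per-channel comparison, but on the live HDMI stream.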
### Noise reduction
This drop-down menu allows you to disable noise reduction in the application
```
app.play
```
## Stop Application <a class="anchor" id="stop"></a>
Finally stop the application to release the resources
<div class="alert alert-heading alert-danger">
<h4 class="alert-heading">Warning:</h4>
Failure to stop the HDMI Video may hang the board
when trying to download another bitstream onto the FPGA
</div>
```
app.stop()
```
----
## Conclusion <a class="anchor" id="conclusion"></a>
This notebook has presented the Color Detect Application that leverages the Composable Overlay.
The runtime parameters of this application can be modified using drop-downs and sliders from `ipywidgets`
[⬅️ Corner Detect Application](02_corner_detect_app.ipynb) | | [Filter2D Application ➡️](04_filter2d_app.ipynb)
Copyright © 2021 Xilinx, Inc
SPDX-License-Identifier: BSD-3-Clause
----
# MMN 11 - Computability and Complexity
## Question 1
We define a Turing Machine (TM) which decides the palindrome language PAL $:= \{w \in \{0, 1\}^* \mid w = w^R\}$.
### Overview
The TM reads the leftmost symbol of the tape, loops forward to the end of the input, reads the rightmost symbol,
compares the two, and rejects if the leftmost and rightmost symbols are not equal; it then rewinds the tape. If no symbols are left, the TM accepts the word.
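The loop above can be sketched in plain Python (a sketch of the strategy, not the formal machine):

```python
def tm_palindrome(word):
    """Mimic the TM: erase the leftmost symbol, run to the right end,
    compare and erase, rewind, and repeat until the tape is empty."""
    tape = list(word)
    while tape:
        left = tape.pop(0)      # read and blank the leftmost symbol
        if not tape:            # odd-length word: only the middle symbol remains
            return True
        if tape.pop() != left:  # rightmost symbol does not match -> reject
            return False
    return True                 # no symbols left -> accept
```

Each iteration of the `while` loop corresponds to one forward-wind/compare/rewind cycle of the machine.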
### Decidability
Our TM _decides_ PAL because it rejects all words such that a symbol at the ith position from the start of the word is not equal to the symbol at the ith position from the end of the word (i.e all non-palindromes),
and accepts all other words - which are in PAL by way of its definition.
### Definition
1. Finite set of states $Q = \{ q_0, q_1, q_2, q_3, q_4, q_5, q_{accept}, q_{reject} \} $
2. Input alphabet $ \Sigma = \{0,1\} $
3. Tape alphabet $\Gamma = \{0,1,\_ \}$
4. Transition function $\delta : Q \times \Gamma \rightarrow Q \times \Gamma \times \{ L, R \} $ (as defined by the diagram).
5. Start state $q_0$
6. Accept state $ q_{accept} $ (the input is a palindrome).
7. Reject state $q_{reject} $ (the input is not a palindrome).
### Diagram
```
from graphviz import Digraph
f = Digraph('palindrome deciding turing machine')
f.attr(rankdir='LR', size='8,5')
f.attr('node', shape='Mdiamond')
f.node('start')
f.attr('node', shape='circle')
f.edge('start', 'q_0')
# initial state - read left symbol
f.edge('q_0', 'q_1', label='0 -> _, R')
f.edge('q_0', 'q_3', label='1 -> _, R')
f.edge('q_0', 'q_accept', label='_ -> R')
# loop forward - case 0
f.edge('q_1','q_1', label='0, 1 -> R')
f.edge('q_1','q_2', label='_ -> L')
# check right symbol - case 0
f.edge('q_2','q_5', label='0, _ -> _, L')
f.edge('q_2','q_reject', label='1 -> R')
# loop forward - case 1
f.edge('q_3','q_3', label='0, 1 -> R')
f.edge('q_3','q_4', label='_ -> L')
# check right symbol - case 1
f.edge('q_4','q_5', label='1, _ -> _, L')
f.edge('q_4','q_reject', label='0 -> R')
# rewind to tape start
f.edge('q_5','q_5', label='0,1 -> L')
f.edge('q_5','q_0', label='_ -> R')
f
```
## Question 2.A
We're given $|w| = n$, and $ w \# w \in B $ (:= as in example 3.9, p.173).
The TM $M_1$ (:= as in figure 3.10, p.174) reads the symbol '#' when performing the following:
1. ($q_2$/$q_3$) winding forward to the end of the input.
2. ($q_6$) re-winding to start of input.
3. ($q_1$) all symbols except middle already checked.
Since $ w\#w \in B $, $w \# w$ is not rejected: $M_1$ checks every pair of symbols before entering $q_{accept}$.
Thus, # is read twice per symbol-pair check (2n times), and once more right before reaching $q_{accept}$.
We conclude # is read 2n+1 times in total.
## Question 2.B
We can achieve a reduction to n+1 reads of the symbol # by utilizing the symbol itself to mark the first unchecked symbol of the duplicate word.
Consequently, the # symbol will still be read once when forward-winding, but the read during re-winding is eliminated (thus 2n+1-n = n+1).
1. We'll modify the forward-winding step (q_2/q_3) by adding symbol x to the self-loop cases, we'll also write an x upon reaching #.
2. We'll modify the check step (q_4/q_5) by writing a # after doing the comparison.
3. We'll modify the first rewind step (q_6) by looping back until the first 0/1 symbol.
4. We'll modify the second rewind step (q_7) by looping back until the first x symbol.
## Question 3
Given an extended TM $M_e$ with a transition of the type $ \delta(q, a) = (r, b, R_k)$ or $ \delta(q, a) = (r, b, L_k)$.
We can simulate $M_e$ with a canonical TM ($M_c$) by adding symbols to the tape alphabet $\Gamma$ or by adding states to the set of states $Q$.
By way of the above, $M_e$ and $M_c$ are equivalent in computational power.
We'll elect to demonstrate the state based method:
For each state in $M_e$ with an extended transition (k>1) to it, we'll create k transition states in $M_c$.
We'll arrange the transition states sequentially, such that each moves the tape-head exactly once (for total of k), the last state transitions to the target state.
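The construction can be sketched concretely; the helper below (function and state names are illustrative, not from the course material) expands one extended move $R_k$ into $k$ canonical single-step transitions:

```python
def expand_transition(q, a, r, b, k, direction="R"):
    """Expand delta(q, a) = (r, b, R_k) into k canonical one-step moves.

    Returns a list of (state, read, next_state, write, move) tuples, where
    '*' means "any symbol, written back unchanged".
    """
    if k == 1:
        return [(q, a, r, b, direction)]
    # First step leaves q, writes b, and enters the first helper state.
    chain = [(q, a, f"{q}->{r}#1", b, direction)]
    # Intermediate helper states just move the head once each.
    for i in range(1, k - 1):
        chain.append((f"{q}->{r}#{i}", "*", f"{q}->{r}#{i+1}", "*", direction))
    # The last helper state moves once more and lands in the target state r.
    chain.append((f"{q}->{r}#{k-1}", "*", r, "*", direction))
    return chain
```

The chain has exactly k transitions, so the head moves k cells, matching the extended move.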
### Diagram
```
from graphviz import Digraph
dot = Digraph('transition extended turing machine simulator')
dot.attr(rankdir='LR')
dot.attr('node', shape='circle')
# Extended TM transition
dot.edge('q_extended_0', 'q_extended_1', label='a -> b, R_k')
# Canonical TM simulation
dot.edge('q_canonical_0', 'q_canonical_1_k_1', label='a -> b, R')
dot.edge('q_canonical_1_k_1', 'q_canonical_1_k_...', label='* -> R')
dot.edge('q_canonical_1_k_...', 'q_canonical_1_k_{k-1}', label='* -> R')
dot.edge('q_canonical_1_k_{k-1}', 'q_canonical_1', label='* -> R')
dot
```
## Question 4.A
### Overview
We accept if |w| is found to be composite; we reject if |w| is found to be prime.
The idea is to alternate between 2 TMs, such that one recognizes the language and the other recognizes its complement.
Thus, we form an NDTM (Non-Deterministic TM) which answers the definition of a decider.
### Steps
1. accept if |w| <=1
2. choose an i with 1<i<n and mark every ith position until the end of the input
3. accept if the last input position is marked
4. reject if the marking does not end exactly on the last symbol
5. back to 2
The NDTM advantage is we don't have to keep track of i.
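The branching in step 2 can be explored deterministically by trying every stride in turn; a plain-Python sketch of the search tree (following the machine's stated step 1, which accepts outright when |w| <= 1):

```python
def length_is_composite(n):
    """Explore the NDTM's guesses deterministically: some stride i with
    1 < i < n marks the last cell exactly iff i divides n, i.e. n is composite."""
    if n <= 1:
        return True         # step 1 of the machine accepts |w| <= 1 as stated
    for i in range(2, n):   # step 2: guess a stride i
        if n % i == 0:      # step 3: the last input position gets marked
            return True
    return False            # no branch accepts -> n is prime, reject
```

The NDTM accepts iff at least one branch (one value of i) accepts, which is exactly what the loop checks.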
## Question 4.B
If we swap $q_{accept}$ and $q_{reject}$, we'll reject all composites and accept all primes.
The NDTM will become a word length primality test - i.e decide the language of prime length words.
This is correct because the proposed NDTM is composed of a prime recognizer and a composite recognizer (which are complements).
## Question 5
We're required to define a two-tape deterministic TM which does a DFS on the search tree defined by a non-deterministic TM decider.
### Notes and Assumptions
* We're given the NDTM to be simulated is a decider, which means every branch terminates.
* If we make a choice, we won't ever need to go back because it will terminate. So no need to keep the original input.
* We need some way to know which choices were made in the NDTM; otherwise, we'll always go down the same branch.
* Having the NDTM choices allows us to not necessarily start at the root.
* The simulating TM will reject any input the NDTM rejects and accept any input the NDTM accepts. Therefore, it decides the same language.
### Overview
1. first tape contains the input - same as for the NDTM.
2. second tape contains the NDTM path choices.
3. an alphabet for the second tape $\Gamma_2 = \{ c_1, ... , c_n \}$ is defined to allow decision tree input.
### Description
1. Initially tape 1 contains the original input and tape 2 contains the NDTM choices.
2. Before each transition on tape 1, check tape 2 to see which choice the NDTM has made.
2.1 move the 2nd tape-head one symbol right.
3. Make the transition on the input tape.
4. accept if $q_{accept}$ is reached, reject if $q_{reject}$ is reached.
5. Back to step 2.
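A minimal sketch of the description above (a hypothetical encoding of the choice tape, not the formal two-tape construction): tape 2 is a list of option indices that steers the run down one branch of the NDTM's tree.

```python
def run_with_choices(delta, start, accept, reject, word, choices):
    """Deterministically follow one branch of a nondeterministic delta.

    delta maps (state, symbol) -> list of (next_state, write, move) options;
    choices (tape 2) selects which option to take at every branching point.
    """
    tape = list(word) + ["_"]
    state, head, c = start, 0, 0
    while state not in (accept, reject):
        options = delta[(state, tape[head])]
        if len(options) > 1:        # step 2: consult tape 2 and
            pick = choices[c]       # move its head one symbol right
            c += 1
        else:
            pick = 0
        state, tape[head], move = options[pick]   # step 3: transition on tape 1
        head += 1 if move == "R" else -1
    return state == accept                        # step 4

# A toy NDTM that guesses whether to accept or reject on reading '0'.
delta = {("s", "0"): [("acc", "0", "R"), ("rej", "0", "R")]}
```

Because the NDTM is a decider, every choice string leads to a halting branch, so the loop always terminates.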
## Question 6
We're required to define an enumerator and draw a diagram for the language $ A = \{ 0^{2^n} | n \in \mathbb{N} \}$.
We're given: $\Sigma = \{ 0 \}$, $\Gamma = \{0, x, \_ \}$.
### Notes and Assumptions
* Enumerator formally defined on p.16 of course manual.
* A is infinite, so there's no halting state - we leave it unused in our definition.
* A is the language of even-length words consisting of the symbol '0'.
* The enumerator must print all possible words in A.
* The enumerator must not print anything not in A.
* Print order doesn't matter.
* Printing duplicates is allowed.
* Printing clears the output tape.
* We can write $\epsilon$ (nothing) to the output tape on transitions that do nothing.
### Overview
0. initially both tapes are empty.
1. mark the work tape start by skipping a space ($\_$).
2. write xx to the work tape starting from current tape-head position (i.e concat at input end).
3. move the work tape-head to the start of the input.
4. scan both tapes in tandem: for each x in the work tape, write a 0 to the output tape.
5. print the output.
6. back to step 2.
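The print loop can be mirrored by an ordinary Python generator (an illustration of the enumeration order under the notebook's even-length reading of A, not the machine itself):

```python
from itertools import count, islice

def enumerate_A():
    """Yield the words in the order the enumerator prints them:
    each cycle appends xx to the work tape, i.e. prints two more 0s."""
    for n in count(1):
        yield "0" * (2 * n)

first_words = list(islice(enumerate_A(), 4))
```

Like the enumerator, the generator never halts; it is sampled from the outside.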
### Definition
1. $Q = \{ q_0, q_1, q_2, q_3, q_4, q_{print}, q_{halt} \} $
2. $\Gamma = \{0, x, \_ \}$
3. $\Sigma = \{ 0 \}$
4. $ \delta : Q \times \Gamma \rightarrow Q \times \Gamma \times \{ L, R \} \times ( \Sigma \cup \{ \epsilon \} ) $
5. Initial state $q_0$.
6. Print state $q_{print}$.
7. Halting state $q_{halt}$ (unused).
### Diagram
```
from graphviz import Digraph
f = Digraph('even length non-empty 0 filled word enumerator')
f.attr(rankdir='LR', size='8,5')
# draw the start state on the graph
f.attr('node', shape='Mdiamond')
f.node('start')
f.attr('node', shape='circle')
f.edge('start', 'q_0')
# 1. mark work tape start
f.edge('q_0', 'q_1', label='* -> R')
# 2. write xx to the work tape
f.edge('q_1', 'q_2', label='* -> x, R')
f.edge('q_2', 'q_3', label='* -> x, R')
# 3. rewind work-tape-head to input start
f.edge('q_3', 'q_3', label='0, x -> L')
f.edge('q_3', 'q_4', label='_ -> R')
# 4. write 0s to the output tape
f.edge('q_4', 'q_4', label='0, x -> R, 0')
f.edge('q_4', 'q_print', label='_ -> R')
# 5. print and continue on to next cycle
f.edge('q_print', 'q_1', label='* -> L')
f
```
# To use this notebook
- Open in Azure Data Studio
- Ensure the Kernel is set to "PowerShell"
# You can run Flyway in a variety of ways
Community edition is free
You may download and install locally - [https://flywaydb.org/download/](https://flywaydb.org/download/)
You may use the flyway docker container - [https://github.com/flyway/flyway-docker](https://github.com/flyway/flyway-docker)
# Running the Flyway Docker container
Install Docker and make sure it's running - [https://docs.docker.com/get-docker/](https://docs.docker.com/get-docker/)
Instructions to run Flyway via Docker are here - [https://github.com/flyway/flyway-docker](https://github.com/flyway/flyway-docker)
Some examples of this are below
# Run Flyway and return info on available commands
If the image isn't available for you locally yet (first run), this command should automatically pull it.
The --rm causes Docker to automatically remove the container when it exits.
```
docker run --rm flyway/flyway
```
# A simple test of Flyway's info command using the H2 in memory database
```
!docker run --rm flyway/flyway -url=jdbc:h2:mem:test -user=sa info
```
# Let's talk to a SQL Server
I'm using a config file here, by passing in a volume with -v. We are naming the volume /flyway/conf.
- This needs to be an absolute path to the folder where you have flyway.conf
- You will need to edit the connection string, user, and password in flyway.conf
- You will need to create a database named GitForDBAs (or change the config file to reference a database of another name which already exists)
I'm using a second volume mapping to a folder that holds my flyway migrations. We are naming the volume /flyway/sql.
- This needs to be an absolute path to the folder where you have migrations stored
- The filenames for the migrations matter -- Flyway uses the file names to understand what type of script it is and the order in which it should be run
Note: I have spread this across multiple lines using the \` character for readability purposes
# Call Flyway info to inspect
```
docker run --rm `
-v C:\Git\GitForDBAs\flywayconf:/flyway/conf `
-v C:\Git\GitForDBAs\migrations:/flyway/sql `
flyway/flyway info
```
# Call Flyway migrate to execute
```
docker run --rm `
-v C:\Git\GitForDBAs\flywayconf:/flyway/conf `
-v C:\Git\GitForDBAs\migrations:/flyway/sql `
flyway/flyway migrate
```
# Examine the table - open a new query
```
USE GitForDBAs;
GO
EXEC sp_help 'dbo.HelloWorld';
GO
SELECT * FROM dbo.HelloWorld;
GO
```
# Call Flyway clean to drop everything 🔥🔥🔥
```
docker run --rm `
-v C:\Git\GitForDBAs\flywayconf:/flyway/conf `
-v C:\Git\GitForDBAs\migrations:/flyway/sql `
flyway/flyway clean
```
## Constants, Sequences, Variables, Ops
```
import tensorflow as tf
```
## Constants
[https://www.tensorflow.org/api_docs/python/tf/constant](https://www.tensorflow.org/api_docs/python/tf/constant)
Constants are values that will never change throughout your calculations; they remain fixed.
```
# note that we are reshaping the matrix
a = tf.constant(value=[[1,2,3,4,5],[10,20,30,40,50]],
dtype=tf.float32,
shape=[5,2],
name="tf_const",
verify_shape=False
)
a_reshape = tf.reshape(a, shape=[2,5])
with tf.Session() as sess:
result = sess.run([a, a_reshape])
print(result[0])
print(result[1])
```
#### Making Empty Tensors
```
b = tf.zeros_like(a)
with tf.Session() as sess:
print(sess.run(b))
```
#### Making Tensors filled with ones
```
c = tf.ones_like(a)
with tf.Session() as sess:
print(sess.run(c))
```
#### Making Tensors filled with arbitrary value
```
d = tf.fill(dims=[3,3,3], value=0.5)
with tf.Session() as sess:
print(sess.run(d))
```
#### How to make a range of numbers
```
e = tf.lin_space(start=0., stop=25., num=3, name='by5')
f = tf.range(start=0., limit=25., delta=5., dtype=tf.float32, name='range')
with tf.Session() as sess:
print('linspace', sess.run(e))
print('range', sess.run(f))
```
### Random Generators
```
tf.set_random_seed(1)
g = tf.random_normal(shape=(2,2))
with tf.Session() as sess:
print('random normal', sess.run(g))
```
## Key Operations
| Category | Examples|
|---------- | --------|
|Element-wise mathematical Operations | Add, Sub, Mul, Div, Exp, Log, Greater, Less, Equal|
|Array operations | Concat, Slice, Split, Constant, Rank, Shape, Shuffle|
|Matrix Operations | MatMul, MatrixInverse, MatrixDeterminant,...|
|Stateful Operations | Variable, Assign, AssignAdd|
|NN Building Blocks | SoftMax, Sigmoid, ReLu, Convolution2D, MaxPool..|
|Checkpointing Operations | Save, Restore|
|Queue and synchronization operations | Enqueue, Dequeue, MutexAcquire, MutexRelease, ...|
|Control flow operations | Merge, Switch, Enter, Leave, NextIteration|
# Variables
> A TensorFlow variable is the best way to represent shared, persistent state manipulated by your program.
> Variables must be initialized to serve as a best "guess"; once initialized, these variables can be frozen or changed throughout the graph calculation
> `tf.constant` is an op
> `tf.Variable` is a class with many ops
Why use them? Constants are great, except that they are actually stored WITHIN the graph. The larger the graph, the more constants, and the larger the size of the graph. Use constants for primitive (and simple) types. Use variables and readers for data that will require more memory
### How to make variables Method 1: `tf.Variable`
```
scalar = tf.Variable(2, name='scalar')
matrix = tf.Variable([[0,1],[2,3]], name='mtx')
empty = tf.Variable(tf.zeros([7, 3]), name='empty_mtx')
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(scalar))
print(sess.run(matrix))
print(sess.run(empty))
```
### How to make variables Method 2: `tf.get_variable`
```
scalar = tf.get_variable("scalar1", initializer=tf.constant(2))
matrix = tf.get_variable("mtx1", initializer=tf.constant([[0,1],[2,3]]))
empty = tf.get_variable('empty_mtx1', shape=(7,3), initializer=tf.zeros_initializer())
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print(sess.run(scalar))
print(sess.run(matrix))
print(sess.run(empty))
```
## Variables: Initialization
Note that in the last two code blocks, the variables were **initialized**
The `global_variables_initializer()` initializes all variables in the graph
```
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
...
```
For only a subset of variables:
```
with tf.Session() as sess:
sess.run(tf.variables_initializer([var1, var2, var3]))
...
```
Or only 1 variable
```
with tf.Session() as sess:
sess.run(W.initializer)
...
```
## Variables: `eval()`
```
weights = tf.Variable(tf.truncated_normal([5, 3]))
with tf.Session() as sess:
sess.run(weights.initializer)
print(weights.eval())
```
## Variables: `assign()`
```
ct1 = tf.Variable(10)
updated_ct1 = ct1.assign(1000)
with tf.Session() as sess:
sess.run(ct1.initializer)
sess.run(updated_ct1)
print(ct1.eval())
```
### Trick Question: What's `my_var`?
```
my_var = tf.Variable(5)
double_my_var = my_var.assign(2*my_var)
with tf.Session() as sess:
sess.run(my_var.initializer)
sess.run(double_my_var)
print(my_var.eval())
# run again
sess.run(double_my_var)
print(my_var.eval())
# run again
sess.run(double_my_var)
print(my_var.eval())
```
### Sessions & Variables
```
Z = tf.Variable(20)
sess1 = tf.Session()
sess2 = tf.Session()
sess1.run(Z.initializer)
sess2.run(Z.initializer)
print(sess1.run(Z.assign_add(5)))
print(sess2.run(Z.assign_sub(3)))
print(sess1.run(Z.assign_add(5)))
print(sess2.run(Z.assign_sub(3)))
sess1.close()
sess2.close()
```
### Control Evaluation / Dependencies
```
a = tf.Variable(2)
d = tf.Variable(20)
c = tf.Variable(200)
add_c = c.assign_add(a)

# Ops created inside this context will only run after add_c has run.
with tf.get_default_graph().control_dependencies([add_c]):
    add_d = a.assign_add(d)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(add_d))  # add_c is guaranteed to execute before add_d
```
```
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.preprocessing.image import ImageDataGenerator
from keras.applications.vgg16 import preprocess_input
import keras as k
from keras.models import Sequential, Model
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.callbacks import EarlyStopping, ModelCheckpoint
from keras.optimizers import SGD, RMSprop, Adam
from keras.layers.normalization import BatchNormalization
import numpy as np
import pandas as pd
import cv2
import shutil
from matplotlib import pyplot as plt
from tqdm import tqdm
DATA_DIR = '/home/chicm/ml/kgdata/species'
RESULT_DIR = DATA_DIR + '/results'
TRAIN_FEAT = RESULT_DIR + '/train_feats.dat'
VAL_FEAT = RESULT_DIR + '/val_feats.dat'
TRAIN_DIR = DATA_DIR + '/train-224'
VAL_DIR = DATA_DIR + '/val-224'
batch_size = 64
df_train = pd.read_csv(DATA_DIR+'/train_labels.csv')
```
## create validation data
```
f_dict = {row[0]: row[1] for i, row in enumerate(df_train.values)}
fnames = [row[0] for i, row in enumerate(df_train.values)]
print(len(f_dict))
print(fnames[:10])
print(len([row[1] for i, row in enumerate(df_train.values) if row[1] == 0]))
print(len([row[1] for i, row in enumerate(df_train.values) if row[1] == 1]))
for f in fnames:
cls = f_dict[f]
src = TRAIN_DIR + '/' + str(f) + '.jpg'
dst = TRAIN_DIR + '/' + str(cls) + '/' + str(f) + '.jpg'
shutil.move(src, dst)
fnames = np.random.permutation(fnames)
for i in range(350):
cls = f_dict[fnames[i]]
fn = TRAIN_DIR +'/' + str(cls) + '/' + str(fnames[i])+'.jpg'
tgt_fn = VAL_DIR +'/' + str(cls) + '/' + str(fnames[i])+'.jpg'
shutil.move(fn, tgt_fn)
```
## build pretrained model
```
vgg_model = VGG16(weights='imagenet', include_top=False, input_shape=(224,224,3))
# build a classifier model to put on top of the convolutional model
top_model = Sequential()
top_model.add(Flatten(input_shape=(7,7,512)))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
#model.add(top_model)
model = Model(inputs=vgg_model.input, outputs=top_model(vgg_model.output))
for layer in model.layers[:25]:
layer.trainable = False
model.compile(Adam(), loss='binary_crossentropy', metrics=['accuracy'])
train_datagen = ImageDataGenerator(
rescale=1. / 255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1. / 255)
train_generator = train_datagen.flow_from_directory(
TRAIN_DIR,
target_size=(224, 224),
batch_size=batch_size,
class_mode='binary')
print(train_generator.n)
print(train_generator.samples)
validation_generator = test_datagen.flow_from_directory(
VAL_DIR,
target_size=(224, 224),
batch_size=batch_size,
class_mode='binary')
epochs = 50
model.fit_generator(
train_generator,
steps_per_epoch=train_generator.n//batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=validation_generator.n//batch_size,
verbose=2)
def create_model():
conv_layers = [
Conv2D(24,(3,3), activation='relu',input_shape=(224,224,3)),
BatchNormalization(axis=-1),
MaxPooling2D((2, 2), strides=(2, 2)),
Conv2D(24,(3,3), activation='relu'),
BatchNormalization(axis=-1),
MaxPooling2D((2, 2), strides=(2, 2)),
Conv2D(48,(3,3), activation='relu'),
BatchNormalization(axis=-1),
MaxPooling2D((2, 2), strides=(2, 2)),
Conv2D(48,(3,3), activation='relu'),
BatchNormalization(axis=-1),
MaxPooling2D((2, 2), strides=(2, 2)),
Flatten(),
Dropout(0.25),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(128, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(1, activation='sigmoid')  # sigmoid (not softmax) for a single binary output
]
#print conv_layers
model = Sequential(conv_layers)
model.compile(Adam(), loss = 'binary_crossentropy', metrics=['accuracy'])
return model
epochs = 50
model2 = create_model()
model2.fit_generator(
train_generator,
steps_per_epoch=train_generator.n//batch_size,
epochs=epochs,
validation_data=validation_generator,
validation_steps=validation_generator.n//batch_size,
verbose=2)
x_train = []
y_train = []
for i, row in tqdm(enumerate(df_train.values)):
fn = DATA_DIR+'/train/' + str(row[0])+'.jpg'
x_train.append(cv2.resize(cv2.imread(fn), (224,224)))
y_train.append([row[1]])
x_train = np.array(x_train, np.float32)
y_train = np.array(y_train, np.uint8)
print(x_train.shape)
print(y_train.shape)
split = int(x_train.shape[0] * 0.85)
x_val = x_train[split:]
y_val = y_train[split:]
x_train = x_train[:split]
y_train = y_train[:split]
print(x_train.shape)
print(x_val.shape)
print(y_train.shape)
print(y_val.shape)
train_steps = x_train.shape[0] // batch_size
x_train = x_train[:train_steps*batch_size]
y_train = y_train[:train_steps*batch_size]
val_steps = x_val.shape[0] // batch_size
x_val = x_val[:val_steps*batch_size]
y_val = y_val[:val_steps*batch_size]
print(train_steps)
print(val_steps)
model = VGG16(weights='imagenet', include_top=False)
datagen = ImageDataGenerator(
rotation_range=45,
width_shift_range=0.1,
height_shift_range=0.1,
rescale=1./255,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
vertical_flip = True)
print(x_train.shape)
print(x_val.shape)
train_feat = model.predict_generator(datagen.flow(x_train, batch_size=batch_size, shuffle=False),
steps = train_steps*8)
#val_feat = model.predict_generator(datagen.flow(x_val, batch_size=batch_size, shuffle=False),
#steps = val_steps)
train_feat = model.predict(x_train)
val_feat = model.predict(x_val)
print(train_feat.shape)
print(val_feat.shape)
import bcolz
def save_array(fname, arr):
c=bcolz.carray(arr, rootdir=fname, mode='w')
c.flush()
def load_array(fname):
return bcolz.open(fname)[:]
save_array(TRAIN_FEAT, train_feat)
save_array(VAL_FEAT, val_feat)
print(train_feat.shape)
print(val_feat.shape)
def get_layers(input_shape):
return [
Flatten(input_shape=input_shape),
Dropout(0.4),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(0.5),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(0.6),
Dense(1, activation='sigmoid')
]
def get_model(input_shape):
model = Sequential(get_layers(input_shape))
model.compile(Adam(), loss = 'binary_crossentropy', metrics=['accuracy'])
return model
dense_model = get_model(train_feat.shape[1:])
dense_model.fit(train_feat, y_train, batch_size=batch_size,
validation_data=(val_feat, y_val),
epochs=50, verbose=2)
y_train_da = np.concatenate([y_train]*8)
print(y_train.shape)
print(y_train_da.shape)
model = get_model(train_feat.shape[1:])
model.fit(train_feat, y_train_da, batch_size=batch_size, validation_data=(val_feat, y_val), epochs = 20, verbose=2)
model.optimizer.lr = 0.00001
model.fit(train_feat, y_train_da, batch_size=batch_size, validation_data=(val_feat, y_val), epochs = 50, verbose=2)
```
```
import matplotlib.pyplot as plt # pip install matplotlib
import seaborn as sns # pip install seaborn
import plotly.graph_objects as go # pip install plotly
import imageio # pip install imageio
import grid2op
env = grid2op.make(test=True)
from grid2op.PlotGrid import PlotMatplot
plot_helper = PlotMatplot(env.observation_space)
line_ids = [int(i) for i in range(env.n_line)]
fig_layout = plot_helper.plot_layout()
obs = env.reset()
fig_obs = plot_helper.plot_obs(obs)
action = env.action_space({"set_bus": {"loads_id": [(0,2)], "lines_or_id": [(3,2)], "lines_ex_id": [(0,2)]}})
print(action)
new_obs, reward, done, info = env.step(action)
fig_obs3 = plot_helper.plot_obs(new_obs)
from grid2op.Agent import RandomAgent
class CustomRandom(RandomAgent):
def __init__(self, action_space):
RandomAgent.__init__(self, action_space)
self.i = 1
def my_act(self, transformed_observation, reward, done=False):
if (self.i % 10) != 0:
res = 0
else:
res = self.action_space.sample()
self.i += 1
return res
myagent = CustomRandom(env.action_space)
obs = env.reset()
reward = env.reward_range[0]
done = False
while not done:
env.render()
act = myagent.act(obs, reward, done)
obs, reward, done, info = env.step(act)
env.close()
from grid2op.Runner import Runner
env = grid2op.make(test=True)
my_awesome_agent = CustomRandom(env.action_space)
runner = Runner(**env.get_params_for_runner(), agentClass=None, agentInstance=my_awesome_agent)
import os
path_agents = "path_agents" # this is mandatory for grid2viz to have a directory with only agents
# that is why we have it here. It is absolutely not mandatory for this more simple class.
max_iter = 10 # to save time we only assess performance on 10 iterations
if not os.path.exists(path_agents):
os.mkdir(path_agents)
path_awesome_agent_log = os.path.join(path_agents, "awesome_agent_logs")
res = runner.run(nb_episode=2, path_save=path_awesome_agent_log, max_iter=max_iter)
from grid2op.Episode import EpisodeReplay
gif_name = "episode"
ep_replay = EpisodeReplay(agent_path=path_awesome_agent_log)
for _, chron_name, cum_reward, nb_time_step, max_ts in res:
ep_replay.replay_episode(chron_name, # which chronic was started
gif_name=gif_name, # Name of the gif file
display=False, # dont wait before rendering each frames
fps=3.0) # limit to 3 frames per second
# make a runner for this agent
from grid2op.Agent import DoNothingAgent, TopologyGreedy
import shutil
for agentClass, agentName in zip([DoNothingAgent], # , TopologyGreedy
["DoNothingAgent"]): # , "TopologyGreedy"
path_this_agent = os.path.join(path_agents, agentName)
shutil.rmtree(os.path.abspath(path_this_agent), ignore_errors=True)
runner = Runner(**env.get_params_for_runner(),
agentClass=agentClass
)
res = runner.run(path_save=path_this_agent, nb_episode=10,
max_iter=800)
print("The results for the {} agent are:".format(agentName))
for _, chron_id, cum_reward, nb_time_step, max_ts in res:
msg_tmp = "\tFor chronics with id {}\n".format(chron_id)
msg_tmp += "\t\t - cumulative reward: {:.6f}\n".format(cum_reward)
msg_tmp += "\t\t - number of time steps completed: {:.0f} / {:.0f}".format(nb_time_step, max_ts)
print(msg_tmp)
import sys
shutil.rmtree(os.path.join(os.path.abspath(path_agents), "_cache"), ignore_errors=True)
!$sys.executable -m grid2viz.main --path=$path_agents
```
<img width="10%" alt="Naas" src="https://landen.imgix.net/jtci2pxwjczr/assets/5ice39g4.png?w=160"/>
# Matplotlib - Create Waterfall chart
<a href="https://app.naas.ai/user-redirect/naas/downloader?url=https://raw.githubusercontent.com/jupyter-naas/awesome-notebooks/master/Matplotlib/Matplotlib_Create_Waterfall_chart.ipynb" target="_parent"><img src="https://naasai-public.s3.eu-west-3.amazonaws.com/open_in_naas.svg"/></a>
**Tags:** #matplotlib #chart #waterfall #dataviz #snippet #operations #image
**Author:** [Jeremy Ravenel](https://www.linkedin.com/in/ACoAAAJHE7sB5OxuKHuzguZ9L6lfDHqw--cdnJg/)
## Input
### Import library
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import FuncFormatter
```
## Model
### Create the waterfall chart
```
#Use python 2.7+ syntax to format currency
def money(x, pos):
'The two args are the value and tick position'
return "${:,.0f}".format(x)
formatter = FuncFormatter(money)
#Data to plot. Do not include a total, it will be calculated
index = ['sales','returns','credit fees','rebates','late charges','shipping']
data = {'amount': [350000,-30000,-7500,-25000,95000,-7000]}
#Store data and create a blank series to use for the waterfall
trans = pd.DataFrame(data=data,index=index)
blank = trans.amount.cumsum().shift(1).fillna(0)
#Get the net total number for the final element in the waterfall
total = trans.sum().amount
trans.loc["net"]= total
blank.loc["net"] = total
#The steps graphically show the levels as well as used for label placement
step = blank.reset_index(drop=True).repeat(3).shift(-1)
step[1::3] = np.nan
#When plotting the last element, we want to show the full bar,
#Set the blank to 0
blank.loc["net"] = 0
#Plot and label
my_plot = trans.plot(kind='bar', stacked=True, bottom=blank,legend=None, figsize=(10, 5), title="2014 Sales Waterfall")
my_plot.plot(step.index, step.values,'k')
my_plot.set_xlabel("Transaction Types")
#Format the axis for dollars
my_plot.yaxis.set_major_formatter(formatter)
#Get the y-axis position for the labels
y_height = trans.amount.cumsum().shift(1).fillna(0)
#Get an offset so labels don't sit right on top of the bar
max = trans.max()
neg_offset = max / 25
pos_offset = max / 50
plot_offset = int(max / 15)
#Start label loop
loop = 0
for index, row in trans.iterrows():
# For the last item in the list, we don't want to double count
if row['amount'] == total:
y = y_height[loop]
else:
y = y_height[loop] + row['amount']
# Determine if we want a neg or pos offset
if row['amount'] > 0:
y += pos_offset
else:
y -= neg_offset
my_plot.annotate("{:,.0f}".format(row['amount']),(loop,y),ha="center")
loop+=1
```
## Output
### Display result
```
#Scale up the y axis so there is room for the labels
my_plot.set_ylim(0,blank.max()+int(plot_offset))
#Rotate the labels
my_plot.set_xticklabels(trans.index,rotation=0)
my_plot.get_figure().savefig("waterfall.png",dpi=200,bbox_inches='tight')
```
<a href="https://colab.research.google.com/github/mbk-dev/okama/blob/master/examples/07%20forecasting.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
```
!pip install okama
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [12.0, 6.0]
import okama as ok
```
*okama* has several methods to forecast portfolio performance:
- according to historical data (without distribution models)
- according to normal distribution
- according to lognormal distribution
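All three approaches boil down to resampling monthly returns and compounding them; a minimal NumPy sketch of the normal-distribution case (illustrative only, not okama's implementation; the function and parameter names are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

def forecast_cagr_percentiles(mean_m, std_m, years, n=10_000, pcts=(10, 50, 90)):
    """Monte Carlo CAGR percentiles from i.i.d. normal monthly returns."""
    r = rng.normal(mean_m, std_m, size=(n, 12 * years))  # simulated monthly returns
    wealth = np.prod(1 + r, axis=1)                      # terminal wealth per path
    cagr = wealth ** (1 / years) - 1                     # annualized return per path
    return {p: float(np.percentile(cagr, p)) for p in pcts}

pct = forecast_cagr_percentiles(mean_m=0.005, std_m=0.02, years=5)
```

okama's `percentile_distribution_cagr` and `plot_forecast` methods, used below, expose the same idea through the `Portfolio` API.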
### Testing distribution
Before we use the normal or lognormal distribution models, we should test the historical distribution of portfolio returns and see whether it fits.
There is a notebook dedicated to backtesting distributions.
```
ls = ['GLD.US', 'SPY.US', 'VNQ.US', 'AGG.US']
al = ok.AssetList(ls, inflation=False)
al
al.names
al.kstest(distr='norm')
al.kstest(distr='lognorm')
```
We see that at least SPY fails the null hypothesis (it doesn't pass the 5% threshold) for both normal and lognormal distributions.
But AGG has a distribution close to normal. For GLD, lognormal fits slightly better.
Now we can construct the portfolio.
```
weights = [0.20, 0.10, 0.10, 0.60]
pf = ok.Portfolio(ls, ccy='USD', weights=weights, inflation=False)
pf
pf.table
pf.kstest(distr='norm')
pf.kstest(distr='lognorm')
```
As expected, the Kolmogorov-Smirnov test shows that the normal distribution fits much better: AGG has a 60% weight in the allocation.
### Forecasting
The most intuitive way to present forecasted portfolio performance is to use the **plot_forecast** method to draw the accumulated return chart (historical return plus forecasted data).
It is possible to use an arbitrary set of percentiles (10, 50, 90 is the default attribute value).
The maximum forecast period is limited to half of the historical data period. For example, if the historical data period is 10 years, it is possible to use forecast periods of up to 5 years.
```
pf.plot_forecast(distr='norm', years=5, figsize=(12,5));
```
Another way to visualize normally distributed random forecast data is with a Monte Carlo simulation:
```
pf.plot_forecast_monte_carlo(distr='norm', years=5, n=20) # Generates 20 forecasted wealth indexes (for random normally distributed returns time series)
```
We can get numeric CAGR percentiles for each period with the **percentile_distribution_cagr** method. To get credible forecast results, a high `n` value should be used.
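Conceptually, CAGR percentiles like these come from simulating many return paths and taking percentiles of the annualized growth rates. A rough sketch of the idea with i.i.d. normal monthly returns (made-up `mu` and `sigma`; this is not okama's internal code):

```python
import numpy as np

rng = np.random.default_rng(42)

def mc_cagr_percentiles(mu, sigma, years, percentiles, n=10_000):
    """Monte Carlo CAGR percentiles for i.i.d. normal monthly returns."""
    months = years * 12
    # n simulated return paths, one row per path
    r = rng.normal(mu, sigma, size=(n, months))
    wealth = np.prod(1.0 + r, axis=1)          # terminal wealth index per path
    cagr = wealth ** (1.0 / years) - 1.0       # annualized growth rate
    return {p: float(np.percentile(cagr, p)) for p in percentiles}

print(mc_cagr_percentiles(mu=0.005, sigma=0.03, years=5, percentiles=[10, 50, 90]))
```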
```
pf.percentile_distribution_cagr(distr='norm', years=5, percentiles=[1, 20, 50, 80, 99], n=10000)
```
The same method could be used to get VaR (Value at Risk):
```
pf.percentile_distribution_cagr(distr='norm', years=1, percentiles=[1], n=10000) # the 1% percentile corresponds to a 99% confidence level
```
The one-year VaR (99% confidence level) is equal to 8%. That is a fair value for a conservative portfolio.
The probability of getting a negative result in the forecasted period is the percentile rank of the zero CAGR value (score=0).
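The percentile rank of `score=0` is simply the share of simulated CAGR outcomes at or below zero. A toy illustration with hypothetical normal CAGR draws (not okama's code):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical one-year CAGR draws: mean 5%, standard deviation 8%.
cagr_samples = rng.normal(0.05, 0.08, size=10_000)

# Percentile rank of score=0: the estimated probability of a negative result.
prob_negative = np.mean(cagr_samples <= 0.0)
print(prob_negative)
```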
```
pf.percentile_inverse_cagr(distr='norm', years=1, score=0, n=10000) # one year period
```
### Lognormal distribution
Some financial assets have return distributions close to lognormal.
The same calculations can be repeated for the lognormal distribution by setting `distr='lognorm'`.
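The lognormal model treats the gross return `1 + r` as lognormal, i.e. `log(1 + r)` as normal. A small sketch of fitting it with `scipy` on simulated data (not okama's implementation):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulate monthly returns whose log-growth is normal(0.004, 0.05).
returns = np.exp(rng.normal(0.004, 0.05, size=240)) - 1.0

# Fit lognormal parameters to the gross returns; with floc=0 the fitted
# `shape` is the std of log(1 + r) and log(scale) is its mean.
shape, loc, scale = stats.lognorm.fit(1.0 + returns, floc=0)
print(shape, np.log(scale))
```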
```
ln = ok.Portfolio(['EDV.US'], inflation=False)
ln
ln.names
```
We can visualize the distribution and compare it with the lognormal PDF (probability density function).
```
ln.plot_hist_fit(distr='lognorm', bins=30)
ln.kstest(distr='norm') # Kolmogorov-Smirnov test for normal distribution
ln.kstest(distr='lognorm') # Kolmogorov-Smirnov test for lognormal distribution
```
More importantly, the Kolmogorov-Smirnov test shows that the historical distribution is slightly closer to lognormal.
Therefore, we can use the lognormal distribution to forecast.
```
ln.plot_forecast(distr='lognorm', percentiles=[30, 50, 70], years=2, n=10000);
pf.percentile_distribution_cagr(distr='lognorm', years=1, percentiles=[1, 20, 50, 80, 99], n=10000)
```
### Forecasting using historical data
If it's not possible to fit the data to the normal or lognormal distributions, percentiles from the historical data can be used.
```
ht = ok.Portfolio(['SPY.US'])
ht
ht.kstest('norm')
ht.kstest('lognorm')
```
The Kolmogorov-Smirnov test does not pass the 5% threshold...
A big deviation in the tails can be seen in the Quantile-Quantile plot.
```
ht.plot_percentiles_fit('norm')
```
Then we can use percentiles from the historical data to forecast.
```
ht.plot_forecast(years=5, percentiles=[20, 50, 80]);
ht.percentile_wealth(distr='hist', years=5)
```
Quantitative CAGR percentiles can be obtained with the **percentile_history_cagr** method:
```
ht.percentile_history_cagr(years=5)
```
We can visualize the same data to see how the CAGR ranges narrow with the investment horizon.
```
ht.percentile_history_cagr(years=5).plot();
```
# Models
```
import sys
import math
from functools import reduce

import numpy
import numpy as np  # some cells below use the unaliased `numpy` name as well
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

numpy.set_printoptions(threshold=sys.maxsize)

from sklearn import metrics
from sklearn.metrics import (accuracy_score, confusion_matrix, classification_report,
                             f1_score, auc, roc_curve, roc_auc_score, precision_score,
                             recall_score, balanced_accuracy_score)
from sklearn.model_selection import (GridSearchCV, train_test_split, cross_val_score,
                                     KFold, StratifiedKFold)
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif, SelectPercentile
from sklearn.preprocessing import OneHotEncoder, StandardScaler, MinMaxScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 -- enables IterativeImputer
from sklearn.impute import IterativeImputer
from xgboost import XGBClassifier

from numpy.random import seed
seed(42)
import tensorflow as tf
tf.random.set_seed(38)

from keras.callbacks import TensorBoard
from keras.models import Sequential
from keras.layers import Dense

# load pre-processed data
df = pd.read_csv('../prepross_data/data.csv')
```
#### Filter out the patient set described in the paper
```
# filter the dataset as described in the paper
def get_filter_by_age_diabDur(df, age, diabDur):
    filter_patients = df[(df["AgeAtConsent"] >= age) & (df["diagDuration"] > diabDur)]
    # filter_patients = filter_patients.drop_duplicates(subset="PtID", keep="first")
    print(f'Number of patients whose age is {age}+ and diabetes duration greater than {diabDur} is -> {filter_patients.PtID.size}')
    return filter_patients
df = get_filter_by_age_diabDur(df, 26, 2)
```
### Pre-processing for SH/DKA event prediction
```
y_label = 'DKADiag'
# possible labels Pt_SevHypoEver, SHSeizComaPast12mos, DKAPast12mos, Depression, DiabNeuro, DKADiag
# fill null values based on the other parameters
# fill with 0 - if the data is not available, the patient probably does not have that medical condition
def fill_y_label(row):
if(math.isnan(row['DKADiag'])):
if((row['DKAPast12mos'] == 1) or (row['NumDKAOccur'] >= 1) or (row['Pt_NumHospDKA']>=1) or (row['Pt_HospDKASinceDiag'] == 1)):
row['DKADiag'] = 0
else:
row['DKADiag'] = 1
return row
df = df.apply(fill_y_label, axis=1)
# get possible values in column including nan
def get_possible_vals_with_nan(df, colName):
list_val =df[colName].unique().tolist()
return list_val
# {'1.Yes': 0, '2.No': 1, "3.Don't know": 2}
print(df[y_label].unique())
get_possible_vals_with_nan(df, y_label)
if (y_label == 'DKADiag'):
# df.drop(['NumSHSeizComaPast12mos','Pt_v3NumSHSeizComa', 'SHSeizComaPast12mos'], inplace=True, axis=1) # add SHSeizComaPast12mos
df[y_label] = df[y_label].replace({2.0: 0.0, 1.0: 0.0, 3.0:1.0, 4.0:1.0})
pd.options.display.max_rows = 100
def get_missing_val_percentage(df):
return (df.isnull().sum()* 100 / len(df))
missing_per = get_missing_val_percentage(df)
# split feature names by whether their missing-value percentage is under the threshold
variables = df.columns
thresh = 40
variable = []
var = []
for i in range(df.columns.shape[0]):
    if missing_per.iloc[i] <= thresh:  # setting the threshold as 40%
        variable.append(variables[i])
    else:
        var.append(variables[i])
print("variables missing vals < threshold")
print(variable)
print("Length: ", len(variable))
print()
print("variables missing vals > threshold")
print(var)
print("Length: ", len(var))
# cols_to_del = ['Diab_dur_greater','HbA1C_SH', 'Pt_InsHumalog', 'Pt_InsNovolog', 'Pt_BolusDecCntCarb',
# 'Pt_BolusBedtimeSnackFreq', 'Pt_InsPumpStartAge', 'Pt_PumpManuf', 'Pt_PumpModel',
# 'Pt_DaysLeavePumpIns', 'Pt_BasInsRateChgDay', 'Pt_NumBolusDay', 'Pt_ReturnPump',
# 'Pt_InjMethod', 'Pt_InjLongActDay', 'Pt_InjShortActDay', 'Pt_LongActInsDay',
# 'Pt_ShortActInsDay', 'Pt_PumpStopUse', 'Pt_HealthProfDiabEdu', 'Pt_SmokeAmt',
# 'Pt_DaysWkEx', 'Pt_MenarcheAge', 'Pt_RegMenstCyc', 'Pt_IrregMenstCycReas',
# 'Pt_CurrPreg', 'Pt_MiscarriageNum', 'Pt_v3NumHospOthReas',
# 'HyperglyCritRandGluc', 'WeightDiag', 'NumDKAOccur', 'TannerNotDone', 'PumpTotBasIns',
# 'HGMNumDays', 'HGMTestCntAvg', 'HGMGlucMean', 'CGMGlucPctBelow70', 'CGMGlucPctBelow60',
# 'PulseRate', 'InsCarbRatBrkfst', 'InsCarbRatLunch', 'InsCarbRatDinn', 'InsCarbRatDinnNotUsed',
# 'CGMPctBelow55', 'CGMPctBelow80']
cols_to_del = ['Diab_dur_greater']
df.drop(cols_to_del, inplace=True, axis=1)
df.head(10)
```
# Divide Dataset
```
df=df.drop('PtID', axis = 1)
def divide_data(df,label):
Y = df[label]
X = df.drop(label, axis=1)
return X, Y
X, Y = divide_data(df, y_label)
Y.unique()
```
# Feature Selection
```
shape = np.shape(X)
feature = 20 #shape[1]
n_classes = 2
seed(42)
tf.random.set_seed(38)
# Save original data set
original_X = X
# Split into training and testing sets
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25, stratify=Y, random_state=123)
# stratify=Y keeps the class proportions in both splits: e.g. if y is binary with 25% zeros and 75% ones, the random split keeps 25% zeros and 75% ones in both train and test.
# X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.25)
print((Y_train == 0.0).sum())  # count of class-0 training samples; len(Y_train == 0.0) would just return the total length
unique, counts = numpy.unique(Y_train.to_numpy(), return_counts=True)
print("Train - ", unique, counts)
unique_test, counts_test = numpy.unique(Y_test.to_numpy(), return_counts=True)
print("Test - ", unique_test, counts_test)
```
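To illustrate the `stratify` comment above, here is a standalone toy example with synthetic labels showing that both splits preserve the class ratio:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy data: 25% class 0, 75% class 1.
y = np.array([0] * 25 + [1] * 75)
X = np.arange(100).reshape(-1, 1)

_, _, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=123)
# Stratification keeps roughly 25%/75% in both the train and test splits.
print(y_tr.mean(), y_te.mean())
```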
# Imputations
```
import missingno as msno
msno.bar(X_train)
```
### XGB with missing values
```
def plot_roc_curve(fpr, tpr):
plt.plot(fpr, tpr, color='orange', label='ROC')
plt.plot([0, 1], [0, 1], color='darkblue', linestyle='--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver Operating Characteristic (ROC) Curve')
plt.legend()
plt.show()
# use only for XGB classifier with missing values
X_train_copy = X_train.drop(['DKAPast12mos', 'NumDKAOccur', 'Pt_NumHospDKA', 'Pt_HospDKASinceDiag'], axis=1)
X_test_copy = X_test.drop(['DKAPast12mos', 'NumDKAOccur', 'Pt_NumHospDKA', 'Pt_HospDKASinceDiag'], axis=1)
# kf = KFold(n_splits= 3, shuffle=False)
train = X_train_copy.copy()
train[y_label] = Y_train.values
def cross_val_with_missing_val(model):
# i = 1
# for train_index, test_index in kf.split(train):
# X_train1 = train.iloc[train_index].loc[:, X_train.columns]
# X_test1 = train.iloc[test_index][X_train.columns]
# y_train1 = train.iloc[train_index].loc[:,y_label]
# y_test1 = train.iloc[test_index][y_label]
# #Train the model
# model.fit(X_train1, y_train1) #Training the model
# print(f"Accuracy for the fold no. {i} on the test set: {accuracy_score(y_test1, model.predict(X_test1))}")
# i += 1
# return model
dfs = []
kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=123)
i = 1
for train_index, test_index in kf.split(train, Y_train):
X_train1 = train.iloc[train_index].loc[:, X_train_copy.columns]
X_test1 = train.iloc[test_index].loc[:,X_train_copy.columns]
y_train1 = train.iloc[train_index].loc[:,y_label]
y_test1 = train.iloc[test_index].loc[:,y_label]
#Train the model
model.fit(X_train1, y_train1) #Training the model
print(f"Accuracy for the fold no. {i} on the test set: {accuracy_score(y_test1, model.predict(X_test1))}, doublecheck: {model.score(X_test1,y_test1)}")
# how many occurances appear in the train set
s_train = train.iloc[train_index].loc[:,y_label].value_counts()
s_train.name = f"train {i}"
s_test = train.iloc[test_index].loc[:,y_label].value_counts()
s_test.name = f"test {i}"
df = pd.concat([s_train, s_test], axis=1, sort=False)
df["|"] = "|"
dfs.append(df)
i += 1
return model
# xgboost - train with missing values
model=XGBClassifier(
use_label_encoder=False,
eta = 0.1,#eta between(0.01-0.2)
max_depth = 4, #values between(3-10)
max_delta_step = 10,
# # scale_pos_weight = 0.4,
# # n_jobs = 0,
subsample = 0.5,#values between(0.5-1)
colsample_bytree = 1,#values between(0.5-1)
tree_method = "auto",
process_type = "default",
num_parallel_tree=7,
objective='multi:softmax',
# # min_child_weight = 3,
booster='gbtree',
eval_metric = "mlogloss",
num_class = n_classes
)
# model.fit(X_train_copy,Y_train)
model = cross_val_with_missing_val(model)
# xgb_pred=model.predict(X_test_copy)
# xgb_pred_train=model.predict(X_train_copy)
print("\n \n =========== Train Dataset =============")
y_scores1 = model.predict_proba(X_train_copy)[:,1]
fpr, tpr, thresholds = roc_curve(Y_train, y_scores1)
print("train ROC score", roc_auc_score(Y_train, y_scores1))
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
print("Threshold value is:", optimal_threshold)
plot_roc_curve(fpr, tpr)
xgb_pred_train = (model.predict_proba(X_train_copy)[:,1] >= optimal_threshold).astype(int)
print("accuracy score: ", accuracy_score(Y_train, xgb_pred_train)*100)
confusion_matrix_xgb_train = pd.DataFrame(confusion_matrix(Y_train, xgb_pred_train))
sns.heatmap(confusion_matrix_xgb_train, annot=True,fmt='g')
print(classification_report(Y_train, xgb_pred_train))
plt.show()
train_acc = model.score(X_train_copy, Y_train)
print('Accuracy of XGB on training set: {:.2f}'.format(train_acc))
print("\n\n =========== Test Dataset =============")
# find optimal threshold
y_scores = model.predict_proba(X_test_copy)[:,1]
fpr, tpr, thresholds = roc_curve(Y_test, y_scores)
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
print("Threshold value is:", optimal_threshold)
plot_roc_curve(fpr, tpr)
xgb_pred = (model.predict_proba(X_test_copy)[:,1] >= optimal_threshold).astype(int)
print("accuracy score: ", accuracy_score(Y_test, xgb_pred)*100)
confusion_matrix_xgb = confusion_matrix(Y_test, xgb_pred)
sns.heatmap(confusion_matrix_xgb, annot=True, fmt='g')
print(classification_report(Y_test, xgb_pred))
plt.show()
test_acc = model.score(X_test_copy, Y_test)
print('Accuracy of XGB classifier on test set: {:.2f}'
.format(test_acc))
# ROC
print("\n\n =========== ROC =============")
y_scores = model.predict_proba(X_test_copy)
score = roc_auc_score(Y_test, y_scores[:, 1])
score = round(score,4)
print(f'roc_auc = {score}')
print("\n\n =========== Class-wise test accuracy =============")
acc = confusion_matrix_xgb.diagonal()/confusion_matrix_xgb.sum(axis=1)
print('classwise accuracy [class 0, class 1]: ', acc)
print('average accuracy: ', np.sum(acc)/2)
# feature importance graph of XGB
feat_importances = pd.Series(model.feature_importances_, index=X_train_copy.columns)
feat_importances.nlargest(20).plot(kind='barh')
X_train.update(X_train[[
'Pt_InsPriv','Pt_BolusDecCntCarb', 'Pt_HealthProfDiabEdu',
'Pt_MiscarriageNum','HyperglyCritRandGluc','NumDKAOccur','TannerNotDone',
'Pt_InsLev1PerDay','Pt_InsLev2PerDay','Pt_InsLant1PerDay','Pt_InsLant2PerDay']].fillna(0))
X_test.update(X_test[[
'Pt_InsPriv','Pt_BolusDecCntCarb', 'Pt_HealthProfDiabEdu',
'Pt_MiscarriageNum','HyperglyCritRandGluc','NumDKAOccur','TannerNotDone',
'Pt_InsLev1PerDay','Pt_InsLev2PerDay','Pt_InsLant1PerDay','Pt_InsLant2PerDay']].fillna(0))
# fill nan values in the categorical dataset with the most frequent value
# tested with mean and median - results are lower than most_frequent
imputeX = SimpleImputer(missing_values=np.nan, strategy = "most_frequent")
# imputeX = KNNImputer(missing_values=np.nan, n_neighbors = 3, weights='distance')
# imputeX = IterativeImputer(max_iter=5, random_state=0)
X_train = imputeX.fit_transform(X_train)
# test data imputation
Test = X_test.copy()
Test.loc[:,y_label] = Y_test
X_test = imputeX.transform(X_test)
```
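As a self-contained illustration of the `most_frequent` strategy used above (toy array with hypothetical values):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, np.nan],
              [2.0, 0.0],
              [np.nan, 0.0],
              [2.0, 1.0]])

# Each NaN is replaced by its column's most frequent value (2.0 and 0.0 here).
imp = SimpleImputer(missing_values=np.nan, strategy='most_frequent')
print(imp.fit_transform(X))
```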
# Scale data
```
# Normalize numeric features
scaler = StandardScaler()
# scaler = MinMaxScaler()
select = {}
select[0] = pd.DataFrame(scaler.fit_transform(X_train))
select[1] = Y_train
select[2] = pd.DataFrame(scaler.transform(X_test))
```
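A quick reminder of why `fit_transform` is used on the training split and plain `transform` on the test split (toy numbers): the test data must be scaled with the training statistics, not its own.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_tr = np.array([[1.0], [2.0], [3.0], [4.0]])
X_te = np.array([[2.0], [10.0]])

scaler = StandardScaler()
Z_tr = scaler.fit_transform(X_tr)  # learn mean/std from the training data only
Z_te = scaler.transform(X_te)      # reuse the training mean/std on the test data

print(scaler.mean_)  # [2.5]
print(Z_te[1, 0])
```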
## Feature Selection
```
# TODO
# def select_features(select, feature):
# selected = {}
# fs = SelectKBest(score_func=mutual_info_classif, k=feature) # k=feature score_func SelectPercentile
# selected[0] = fs.fit_transform(select[0], select[1])
# selected[1] = fs.transform(select[2])
# idx = fs.get_support(indices=True)
# return selected, fs, idx
# Select the most important features with recursive feature elimination (RFE)
# Gives better performance than SelectKBest
def select_features(select, feature):
selected = {}
# fs = RFE(estimator=LogisticRegression(), n_features_to_select=feature, step = 1) # step (the number of features eliminated each iteration)
fs = RFE(estimator=XGBClassifier(), n_features_to_select=feature, step = 5) # step (the number of features eliminated each iteration)
# fs = RFE(estimator=RandomForestClassifier(), n_features_to_select=feature, step = 1) # step (the number of features eliminated each iteration)
selected[0] = fs.fit_transform(select[0], select[1])
selected[1] = fs.transform(select[2])
idx = fs.get_support(indices=True)
return selected, fs, idx
# Feature selection
selected, fs, idx = select_features(select, feature)
# Get columns to keep and create new dataframe with those only
from pprint import pprint
cols = fs.get_support(indices=True)
features_df_new = original_X.iloc[:,cols]
pprint(features_df_new.columns)
print(features_df_new.shape)
X_train = pd.DataFrame(selected[0], columns = features_df_new.columns)
X_test = pd.DataFrame(selected[1], columns = features_df_new.columns)
# drop label-related DKA columns if feature selection kept them
for col in ['DKAPast12mos', 'NumDKAOccur', 'Pt_NumHospDKA', 'Pt_HospDKASinceDiag']:
    if col in X_train.columns:
        X_train = X_train.drop(col, axis=1)
        X_test = X_test.drop(col, axis=1)
```
### Common functions
```
# kf = KFold(n_splits= 3, shuffle=False)
train = X_train.copy()
train[y_label] = Y_train.values
def cross_val(model):
# i = 1
# for train_index, test_index in kf.split(train):
# X_train1 = train.iloc[train_index].loc[:, X_train.columns]
# X_test1 = train.iloc[test_index][X_train.columns]
# y_train1 = train.iloc[train_index].loc[:,y_label]
# y_test1 = train.iloc[test_index][y_label]
# #Train the model
# model.fit(X_train1, y_train1) #Training the model
# print(f"Accuracy for the fold no. {i} on the test set: {accuracy_score(y_test1, model.predict(X_test1))}")
# i += 1
# return model
dfs = []
kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=123)
i = 1
for train_index, test_index in kf.split(train, Y_train):
X_train1 = train.iloc[train_index].loc[:, X_train.columns]
X_test1 = train.iloc[test_index].loc[:,X_train.columns]
y_train1 = train.iloc[train_index].loc[:,y_label]
y_test1 = train.iloc[test_index].loc[:,y_label]
#Train the model
model.fit(X_train1, y_train1) #Training the model
print(f"Accuracy for the fold no. {i} on the test set: {accuracy_score(y_test1, model.predict(X_test1))}, doublecheck: {model.score(X_test1,y_test1)}")
# how many occurances appear in the train set
s_train = train.iloc[train_index].loc[:,y_label].value_counts()
s_train.name = f"train {i}"
s_test = train.iloc[test_index].loc[:,y_label].value_counts()
s_test.name = f"test {i}"
df = pd.concat([s_train, s_test], axis=1, sort=False)
df["|"] = "|"
dfs.append(df)
i += 1
return model
def optimal_thresh(model, X, Y):
y_scores = model.predict_proba(X)[:,1]
fpr, tpr, thresholds = roc_curve(Y, y_scores)
print(roc_auc_score(Y, y_scores))
# optimal_idx = np.argmax(sqrt(tpr * (1-fpr)))
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
print("Threshold value is:", optimal_threshold)
plot_roc_curve(fpr, tpr)
return optimal_threshold
def train_results(model, X_train, Y_train, pred_train):
print("\n \n ===================== Train Dataset ======================")
print(accuracy_score(Y_train, pred_train)*100)
confusion_matrix_train = pd.DataFrame(confusion_matrix(Y_train, pred_train))
sns.heatmap(confusion_matrix_train, annot=True,fmt='g')
print(classification_report(Y_train, pred_train))
plt.show()
train_acc = model.score(X_train, Y_train)
print('Accuracy of on training set: {:.2f}'.format(train_acc))
def test_results(model, X_test, Y_test, pred):
print("\n\n ===================== Test Dataset =======================")
print(accuracy_score(Y_test, pred)*100)
confusion_matrix_model = confusion_matrix(Y_test, pred)
sns.heatmap(confusion_matrix_model, annot=True,fmt='g')
print(classification_report(Y_test, pred))
plt.show()
test_acc = model.score(X_test, Y_test)
print('Accuracy of classifier on test set: {:.2f}'
.format(test_acc))
def ROC_results(model, X_test, Y_test):
print("\n\n ======================= Test-ROC =========================")
y_scores = model.predict_proba(X_test)
score = roc_auc_score(Y_test, y_scores[:, 1])
score = round(score,4)
print(f'roc_auc = {score}')
def class_wise_test_accuracy(model, Y_test, pred):
print("\n\n ======================= Class-wise test accuracy =====================")
confusion_matrix_model = confusion_matrix(Y_test, pred)
acc = confusion_matrix_model.diagonal()/confusion_matrix_model.sum(axis=1)
print('classwise accuracy [class 0, class 1]: ',(acc))
print('average accuracy: ',( np.sum(acc)/2))
```
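`optimal_thresh` above picks the classification threshold that maximizes Youden's J statistic (TPR - FPR). A standalone toy example of that selection:

```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.2, 0.35, 0.6, 0.4, 0.55, 0.8, 0.9])

fpr, tpr, thresholds = roc_curve(y_true, scores)
j = tpr - fpr                      # Youden's J at each candidate threshold
best = thresholds[np.argmax(j)]
print(best)  # 0.4: catches all positives at the cost of one false positive
```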
### Adaboost model
```
from sklearn.model_selection import KFold, StratifiedKFold, train_test_split, cross_validate, cross_val_score
adaboost = AdaBoostClassifier(random_state=0, learning_rate=0.05, n_estimators=1000, algorithm = "SAMME.R") #algorithm{‘SAMME’, ‘SAMME.R’}, default=’SAMME.R’
# adaboost.fit(X_train, Y_train)
adaboost = cross_val(adaboost)
# pred=adaboost.predict(X_test)
# pred_train=adaboost.predict(X_train)
# find optimal threshold
optimal_threshold = optimal_thresh(adaboost, X_test, Y_test)
pred = (adaboost.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)
optimal_threshold_train= optimal_thresh(adaboost, X_train, Y_train)
pred_train = (adaboost.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)
# test and train results
train_results(adaboost, X_train, Y_train, pred_train)
test_results(adaboost, X_test, Y_test, pred)
# ROC
ROC_results(adaboost, X_test, Y_test)
# class wise accuracy
class_wise_test_accuracy(adaboost, Y_test, pred)
feat_importances = pd.Series(adaboost.feature_importances_, index=X_train.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
from imodels import BoostedRulesClassifier, FIGSClassifier, SkopeRulesClassifier
from imodels import RuleFitRegressor, HSTreeRegressorCV, SLIMRegressor
def viz_classification_preds(probs, y_test):
'''look at prediction breakdown
'''
plt.subplot(121)
plt.hist(probs[:, 1][y_test == 0], label='Class 0')
plt.hist(probs[:, 1][y_test == 1], label='Class 1', alpha=0.8)
plt.ylabel('Count')
plt.xlabel('Predicted probability of class 1')
plt.legend()
plt.subplot(122)
preds = np.argmax(probs, axis=1)
plt.title('ROC curve')
fpr, tpr, thresholds = metrics.roc_curve(y_test, preds)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.plot(fpr, tpr)
plt.tight_layout()
plt.show()
# fit boosted stumps
brc = BoostedRulesClassifier(n_estimators=10)
brc.fit(X_train, Y_train, feature_names=X_test.columns)
print(brc)
# look at performance
probs = brc.predict_proba(X_test)
viz_classification_preds(probs, Y_test)
```
# Model - XGB
```
# xgboost - train with missing values
xgb_impute=XGBClassifier(
use_label_encoder=False,
eta = 0.1,#eta between(0.01-0.2)
max_depth = 4, #values between(3-10)
max_delta_step = 10,
subsample = 0.5,#values between(0.5-1)
colsample_bytree = 1,#values between(0.5-1)
tree_method = "auto",
process_type = "default",
num_parallel_tree=7,
objective='multi:softmax',
# min_child_weight = 3,
booster='gbtree',
eval_metric = "mlogloss",
num_class = n_classes
)
# xgb_impute.fit(X_train,Y_train)
xgb_impute = cross_val(xgb_impute)
# xgb_pred=xgb_impute.predict(X_test)
# xgb_pred_train=xgb_impute.predict(X_train)
# find optimal threshold
optimal_threshold = optimal_thresh(xgb_impute, X_test, Y_test)
optimal_threshold_train= optimal_thresh(xgb_impute, X_train, Y_train)
xgb_pred = (xgb_impute.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)
xgb_pred_train = (xgb_impute.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)
train_results(xgb_impute, X_train, Y_train, xgb_pred_train)
test_results(xgb_impute, X_test, Y_test, xgb_pred)
# ROC
ROC_results(xgb_impute, X_test, Y_test)
# class wise accuracy
class_wise_test_accuracy(xgb_impute, Y_test, xgb_pred)
# feature importance graph of XGB
feat_importances = pd.Series(xgb_impute.feature_importances_, index=X_train.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
```
## Model 2 - Random forest
```
# random forest classifier
rf=RandomForestClassifier(max_depth=10,
# n_estimators = feature,
criterion = 'entropy', # {“gini”, “entropy”}, default=”gini”
class_weight = 'balanced_subsample', # {“balanced”, “balanced_subsample”}, dict or list of dicts, default=None
ccp_alpha=0.001,
random_state=0)
# rf.fit(X_train,Y_train)
rf = cross_val(rf)
# find optimal threshold
optimal_threshold = optimal_thresh(rf, X_test, Y_test)
optimal_threshold_train= optimal_thresh(rf, X_train, Y_train)
# pred=rf.predict(X_test)
pred = (rf.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)
# pred_train=rf.predict(X_train)
pred_train = (rf.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)
train_results(rf, X_train, Y_train, pred_train)
test_results(rf, X_test, Y_test, pred)
# ROC
ROC_results(rf, X_test, Y_test)
# class wise accuracy
class_wise_test_accuracy(rf, Y_test, pred)
feat_importances = pd.Series(rf.feature_importances_, index=X_train.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
```
## Model 3 LogisticRegression
```
# penalty: {'l1', 'l2', 'elasticnet', 'none'}, default='l2'
logreg = LogisticRegression(
    penalty='l2',
    tol = 5e-4,
    C=1,
    # l1_ratio only applies with penalty='elasticnet' (and must be in [0, 1]), so it is omitted
    class_weight='balanced', # balanced
    random_state=0,
    solver = 'saga' # saga, sag
)
# logreg.fit(X_train, Y_train)
logreg = cross_val(logreg)
# pred=logreg.predict(X_test)
# pred_train=logreg.predict(X_train)
# find optimal threshold
optimal_threshold = optimal_thresh(logreg, X_test, Y_test)
optimal_threshold_train= optimal_thresh(logreg, X_train, Y_train)
pred = (logreg.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)
pred_train = (logreg.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)
train_results(logreg, X_train, Y_train, pred_train)
test_results(logreg, X_test, Y_test, pred)
# ROC
ROC_results(logreg, X_test, Y_test)
# class wise accuracy
class_wise_test_accuracy(logreg, Y_test, pred)
feat_importances = pd.Series(logreg.coef_[0], index=X_train.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
```
## Model 4 - Decision tree
```
clf = DecisionTreeClassifier(
random_state=0,
criterion='gini',
splitter = 'best',
max_depth = 100,
max_features = 20)
# clf.fit(X_train, Y_train)
clf = cross_val(clf)
# pred=clf.predict(X_test)
# pred_train=clf.predict(X_train)
# find optimal threshold
optimal_threshold = optimal_thresh(clf, X_test, Y_test)
optimal_threshold_train= optimal_thresh(clf, X_train, Y_train)
pred = (clf.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)
pred_train = (clf.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)
train_results(clf, X_train, Y_train, pred_train)
test_results(clf, X_test, Y_test, pred)
# ROC
ROC_results(clf, X_test, Y_test)
# class wise accuracy
class_wise_test_accuracy(clf, Y_test, pred)
feat_importances = pd.Series(clf.feature_importances_, index=X_train.columns[0:feature])
feat_importances.nlargest(20).plot(kind='barh')
```
## Model 5 - K-Nearest Neighbors
```
knn = KNeighborsClassifier(
n_neighbors =1,
weights = "uniform", # uniform, distance
algorithm = 'brute', # {‘auto’, ‘ball_tree’, ‘kd_tree’, ‘brute’}, default=’auto’
)
# knn.fit(X_train, Y_train)
knn = cross_val(knn)
# pred=knn.predict(X_test)
# pred_train=knn.predict(X_train)
# find optimal threshold
optimal_threshold = optimal_thresh(knn, X_test, Y_test)
optimal_threshold_train= optimal_thresh(knn, X_train, Y_train)
pred = (knn.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)
pred_train = (knn.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)
train_results(knn, X_train, Y_train, pred_train)
test_results(knn, X_test, Y_test, pred)
# ROC
ROC_results(knn, X_test, Y_test)
# class wise accuracy
class_wise_test_accuracy(knn, Y_test, pred)
```
## Model 6 - Linear Discriminant Analysis
```
lda = LinearDiscriminantAnalysis(
solver = 'eigen', # solver{‘svd’, ‘lsqr’, ‘eigen’}, default=’svd’
shrinkage= 'auto', #shrinkage‘auto’ or float, default=None
n_components = 1,
tol = 1e-3
)
# lda.fit(X_train, Y_train)
lda = cross_val(lda)
# pred=lda.predict(X_test)
# pred_train=lda.predict(X_train)
# find optimal threshold
optimal_threshold = optimal_thresh(lda, X_test, Y_test)
optimal_threshold_train= optimal_thresh(lda, X_train, Y_train)
pred = (lda.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)
pred_train = (lda.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)
train_results(lda, X_train, Y_train, pred_train)
test_results(lda, X_test, Y_test, pred)
# ROC
ROC_results(lda, X_test, Y_test)
# class wise accuracy
class_wise_test_accuracy(lda, Y_test, pred)
```
## Model 7- Gaussian Naive Bayes
```
gnb = GaussianNB()
param_grid_nb = {
'var_smoothing': np.logspace(0,-9, num=100)
}
nbModel_grid = GridSearchCV(estimator=gnb, param_grid=param_grid_nb, verbose=1, cv=10, n_jobs=-1)
# nbModel_grid.fit(X_train, Y_train)
nbModel_grid = cross_val(nbModel_grid)
# best parameters
print(nbModel_grid.best_estimator_)
gnb = GaussianNB(priors=None, var_smoothing=1.0)  # hard-coded from the grid-search result above
gnb.fit(X_train, Y_train)
# pred=gnb.predict(X_test)
# pred_train=gnb.predict(X_train)
# find optimal threshold
optimal_threshold = optimal_thresh(gnb, X_test, Y_test)
optimal_threshold_train= optimal_thresh(gnb, X_train, Y_train)
pred = (gnb.predict_proba(X_test)[:,1] >= optimal_threshold).astype(int)
pred_train = (gnb.predict_proba(X_train)[:,1] >= optimal_threshold_train).astype(int)
train_results(gnb, X_train, Y_train, pred_train)
test_results(gnb, X_test, Y_test, pred)
# ROC
ROC_results(gnb, X_test, Y_test)
# class wise accuracy
class_wise_test_accuracy(gnb, Y_test, pred)
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Neural style transfer
<table class="tfo-notebook-buttons" align="left">
  <td>
    <a target="_blank" href="https://tensorflow.google.cn/tutorials/generative/style_transfer"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on tensorflow.google.cn</a>
  </td>
  <td>
    <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/style_transfer.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a>
  </td>
  <td>
    <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/generative/style_transfer.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a>
  </td>
  <td>
    <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/generative/style_transfer.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a>
  </td>
</table>
Note: 我们的 TensorFlow 社区翻译了这些文档。因为社区翻译是尽力而为, 所以无法保证它们是最准确的,并且反映了最新的
[官方英文文档](https://tensorflow.google.cn/?hl=en)。如果您有改进此翻译的建议, 请提交 pull request 到
[tensorflow/docs](https://github.com/tensorflow/docs) GitHub 仓库。要志愿地撰写或者审核译文,请加入
[docs-zh-cn@tensorflow.org Google Group](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs-zh-cn)。
This tutorial uses deep learning to compose one image in the style of another image (ever wish you could paint like Picasso or Van Gogh?). This is known as *neural style transfer*, a technique outlined in <a href="https://arxiv.org/abs/1508.06576" class="external">A Neural Algorithm of Artistic Style</a> (Gatys et al.).
Note: This tutorial demonstrates the original style-transfer algorithm, which optimizes the image content to a particular style. More recent approaches train a model to generate the stylized image directly (similar to [cyclegan](cyclegan.ipynb)) and are much faster (up to 1000x). A pretrained [Arbitrary Image Stylization module](https://colab.sandbox.google.com/github/tensorflow/hub/blob/master/examples/colab/tf2_arbitrary_image_stylization.ipynb) is available on [TensorFlow Hub](https://tensorflow.google.cn/hub) and in [TensorFlow Lite](https://tensorflow.google.cn/lite/models/style_transfer/overview).
Neural style transfer is an optimization technique used to take two images, a *content* image and a *style reference* image (such as an artwork by a famous painter), and blend them together so the output image looks like the content image, but "painted" in the style of the style reference image.
This is implemented by optimizing the output image to match the content statistics of the content image and the style statistics of the style reference image. These statistics are extracted from the images using a convolutional network.
For example, let's take this photo of a dog and Wassily Kandinsky's Composition 7:
<img src="https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg" width="500px"/>
[Yellow Labrador Looking](https://commons.wikimedia.org/wiki/File:YellowLabradorLooking_new.jpg), from Wikimedia Commons
<img src="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/generative/images/kadinsky.jpg?raw=1" style="width: 500px;"/>
Now, what would it look like if Kandinsky decided to paint this dog exclusively in this style? Something like this?
<img src="https://tensorflow.google.cn/tutorials/generative/images/stylized-image.png" style="width: 500px;"/>
## Setup
### Import and configure modules
```
import tensorflow as tf
import IPython.display as display
import matplotlib.pyplot as plt
import matplotlib as mpl
mpl.rcParams['figure.figsize'] = (12,12)
mpl.rcParams['axes.grid'] = False
import numpy as np
import PIL.Image
import time
import functools
def tensor_to_image(tensor):
tensor = tensor*255
tensor = np.array(tensor, dtype=np.uint8)
if np.ndim(tensor)>3:
assert tensor.shape[0] == 1
tensor = tensor[0]
return PIL.Image.fromarray(tensor)
```
Download the images and choose a style image and a content image:
```
content_path = tf.keras.utils.get_file('YellowLabradorLooking_new.jpg', 'https://storage.googleapis.com/download.tensorflow.org/example_images/YellowLabradorLooking_new.jpg')
# https://commons.wikimedia.org/wiki/File:Vassily_Kandinsky,_1913_-_Composition_7.jpg
style_path = tf.keras.utils.get_file('kandinsky5.jpg','https://storage.googleapis.com/download.tensorflow.org/example_images/Vassily_Kandinsky%2C_1913_-_Composition_7.jpg')
```
## Visualize the input
Define a function to load an image and limit its maximum dimension to 512 pixels.
```
def load_img(path_to_img):
max_dim = 512
img = tf.io.read_file(path_to_img)
img = tf.image.decode_image(img, channels=3)
img = tf.image.convert_image_dtype(img, tf.float32)
shape = tf.cast(tf.shape(img)[:-1], tf.float32)
long_dim = max(shape)
scale = max_dim / long_dim
new_shape = tf.cast(shape * scale, tf.int32)
img = tf.image.resize(img, new_shape)
img = img[tf.newaxis, :]
return img
```
Create a simple function to display an image:
```
def imshow(image, title=None):
if len(image.shape) > 3:
image = tf.squeeze(image, axis=0)
plt.imshow(image)
if title:
plt.title(title)
content_image = load_img(content_path)
style_image = load_img(style_path)
plt.subplot(1, 2, 1)
imshow(content_image, 'Content Image')
plt.subplot(1, 2, 2)
imshow(style_image, 'Style Image')
```
## Fast Style Transfer using TF-Hub
This tutorial demonstrates the original style-transfer algorithm, which optimizes the image content to a particular style. Before getting into the details, let's see how the [TensorFlow Hub](https://tensorflow.google.cn/hub) module does fast style transfer:
```
import tensorflow_hub as hub
hub_module = hub.load('https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/1')
stylized_image = hub_module(tf.constant(content_image), tf.constant(style_image))[0]
tensor_to_image(stylized_image)
```
## Define content and style representations
Use the intermediate layers of the model to get the *content* and *style* representations of the image. Starting from the network's input layer, the first few layer activations represent low-level features like edges and textures. As you step through the network, the final few layers represent higher-level features, such as object parts like *wheels* or *eyes*. In this tutorial we use VGG19, a pretrained image-classification network. Its intermediate layers are necessary to define the representations of content and style from the images: for an input image, we try to match the corresponding style and content target representations at these intermediate layers.
Load a [VGG19](https://keras.io/applications/#vgg19) and test-run it on our image to make sure it works correctly:
```
x = tf.keras.applications.vgg19.preprocess_input(content_image*255)
x = tf.image.resize(x, (224, 224))
vgg = tf.keras.applications.VGG19(include_top=True, weights='imagenet')
prediction_probabilities = vgg(x)
prediction_probabilities.shape
predicted_top_5 = tf.keras.applications.vgg19.decode_predictions(prediction_probabilities.numpy())[0]
[(class_name, prob) for (number, class_name, prob) in predicted_top_5]
```
Now load a `VGG19` without the classification head, and list the layer names:
```
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
print()
for layer in vgg.layers:
print(layer.name)
```
Choose intermediate layers from the network to represent the style and content of the image:
```
# The content layer where we will pull our feature maps
content_layers = ['block5_conv2']
# The style layers we are interested in
style_layers = ['block1_conv1',
'block2_conv1',
'block3_conv1',
'block4_conv1',
'block5_conv1']
num_content_layers = len(content_layers)
num_style_layers = len(style_layers)
```
#### Intermediate layers for style and content
So why do these intermediate outputs within our pretrained image-classification network allow us to define representations of style and content?
At a high level, in order for a network to perform image classification (which this network has been trained to do), it must understand the image. This requires taking the raw image as input pixels and building an internal representation that converts the raw image pixels into a complex understanding of the features present within the image.
This is also a reason why convolutional neural networks are able to generalize well: they are able to capture the invariances and defining features within classes (e.g. cats vs. dogs) that are agnostic to background noise and other nuisances. Thus, somewhere between where the raw image is fed into the model and the classification label is output, the model serves as a complex feature extractor. By accessing intermediate layers of the model, we can describe the content and style of input images.
## Build the model
The networks in `tf.keras.applications` make it easy to extract intermediate layer values using the Keras functional API.
To define a model using the functional API, specify the inputs and outputs:
`model = Model(inputs, outputs)`
The following function builds a VGG19 model that returns a list of intermediate layer outputs:
```
def vgg_layers(layer_names):
""" Creates a vgg model that returns a list of intermediate output values."""
  # Load our model: a VGG pretrained on imagenet data
vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
vgg.trainable = False
outputs = [vgg.get_layer(name).output for name in layer_names]
model = tf.keras.Model([vgg.input], outputs)
return model
```
And to create the model:
```
style_extractor = vgg_layers(style_layers)
style_outputs = style_extractor(style_image*255)
# Look at the statistics of each layer's output
for name, output in zip(style_layers, style_outputs):
print(name)
print(" shape: ", output.numpy().shape)
print(" min: ", output.numpy().min())
print(" max: ", output.numpy().max())
print(" mean: ", output.numpy().mean())
print()
```
## Calculate style
The content of an image is represented by the values of its intermediate feature maps.
It turns out that the style of an image can be described by the means and correlations across the different feature maps. A Gram matrix that includes this information can be calculated by taking the outer product of the feature vector with itself at each location, and averaging that outer product over all locations. For a particular layer, the Gram matrix is calculated as:
$$G^l_{cd} = \frac{\sum_{ij} F^l_{ijc}(x)F^l_{ijd}(x)}{IJ}$$
This can be implemented concisely using the `tf.linalg.einsum` function:
```
def gram_matrix(input_tensor):
result = tf.linalg.einsum('bijc,bijd->bcd', input_tensor, input_tensor)
input_shape = tf.shape(input_tensor)
num_locations = tf.cast(input_shape[1]*input_shape[2], tf.float32)
return result/(num_locations)
```
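As a quick sanity check (a NumPy-only sketch with a made-up tensor, not part of the tutorial; NumPy's `einsum` mirrors `tf.linalg.einsum`), the einsum expression above is equivalent to averaging the outer product of the feature vector with itself over all locations:

```python
import numpy as np

# Toy feature tensor: batch=1, height=4, width=5, channels=3 (hypothetical sizes)
b, h, w, c = 1, 4, 5, 3
F = np.random.RandomState(0).rand(b, h, w, c).astype(np.float32)

# einsum form, as in gram_matrix above
gram_einsum = np.einsum('bijc,bijd->bcd', F, F) / (h * w)

# Explicit form: average the outer product of the feature vector at each location
gram_loop = np.zeros((b, c, c), dtype=np.float32)
for i in range(h):
    for j in range(w):
        v = F[0, i, j]                  # feature vector at location (i, j)
        gram_loop[0] += np.outer(v, v)  # F_ijc * F_ijd
gram_loop /= h * w

assert np.allclose(gram_einsum, gram_loop, atol=1e-5)
```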
## Extract style and content
Build a model that returns the style and content tensors.
```
class StyleContentModel(tf.keras.models.Model):
def __init__(self, style_layers, content_layers):
super(StyleContentModel, self).__init__()
self.vgg = vgg_layers(style_layers + content_layers)
self.style_layers = style_layers
self.content_layers = content_layers
self.num_style_layers = len(style_layers)
self.vgg.trainable = False
def call(self, inputs):
"Expects float input in [0,1]"
inputs = inputs*255.0
preprocessed_input = tf.keras.applications.vgg19.preprocess_input(inputs)
outputs = self.vgg(preprocessed_input)
style_outputs, content_outputs = (outputs[:self.num_style_layers],
outputs[self.num_style_layers:])
style_outputs = [gram_matrix(style_output)
for style_output in style_outputs]
content_dict = {content_name:value
for content_name, value
in zip(self.content_layers, content_outputs)}
style_dict = {style_name:value
for style_name, value
in zip(self.style_layers, style_outputs)}
return {'content':content_dict, 'style':style_dict}
```
When called on an image, this model returns the Gram matrices (style) of the `style_layers` and the content of the `content_layers`:
```
extractor = StyleContentModel(style_layers, content_layers)
results = extractor(tf.constant(content_image))
style_results = results['style']
print('Styles:')
for name, output in sorted(results['style'].items()):
print(" ", name)
print(" shape: ", output.numpy().shape)
print(" min: ", output.numpy().min())
print(" max: ", output.numpy().max())
print(" mean: ", output.numpy().mean())
print()
print("Contents:")
for name, output in sorted(results['content'].items()):
print(" ", name)
print(" shape: ", output.numpy().shape)
print(" min: ", output.numpy().min())
print(" max: ", output.numpy().max())
print(" mean: ", output.numpy().mean())
```
## Gradient descent
With this style and content extractor, we can now implement the style-transfer algorithm. We do this by calculating the mean squared error between the extractor's outputs and the corresponding targets, then taking the weighted sum of these losses.
Set the style and content target values:
```
style_targets = extractor(style_image)['style']
content_targets = extractor(content_image)['content']
```
Define a `tf.Variable` to contain the image to optimize. To make this quick, initialize it with the content image (the `tf.Variable` must be the same shape as the content image):
```
image = tf.Variable(content_image)
```
Since this is a float image, define a function to keep the pixel values between 0 and 1:
```
def clip_0_1(image):
return tf.clip_by_value(image, clip_value_min=0.0, clip_value_max=1.0)
```
Create an optimizer. The original paper recommends LBFGS, but `Adam` works well too:
```
opt = tf.optimizers.Adam(learning_rate=0.02, beta_1=0.99, epsilon=1e-1)
```
To optimize, use a weighted combination of the two losses to get the total loss:
```
style_weight=1e-2
content_weight=1e4
def style_content_loss(outputs):
style_outputs = outputs['style']
content_outputs = outputs['content']
style_loss = tf.add_n([tf.reduce_mean((style_outputs[name]-style_targets[name])**2)
for name in style_outputs.keys()])
style_loss *= style_weight / num_style_layers
content_loss = tf.add_n([tf.reduce_mean((content_outputs[name]-content_targets[name])**2)
for name in content_outputs.keys()])
content_loss *= content_weight / num_content_layers
loss = style_loss + content_loss
return loss
```
Use `tf.GradientTape` to update the image.
```
@tf.function()
def train_step(image):
with tf.GradientTape() as tape:
outputs = extractor(image)
loss = style_content_loss(outputs)
grad = tape.gradient(loss, image)
opt.apply_gradients([(grad, image)])
image.assign(clip_0_1(image))
```
Now run a few steps to test:
```
train_step(image)
train_step(image)
train_step(image)
tensor_to_image(image)
```
Since it is working, perform a longer optimization:
```
import time
start = time.time()
epochs = 10
steps_per_epoch = 100
step = 0
for n in range(epochs):
for m in range(steps_per_epoch):
step += 1
train_step(image)
print(".", end='')
display.clear_output(wait=True)
display.display(tensor_to_image(image))
print("Train step: {}".format(step))
end = time.time()
print("Total time: {:.1f}".format(end-start))
```
## Total variation loss
One downside to this basic implementation is that it produces a lot of high-frequency artifacts. We can decrease these by directly regularizing the high-frequency components of the image. In style transfer, this is often called the *total variation loss*:
```
def high_pass_x_y(image):
x_var = image[:,:,1:,:] - image[:,:,:-1,:]
y_var = image[:,1:,:,:] - image[:,:-1,:,:]
return x_var, y_var
x_deltas, y_deltas = high_pass_x_y(content_image)
plt.figure(figsize=(14,10))
plt.subplot(2,2,1)
imshow(clip_0_1(2*y_deltas+0.5), "Horizontal Deltas: Original")
plt.subplot(2,2,2)
imshow(clip_0_1(2*x_deltas+0.5), "Vertical Deltas: Original")
x_deltas, y_deltas = high_pass_x_y(image)
plt.subplot(2,2,3)
imshow(clip_0_1(2*y_deltas+0.5), "Horizontal Deltas: Styled")
plt.subplot(2,2,4)
imshow(clip_0_1(2*x_deltas+0.5), "Vertical Deltas: Styled")
```
This shows how the high-frequency components have increased.
Also, this high-frequency component is essentially an edge detector. We can get similar output from the Sobel edge detector, for example:
```
plt.figure(figsize=(14,10))
sobel = tf.image.sobel_edges(content_image)
plt.subplot(1,2,1)
imshow(clip_0_1(sobel[...,0]/4+0.5), "Horizontal Sobel-edges")
plt.subplot(1,2,2)
imshow(clip_0_1(sobel[...,1]/4+0.5), "Vertical Sobel-edges")
```
The regularization loss associated with this is the sum of the absolute values of these differences:
```
def total_variation_loss(image):
x_deltas, y_deltas = high_pass_x_y(image)
return tf.reduce_sum(tf.abs(x_deltas)) + tf.reduce_sum(tf.abs(y_deltas))
total_variation_loss(image).numpy()
```
This shows what the total variation loss does. There is no need to implement it yourself, though, since TensorFlow includes a standard implementation:
```
tf.image.total_variation(image).numpy()
```
## Re-run the optimization
Choose a weight for the `total_variation_loss`:
```
total_variation_weight=30
```
Now include it in the `train_step` function:
```
@tf.function()
def train_step(image):
with tf.GradientTape() as tape:
outputs = extractor(image)
loss = style_content_loss(outputs)
loss += total_variation_weight*tf.image.total_variation(image)
grad = tape.gradient(loss, image)
opt.apply_gradients([(grad, image)])
image.assign(clip_0_1(image))
```
Reinitialize the variable being optimized:
```
image = tf.Variable(content_image)
```
And run the optimization:
```
import time
start = time.time()
epochs = 10
steps_per_epoch = 100
step = 0
for n in range(epochs):
for m in range(steps_per_epoch):
step += 1
train_step(image)
print(".", end='')
display.clear_output(wait=True)
display.display(tensor_to_image(image))
print("Train step: {}".format(step))
end = time.time()
print("Total time: {:.1f}".format(end-start))
```
Finally, save the result:
```
file_name = 'stylized-image.png'
tensor_to_image(image).save(file_name)
try:
from google.colab import files
except ImportError:
pass
else:
files.download(file_name)
```
# Factor Operations with pyBN
It is probably rare that a user wants to directly manipulate factors unless they are developing a new algorithm, but it's still important to see how factor operations are done in pyBN. Moreover, the ease-of-use and transparency of pyBN's factor operations mean it can be a great teaching/learning tool!
In this tutorial, I will go over the main operations you can do with factors. First, let's start with actually creating a factor. So, we will read in a Bayesian Network from one of the included networks:
```
from pyBN import *
bn = read_bn('data/cmu.bn')
print bn.V
print bn.E
```
As you can see, we have a Bayesian network with 5 nodes and some edges between them. Let's create a factor now. This is easy in pyBN - just pass in the BayesNet object and the name of the variable.
```
alarm_factor = Factor(bn,'Alarm')
```
Now that we have a factor, we can explore its properties. Every factor in pyBN has the following attributes:
*self.bn* : a BayesNet object
*self.var* : a string
The random variable to which this Factor belongs
*self.scope* : a list
The RV, and its parents (the RVs involved in the
conditional probability table)
*self.card* : a dictionary, where
key = an RV in self.scope, and
val = integer cardinality of the key (i.e. how
many possible values it has)
*self.stride* : a dictionary, where
key = an RV in self.scope, and
val = integer stride (i.e. how many rows in the
CPT until the NEXT value of RV is reached)
*self.cpt* : a nested numpy array
The probability values for self.var conditioned
on its parents
```
print alarm_factor.bn
print alarm_factor.var
print alarm_factor.scope
print alarm_factor.card
print alarm_factor.stride
print alarm_factor.cpt
```
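To make the `card` and `stride` attributes concrete, here is a plain-Python sketch, independent of pyBN. It assumes the row-major convention from Koller & Friedman, where the first RV in the scope has stride 1 and each later RV's stride is the product of the cardinalities of the RVs before it:

```python
# Sketch: derive strides from an ordered scope and a cardinality dict
# (assumption: Koller & Friedman row-major CPT layout).
def compute_strides(scope, card):
    """scope: ordered list of RV names; card: dict of RV -> cardinality."""
    strides = {}
    running = 1
    for rv in scope:
        strides[rv] = running
        running *= card[rv]
    return strides

# Hypothetical all-binary scope, like the Alarm example above
scope = ['Alarm', 'Burglary', 'Earthquake']
card = {'Alarm': 2, 'Burglary': 2, 'Earthquake': 2}
print(compute_strides(scope, card))  # {'Alarm': 1, 'Burglary': 2, 'Earthquake': 4}
```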
Along with those properties, there are a great number of methods (functions) at hand:
*multiply_factor*
Multiply two factors together. The factor
multiplication algorithm used here is adapted
from Koller and Friedman (PGMs) textbook.
*sumover_var* :
Sum over one *rv* by keeping it constant. Thus, you
end up with a 1-D factor whose scope is ONLY *rv*
and whose length = cardinality of rv.
*sumout_var_list* :
Remove a collection of rv's from the factor
by summing out (i.e. calling sumout_var) over
each rv.
*sumout_var* :
Remove passed-in *rv* from the factor by summing
over everything else.
*maxout_var* :
Remove *rv* from the factor by taking the maximum value
of all rv instantiations over everything else.
*reduce_factor_by_list* :
Reduce the factor by numerous sets of
[rv,val]
*reduce_factor* :
Condition the factor by eliminating any sets of
values that don't align with a given [rv, val]
*to_log* :
Convert probabilities to log space from
normal space.
*from_log* :
Convert probabilities from log space to
normal space.
*normalize* :
Make relevant collections of probabilities sum to one.
Here is a look at Factor Multiplication:
```
import numpy as np
f1 = Factor(bn,'Alarm')
f2 = Factor(bn,'Burglary')
f1.multiply_factor(f2)
f3 = Factor(bn,'Burglary')
f4 = Factor(bn,'Alarm')
f3.multiply_factor(f4)
print np.round(f1.cpt,3)
print '\n',np.round(f3.cpt,3)
```
Here is a look at "sumover_var":
```
f = Factor(bn,'Alarm')
print f.cpt
print f.scope
print f.stride
f.sumover_var('Burglary')
print '\n',f.cpt
print f.scope
print f.stride
```
Here is a look at "sumout_var", which is essentially the opposite of "sumover_var":
```
f = Factor(bn,'Alarm')
f.sumout_var('Earthquake')
print f.stride
print f.scope
print f.card
print f.cpt
```
Additionally, you can sum out a LIST of variables with "sumout_var_list". Notice how summing out every variable in the scope except for ONE variable is equivalent to summing over that ONE variable:
```
f = Factor(bn,'Alarm')
print f.cpt
f.sumout_var_list(['Burglary','Earthquake'])
print f.scope
print f.stride
print f.cpt
f1 = Factor(bn,'Alarm')
print '\n',f1.cpt
f1.sumover_var('Alarm')
print f1.scope
print f1.stride
print f1.cpt
```
Even more, you can use "maxout_var" to take the max values over a variable in the factor. This is a fundamental operation in Max-Sum Variable Elimination for MAP inference. Notice how the variable being maxed out is removed from the scope because it is conditioned upon and thus taken as truth in a sense.
```
f = Factor(bn,'Alarm')
print f.scope
print f.cpt
f.maxout_var('Burglary')
print '\n', f.scope
print f.cpt
```
Moreover, you can also use "reduce_factor" to reduce a factor based on evidence. This is different from "sumover_var" because "reduce_factor" is not summing over anything; it simply removes any parent-child instantiations which are not consistent with the evidence. There should not be any need for normalization because the CPT should already be normalized over the rv-val evidence (but we do it anyway because of rounding). This function is essential whenever users pass evidence into an inference query.
```
f = Factor(bn, 'Alarm')
print f.scope
print f.cpt
f.reduce_factor('Burglary','Yes')
print '\n', f.scope
print f.cpt
```
Another piece of functionality is the capability to convert the factor probabilities to/from log-space. This is important for MAP inference, since the sum of log-probabilities corresponds to the product of the original probabilities (log(ab) = log a + log b) while being far less prone to numerical underflow.
```
f = Factor(bn,'Alarm')
print f.cpt
f.to_log()
print np.round(f.cpt,2)
f.from_log()
print f.cpt
```
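The identity behind `to_log`/`from_log` can be checked with a few made-up probabilities (not the Alarm CPT): summing logs gives the log of the product, and it keeps working where the plain product underflows:

```python
import numpy as np

# Made-up probabilities (hypothetical values, not from the network above)
probs = np.array([0.2, 0.5, 0.9])

log_sum = np.sum(np.log(probs))  # sum in log space
product = np.prod(probs)         # product in normal space

# log(a*b*c) = log(a) + log(b) + log(c), so exponentiating recovers the product
assert np.isclose(np.exp(log_sum), product)

# With hundreds of small factors the product underflows, but the log sum survives
tiny = np.full(500, 1e-10)
print(np.prod(tiny))         # underflows to 0.0
print(np.sum(np.log(tiny)))  # still a usable finite number
```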
Lastly, we have normalization. This function does most of its work behind the scenes because it cleans up the factor probabilities after multiplication or reduction. Still, it's an important function of which users should be aware.
```
f = Factor(bn, 'Alarm')
print f.cpt
f.cpt[0]=20
f.cpt[1]=20
f.cpt[4]=0.94
f.cpt[7]=0.15
print f.cpt
f.normalize()
print f.cpt
```
That's all for factor operations with pyBN. As you can see, there is a lot going on with factor operations. While these functions are the behind-the-scenes drivers of most inference queries, it is still useful for users to see how they operate. These operations have all been optimized to run incredibly fast so that inference queries can be as fast as possible.
# Clustering
See our notes on [unsupervised learning](https://jennselby.github.io/MachineLearningCourseNotes/#unsupervised-learning), [K-means](https://jennselby.github.io/MachineLearningCourseNotes/#k-means-clustering), [DBSCAN](https://jennselby.github.io/MachineLearningCourseNotes/#dbscan-clustering), and [clustering validation](https://jennselby.github.io/MachineLearningCourseNotes/#clustering-validation).
For documentation of various clustering methods in scikit-learn, see http://scikit-learn.org/stable/modules/clustering.html
This code was based on the example at http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_iris.html
which has the following comments:
Code source: Gaël Varoquaux<br/>
Modified for documentation by Jaques Grobler<br/>
License: BSD 3 clause
## Instructions
0. If you haven't already, follow [the setup instructions here](https://jennselby.github.io/MachineLearningCourseNotes/#setting-up-python3) to get all necessary software installed.
1. Read through the code in the following sections:
* [Iris Dataset](#Iris-Dataset)
* [Visualization](#Visualization)
* [Training and Visualization](#Training-and-Visualization)
2. Complete the three-part [Exercise](#Exercise)
```
%matplotlib inline
import numpy
import matplotlib.pyplot
from mpl_toolkits.mplot3d import Axes3D
from sklearn.cluster import KMeans
from sklearn import datasets
import pandas
```
## Iris Dataset
Before you go on, if you haven't used the iris dataset in a previous assignment, make sure you understand it. Modify the cell below to examine different parts of the dataset that are contained in the iris dictionary object.
What are the features? What are we trying to classify?
```
iris = datasets.load_iris()
iris.keys()
iris_df = pandas.DataFrame(iris.data)
iris_df.columns = iris.feature_names
iris_df.head()
```
## Visualization Setup
```
# We can only plot 3 of the 4 iris features, since we only see in 3D.
# These are the ones the example code picked
X_FEATURE = 'petal width (cm)'
Y_FEATURE = 'sepal length (cm)'
Z_FEATURE = 'petal length (cm)'
# set some bounds for the figures that will display the plots of clusterings with various
# hyperparameter settings
# this allows for NUM_COLS * NUM_ROWS plots in the figure
NUM_COLS = 4
NUM_ROWS = 6
FIG_WIDTH = 4 * NUM_COLS
FIG_HEIGHT = 3 * NUM_ROWS
def add_plot(figure, subplot_num, subplot_name, data, labels):
'''Create a new subplot in the figure.'''
# create a new subplot
axis = figure.add_subplot(NUM_ROWS, NUM_COLS, subplot_num, projection='3d',
elev=48, azim=134)
# Plot three of the four features on the graph, and set the color according to the labels
axis.scatter(data[X_FEATURE], data[Y_FEATURE], data[Z_FEATURE], c=labels)
# get rid of the tick numbers. Otherwise, they all overlap and it looks horrible
for axis_obj in [axis.w_xaxis, axis.w_yaxis, axis.w_zaxis]:
axis_obj.set_ticklabels([])
# label the subplot
axis.title.set_text(subplot_name)
```
## Visualization
This is the correct labeling, based on the targets.
```
# start a new figure to hold all of the subplots
truth_figure = matplotlib.pyplot.figure(figsize=(FIG_WIDTH, FIG_HEIGHT))
# Plot the ground truth
add_plot(truth_figure, 1, "Ground Truth", iris_df, iris.target)
```
## Training and Visualization
Now let's see how k-means clusters the iris dataset, with various different numbers of clusters
```
MAX_CLUSTERS = 10
# start a new figure to hold all of the subplots
kmeans_figure = matplotlib.pyplot.figure(figsize=(FIG_WIDTH, FIG_HEIGHT))
# Plot the ground truth
add_plot(kmeans_figure, 1, "Ground Truth", iris_df, iris.target)
plot_num = 2
for num_clusters in range(2, MAX_CLUSTERS + 1):
# train the model
model = KMeans(n_clusters=num_clusters)
model.fit(iris_df)
# get the predictions of which cluster each input is in
labels = model.labels_
# plot this clustering
title = '{} Clusters'.format(num_clusters)
add_plot(kmeans_figure, plot_num, title, iris_df, labels.astype(float))
plot_num += 1
```
# Exercise
1. Add [validation](https://jennselby.github.io/MachineLearningCourseNotes/#clustering-validation) to measure how good the clustering is, with different numbers of clusters.
1. Run the iris data through DBSCAN or hierarchical clustering and validate that as well.
1. Comment on the validation results, explaining which models did best and why you think that might be.
```
# your code here
```
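To get started on part 1, one possible validation metric is the silhouette score (a sketch, not a complete answer; it uses scikit-learn's `silhouette_score` and re-fits K-means for a few cluster counts):

```python
# Silhouette score ranges from -1 to 1; higher means tighter, better-separated clusters.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score

iris_data = load_iris().data
scores = {}
for k in (2, 3, 4):
    cluster_labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(iris_data)
    scores[k] = silhouette_score(iris_data, cluster_labels)
    print(k, round(scores[k], 3))
```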
<a href="https://colab.research.google.com/github/ChanceDurr/AB-Demo/blob/master/DS_Unit_1_Sprint_Challenge_3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Data Science Unit 1 Sprint Challenge 3
## Exploring Data, Testing Hypotheses
In this sprint challenge you will look at a dataset of people being approved or rejected for credit.
https://archive.ics.uci.edu/ml/datasets/Credit+Approval
Data Set Information: This file concerns credit card applications. All attribute names and values have been changed to meaningless symbols to protect confidentiality of the data. This dataset is interesting because there is a good mix of attributes -- continuous, nominal with small numbers of values, and nominal with larger numbers of values. There are also a few missing values.
Attribute Information:
- A1: b, a.
- A2: continuous.
- A3: continuous.
- A4: u, y, l, t.
- A5: g, p, gg.
- A6: c, d, cc, i, j, k, m, r, q, w, x, e, aa, ff.
- A7: v, h, bb, j, n, z, dd, ff, o.
- A8: continuous.
- A9: t, f.
- A10: t, f.
- A11: continuous.
- A12: t, f.
- A13: g, p, s.
- A14: continuous.
- A15: continuous.
- A16: +,- (class attribute)
Yes, most of that doesn't mean anything. A16 (the class attribute) is the most interesting, as it separates the 307 approved cases from the 383 rejected cases. The remaining variables have been obfuscated for privacy - a challenge you may have to deal with in your data science career.
Sprint challenges are evaluated based on satisfactory completion of each part. It is suggested you work through it in order, getting each aspect reasonably working, before trying to deeply explore, iterate, or refine any given step. Once you get to the end, if you want to go back and improve things, go for it!
## Part 1 - Load and validate the data
- Load the data as a `pandas` data frame.
- Validate that it has the appropriate number of observations (you can check the raw file, and also read the dataset description from UCI).
- UCI says there should be missing data - check, and if necessary change the data so pandas recognizes it as na
- Make sure that the loaded features are of the types described above (continuous values should be treated as float), and correct as necessary
This is review, but skills that you'll use at the start of any data exploration. Further, you may have to do some investigation to figure out which file to load from - that is part of the puzzle.
```
import pandas as pd
from scipy.stats import ttest_ind, chi2_contingency
import matplotlib.pyplot as plt
df = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/credit-screening/crx.data',
names=['A1', 'A2', 'A3', 'A4', 'A5', 'A6', 'A7', 'A8', 'A9',
'A10', 'A11', 'A12', 'A13', 'A14', 'A15', 'A16'])
print(df.shape) # Check to see the correct amount of observations
df.head()
# Check for missing values in A16
df['A16'].value_counts(dropna=False)
# Replace + and - with 1 and 0
df['A16'] = df['A16'].replace({'+': 1, '-': 0})
df.head(10)
df = df.replace({'?': None}) #Replace ? with NaN
df['A2'] = df['A2'].astype(float) # Change the dtype of A2 to float
df['A2'].describe()
df_approved = df[df['A16'] == 1]
df_rejected = df[df['A16'] == 0]
print(df_approved.shape)
df_approved.head(10)
print(df_rejected.shape)
df_rejected.head(10)
```
## Part 2 - Exploring data, Testing hypotheses
The only thing we really know about this data is that A16 is the class label. Besides that, we have 6 continuous (float) features and 9 categorical features.
Explore the data: you can use whatever approach (tables, utility functions, visualizations) to get an impression of the distributions and relationships of the variables. In general, your goal is to understand how the features are different when grouped by the two class labels (`+` and `-`).
For the 6 continuous features, how are they different when split between the two class labels? Choose two features to run t-tests (again split by class label) - specifically, select one feature that is *extremely* different between the classes, and another feature that is notably less different (though perhaps still "statistically significantly" different). You may have to explore more than two features to do this.
For the categorical features, explore by creating "cross tabs" (aka [contingency tables](https://en.wikipedia.org/wiki/Contingency_table)) between them and the class label, and apply the Chi-squared test to them. [pandas.crosstab](http://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.crosstab.html) can create contingency tables, and [scipy.stats.chi2_contingency](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html) can calculate the Chi-squared statistic for them.
There are 9 categorical features - as with the t-test, try to find one where the Chi-squared test returns an extreme result (rejecting the null that the data are independent), and one where it is less extreme.
**NOTE** - "less extreme" just means smaller test statistic/larger p-value. Even the least extreme differences may be strongly statistically significant.
Your *main* goal is the hypothesis tests, so don't spend too much time on the exploration/visualization piece. That is just a means to an end - use simple visualizations, such as boxplots or a scatter matrix (both built in to pandas), to get a feel for the overall distribution of the variables.
This is challenging, so manage your time and aim for a baseline of at least running two t-tests and two Chi-squared tests before polishing. And don't forget to answer the questions in part 3, even if your results in this part aren't what you want them to be.
```
# ttest_ind to see if means are similar, reject null hypothesis
ttest_ind(df_approved['A2'].dropna(), df_rejected['A2'].dropna())
# ttest_ind to see if means are similar, reject null hypothesis
ttest_ind(df_approved['A8'].dropna(), df_rejected['A8'].dropna())
ct1 = pd.crosstab(df['A16'], df['A1'])
chi_statistic1, p_value1, dof1, table1 = chi2_contingency(ct1)
print(f'Chi test statistic: {chi_statistic1}')
print(f'P Value: {p_value1}')
print(f'Degrees of freedom: {dof1}')
print(f'Expected Table: \n {table1}')
ct2 = pd.crosstab(df['A16'], df['A4'])
chi_statistic2, p_value2, dof2, table2 = chi2_contingency(ct2)
print(f'Chi test statistic: {chi_statistic2}')
print(f'P Value: {p_value2}')
print(f'Degrees of freedom: {dof2}')
print(f'Expected Table: \n {table2}')
ct2
```
## Exploration with Visuals
```
plt.style.use('fivethirtyeight')
plt.scatter(df['A2'], df['A16'], alpha=.1)
plt.yticks([0, 1])
plt.style.use('fivethirtyeight')
plt.scatter(df['A8'], df['A16'], alpha=.1)
plt.yticks([0, 1])
```
## Part 3 - Analysis and Interpretation
Now that you've looked at the data, answer the following questions:
- Interpret and explain the two t-tests you ran - what do they tell you about the relationships between the continuous features you selected and the class labels?
- Interpret and explain the two Chi-squared tests you ran - what do they tell you about the relationships between the categorical features you selected and the class labels?
- What was the most challenging part of this sprint challenge?
Answer with text, but feel free to intersperse example code/results or refer to it from earlier.
In both t-tests you can see that we were able to reject the null hypothesis that the two means of the features (A2 and A8) are the same. Therefore, we can say that there is a relationship between these features and whether or not an applicant gets approved for credit. If we had failed to reject the null, I would say that there isn't a significant relationship between the A2 and A8 features and getting approved for credit.
With the two Chi-squared tests, I wanted to see if there was a dependency between some of the other categorical features and whether or not an applicant got approved for credit. You can see that in one of the cases we rejected the null hypothesis that the two variables are independent of each other; therefore we can say there is an association between the two features. On the other hand, we had a case where we failed to reject the null hypothesis, meaning that we cannot say these are dependent on each other.
I would say the most challenging part of this sprint challenge was preparing for it. It was tough to get a grasp of what we were doing and why we were doing it. After a full day of studying with some peers and Ryan himself, I was able to go through it step by step and get some questions answered. After that, it was a lot easier to understand. However, I still don't know why there is a higher chance with door 2 in the Monty Hall problem :)
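As an aside, a quick simulation (hypothetical code, not part of the challenge) shows why switching doors in the Monty Hall problem wins about two thirds of the time:

```python
import random

def play(switch, rng):
    """One round of Monty Hall; returns True if the player wins the car."""
    car = rng.randrange(3)
    pick = rng.randrange(3)
    # The host always opens a goat door, so switching wins exactly when the
    # first pick was wrong (2/3 of the time); staying wins when it was right.
    return (pick != car) if switch else (pick == car)

rng = random.Random(0)
trials = 100_000
switch_wins = sum(play(True, rng) for _ in range(trials)) / trials
stay_wins = sum(play(False, rng) for _ in range(trials)) / trials
print(switch_wins, stay_wins)  # roughly 0.667 vs 0.333
```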
## Hyperparameter Tuning Design Pattern
In Hyperparameter Tuning, the training loop is itself inserted into an optimization method to find the optimal set of model hyperparameters.
```
import datetime
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import time
from tensorflow import keras
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
```
### Grid search in Scikit-learn
Here we'll look at how to implement hyperparameter tuning with the grid search algorithm, using Scikit-learn's built-in `GridSearchCV`. We'll do this by training a random forest model on the UCI mushroom dataset to predict whether a mushroom is edible or poisonous.
```
# First, download the data
# We've made it publicly available in Google Cloud Storage
!gsutil cp gs://ml-design-patterns/mushrooms.csv .
mushroom_data = pd.read_csv('mushrooms.csv')
mushroom_data.head()
```
To keep things simple, we'll first convert the label column to numeric and then
use `pd.get_dummies()` to convert the remaining categorical features to numeric columns.
```
# 1 = edible, 0 = poisonous
mushroom_data.loc[mushroom_data['class'] == 'p', 'class'] = 0
mushroom_data.loc[mushroom_data['class'] == 'e', 'class'] = 1
labels = mushroom_data.pop('class')
dummy_data = pd.get_dummies(mushroom_data)
# Split the data
train_size = int(len(mushroom_data) * .8)
train_data = dummy_data[:train_size]
test_data = dummy_data[train_size:]
train_labels = labels[:train_size].astype(int)
test_labels = labels[train_size:].astype(int)
```
Next, we'll build our Scikit-learn model and define the hyperparameters we want to optimize using grid search.
```
model = RandomForestClassifier()
grid_vals = {
'max_depth': [5, 10, 100],
'n_estimators': [100, 150, 200]
}
grid_search = GridSearchCV(model, param_grid=grid_vals, scoring='accuracy')
# Train the model while running hyperparameter trials
grid_search.fit(train_data.values, train_labels.values)
```
Let's see which hyperparameters resulted in the best accuracy.
```
grid_search.best_params_
```
Finally, we can generate some test predictions on our model and evaluate its accuracy.
```
grid_predict = grid_search.predict(test_data.values)
grid_acc = accuracy_score(test_labels.values, grid_predict)
grid_f = f1_score(test_labels.values, grid_predict)
print('Accuracy: ', grid_acc)
print('F1-Score: ', grid_f)
```
### Hyperparameter tuning with `keras-tuner`
To show how this works, we'll train a model on the MNIST handwritten digit dataset, which is available directly in Keras. For more details, see this [Keras tuner guide](https://www.tensorflow.org/tutorials/keras/keras_tuner).
```
!pip install keras-tuner --quiet
import kerastuner as kt
# Get the mnist data
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
def build_model(hp):
model = keras.Sequential([
keras.layers.Flatten(input_shape=(28, 28)),
keras.layers.Dense(hp.Int('first_hidden', 128, 256, step=32), activation='relu'),
keras.layers.Dense(hp.Int('second_hidden', 16, 128, step=32), activation='relu'),
keras.layers.Dense(10, activation='softmax')
])
model.compile(
optimizer=tf.keras.optimizers.Adam(
hp.Float('learning_rate', .005, .01, sampling='log')),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
return model
tuner = kt.BayesianOptimization(
build_model,
objective='val_accuracy',
max_trials=30
)
tuner.search(x_train, y_train, validation_split=0.1, epochs=10)
best_hps = tuner.get_best_hyperparameters(num_trials = 1)[0]
```
### Hyperparameter tuning on Cloud AI Platform
In this section we'll show you how to scale your hyperparameter optimization by running it on Google Cloud's AI Platform. You'll need a Cloud account with AI Platform Training enabled to run this section.
We'll be using PyTorch to build a regression model in this section. To train the model, we'll use the BigQuery natality dataset. We've made a subset of this data available in a public Cloud Storage bucket, which we'll download from within the training job.
```
from google.colab import auth
auth.authenticate_user()
```
In the cells below, replace `your-project-id` with the ID of your Cloud project, and `your-gcs-bucket` with the name of your Cloud Storage bucket.
```
!gcloud config set project your-project-id
BUCKET_URL = 'gs://your-gcs-bucket'
```
To run this on AI Platform, we'll need to package up our model code in Python's package format, which includes an empty `__init__.py` file and a `setup.py` to install dependencies (in this case PyTorch, Scikit-learn, and Pandas).
```
!mkdir trainer
!touch trainer/__init__.py
```
```
%%writefile setup.py
from setuptools import find_packages
from setuptools import setup
REQUIRED_PACKAGES = ['torch>=1.5', 'scikit-learn>=0.20', 'pandas>=1.0']
setup(
name='trainer',
version='0.1',
install_requires=REQUIRED_PACKAGES,
packages=find_packages(),
include_package_data=True,
description='My training application package.'
)
```
Below, we're copying our model training code to a `model.py` file in our trainer package directory. This code runs training and, after training completes, reports the model's final loss to Cloud HyperTune.
```
%%writefile trainer/model.py
import argparse
import hypertune
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.utils import shuffle
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import normalize
def get_args():
"""Argument parser.
Returns:
Dictionary of arguments.
"""
parser = argparse.ArgumentParser(description='PyTorch MNIST')
parser.add_argument('--job-dir', # handled automatically by AI Platform
help='GCS location to write checkpoints and export ' \
'models')
parser.add_argument('--lr', # Specified in the config file
type=float,
default=0.01,
help='learning rate (default: 0.01)')
parser.add_argument('--momentum', # Specified in the config file
type=float,
default=0.5,
help='SGD momentum (default: 0.5)')
parser.add_argument('--hidden-layer-size', # Specified in the config file
type=int,
default=8,
help='hidden layer size')
args = parser.parse_args()
return args
def train_model(args):
# Get the data
natality = pd.read_csv('https://storage.googleapis.com/ml-design-patterns/natality.csv')
natality = natality.dropna()
natality = shuffle(natality, random_state = 2)
natality.head()
natality_labels = natality['weight_pounds']
natality = natality.drop(columns=['weight_pounds'])
train_size = int(len(natality) * 0.8)
traindata_natality = natality[:train_size]
trainlabels_natality = natality_labels[:train_size]
testdata_natality = natality[train_size:]
testlabels_natality = natality_labels[train_size:]
# Normalize and convert to PT tensors
normalized_train = normalize(np.array(traindata_natality.values), axis=0)
normalized_test = normalize(np.array(testdata_natality.values), axis=0)
train_x = torch.Tensor(normalized_train)
train_y = torch.Tensor(np.array(trainlabels_natality))
test_x = torch.Tensor(normalized_test)
test_y = torch.Tensor(np.array(testlabels_natality))
# Define our data loaders
train_dataset = torch.utils.data.TensorDataset(train_x, train_y)
train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=128, shuffle=True)
test_dataset = torch.utils.data.TensorDataset(test_x, test_y)
test_dataloader = torch.utils.data.DataLoader(test_dataset, batch_size=128, shuffle=False)
# Define the model, while tuning the size of our hidden layer
model = nn.Sequential(nn.Linear(len(train_x[0]), args.hidden_layer_size),
nn.ReLU(),
nn.Linear(args.hidden_layer_size, 1))
criterion = nn.MSELoss()
# Tune hyperparameters in our optimizer
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=args.momentum)
epochs = 10
for e in range(epochs):
for batch_id, (data, label) in enumerate(train_dataloader):
optimizer.zero_grad()
y_pred = model(data)
label = label.view(-1,1)
loss = criterion(y_pred, label)
loss.backward()
optimizer.step()
val_mse = 0
num_batches = 0
# Evaluate accuracy on our test set
with torch.no_grad():
for i, (data, label) in enumerate(test_dataloader):
num_batches += 1
y_pred = model(data)
mse = criterion(y_pred, label.view(-1,1))
val_mse += mse.item()
avg_val_mse = (val_mse / num_batches)
# Report the metric we're optimizing for to AI Platform's HyperTune service
# In this example, we're minimizing loss on our test set
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='val_mse',
metric_value=avg_val_mse,
global_step=epochs
)
def main():
args = get_args()
print('in main', args)
train_model(args)
if __name__ == '__main__':
main()
```
```
%%writefile config.yaml
trainingInput:
hyperparameters:
goal: MINIMIZE
maxTrials: 10
maxParallelTrials: 5
hyperparameterMetricTag: val_mse
enableTrialEarlyStopping: TRUE
params:
- parameterName: lr
type: DOUBLE
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LINEAR_SCALE
- parameterName: momentum
type: DOUBLE
minValue: 0.0
maxValue: 1.0
scaleType: UNIT_LINEAR_SCALE
- parameterName: hidden-layer-size
type: INTEGER
minValue: 8
maxValue: 32
scaleType: UNIT_LINEAR_SCALE
```
```
MAIN_TRAINER_MODULE = "trainer.model"
TRAIN_DIR = os.getcwd() + '/trainer'
JOB_DIR = BUCKET_URL + '/output'
REGION = "us-central1"
# Create a unique job name (run this each time you submit a job)
timestamp = str(datetime.datetime.now().time())
JOB_NAME = 'caip_training_' + str(int(time.time()))
```
The command below will submit your training job to AI Platform. To view the logs and the results of each HyperTune trial, visit your Cloud console.
```
# Configure and submit the training job
!gcloud ai-platform jobs submit training $JOB_NAME \
--scale-tier basic \
--package-path $TRAIN_DIR \
--module-name $MAIN_TRAINER_MODULE \
--job-dir $JOB_DIR \
--region $REGION \
--runtime-version 2.1 \
--python-version 3.7 \
--config config.yaml
```
Copyright 2020 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License
# Setup
```
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm
import pandas as pd
import numpy as np
import os, sys
from itertools import combinations
import seaborn as sns
from matplotlib_venn import venn2
from scipy import stats
from sklearn.decomposition import PCA
sys.path.append('../../scripts/')
from core import *
sns.set_style('ticks')
# Use custom stylesheet for figures
plt.style.use('custom')
```
## Load data
```
datasets = sorted([x for x in os.listdir(os.path.join(DATA_DIR,'iModulons'))
if '.' not in x])
# Thresholds were obtained from sensitivity analysis
cutoffs = {'MA-1': 550,
'MA-2': 600,
'MA-3': 350,
'RNAseq-1': 700,
'RNAseq-2': 300,
'combined': 400}
def load(dataset):
# Define directories
ds_dir = os.path.join(DATA_DIR,'iModulons',dataset)
# Define files
X_file = os.path.join(DATA_DIR,'processed_data',dataset+'_bc.csv')
M_file = os.path.join(ds_dir,'M.csv')
A_file = os.path.join(ds_dir,'A.csv')
metadata_file = os.path.join(DATA_DIR,'metadata',dataset+'_metadata.csv')
return IcaData(M_file,A_file,X_file,metadata_file,cutoffs[dataset])
# Load datasets
objs = {}
for ds in tqdm(datasets):
objs[ds] = load(ds)
DF_categories = pd.read_csv(os.path.join(DATA_DIR,'iModulons','categories_curated.csv'),index_col=0)
DF_categories.index = DF_categories.dataset.combine(DF_categories.component,lambda x1,x2:x1+'_'+str(x2))
```
# Figure 5b - Categories of combined iModulons
```
data = DF_categories[DF_categories.dataset=='combined'].type.value_counts()
data
data.sum()
data/data.sum()
unchar_mod_lens = []
mod_lens = []
for k in objs['combined'].M.columns:
if DF_categories.loc['combined_'+str(k),'type']=='uncharacterized':
unchar_mod_lens.append(len(objs['combined'].show_enriched(k)))
else:
mod_lens.append(len(objs['combined'].show_enriched(k)))
data = DF_categories[DF_categories.dataset=='combined'].type.value_counts()
plt.pie(data.values,labels=data.index);
```
# Create RBH graph
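The `rbh` helper imported below is project-specific code not shown in this notebook. As a rough sketch of the idea, reciprocal best hits pair a column of one component matrix with a column of another when each is the other's best match; matching here by absolute Pearson correlation of the gene weights is an assumption:

```python
import numpy as np
import pandas as pd

def rbh_sketch(M1, M2):
    """Reciprocal best hits between the columns of two component matrices.

    Returns (col1, col2, corr) triples where each column is the other's
    best match by absolute Pearson correlation (rows/genes must align).
    """
    n1 = M1.shape[1]
    # Cross-correlation block between M1 columns and M2 columns
    corr = np.abs(np.corrcoef(M1.T.values, M2.T.values)[:n1, n1:])
    best12 = corr.argmax(axis=1)  # best M2 column for each M1 column
    best21 = corr.argmax(axis=0)  # best M1 column for each M2 column
    return [(M1.columns[i], M2.columns[j], corr[i, j])
            for i, j in enumerate(best12) if best21[j] == i]  # mutual best
```

Each link can then carry a distance such as `1 - corr`, as the notebook does with `1 - val` below.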
```
from rbh import *
l2s = []
for ds in datasets[:-1]:
links = rbh(objs['combined'].M,objs[ds].M)
for i,j,val in links:
comp1 = 'combined'+'_'+str(i)
comp2 = ds+'_'+str(j)
class1 = DF_categories.loc[comp1,'type']
class2 = DF_categories.loc[comp2,'type']
desc1 = DF_categories.loc[comp1,'description']
desc2 = DF_categories.loc[comp2,'description']
l2s.append(['combined',ds,i,j,comp1,comp2,class1,class2,desc1,desc2,1-val])
DF_links = pd.DataFrame(l2s,columns=['ds1','ds2','comp1','comp2','name1','name2','type1','type2','desc1','desc2','dist'])
DF_links = DF_links[DF_links.dist > 0.3]
DF_links = DF_links.sort_values(['ds1','comp1','ds2'])
DF_links[DF_links.type1 == 'uncharacterized'].name1.value_counts()
# Total links between full dataset and individual datasets
DF_links.groupby('ds2').count()['ds1']
# Average distance between full dataset and individual datasets
means = DF_links.groupby('ds2').mean()['dist']
stds = DF_links.groupby('ds2').std()['dist']
DF_links.to_csv(os.path.join(DATA_DIR,'iModulons','RBH_combined.csv'))
DF_links
```
# Figure 5c - Presence/absence of iModulons
```
index = objs['combined'].M.columns
type_dict = {'regulatory':-2,'functional':-3,'genomic':-4,'uncharacterized':-5}
DF_binarized = pd.DataFrame([1]*len(index),index=index,columns=['Combined Compendium'])
for ds in datasets[:-1]:
DF_binarized[ds] = [x in DF_links[DF_links.ds2==ds].comp1.tolist() for x in index]
DF_binarized = DF_binarized.astype(int)
DF_binarized['total'] = DF_binarized.sum(axis=1)
DF_binarized = (DF_binarized-1)
DF_binarized = DF_binarized[['RNAseq-1','RNAseq-2','MA-1','MA-2','MA-3','total']]
DF_binarized['type'] = [type_dict[DF_categories.loc['combined_'+str(k)].type] for k in DF_binarized.index]
DF_binarized = DF_binarized.sort_values(['total','RNAseq-1','RNAseq-2','MA-1','MA-2','MA-3','type'],ascending=False)
cmap = ['#b4d66c','#bc80b7','#81b1d3','#f47f72'] + ['white','black'] + sns.color_palette('Blues',5)
bin_counts = DF_binarized.groupby(['total','type']).size().unstack(fill_value=0).T.sort_index(ascending=False)
bin_counts = bin_counts
bin_counts.index = ['regulatory','biological','genomic','uncharacterized']
bin_counts.T.plot.bar(stacked=True)
plt.legend(bbox_to_anchor=(1,1))
print('Number of comps:',len(DF_binarized))
print('Number of linked comps: {} ({:.2f})'.format(sum(DF_binarized.total > 0),
sum(DF_binarized.total > 0)/len(DF_binarized)))
print('Number of linked comps: {} ({:.2f})'.format(sum(DF_binarized.total >1),
sum(DF_binarized.total > 1)/len(DF_binarized)))
fig,ax = plt.subplots(figsize=(4,1.5))
sns.heatmap(DF_binarized.T,cmap=cmap,ax=ax)
ax.set_xticks(np.arange(len(DF_binarized),step=20));
ax.tick_params(axis='x',reset=True,length=3,width=.5,color='k',top=False)
ax.set_xticklabels(np.arange(len(DF_binarized),step=20),);
```
# Figure 5d - Heatmap
```
fig,ax = plt.subplots(figsize=(2.1,1.3))
DF_types = DF_categories.groupby(['dataset','type']).count().component.unstack().fillna(0).drop('combined')
DF_types.loc['Total'] = DF_types.sum(axis=0)
DF_types['Total'] = DF_types.sum(axis=1)
DF_types_linked = DF_links.groupby(['ds2','type2']).count().comp1.unstack().fillna(0)
DF_types_linked.loc['Total'] = DF_types_linked.sum(axis=0)
DF_types_linked['Total'] = DF_types_linked.sum(axis=1)
DF_types_lost = DF_types - DF_types_linked
DF_text = pd.DataFrame()
for col in DF_types_lost:
DF_text[col] = DF_types_lost[col].astype(int).astype(str).str.cat(DF_types[col].astype(int).astype(str),sep='/')
DF_text = DF_text[['regulatory','functional','genomic','uncharacterized','Total']]
type_grid = (DF_types_lost/DF_types).fillna(0)[['regulatory','functional','genomic','uncharacterized','Total']]
type_grid = type_grid.reindex(['RNAseq-1','RNAseq-2','MA-1','MA-2','MA-3','Total'])
DF_text = DF_text.reindex(['RNAseq-1','RNAseq-2','MA-1','MA-2','MA-3','Total'])
sns.heatmap(type_grid,cmap='Blues',annot=DF_text,fmt='s',annot_kws={"size": 5})
# Types lost
DF_lost = DF_types- DF_types_linked
DF_lost
DF_types_linked.loc['Total']
DF_types_linked.loc['Total']/DF_types_linked.loc['Total'].iloc[:-1].sum()
```
# Figure 5e - Explained variance
```
# Load dataset - Downloaded from Sanchez-Vasquez et al 2019
DF_ppGpp = pd.read_excel(os.path.join(DATA_DIR,'ppGpp_data','dataset_s01_from_sanchez_vasquez_2019.xlsx'),sheet_name='Data')
# Get 757 genes described to be directly regulated by ppGpp
paper_genes = DF_ppGpp[DF_ppGpp['1+2+ 5 min Category'].isin(['A','B'])].Synonym.values
len(paper_genes)
paper_genes_down = DF_ppGpp[DF_ppGpp['1+2+ 5 min Category'].isin(['A'])].Synonym.values
paper_genes_up = DF_ppGpp[DF_ppGpp['1+2+ 5 min Category'].isin(['B'])].Synonym.values
venn2((set(paper_genes_down),set(objs['combined'].show_enriched(147).index)),set_labels=('Genes downregulated from ppGpp binding to RNAP','Genes in Central Dogma I-modulon'))
pp_genes = {}
for k in objs['combined'].M.columns:
pp_genes[k] = set(objs['combined'].show_enriched(k).index) & set(paper_genes)
set(objs['combined'].show_enriched(147).index) - set(paper_genes)
```
# Figure 5f - ppGpp Activities
```
ppGpp_X = pd.read_csv(os.path.join(DATA_DIR,'ppGpp_data','log_tpm_norm.csv'),index_col=0)
# Get genes in both ICA data and ppGpp dataframe
shared_genes = sorted(set(objs['combined'].X.index) & set(ppGpp_X.index))
# Keep only genes in both dataframes
ppGpp_X = ppGpp_X.loc[shared_genes]
M = objs['combined'].M.loc[shared_genes]
# Center columns
X = ppGpp_X.sub(ppGpp_X.mean(axis=0))
# Perform projection
M_inv = np.linalg.pinv(M)
A = np.dot(M_inv,X)
A = pd.DataFrame(A,columns = X.columns, index = M.columns)
t0 = ['ppgpp__t0__1','ppgpp__t0__2','ppgpp__t0__3']
t5 = ['ppgpp__t5__1','ppgpp__t5__2','ppgpp__t5__3']
ds4 = objs['combined'].metadata[objs['combined'].metadata['dataset'] == 'RNAseq-1'].index
df = pd.DataFrame(objs['combined'].A.loc[147,ds4])
df['group'] = ['RpoB\nE672K' if 'rpoBE672K' in x else 'RpoB\nE546V' if 'rpoBE546V' in x else 'WT RpoB' for x in df.index]
fig,ax = plt.subplots(figsize=(2,2))
sns.boxplot(data=df,y=147,x='group')
sns.stripplot(data=df,y=147,x='group',dodge=True,color='k',jitter=0.3,s=3)
ax.set_ylabel('Central Dogma\nI-modulon Activity',fontsize=7)
ax.set_xlabel('Carbon Source',fontsize=7)
ax.tick_params(labelsize=5)
plt.tight_layout()
```
# Figure 5g: PCA of datasets
```
cdict = dict(zip(datasets[:-1],['tab:orange','black','tab:red','tab:green','tab:blue']))
exp_data = pd.read_csv(os.path.join(DATA_DIR,'processed_data','combined_bc.csv'),index_col=0)
pca = PCA()
DF_weights = pd.DataFrame(pca.fit_transform(exp_data.T),index=exp_data.columns)
DF_components = pd.DataFrame(pca.components_.T,index=exp_data.index)
var_cutoff = 0.99
fig,ax = plt.subplots(figsize=(1.5,1.5))
for name,group in objs['combined'].metadata.groupby('dataset'):
idx = exp_data.loc[:,group.index.tolist()].columns.tolist()
ax.scatter(DF_weights.loc[idx,0],
DF_weights.loc[idx,1],
c=cdict[name],
label=name,alpha=0.8,s=3)
ax.set_xlabel('Component 1: %.1f%%'%(pca.explained_variance_ratio_[0]*100))
ax.set_ylabel('Component 2: %.1f%%'%(pca.explained_variance_ratio_[1]*100))
ax.legend(bbox_to_anchor=(1,-.2),ncol=2)
```
# Figure 5h: PCA of activites
```
pca = PCA()
DF_weights = pd.DataFrame(pca.fit_transform(objs['combined'].A.T),index=objs['combined'].A.columns)
DF_components = pd.DataFrame(pca.components_.T,index=objs['combined'].A.index)
var_cutoff = 0.99
fig,ax = plt.subplots(figsize=(1.5,1.5))
for name,group in objs['combined'].metadata.groupby('dataset'):
idx = exp_data.loc[:,group.index.tolist()].columns.tolist()
ax.scatter(DF_weights.loc[idx,0],
DF_weights.loc[idx,1],
c=cdict[name],
label=name,alpha=0.8,s=3)
ax.set_xlabel('Component 1: %.1f%%'%(pca.explained_variance_ratio_[0]*100))
ax.set_ylabel('Component 2: %.1f%%'%(pca.explained_variance_ratio_[1]*100))
ax.legend(bbox_to_anchor=(1,-.2),ncol=2)
```
# Supplementary Figure 7
## Panel a: Explained variance of lost i-modulons
```
kept_mods = set(DF_links.name2.unique())
all_mods = set([ds+'_'+str(name) for ds in datasets[:-1] for name in objs[ds].M.columns])
missing_mods = all_mods - kept_mods
from util import plot_rec_var
missing_var = []
for mod in tqdm(missing_mods):
ds,comp = mod.split('_')
missing_var.append(plot_rec_var(objs[ds],modulons=[int(comp)],plot=False).values[0])
if plot_rec_var(objs[ds],modulons=[int(comp)],plot=False).values[0] > 10:
print(mod)
kept_var = []
for mod in tqdm(kept_mods):
ds,comp = mod.split('_')
kept_var.append(plot_rec_var(objs[ds],modulons=[int(comp)],plot=False).values[0])
plt.hist(missing_var,range=(0,20),bins=20)
plt.hist(kept_var,range=(0,20),bins=20,alpha=0.5)
plt.xticks(range(0,21,2))
plt.xlabel('Percent Variance Explained')
plt.ylabel('Count')
stats.mannwhitneyu(missing_var,kept_var)
fig,ax = plt.subplots(figsize=(1.5,1.5))
plt.hist(missing_var,range=(0,1),bins=10)
plt.hist(kept_var,range=(0,1),bins=10,alpha=0.5)
plt.xlabel('Percent Variance Explained')
plt.ylabel('Count')
```
## Panel b: Classes of new i-modulons
```
type_dict
new_counts = DF_binarized[(DF_binarized.total==0)].type.value_counts()
new_counts
new_reg = DF_binarized[(DF_binarized.total==0) & (DF_binarized.type==-2)].index
new_bio = DF_binarized[(DF_binarized.total==0) & (DF_binarized.type==-3)].index
new_gen = DF_binarized[(DF_binarized.total==0) & (DF_binarized.type==-4)].index
new_unc = DF_binarized[(DF_binarized.total==0) & (DF_binarized.type==-5)].index
new_single = []
for k in new_unc:
if objs['combined'].show_enriched(k)['weight'].max() > 0.4:
new_single.append(k)
[len(new_reg),len(new_bio),len(new_gen),len(new_unc)-len(new_single),len(new_single)]
plt.pie([len(new_reg),len(new_bio),len(new_gen),len(new_unc)-len(new_single),len(new_single)],
labels=['Regulatory','Functional','Genomic','Uncharacterized','Single Gene'])
```
## Panel c: Histogram of IC gene coefficients
```
fig,ax = plt.subplots(figsize=(2,2))
plt.hist(objs['combined'].M[31])
plt.yscale('log')
plt.xlabel('IC Gene Coefficient')
plt.ylabel('Count (Log-scale)')
plt.vlines([objs['combined'].thresholds[31],-objs['combined'].thresholds[31]],0,3000,
linestyles='dashed',linewidth=0.5)
```
## Panel e: F1-score chart
```
reg_links = DF_links[(DF_links.type1 == 'regulatory') & (DF_links.desc1 == DF_links.desc2)]
reg_links.head()
fig,ax=plt.subplots(figsize=(1.5,2))
struct = []
for name,group in reg_links.groupby('ds2'):
struct.append(pd.DataFrame(list(zip([name]*len(group),
DF_categories.loc[group.name1,'f1score'].values,
DF_categories.loc[group.name2,'f1score'].values)),
columns=['title','full','partial']))
DF_stats = pd.concat(struct)
DF_stats = DF_stats.melt(id_vars='title')
sns.boxplot(data=DF_stats,x='variable',y='value',order=['partial','full'])
sns.stripplot(data=DF_stats,x='variable',y='value',color='k',s=2,jitter=0.3,order=['partial','full'])
DF_stats[DF_stats.variable=='partial'].value.mean()
DF_stats[DF_stats.variable=='full'].value.mean()
stats.wilcoxon(DF_stats[DF_stats.variable=='partial'].value,DF_stats[DF_stats.variable=='full'].value)
```
## Panel f: Pearson R between activities
```
from sklearn.metrics import r2_score
linked_pearson = []
for i,row in DF_links.iterrows():
partial_acts = objs[row.ds2].A.loc[row.comp2]
full_acts = objs[row.ds1].A.loc[row.comp1,partial_acts.index]
r,_ = stats.spearmanr(full_acts,partial_acts)
linked_pearson.append(abs(r))
sum(np.array(linked_pearson) > 0.6) / len(linked_pearson)
fig,ax = plt.subplots(figsize=(2,2))
ax.hist(linked_pearson,bins=20)
ax.set_xlabel('Absolute Spearman R between activities of linked i-modulons')
ax.set_ylabel('Count')
```
# New biological component
```
rRNA = 0
tRNA = 0
ygene = 0
polyamine = 0
for gene in objs['combined'].show_enriched(147)['product']:
if 'rRNA' in gene or 'ribosom' in gene:
rRNA += 1
elif 'tRNA' in gene:
tRNA += 1
elif 'putative' in gene or 'family' in gene:
ygene += 1
elif 'spermidine' in gene or 'YEEF' in gene:
polyamine +=1
else:
print(gene)
objs['combined'].show_enriched(147)
```
```
import cartopy.crs
import cmocean.cm
import matplotlib.pyplot as plt
import numpy
import xarray
import pandas
import pathlib
import yaml
```
##### file paths and names
```
mesh_mask = xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn2DMeshMaskV17-02")
water_mask = mesh_mask.tmaskutil.isel(time=0)
fields = xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV19-05")
salinity = fields.salinity.sel(time="2020-08-14 14:30", depth=0, method="nearest").where(water_mask)
georef = xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSnBathymetryV17-02")
location_dir = pathlib.Path('/Users/rmueller/Data/MIDOSS/AIS')
location_file = location_dir / 'Oil_Transfer_Facilities.xlsx'
wa_oil = pathlib.Path('/Users/rmueller/Data/MIDOSS/marine_transport_data') / 'WA_origin.yaml'
vessel_types = [
'tanker',
'atb',
'barge'
]
oil_types = [
'akns',
'bunker',
'dilbit',
'jet',
'diesel',
'gas',
'other'
]
oil_colors = [
'darkolivegreen',
'olivedrab',
'slategrey',
'indigo',
'mediumslateblue',
'cornflowerblue',
'saddlebrown'
]
```
### Load locations of marine transfer facilities
##### read in oil attribute data
```
with open(wa_oil) as file:
oil_attrs = yaml.load(file, Loader=yaml.Loader)
oil_attrs['Alon Asphalt Company (Paramount Petroleum)']['barge']['diesel']
oil_attrs.keys()
```
##### read in locations for marine transfer facilities
```
wa_locs = pandas.read_excel(location_file, sheet_name='Washington',
usecols="B,I,J")
bc_locs = pandas.read_excel(location_file, sheet_name='British Columbia',
usecols="A,B,C")
```
##### find and correct entries that differ between DOE naming and Casey's name list (I plan to correct these names so we are self-consistent)
```
for location in wa_locs['FacilityName']:
if location not in oil_attrs.keys():
print(location)
# Maxum Petroleum - Harbor Island Terminal -> 'Maxum (Rainer Petroleum)'
# Marathon Anacortes Refinery (formerly Tesoro) -> 'Andeavor Anacortes Refinery (formerly Tesoro)'
# Nustar Energy Vancouver -> Not included
for location in wa_locs['FacilityName']:
if location == 'Marathon Anacortes Refinery (formerly Tesoro)':
# wa_locs[wa_locs['FacilityName']=='Marathon Anacortes Refinery (formerly Tesoro)'].FacilityName = (
# 'Andeavor Anacortes Refinery (formerly Tesoro)'
# )
wa_locs.loc[wa_locs['FacilityName']=='Marathon Anacortes Refinery (formerly Tesoro)','FacilityName'] = (
'Andeavor Anacortes Refinery (formerly Tesoro)'
)
elif location == 'Maxum Petroleum - Harbor Island Terminal':
wa_locs.loc[wa_locs['FacilityName']=='Maxum Petroleum - Harbor Island Terminal','FacilityName'] = (
'Maxum (Rainer Petroleum)'
)
wa_locs.drop(index=17, inplace=True)
wa_locs.reset_index( drop=True, inplace=True )
wa_locs
```
### Add oil entries to dataframe
```
for oil in oil_types:
wa_locs[oil] = numpy.zeros(len(wa_locs['FacilityName']))
wa_locs[f'{oil}_scale'] = numpy.zeros(len(wa_locs['FacilityName']))
#oil_attrs[location][vessel][oil]['total_gallons']
# for oil in oil_types:
# wa_locs[oil] = numpy.zeros(len(wa_locs['FacilityName']))
# index = 0
# for oil in oil_types:
# wa_locs[oil[index]] = 1
# wa_locs
```
### loop through all facilities and get the total amount of fuel by fuel type
```
wa_locs['all'] = numpy.zeros(len(wa_locs['FacilityName']))
for index in range(len(wa_locs)):
for oil in oil_types:
for vessel in vessel_types:
# add oil across vessel types (totals stay in gallons)
wa_locs.loc[index, oil] += oil_attrs[wa_locs.FacilityName[index]][vessel][oil]['total_gallons']
# add oil across oil types
wa_locs.loc[index,'all'] += wa_locs.loc[index,oil]
# Now make a scale to use for marker size
for oil in oil_types:
wa_locs[f'{oil}_scale'] = wa_locs[oil]/wa_locs[oil].sum()
print(wa_locs[f'{oil}_scale'].sum())
wa_locs['all_scale'] = wa_locs['all']/wa_locs['all'].sum()
wa_locs
fs = 20
scatter_handles = numpy.zeros(len(oil_types))
%matplotlib inline
rotated_crs = cartopy.crs.RotatedPole(pole_longitude=120.0, pole_latitude=63.75)
plain_crs = cartopy.crs.PlateCarree()
# Use `subplot_kw` arg to pass a dict of kwargs containing the `RotatedPole` CRS
# and the `facecolor` value to the add_subplot() call(s) that subplots() will make
fig,ax = plt.subplots(
1, 1, figsize=(18, 9), subplot_kw={"projection": rotated_crs, "facecolor": "#8b7765"}
)
# Use the `transform` arg to tell cartopy to transform the model field
# between grid coordinates and lon/lat coordinates when it is plotted
quad_mesh = ax.pcolormesh(
georef.longitude, georef.latitude, salinity, transform=plain_crs, cmap=cmocean.cm.haline, shading="auto"
)
# add WA locations
for oil in oil_types:
index = oil_types.index(oil)
scatter_handles = ax.scatter(
wa_locs['DockLongNumber'],
wa_locs['DockLatNumber'],
s = 600*wa_locs[f'{oil}_scale'],
transform=plain_crs,
color=oil_colors[index],
edgecolors='white',
linewidth=1,
label=oil
)
#plt.legend(handles = scatter_handles)
#plt.show()
# add BC locations
# ax.scatter(
# bc_locs['Longitude'],
# bc_locs['Latitude'],
# transform=plain_crs,
# color='maroon'
# )
# Colour bar
cbar = plt.colorbar(quad_mesh, ax=ax)
cbar.set_label(f"{salinity.attrs['long_name']} [{salinity.attrs['units']}]")
# Axes title; ax.gridlines() below labels axes tick in a way that makes
# additional axes labels unnecessary IMHO
ax.set_title(f"Exports of all oil types, scaled by magnitude of individual type", fontsize=fs)
# Don't call set_aspect() because plotting on lon/lat grid implicitly makes the aspect ratio correct
# Show grid lines
# Note that ax.grid() has no effect; ax.gridlines() is from cartopy, not matplotlib
ax.gridlines(draw_labels=True, auto_inline=False)
# cartopy doesn't seem to play nice with tight_layout() unless we call canvas.draw() first
fig.canvas.draw()
fig.tight_layout()
fs = 20
%matplotlib inline
rotated_crs = cartopy.crs.RotatedPole(pole_longitude=120.0, pole_latitude=63.75)
plain_crs = cartopy.crs.PlateCarree()
# Use `subplot_kw` arg to pass a dict of kwargs containing the `RotatedPole` CRS
# and the `facecolor` value to the add_subplot() call(s) that subplots() will make
fig,ax = plt.subplots(
1, 1, figsize=(18, 9), subplot_kw={"projection": rotated_crs, "facecolor": "#8b7765"}
)
# Use the `transform` arg to tell cartopy to transform the model field
# between grid coordinates and lon/lat coordinates when it is plotted
quad_mesh = ax.pcolormesh(
georef.longitude, georef.latitude, salinity, transform=plain_crs, cmap=cmocean.cm.haline, shading="auto"
)
# add WA locations
ax.scatter(
wa_locs['DockLongNumber'],
wa_locs['DockLatNumber'],
s = 600*wa_locs['all_scale'],
transform=plain_crs,
color='maroon',
edgecolors='white',
linewidth=1
)
# add BC locations
# ax.scatter(
# bc_locs['Longitude'],
# bc_locs['Latitude'],
# transform=plain_crs,
# color='maroon'
# )
# Colour bar
cbar = plt.colorbar(quad_mesh, ax=ax)
cbar.set_label(f"{salinity.attrs['long_name']} [{salinity.attrs['units']}]")
# Axes title; ax.gridlines() below labels axes tick in a way that makes
# additional axes labels unnecessary IMHO
ax.set_title('Exports of all oil types', fontsize=fs)
# Don't call set_aspect() because plotting on lon/lat grid implicitly makes the aspect ratio correct
# Show grid lines
# Note that ax.grid() has no effect; ax.gridlines() is from cartopy, not matplotlib
ax.gridlines(draw_labels=True, auto_inline=False)
# cartopy doesn't seem to play nice with tight_layout() unless we call canvas.draw() first
fig.canvas.draw()
fig.tight_layout()
```
### Now add spill locations
<p align="center">
<img src="https://github.com/GeostatsGuy/GeostatsPy/blob/master/TCG_color_logo.png?raw=true" width="220" height="240" />
</p>
## Bootstrap-based Hypothesis Testing Demonstration
### Bootstrap and Analytical Methods for Hypothesis Testing, Difference in Means
* we calculate the hypothesis test for the difference in means with the bootstrap and compare to the analytical expression
* **Welch's t-test**: we assume the features are Gaussian distributed and the variances are unequal
#### Michael Pyrcz, Associate Professor, University of Texas at Austin
##### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy)
#### Hypothesis Testing
Powerful methodology for spatial data analytics:
1. we extracted sample sets 1 and 2; the means look different, but are they?
2. should we suspect that the samples are in fact from 2 different populations?
Now, let's try Welch's t-test, the hypothesis test for the difference in means. This test assumes the data are Gaussian distributed; since we use the Welch variant, the variances of the two sample sets need not be equal (see the course notes for more on this). This is our test:
\begin{equation}
H_0: \mu_{X1} = \mu_{X2}
\end{equation}
\begin{equation}
H_1: \mu_{X1} \ne \mu_{X2}
\end{equation}
To test this we will calculate the t statistic with the bootstrap and analytical approaches.
#### The Welch's t-test for Difference in Means by Analytical and Empirical Methods
We work with the following test statistic, *t-statistic*, from the two sample sets.
\begin{equation}
\hat{t} = \frac{\overline{x}_1 - \overline{x}_2}{\sqrt{\frac{s^2_1}{n_1} + \frac{s^2_2}{n_2}}}
\end{equation}
where $\overline{x}_1$ and $\overline{x}_2$ are the sample means, $s^2_1$ and $s^2_2$ are the sample variances and $n_1$ and $n_2$ are the number of samples from the two datasets.
The critical value, $t_{critical}$ is calculated by the analytical expression by:
\begin{equation}
t_{critical} = \left|t(\frac{\alpha}{2},\nu)\right|
\end{equation}
The degrees of freedom, $\nu$, with $\mu = s^2_2 / s^2_1$, is calculated as follows:
\begin{equation}
\nu = \frac{\left(\frac{1}{n_1} + \frac{\mu}{n_2}\right)^2}{\frac{1}{n_1^2(n_1-1)} + \frac{\mu^2}{n_2^2(n_2-1)}}
\end{equation}
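These expressions can be checked numerically (a minimal sketch; the sample sizes, means and standard deviations below are arbitrary illustration values):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(73073)
x1 = rng.normal(loc=10.0, scale=3.0, size=20)   # sample set 1
x2 = rng.normal(loc=12.0, scale=4.0, size=25)   # sample set 2

n1, n2 = len(x1), len(x2)
s1_sq = np.var(x1, ddof=1)                      # sample variances
s2_sq = np.var(x2, ddof=1)

# observed t-statistic, t_hat
t_hat = (np.mean(x1) - np.mean(x2)) / np.sqrt(s1_sq / n1 + s2_sq / n2)

# degrees of freedom, nu, with mu = s2^2 / s1^2 as in the equation above
mu = s2_sq / s1_sq
nu = (1 / n1 + mu / n2) ** 2 / (1 / (n1**2 * (n1 - 1)) + mu**2 / (n2**2 * (n2 - 1)))

# two-sided critical value at significance level alpha
alpha = 0.05
t_critical = abs(stats.t.ppf(alpha / 2, df=nu))

# SciPy's Welch test reproduces the hand-computed statistic
t_scipy, p_scipy = stats.ttest_ind(x1, x2, equal_var=False)
print(t_hat, t_scipy, nu, t_critical)
```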
Alternatively, the sampling distribution of the $t_{statistic}$ and $t_{critical}$ may be calculated empirically with bootstrap.
The workflow proceeds as:
* shift both sample sets to have the mean of the combined data set, $x_1$ → $x^*_1$, $x_2$ → $x^*_2$, this makes the null hypothesis true.
* for each bootstrap realization, $\ell=1\ldots,L$
* perform $n_1$ Monte Carlo simulations, draws with replacement, from sample set $x^*_1$
* perform $n_2$ Monte Carlo simulations, draws with replacement, from sample set $x^*_2$
    * calculate the $t_{statistic}$ realization, $\hat{t}^{\ell}$, given the resulting sample means $\overline{x}^{*,\ell}_1$ and $\overline{x}^{*,\ell}_2$ and the sample variances $s^{*,2,\ell}_1$ and $s^{*,2,\ell}_2$
* pool the results to assemble the $t_{statistic}$ sampling distribution
* calculate the cumulative probability of the observed $t_{statistic}$, $\hat{t}$, from the bootstrap distribution based on $\hat{t}^{\ell}$, $\ell = 1,\ldots,L$.
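The bootstrap workflow above can be sketched compactly (a non-interactive sketch with arbitrary illustration data; the full interactive version follows below):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(73075)
x1 = rng.normal(10.0, 3.0, size=20)
x2 = rng.normal(12.0, 4.0, size=25)

# shift both sample sets to the combined mean so the null hypothesis holds
global_average = np.mean(np.concatenate([x1, x2]))
x1s = x1 - np.mean(x1) + global_average
x2s = x2 - np.mean(x2) + global_average

L = 1000
t_stat = np.empty(L)
for ell in range(L):
    s1 = rng.choice(x1s, size=len(x1s), replace=True)  # draw with replacement
    s2 = rng.choice(x2s, size=len(x2s), replace=True)
    t_stat[ell], _ = stats.ttest_ind(s1, s2, equal_var=False)

# empirical critical interval at alpha = 0.05 and the test decision
lower, upper = np.percentile(t_stat, [2.5, 97.5])
t_observed, _ = stats.ttest_ind(x1, x2, equal_var=False)
reject = (t_observed < lower) or (t_observed > upper)
print(lower, upper, t_observed, reject)
```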
Here's some prerequisite information on the bootstrap.
#### Bootstrap
Bootstrap is a method to assess the uncertainty in a sample statistic by repeated random sampling with replacement.
Assumptions
* sufficient, representative sampling; identical, independent samples
Limitations
1. assumes the samples are representative
2. assumes stationarity
3. only accounts for uncertainty due to too few samples, e.g. no uncertainty due to changes away from data
4. does not account for boundary of area of interest
5. assumes the samples are independent
6. does not account for other local information sources
The Bootstrap Approach (Efron, 1982)
Statistical resampling procedure to calculate uncertainty in a calculated statistic from the data itself.
* Does this work? Prove it to yourself; for uncertainty in the mean, the solution is the standard error:
\begin{equation}
\sigma^2_\overline{x} = \frac{\sigma^2_s}{n}
\end{equation}
Extremely powerful - could calculate uncertainty in any statistic! e.g. P13, skew etc.
* It would not be possible to access general uncertainty in any statistic without the bootstrap.
* Advanced forms account for spatial information and sampling strategy (game theory and Journel's spatial bootstrap, 1993).
Steps:
1. assemble a sample set, must be representative, reasonable to assume independence between samples
2. optional: build a cumulative distribution function (CDF)
* may account for declustering weights, tail extrapolation
* could use analogous data to support
3. For $\ell = 1, \ldots, L$ realizations, do the following:
    * For $i = 1, \ldots, n$ data, do the following:
* Draw a random sample with replacement from the sample set or Monte Carlo simulate from the CDF (if available).
4. Calculate a realization of the summary statistic of interest from the $n$ samples, e.g. $m^\ell$, $\sigma^2_{\ell}$. Return to 3 for another realization.
5. Compile and summarize the $L$ realizations of the statistic of interest.
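The steps above can be sketched for the simplest case, uncertainty in the sample mean, and checked against the standard error formula $\sigma^2_{\overline{x}} = \sigma^2_s / n$ (a minimal sketch with arbitrary illustration data):

```python
import numpy as np

rng = np.random.default_rng(73073)
samples = rng.normal(loc=10.0, scale=3.0, size=200)  # representative sample set
n, L = len(samples), 5000

# L bootstrap realizations of the mean: draw n values with replacement each time
means = np.array([np.mean(rng.choice(samples, size=n, replace=True))
                  for _ in range(L)])

bootstrap_se = np.std(means)                           # empirical standard error
analytical_se = np.std(samples, ddof=1) / np.sqrt(n)   # sigma_s / sqrt(n)
print(bootstrap_se, analytical_se)  # the two estimates should be close
```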
This is a very powerful method. Let's try it out and compare the result to the analytical form of the confidence interval for the sample mean.
#### Objective
Provide an example and demonstration for:
1. interactive plotting in Jupyter Notebooks with Python packages matplotlib and ipywidgets
2. provide an intuitive hands-on example of confidence intervals and compare to the statistical bootstrap
#### Getting Started
Here's the steps to get setup in Python with the GeostatsPy package:
1. Install Anaconda 3 on your machine (https://www.anaconda.com/download/).
2. Open Jupyter and in the top block get started by copy and pasting the code block below from this Jupyter Notebook to start using the geostatspy functionality.
#### Load the Required Libraries
The following code loads the required libraries.
```
%matplotlib inline
from ipywidgets import interactive # widgets and interactivity
from ipywidgets import widgets
from ipywidgets import Layout
from ipywidgets import Label
from ipywidgets import VBox, HBox
import matplotlib.pyplot as plt # plotting
import numpy as np # working with arrays
import pandas as pd # working with DataFrames
from scipy import stats # statistical calculations
import random # random drawing / bootstrap realizations of the data
```
#### Make a Synthetic Dataset
This is an interactive method to:
* select a parametric distribution
* select the distribution parameters
* select the number of samples and visualize the synthetic dataset distribution
```
# interactive calculation of the sample set (control of source parametric distribution and number of samples)
l = widgets.Text(value=' Interactive Hypothesis Testing, Difference in Means, Analytical & Bootstrap Methods, Michael Pyrcz, Associate Professor, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))
n1 = widgets.IntSlider(min=0, max = 100, value = 10, step = 1, description = '$n_{1}$',orientation='horizontal',layout=Layout(width='300px', height='30px'))
n1.style.handle_color = 'red'
m1 = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$\overline{x}_{1}$',orientation='horizontal',layout=Layout(width='300px', height='30px'))
m1.style.handle_color = 'red'
s1 = widgets.FloatSlider(min=0, max = 10, value = 3, step = 0.25, description = '$s_1$',orientation='horizontal',layout=Layout(width='300px', height='30px'))
s1.style.handle_color = 'red'
ui1 = widgets.VBox([n1,m1,s1],) # basic widget formatting
n2 = widgets.IntSlider(min=0, max = 100, value = 10, step = 1, description = '$n_{2}$',orientation='horizontal',layout=Layout(width='300px', height='30px'))
n2.style.handle_color = 'yellow'
m2 = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$\overline{x}_{2}$',orientation='horizontal',layout=Layout(width='300px', height='30px'))
m2.style.handle_color = 'yellow'
s2 = widgets.FloatSlider(min=0, max = 10, value = 3, step = 0.25, description = '$s_2$',orientation='horizontal',layout=Layout(width='300px', height='30px'))
s2.style.handle_color = 'yellow'
ui2 = widgets.VBox([n2,m2,s2],) # basic widget formatting
L = widgets.IntSlider(min=10, max = 1000, value = 100, step = 1, description = '$L$',orientation='horizontal',layout=Layout(width='300px', height='30px'))
L.style.handle_color = 'gray'
alpha = widgets.FloatSlider(min=0, max = 50, value = 3, step = 1.0, description = '$α$',orientation='horizontal',layout=Layout(width='300px', height='30px'))
alpha.style.handle_color = 'gray'
ui3 = widgets.VBox([L,alpha],) # basic widget formatting
ui4 = widgets.HBox([ui1,ui2,ui3],) # basic widget formatting
ui2 = widgets.VBox([l,ui4],)
def f_make(n1, m1, s1, n2, m2, s2, L, alpha): # function to take parameters, make sample and plot
np.random.seed(73073)
x1 = np.random.normal(loc=m1,scale=s1,size=n1)
np.random.seed(73074)
x2 = np.random.normal(loc=m2,scale=s2,size=n2)
mu = (s2*s2)/(s1*s1)
nu = ((1/n1 + mu/n2)*(1/n1 + mu/n2))/(1/(n1*n1*(n1-1)) + ((mu*mu)/(n2*n2*(n2-1))))
prop_values = np.linspace(-8.0,8.0,100)
analytical_distribution = stats.t.pdf(prop_values,df = nu)
analytical_tcrit = stats.t.ppf(1.0-alpha*0.005,df = nu)
# Analytical Method with SciPy
t_stat_observed, p_value_analytical = stats.ttest_ind(x1,x2,equal_var=False)
# Bootstrap Method
    global_average = np.average(np.concatenate([x1,x2])) # shift the means to be equal to the global mean
x1s = x1 - np.average(x1) + global_average
x2s = x2 - np.average(x2) + global_average
t_stat = np.zeros(L); p_value = np.zeros(L)
random.seed(73075)
for l in range(0, L): # loop over realizations
samples1 = random.choices(x1s, weights=None, cum_weights=None, k=len(x1s))
#print(samples1)
samples2 = random.choices(x2s, weights=None, cum_weights=None, k=len(x2s))
#print(samples2)
t_stat[l], p_value[l] = stats.ttest_ind(samples1,samples2,equal_var=False)
bootstrap_lower = np.percentile(t_stat,alpha * 0.5)
bootstrap_upper = np.percentile(t_stat,100.0 - alpha * 0.5)
plt.subplot(121)
#print(t_stat)
plt.hist(x1,cumulative = False, density = True, alpha=0.4,color="red",edgecolor="black", bins = np.linspace(0,50,50), label = '$x_1$')
plt.hist(x2,cumulative = False, density = True, alpha=0.4,color="yellow",edgecolor="black", bins = np.linspace(0,50,50), label = '$x_2$')
plt.ylim([0,0.4]); plt.xlim([0.0,30.0])
plt.title('Sample Distributions'); plt.xlabel('Value'); plt.ylabel('Density')
plt.legend()
#plt.hist(x2)
plt.subplot(122)
plt.ylim([0,0.6]); plt.xlim([-8.0,8.0])
plt.title('Bootstrap and Analytical $t_{statistic}$ Sampling Distributions'); plt.xlabel('$t_{statistic}$'); plt.ylabel('Density')
plt.plot([t_stat_observed,t_stat_observed],[0.0,0.6],color = 'black',label='observed $t_{statistic}$')
plt.plot([bootstrap_lower,bootstrap_lower],[0.0,0.6],color = 'blue',linestyle='dashed',label = 'bootstrap interval')
plt.plot([bootstrap_upper,bootstrap_upper],[0.0,0.6],color = 'blue',linestyle='dashed')
plt.plot(prop_values,analytical_distribution, color = 'red',label='analytical $t_{statistic}$')
plt.hist(t_stat,cumulative = False, density = True, alpha=0.2,color="blue",edgecolor="black", bins = np.linspace(-8.0,8.0,50), label = 'bootstrap $t_{statistic}$')
plt.fill_between(prop_values, 0, analytical_distribution, where = prop_values <= -1*analytical_tcrit, facecolor='red', interpolate=True, alpha = 0.2)
plt.fill_between(prop_values, 0, analytical_distribution, where = prop_values >= analytical_tcrit, facecolor='red', interpolate=True, alpha = 0.2)
ax = plt.gca()
handles,labels = ax.get_legend_handles_labels()
handles = [handles[0], handles[2], handles[3], handles[1]]
labels = [labels[0], labels[2], labels[3], labels[1]]
plt.legend(handles,labels,loc=1)
plt.subplots_adjust(left=0.0, bottom=0.0, right=2.0, top=1.2, wspace=0.2, hspace=0.2)
plt.show()
# connect the function to make the samples and plot to the widgets
interactive_plot = widgets.interactive_output(f_make, {'n1': n1, 'm1': m1, 's1': s1, 'n2': n2, 'm2': m2, 's2': s2, 'L': L, 'alpha': alpha})
interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating
```
### Bootstrap and Analytical Methods for Hypothesis Testing, Difference in Means
* including the analytical and bootstrap methods for testing the difference in means
* interactive plot demonstration with ipywidget, matplotlib packages
### The Problem
Let's interactively explore the analytical and bootstrap hypothesis tests for the difference in means, with the following controls:
* **$n_1$**, **$n_2$** number of samples, **$\overline{x}_1$**, **$\overline{x}_2$** means and **$s_1$**, **$s_2$** standard deviation of the 2 sample sets
* **$L$**: number of bootstrap realizations
* **$\alpha$**: alpha level
```
display(ui2, interactive_plot) # display the interactive plot
```
#### Observations
Some observations:
* lower dispersion and higher difference in means increases the absolute magnitude of the observed $t_{statistic}$
* the bootstrap distribution closely matches the analytical distribution if $L$ is large enough
* it is possible to use the bootstrap to calculate the sampling distribution instead of relying on the theoretical distribution, in this case the Student's t-distribution.
#### Comments
This was a demonstration of interactive hypothesis testing for the significance of the difference in means observed between 2 sample sets, in a Jupyter Notebook with the Python ipywidgets and matplotlib packages.
I have many other demonstrations on data analytics and machine learning, e.g. on the basics of working with DataFrames, ndarrays, univariate statistics, plotting data, declustering, data transformations, trend modeling and many other workflows available at https://github.com/GeostatsGuy/PythonNumericalDemos and https://github.com/GeostatsGuy/GeostatsPy.
I hope this was helpful,
*Michael*
#### The Author:
### Michael Pyrcz, Associate Professor, University of Texas at Austin
*Novel Data Analytics, Geostatistics and Machine Learning Subsurface Solutions*
With over 17 years of experience in subsurface consulting, research and development, Michael has returned to academia driven by his passion for teaching and enthusiasm for enhancing engineers' and geoscientists' impact in subsurface resource development.
For more about Michael check out these links:
#### [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
#### Want to Work Together?
I hope this content is helpful to those that want to learn more about subsurface modeling, data analytics and machine learning. Students and working professionals are welcome to participate.
* Want to invite me to visit your company for training, mentoring, project review, workflow design and / or consulting? I'd be happy to drop by and work with you!
* Interested in partnering, supporting my graduate student research or my Subsurface Data Analytics and Machine Learning consortium (co-PIs including Profs. Foster, Torres-Verdin and van Oort)? My research combines data analytics, stochastic modeling and machine learning theory with practice to develop novel methods and workflows to add value. We are solving challenging subsurface problems!
* I can be reached at mpyrcz@austin.utexas.edu.
I'm always happy to discuss,
*Michael*
Michael Pyrcz, Ph.D., P.Eng. Associate Professor The Hildebrand Department of Petroleum and Geosystems Engineering, Bureau of Economic Geology, The Jackson School of Geosciences, The University of Texas at Austin
#### More Resources Available at: [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1)
# Individual Project
## Barber review in Gliwice
### Wojciech Pragłowski
#### Data scraped from [booksy.](https://booksy.com/pl-pl/s/barber-shop/12795_gliwice)
```
import requests
from bs4 import BeautifulSoup
booksy = requests.get("https://booksy.com/pl-pl/s/barber-shop/12795_gliwice")
soup = BeautifulSoup(booksy.content, 'html.parser')
barber = soup.find_all('h2')
text_barber = [i.get_text() for i in barber]
barber_names = [i.strip() for i in text_barber]
barber_names.pop(-1)
rate = soup.find_all('div', attrs={'data-testid':'rank-average'})
text_rate = [i.get_text() for i in rate]
barber_rate = [i.strip() for i in text_rate]
opinions = soup.find_all('div', attrs={'data-testid':'rank-label'})
text_opinions = [i.get_text() for i in opinions]
replace_opinions = [i.replace('opinii', '') for i in text_opinions]
replace_opinions2 = [i.replace('opinie', '') for i in replace_opinions]
strip_opinions = [i.strip() for i in replace_opinions2]
barber_opinions = [int(i) for i in strip_opinions]
prices = soup.find_all('div', attrs={'data-testid':'service-price'})
text_prices = [i.get_text() for i in prices]
replace_prices = [i.replace('zł', '') for i in text_prices]
replace_prices2 = [i.replace(',', '.') for i in replace_prices]
replace_prices3 = [i.replace('+', '') for i in replace_prices2]
replace_null = [i.replace('Bezpłatna', '0') for i in replace_prices3]
replace_space = [i.replace(' ', '') for i in replace_null]
strip_prices = [i.strip() for i in replace_space]
barber_prices = [float(i) for i in strip_prices]
import pandas as pd
barbers = pd.DataFrame(barber_names, columns=["Barber's name"])
barbers["Barber's rate"] = barber_rate
barbers["Barber's opinions"] = barber_opinions
barbers
```
#### I want to find those barbers who have more than 500 reviews
```
# find credible barbers, i.e. those with more than 500 opinions
best_opinions = [i for i in barber_opinions if i > 500]
# use enumerate: list.index() would return only the first position for duplicated counts
best_indexes = [i for i, amount in enumerate(barber_opinions) if amount > 500]
best_barbers = [barber_names[i] for i in best_indexes]
best_rates = [barber_rate[i] for i in best_indexes]
```
#### On the page there are 3 basic prices for one Barber, so I'm combining them
```
# combine the 3 prices for each barber
combined_prices = [barber_prices[i:i+3] for i in range(0, len(barber_prices), 3)]
best_prices = [combined_prices[i] for i in best_indexes]
print(best_prices)
avg = [sum(i)/len(i) for i in best_prices]
avg_price = [round(i,2) for i in avg]
avg_price
df_best_barber = pd.DataFrame(best_barbers, columns=["Barber's name"])
df_best_barber["Amount of opinions"] = best_opinions
df_best_barber["Barber's rate"] = best_rates
df_best_barber["Average Barber's prices"] = avg_price
df_best_barber
import matplotlib.pyplot as plt
plt.style.use('ggplot')
x = ['POPE', 'Matt', 'Sick', 'Freak', 'WILKOSZ', 'Wojnar', 'Trendy']
y = avg_price
x_pos = [i for i, _ in enumerate(x)]
plt.bar(x_pos, y)
plt.ylabel("Barbers' prices")
plt.title("Barbers in Gliwice & their average prices")
plt.xticks(x_pos, x)
plt.show()
import seaborn as sns
sns.relplot(x = "Average Barber's prices", y = "Amount of opinions", hue="Barber's name", data = df_best_barber)
```
```
from sklearn.datasets import make_blobs
from sklearn.svm import SVC
import matplotlib.pyplot as plt
import numpy as np
X,y=make_blobs(n_samples=50,centers=2,random_state=0,cluster_std=0.6)
plt.scatter(X[:,0],X[:,1],c=y,s=50,cmap='rainbow')
plt.show()
print(X.shape)
# print(X)
clf=SVC(kernel="linear").fit(X,y)
# plot_svc_decision_function(clf)
from sklearn.svm import SVC  # import the SVC (Support Vector Classifier) class
model = SVC(kernel='linear', C=1E10)  # linear kernel with a large C (hard margin)
model.fit(X, y)
def plot_svc_decision_function(model, ax=None, plot_support=True):
"""画2维SVC的决策函数(分离超平面)"""
if ax is None:
ax = plt.gca() #plt.gca()获得当前的Axes对象ax
xlim = ax.get_xlim() #Return the x-axis view limits返回x轴视图限制。
ylim = ax.get_ylim()
# 创建评估模型的网格
x = np.linspace(xlim[0], xlim[1], 30)
y = np.linspace(ylim[0], ylim[1], 30)
Y, X = np.meshgrid(y, x) # 将两个一维数组变为二维矩阵 返回一个行乘和列乘的二维数组
xy = np.vstack([X.ravel(), Y.ravel()]).T # np.vstack()沿着竖直方向将矩阵堆叠起来。
P = model.decision_function(xy).reshape(X.shape)
# 画决策边界和边界
ax.contour(X, Y, P, colors='k',
levels=[-1, 0, 1], alpha=0.5,
linestyles=['--', '-', '--'])
if plot_support:
ax.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=200, linewidth=1,c='k',alpha=0.4)
ax.set_xlim(xlim)
ax.set_ylim(ylim)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model)
from sklearn.datasets import make_circles
from sklearn.svm import SVC
import matplotlib.pyplot as plt
import numpy as np
X,y=make_circles(100,factor=0.1,noise=0.2)
plt.scatter(X[:,0],X[:,1],c=y,s=50,cmap='rainbow')
plt.show()
print(X.shape)
model=SVC(kernel='rbf')  # kernel name must be lowercase ('rbf', not 'RBF')
model.fit(X,y)
# plot_svc_decision_function(model)
# print(X)
```
# Systems of Nonlinear Equations
## CH EN 2450 - Numerical Methods
**Prof. Tony Saad (<a>www.tsaad.net</a>) <br/>Department of Chemical Engineering <br/>University of Utah**
<hr/>
# Example 1
A system of nonlinear equations consists of several nonlinear functions - as many as there are unknowns. Solving a system of nonlinear equations means finding the points where the functions intersect each other. Consider for example the following system of equations
\begin{equation}
y = 4x - 0.5 x^3
\end{equation}
\begin{equation}
y = \sin(x)e^{-x}
\end{equation}
The first step is to write these in residual form
\begin{equation}
f_1 = y - 4x + 0.5 x^3,\\
f_2 = y - \sin(x)e^{-x}
\end{equation}
```
import numpy as np
from numpy import cos, sin, pi, exp
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
from scipy.optimize import fsolve
y1 = lambda x: 4 * x - 0.5 * x**3
y2 = lambda x: sin(x)*exp(-x)
x = np.linspace(-3.5,4,100)
plt.ylim(-8,6)
plt.plot(x,y1(x), 'k')
plt.plot(x,y2(x), 'r')
plt.grid()
plt.savefig('example1.pdf')
def F(xval):
x = xval[0] # let the first value in xval denote x
y = xval[1] # let the second value in xval denote y
f1 = y - 4.0*x + 0.5*x**3 # define f1
f2 = y - sin(x)*exp(-x) # define f2
return np.array([f1,f2]) # must return an array
def J(xval):
x = xval[0]
y = xval[1]
return np.array([[1.5*x**2 - 4.0 , 1.0 ],
[-cos(x)*exp(-x) + sin(x)*exp(-x) , 1.0]]) # Jacobian matrix J = [[df1/dx, df1/dy], [df2/dx,df2/dy]]
guess = np.array([1,3])
F(guess)
J(guess)
def newton_solver(F, J, x, tol): # x is nothing more than your initial guess
F_value = F(x)
err = np.linalg.norm(F_value, ord=2) # l2 norm of vector
# err = tol + 100
niter = 0
while abs(err) > tol and niter < 100:
J_value = J(x)
delta = np.linalg.solve(J_value, - F_value)
x = x + delta # update the solution
F_value = F(x) # compute new values for vector of residual functions
err = np.linalg.norm(F_value, ord=2) # compute error norm (absolute error)
niter += 1
# Here, either a solution is found, or too many iterations
if abs(err) > tol:
niter = -1
print('No Solution Found!!!!!!!!!')
return x, niter, err
```
Try to find the root near $[-2,-4]$
```
tol = 1e-8
xguess = np.array([-3,0])
roots, n, err = newton_solver(F,J,xguess,tol)
print ('# of iterations', n, 'roots:', roots)
print ('Error Norm =',err)
F(roots)
```
Use SciPy's `fsolve` routine
```
fsolve(F,xguess)
```
# Example 2
Find the roots of the following system of equations
\begin{equation}
x^2 + y^2 = 1, \\
y = x^3 - x + 1
\end{equation}
First we assign $x_1 \equiv x$ and $x_2 \equiv y$ and rewrite the system in residual form
\begin{equation}
f_1(x_1,x_2) = x_1^2 + x_2^2 - 1, \\
f_2(x_1,x_2) = x_1^3 - x_1 - x_2 + 1
\end{equation}
```
x = np.linspace(-1,1)
y1 = lambda x: x**3 - x + 1
y2 = lambda x: np.sqrt(1 - x**2)
plt.plot(x,y1(x), 'k')
plt.plot(x,y2(x), 'r')
plt.grid()
def F(xval):
    x1 = xval[0]  # let the first value denote x_1
    x2 = xval[1]  # let the second value denote x_2
    f1 = x1**2 + x2**2 - 1.0     # f1 = x1^2 + x2^2 - 1
    f2 = x1**3 - x1 - x2 + 1.0   # f2 = x1^3 - x1 - x2 + 1
    return np.array([f1, f2])
def J(xval):
    x1 = xval[0]
    # Jacobian matrix J = [[df1/dx1, df1/dx2], [df2/dx1, df2/dx2]]
    return np.array([[2.0*x1, 2.0*xval[1]],
                     [3.0*x1**2 - 1.0, -1.0]])
tol = 1e-8
xguess = np.array([0.5,0.5])
x, n, err = newton_solver(F, J, xguess, tol)
print (n, x)
print ('Error Norm =',err)
fsolve(F,(0.5,0.5))
import urllib
import requests
from IPython.core.display import HTML
def css_styling():
styles = requests.get("https://raw.githubusercontent.com/saadtony/NumericalMethods/master/styles/custom.css")
return HTML(styles.text)
css_styling()
```
[Table of contents](../toc.ipynb)
# Deep Learning
This notebook is a contribution of Dr.-Ing. Mauricio Fernández.
Institution: Technical University of Darmstadt, Cyber-Physical Simulation Group.
Email: fernandez@cps.tu-darmstadt.de, mauricio.fernandez.lb@gmail.com
Profiles
- [TU Darmstadt](https://www.maschinenbau.tu-darmstadt.de/cps/department_cps/team_1/team_detail_184000.en.jsp)
- [Google Scholar](https://scholar.google.com/citations?user=pwQ_YNEAAAAJ&hl=de)
- [GitHub](https://github.com/mauricio-fernandez-l)
## Contents of this lecture
[1. Short overview of artificial intelligence](#1.-Short-overview-of-artificial-intelligence-(AI))
[2. Introduction to artificial neural networks](#2.-Introduction-to-artificial-neural-networks-(ANN))
[3. How to build a basic tf.keras model](#3.-How-to-build-a-basic-tf.keras-model)
[4. Regression problem](#4.-Regression-problem)
[5. Classification problem](#5.-Classification-problem)
[6. Summary of this lecture](#6.-Summary-of-this-lecture)
## 1. Short overview of artificial intelligence (AI)
Some definitions in the web:
- the theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
- study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals
<img src="https://images.theconversation.com/files/168081/original/file-20170505-21003-zbguhy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=926&fit=clip" alt="neural netowork" width="300" align="right">
Computational methods in AI:
- Data mining
- Machine learning
- Artificial neural networks
- Single layer learning
- **Deep learning (DL)**
- Kernel methods (SVM,...)
- Decision trees
- ...
- ...
## Why DL?
Pros:
- Enormous flexibility due to the high number of parameters
- Capability to represent complex functions
- Huge range of applications (visual perception, decision-making, ...) in industry and research
- Open high-performance software (TensorFlow, Keras, PyTorch, Scikit-learn,...)
<img src="https://s3.amazonaws.com/keras.io/img/keras-logo-2018-large-1200.png" alt="Keras" width="200" align="right">
<img src="https://www.tensorflow.org/images/tf_logo_social.png" alt="TensoFlow" width="200" align="right">
Cons:
- Difficult to train (vanishing gradient,...)
- High number of internal parameters
## 2. Introduction to artificial neural networks (ANN)
Needed modules
```
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits import mplot3d
from matplotlib.image import imread
import os
```
## Neuron model
**Neuron:** single unit cell processing incoming electric signals (input)
<img src="https://images.theconversation.com/files/168081/original/file-20170505-21003-zbguhy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=926&fit=clip" alt="neural netowork" width="200" align="left">
<img src="https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Keras+Python+Tutorial/content_content_neuron.png" alt="neuron" width="600" align="center">
**Mathematical model:** input $x$ with output $y$ and internal parameters $w$ (weight), $b$ (bias) and activation function $a(z)$
$$
\hat{y} = a(wx +b)
$$
**Example:**
$$
\hat{y} = \tanh(0.3x - 3)
$$
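The example above can be evaluated directly; a minimal sketch (the weight 0.3 and bias -3 come from the example):

```
import numpy as np

# Single-neuron model y_hat = a(w*x + b) with a = tanh, w = 0.3, b = -3
w, b = 0.3, -3.0

def neuron(x):
    return np.tanh(w*x + b)

print(neuron(10.0))  # z = 0.3*10 - 3 = 0, so tanh(0) = 0.0
```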
## Activation functions
$$ a(z) = a(w x + b) $$
```
z = np.linspace(-3, 3, 100)
plt.figure()
plt.plot(z, tf.nn.relu(z), label='relu')
plt.plot(z, tf.nn.softplus(z), label='softplus')
plt.plot(z, tf.nn.tanh(z), label='tanh')
plt.plot(z, tf.nn.sigmoid(z), label='sigmoid')
plt.xlabel('$z$')
plt.ylabel('$a(z)$')
plt.legend()
```
## ANN architecture
**Example 1:** Two one-dimensional layers
$$
\hat{y} = a^{(2)}(w^{(2)}a^{(1)}(w^{(1)}x+b^{(1)})+b^{(2)})
$$
**Example 2:** Network for 2D input and 1D output with one hidden layer (3 neurons) and identity final activation
$$
\hat{y} =
\begin{pmatrix}
w^{(2)}_1 & w^{(2)}_2 & w^{(2)}_3
\end{pmatrix}
a^{(1)}
\left(
\begin{pmatrix}
w^{(1)}_{11} & w^{(1)}_{12} \\
w^{(1)}_{21} & w^{(1)}_{22} \\
w^{(1)}_{31} & w^{(1)}_{32} \\
\end{pmatrix}
\begin{pmatrix}
x_1 \\ x_2
\end{pmatrix}
+
\begin{pmatrix}
b^{(1)}_1 \\ b^{(1)}_2 \\ b^{(1)}_3
\end{pmatrix}
\right)
+
b^{(2)}
$$
<img src="deepl_files/network1.png" alt="network1" width="300" align="center">
[Draw networks](http://alexlenail.me/NN-SVG/index.html)
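The forward pass of Example 2 can be sketched in plain NumPy; the weight and bias values below are arbitrary illustration values, and ReLU is assumed as the hidden activation:

```
import numpy as np

# Hidden layer: 3 neurons, 2 inputs; output layer: 1 neuron, identity activation
W1 = np.array([[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]])  # shape (3, 2)
b1 = np.array([0.0, 0.1, 0.2])                       # shape (3,)
W2 = np.array([1.0, -1.0, 0.5])                      # shape (3,)
b2 = 0.3

def forward(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # a^{(1)} = ReLU
    return W2 @ h + b2                # identity final activation

print(forward(np.array([1.0, 2.0])))
```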
## Deep networks
Lots of layers
<img src="deepl_files/network2.png" alt="network2" width="600" align="left">
<img src="https://images.theconversation.com/files/168081/original/file-20170505-21003-zbguhy.jpg?ixlib=rb-1.1.0&q=45&auto=format&w=926&fit=clip" alt="neural network" width="200" align="right">
## Training an ANN
For input vector $x \in \mathbb{R}^3$ consider the network $\hat{y}(x) \in \mathbb{R}^2$
<img src="deepl_files/network2.png" alt="network2" width="400" align="center">
for the approximation of a vector function $y(x) \in \mathbb{R}^2$. After fixing the architecture of the network (number of layers, number of neurons and activation functions), the remaining parameters (weights and biases) need calibration. This is achieved in **supervised learning** through the minimization of an objective function (referred to as the **loss**) for a provided dataset $D$ with $N$ data pairs
$$
D = \{(x^{(1)},y^{(1)}),(x^{(2)},y^{(2)}),\dots,(x^{(N)},y^{(N)})\}
$$
which the ANN $\hat{y}(x)$ is required to approximate.
Example loss: mean squared error (MSE) $e(y,\hat{y})$ for each data pair averaged over the complete dataset
$$
L = \frac{1}{N} \sum_{i=1}^N e(y^{(i)}, \hat{y}^{(i)})
\ , \quad
e(y,\hat{y}) = \frac{1}{2}\sum_{j=1}^2(y_j-\hat{y}_j)^2
$$
The calibration of weights and biases based on the minimization of the loss for given data is referred to as **training**.
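As a small sketch of this loss (the data pairs below are invented), the MSE can be computed directly in NumPy:

```
import numpy as np

y     = np.array([[1.0, 2.0], [3.0, 4.0]])  # targets y^{(i)}, N = 2 pairs
y_hat = np.array([[1.0, 1.0], [2.0, 4.0]])  # model predictions

# e(y, y_hat) = 1/2 * sum_j (y_j - y_hat_j)^2 per pair, averaged over N
errors = 0.5 * np.sum((y - y_hat) ** 2, axis=1)
L = np.mean(errors)
print(L)  # (0.5*1 + 0.5*1) / 2 = 0.5
```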
## Standard problems
**Regression:** fit a model $\hat{y}(x)$ to approximate a function $y(x)$.
* $x = 14.5$
* $y(x) = 3\sin(14.5)+10 = 12.8...$
* $\hat{y}(x) = 11.3...$
**Classification:** fit a model $\hat{y}(x)$ predicting that $x$ belongs to one of $C$ classes.
* $C=4$ classes $\{$cat,dog,horse,pig$\}$
* $x =$ image of a horse
* $y(x) = (0,0,1,0)$ (third class = horse)
* $\hat{y}(x) = (0.1,0.2,0.4,0.3)$ (class probabilities - model predicts for the third class the highest probability)
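The predicted class is simply the index of the largest predicted probability; a minimal sketch with the numbers from the example:

```
import numpy as np

classes = ['cat', 'dog', 'horse', 'pig']
y_hat = np.array([0.1, 0.2, 0.4, 0.3])  # model class probabilities
print(classes[np.argmax(y_hat)])        # -> horse
```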
## 3. How to build a basic tf.keras model
<img src="deepl_files/network1.png" alt="network1" width="300" align="center">
```
# Create sequential model
model = tf.keras.Sequential([
tf.keras.layers.Dense(3, input_shape=[2], activation='relu')
# 3x2 weights and 3 biases = 9 parameters
,tf.keras.layers.Dense(1)
# 1x3 weights and 1 bias = 4 parameters
])
# Model summary
model.summary()
```
<img src="deepl_files/network1.png" alt="network1" width="300" align="center">
```
# List of 3 points to be evaluated
xs = np.array([
[0, 0], [0, np.pi], [np.pi, np.pi]
])
# Prediction / model evaluation
ys_model = model.predict(xs)
print(ys_model)
```
$$ y = 3 \sin(x_1 + x_2) + 10 $$
```
# Data of function to be approximated (e.g., from measurements or simulations)
ys = 3*np.sin(np.sum(xs, axis=1, keepdims=True))+10
# Compile model: choose optimizer and loss
model.compile(optimizer='adam', loss='mse')
# Train
model.fit(xs, ys, epochs=100, verbose=0)
# Predict after training
ys_model = model.predict(xs)
print(xs)
print(ys)
print(ys_model)
```
## 4. Regression problem
Approximate the function
$$
y(x_1,x_2) = 3 \sin(x_1+x_2)+10
$$
## Exercise: train an ANN
<img src="../_static/exercise.png" alt="Exercise" width="75" align="left">
Create training data and train an ANN
* For $(x_1,x_2) \in [0,\pi] \times [0,2\pi]$ generate a grid with 20 points in each direction.
* Evaluate the function $y(x_1,x_2) = 3 \sin(x_1+x_2) + 10$ for the generated points.
* Build a tf.keras model with two hidden layers of 16 and 8 neurons. Use the ReLU activation function.
* Plot the data and the model output at its initialization.
* Train the model based on the MSE.
* Plot the data and the model output after training for 500 epochs.
## Solution
Please find one possible solution in the [`regression.py`](./deepl_files/regression.py) file.
## 5. Classification problem
Build a classifier $\hat{y}(x)$ for distinguishing between the following examples.
**Question**
How could this be useful in automotive engineering?
<img src="deepl_files/data/3_1.png" alt="triangle 1" width="100" align="left">
<img src="deepl_files/data/3_3.png" alt="triangle 2" width="100" align="left">
<img src="deepl_files/data/4_2.png" alt="triangle 1" width="100" align="left">
<img src="deepl_files/data/4_4.png" alt="triangle 2" width="100" align="left">
<img src="deepl_files/data/16_2.png" alt="triangle 1" width="100" align="left">
<img src="deepl_files/data/16_4.png" alt="triangle 2" width="100" align="left">
**Autonomous driving**: recognition of street signs
<img src="https://w.grube.de/media/image/7b/5f/63/art_78-101_1.jpg" alt="street sign" width="200">
<img src="https://assets.tooler.de/media/catalog/product/b/g/bgk106363_8306628.jpg" alt="street sign" width="200">
Very good advanced tutorial: https://www.pyimagesearch.com/2019/11/04/traffic-sign-classification-with-keras-and-deep-learning/
## Image classification
What is an image in terms of data?
```
# This if else is a fix to make the file available for Jupyter and Travis CI
if os.path.isfile('deepl_files/data/3_1.png'):
file = 'deepl_files/data/3_1.png'
else:
file = '04_mini-projects/deepl_files/data/3_1.png'
# Load image as np.array
image = plt.imread(file)
print(type(image))
print(image.shape)
plt.imshow(image)
# Image shape
print(image.shape)
print(image[0, 0, :])
# Flatten
a = np.array([[1, 2, 3], [4, 5, 6]])
a_fl = a.flatten()
print(a_fl)
# Flatten image
image_fl = image.flatten()
print(image_fl.shape)
```
## Classification - formulation of optimization problem
Encode an image in a vector $x = (x_1,\dots,x_n) \in \mathbb{R}^n$. Every image $x^{(i)}, i=1,\dots,N$ belongs to one of $C$ prescribed classes. Denote the unknown classification function $y:\mathbb{R}^n \mapsto [0,1]^C$, e.g., $C=3$, $y(x^{(1)}) = (1,0,0), y(x^{(2)}) = (0,0,1), y(x^{(3)}) = (0,1,0)$.
Assume a model $\hat{y}(x)$ is given but requires calibration. For given labeled images, the [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) $e(p,\hat{p})$ (exact and model class probabilities $p$ and $\hat{p}$, respectively) as loss function
$$
L
=
\frac{1}{N}
\sum_{i=1}^{N}
e(y(x^{(i)}),\hat{y}(x^{(i)}))
\ , \quad
e(p,\hat{p})
=
-
\sum_{j=1}^{C}
p_j \log(\hat{p}_j)
$$
is best suited for classification problems.
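A minimal sketch of this loss for a single one-hot label, using the probabilities from the earlier classification example; the small epsilon guarding against $\log(0)$ is our addition, not part of the formula:

```
import numpy as np

p     = np.array([0.0, 0.0, 1.0, 0.0])  # exact one-hot label (horse)
p_hat = np.array([0.1, 0.2, 0.4, 0.3])  # model class probabilities

eps = 1e-12  # avoids log(0) for the zero entries
e = -np.sum(p * np.log(p_hat + eps))
print(e)  # only the true-class term survives: -log(0.4) ~ 0.916
```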
## Exercise: train an image classifier
<img src="../_static/exercise.png" alt="Exercise" width="75" align="left">
Train an image classifier for given images
* Load the images from the folder [deepl_files/data](./deepl_files/data) and [deepl_files/data_test](./deepl_files/data_test)
* Create a tf.keras model with input image and output class probability
* Train the model with the cross entropy for 10 epochs
* Test the trained model on the test data
## Solution
Please find one possible solution in the [`classification.py`](./deepl_files/classification.py) file.
## 6. Summary of this lecture
ANN
<img src="deepl_files/network2.png" alt="network2" width="700" align="center">
Standard problems
<img src="deepl_files/reg_start.png" alt="network2" width="200" align="left">
<img src="deepl_files/reg_trained.png" alt="network2" width="200" align="left">
<img src="deepl_files/data/3_1.png" alt="network2" width="100" align="right">
<img src="deepl_files/data/4_1.png" alt="network2" width="100" align="right">
<img src="deepl_files/data/16_1.png" alt="network2" width="100" align="right">
## Further topics
* DOE, Sample generation strategies and extrapolation
* Learning scenarios
* Unsupervised learning
* Reinforcement learning
* Keras models
* Functional API
* Subclassing
* Layers
* Convolution layer
* Dropout
* Batch normalization
* Advanced neural networks
* CNN (convolutional NN)
* RNN (recurrent NN)
* Custom
* Losses
* Custom loss
* Training
* Overfitting
* Optimization algorithm and parameters
* Mini-batch training
Thank you very much for your attention! Happy coding!
Contact: Dr.-Ing. Mauricio Fernández
Email: fernandez@cps.tu-darmstadt.de, mauricio.fernandez.lb@gmail.com
# Quantum Simple Harmonic Oscillator
The motion of a quantum simple harmonic oscillator is governed by the time-independent Schrödinger equation:
$$
\frac{d^2\psi}{dx^2}=\frac{2m}{\hbar^2}(V(x)-E)\psi
$$
As a simple case, we may take the potential function $V(x)$ to be a harmonic well truncated at $|x| = L$, described by
$$
V(x) =
\begin{cases}
\frac{1}{2}kx^2,& -L < x < L\\
\frac{1}{2}kL^2,& \text{otherwise}
\end{cases}
$$
For the untruncated harmonic oscillator, this equation can be solved analytically and the energy eigenvalues are given by
$$
E_n = \left(n + \frac{1}{2}\right)\hbar \omega
$$
In this section, we shall solve the equation numerically. For that, we have to express the second-order ODE above as two first-order ODEs in the following way:
$$
\begin{aligned}
&\frac{d\psi}{dx}=\phi\\
&\frac{d\phi}{dx}= \frac{2m}{\hbar^2}(V(x)-E)\psi
\end{aligned}
$$
Since it is an initial value problem, we can solve it with the `solve_ivp` function from the `SciPy` package.
```
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp
from scipy.optimize import bisect
@np.vectorize
def V(x):
if np.abs(x) < L:
return (1/2)*k*x**2
else:
return (1/2)*k*L**2
def model(x, z):
psi, phi = z
dpsi_dx = phi
dphi_dx = 2*(V(x) - E)*psi
return np.array([dpsi_dx, dphi_dx])
@np.vectorize
def waveRight(energy):
global E, x, psi
E = energy
x = np.linspace(-b, b, 100)
x_span = (x[0], x[-1])
psi0, dpsi_dx0 = 0.1, 0
x0 = [psi0, dpsi_dx0]
sol = solve_ivp(model, x_span, x0, t_eval=x)
x = sol.t
psi, phi = sol.y
return psi[-1]
k = 50
m = 1
hcross = 1
b = 2
L = 1
omega = np.sqrt(k/m)
energy = np.linspace(0, 0.5*k*L**2, 100)
psiR = waveRight(energy)
energyEigenVal = []
for i in range(len(psiR)-1):
if np.sign(psiR[i+1]) == -np.sign(psiR[i]):
root = bisect(waveRight, energy[i+1], energy[i])
energyEigenVal.append(root)
energyEigenVal
# Analytic energies
E_analytic = []
Emax = max(energyEigenVal)
n = 0
En = 0
while En < Emax:
En = (n + 1/2)*hcross*omega
E_analytic.append(En)
n += 1
E_analytic
plt.plot(energyEigenVal, ls=':', marker='^', color='blue', label='Numerical')
plt.plot(E_analytic, ls=':', marker='o', color='red', label='Analytical')
plt.legend()
plt.show()
print('------------------------------------')
print('{0:10s}{1:2s}{2:10s}'.format('Energy(Analytic)','','Energy(Numerical)'))
print('------------------------------------')
for i in range(len(energyEigenVal)):
print('{0:10.3f}{1:5s}{2:10.3f}'.format(E_analytic[i],'', energyEigenVal[i]))
print('------------------------------------')
for i in range(len(energyEigenVal)):
waveRight(energyEigenVal[i])
plt.plot(x, 100**i*psi**2, label='En = %0.3f'%energyEigenVal[i])
plt.xlabel('$x$', fontsize=14)
plt.ylabel('$|\psi(x)|^2$', fontsize=14)
plt.legend()
plt.show()
```
(pandas_plotting)=
# Plotting
``` {index} Pandas: plotting
```
Plotting with pandas is very intuitive. We can use syntax:
df.plot.*
where * is any plot from matplotlib.pyplot supported by pandas. Full tutorial on pandas plots can be found [here](https://pandas.pydata.org/pandas-docs/stable/user_guide/visualization.html).
Alternatively, we can use other plots from matplotlib library and pass specific columns as arguments:
    plt.scatter(df.col1, df.col2, c=df.col3, s=df.col4, **kwargs)
In this tutorial we will use both ways of plotting.
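As a quick self-contained illustration of both ways (the column names and values here are invented; the earthquake data is loaded below):

```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

df = pd.DataFrame({'col1': np.arange(10),
                   'col2': np.arange(10) ** 2,
                   'col3': np.random.rand(10)})

# Pandas way: plot method chained off the DataFrame
df.plot.line(x='col1', y='col2')

# Matplotlib way: pass specific columns as arguments
plt.scatter(df.col1, df.col2, c=df.col3)
plt.show()
```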
First we will load the New Zealand earthquake data and, following the date-time tutorial, create a datetime index:
```
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
nz_eqs = pd.read_csv("../../geosciences/data/nz_largest_eq_since_1970.csv")
nz_eqs.head(4)
nz_eqs["hour"] = nz_eqs["utc_time"].str.split(':').str.get(0).astype(float)
nz_eqs["minute"] = nz_eqs["utc_time"].str.split(':').str.get(1).astype(float)
nz_eqs["second"] = nz_eqs["utc_time"].str.split(':').str.get(2).astype(float)
nz_eqs["datetime"] = pd.to_datetime(nz_eqs[['year', 'month', 'day', 'hour', 'minute', 'second']])
nz_eqs.head(4)
nz_eqs = nz_eqs.set_index('datetime')
```
Let's plot magnitude data for all years and then for year 2000 only using pandas way of plotting:
```
plt.figure(figsize=(7,5))
nz_eqs['mag'].plot()
plt.xlabel('Date')
plt.ylabel('Magnitude')
plt.show()
plt.figure(figsize=(7,5))
nz_eqs['mag'].loc['2000-01':'2001-01'].plot()
plt.xlabel('Date')
plt.ylabel('Magnitude')
plt.show()
```
We can calculate how many earthquakes are within each year using:
df.resample('bintype').count()
For example, for yearly, monthly, minute, or second intervals we can pass 'Y', 'M', 'T' or 'S', respectively, as the bin type.
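A minimal sketch of such counting on a small synthetic datetime index (the dates are invented):

```
import pandas as pd

idx = pd.to_datetime(['2000-01-10', '2000-03-05', '2000-07-20', '2001-02-01'])
s = pd.Series(1, index=idx)

# Count events in yearly bins ('Y'); 'M', 'T' and 'S' work the same way
counts = s.resample('Y').count()
print(counts)
```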
Let's count our earthquakes in 4 month intervals and display it with xticks every 4 years:
```
figure, ax = plt.subplots(figsize=(7,5))
# Resample datetime index into 4 month bins
# and then count how many
nz_eqs['year'].resample("4M").count().plot(ax=ax, x_compat=True)
import matplotlib
# Change xticks to be every 4 years
ax.xaxis.set_major_locator(matplotlib.dates.YearLocator(base=4))
ax.xaxis.set_major_formatter(matplotlib.dates.DateFormatter("%Y"))
plt.xlabel('Date')
plt.ylabel('No. of earthquakes')
plt.show()
```
Suppose we would like to view the earthquake locations, places with largest earthquakes and their depths. To do that, we can use Cartopy library and create a scatter plot, passing magnitude column into size and depth column into colour.
```
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import matplotlib.ticker as mticker
```
Let's plot this data passing columns into scatter plot:
```
plt.rcParams.update({'font.size': 14})
central_lon, central_lat = 170, -50
extent = [160,188,-48,-32]
fig, ax = plt.subplots(1, subplot_kw=dict(projection=ccrs.Mercator(central_lon, central_lat)), figsize=(7,7))
ax.set_extent(extent)
ax.coastlines(resolution='10m')
ax.set_title("Earthquakes in New Zealand since 1970")
# Create a scatter plot
scatplot = ax.scatter(nz_eqs.lon,nz_eqs.lat, c=nz_eqs.depth_km,
s=nz_eqs.depth_km/10, edgecolor="black",
cmap="PuRd", lw=0.1,
transform=ccrs.Geodetic())
# Create colourbar
cbar = plt.colorbar(scatplot, ax=ax, fraction=0.03, pad=0.1, label='Depth [km]')
# Sort out gridlines and their density
xticks_extent = list(np.arange(160, 180, 4)) + list(np.arange(-200,-170,4))
yticks_extent = list(np.arange(-60, -30, 2))
gl = ax.gridlines(linewidths=0.1)
gl.xlabels_top = False
gl.xlabels_bottom = True
gl.ylabels_left = True
gl.ylabels_right = False
gl.xlocator = mticker.FixedLocator(xticks_extent)
gl.ylocator = mticker.FixedLocator(yticks_extent)
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
plt.show()
```
This way we can easily see that the deepest and largest earthquakes are in the North.
# References
The notebook was compiled based on:
* [Pandas official Getting Started tutorials](https://pandas.pydata.org/docs/getting_started/index.html#getting-started)
* [Kaggle tutorial](https://www.kaggle.com/learn/pandas)
# Python Fundamentals for Scientific Computing
## Python as a pocket calculator
In this section we will show how to do calculations using Python as a scientific calculator. Since Python is an *interpreted language*, it works directly as a *REPL (Read-Eval-Print Loop)*. That is, it reads an instruction you write, evaluates that instruction by interpreting it, and prints (or not) a value.
### Arithmetic
The symbols are the following:
|operation|symbol|
|---|---|
|addition|+|
|subtraction|-|
|multiplication|*|
|division|/|
|exponentiation|**|
**Example:** compute the value of $4/3 - 2 \times 3^4$
```
4/3 - 2*3**4
```
### Numerical expressions
Commutativity, associativity, distributivity, and the grouping of numbers with parentheses, brackets, or braces are all written in Python using parentheses only.
**Example:** compute the value of $2 - 98\{ 7/3 \, [ 2 \times (3 + 5 \times 6) - ( 2^3 + 2^2 ) - (2^2 + 2^3) ] - (-1) \}$
```
2 - 98*(7/3*(2*(3+5*6)-(2**3+2**2)-(2**2+2**3))-(-1))
```
Note that the readability of the instruction above is not good. We can add spaces to improve it. Spaces **do not change** the result of the computation.
```
2 - 98*( 7/3*( 2*( 3 + 5*6 ) - ( 2**3 + 2**2 ) - ( 2**2 + 2**3 ) ) - (-1) )
```
One of the principles of the Zen of Python is *readability counts*; that is, it pays off to refine the way you write code so that others can understand it without difficulty when they read it.
Unpaired parentheses (opened but not closed) will produce an error.
```
2 - 98*( 7/3
```
The error says that the instruction ended unexpectedly. In general, when we type an opening parenthesis, Jupyter adds the closing one automatically.
```
2 - 98*(7/3)
```
### Integers and fractional numbers
In Python, integers and fractional numbers are written differently, distinguished by the decimal point.
**Example:** compute the value of $2 + 3$
```
2 + 3
```
**Example:** compute the value of
$2.0 + 3$
```
2.0 + 3
```
The value is the same, but in Python the two numbers above have different natures. This "nature" is called a _type_. We will talk about this later. For now, see the following:
```
type(5)
type(5.0)
```
The two words above indicate that the **type** of the value 5 is `int` and that of the value 5.0 is `float`. This means that in the first case we have an integer, while in the second case we have a _floating-point_ number. Floating-point numbers mimic the set of real numbers. We will see more examples of this throughout the course.
**Example:** compute the value of $5.0 - 5$
```
5.0 - 5
```
Observe the answer in the previous case: the difference between a floating-point number and an integer results in a floating-point number! (Note, by the way, that the behavior of the division operator `/` did change between language versions: in Python 2.x, `5/2` performed integer division, whereas in Python 3.x it returns a `float`.)
This detail is worth mentioning for your information. We will use Python 3 in this course, and you should have no problems with it along the way.
#### Period or comma?
In the Brazilian numeral system, the comma (`,`) separates the integer part from the fractional part of a decimal number. In Python, this role is played by the period (`.`). From here on, whenever there is no notational ambiguity, we will use the period rather than the comma to represent fractional numbers in examples, exercises, and explanations.
**Example:** compute the value of $5.1 - 5$
```
5.1 - 5
```
Look at the previous computation... The result should be 0.1, shouldn't it? What happened?!
This is due to a concept called _machine precision_. We will not go into detail in this course, but it is enough to know that numbers on a computer are not exact the way they are in traditional mathematics.
A computer, as fast and smart as it may seem, is incapable of representing the infinity of the real numbers. Sometimes it will _approximate_ results. So do not worry about this: the computation looks wrong and, indeed, it is! However, the error is so small that it will hardly affect your calculations significantly.
### Integer division
We can perform integer division when we want an integer quotient. For that, we use the symbol `//` (floor division).
**Example:** compute the value of $5/2$
```
5/2
```
**Example:** compute the value of the **integer division** $5/2$
```
5//2
```
#### The division algorithm and remainders
You may remember the division algorithm. A number $D$ (dividend), when divided by another number $d$ (divisor), yields a quotient $q$ and a remainder $r$, which is zero if the division is exact and nonzero otherwise. That is, the algorithm states:
$$D = d \times q + r$$
In Python, we can obtain the value $r$ of an inexact division directly with the symbol `%`, called the **modulo operator**.
**Example:** determine the remainder of the division $7/2$
```
7 % 2
```
Indeed, $7 = 2\times 3 + 1$.
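The quotient and the remainder can also be obtained together with the built-in `divmod`:

```
q, r = divmod(7, 2)   # quotient 3, remainder 1
print(q, r)           # -> 3 1
print(7 == 2*q + r)   # checks D = d*q + r -> True
```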
From this, we can check whether an integer is even or odd with the `%` operator.
```
5 % 2
6 % 2
```
Indeed, 5 is odd, since dividing it by 2 leaves a remainder of 1, and 6 is even, since it is exactly divisible by 2, leaving a remainder of 0.
### Percentage
So, if `%` is used to compute the remainder of a division, how do we compute a percentage?
Well, there is no special symbol for computing percentages. It must be done by dividing the number in question by 100 in the usual way.
**Example:** how much is 45\% of R\$ 43.28?
```
45/100*43.28
```
Note that we could also perform this computation as follows:
```
0.45*43.28
```
Or as:
```
45*43.28/100
```
The last form is not as literal as the original computation; however, the three examples show that multiplication and division are equivalent in **precedence**. In this case it does not matter whether the multiplication or the division is performed first. The only caveat is the second example, in which, strictly speaking, it was not the computer that divided 45 by 100: we wrote 0.45 directly.
#### Operator precedence
Just as in mathematics, Python has operator precedence.
When no parentheses are involved, multiplications and divisions are performed before additions and subtractions.
**Example:** compute $4 \times 5 + 3$
```
4*5 + 3
```
**Example:** compute $4 / 5 + 3$
```
4/5 + 3
```
When there are parentheses, they take precedence.
**Example:** compute $4 \times (5 + 3)$
```
4*(5+3)
```
As a rule, operations are executed from left to right, and from the innermost pair of parentheses to the outermost.
**Example:** compute $2 - 10 - 3$
```
# first, 2 - 10 = -8 is computed;
# then -8 - 3 = -11
2 - 10 - 3
```
**Example:** compute $2 - (10 - 3)$
```
# first, (10 - 3) = 7 is computed;
# then 2 - 7 = -5
2 - (10 - 3)
```
### Comments
Note that above we described the steps executed by the Python interpreter to evaluate the numerical expressions. However, we did this in a code cell and there was no interference in the result. Why? Because we inserted _comments_.
#### Line comments
They are used to ignore everything that comes after the symbol `#` on that line.
```
# this is ignored
2 + 3 # this line will be computed
```
The instruction below results in an error, since `2 +` is not a complete operation.
```
2 + # 3
```
The instruction below results in 2, since `+ 3` is commented out.
```
2 # + 3
```
**Example:** what is the value of $(3 - 4)^2 + (7/2 - 2 - (3 + 1))$?
```
""" ORDER OF OPERATIONS
first, (3 + 1) = 4 is computed :: innermost parentheses
next, 7/2 = 3.5 :: innermost division
next, 3.5 - 2 = 1.5 :: first inner subtraction
next, 1.5 - 4 = -2.5 :: second subtraction, resolving the outer parentheses
next, (3 - 4) = -1 :: parentheses
next, (-1)**2 = 1 :: exponentiation, or two multiplications
next, 1 + (-2.5) = -1.5 :: final addition
"""
# Expression
(3 - 4)**2 + (7/2 - 2 - (3 + 1))
```
#### Docstrings
In Python there are no block comments, but we can use _docstrings_ when we want to comment one or multiple lines. Just wrap the text in a pair of triple single quotes (`'''`) or triple double quotes (`"""`). Single quotes (`'...'`) are also called straight quotes.
For example,
```python
'''
This is
a block comment and
will be ignored by the
interpreter if it is
followed by other instructions.
'''
```
and
```python
"""
This is
a block comment and
will be ignored by the
interpreter if it is
followed by other instructions.
"""
```
have the same practical effect, although there are style recommendations for using docstrings. We will not discuss those topics here.
If inserted on their own in a code cell here, they produce textual output.
**Careful!** straight single quotes are not apostrophes!
```
"""
This is
a block comment and
will be ignored by the
interpreter if it is
followed by other instructions.
"""
```
### Suppressing output
We can use a `;` to suppress printing of the last command executed in the cell.
```
2 + 3; # the output is not printed on screen
2 - 3
1 - 2; # nothing is printed since the last command ends with ;
"""
Multi-line docstring,
not printed
""";
'''Single-line docstring suppressed''';
```
## Variables, assignment, and reassignment
In mathematics it is very common to use letters to stand for values. In Python, a variable is a name bound to a location in the computer's memory. The idea is similar to your home address.
**Example:** if $x = 2$ and $y = 3$, compute $x + y$
```
x = 2
y = 3
x + y
```
Above, `x` and `y` are variables. The symbol `=` indicates that we performed an assignment.
A reassignment occurs when we make a new assignment to the same variable. This is called _overwriting_.
```
x = 2 # x has value 2
y = 3 # y has value 3
x = y # x has value 3
x
```
**Example:** the area of a rectangle of base $b$ and height $h$ is given by $A = bh$. Compute values of $A$ for different values of $b$ and $h$.
```
b = 2 # base
h = 4 # height
A = b*h
A
b = 10 # reassignment of b
A = b*h # reassignment of A, but not of h
A
h = 5 # reassignment of h
b*h # computation without assigning to A
A # A was not changed
```
Variables are _case sensitive_, that is, uppercase and lowercase letters are distinguished.
```
a = 2
A = 3
a - A
A - a
```
### Assignment by unpacking
We can perform several assignments on a single line.
```
b,h = 2,4
b
h
```
### Printing with `print`
`print` is a function. We will learn a bit about functions further ahead. For now, it is enough to understand that it works as follows:
```
print(A) # prints the value of A
print(b*h) # prints the value of the product b*h
```
With `print`, we can have more than one output per cell.
```
b, h = 2.1, 3.4
print(b)
print(h)
```
We can pass more than one variable to print by separating them with commas.
```
print(b, h)
x = 0
y = 1
z = 2
print(x, y, z)
print(x, x + 1, x + 2)
```
## Characters, letters, words: strings
In Python, we write characters between single or double quotes.
```
'Hello!'
"Hello!"
'Hello, my name is Python.'
```
We can assign characters to variables.
```
a = 'a'
A = 'A'
print(a)
print(A)
area1 = 'Mathematics'
area2 = 'Statistics'
print(area1, 'and', area2)
```
Variable names cannot start with a number or contain special characters ($, #, ?, !, etc.)
```
1a = 'a' # invalid
a1 = 'a1' # valid
a2 = 'a2' # valid
```
We can use `print` to combine variables and values.
```
b = 20
h = 10
print('I study', area1, 'and', area2)
print('The area of the rectangle is', b*h)
```
In Python, everything is an "object". In the commands below, we have objects of three different "types".
```
a = 1
x = 2.0
b = 'b'
```
We could also do:
```
a, x, b = 1, 2.0, 'b' # less readable!
```
When we inspect these variables (objects) with `type`, here is what we get:
```
type(a) # checks the "type" of the object
type(x)
type(b)
```
`str` (short for _string_) is an object defined by a sequence of 0 or more characters.
```
nenhum = '' # an empty string
nenhum
espaco = ' ' # a single space
espaco
```
**Example:** compute the area of a circle of radius $R = 3$ and print its value using print and strings. Assume the value $\pi = 3.145$.
```
R = 3
pi = 3.145
A = pi*R**2
print('The area of the circle is: ', A)
```
## Data types: `int`, `float`, and `str`
So far, we have learned to work with integers, fractional numbers, and sequences of characters.
In Python, every object belongs to a "family". We call these families "types".
Drawing a parallel with set theory in mathematics, you know that $\mathbb{Z}$ denotes the _set of integers_ and that $\mathbb{R}$ denotes the _set of real numbers_.
Basically, if `type(x)` is `int`, this is equivalent to saying $x \in \mathbb{Z}$. Similarly, if `type(y)` is `float`, this is "almost the same" as saying $y \in \mathbb{R}$. In this case, however, it is not an absolute truth for every number $y$.
Later on you will learn more about "floating point". It is more accurate to say that $y \in \mathbb{F}$, where $\mathbb{F}$ would be the set of all _floating-point_ numbers. In the end, a number in $\mathbb{F}$ is an "approximation" of a number in $\mathbb{R}$.
In the case of `str`, it is harder to establish a similar notation, but we can create examples.
**Example:** consider the set $A = \{s \in \mathbb{S} \, : s \text{ consists only of vowels}\}$, where $\mathbb{S}$ is the set of words formed by 2 letters.
Every element $s$ of this set can take one of the following values:
`aa`, `ae`, `ai`, `ao`, `au`,
`ea`, `ee`, `ei`, `eo`, `eu`,
`ia`, `ie`, `ii`, `io`, `iu`,
`oa`, `oe`, `oi`, `oo`, `ou`,
`ua`, `ue`, `ui`, `uo`, `uu`.
Although only some of them have meaning in Portuguese (`ai`, `ei`, `oi`, and `ui` are interjections, `eu` is a pronoun, and `ou` is a conjunction), there are 25 such _anagrams_ in total.
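As an aside, these 25 strings can also be generated programmatically, for example with `itertools.product`:

```
from itertools import product

vowels = 'aeiou'
pairs = [a + b for a, b in product(vowels, repeat=2)]
print(len(pairs))  # -> 25
print(pairs[:5])   # -> ['aa', 'ae', 'ai', 'ao', 'au']
```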
We could then write:
```
s1, s2, s3, s4, s5 = 'aa', 'ae', 'ai', 'ao', 'au'
s6, s7, s8, s9, s10 = 'ea', 'ee', 'ei', 'eo', 'eu'
s11, s12, s13, s14, s15 = 'ia', 'ie', 'ii', 'io', 'iu'
s16, s17, s18, s19, s20 = 'oa', 'oe', 'oi', 'oo', 'ou'
s21, s22, s23, s24, s25 = 'ua', 'ue', 'ui', 'uo', 'uu'
```
The 25 anagrams above were stored in 25 different variables.
```
print(s3)
print(s11)
print(s24)
```
### _Casting_
One of the nice things we can do in Python is convert one data type into another. This operation is called _type casting_, or simply _casting_.
To cast to `int`, `float`, or `str`, we use the functions of the same name.
```
float(25) # 25 is an integer, but float(25) is fractional
int(34.21) # 34.21 is fractional, but int(34.21) is "truncated" to an integer
int(5.65) # int() always truncates toward zero, dropping the fractional part
int(-6.6) # so int(-6.6) is -6, not -7
```
Casting a `str` object made up of letters to `int` is invalid.
```
int('a')
```
Casting an `int` or `float` to a `str` formed as a number is valid.
```
str(2)
str(3.14)
```
Casting a pure `str` of letters to `float` is invalid.
```
float('a')
```
### Concatenation
Another nice thing we can do in Python is the _concatenation_ of `str` objects. Concatenation can be done directly as a "sum" (using `+`), possibly combined with _casting_.
```
s21 + s23
s1 + s3 + s5
'Casting of the number ' + str(2) + '.'
str(1) + ' is an integer' + ',' + ' ' + 'but ' + str(3.1415) + ' is fractional!'
```
We can create repeated concatenations of `str` using multiplication (`*`) and use parentheses to build all sorts of "assemblies".
```
(str(s3) + ',')*2 + s3 + '...'
x = 1.0
'Using 0.1, we increment ' + str(x) + ' to obtain ' + str(x + 0.1) + '.'
print(5*s20 + 10*s2 + 5*s17 + 5*('.') + 'YEAH! :)')
```
## Modules and importing
A mechanic, plumber, or electrician always carries a toolbox with essential tools such as pliers, screwdrivers, and a power drill. Each tool has a well-defined purpose. However, when one of these professionals faces a task that no tool in the box can handle, they must look for another tool to solve the problem.
We can think of Python in a similar way. The language provides a minimal set of tools that can be expanded. For example, so far we have learned to add and subtract, but we still do not know how to compute the square root of a number.
We do this by importing functions that "live" in _modules_. Whenever we need something special in our data-science toolbox, we should look for an existing solution or build our own. In this course we will not go deep into building our own solutions, which is called _customization_; you will learn more about that in due time. We will use existing tools to save time.
Modules are like drawers in an office or equipment in a workshop. Several modules can be organized together to form a _package_. Now imagine you need to replace the RJ45 cable of your desktop that your pet chewed through, leaving you without internet over the weekend. Besides the connectors and a lot of patience, you will need a crimping tool to build a new cable.
The crimping tool is a special kind of pliers, just as the Phillips screwdriver is a special kind of screwdriver. If you have all these tools at home and are organized, the first thing you will do is go to the drawer where that kind of tool is stored. Then you will pick up the specialized tool.
In Python, when we need something that plays a specific role, we import an entire module, a submodule of that module, or just one object from the module. There are many ways to do this.
To import an entire module, we can use the syntax:
```python
import nomeDoModulo
```
To import a submodule, we can use the syntax:
```python
import nomeDoModulo.nomeDoSubmodulo
```
or the syntax
```python
from nomeDoModulo import nomeDoSubmodulo
```
If we want to use a specific function, we can use
```python
from nomeDoModulo import nomeDaFuncao
```
or
```python
from nomeDoModulo.nomeDoSubmodulo import nomeDaFuncao
```
if the function belongs to a submodule.
There is a very efficient way to access the objects of a specific module using an _alias_. The alias is a substitute name for the module. Another way to look at this kind of import is to think of a key that opens a padlock. This kind of import uses the syntax:
```python
import nomeDoModulo as nomeQueEuQuero
```
Basically, this statement says: _"import the module `nomeDoModulo` as `nomeQueEuQuero`"_. This makes `nomeQueEuQuero` an alias for the module you want to access. However, it makes more sense when the alias is a shorter word. For example,
```python
import nomeDoModulo as nqeq
```
In this case, `nqeq` is a shorter alias. Throughout the course we will use aliases that have become near-standards in the Python community.
As a last import example, consider the syntax:
```python
from nomeDoModulo import nomeDoSubmodulo as nds
```
In this example, we are creating an alias for a submodule.
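As a minimal illustration of these forms using the standard library (here `math` plays the role of `nomeDoModulo`):

```python
import math                         # whole module
from math import sqrt               # a single function
from math import factorial as fat   # a single function with an alias
import math as mt                   # the whole module with an alias

print(math.sqrt(9))  # 3.0, via the module name
print(sqrt(9))       # 3.0, via the imported name
print(fat(4))        # 24, via the function alias
print(mt.sqrt(9))    # 3.0, via the module alias
```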
### The `math` module
The `math` module is a library of mathematical functions. What is a function?
In mathematics, a function is like a machine that receives raw material and delivers a product.
If the raw material is $x$, the product is $y$, and the function is $f$, then $y = f(x)$. Hence, for each input value $x$, an output value $y$ is expected.
In this chapter we have already dealt with the `print` function. What does it do?
It receives some content, the input value (called an _argument_), and the product (output value) is the printed content.
From now on, we will use the `math` module to perform more specific mathematical operations.
We will import the `math` module as:
```
import math as mt
```
Note that, apparently, nothing happened. However, this command makes several functions available in our _workspace_ through the "key" `mt` that opens the "padlock" `math`.
To see a list of the available functions, type `mt.` and press the `<TAB>` key.
Napier's number (Euler's number) $e$ is obtained as:
```
mt.e
```
The value of $\pi$ can be obtained as:
```
mt.pi
```
Note, however, that $e$ and $\pi$ are irrational numbers. On the computer they have a finite number of decimal places! Above, both are shown with 16 significant digits.
**Example:** compute the area of a circle of radius $r = 3$.
```
r = 3 # radius
area = mt.pi*r**2 # area
print('The area is', area) # prints with concatenation
```
The square root of a number $x$, $\sqrt{x}$, is computed by the `sqrt` function (short for "square root").
```
x = 3 # note that x is an 'int'
mt.sqrt(x) # the result is a 'float'
mt.sqrt(16) # 16 is an 'int', but the result is a 'float'
```
**Example:** compute the value of $\sqrt{ \sqrt{\pi} + e + \left( \frac{3}{2} \right)^y }$, for $y = 2.1$.
```
mt.sqrt( mt.sqrt( mt.pi ) + mt.e + (3/2)**2.1 ) # parentheses matter: ** binds tighter than /
```
The logarithm of a number $b$ in base $a$ is written $\log_a \, b$, with $a > 0$, $b > 0$, and $a \neq 1$.
- When $a = e$ (natural base), we have the _natural logarithm_ of $b$, denoted $\text{ln} \, b$.
- When $a = 10$ (base 10), we have the _base-10 logarithm_ of $b$, denoted $\text{log}_{10} \, b$, or simply $\text{log} \, b$.
In Python, some care is needed with the logarithm functions.
- To compute $\text{ln} \, b$, use `log(b)`.
- To compute $\text{log} \, b$, use `log10(b)`.
- To compute $\text{log}_2 \, b$, use `log2(b)`.
- To compute $\text{log}_a \, b$, use `log(b,a)`.
From the last two items we see that `log(b,2)` is the same as `log2(b)`.
Let us see some examples:
```
mt.log(2) # this is ln(2)
mt.log10(2) # this is log(2) in base 10
mt.log(2,10) # this is the same as the previous one
mt.log2(2) # this is log(2) in base 2
```
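We can check numerically that `log(b,a)` agrees with the dedicated base functions (using `isclose`, since the two computations may differ in the last floating-point digit):

```python
import math as mt

# log(b, a) computes the base-a logarithm of b; compare against the dedicated functions
print(mt.isclose(mt.log(8, 2), mt.log2(8)))          # True
print(mt.isclose(mt.log(1000, 10), mt.log10(1000)))  # True
```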
**Example:** if $f(x) = \dfrac{ \text{ln}(x+4) + \log_3 x }{ \log_{10} x }$, compute the value of $f(e) + f(\pi)$.
```
x = mt.e # x = e
fe = ( mt.log(x + 4) + mt.log(x,3) ) / ( mt.log10(x) ) # f(e)
x = mt.pi # reassign the value of x
fpi = ( mt.log(x + 4) + mt.log(x,3) ) / ( mt.log10(x) ) # f(pi)
print('The value is', fe + fpi)
```
In the previous example, spaces were added to make the commands more readable.
## Introspection
We can learn more about modules, functions, and their capabilities by examining their components and asking for help about them.
To list all the objects of a module, use `dir(nomeDoModulo)`.
To ask for help about an object, use the `help` function or a question mark `?` after its name.
```
dir(mt) # lists all the functions of the math module
help(mt.pow) # help on the 'pow' function of the 'math' module
```
As we can see, the `pow` function of the `math` module performs exponentiation of the form $x^y$.
```
mt.pow(2,3) # 2 raised to the power 3
```
**Example:** consider the right triangle with legs of length $a = \frac{3}{4}\pi$ and $b = \frac{2}{e}$, both measured in meters. What is the length of the hypotenuse $c$?
```
'''
Solution by the Pythagorean theorem
c = sqrt( a**2 + b**2 )
'''
a = 3./4. * mt.pi # 3. and 4. are the same as 3.0 and 4.0
b = 2./mt.e
c = mt.sqrt( a**2 + b**2 )
print('The hypotenuse is:', c, 'm')
```
The same computation above could be done in a single line using the `hypot` function of the `math` module.
```
c = mt.hypot(a,b) # hypot computes the hypotenuse
print('The hypotenuse is:', c, 'm')
```
**Example:** convert the angle $270^{\circ}$ to radians.
One radian ($rad$) is the measure of an arc of length $r$ on a circle whose radius is $r$; that is, $r = 1 \,rad$. Since $180^{\circ}$ corresponds to half a circle, $\pi r$, we have $\pi \, rad = 180^{\circ}$. By the rule of three, $x^{\circ}$ is equivalent to $\frac{x}{180} \pi \, rad$.
```
ang_graus = 270 # value in degrees
ang_rad = ang_graus/180*mt.pi # value in radians
print(ang_rad)
```
Note that in
```python
ang_graus/180*mt.pi
```
the division is executed before the multiplication because it comes first, left to right. Parentheses are not necessary in this case.
We could reach the same result directly with the `radians` function of the `math` module.
```
mt.radians?
mt.radians(ang_graus)
```
### Rounding floats to integers
With `math`, we can round floats to integers using the "upward" rule (ceiling) or the "downward" rule (floor).
- Use `ceil` (short for _ceiling_) to round up;
- Use `floor` to round down.
```
mt.ceil(3.42)
mt.floor(3.42)
mt.ceil(3.5)
mt.floor(3.5)
```
## Complex numbers
Complex numbers are very important in the study of physical phenomena involving sound, frequencies, and vibrations. In mathematics, the set of complex numbers is defined as
$\mathbb{C} = \{ z = a + bi \, : \, a, b \in \mathbb{R} \text{ and } i = \sqrt{-1} \}$. The value $i$ is the _imaginary unit_.
In Python, complex numbers are objects of type `complex` and are written in the form
```python
z = a + bj
```
or
```python
z = a + bJ
```
The symbol `j` (or `J`), when attached to an `int` or a `float`, defines the imaginary part of the complex number.
```
3 - 2j
type(3 - 2j) # the number is a complex
4J # uppercase J
type(4j)
```
If `j` or `J` appears on its own, it is interpreted as a variable. If that variable is not defined, a name error results.
```
j
```
### Real part, imaginary part, and conjugates
Complex numbers can also be created directly as `complex(a,b)`, where `a` is the real part and `b` is the imaginary part.
```
complex(6.3,9.8)
```
The real and imaginary parts of a `complex` are extracted using the attributes `real` and `imag`, in that order.
```
z = 6.3 + 9.8j
z.real
z.imag
```
The conjugate of `z` can be found with the `conjugate()` method.
```
z.conjugate()
```
### Modulus of a complex number
If $Re(z)$ and $Im(z)$ are, respectively, the real and imaginary parts of a complex number, the _modulus_ of $z$ is defined as
$$|z| =\sqrt{[Re(z)]^2 + [Im(z)]^2}$$
**Example:** if $z = 2 + 2i$, compute the value of $|z|$.
We can compute this quantity in different ways. One of them is:
```
z = 2 + 2j # redefine z for this example
(z.real ** 2 + z.imag ** 2) ** 0.5
```
Alternatively, we can use Python's `abs` function. It is a predefined function belonging to the language _core_.
```
abs(z)
```
The `abs` function also returns the "absolute value" (or modulus) of real numbers. Recall that the absolute value of a real number $x$ is defined as
$$
|x| =
\begin{cases}
x,& \text{se } x \geq 0 \\
-x,& \text{se } x < 0 \\
\end{cases}
$$
```
help(abs)
abs(-3.1)
abs(-mt.e)
```
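The piecewise definition above can be written directly as a conditional expression. A minimal sketch (the name `absoluto` is ours, for illustration), checked against the built-in `abs`:

```python
import math as mt

def absoluto(x):
    """Absolute value of a real number, following the piecewise definition."""
    return x if x >= 0 else -x

print(absoluto(-3.1))                 # 3.1
print(absoluto(2.5))                  # 2.5
print(absoluto(-mt.e) == abs(-mt.e))  # True: matches the built-in
```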
## The Zen of Python: the "Pythonic" style of code
Python has a programming style of its own, popularly called the *Zen of Python*, which advocates design principles. The Zen of Python was written by Tim Peters and documented in [[PEP 20]](https://legacy.python.org/dev/peps/pep-0020/). The principles read as follows:
- Beautiful is better than ugly.
- Explicit is better than implicit.
- Simple is better than complex.
- Complex is better than complicated.
- Flat is better than nested.
- Sparse is better than dense.
- Readability counts.
- Special cases aren't special enough to break the rules.
- Although practicality beats purity.
- Errors should never pass silently.
- Unless explicitly silenced.
- In the face of ambiguity, refuse the temptation to guess.
- There should be one-- and preferably only one --obvious way to do it.
- Although that way may not be obvious at first unless you're Dutch.
- Now is better than never.
- Although never is often better than *right* now.
- If the implementation is hard to explain, it's a bad idea.
- If the implementation is easy to explain, it may be a good idea.
- Namespaces are one honking great idea -- let's do more of those!
One of the most important is **there should be one-- and preferably only one --obvious way to do it**. This means code should be written in a "Pythonic" way, that is, following the natural style of the language.
You can always recall these principles with the following instruction:
```python
import this
```
# Pandas
```
# Import the pandas packages/modules
import pandas as pd
from pandas import Series, DataFrame
import numpy as np
import matplotlib.pyplot as plt
# Initial library configuration
np.random.seed(1234)
plt.rc('figure', figsize=(10, 6))
# save the original number of rows displayed
PREVIOUS_MAX_ROWS = pd.options.display.max_rows
# set the new maximum number of rows shown when displaying a dataframe; '...' represents the omitted middle rows
pd.options.display.max_rows = 20
# set the numpy print format
np.set_printoptions(precision=4, suppress=True)
```
## Series
### index and values
```
# A pandas Series is like a numpy array with an index that may hold labels
serie1 = pd.Series([4, 5, 6, -2, 2])
# an index is assigned automatically
print(f"{serie1}\n")
# the index is a range-like object with start, stop, step
print(f"Type: {type(serie1.index)}\n")
index = [x for x in serie1.index]
print(f"Index only: {index}\n")
print(f"Index only: {serie1.index.values}\n")
# defining labels for the index
serie2 = pd.Series(data=[1, 2, 4, 5], index=["a", "b", "c", "d"])
print(f"Labels as index:\n{serie2}\n")
print(f"Index: {serie2.index} \nValues: {serie2.values}")
```
### slicing and assignment
```
# a specific item
serie2["a"]
# a range of items; the upper bound is included (unlike numpy indexing)
serie2["a":"c"]
# specific items in a specific order
serie2[["b", "d", "c"]]
# assigning values
serie2[["d", "b"]] = 999
print(serie2)
# logical operations on series return filtered results
serie_nova = serie2[serie2 < 10]
serie_nova
# vectorized operation
serie_nova * 2
# check whether an item is in the series (the index)
# returns True/False
print(f"a in serie2: {'a' in serie2}")
# returns True/False
print(f"1 in serie2.values: {1 in serie2.values}")
# returns the element
print(f"serie2['a']: {serie2['a']}")
# returns key/value
print(f"serie2[serie2.index == 'a']: {serie2[serie2.index == 'a']}")
```
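Because `serie2[...]` can mean label-based or position-based access, pandas also offers the explicit accessors `.loc` (labels) and `.iloc` (positions), which remove the ambiguity. A short sketch:

```python
import pandas as pd

serie2 = pd.Series(data=[1, 2, 4, 5], index=["a", "b", "c", "d"])
print(serie2.loc["a"])      # 1: explicit label-based access
print(serie2.iloc[0])       # 1: explicit position-based access
print(serie2.loc["a":"c"])  # label slice: upper bound included (3 items)
print(serie2.iloc[0:2])     # position slice: upper bound excluded (2 items)
```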
### alignment by index
```
# Create a series from a dict
estados_dict = {"Sao Paulo": 45000, "Rio de Janeiro": 50141, "Espirito Santo": 30914}
# the states become the index and the dict values become the values
serie3 = pd.Series(estados_dict)
print(f"serie3:\n{serie3}\n")
print(f"index:\n{serie3.index}\n")
print(f"values:\n{serie3.values}\n")
# defining our own index list.
# The list has only 2 states; ES was left out
estados_index = ["Sao Paulo", "Rio de Janeiro"]
# although the data contains ES, it does not appear because only SP and RJ were listed in the index
serie4 = pd.Series(estados_dict, index=estados_index)
serie4
# if an index label does not exist in the data, it is filled with NaN
# It is like a left join between the labels given in the index parameter and the keys of the data dict
serie5 = pd.Series(data=estados_dict, index=["Sao Paulo", "Rio de Janeiro", "Novo Estado"])
serie5
# find out whether there are null elements in the series
# serie5.isnull()
pd.isnull(serie5)
# returns only the non-null items
serie5[~pd.isnull(serie5)]
# adding series aligns them by index
# ES exists in serie3 but not in serie5
# Novo Estado exists in serie5 but not in serie3
# any pair involving a NaN yields NaN in the final sum, even if one side has a value
# only labels present on both sides are actually summed
serie3 + serie5
# to add without losing a value when one side of the sum is NaN
# ES now keeps its population even though serie5 has no ES
serie3.add(serie5, fill_value=0)
# series metadata: gives the series a name
serie5.name = "Populacao"
# name of the index
serie5.index.name = "Estados"
# renaming the index labels
# serie5.index = ["a", "b", "c"]
# changing the name of the index column
serie5.index.rename('UF', inplace=True)
serie5
```
## DataFrames
DataFrames are tables with rows and columns that can be labeled with the `index` or `columns` parameters.
Many kinds of objects can be turned into DataFrames (lists, dicts, ...).
They provide a large set of functions for manipulating and filtering data.
### DataFrame from a dict
```
# dataframes are like multidimensional series that share the same index
# they have index/labels on the rows (index=) and columns (columns=)
# creating a dataframe from a dict
dados1 = {"estado": ["SP", "RJ", "ES"] * 3,
          "ano": [2000, 2000, 2000, 2010, 2010, 2010, 2020, 2020, 2020],
          "populacao": [50_000, 30_000, 20_000, 55_000, 33_000, 22_000, 60_000, 40_000, 30_000]}
# turns a dictionary into a dataframe
df1 = pd.DataFrame(dados1)
df1
```
### listing dataframes
```
# some useful methods for inspecting dataframes
# top 5
df1.head(5)
#df1[:5]
# last 2
# df1.tail(2)
# df1[-2:]
# show everything
#display(df1)
# show some columns
df1[["estado", "populacao"]]
# statistics of the populacao column
df1["populacao"].describe()
```
### changing the column order
```
print(f"Current column order: {list(df1.columns)}\n")
# changing the order
df_sequencia_colunas_alteradas = pd.DataFrame(df1, columns=["ano", "estado", "populacao"])
df_sequencia_colunas_alteradas
```
### creating columns in an existing df
```
# creating a new df from an existing one, with a 'dif' column that does not exist yet
df3 = pd.DataFrame(dados1, columns=["ano", "estado", "populacao", "dif"],
index=["a", "b", "c", "d", "e", "f", "g", "h", "i"])
df3
```
### filtering rows and columns with loc and iloc
```
# locating rows and columns using loc and iloc
# filtering on a column
dados_es = df3.loc[df3["estado"] == "ES", ["estado", "ano", "populacao"]]
dados_es
#print(dados_es[["ano", "estado", "populacao"]])
# filtering with iloc: using positions to fetch the first 3 rows and the first two columns
dados_es = df3.iloc[:3, :2]
print(dados_es[["ano", "estado"]])
```
### filtering a column with more than one condition
```
# filtering a column with more than one condition
dados_es_2010 = df3.loc[ (df3["ano"] == 2010) & (df3["estado"] == "ES") ]
print(dados_es_2010[["ano", "estado", "populacao"]])
```
### filling the new column
```
# fill the new column with incremental values from arange, starting at zero up to the table length
df3["dif"] = np.arange(len(df3))
df3
# Create an independent series with values only for the indices ["a", "b", "c"]
# It is left-joined with the dataframe: values are inserted only where the series indices match the dataframe
valores_dif = pd.Series(np.arange(3), index=["a", "b", "c"])
# Fills the first rows with the series values; the df and series indices are aligned
df3["dif"] = valores_dif
# fills the remaining rows with the value 999
df3.fillna(value=999, inplace=True)
df3
```
### Creating a column with boolean logic
```
# Creating a new column from a logical test on another one
df3["Regiao"] = ["SUDESTE-ES" if uf == "ES" else "SUDESTE-OUTRA" for uf in df3["estado"]]
df3
# dict of dicts with state populations
pop = {"ES": {2000: 22000, 2010: 24000},
       "RJ": {2000: 33000, 2010: 36000, 2020: 66000}}
# type(pop)
# pd.DataFrame(pop).T
df4 = pd.DataFrame(pop, index=[2000, 2010, 2020])
df4
# drop the last ES row, which has NaN, and keep only the first two RJ rows
pdata = {"ES": df4["ES"][:-1],
         "RJ": df4["RJ"][:2]}
# Define a name for the index column and a name for the table columns
df4 = pd.DataFrame(pdata)
df4.index.name = "ANO"
df4.columns.name = "UF"
df4
```
## The Index object
Index objects are responsible for storing the axis labels and other metadata.
```
obj_index = pd.Series(np.arange(3), index=["a", "b", "c"])
# store the series index
index = obj_index.index
# pandas.core.indexes.base.Index
type(index)
print(index)
print(index[0])
# an index is always immutable: TypeError: Index does not support mutable operations
# index[0] = "x"
# creating an index object
labels = pd.Index(np.arange(3))
labels
# Using an index object to create a series
obj_Series2 = pd.Series([1.4, 3.5, 5.2], index=labels)
obj_Series2
print(f"original df:\n {df4}\n")
# checking whether a column exists
print("ES" in df4.columns)
# checking whether an index entry exists
print(2010 in df4.index)
```
## Reindexing rows and columns
Redefines the index of an existing df.
Labels that already exist are kept; new labels are introduced with missing values (NaN).
```
obj_Series3 = pd.Series(np.random.randn(4), index=["a", "b", "c", "d"])
obj_Series3
# Labels with the same values are kept; the new labels receive NaN
obj_Series3 = obj_Series3.reindex(["a", "b", "c", "d", "x", "y", "z"])
obj_Series3
obj_Series4 = pd.Series(["Azul", "Amarelo", "Verde"], index=[0, 2, 4])
# reindexing with a range of 6: labels absent from the current index get NaN
obj_Series4.reindex(range(6))
# the ffill (forward fill) method fills each NaN with the value that precedes it
# it is like "filling forward"
obj_Series4.reindex(range(6), method="ffill")
# reindexing rows and columns
df5 = pd.DataFrame(np.arange(9).reshape((3, 3)),
                   index=["a", "c", "d"],
                   columns=["ES","RJ", "SP"]
                   )
df5
# reindex the rows: includes the index "b"
# index=<default>
# columns=<list of columns>
df5 = df5.reindex(index=["a", "b", "c", "d"], columns=["ES","RJ", "SP", "MG"])
df5
# passing another list reindexes like a left join: current values are kept and the new labels are inserted with NaN
novos_estados = ["ES","RJ", "SP", "MG", "RS", "PR", "BA"]
df5 = df5.reindex(columns=novos_estados)
df5
# selecting columns from a list
filtro_estados = ["ES","RJ", "SP"]
dados = df5.loc[:, filtro_estados]
```
### drop: deleting rows or columns
```
# copying data from the previous df
dados_para_apagar = dados
# deleting rows
# inplace=True deletes for real; False (the default) returns a modified copy without touching the original object
dados_para_apagar.drop(["b", "d"], inplace=True)
dados_para_apagar
# deleting columns with axis=1
dados_sem_RJ_SP = dados_para_apagar.drop(["SP", "RJ"], axis=1)
dados_sem_RJ_SP
dados_para_apagar.drop(["SP", "RJ"], axis="columns", inplace=True)
dados_para_apagar
dados_para_apagar.drop(["a"], axis="index", inplace=True)
dados_para_apagar
```
## Selection and Filtering
Selecting and filtering a df is similar to numpy arrays, but with named labels on both axes.
### Series
```
# creating a Series for testing
obj_Series5 = pd.Series(np.arange(4.0), index=["a", "b", "c", "d"])
obj_Series5
# filtering by row label
obj_Series5["d"]
# filtering by row position: pandas implicitly keeps an integer index even when labels are named
obj_Series5[3]
# slice over the implicit integer index from 2 up to 4 (exclusive)
obj_Series5[2:4]
# unlike implicit-index access, label-based slicing always includes the last element
obj_Series5["b":"d"]
# specific elements: a list of elements inside the brackets
obj_Series5[["a", "c", "d"]]
# filtering with a condition: returns the items that satisfy it
obj_Series5[obj_Series5 > 2]
# assigning values to a range of labels
obj_Series5["b":"d"] = 4
obj_Series5
```
### DataFrame
```
df6 = pd.DataFrame(np.random.randn(16).reshape(4, 4),
index=["ES", "RJ", "SP", "MG"],
columns=[2000, 2010, 2020, 2030])
print("Initial data:\n")
df6
# a specific column
df6[2020]
# a subset of columns
df6[[2010, 2020]]
# when a range is given, the filter operates on the row axis
df6[1:3]
# subset of rows and columns (position-based and label-based access can be mixed in the same command)
df6[1:3][[2000, 2030]]
# filtered access: only rows where the 2030 values are greater than zero
df6[df6[2030] > 0][2030]
# assigning zero where the value is negative
df6[df6 < 0] = 0
df6
```
### using loc and iloc
```
# selecting rows and columns by label: df6.loc[ [<rows>], [<columns>] ]
df6.loc["ES", [2000, 2020]]
# iloc works the same way, but operates on the implicit zero-based positions
# the same command as above, accessed by position
df6.iloc[0, [0,2]]
# loc and iloc accept ranges; the inner brackets are not needed to form the ranges
df6.loc["RJ":"MG", 2020:2030]
```
## Arithmetic and Data Alignment
```
# two DataFrames that share some columns and indices can be aligned
# even with different shapes
data_left = pd.DataFrame(np.random.randn(9).reshape(3,3),
                         index=["ES", "RJ", "SP"],
                         columns=list("bcd"))
data_left
# has a different shape
data_rigth = pd.DataFrame(np.random.randn(12).reshape(4, 3),
                          index=["SP", "RJ", "MG", "RS"],
                          columns=list("bde"))
data_rigth
# what happens when the two dfs with different shapes are added?
# they are aligned by their index labels
# if the computation involves a NaN, the result is NaN
# a result only appears where there is an inner join: only columns b and d and rows RJ and SP exist in both
data_left + data_rigth
# fills NaN with zero when one of the operands of the sum is NaN, preserving the value that exists on one side
# when both operands of the sum are NaN, the result is NaN
data_left.add(data_rigth, fill_value=0)
# reindex data_left using only the columns of data_rigth; it is like a RIGHT JOIN
data_left.reindex(columns=data_rigth.columns, fill_value=0)
```
## Operations between DataFrames and Series
```
# creating an example df
df7 = pd.DataFrame(np.arange(12.).reshape((4, 3)),
                   columns=list('bde'),
                   index=['ES', 'RJ', 'SP', 'MG'])
# creating a series from a row (MG)
serie6 = df7.iloc[3]
# the example dataframe
df7
# the series created from the row with index MG
serie6
# the series index is aligned with the df column labels, and the computation is applied to every row
df7 - serie6
# to match on the row index instead, use the sub method with axis="index"
```
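To broadcast a Series down the rows instead (matching the series index against the row labels), the arithmetic methods take `axis="index"`. A minimal sketch (the series `serie_linhas` is ours, for illustration):

```python
import numpy as np
import pandas as pd

df7 = pd.DataFrame(np.arange(12.).reshape((4, 3)),
                   columns=list('bde'),
                   index=['ES', 'RJ', 'SP', 'MG'])
# a series indexed like the ROWS of df7
serie_linhas = pd.Series([1., 2., 3., 4.], index=['ES', 'RJ', 'SP', 'MG'])
# axis="index" matches the series index against the row labels,
# subtracting a different value from each row
resultado = df7.sub(serie_linhas, axis="index")
print(resultado)
```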
## Apply: mapping and functions
Mapping functions over series values: applies the function to every element of a series or column.
```
# creating an example df
df8 = pd.DataFrame(np.random.randn(12).reshape((4, 3)),
                   columns=list('bde'),
                   index=['ES', 'RJ', 'SP', 'MG'])
print(f"original df8:\n {df8}\n")
# turns every number into its absolute value, removing the negatives
np.abs(df8)
# functions that compute the difference between the max and min of a column
funcao_dif_max_min = lambda x: x.max() - x.min()
funcao_max = lambda x: x.max()
funcao_min = lambda x: x.min()
# APPLY: by default the function aggregates down the rows (vertically, per column)
df8.apply(funcao_dif_max_min)
df8.apply(funcao_max)
df8.apply(funcao_min)
# apply the computation across the columns (horizontally, per row)
df8.apply(funcao_min, axis="columns")
# applying a function with multiple return values
#def f(x):
#    return pd.Series([x.max(), x.min()], index=["max", "min"])
f_lambda = lambda x: pd.Series([x.max(), x.min()], index=["max", "min"])
df8.apply(f_lambda)
# applymap: applying a format to every element of the dataframe
# unlike apply, applymap does not aggregate; it is applied to every cell
formato = lambda x: f"R$ {x: ,.2f}"
df8.applymap(formato)
# map applies the format to a single series
df8["b"].map(formato)
```
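Note: in recent pandas versions (2.1+), `applymap` has been deprecated in favor of the element-wise `DataFrame.map`. A hedged sketch that uses whichever is available:

```python
import numpy as np
import pandas as pd

df8 = pd.DataFrame(np.ones((2, 2)) * 1234.5, columns=list('bd'), index=['ES', 'RJ'])
formato = lambda x: f"R$ {x: ,.2f}"
# use DataFrame.map if available (pandas >= 2.1); otherwise fall back to the older applymap
mapper = df8.map if hasattr(df8, "map") else df8.applymap
formatado = mapper(formato)
print(formatado)
```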
## Sorting and Ranking
```
obj_Series5 = pd.Series(range(5), index=["x", "y", "z", "a", "b"])
# sorting a series by its index
obj_Series5.sort_index(ascending=True)
# sorting a series by its VALUES
obj_Series5.sort_values(ascending=False)
# sorting dataframe indices
df9 = pd.DataFrame(np.arange(8).reshape((2, 4)),
                   index=['LinhaA', 'LinhaB'],
                   columns=['d', 'a', 'b', 'c'])
# sorting the rows (the default) by their INDEX labels
df9.sort_index()
# sorting the columns (axis=1) by their column LABELS
df9.sort_index(axis=1)
# controlling the sort direction of the index; ascending=False means descending
df9.sort_index(axis=0, ascending=False)
# the same control on the column axis
df9.sort_index(axis=1, ascending=False)
# sorting a DataFrame by the VALUES of one or more columns
df9.sort_values(by=["c", "d"], ascending=False)
# sorting VALUES horizontally (across the columns)
df9.sort_values(by=["LinhaB"], ascending=False, axis=1)
```
## Aggregation: sum, max, min, mean, cumsum
Aggregation functions for numeric columns. They let us inspect statistics and analyze the distribution of the values.
```
# when no axis is specified, the sum runs down the rows (vertically) for every column
df9.sum()
# sum of a single column
df9["b"].sum()
# sum of specific columns
df9[["c","a"]].sum()
# horizontal sum: aggregates the columns to obtain each row total
df9.sum(axis="columns")
df9[:1].sum(axis="columns")
# other aggregation measures
df9.mean(axis='columns', skipna=False)
# cumulative aggregation horizontally (columns)
df9.cumsum(axis=1)
# cumulative aggregation vertically (rows)
df9.cumsum(axis=0)
# descriptive statistics of the numeric columns
df9.describe()
```
## Unique values and value counts
Shows the list of distinct values of a series or of a dataframe column.
```
# Unique values of a series
obj_Series6 = pd.Series(['c', 'a', 'd', 'a', 'a', 'b', 'b', 'c', 'c'])
obj_Series6.unique()
# unique values and their frequency, sorted from most to least frequent
obj_Series6.value_counts()
# the aggregated categories are the index and the aggregated totals are the values
obj_Series6.value_counts().index
obj_Series6.value_counts().values
# unique-value counts and frequencies per column
df10 = pd.DataFrame({'Qu1': [1, 3, 4, 3, 4],
                     'Qu2': [2, 3, 1, 2, 3],
                     'Qu3': [1, 5, 2, 4, 4]})
df10["Qu1"].value_counts()
```
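To get the counts for all columns of `df10` at once, `value_counts` can be applied column by column with `apply`. A short sketch (values missing from a column come out as NaN, filled with 0 here):

```python
import pandas as pd

df10 = pd.DataFrame({'Qu1': [1, 3, 4, 3, 4],
                     'Qu2': [2, 3, 1, 2, 3],
                     'Qu3': [1, 5, 2, 4, 4]})
# each column's value_counts becomes a column of the result;
# the row index is the union of all observed values
contagens = df10.apply(lambda col: col.value_counts()).fillna(0)
print(contagens)
```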
# Machine Learning with PyTorch and Scikit-Learn
# -- Code Examples
## Package version checks
Add folder to path in order to load from the check_packages.py script:
```
import sys
sys.path.insert(0, '..')
```
Check recommended package versions:
```
from python_environment_check import check_packages
d = {
'torch': '1.8.0',
}
check_packages(d)
```
Chapter 15: Modeling Sequential Data Using Recurrent Neural Networks (part 3/3)
========
**Outline**
- Implementing RNNs for sequence modeling in PyTorch
- [Project two -- character-level language modeling in PyTorch](#Project-two----character-level-language-modeling-in-PyTorch)
- [Preprocessing the dataset](#Preprocessing-the-dataset)
- [Evaluation phase -- generating new text passages](#Evaluation-phase----generating-new-text-passages)
- [Summary](#Summary)
Note that the optional watermark extension is a small IPython notebook plugin that I developed to make the code reproducible. You can just skip the following line(s).
```
from IPython.display import Image
%matplotlib inline
```
## Project two: character-level language modeling in PyTorch
```
Image(filename='figures/15_11.png', width=500)
```
### Preprocessing the dataset
```
import numpy as np
## Reading and processing text
with open('1268-0.txt', 'r', encoding="utf8") as fp:
text=fp.read()
start_indx = text.find('THE MYSTERIOUS ISLAND')
end_indx = text.find('End of the Project Gutenberg')
text = text[start_indx:end_indx]
char_set = set(text)
print('Total Length:', len(text))
print('Unique Characters:', len(char_set))
Image(filename='figures/15_12.png', width=500)
chars_sorted = sorted(char_set)
char2int = {ch:i for i,ch in enumerate(chars_sorted)}
char_array = np.array(chars_sorted)
text_encoded = np.array(
[char2int[ch] for ch in text],
dtype=np.int32)
print('Text encoded shape: ', text_encoded.shape)
print(text[:15], ' == Encoding ==> ', text_encoded[:15])
print(text_encoded[15:21], ' == Reverse ==> ', ''.join(char_array[text_encoded[15:21]]))
for ex in text_encoded[:5]:
print('{} -> {}'.format(ex, char_array[ex]))
Image(filename='figures/15_13.png', width=500)
Image(filename='figures/15_14.png', width=500)
seq_length = 40
chunk_size = seq_length + 1
text_chunks = [text_encoded[i:i+chunk_size]
for i in range(len(text_encoded)-chunk_size+1)]
## inspection:
for seq in text_chunks[:1]:
input_seq = seq[:seq_length]
target = seq[seq_length]
print(input_seq, ' -> ', target)
print(repr(''.join(char_array[input_seq])),
' -> ', repr(''.join(char_array[target])))
import torch
from torch.utils.data import Dataset
class TextDataset(Dataset):
def __init__(self, text_chunks):
self.text_chunks = text_chunks
def __len__(self):
return len(self.text_chunks)
def __getitem__(self, idx):
text_chunk = self.text_chunks[idx]
return text_chunk[:-1].long(), text_chunk[1:].long()
seq_dataset = TextDataset(torch.tensor(text_chunks))
for i, (seq, target) in enumerate(seq_dataset):
print(' Input (x):', repr(''.join(char_array[seq])))
print('Target (y):', repr(''.join(char_array[target])))
print()
if i == 1:
break
device = torch.device("cuda:0")
# device = 'cpu'
from torch.utils.data import DataLoader
batch_size = 64
torch.manual_seed(1)
seq_dl = DataLoader(seq_dataset, batch_size=batch_size, shuffle=True, drop_last=True)
```
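Each `__getitem__` call above pairs an input sequence with a target sequence shifted one character ahead. The slicing logic in isolation, on a toy chunk of made-up token ids:

```python
# Toy encoded chunk of length seq_length + 1 (hypothetical token ids)
chunk = [5, 3, 8, 1, 9]
x, y = chunk[:-1], chunk[1:]  # same slicing as TextDataset.__getitem__
print(x, y)  # [5, 3, 8, 1] [3, 8, 1, 9]
```

The model thus learns, at every position, to predict the next character from the characters seen so far.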
### Building a character-level RNN model
```
import torch.nn as nn
class RNN(nn.Module):
def __init__(self, vocab_size, embed_dim, rnn_hidden_size):
super().__init__()
self.embedding = nn.Embedding(vocab_size, embed_dim)
self.rnn_hidden_size = rnn_hidden_size
self.rnn = nn.LSTM(embed_dim, rnn_hidden_size,
batch_first=True)
self.fc = nn.Linear(rnn_hidden_size, vocab_size)
def forward(self, x, hidden, cell):
out = self.embedding(x).unsqueeze(1)
out, (hidden, cell) = self.rnn(out, (hidden, cell))
out = self.fc(out).reshape(out.size(0), -1)
return out, hidden, cell
def init_hidden(self, batch_size):
hidden = torch.zeros(1, batch_size, self.rnn_hidden_size)
cell = torch.zeros(1, batch_size, self.rnn_hidden_size)
return hidden.to(device), cell.to(device)
vocab_size = len(char_array)
embed_dim = 256
rnn_hidden_size = 512
torch.manual_seed(1)
model = RNN(vocab_size, embed_dim, rnn_hidden_size)
model = model.to(device)
model
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.005)
num_epochs = 10000
torch.manual_seed(1)
for epoch in range(num_epochs):
hidden, cell = model.init_hidden(batch_size)
seq_batch, target_batch = next(iter(seq_dl))
seq_batch = seq_batch.to(device)
target_batch = target_batch.to(device)
optimizer.zero_grad()
loss = 0
for c in range(seq_length):
pred, hidden, cell = model(seq_batch[:, c], hidden, cell)
loss += loss_fn(pred, target_batch[:, c])
loss.backward()
optimizer.step()
loss = loss.item()/seq_length
if epoch % 500 == 0:
print(f'Epoch {epoch} loss: {loss:.4f}')
```
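The training loop accumulates `loss_fn` one character position at a time; `nn.CrossEntropyLoss` expects raw logits of shape `(batch, vocab_size)` and integer class targets of shape `(batch,)`. A minimal sketch (the vocabulary size of 80 below is an arbitrary stand-in, roughly the number of unique characters in the text):

```python
import math

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = torch.zeros(4, 80)           # uniform logits over a made-up 80-character vocabulary
targets = torch.tensor([0, 1, 2, 3])  # one target character id per batch element
loss = loss_fn(logits, targets)
print(abs(loss.item() - math.log(80)) < 1e-5)  # uniform logits give loss = log(vocab_size)
```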
### Evaluation phase: generating new text passages
```
from torch.distributions.categorical import Categorical
torch.manual_seed(1)
logits = torch.tensor([[1.0, 1.0, 1.0]])
print('Probabilities:', nn.functional.softmax(logits, dim=1).numpy()[0])
m = Categorical(logits=logits)
samples = m.sample((10,))
print(samples.numpy())
torch.manual_seed(1)
logits = torch.tensor([[1.0, 1.0, 3.0]])
print('Probabilities:', nn.functional.softmax(logits, dim=1).numpy()[0])
m = Categorical(logits=logits)
samples = m.sample((10,))
print(samples.numpy())
def sample(model, starting_str,
len_generated_text=500,
scale_factor=1.0):
encoded_input = torch.tensor([char2int[s] for s in starting_str])
encoded_input = torch.reshape(encoded_input, (1, -1))
generated_str = starting_str
model.eval()
hidden, cell = model.init_hidden(1)
hidden = hidden.to('cpu')
cell = cell.to('cpu')
for c in range(len(starting_str)-1):
_, hidden, cell = model(encoded_input[:, c].view(1), hidden, cell)
last_char = encoded_input[:, -1]
for i in range(len_generated_text):
logits, hidden, cell = model(last_char.view(1), hidden, cell)
logits = torch.squeeze(logits, 0)
scaled_logits = logits * scale_factor
m = Categorical(logits=scaled_logits)
last_char = m.sample()
generated_str += str(char_array[last_char])
return generated_str
torch.manual_seed(1)
model.to('cpu')
print(sample(model, starting_str='The island'))
```
* **Predictability vs. randomness**
```
logits = torch.tensor([[1.0, 1.0, 3.0]])
print('Probabilities before scaling: ', nn.functional.softmax(logits, dim=1).numpy()[0])
print('Probabilities after scaling with 0.5:', nn.functional.softmax(0.5*logits, dim=1).numpy()[0])
print('Probabilities after scaling with 0.1:', nn.functional.softmax(0.1*logits, dim=1).numpy()[0])
torch.manual_seed(1)
print(sample(model, starting_str='The island',
scale_factor=2.0))
torch.manual_seed(1)
print(sample(model, starting_str='The island',
scale_factor=0.5))
```
...
# Summary
...
Readers may ignore the next cell.
```
! python ../.convert_notebook_to_script.py --input ch15_part3.ipynb --output ch15_part3.py
```
# Script to plot GALAH spectra, but also save them into python dictionaries
## Author: Sven Buder (SB, MPIA) buder at mpia dot de
This script is intended to plot the 4 spectra of the arms of the HERMES spectrograph
History:
181012 - SB created
```
try:
%matplotlib inline
%config InlineBackend.figure_format='retina'
except:
pass
import numpy as np
import os
import astropy.io.fits as pyfits
import matplotlib.pyplot as plt
```
### Adjust script
### Definitions which will be executed in the last cell
```
def read_spectra(sobject_id, iraf_dr = 'dr5.3', SPECTRA = 'SPECTRA'):
"""
This function reads in the 4 individual spectra from the subdirectory working_directory/SPECTRA
INPUT:
sobject_id = identifier of spectra by date (6digits), plate (4digits), combination (2digits) and pivot number (3digits)
iraf_dr = reduction which shall be used, current version: dr5.3
SPECTRA = string to indicate sub directory where spectra are saved
OUTPUT
spectrum = dictionary
"""
spectrum = dict(sobject_id = sobject_id)
# Assess if spectrum is stacked
if str(sobject_id)[11] == '1':
# Single observations are saved in 'com'
com='com'
else:
# Stacked observations are saved in 'com2'
        com='com2'
# Iterate through all 4 CCDs
for each_ccd in [1,2,3,4]:
try:
fits = pyfits.open(SPECTRA+'/'+iraf_dr+'/'+str(sobject_id)[0:6]+'/standard/'+com+'/'+str(sobject_id)+str(each_ccd)+'.fits')
# Extension 0: Reduced spectrum
# Extension 1: Relative error spectrum
# Extension 4: Normalised spectrum, NB: cut for CCD4
# Extract wavelength grid for the reduced spectrum
start_wavelength = fits[0].header["CRVAL1"]
dispersion = fits[0].header["CDELT1"]
nr_pixels = fits[0].header["NAXIS1"]
reference_pixel = fits[0].header["CRPIX1"]
if reference_pixel == 0:
reference_pixel = 1
            # np.arange avoids the Python 3 pitfall of np.array(map(...)), which wraps the map iterator instead of the values
            spectrum['wave_red_'+str(each_ccd)] = start_wavelength + (np.arange(nr_pixels) - reference_pixel + 1) * dispersion
try:
# Extract wavelength grid for the normalised spectrum
start_wavelength = fits[4].header["CRVAL1"]
dispersion = fits[4].header["CDELT1"]
nr_pixels = fits[4].header["NAXIS1"]
reference_pixel = fits[4].header["CRPIX1"]
if reference_pixel == 0:
reference_pixel=1
                spectrum['wave_norm_'+str(each_ccd)] = start_wavelength + (np.arange(nr_pixels) - reference_pixel + 1) * dispersion
except:
spectrum['wave_norm_'+str(each_ccd)] = spectrum['wave_red_'+str(each_ccd)]
# Extract flux and flux error of reduced spectrum
spectrum['sob_red_'+str(each_ccd)] = np.array(fits[0].data)
spectrum['uob_red_'+str(each_ccd)] = np.array(fits[0].data * fits[1].data)
# Extract flux and flux error of reduced spectrum
try:
spectrum['sob_norm_'+str(each_ccd)] = np.array(fits[4].data)
except:
spectrum['sob_norm_'+str(each_ccd)] = np.ones(len(fits[0].data))
if each_ccd != 4:
try:
spectrum['uob_norm_'+str(each_ccd)] = np.array(fits[4].data * fits[1].data)
except:
spectrum['uob_norm_'+str(each_ccd)] = np.array(fits[1].data)
else:
                # for the normalised error of CCD4, only use the appropriate part of the error spectrum
try:
spectrum['uob_norm_4'] = np.array(fits[4].data * (fits[1].data)[-len(spectrum['sob_norm_4']):])
except:
spectrum['uob_norm_4'] = np.zeros(len(fits[0].data))
fits.close()
except:
spectrum['wave_norm_'+str(each_ccd)] = np.arange(7693.50,7875.55,0.074)
spectrum['wave_red_'+str(each_ccd)] = np.arange(7693.50,7875.55,0.074)
spectrum['sob_norm_'+str(each_ccd)] = np.ones(len(spectrum['wave_red_'+str(each_ccd)]))
spectrum['sob_red_'+str(each_ccd)] = np.ones(len(spectrum['wave_red_'+str(each_ccd)]))
spectrum['uob_norm_'+str(each_ccd)] = np.zeros(len(spectrum['wave_red_'+str(each_ccd)]))
spectrum['uob_red_'+str(each_ccd)] = np.zeros(len(spectrum['wave_red_'+str(each_ccd)]))
return spectrum
def interpolate_spectrum_onto_cannon_wavelength(spectrum):
"""
This function interpolates the spectrum
onto the wavelength grid of The Cannon as used for GALAH DR2
INPUT:
spectrum dictionary
OUTPUT:
interpolated spectrum dictionary
"""
# Initialise interpolated spectrum from input spectrum
interpolated_spectrum = dict()
for each_key in spectrum.keys():
interpolated_spectrum[each_key] = spectrum[each_key]
# The Cannon wavelength grid as used for GALAH DR2
wave_cannon = dict()
wave_cannon['ccd1'] = np.arange(4715.94,4896.00,0.046) # ab lines 4716.3 - 4892.3
wave_cannon['ccd2'] = np.arange(5650.06,5868.25,0.055) # ab lines 5646.0 - 5867.8
wave_cannon['ccd3'] = np.arange(6480.52,6733.92,0.064) # ab lines 6481.6 - 6733.4
wave_cannon['ccd4'] = np.arange(7693.50,7875.55,0.074) # ab lines 7691.2 - 7838.5
for each_ccd in [1, 2, 3, 4]:
# exchange wavelength
interpolated_spectrum['wave_red_'+str(each_ccd)] = wave_cannon['ccd'+str(each_ccd)]
interpolated_spectrum['wave_norm_'+str(each_ccd)] = wave_cannon['ccd'+str(each_ccd)]
# interpolate and exchange flux
interpolated_spectrum['sob_red_'+str(each_ccd)] = np.interp(
x=wave_cannon['ccd'+str(each_ccd)],
xp=spectrum['wave_red_'+str(each_ccd)],
fp=spectrum['sob_red_'+str(each_ccd)],
)
interpolated_spectrum['sob_norm_'+str(each_ccd)] = np.interp(
wave_cannon['ccd'+str(each_ccd)],
spectrum['wave_norm_'+str(each_ccd)],
spectrum['sob_norm_'+str(each_ccd)],
)
# interpolate and exchange flux error
interpolated_spectrum['uob_red_'+str(each_ccd)] = np.interp(
wave_cannon['ccd'+str(each_ccd)],
spectrum['wave_red_'+str(each_ccd)],
spectrum['uob_red_'+str(each_ccd)],
)
interpolated_spectrum['uob_norm_'+str(each_ccd)] = np.interp(
wave_cannon['ccd'+str(each_ccd)],
spectrum['wave_norm_'+str(each_ccd)],
spectrum['uob_norm_'+str(each_ccd)],
)
return interpolated_spectrum
def plot_spectrum(spectrum, normalisation = True, lines_to_indicate = None, save_as_png = False):
"""
This function plots the spectrum in 4 subplots for each arm of the HERMES spectrograph
INPUT:
spectrum = dictionary created by read_spectra()
normalisation = True or False (either normalised or un-normalised spectra are plotted)
save_as_png = Save figure as png if True
OUTPUT:
Plot that spectrum!
"""
f, axes = plt.subplots(4, 1, figsize = (15,10))
kwargs_sob = dict(c = 'k', label='Flux', rasterized=True)
kwargs_error_spectrum = dict(color = 'grey', label='Flux error', rasterized=True)
# Adjust keyword used for dictionaries and plot labels
if normalisation==True:
red_norm = 'norm'
else:
red_norm = 'red'
for each_ccd in [1, 2, 3, 4]:
axes[each_ccd-1].fill_between(
spectrum['wave_'+red_norm+'_'+str(each_ccd)],
spectrum['sob_'+red_norm+'_'+str(each_ccd)] - spectrum['uob_'+red_norm+'_'+str(each_ccd)],
spectrum['sob_'+red_norm+'_'+str(each_ccd)] + spectrum['uob_'+red_norm+'_'+str(each_ccd)],
**kwargs_error_spectrum
)
# Overplot observed spectrum a bit thicker
axes[each_ccd-1].plot(
spectrum['wave_'+red_norm+'_'+str(each_ccd)],
spectrum['sob_'+red_norm+'_'+str(each_ccd)],
**kwargs_sob
)
# Plot important lines if committed
        if lines_to_indicate is not None:
for each_line in lines_to_indicate:
if (float(each_line[0]) >= spectrum['wave_'+red_norm+'_'+str(each_ccd)][0]) & (float(each_line[0]) <= spectrum['wave_'+red_norm+'_'+str(each_ccd)][-1]):
axes[each_ccd-1].axvline(float(each_line[0]), color = each_line[2], ls='dashed')
if red_norm=='norm':
axes[each_ccd-1].text(float(each_line[0]), 1.25, each_line[1], color = each_line[2], ha='left', va='top')
# Plot layout
if red_norm == 'norm':
axes[each_ccd-1].set_ylim(-0.1,1.3)
else:
axes[each_ccd-1].set_ylim(0,1.3*np.median(spectrum['sob_'+red_norm+'_'+str(each_ccd)]))
        axes[each_ccd-1].set_xlabel('Wavelength CCD '+str(each_ccd)+r' [$\mathrm{\AA}$]')
axes[each_ccd-1].set_ylabel(r'Flux ('+red_norm+') [a.u.]')
if each_ccd == 1:
axes[each_ccd-1].legend(loc='lower left')
plt.tight_layout()
if save_as_png == True:
plt.savefig(str(spectrum['sobject_id'])+'_'+red_norm+'.png', dpi=200)
return f
```
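The wavelength grids assembled in `read_spectra` follow the linear WCS convention of the FITS headers (`CRVAL1` start wavelength, `CDELT1` dispersion, `NAXIS1` pixel count, `CRPIX1` reference pixel). A vectorized sketch with made-up, CCD1-like header values (not read from a real file):

```python
import numpy as np

# Hypothetical header values: start wavelength [Å], dispersion [Å/pixel], pixel count, reference pixel
start_wavelength, dispersion, nr_pixels, reference_pixel = 4715.94, 0.046, 4096, 1
wave = start_wavelength + (np.arange(nr_pixels) - reference_pixel + 1) * dispersion
print(wave[0], round(wave[-1], 2))
```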
### Execute and have fun looking at spectra
```
# Adjust directory you want to work in
working_directory = '/Users/buder/trunk/GALAH/'
working_directory = '/avatar/buder/trunk/GALAH/'
os.chdir(working_directory)
# You can activate a number of lines that will be plotted in the spectra
important_lines = np.array([
[4861.35, 'H' , 'red'],
[6562.79, 'H' , 'red'],
[6708. , 'Li', 'orange'],
])
# Last but not least, declare which sobject_ids shall be plotted
sobject_ids_to_plot = [
190211002201088
]
for each_sobject_id in sobject_ids_to_plot:
# read in spectrum
spectrum = read_spectra(each_sobject_id)
# interpolate spectrum onto The Cannon wavelength grid
interpolated_spectrum = interpolate_spectrum_onto_cannon_wavelength(spectrum)
# plot input spectrum
plot_spectrum(spectrum,
normalisation = False,
lines_to_indicate = None,
save_as_png = True
)
# # plot interpolated spectrum
# plot_spectrum(
# interpolated_spectrum,
# normalisation = True,
# lines_to_indicate = None,
# save_as_png = True
# )
```
# What is the True Normal Human Body Temperature?
#### Background
The mean normal body temperature was held to be 37$^{\circ}$C or 98.6$^{\circ}$F for more than 120 years since it was first conceptualized and reported by Carl Wunderlich in a famous 1868 book. But, is this value statistically correct?
<div class="span5 alert alert-info">
<h3>Exercises</h3>
<p>In this exercise, you will analyze a dataset of human body temperatures and employ the concepts of hypothesis testing, confidence intervals, and statistical significance.</p>
<p>Answer the following questions <b>in this notebook below and submit to your Github account</b>.</p>
<ol>
<li> Is the distribution of body temperatures normal?
<ul>
<li> Although this is not a requirement for the Central Limit Theorem to hold (read the introduction on Wikipedia's page about the CLT carefully: https://en.wikipedia.org/wiki/Central_limit_theorem), it gives us some peace of mind that the population may also be normally distributed if we assume that this sample is representative of the population.
<li> Think about the way you're going to check for the normality of the distribution. Graphical methods are usually used first, but there are also other ways: https://en.wikipedia.org/wiki/Normality_test
</ul>
<li> Is the sample size large? Are the observations independent?
<ul>
<li> Remember that this is a condition for the Central Limit Theorem, and hence the statistical tests we are using, to apply.
</ul>
<li> Is the true population mean really 98.6 degrees F?
<ul>
<li> First, try a bootstrap hypothesis test.
<li> Now, let's try frequentist statistical testing. Would you use a one-sample or two-sample test? Why?
<li> In this situation, is it appropriate to use the $t$ or $z$ statistic?
<li> Now try using the other test. How is the result different? Why?
</ul>
<li> Draw a small sample of size 10 from the data and repeat both frequentist tests.
<ul>
<li> Which one is the correct one to use?
<li> What do you notice? What does this tell you about the difference in application of the $t$ and $z$ statistic?
</ul>
<li> At what temperature should we consider someone's temperature to be "abnormal"?
<ul>
<li> As in the previous example, try calculating everything using the bootstrap approach, as well as the frequentist approach.
<li> Start by computing the margin of error and confidence interval. When calculating the confidence interval, keep in mind that you should use the appropriate formula for one draw, and not N draws.
</ul>
<li> Is there a significant difference between males and females in normal temperature?
<ul>
<li> What testing approach did you use and why?
<li> Write a story with your conclusion in the context of the original problem.
</ul>
</ol>
You can include written notes in notebook cells using Markdown:
- In the control panel at the top, choose Cell > Cell Type > Markdown
- Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
#### Resources
+ Information and data sources: http://www.amstat.org/publications/jse/datasets/normtemp.txt, http://www.amstat.org/publications/jse/jse_data_archive.htm
+ Markdown syntax: http://nestacms.com/docs/creating-content/markdown-cheat-sheet
</div>
****
```
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv('data/human_body_temperature.csv')
```
<div class="span5 alert alert-success">
<h2>SOLUTION: Is the distribution of body temperatures normal?</h2>
</div>
```
# First, a histogram
%matplotlib inline
plt.hist(df['temperature'])
plt.xlabel('Temperature')
plt.ylabel('Frequency')
plt.title('Histogram of Body Temperature')
plt.ylim(0, 40) # Add some buffer space at the top so the bar doesn't get cut off.
# Next, a quantile plot.
import numpy as np
import statsmodels.api as sm
mean = np.mean(df['temperature'])
sd = np.std(df['temperature'])
z = (df['temperature'] - mean) / sd
sm.qqplot(z, line='45')
# Finally, a normal distribution test. Not recommended!! Use only when you're not sure.
import scipy.stats as stats
stats.mstats.normaltest(df['temperature'])
```
<div class="span5 alert alert-success">
<h4>SOLUTION</h4>
<p>The histogram looks *very roughly* normally distributed. There is an implied bell shape, though there are some values above the mode that occur much less frequently than we would expect under a normal distribution. The shape is not so deviant as to call it some other distribution. </p>
<p>A quantile plot can help. The quantile plot computes percentiles for our data and also the percentiles for a normal distribution via sampling (mean 0, sd 1). If the quantiles/percentiles for both distributions match, we expect to see a more or less straight line of data points. Note that the quantile plot does pretty much follow a straight line, so this helps us conclude that the distribution is likely normal. Note that there are three outliers on the "high" end and two on the "low" end that cause deviations in the tail, but this is pretty typical.</p>
<p>Suppose we really aren't sure, or the plots tell us two different conclusions. We could confirm with a statistical significance test, though this should not be your first method of attack. The p-value from the normality test is 0.25 which is significantly above the usual cutoff of 0.05. The null hypothesis is that the distribution is normal. Since we fail to reject the null hypothesis, we conclude that the distribution is probably normal.</p>
</div>
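As a sanity check on how the significance test behaves, `normaltest` can be run on synthetic draws from a true normal distribution; the mean and standard deviation below merely mimic the temperature sample and are illustrative only:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
synthetic = rng.normal(98.25, 0.73, size=130)  # made-up normal sample, not the real data
stat, p = stats.normaltest(synthetic)
print('p-value:', p)  # for truly normal data, p exceeds 0.05 about 95% of the time
```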
<div class="span5 alert alert-success">
<h2>SOLUTION: Is the sample size large? Are the observations independent?</h2>
</div>
```
n = len(df['temperature'])
n
```
<div class="span5 alert alert-success">
<p>The sample size is 130. Literature typically suggests a lower limit of 30 observations in a sample for CLT to hold. In terms of CLT, the sample is large enough.</p>
<p>We must assume that the observations are independent. One person's body temperature should not have any effect on another person's body temperature, so under common sense conditions, the observations are independent. Note that this condition may potentially be violated if the researcher lacked common sense and performed this study by stuffing all of the participants shoulder to shoulder in a very hot and confined room. </p>
<p>Note that the temperatures <i>may</i> be dependent on age, gender, or health status, but this is a separate issue and does not affect our conclusion that <i>another person's</i> temperature does not affect someone else's temperature.</p>
</div>
<div class="span5 alert alert-success">
<h2>SOLUTION: Is the true population mean really 98.6 degrees F?</h2>
</div>
<div class="span5 alert alert-success">
<p>We will now perform a bootstrap hypothesis test with the following:</p>
<p>$H_0$: The mean of the sample and the true mean of 98.6 are the same. $\mu=\mu_0$</p>
<p>$H_A$: The means are different. $\mu\neq\mu_0$</p>
</div>
```
# Calculate the p-value using 100,000 bootstrap replicates
import numpy as np
temperature = df['temperature'].values
bootstrap_replicates = np.empty(100000)
size = len(bootstrap_replicates)
for i in range(size):
    bootstrap_sample = np.random.choice(temperature, size=len(temperature))
bootstrap_replicates[i] = np.mean(bootstrap_sample)
p = np.sum(bootstrap_replicates >= 98.6) / len(bootstrap_replicates)
print('p =', p)
```
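A stricter bootstrap formulation first shifts the sample so its mean equals the null value, then asks how often a resampled mean deviates at least as far as the observed one. A sketch on synthetic data (the real analysis would use `df['temperature']` instead of the made-up sample below):

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(98.25, 0.73, size=130)  # synthetic stand-in for the temperature column
shifted = sample - sample.mean() + 98.6     # impose the null hypothesis H0: mu = 98.6
reps = np.array([rng.choice(shifted, size=len(shifted)).mean() for _ in range(10000)])
p = np.mean(np.abs(reps - 98.6) >= abs(sample.mean() - 98.6))
print('p =', p)
```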
<div class="span5 alert alert-success">
<p>We are testing only if the true population mean temperature is 98.6. We are treating everyone as being in the same group, with one mean. We use a **one-sample** test. The population standard deviation is not given, so we assume it is not known. We do however know the sample standard deviation from the data and we know that the sample size is large enough for CLT to apply, so we can use a $z$-test.</p>
</div>
```
z = (mean - 98.6)/(sd / np.sqrt(n))
z
```
<div class="span5 alert alert-success">
Since the question does not ask if the true mean is greater than, or less than, 98.6 as the alternative hypothesis, we use a two-tailed test. We have two regions where we reject the null hypothesis: if $z < -1.96$ or if $z > 1.96$, assuming $\alpha = 0.05$. Since -5.48 < -1.96, we reject the null hypothesis: the true population mean temperature is NOT 98.6.
<p>We can also use a p-value:</p>
</div>
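The ±1.96 cutoff is simply the standard normal quantile at $1 - \alpha/2$, so it can be computed for any significance level rather than memorized:

```python
from scipy import stats

alpha = 0.05
z_crit = stats.norm.ppf(1 - alpha / 2)  # two-tailed critical value
print(round(z_crit, 2))  # 1.96
```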
```
stats.norm.cdf(z) * 2
# NOTE: Since CDF gives us $P(Z \le z)$ and this is a two-tailed test, we multiply the result by 2
```
<div class="span5 alert alert-success">
<p>Since the p-value is *way* below 0.05, we reject the null hypothesis. The population mean is not 98.6.</p>
<p>The $z$-test was the "correct" test to use in this case. But what if we used a $t$-test instead? The degrees of freedom is $n - 1 = 129$.</p>
</div>
```
t = (mean - 98.6)/(sd / np.sqrt(n))
```
<div class="span5 alert alert-success">
We find the critical value of $t$ and when $\vert t \vert > \vert t^* \vert$ we reject the null hypothesis.
</div>
```
t_critical = stats.t.ppf(0.05 / 2, n - 1)
t_critical
```
<div class="span5 alert alert-success">
<p>Note that the critical value of $t$ is $\pm 1.979$. This is pretty close to the $\pm 1.96$ we used for the $z$-test. *As the sample size gets larger, the Student's $t$ distribution converges to the normal distribution.* So in theory, even if your sample size is large you could use the $t$-test, but the pesky degrees of freedom step is likely why people do not. If we use a sample of size, say, 1000, the critical values are close to identical.</p>
<p>So, to answer the question, the result is NOT different! The only case where it would be different is if the $t$ statistic were between -1.96 and -1.979, which would be pretty rare.</p>
</div>
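The claimed convergence of the $t$ critical values toward the normal ones is easy to verify directly:

```python
from scipy import stats

# Two-tailed critical values at alpha = 0.05 for growing degrees of freedom
for dof in [29, 129, 999]:
    print(dof, round(stats.t.ppf(0.975, dof), 4))
print('normal', round(stats.norm.ppf(0.975), 4))
```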
<div class="span5 alert alert-success">
<h2>SOLUTION: At what temperature should we consider someone's temperature to be "abnormal"?</h2>
<p>We compute the confidence interval using $z^* = \pm 1.96$.</p>
<p>The margin of error is </p>
$$MOE = z^* \frac{\sigma}{\sqrt{n}}$$
</div>
```
sd = df['temperature'].std()
n = len(df['temperature'])
moe = 1.96 * sd / np.sqrt(n)
moe
mean = df['temperature'].mean()
ci = mean + np.array([-1, 1]) * moe
ci
```
<div class="span5 alert alert-success">At the 95% confidence level, we consider a temperature abnormal if it is below 98.12 degrees or above 98.38 degrees. Since the null hypothesis value 98.6 is not in the confidence interval, we reject the null hypothesis -- the true population mean is not 98.6 degrees.</div>
<div class="span5 alert alert-success">
We can also use the bootstrap approach.
</div>
```
# Define bootstrap functions:
def replicate(data, function):
"""Return replicate of a resampled data array."""
# Create the resampled array and return the statistic of interest:
return function(np.random.choice(data, size=len(data)))
def draw_replicates(data, function, size=1):
"""Draw bootstrap replicates."""
# Initialize array of replicates:
replicates = np.empty(size)
# Generate replicates:
for i in range(size):
replicates[i] = replicate(data, function)
return replicates
# Seed the random number generator:
np.random.seed(15)
# Draw bootstrap replicates of temperatures:
replicates = draw_replicates(df.temperature, np.mean, 10000)
# Compute the 99.9% confidence interval:
CI = np.percentile(replicates, [0.05, 99.95])
print('99.9% Confidence Interval:', CI)
```
<div class="span5 alert alert-success">
<h2>SOLUTION: Is there a significant difference between males and females in normal temperature?</h2>
<p>We use a two-sample test. Since the number of males is greater than 30 and the number of females is greater than 30, we use a two-sample z-test. Since the question just asks if there is a *difference* and doesn't specify a direction, we use a two-tailed test.</p>
$$z = \frac{(\bar{x}_M - \bar{x}_F) - 0}{\sqrt{\frac{\sigma_M^2}{n_M} + \frac{\sigma_F^2}{n_F}}}$$
```
males = df.gender == 'M'
diff_means = df.temperature[males].mean() - df.temperature[~males].mean()
sd_male = df.temperature[males].std()
sd_female = df.temperature[~males].std()
n_male = np.sum(males)
n_female = len(df.temperature) - n_male
z = diff_means / np.sqrt(((sd_male ** 2)/ n_male) + ((sd_female ** 2)/ n_female))
z
pval = stats.norm.cdf(z) * 2
pval
```
<div class="span5 alert alert-success">
<p>Since the p-value of 0.022 < 0.05, we reject the null hypothesis that the mean body temperature for men and women is the same. The difference in mean body temperature between men and women is statistically significant.</p>
</div>
```
diff_means + np.array([-1, 1]) * 1.96 * np.sqrt(((sd_male ** 2)/ n_male) + ((sd_female ** 2)/ n_female))
```
<div class="span5 alert alert-success">Since the null hypothesized 0 is not in the confidence interval, we reject the null hypothesis with the same conclusion as the hypothesis test.</div>
<div class="span5 alert alert-success">Now let's try the hacker stats approach.</div>
```
males = df.gender == 'M'
male_temperature = df.temperature[males].values
female_temperature = df.temperature[~males].values
male_and_female_diff = np.abs(male_temperature.mean() - female_temperature.mean())
permutation_replicates = np.empty(100000)
size = len(permutation_replicates)
for i in range(size):
combined_perm_temperatures = np.random.permutation(np.concatenate((male_temperature, female_temperature)))
male_permutation = combined_perm_temperatures[:len(male_temperature)]
female_permutation = combined_perm_temperatures[len(male_temperature):]
permutation_replicates[i] = np.abs(np.mean(male_permutation) - np.mean(female_permutation))
p_val = np.sum(permutation_replicates >= male_and_female_diff) / len(permutation_replicates)
print('p =', p_val)
```
```
import pandas as pd
import numpy as np
# Matplotlib forms basis for visualization in Python
import matplotlib.pyplot as plt
# We will use the Seaborn library
import seaborn as sns
sns.set()
# Graphics in SVG format are more sharp and legible
%config InlineBackend.figure_format = 'svg'
# Increase the default plot size and set the color scheme
plt.rcParams['figure.figsize'] = (8, 5)
plt.rcParams['image.cmap'] = 'viridis'
df = pd.read_csv('https://stepik.org/media/attachments/lesson/413464/RFM_ht_data.csv')
df.head()
df.dtypes
df['InvoiceNo'].unique()
df['InvoiceDate'] = pd.to_datetime(df['InvoiceDate'])
df['CustomerCode'] = df['CustomerCode'].apply(str)
df.shape[0]
df['InvoiceDate'].describe()
df.head()
last_date = df['InvoiceDate'].max()
rfmTable = df.groupby('CustomerCode').agg({'InvoiceDate': lambda x: (last_date - x.max()).days, # Recency: days since the last order
                                           'InvoiceNo': lambda x: len(x), # Frequency: number of orders
                                           'Amount': lambda x: x.sum()}) # Monetary value: total amount across all orders
rfmTable['InvoiceDate'] = rfmTable['InvoiceDate'].astype(int)
rfmTable.rename(columns={'InvoiceDate': 'recency',
'InvoiceNo': 'frequency',
'Amount': 'monetary_value'}, inplace=True)
rfmTable.head()
rfmTable.shape[0]
quantiles = rfmTable.quantile([0.25, 0.5, 0.75])
quantiles
def RClass(value,parameter_name,quantiles_table):
if value <= quantiles_table[parameter_name][0.25]:
return 1
elif value <= quantiles_table[parameter_name][0.50]:
return 2
elif value <= quantiles_table[parameter_name][0.75]:
return 3
else:
return 4
def FMClass(value, parameter_name,quantiles_table):
if value <= quantiles_table[parameter_name][0.25]:
return 4
elif value <= quantiles_table[parameter_name][0.50]:
return 3
elif value <= quantiles_table[parameter_name][0.75]:
return 2
else:
return 1
rfmTable.head()
rfmSegmentation = rfmTable
rfmSegmentation.dtypes
rfmSegmentation['R_Quartile'] = rfmSegmentation['recency'].apply(RClass, args=('recency',quantiles))
rfmSegmentation['F_Quartile'] = rfmSegmentation['frequency'].apply(FMClass, args=('frequency',quantiles))
rfmSegmentation['M_Quartile'] = rfmSegmentation['monetary_value'].apply(FMClass, args=('monetary_value',quantiles))
rfmSegmentation['RFMClass'] = rfmSegmentation.R_Quartile.map(str) \
+ rfmSegmentation.F_Quartile.map(str) \
+ rfmSegmentation.M_Quartile.map(str)
rfmSegmentation.head()
pd.crosstab(index = rfmSegmentation.R_Quartile, columns = rfmSegmentation.F_Quartile)
rfm_table = rfmSegmentation.pivot_table(
index='R_Quartile',
columns='F_Quartile',
values='monetary_value',
aggfunc=np.median).applymap(int)
sns.heatmap(rfm_table, cmap="YlGnBu", annot=True, fmt=".0f", linewidths=4.15, annot_kws={"size": 10},yticklabels=4);
```
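The quartile scoring above can be traced on a tiny made-up table: four customers chosen so that each lands in a different quartile of every metric. `r_class` and `fm_class` below mirror the `RClass`/`FMClass` rules:

```python
import pandas as pd

# Made-up customers, one per quartile, to trace the scoring logic
toy = pd.DataFrame({'recency': [1, 10, 40, 90],
                    'frequency': [20, 10, 4, 1],
                    'monetary_value': [500.0, 200.0, 80.0, 10.0]})
quartiles = toy.quantile([0.25, 0.5, 0.75])

def r_class(v, col):
    # Low recency is good: 1 = best quartile, 4 = worst (same rule as RClass above)
    q = quartiles[col]
    return 1 if v <= q[0.25] else 2 if v <= q[0.50] else 3 if v <= q[0.75] else 4

def fm_class(v, col):
    # High frequency/monetary value is good, so the scale is inverted (as in FMClass)
    return 5 - r_class(v, col)

toy['RFMClass'] = (toy['recency'].apply(r_class, args=('recency',)).map(str)
                   + toy['frequency'].apply(fm_class, args=('frequency',)).map(str)
                   + toy['monetary_value'].apply(fm_class, args=('monetary_value',)).map(str))
print(toy['RFMClass'].tolist())  # ['111', '222', '333', '444']
```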
```
import re
import os
import pandas as pd
import numpy as np
import scipy as sp
import seaborn as sns
from ipywidgets import interact
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
import matplotlib
import matplotlib.pyplot as plt
import json
%matplotlib inline
import findspark
findspark.init()
from pyspark.sql import *
from pyspark.sql.functions import *
from pyspark.sql.functions import min
from pyspark.sql import SparkSession
from pyspark import SparkContext
from pandas.plotting import scatter_matrix
from datetime import datetime, timedelta
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext
p_df = pd.read_csv('pokemon.csv')
c_df = pd.read_csv('combats.csv')
display(p_df.head(5))
display(c_df.head(5))
display(p_df.describe())
display(c_df.describe())
display(p_df['Class 1'].unique())
display(p_df['Class 2'].unique())
p_df.hist(column = 'Attack')
p_df.hist(column = 'Defense')
ax = p_df.hist(column='Sp. Atk', alpha = 0.5)
p_df.hist(column='Sp. Def', ax = ax, alpha = 0.5)
plt.legend(['Sp. Atk', 'Sp. Def'])
plt.title("Sp. Atk + Sp. Def")
p_df.plot(kind = 'scatter', x = 'Sp. Atk', y = 'Sp. Def')
p_df['Attack/Defense'] = p_df['Attack'] / p_df['Defense']
display(p_df.sort_values(by=['Attack/Defense'], ascending = False)[:3])
print("list the names of the 3 Pokémon with highest attack-over-defense ratio:\n")
print("\n".join(p_df.sort_values(by=['Attack/Defense'], ascending = False)[:3]['Name'].tolist()))
display(p_df.sort_values(by=['Attack/Defense'], ascending = True)[:3])
print("list the names of the 3 Pokémon with lowest attack-over-defense ratio:\n")
print("\n".join(p_df.sort_values(by=['Attack/Defense'], ascending = True)[:3]['Name'].tolist()))
display(c_df.head(5))
print('list the names of the 10 Pokémon with the largest number of victories.\n')
top_df = c_df.groupby('Winner').size().reset_index(name='counts').sort_values(by='counts', ascending = False)[:10]
print("\n".join(top_df.merge(p_df, left_on = 'Winner', right_on = 'pid')['Name'].tolist()))
# Grass-type (and not Rock-type) vs. Rock-type (and not Grass-type) Pokémon
grass_class = p_df[((p_df['Class 1'] == 'Grass') | (p_df['Class 2'] == 'Grass')) &
                   ~((p_df['Class 1'] == 'Rock') | (p_df['Class 2'] == 'Rock'))]
rock_class = p_df[((p_df['Class 1'] == 'Rock') | (p_df['Class 2'] == 'Rock')) &
                  ~((p_df['Class 1'] == 'Grass') | (p_df['Class 2'] == 'Grass'))]
display(grass_class.head(5))
display(rock_class.head(5))
f, (ax1, ax2) = plt.subplots(1, 2, sharey = True)
grass_class.boxplot(column = 'Attack', return_type='axes', ax = ax1)
rock_class.boxplot(column = 'Attack', ax = ax2)
# Register the DataFrames as temporary views so the SQL query below can reference them
spark.createDataFrame(p_df).createOrReplaceTempView('Pokemons')
spark.createDataFrame(c_df).createOrReplaceTempView('Combats')
spark.sql("""
SELECT Combats.Winner, Pokemons.Name, COUNT(*) as TotalWins
FROM Combats
INNER JOIN Pokemons on Pokemons.pid = Combats.Winner
GROUP BY Combats.Winner, Pokemons.Name
ORDER BY TotalWins DESC
""").show(10)
X_ext = c_df.merge(p_df, left_on='First_pokemon', right_on='pid') \
.merge(p_df, left_on='Second_pokemon', right_on='pid', suffixes=('_x', '_y'))
X = X_ext.drop(columns=['Winner', 'First_pokemon', 'Second_pokemon', 'pid_x', 'pid_y', 'Name_x', 'Name_y', 'Attack/Defense_x', 'Attack/Defense_y'])
categories = pd.unique(p_df[['Class 1', 'Class 2']].values.ravel('K'))[:-1]
X['Class 1_x'] = pd.Categorical(X['Class 1_x'], categories=categories).codes
X['Class 1_y'] = pd.Categorical(X['Class 1_y'], categories=categories).codes
X['Class 2_x'] = pd.Categorical(X['Class 2_x'], categories=categories).codes
X['Class 2_y'] = pd.Categorical(X['Class 2_y'], categories=categories).codes
display(X)
Y = X_ext['Winner'] == X_ext['First_pokemon']
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification
N = len(X)
N
train_size = int(N * 0.9)
test_size = N - train_size
permutation = np.random.permutation(N)
train_set_index = permutation[:train_size]
test_set_index = permutation[train_size:]
print(train_set_index)
print(test_set_index)
X_train = X.iloc[train_set_index]
Y_train = Y.iloc[train_set_index]
X_test = X.iloc[test_set_index]
Y_test = Y.iloc[test_set_index]
n_estimators = [10, 25, 50, 100]
max_depths = [2, 4, 10]
def k_fold(X, Y, K):
permutation = np.random.permutation(N)
for k in range(K):
X_test = X.iloc[permutation[k * test_size : (k + 1) * test_size]]
Y_test = Y.iloc[permutation[k * test_size : (k + 1) * test_size]]
X_train = X.iloc[permutation[:k*test_size].tolist() + permutation[(k + 1)*test_size:].tolist()]
Y_train = Y.iloc[permutation[:k*test_size].tolist() + permutation[(k + 1)*test_size:].tolist()]
yield(X_train, Y_train, X_test, Y_test)
best_acc = 0
best_n_est = 0
best_max_depth = 0
for n_estimator in n_estimators:
for max_depth in max_depths:
clf = RandomForestClassifier(n_estimators=n_estimator, max_depth=max_depth, random_state=0)
accuracies = []
for (X_train, Y_train, X_test, Y_test) in k_fold(X, Y, 5):
clf.fit(X_train, Y_train)
accuracies.append((clf.predict(X_test) == Y_test).sum() / test_size)
accuracy = np.mean(accuracies)
print(n_estimator, max_depth, accuracy)
if accuracy > best_acc:
best_acc = accuracy
best_n_est = n_estimator
best_max_depth = max_depth
print('Best accuracy: ', best_acc)
print('Best number of estimators: ', best_n_est)
print('Best max depth: ', best_max_depth)
forest = RandomForestClassifier(n_estimators=best_n_est, max_depth=best_max_depth, random_state=0)
forest.fit(X_train, Y_train)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
print("%d. feature %d (%s) (%f)" % (f + 1, indices[f], X.columns[indices[f]], importances[indices[f]]))
# Plot the feature importances of the forest
plt.figure()
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
```
(5 points) Compute the winning ratio (number of wins divided by number of battles) for all Pokémon. Show the 10 Pokémon with the highest ratio and describe what they have in common with respect to their features. Discuss your feature-importance results from question 2.7 in this context.
```
top_df = c_df.groupby('Winner').size().reset_index(name='WinCount').sort_values(by='WinCount', ascending = False)
first_df = c_df.groupby('First_pokemon').size().reset_index(name='Battles').sort_values(by='Battles', ascending = False)
second_df = c_df.groupby('Second_pokemon').size().reset_index(name='Battles').sort_values(by='Battles', ascending = False)
merged = first_df.merge(second_df, left_on = 'First_pokemon', right_on='Second_pokemon')
merged['Battles'] = merged['Battles_x'] + merged['Battles_y']
merged = merged.drop(columns = ['Second_pokemon', 'Battles_x', "Battles_y"])
p_df_ext = p_df.merge(top_df, left_on='pid', right_on='Winner')
p_df_ext = p_df_ext.merge(merged, left_on='pid', right_on='First_pokemon')
p_df_ext = p_df_ext.drop(columns = ['First_pokemon', 'Winner'])
p_df_ext["WinningRatio"] = p_df_ext['WinCount'] / p_df_ext['Battles']
display(p_df_ext.head(5))
p_df_ext.sort_values(by = 'WinningRatio', ascending = False)[:10]
p_df_ext.describe()
wins = np.zeros(shape = (800, 800))
for row in c_df.iterrows():
if row[1]['First_pokemon'] == row[1]['Winner']:
wins[row[1]['First_pokemon'] - 1][row[1]['Second_pokemon'] - 1] += 1
else:
wins[row[1]['Second_pokemon'] - 1][row[1]['First_pokemon'] - 1] += 1
G = np.zeros(shape = (800, 800))
for i in range(800):
for j in range(800):
if wins[i][j] > wins[j][i]:
G[i][j] = 1
        elif wins[j][i] > wins[i][j]:
G[j][i] = 1
A = G + (G @ G)
scores = A.sum(axis = 1)
p_df[p_df['pid'].isin(np.argsort(scores)[-10:])]
```
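As a cross-check of the winning-ratio computation above, the same ratio can be sketched more compactly with `value_counts`; the miniature combats table below is made up, using the same column names as `c_df`:

```python
import pandas as pd

# Hypothetical miniature combats table with the same column names as c_df.
c = pd.DataFrame({
    'First_pokemon':  [1, 2, 1, 3],
    'Second_pokemon': [2, 3, 3, 1],
    'Winner':         [1, 2, 3, 3],
})

# Battles = appearances in either slot; wins = appearances as Winner.
battles = (c['First_pokemon'].value_counts()
           .add(c['Second_pokemon'].value_counts(), fill_value=0))
wins = c['Winner'].value_counts()
win_ratio = wins.div(battles, fill_value=0).sort_values(ascending=False)
print(win_ratio)
```

This avoids the intermediate merges at the cost of relying on index alignment between the two `value_counts` results.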
<a id='Top'></a>
# MultiSurv results by cancer type<a class='tocSkip'></a>
C-index value results for each cancer type of the best MultiSurv model trained on all-cancer data.
```
%load_ext autoreload
%autoreload 2
%load_ext watermark
import sys
import os
import numpy as np
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
import torch
# Make modules in "src" dir visible
project_dir = os.path.split(os.getcwd())[0]
if project_dir not in sys.path:
sys.path.append(os.path.join(project_dir, 'src'))
import dataset
from model import Model
import utils
matplotlib.style.use('multisurv.mplstyle')
```
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Load-model" data-toc-modified-id="Load-model-1"><span class="toc-item-num">1 </span>Load model</a></span></li><li><span><a href="#Evaluate" data-toc-modified-id="Evaluate-2"><span class="toc-item-num">2 </span>Evaluate</a></span></li><li><span><a href="#Result-graph" data-toc-modified-id="Result-graph-3"><span class="toc-item-num">3 </span>Result graph</a></span><ul class="toc-item"><li><span><a href="#Save-to-files" data-toc-modified-id="Save-to-files-3.1"><span class="toc-item-num">3.1 </span>Save to files</a></span></li></ul></li><li><span><a href="#Metric-correlation-with-other-attributes" data-toc-modified-id="Metric-correlation-with-other-attributes-4"><span class="toc-item-num">4 </span>Metric correlation with other attributes</a></span><ul class="toc-item"><li><span><a href="#Collect-feature-representations" data-toc-modified-id="Collect-feature-representations-4.1"><span class="toc-item-num">4.1 </span>Collect feature representations</a></span></li><li><span><a href="#Compute-dispersion-and-add-to-selected-metric-table" data-toc-modified-id="Compute-dispersion-and-add-to-selected-metric-table-4.2"><span class="toc-item-num">4.2 </span>Compute dispersion and add to selected metric table</a></span></li><li><span><a href="#Plot" data-toc-modified-id="Plot-4.3"><span class="toc-item-num">4.3 </span>Plot</a></span></li></ul></li></ul></div>
```
DATA = utils.INPUT_DATA_DIR
MODELS = utils.TRAINED_MODEL_DIR
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
```
# Load model
```
dataloaders = utils.get_dataloaders(
data_location=DATA,
labels_file='../data/labels.tsv',
modalities=['clinical', 'mRNA'],
# exclude_patients=exclude_cancers,
return_patient_id=True
)
multisurv = Model(dataloaders=dataloaders, device=device)
multisurv.load_weights(os.path.join(MODELS, 'clinical_mRNA_lr0.005_epoch43_acc0.81.pth'))
```
# Evaluate
```
def get_patients_with(cancer_type, split_group='test'):
labels = pd.read_csv('../data/labels.tsv', sep='\t')
cancer_labels = labels[labels['project_id'] == cancer_type]
group_cancer_labels = cancer_labels[cancer_labels['group'] == split_group]
return list(group_cancer_labels['submitter_id'])
%%time
results = {}
minimum_n_patients = 0
cancer_types = pd.read_csv('../data/labels.tsv', sep='\t').project_id.unique()
for i, cancer_type in enumerate(cancer_types):
print('-' * 44)
print(' ' * 17, f'{i + 1}.', cancer_type)
print('-' * 44)
patients = get_patients_with(cancer_type)
if len(patients) < minimum_n_patients:
continue
exclude_patients = [p for p in dataloaders['test'].dataset.patient_ids
if not p in patients]
data = utils.get_dataloaders(
data_location=DATA,
labels_file='../data/labels.tsv',
modalities=['clinical', 'mRNA'],
exclude_patients=exclude_patients,
return_patient_id=True
)['test'].dataset
results[cancer_type] = utils.Evaluation(model=multisurv, dataset=data, device=device)
results[cancer_type].run_bootstrap()
print()
print()
print()
%%time
data = utils.get_dataloaders(
data_location=DATA,
labels_file='../data/labels.tsv',
modalities=['clinical', 'mRNA'],
return_patient_id=True
)['test'].dataset
results['All'] = utils.Evaluation(model=multisurv, dataset=data, device=device)
results['All'].run_bootstrap()
print()
```
In order to avoid very __noisy values__, establish a __minimum threshold__ for the number of patients in each given cancer type.
```
minimum_n_patients = 20
cancer_types = pd.read_csv('../data/labels.tsv', sep='\t').project_id.unique()
selected_cancer_types = ['All']
print('-' * 40)
print(' Cancer Ctd IBS # patients')
print('-' * 40)
for cancer_type in sorted(list(cancer_types)):
patients = get_patients_with(cancer_type)
if len(patients) > minimum_n_patients:
selected_cancer_types.append(cancer_type)
ctd = str(round(results[cancer_type].c_index_td, 3))
ibs = str(round(results[cancer_type].ibs, 3))
message = ' ' + cancer_type
message += ' ' * (11 - len(message)) + ctd
message += ' ' * (20 - len(message)) + ibs
message += ' ' * (32 - len(message)) + str(len(patients))
print(message)
# print(' ' + cancer_type + ' ' * (10 - len(cancer_type)) + \
# ctd + ' ' * (10 - len(ibs)) + ibs + ' ' * (13 - len(ctd)) \
# + str(len(patients)))
def format_bootstrap_output(evaluator):
results = evaluator.format_results()
for metric in results:
results[metric] = results[metric].split(' ')
val = results[metric][0]
ci_low, ci_high = results[metric][1].split('(')[1].split(')')[0].split('-')
results[metric] = val, ci_low, ci_high
results[metric] = [float(x) for x in results[metric]]
return results
formatted_results = {}
# for cancer_type in results:
for cancer_type in sorted(selected_cancer_types):
formatted_results[cancer_type] = format_bootstrap_output(results[cancer_type])
formatted_results
```
# Result graph
Exclude cancer types with less than a chosen minimum number of patients, to avoid extremely noisy results.
```
utils.plot.show_default_colors()
PLOT_SIZE = (15, 4)
default_colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
def get_metric_results(metric, data):
df = pd.DataFrame()
df['Cancer type'] = data.keys()
val, err = [], []
    for cancer in data:
        values = data[cancer][metric]
val.append(values[0])
err.append((values[0] - values[1], values[2] - values[0]))
df[metric] = val
err = np.swapaxes(np.array(err), 1, 0)
return df, err
def plot_results(metric, df, err, y_lim=None, y_label=None, h_lines=[1, 0.5]):
fig = plt.figure(figsize=PLOT_SIZE)
ax = fig.add_subplot(1, 1, 1)
for y in h_lines:
ax.axhline(y, linestyle='--', color='grey')
ax.bar(df['Cancer type'][:1], df[metric][:1], yerr=err[:, :1],
align='center', ecolor=default_colors[0],
alpha=0.5, capsize=5)
ax.bar(df['Cancer type'][1:], df[metric][1:], yerr=err[:, 1:],
align='center', color=default_colors[6], ecolor=default_colors[6],
alpha=0.5, capsize=5)
if y_lim is None:
y_lim = (0, 1)
ax.set_ylim(y_lim)
ax.set_title('')
ax.set_xlabel('Cancer types')
if y_label is None:
ax.set_ylabel(metric + ' (95% CI)')
else:
ax.set_ylabel(y_label)
return fig
metric='Ctd'
df, err = get_metric_results(metric, formatted_results)
fig_ctd = plot_results(metric, df, err, y_label='$C^{td}$ (95% CI)')
metric='IBS'
df, err = get_metric_results(metric, formatted_results)
fig_ibs = plot_results(metric, df, err, y_lim=(0, 0.35), y_label=None, h_lines=[0.25])
```
## Save to files
```
%%javascript
IPython.notebook.kernel.execute('nb_name = "' + IPython.notebook.notebook_name + '"')
pdf_file = nb_name.split('.ipynb')[0] + '_Ctd'
utils.plot.save_plot_for_figure(figure=fig_ctd, file_name=pdf_file)
pdf_file = nb_name.split('.ipynb')[0] + '_IBS'
utils.plot.save_plot_for_figure(figure=fig_ibs, file_name=pdf_file)
```
# Watermark<a class='tocSkip'></a>
```
%watermark --iversions
%watermark -v
print()
%watermark -u -n
```
[Top of the page](#Top)
## **Initialize the connection**
```
import sqlalchemy, os
from sqlalchemy import create_engine
import pandas as pd
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%reload_ext sql
%config SqlMagic.displaylimit = 5
%config SqlMagic.feedback = False
%config SqlMagic.autopandas = True
hxe_connection = 'hana://ML_USER:Welcome18@hxehost:39015';
%sql $hxe_connection
pd.options.display.max_rows = 1000
pd.options.display.max_colwidth = 1000
```
# **Lag 1 And Cycles**
## Visualize the data
```
%%sql
result <<
select
l1cnn.time, l1cnn.signal as signal , l1cwn.signal as signal_wn, l1cnn.signal - l1cwn.signal as delta
from
forecast_lag_1_and_cycles l1cnn
join forecast_lag_1_and_cycles_and_wn l1cwn
on l1cnn.time = l1cwn.time
result = %sql select \
l1cnn.time, l1cnn.signal as signal , l1cwn.signal as signal_wn, l1cnn.signal - l1cwn.signal as delta \
from \
forecast_lag_1_and_cycles l1cnn \
join forecast_lag_1_and_cycles_and_wn l1cwn \
on l1cnn.time = l1cwn.time
time = matplotlib.dates.date2num(result.time)
fig, ax = plt.subplots()
ax.plot(time, result.signal, 'ro-', markersize=2, color='blue')
ax.plot(time, result.signal_wn, 'ro-', markersize=2, color='red')
ax.bar (time, result.delta , color='green')
ax.xaxis_date()
fig.autofmt_xdate()
fig.set_size_inches(20, 12)
plt.show()
```
## **Dates & intervals**
```
%%sql
select 'max' as indicator, to_varchar(max(time)) as value
from forecast_lag_1_and_cycles union all
select 'min' , to_varchar(min(time))
from forecast_lag_1_and_cycles union all
select 'delta days' , to_varchar(days_between(min(time), max(time)))
from forecast_lag_1_and_cycles union all
select 'count' , to_varchar(count(1))
from forecast_lag_1_and_cycles
%%sql
select 'max' as indicator, to_varchar(max(time)) as value
from forecast_lag_1_and_cycles_and_wn union all
select 'min' , to_varchar(min(time))
from forecast_lag_1_and_cycles_and_wn union all
select 'delta days' , to_varchar(days_between(min(time), max(time)))
from forecast_lag_1_and_cycles_and_wn union all
select 'count' , to_varchar(count(1))
from forecast_lag_1_and_cycles_and_wn
%%sql
select interval, count(1) as count
from (
select days_between (lag(time) over (order by time asc), time) as interval
from forecast_lag_1_and_cycles
order by time asc
)
where interval is not null
group by interval;
```
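For comparison, the interval count above can also be sketched client-side in pandas, assuming the `time` column has been loaded as datetimes (the dates below are made up):

```python
import pandas as pd

# Made-up daily series with one two-day gap, mimicking the "time" column.
times = pd.Series(pd.to_datetime(
    ['2001-01-01', '2001-01-02', '2001-01-03', '2001-01-05']))

# Equivalent of days_between(lag(time) over (order by time), time),
# grouped and counted by interval length.
intervals = times.sort_values().diff().dt.days.dropna()
print(intervals.value_counts())
```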
## **Generic statistics**
```
%%sql
with data as (
select l1cnn.signal as value_nn, l1cwn.signal as value_wn
from forecast_lag_1_and_cycles l1cnn join forecast_lag_1_and_cycles_and_wn l1cwn on l1cnn.time = l1cwn.time
)
select 'max' as indicator , round(max(value_nn), 2) as value_nn
, round(max(value_wn), 2) as value_wn from data union all
select 'min' , round(min(value_nn), 2)
, round(min(value_wn), 2) from data union all
select 'delta min/max' , round(max(value_nn) - min(value_nn), 2)
, round(max(value_wn) - min(value_wn), 2) from data union all
select 'avg' , round(avg(value_nn), 2)
, round(avg(value_wn), 2) from data union all
select 'median' , round(median(value_nn), 2)
, round(median(value_wn), 2) from data union all
select 'stddev' , round(stddev(value_nn), 2)
, round(stddev(value_wn), 2) from data
result = %sql select row_number() over (order by signal asc) as row_num, signal from forecast_lag_1_and_cycles order by 1, 2;
result_wn = %sql select row_number() over (order by signal asc) as row_num, signal from forecast_lag_1_and_cycles_and_wn order by 1, 2;
fig, ax = plt.subplots()
ax.plot(result.row_num, result.signal, 'ro-', markersize=2, color='blue')
ax.plot(result_wn.row_num, result_wn.signal, 'ro-', markersize=2, color='red')
fig.set_size_inches(20, 12)
plt.show()
```
## **Data Distribution**
```
%%sql
with data as (
select ntile(10) over (order by signal asc) as tile, signal
from forecast_lag_1_and_cycles
where signal is not null
)
select tile
, round(max(signal), 2) as max
, round(min(signal), 2) as min
, round(max(signal) - min(signal), 2) as "delta min/max"
, round(avg(signal), 2) as avg
, round(median(signal), 2) as median
, round(abs(avg(signal) - median(signal)), 2) as "delta avg/median"
, round(stddev(signal), 2) as stddev
from data
group by tile
%%sql
with data as (
select ntile(10) over (order by signal asc) as tile, signal
from forecast_lag_1_and_cycles_and_wn
where signal is not null
)
select tile
, round(max(signal), 2) as max
, round(min(signal), 2) as min
, round(max(signal) - min(signal), 2) as "delta min/max"
, round(avg(signal), 2) as avg
, round(median(signal), 2) as median
, round(abs(avg(signal) - median(signal)), 2) as "delta avg/median"
, round(stddev(signal), 2) as stddev
from data
group by tile
%%sql
with data as (
select ntile(12) over (order by signal asc) as tile, signal
from forecast_lag_1_and_cycles
where signal is not null
)
select tile
, round(max(signal), 2) as max
, round(min(signal), 2) as min
, round(max(signal) - min(signal), 2) as "delta min/max"
, round(avg(signal), 2) as avg
, round(median(signal), 2) as median
, round(abs(avg(signal) - median(signal)), 2) as "delta avg/median"
, round(stddev(signal), 2) as stddev
from data
group by tile
```
# Preprocessing for numerical features
In this notebook, we will still use only numerical features.
We will introduce these new aspects:
* an example of preprocessing, namely **scaling numerical variables**;
* using a scikit-learn **pipeline** to chain preprocessing and model
training;
* assessing the generalization performance of our model via **cross-validation**
instead of a single train-test split.
## Data preparation
First, let's load the full adult census dataset.
```
import pandas as pd
adult_census = pd.read_csv("../datasets/adult-census.csv")
# to display nice model diagram
from sklearn import set_config
set_config(display='diagram')
```
We will now drop the target from the data we will use to train our
predictive model.
```
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=target_name)
```
Then, we select only the numerical columns, as seen in the previous
notebook.
```
numerical_columns = [
"age", "capital-gain", "capital-loss", "hours-per-week"]
data_numeric = data[numerical_columns]
```
Finally, we can divide our dataset into a train and test sets.
```
from sklearn.model_selection import train_test_split
data_train, data_test, target_train, target_test = train_test_split(
data_numeric, target, random_state=42)
```
## Model fitting with preprocessing
A range of preprocessing algorithms in scikit-learn allow us to transform
the input data before training a model. In our case, we will standardize the
data and then train a new logistic regression model on that new version of
the dataset.
Let's start by printing some statistics about the training data.
```
data_train.describe()
```
We see that the dataset's features span across different ranges. Some
algorithms make some assumptions regarding the feature distributions and
usually normalizing features will be helpful to address these assumptions.
<div class="admonition tip alert alert-warning">
<p class="first admonition-title" style="font-weight: bold;">Tip</p>
<p>Here are some reasons for scaling features:</p>
<ul class="last simple">
<li>Models that rely on the distance between a pair of samples, for instance
k-nearest neighbors, should be trained on normalized features to make each
feature contribute approximately equally to the distance computations.</li>
<li>Many models such as logistic regression use a numerical solver (based on
gradient descent) to find their optimal parameters. This solver converges
faster when the features are scaled.</li>
</ul>
</div>
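A small numerical illustration of the first point (the feature values below are made up): without scaling, the feature with the largest range dominates the squared Euclidean distance that a model like k-nearest neighbors relies on.

```python
import numpy as np

# Two samples: feature 0 spans roughly [0, 1], feature 1 roughly [0, 10000].
a = np.array([0.2, 1000.0])
b = np.array([0.9, 1020.0])

# Share of the squared Euclidean distance contributed by each feature.
diff_sq = (a - b) ** 2
share = diff_sq / diff_sq.sum()
print(share)  # the wide-range feature contributes almost everything
```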
Whether or not a machine learning model requires scaling the features depends
on the model family. Linear models such as logistic regression generally
benefit from scaling the features while other models such as decision trees
do not need such preprocessing (but will not suffer from it).
We show how to apply such normalization using a scikit-learn transformer
called `StandardScaler`. This transformer shifts and scales each feature
individually so that they all have a 0-mean and a unit standard deviation.
We will investigate different steps used in scikit-learn to achieve such a
transformation of the data.
First, one needs to call the method `fit` in order to learn the scaling from
the data.
```
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(data_train)
```
The `fit` method for transformers is similar to the `fit` method for
predictors. The main difference is that the former has a single argument (the
data matrix), whereas the latter has two arguments (the data matrix and the
target).

In this case, the algorithm needs to compute the mean and standard deviation
for each feature and store them into some NumPy arrays. Here, these
statistics are the model states.
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">The fact that the model states of this scaler are arrays of means and
standard deviations is specific to the <tt class="docutils literal">StandardScaler</tt>. Other
scikit-learn transformers will compute different statistics and store them
as model states, in the same fashion.</p>
</div>
We can inspect the computed means and standard deviations.
```
scaler.mean_
scaler.scale_
```
<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">scikit-learn convention: if an attribute is learned from the data, its name
ends with an underscore (i.e. <tt class="docutils literal">_</tt>), as in <tt class="docutils literal">mean_</tt> and <tt class="docutils literal">scale_</tt> for the
<tt class="docutils literal">StandardScaler</tt>.</p>
</div>
Scaling the data is applied to each feature individually (i.e. each column in
the data matrix). For each feature, we subtract its mean and divide by its
standard deviation.
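This per-feature recipe can be sketched directly in NumPy (a toy matrix stands in for `data_train` here):

```python
import numpy as np

# Toy numeric matrix standing in for data_train (two features).
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# Per-feature standardization: subtract the column mean, divide by the
# column standard deviation -- what StandardScaler's transform computes.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

print(X_scaled.mean(axis=0))  # approximately [0, 0]
print(X_scaled.std(axis=0))   # [1, 1]
```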
Once we have called the `fit` method, we can perform data transformation by
calling the method `transform`.
```
data_train_scaled = scaler.transform(data_train)
data_train_scaled
```
Let's illustrate the internal mechanism of the `transform` method and put it
to perspective with what we already saw with predictors.

The `transform` method for transformers is similar to the `predict` method
for predictors. It uses a predefined function, called a **transformation
function**, and uses the model states and the input data. However, instead of
outputting predictions, the job of the `transform` method is to output a
transformed version of the input data.
Finally, the method `fit_transform` is a shorthand method to call
successively `fit` and then `transform`.

```
data_train_scaled = scaler.fit_transform(data_train)
data_train_scaled
data_train_scaled = pd.DataFrame(data_train_scaled,
columns=data_train.columns)
data_train_scaled.describe()
```
We can easily combine these sequential operations with a scikit-learn
`Pipeline`, which chains together operations and is used as any other
classifier or regressor. The helper function `make_pipeline` will create a
`Pipeline`: it takes as arguments the successive transformations to perform,
followed by the classifier or regressor model.
```
import time
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
model = make_pipeline(StandardScaler(), LogisticRegression())
model
```
The `make_pipeline` function did not require us to give a name to each step.
Indeed, it was automatically assigned based on the name of the classes
provided; a `StandardScaler` will be a step named `"standardscaler"` in the
resulting pipeline. We can check the name of each steps of our model:
```
model.named_steps
```
This predictive pipeline exposes the same methods as the final predictor:
`fit` and `predict` (and additionally `predict_proba`, `decision_function`,
or `score`).
```
start = time.time()
model.fit(data_train, target_train)
elapsed_time = time.time() - start
```
We can represent the internal mechanism of a pipeline when calling `fit`
by the following diagram:

When calling `model.fit`, the method `fit_transform` from each underlying
transformer (here a single transformer) in the pipeline will be called to:
- learn their internal model states
- transform the training data. Finally, the preprocessed data are provided to
train the predictor.
To predict the targets given a test set, one uses the `predict` method.
```
predicted_target = model.predict(data_test)
predicted_target[:5]
```
Let's show the underlying mechanism:

The method `transform` of each transformer (here a single transformer) is
called to preprocess the data. Note that there is no need to call the `fit`
method for these transformers because we are using the internal model states
computed when calling `model.fit`. The preprocessed data is then provided to
the predictor that will output the predicted target by calling its method
`predict`.
As a shorthand, we can check the score of the full predictive pipeline
calling the method `model.score`. Thus, let's check the computational and
generalization performance of such a predictive pipeline.
```
model_name = model.__class__.__name__
score = model.score(data_test, target_test)
print(f"The accuracy using a {model_name} is {score:.3f} "
f"with a fitting time of {elapsed_time:.3f} seconds "
f"in {model[-1].n_iter_[0]} iterations")
```
We could compare this predictive model with the predictive model used in
the previous notebook which did not scale features.
```
model = LogisticRegression()
start = time.time()
model.fit(data_train, target_train)
elapsed_time = time.time() - start
model_name = model.__class__.__name__
score = model.score(data_test, target_test)
print(f"The accuracy using a {model_name} is {score:.3f} "
f"with a fitting time of {elapsed_time:.3f} seconds "
f"in {model.n_iter_[0]} iterations")
```
We see that scaling the data before training the logistic regression was
beneficial in terms of computational performance. Indeed, the number of
iterations decreased as well as the training time. The generalization
performance did not change since both models converged.
<div class="admonition warning alert alert-danger">
<p class="first admonition-title" style="font-weight: bold;">Warning</p>
<p class="last">Working with non-scaled data will potentially force the algorithm to iterate
more as we showed in the example above. There is also the catastrophic
scenario where the number of required iterations are more than the maximum
number of iterations allowed by the predictor (controlled by the <tt class="docutils literal">max_iter</tt>)
parameter. Therefore, before increasing <tt class="docutils literal">max_iter</tt>, make sure that the data
are well scaled.</p>
</div>
## Model evaluation using cross-validation
In the previous example, we split the original data into a training set and a
testing set. The score of a model will in general depend on the way we make
such a split. One downside of doing a single split is that it does not give
any information about this variability. Another downside, in a setting where
the amount of data is small, is that the data available for training
and testing will be even smaller after splitting.
Instead, we can use cross-validation. Cross-validation consists of repeating
the procedure such that the training and testing sets are different each
time. Generalization performance metrics are collected for each repetition and
then aggregated. As a result we can get an estimate of the variability of the
model's generalization performance.
Note that there exists several cross-validation strategies, each of them
defines how to repeat the `fit`/`score` procedure. In this section, we will
use the K-fold strategy: the entire dataset is split into `K` partitions. The
`fit`/`score` procedure is repeated `K` times where at each iteration `K - 1`
partitions are used to fit the model and `1` partition is used to score. The
figure below illustrates this K-fold strategy.

<div class="admonition note alert alert-info">
<p class="first admonition-title" style="font-weight: bold;">Note</p>
<p class="last">This figure shows the particular case of K-fold cross-validation strategy.
As mentioned earlier, there are a variety of different cross-validation
strategies. Some of these aspects will be covered in more details in future
notebooks.</p>
</div>
For each cross-validation split, the procedure trains a model on all the red
samples and evaluate the score of the model on the blue samples.
Cross-validation is therefore computationally intensive because it requires
training several models instead of one.
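The K-fold index bookkeeping described above can be sketched manually (a simplified version without shuffling; scikit-learn's `KFold` is the real implementation):

```python
import numpy as np

# Simplified K-fold index generator: split sample indices into K folds,
# then use each fold once as the test set and the rest as the train set.
def kfold_indices(n_samples, k):
    folds = np.array_split(np.arange(n_samples), k)
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train_idx, test_idx

splits = list(kfold_indices(10, 5))
print(len(splits))    # 5 fit/score repetitions
print(splits[0][1])   # first test fold: samples 0 and 1
```

Each sample lands in exactly one test fold, which is why the aggregated scores cover the whole dataset.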
In scikit-learn, the function `cross_validate` allows to do cross-validation
and you need to pass it the model, the data, and the target. Since there
exists several cross-validation strategies, `cross_validate` takes a
parameter `cv` which defines the splitting strategy.
```
%%time
from sklearn.model_selection import cross_validate
model = make_pipeline(StandardScaler(), LogisticRegression())
cv_result = cross_validate(model, data_numeric, target, cv=5)
cv_result
```
The output of `cross_validate` is a Python dictionary, which by default
contains three entries: (i) the time to train the model on the training data
for each fold, (ii) the time to predict with the model on the testing data
for each fold, and (iii) the default score on the testing data for each fold.
Setting `cv=5` created 5 distinct splits to get 5 variations for the training
and testing sets. Each training set is used to fit one model which is then
scored on the matching test set. This strategy is called K-fold
cross-validation where `K` corresponds to the number of splits.
Note that by default the `cross_validate` function discards the 5 models that
were trained on the different overlapping subset of the dataset. The goal of
cross-validation is not to train a model, but rather to estimate
approximately the generalization performance of a model that would have been
trained to the full training set, along with an estimate of the variability
(uncertainty on the generalization accuracy).
You can pass additional parameters to `cross_validate` to get more
information, for instance training scores. These features will be covered in
a future notebook.
Let's extract the test scores from the `cv_result` dictionary and compute
the mean accuracy and the variation of the accuracy across folds.
```
scores = cv_result["test_score"]
print("The mean cross-validation accuracy is: "
f"{scores.mean():.3f} +/- {scores.std():.3f}")
```
Note that by computing the standard-deviation of the cross-validation scores,
we can estimate the uncertainty of our model generalization performance. This is
the main advantage of cross-validation and can be crucial in practice, for
example when comparing different models to figure out whether one is better
than the other or whether the generalization performance differences are within
the uncertainty.
In this particular case, only the first 2 decimals seem to be trustworthy. If
you go up in this notebook, you can check that the performance we get
with cross-validation is compatible with the one from a single train-test
split.
In this notebook we have:
* seen the importance of **scaling numerical variables**;
* used a **pipeline** to chain scaling and logistic regression training;
* assessed the generalization performance of our model via **cross-validation**.
| github_jupyter |
Notebook - exploratory data analysis
Gabriela Caesar
Sep 29, 2021
Question to be answered
- Set your state (UF) and the year in the input and see your state/year's basic statistics on LGBT marriage
```
# import the library
import pandas as pd
# read the dataframe
lgbt_casamento = pd.read_csv('https://raw.githubusercontent.com/gabrielacaesar/lgbt_casamento/main/data/lgbt_casamento.csv')
lgbt_casamento.head(2)
sigla_uf = pd.read_csv('https://raw.githubusercontent.com/kelvins/Municipios-Brasileiros/main/csv/estados.csv')
sigla_uf.head(2)
sigla_uf_lgbt_casamento = lgbt_casamento.merge(sigla_uf, how = 'left', left_on = 'uf', right_on = 'nome')
len(sigla_uf_lgbt_casamento['uf_y'].unique())
sigla_uf_lgbt_casamento.head(2)
sigla_uf_lgbt_casamento = sigla_uf_lgbt_casamento.drop(['uf_x', 'codigo_uf', 'latitude', 'longitude'], axis=1)
sigla_uf_lgbt_casamento = sigla_uf_lgbt_casamento.rename(columns={'uf_y':'uf', 'nome': 'nome_uf'})
sigla_uf_lgbt_casamento.columns
sigla_uf_lgbt_casamento.head(2)
print(" --------------------------- \n Welcome! \n ---------------------------")
ano_user = int(input("Choose a year from 2013 to 2019: \n"))
uf_user = input("Choose a state (UF). For example, AC, AL, SP, RJ... \n")
uf_user = uf_user.upper().strip()
#print(uf_user)
print(" --------------------------- \n Let's calculate! \n ---------------------------")
```
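As a side note (toy stand-in data, not the notebook's real CSVs), the merge above works like this: both frames share a `uf` column, so pandas adds `_x`/`_y` suffixes, which the notebook then drops and renames:

```python
import pandas as pd

# hypothetical stand-ins for lgbt_casamento and the estados lookup
left = pd.DataFrame({"uf": ["Acre", "Alagoas"], "numero": [3, 5]})
right = pd.DataFrame({"uf": ["AC", "AL"], "nome": ["Acre", "Alagoas"]})

# both frames have a "uf" column, so pandas suffixes them:
# "uf_x" comes from the left frame, "uf_y" from the right one
merged = left.merge(right, how="left", left_on="uf", right_on="nome")
merged = merged.drop(["uf_x"], axis=1)
merged = merged.rename(columns={"uf_y": "uf", "nome": "nome_uf"})
```

With `how="left"`, rows from the left frame are always kept, so states without a match in the lookup would survive with missing values rather than disappearing.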
# See the monthly numbers for the chosen year and state (UF)
The chart shows the number of LGBT marriages, by gender, for the year and federative unit previously entered by the user.
Hover over the chart for more details.
```
# filter by the UF and year entered by the user
# plus the chart
import altair as alt
alt.Chart(sigla_uf_lgbt_casamento.query('uf == @uf_user & ano == @ano_user', engine='python')).mark_line(point=True).encode(
    x = alt.X('mes', title = 'Month', sort=['Janeiro', 'Fevereiro', 'Março', 'Abril', 'Maio', 'Junho', 'Julho', 'Agosto', 'Setembro', 'Outubro', 'Novembro', 'Dezembro']),
    y = alt.Y('numero', title='Number'),
    color = 'genero',
    tooltip = ['mes', 'ano', 'genero', 'numero']
).properties(
    title = f'{uf_user}: LGBT marriages in {ano_user}'
).interactive()
```
# See basic statistics, by year, for your chosen state (UF)
The chart shows every year in the dataset. The federative unit was previously entered by the user.
Hover over the chart for more details.
```
dados_user = sigla_uf_lgbt_casamento.query('uf == @uf_user', engine='python')
alt.Chart(dados_user).mark_boxplot(size=10).encode(
    x = alt.X('ano:O', title="Year"),
    y = alt.Y('numero', title="Number"),
color = 'genero',
tooltip = ['mes', 'ano', 'genero', 'numero']
).properties(
title={
        "text": [f'{uf_user}: LGBT marriages'],
        "subtitle": ['Women vs. Men']
},
width=600,
height=300
).interactive()
```
# `model_hod` module tutorial notebook
```
%load_ext autoreload
%autoreload 2
%pylab inline
import logging
mpl_logger = logging.getLogger('matplotlib')
mpl_logger.setLevel(logging.WARNING)
pil_logger = logging.getLogger('PIL')
pil_logger.setLevel(logging.WARNING)
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.size'] = 18
plt.rcParams['axes.linewidth'] = 1.5
plt.rcParams['xtick.major.size'] = 5
plt.rcParams['ytick.major.size'] = 5
plt.rcParams['xtick.minor.size'] = 3
plt.rcParams['ytick.minor.size'] = 3
plt.rcParams['xtick.top'] = True
plt.rcParams['ytick.right'] = True
plt.rcParams['xtick.minor.visible'] = True
plt.rcParams['ytick.minor.visible'] = True
plt.rcParams['xtick.direction'] = 'in'
plt.rcParams['ytick.direction'] = 'in'
plt.rcParams['figure.figsize'] = (10,6)
from dark_emulator import model_hod
hod = model_hod.darkemu_x_hod({"fft_num":8})
```
## how to set cosmology and galaxy parameters (HOD, off-centering, satellite distribution, and incompleteness)
```
cparam = np.array([0.02225,0.1198,0.6844,3.094,0.9645,-1.])
hod.set_cosmology(cparam)
gparam = {"logMmin":13.13, "sigma_sq":0.22, "logM1": 14.21, "alpha": 1.13, "kappa": 1.25, # HOD parameters
          "poff": 0.2, "Roff": 0.1, # off-centering parameters: poff is the fraction of off-centered galaxies, Roff is the typical off-centering scale with respect to R200m.
          "sat_dist_type": "emulator", # satellite distribution. Choose "emulator" or "NFW". In the NFW case, the c-M relation by Diemer & Kravtsov (2015) is assumed.
          "alpha_inc": 0.44, "logM_inc": 13.57} # incompleteness parameters. For details, see More et al. (2015)
hod.set_galaxy(gparam)
```
## how to plot g-g lensing signal in DeltaSigma(R)
```
redshift = 0.55
r = np.logspace(-1,2,100)
plt.figure(figsize=(10,6))
plt.loglog(r, hod.get_ds(r, redshift), linewidth = 2, color = "k", label = "total")
plt.loglog(r, hod.get_ds_cen(r, redshift), "--", color = "k", label = "central")
plt.loglog(r, hod.get_ds_cen_off(r, redshift), ":", color = "k", label = "central w/offset")
plt.loglog(r, hod.get_ds_sat(r, redshift), "-.", color = "k", label = "satellite")
plt.xlabel(r"$R$ [Mpc/h]")
plt.ylabel(r"$\Delta\Sigma$ [hM$_\odot$/pc$^2$]")
plt.legend()
```
## how to plot g-g lensing signal in xi
```
redshift = 0.55
r = np.logspace(-1,2,100)
plt.figure(figsize=(10,6))
plt.loglog(r, hod.get_xi_gm(r, redshift), linewidth = 2, color = "k", label = "total")
plt.loglog(r, hod.get_xi_gm_cen(r, redshift), "--", color = "k", label = "central")
plt.loglog(r, hod.get_xi_gm_cen_off(r, redshift), ":", color = "k", label = "central w/offset")
plt.loglog(r, hod.get_xi_gm_sat(r, redshift), "-.", color = "k", label = "satellite")
plt.xlabel(r"$R$ [Mpc/h]")
plt.ylabel(r"$\xi_{\rm gm}$")
plt.legend()
```
## how to plot g-g clustering signal in wp
```
redshift = 0.55
r = np.logspace(-1,2,100)
plt.figure(figsize=(10,6))
plt.loglog(r, hod.get_wp(r, redshift), linewidth = 2, color = "k", label = "total")
plt.loglog(r, hod.get_wp_1hcs(r, redshift), "--", color = "k", label = "1-halo cen-sat")
plt.loglog(r, hod.get_wp_1hss(r, redshift), ":", color = "k", label = "1-halo sat-sat")
plt.loglog(r, hod.get_wp_2hcc(r, redshift), "-.", color = "k", label = "2-halo cen-cen")
plt.loglog(r, hod.get_wp_2hcs(r, redshift), dashes=[4,1,1,1,1,1], color = "k", label = "2-halo cen-sat")
plt.loglog(r, hod.get_wp_2hss(r, redshift), dashes=[4,1,1,1,4,1], color = "k", label = "2-halo sat-sat")
plt.xlabel(r"$R$ [Mpc/h]")
plt.ylabel(r"$w_p$ [Mpc/h]")
plt.legend()
plt.ylim(0.1, 6e3)
```
## how to plot g-g clustering signal in xi
```
redshift = 0.55
r = np.logspace(-1,2,100)
plt.figure(figsize=(10,6))
plt.loglog(r, hod.get_xi_gg(r, redshift), linewidth = 2, color = "k", label = "total")
plt.loglog(r, hod.get_xi_gg_1hcs(r, redshift), "--", color = "k", label = "1-halo cen-sat")
plt.loglog(r, hod.get_xi_gg_1hss(r, redshift), ":", color = "k", label = "1-halo sat-sat")
plt.loglog(r, hod.get_xi_gg_2hcc(r, redshift), "-.", color = "k", label = "2-halo cen-cen")
plt.loglog(r, hod.get_xi_gg_2hcs(r, redshift), dashes=[4,1,1,1,1,1], color = "k", label = "2-halo cen-sat")
plt.loglog(r, hod.get_xi_gg_2hss(r, redshift), dashes=[4,1,1,1,4,1], color = "k", label = "2-halo sat-sat")
plt.xlabel(r"$R$ [Mpc/h]")
plt.ylabel(r"$\xi$")
plt.legend()
plt.ylim(1e-3, 6e3)
```
```
#default_exp resnet_08
#export
from ModernArchitecturesFromScratch.basic_operations_01 import *
from ModernArchitecturesFromScratch.fully_connected_network_02 import *
from ModernArchitecturesFromScratch.model_training_03 import *
from ModernArchitecturesFromScratch.convolutions_pooling_04 import *
from ModernArchitecturesFromScratch.callbacks_05 import *
from ModernArchitecturesFromScratch.batchnorm_06 import *
from ModernArchitecturesFromScratch.optimizers_07 import *
```
# ResNet
> Fully implemented ResNet architecture from scratch: https://arxiv.org/pdf/1512.03385.pdf
## Helper
```
#export
def get_runner(model=None, layers=None, lf=None, callbacks=[Stats([accuracy]), ProgressCallback(), HyperRecorder(['lr'])], opt=None, db=None):
"Helper function to get a quick runner"
if model is None:
model = SequentialModel(*layers) if layers is not None else get_linear_model(0.1)[0]
lf = CrossEntropy() if lf is None else lf
db = db if db is not None else get_mnist_databunch()
opt = opt if opt is not None else adam_opt()
learn = Learner(model,lf,opt,db)
return Runner(learn, callbacks)
```
## Nested Modules
We first need to make new classes that allow architectures that aren't straightforward sequential passes through a defined set of layers. In PyTorch this is normally handled in the forward pass with autograd. We need to be a bit more clever because we have to define the gradients of each module ourselves.
```
#export
class NestedModel(Module):
    "NestedModel that allows a sequential model to be called within an outer model"
def __init__(self):
super().__init__()
def forward(self,xb): return self.layers(xb)
def bwd(self, out, inp): self.layers.backward()
def parameters(self):
for p in self.layers.parameters(): yield p
def __repr__(self): return f'\nSubModel( \n{self.layers}\n)'
#export
class TestMixingGrads(NestedModel):
"Test module to see if nested SequentialModels will work"
def __init__(self):
super().__init__()
self.layers = SequentialModel(Linear(784, 50, True), ReLU(), Linear(50,25, False))
```
Testing the gradients and the outputs:
```
m = SequentialModel(TestMixingGrads(), Linear(25,10, False))
db = get_mnist_databunch()
lf = CrossEntropy()
optimizer = adam_opt()
m
learn = Learner(m, lf, optimizer, db)
run = Runner(learn, [CheckGrad()])
run.fit(1,0.1)
```
## Refactored Conv Layers
Before we can start making ResNets, we first define a few helper modules that abstract some of the layers:
```
#export
class AutoConv(Conv):
"Automatic resizing of padding based on kernel size to ensure constant dimensions of input to output"
def __init__(self, n_in, n_out, kernel_size=3, stride=1):
padding = Padding(kernel_size // 2)
super().__init__(n_in, n_out, kernel_size, stride, padding=padding)
#export
class ConvBatch(NestedModel):
"Performs conv then batchnorm"
def __init__(self, n_in, n_out, kernel_size=3, stride=1, **kwargs):
self.layers = SequentialModel(AutoConv(n_in, n_out, kernel_size, stride),
Batchnorm(n_out))
def __repr__(self): return f'{self.layers.layers[0]}, {self.layers.layers[1]}'
#export
class Identity(Module):
"Module to perform the identity connection (what goes in, comes out)"
def forward(self,xb): return xb
def bwd(self,out,inp): inp.g += out.g
def __repr__(self): return f'Identity Connection'
```
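The `Identity.bwd` above accumulates the upstream gradient into the input gradient (`inp.g += out.g`). A small standalone numpy sketch (independent of this library's classes) of why the addition in a skip connection routes the same upstream gradient to both branches:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
w = 2.0

# forward: residual branch f(x) = w * x, identity branch id(x) = x
out = w * x + x

# backward: the sum node passes the upstream gradient to both branches,
# and the gradients w.r.t. x add up (this is what `inp.g += out.g` does)
out_g = np.ones_like(out)   # upstream gradient
res_g = w * out_g           # through the residual branch
id_g = out_g                # through the identity branch
x_g = res_g + id_g          # total gradient w.r.t. x
```

The identity branch always contributes an unattenuated gradient, which is why residual connections ease the training of deep networks.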
## ResBlocks
Final built-up ResNet blocks that implement the skip-connection layers characteristic of a ResNet
```
#export
class BasicRes(Module):
"Basic block to implement the two different ResBlocks presented in the paper"
def __init__(self, n_in, n_out, expansion=1, stride=1, Activation=ReLU, *args, **kwargs):
super().__init__()
self.n_in, self.n_out, self.expansion, self.stride, self.Activation = n_in, n_out, expansion, stride, Activation
self.identity = Identity() if self.do_identity else AutoConv(self.n_in, self.get_expansion, kernel_size=1, stride=2)
def forward(self, xb):
self.id_out = self.identity(xb)
self.res_out = self.res_blocks(xb)
self.out = self.id_out + self.res_out
return self.out
def bwd(self, out, inp):
self.res_out.g = out.g
self.id_out.g = out.g
self.res_blocks.backward()
self.identity.backward()
@property
def get_expansion(self): return self.n_out * self.expansion
@property
def do_identity(self): return self.n_in == self.n_out
def parameters(self):
layers = [self.res_blocks, self.identity]
for m in layers:
for p in m.parameters(): yield p
#export
class BasicResBlock(BasicRes):
    "Basic ResBlock layer, 2 `ConvBatch` layers with no expansion"
    expansion=1
def __init__(self, n_in, n_out, *args, **kwargs):
super().__init__(n_in, n_out, *args, **kwargs)
expansion = 1
self.res_blocks = SequentialModel(
ConvBatch(n_in, n_out, stride=self.stride),
self.Activation(),
ConvBatch(n_out, self.n_out*expansion)
)
#export
class BottleneckBlock(BasicRes):
    "Bottleneck layer, 3 `ConvBatch` layers with an expansion factor of 4"
    expansion=4
def __init__(self, n_in, n_out, *args, **kwargs):
super().__init__(n_in, n_out, *args, **kwargs)
self.res_blocks = SequentialModel(
ConvBatch(n_in, n_out, kernel_size=1, stride=1),
self.Activation(),
ConvBatch(n_out, n_out),
self.Activation(),
            ConvBatch(n_out, n_out * self.expansion, kernel_size=1)
)
#export
class ResBlock(NestedModel):
"Adds the final activation after the skip connection addition"
def __init__(self, n_in, n_out, block=BasicResBlock, stride=1, kernel_size=3, Activation=ReLU, **kwargs):
super().__init__()
self.n_in, self.n_out, self.exp, self.ks, self.stride = n_in, n_out, block.expansion, kernel_size, stride
self.layers = SequentialModel(block(n_in=n_in, n_out=n_out, expansion=block.expansion, kernel_size=kernel_size, stride=stride, Activation=Activation,**kwargs),
Activation())
def __repr__(self): return f'ResBlock({self.n_in}, {self.n_out*self.exp}, kernel_size={self.ks}, stride={self.stride})'
#export
class ResLayer(NestedModel):
"Sequential ResBlock layers as outlined in the paper"
def __init__(self, block, n, n_in, n_out, *args, **kwargs):
layers = []
self.block, self.n, self.n_in, self.n_out = block, n, n_in, n_out
downsampling = 2 if n_in != n_out else 1
layers = [ResBlock(n_in, n_out, block, stride=downsampling),
*[ResBlock(n_out * block.expansion, n_out, block, stride=1) for i in range(n-1)]]
self.layers = SequentialModel(*layers)
def __repr__(self): return f'ResLayer(\n{self.layers}\n)'
```
# ResNet
```
#export
class ResNet(NestedModel):
"Class to create ResNet architectures of dynamic sizing"
def __init__(self, block, layer_sizes=[64, 128, 256, 512], depths=[2,2,2,2], c_in=3,
c_out=1000, im_size=(28,28), activation=ReLU, *args, **kwargs):
self.layer_sizes = layer_sizes
gate = [
Reshape(c_in, im_size[0], im_size[1]),
ConvBatch(c_in, self.layer_sizes[0], stride=2, kernel_size=7),
activation(),
Pool(max_pool, ks=3, stride=2, padding=Padding(1))
]
self.conv_sizes = list(zip(self.layer_sizes, self.layer_sizes[1:]))
body = [
ResLayer(block, depths[0], self.layer_sizes[0], self.layer_sizes[0], Activation=activation, *args, **kwargs),
*[ResLayer(block, n, n_in * block.expansion, n_out, Activation=activation)
for (n_in,n_out), n in zip(self.conv_sizes, depths[1:])]
]
tail = [
Pool(avg_pool, ks=1, stride=1, padding=None),
Flatten(),
Linear(self.layer_sizes[-1]*block.expansion, c_out, relu_after=False)
]
self.layers = SequentialModel(
*[layer for layer in gate],
*[layer for layer in body],
*[layer for layer in tail]
)
def __repr__(self): return f'ResNet: \n{self.layers}'
```
```
res = ResNet(BasicResBlock)
res
#export
def GetResnet(size, c_in=3, c_out=10, *args, **kwargs):
"Helper function to get ResNet architectures of different sizes"
if size == 18: return ResNet(c_in=c_in, c_out=c_out, block=BasicResBlock, depths=[2, 2, 2, 2], size=size, **kwargs)
elif size == 34: return ResNet(c_in=c_in, c_out=c_out, block=BasicResBlock, depths=[3, 4, 6, 3], size=size, **kwargs)
elif size == 50: return ResNet(c_in=c_in, c_out=c_out, block=BottleneckBlock, depths=[3, 4, 6, 3], size=size, **kwargs)
    elif size in (101, 150): return ResNet(c_in=c_in, c_out=c_out, block=BottleneckBlock, depths=[3, 4, 23, 3], size=size, **kwargs) # [3, 4, 23, 3] are the ResNet-101 depths
elif size == 152: return ResNet(c_in=c_in, c_out=c_out, block=BottleneckBlock, depths=[3, 8, 36, 3], size=size, **kwargs)
```
Testing out the ResNet Architectures:
```
GetResnet(18, c_in=1, c_out=10)
GetResnet(34, c_in=1, c_out=10)
GetResnet(50, c_in=1, c_out=10)
GetResnet(150, c_in=1, c_out=10)
GetResnet(152, c_in=1, c_out=10)
run = get_runner(model=GetResnet(18,c_in=1, c_out=10))
```
```
#hide
#skip
! [ -e /content ] && pip install -Uqq self-supervised
#default_exp vision.swav
```
# SwAV
> SwAV: [Unsupervised Learning of Visual Features by Contrasting Cluster Assignments](https://arxiv.org/pdf/2006.09882.pdf)
```
#export
from fastai.vision.all import *
from self_supervised.augmentations import *
from self_supervised.layers import *
```
## Algorithm
#### SwAV

**Abstract**: Unsupervised image representations have significantly reduced the gap with supervised
pretraining, notably with the recent achievements of contrastive learning
methods. These contrastive methods typically work online and rely on a large number
of explicit pairwise feature comparisons, which is computationally challenging.
In this paper, we propose an online algorithm, SwAV, that takes advantage of contrastive
methods without requiring to compute pairwise comparisons. Specifically,
our method simultaneously clusters the data while enforcing consistency between
cluster assignments produced for different augmentations (or “views”) of the same
image, instead of comparing features directly as in contrastive learning. Simply put,
we use a “swapped” prediction mechanism where we predict the code of a view
from the representation of another view. Our method can be trained with large and
small batches and can scale to unlimited amounts of data. Compared to previous
contrastive methods, our method is more memory efficient since it does not require
a large memory bank or a special momentum network. In addition, we also propose
a new data augmentation strategy, multi-crop, that uses a mix of views with
different resolutions in place of two full-resolution views, without increasing the
memory or compute requirements. We validate our findings by achieving 75.3%
top-1 accuracy on ImageNet with ResNet-50, as well as surpassing supervised
pretraining on all the considered transfer tasks.
```
#export
class SwAVModel(Module):
def __init__(self,encoder,projector,prototypes):
self.encoder,self.projector,self.prototypes = encoder,projector,prototypes
def forward(self, inputs):
if not isinstance(inputs, list): inputs = [inputs]
crop_idxs = torch.cumsum(torch.unique_consecutive(
torch.tensor([inp.shape[-1] for inp in inputs]),
return_counts=True)[1], 0)
start_idx = 0
for idx in crop_idxs:
_z = self.encoder(torch.cat(inputs[start_idx: idx]))
if not start_idx: z = _z
else: z = torch.cat((z, _z))
start_idx = idx
z = F.normalize(self.projector(z))
return z, self.prototypes(z)
#export
def create_swav_model(encoder, hidden_size=256, projection_size=128, n_protos=3000, bn=True, nlayers=2):
"Create SwAV model"
n_in = in_channels(encoder)
with torch.no_grad(): representation = encoder(torch.randn((2,n_in,128,128)))
projector = create_mlp_module(representation.size(1), hidden_size, projection_size, bn=bn, nlayers=nlayers)
prototypes = nn.Linear(projection_size, n_protos, bias=False)
apply_init(projector)
with torch.no_grad():
w = prototypes.weight.data.clone()
prototypes.weight.copy_(F.normalize(w))
return SwAVModel(encoder, projector, prototypes)
encoder = create_encoder("tf_efficientnet_b0_ns", n_in=3, pretrained=False, pool_type=PoolingType.CatAvgMax)
model = create_swav_model(encoder, hidden_size=2048, projection_size=128, n_protos=3000)
multi_view_inputs = ([torch.randn(2,3,224,224) for i in range(2)] +
[torch.randn(2,3,96,96) for i in range(4)])
embedding, output = model(multi_view_inputs)
norms = model.prototypes.weight.data.norm(dim=1)
assert norms.shape[0] == 3000
assert [n.item() for n in norms if test_close(n.item(), 1.)] == []
```
## SwAV Callback
The following parameters can be passed;
- **aug_pipelines** list of augmentation pipelines List[Pipeline, Pipeline,...,Pipeline] created using functions from `self_supervised.augmentations` module. Each `Pipeline` should be set to `split_idx=0`. You can simply use `get_swav_aug_pipelines` utility to get aug_pipelines. SWAV algorithm uses a mix of large and small scale crops.
- **crop_assgn_ids** indexes for large crops from **aug_pipelines**, e.g. if you have total of 8 Pipelines in the `aug_pipelines` list and if you define large crops as first 2 Pipelines then indexes would be [0,1], if as first 3 then [0,1,2] and if as last 2 then [6,7], so on.
- **K** is the queue size. For simplicity K needs to be a multiple of the batch size, and it needs to be less than the total training data size. You can try out different values, e.g. `bs*2^k` for varying k, where bs is the batch size. You can pass None to disable the queue. The idea is similar to MoCo.
- **queue_start_pct** defines when to start using queue in terms of total training percentage, e.g if you train for 100 epochs and if `queue_start_pct` is set to 0.25 then queue will be used starting from epoch 25. You should tune queue size and queue start percentage for your own data and problem. For more information you can refer to [README from official implementation](https://github.com/facebookresearch/swav#training-gets-unstable-when-using-the-queue).
- **temp** temperature scaling for cross entropy loss similar to `SimCLR`.
The SWAV algorithm uses multi-sized, multi-crop views of an image. In the original paper, 2 large crop views and 6 small crop views are used during training. The reason for using smaller crops is to save memory, and perhaps it also helps the model learn local features better.
You can manually pass a mix of large and small scale Pipeline instances within a list to **aug_pipelines** or you can simply use **get_swav_aug_pipelines()** helper function below:
- **num_crops** Number of large and small scale views to be used.
- **crop_sizes** Image crop sizes for large and small views.
- **min_scales** Min scale to use in RandomResizedCrop for large and small views.
- **max_scales** Max scale to use in RandomResizedCrop for large and small views.
I highly recommend this [UI from albumentations](https://albumentations-demo.herokuapp.com/) to get a feel about RandomResizedCrop parameters.
Let's take the following example `get_swav_aug_pipelines(num_crops=(2,6), crop_sizes=(224,96), min_scales=(0.25,0.05), max_scales=(1.,0.14))`. This will create 2 large scale view augmentations with size 224 and with RandomResizedCrop scales between 0.25-1.0. Additionally, it will create 2 small scale view augmentations with size 96 and with RandomResizedCrop scales between 0.05-0.14.
**Note**: Of course, the notion of small and large scale views depends on the values you pass to `crop_sizes`, `min_scales`, and `max_scales`. For example, if we flip the crop sizes from the previous example as `crop_sizes=(96,224)`, then the first 2 views will have an image resolution of 96 and the last 6 views will have 224. To reduce confusion it's better to make relative changes, e.g. if you want to try different parameters, always keep the first values for larger resolution views and the second values for smaller resolution views.
- **\*\*kwargs**: this function uses `get_multi_aug_pipelines`, which in turn uses `get_batch_augs`. For more information you may refer to the `self_supervised.augmentations` module. kwargs takes any argument passable to `get_batch_augs`.
```
#export
@delegates(get_multi_aug_pipelines, but=['n', 'size', 'resize_scale'])
def get_swav_aug_pipelines(num_crops=(2,6), crop_sizes=(224,96), min_scales=(0.25,0.05), max_scales=(1.,0.14), **kwargs):
aug_pipelines = []
for nc, size, mins, maxs in zip(num_crops, crop_sizes, min_scales, max_scales):
aug_pipelines += get_multi_aug_pipelines(n=nc, size=size, resize_scale=(mins,maxs), **kwargs)
return aug_pipelines
#export
class SWAV(Callback):
order,run_valid = 9,True
def __init__(self, aug_pipelines, crop_assgn_ids,
K=3000, queue_start_pct=0.25, temp=0.1,
eps=0.05, n_sinkh_iter=3, print_augs=False):
store_attr('K,queue_start_pct,crop_assgn_ids,temp,eps,n_sinkh_iter')
self.augs = aug_pipelines
if print_augs:
for aug in self.augs: print(aug)
def before_fit(self):
self.learn.loss_func = self.lf
# init queue
if self.K is not None:
nf = self.learn.model.projector[-1].out_features
self.queue = torch.randn(self.K, nf).to(self.dls.device)
self.queue = nn.functional.normalize(self.queue, dim=1)
self.queue_ptr = 0
def before_batch(self):
"Compute multi crop inputs"
self.bs = self.x.size(0)
self.learn.xb = ([aug(self.x) for aug in self.augs],)
def after_batch(self):
with torch.no_grad():
w = self.learn.model.prototypes.weight.data.clone()
self.learn.model.prototypes.weight.data.copy_(F.normalize(w))
@torch.no_grad()
def sinkhorn_knopp(self, Q, nmb_iters, device=default_device):
"https://en.wikipedia.org/wiki/Sinkhorn%27s_theorem#Sinkhorn-Knopp_algorithm"
sum_Q = torch.sum(Q)
Q /= sum_Q
r = (torch.ones(Q.shape[0]) / Q.shape[0]).to(device)
c = (torch.ones(Q.shape[1]) / Q.shape[1]).to(device)
curr_sum = torch.sum(Q, dim=1)
for it in range(nmb_iters):
u = curr_sum
Q *= (r / u).unsqueeze(1)
Q *= (c / torch.sum(Q, dim=0)).unsqueeze(0)
curr_sum = torch.sum(Q, dim=1)
return (Q / torch.sum(Q, dim=0, keepdim=True)).t().float()
@torch.no_grad()
def _dequeue_and_enqueue(self, embedding):
assert self.K % self.bs == 0 # for simplicity
self.queue[self.queue_ptr:self.queue_ptr+self.bs, :] = embedding
self.queue_ptr = (self.queue_ptr + self.bs) % self.K # move pointer
@torch.no_grad()
def _compute_codes(self, output):
qs = []
for i in self.crop_assgn_ids:
# use queue
if (self.K is not None) and (self.learn.pct_train > self.queue_start_pct):
target_b = output[self.bs*i:self.bs*(i+1)]
queue_b = self.learn.model.prototypes(self.queue)
merged_b = torch.cat([target_b, queue_b])
q = torch.exp(merged_b/self.eps).t()
q = self.sinkhorn_knopp(q, self.n_sinkh_iter, q.device)
qs.append(q[:self.bs])
# don't use queue
else:
target_b = output[self.bs*i:self.bs*(i+1)]
q = torch.exp(target_b/self.eps).t()
q = self.sinkhorn_knopp(q, self.n_sinkh_iter, q.device)
qs.append(q)
return qs
def after_pred(self):
"Compute ps and qs"
embedding, output = self.pred
# Update - no need to store all assignment crops, e.g. just 0 from [0,1]
# Update queue only during training
if (self.K is not None) and (self.learn.training): self._dequeue_and_enqueue(embedding[:self.bs])
# Compute codes
qs = self._compute_codes(output)
# Compute predictions
log_ps = []
for v in np.arange(len(self.augs)):
log_p = F.log_softmax(output[self.bs*v:self.bs*(v+1)] / self.temp, dim=1)
log_ps.append(log_p)
log_ps, qs = torch.stack(log_ps), torch.stack(qs)
self.learn.pred, self.learn.yb = log_ps, (qs,)
def lf(self, pred, *yb):
log_ps, qs, loss = pred, yb[0], 0
t = (qs.unsqueeze(1)*log_ps.unsqueeze(0)).sum(-1).mean(-1)
for i, ti in enumerate(t): loss -= (ti.sum() - ti[i])/(len(ti)-1)/len(t)
return loss
@torch.no_grad()
def show(self, n=1):
xbs = self.learn.xb[0]
idxs = np.random.choice(range(self.bs), n, False)
images = [aug.decode(xb.to('cpu').clone()).clamp(0, 1)[i]
for i in idxs
for xb, aug in zip(xbs, self.augs)]
return show_batch(images[0], None, images, max_n=len(images), nrows=n)
```
`crop_sizes` defines the size to be used for original crops and low resolution crops respectively. `num_crops` defines `N`: the number of original views, and `V`: the number of low resolution views. `min_scales` and `max_scales` are used for original and low resolution views during random resized crop. `eps` is used during the Sinkhorn-Knopp algorithm for calculating the codes, and `n_sinkh_iter` is the number of iterations used in its calculation. `temp` is the temperature parameter in the cross entropy loss.
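To make the Sinkhorn-Knopp step concrete, here is a small self-contained numpy sketch (a simplification for illustration, not the callback's exact `sinkhorn_knopp` code): alternately normalizing rows and columns drives a positive score matrix toward uniform marginals, which is what turns raw prototype scores into balanced code assignments.

```python
import numpy as np

def sinkhorn(Q, n_iters=50):
    """Alternately normalize rows and columns of a positive matrix."""
    Q = Q / Q.sum()
    for _ in range(n_iters):
        Q = Q / Q.sum(axis=1, keepdims=True)  # each row sums to 1
        Q = Q / Q.sum(axis=0, keepdims=True)  # each column sums to 1
    return Q

rng = np.random.default_rng(0)
scores = np.exp(rng.normal(size=(4, 6)))  # hypothetical prototypes x samples scores
Q = sinkhorn(scores)
# columns sum to 1 exactly (the last step); rows converge to 6/4 each,
# i.e. every prototype ends up with roughly equal assignment mass
```

The balancing is what prevents the trivial solution where all samples collapse onto a single prototype.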
### Example Usage
```
path = untar_data(URLs.MNIST_TINY)
items = get_image_files(path)
tds = Datasets(items, [PILImageBW.create, [parent_label, Categorize()]], splits=GrandparentSplitter()(items))
dls = tds.dataloaders(bs=4, after_item=[ToTensor(), IntToFloatTensor()], device='cpu')
fastai_encoder = create_fastai_encoder(xresnet18, n_in=1, pretrained=False)
model = create_swav_model(fastai_encoder, hidden_size=2048, projection_size=128)
aug_pipelines = get_swav_aug_pipelines(num_crops=[2,6],
crop_sizes=[28,16],
min_scales=[0.25,0.05],
max_scales=[1.0,0.3],
rotate=False, jitter=False, bw=False, blur=False, stats=None,cuda=False)
learn = Learner(dls, model,
cbs=[SWAV(aug_pipelines=aug_pipelines, crop_assgn_ids=[0,1], K=None), ShortEpochCallback(0.001)])
b = dls.one_batch()
learn._split(b)
learn('before_batch')
learn.pred = learn.model(*learn.xb)
```
Display 2 standard resolution crops and 6 additional low resolution crops
```
axes = learn.swav.show(n=4)
learn.fit(1)
learn.recorder.losses
```
## Export -
```
#hide
from nbdev.export import notebook2script
notebook2script()
```
# Premier league: How has VAR impacted the rankings?
There has been much debate about the video assistant referee (VAR) since it was introduced in 2019.
The goal is fairer refereeing, but there are concerns about whether this will really be the case, and about the fact that it could break the rhythm of the game.
We will let football analysts – or soccer analysts depending on where you are reading this notebook from – answer this question. But one thing we can look at is how VAR has impacted the league so far.
This is what we will do in this notebook, alongside some other simulations we found interesting.
## Importing the data
The data we will use is composed of events. An event can be anything that happens in a game: kick-off, goal, foul, etc.
In this dataset, we only kept kick-off and goal events to build our analysis.
Note that in the goal events we also have all the goals that were later cancelled by VAR during a game.
We will first start by importing atoti and creating a session.
```
import atoti as tt
session = tt.create_session()
```
Then load the events in a store
```
events = session.read_csv(
"s3://data.atoti.io/notebooks/premier-league/events.csv",
separator=";",
table_name="events",
)
events.head()
```
### Creating a cube
We create a cube on the event store so that some matches or teams that ended with no goal will still be reflected in the pivot tables.
When creating a cube in the default auto mode, a hierarchy is created for each non-float column, and average and sum measures for each float column. This setup can later be edited, or you can define all hierarchies/measures yourself by switching to manual mode.
```
cube = session.create_cube(events)
cube.schema
```
Let's assign measures/levels/hierarchies to shorter variables
```
m = cube.measures
lvl = cube.levels
h = cube.hierarchies
h["Day"] = [events["Day"]]
```
## Computing the rankings from the goals
The first measure below counts the total goals scored for each event. At this point the total still includes potential own goals and VAR-refused goals.
```
m["Team Goals (incl Own Goals)"] = tt.agg.sum(
tt.where(lvl["EventType"] == "Goal", tt.agg.count_distinct(events["EventId"]), 0.0),
scope=tt.scope.origin(lvl["EventType"]),
)
```
In this data format, own goals are scored by players from a Team, but those points should be attributed to the opponent. Therefore we will isolate the own goals in a separate measure.
```
m["Team Own Goals"] = tt.agg.sum(
tt.where(lvl["IsOwnGoal"] == True, m["Team Goals (incl Own Goals)"], 0.0),
scope=tt.scope.origin(lvl["IsOwnGoal"]),
)
```
And deduce the actual goals scored for the team
```
m["Team Goals"] = m["Team Goals (incl Own Goals)"] - m["Team Own Goals"]
```
At this point we can already have a look at the goals per team. By right clicking on the chart we have sorted it descending by team goals.
```
session.visualize()
```
For a particular match, the `Opponent Goals` are equal to the `Team Goals` if we switch to the data facts where Team is replaced by Opponent and Opponent by Team
```
m["Opponent Goals"] = tt.agg.sum(
tt.at(
m["Team Goals"],
{lvl["Team"]: lvl["Opponent"], lvl["Opponent"]: lvl["Team"]},
),
scope=tt.scope.origin(lvl["Team"], lvl["Opponent"]),
)
m["Opponent Own Goals"] = tt.agg.sum(
tt.at(
m["Team Own Goals"],
{lvl["Team"]: lvl["Opponent"], lvl["Opponent"]: lvl["Team"]},
),
scope=tt.scope.origin(lvl["Team"], lvl["Opponent"]),
)
```
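The index swap performed by `tt.at` above can be pictured with a plain dictionary keyed by (team, opponent); `team_goals` and `opponent_goals` here are hypothetical names used only for illustration:

```python
# team_goals maps (team, opponent) -> goals scored by `team` in that fixture.
team_goals = {("Liverpool", "Norwich"): 4, ("Norwich", "Liverpool"): 1}

def opponent_goals(team, opponent):
    # Swapping the two coordinates gives the goals conceded by `team`:
    # they are exactly the goals of `opponent` playing against `team`.
    return team_goals[(opponent, team)]

print(opponent_goals("Liverpool", "Norwich"))  # 1
```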
We are now going to add two measures `Team Score` and `Opponent Score` to compute the result of a particular match.
```
m["Team Score"] = m["Team Goals"] + m["Opponent Own Goals"]
m["Opponent Score"] = m["Opponent Goals"] + m["Team Own Goals"]
```
We can now visualize the result of each match of the season
```
session.visualize()
```
We now have the team goals/score and those of the opponent for each match. However, these measures include VAR-cancelled goals. Let's create new measures that take VAR into account.
```
m["VAR team goals impact"] = m["Team Goals"] - tt.filter(
m["Team Goals"], lvl["IsCancelledAfterVAR"] == False
)
m["VAR opponent goals impact"] = m["Opponent Goals"] - tt.filter(
m["Opponent Goals"], lvl["IsCancelledAfterVAR"] == False
)
```
Looking at the details, we can see that there were already 4 goals cancelled by VAR on the first day of the season!
```
session.visualize()
```
Now that for any game we have the number of goals of each team, we can compute how many points teams have earned.
Following the FIFA World Cup points system, three points are awarded for a win, one for a draw and none for a loss (previously, winners received two points).
We create a measure for each of these conditions.
```
m["Points for victory"] = 3.0
m["Points for tie"] = 1.0
m["Points for loss"] = 0.0
m["Points"] = tt.agg.sum(
tt.where(
m["Team Score"] > m["Opponent Score"],
m["Points for victory"],
tt.where(
m["Team Score"] == m["Opponent Score"],
m["Points for tie"],
m["Points for loss"],
),
),
scope=tt.scope.origin(lvl["League"], lvl["Day"], lvl["Team"]),
)
```
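The nested `where` above encodes the usual win/draw/loss rule; here is a minimal plain-Python sketch of the same rule, independent of atoti (the helper name `match_points` is ours, not part of the notebook):

```python
def match_points(team_score, opponent_score, win=3.0, draw=1.0, loss=0.0):
    """Points earned by a team for one match under a configurable scoring system."""
    if team_score > opponent_score:
        return win
    if team_score == opponent_score:
        return draw
    return loss

# A team that wins, draws and loses once collects 4 points under the current system.
season = [(2, 0), (1, 1), (0, 3)]
total = sum(match_points(t, o) for t, o in season)
print(total)  # 4.0
```

Passing `win=2.0` would reproduce the old scoring system simulated later in this notebook.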
The previous points were computed including VAR-refused goals.
Filtering out these goals gives the actual rankings of the teams, as you would find on any sports website.
```
m["Actual Points"] = tt.filter(m["Points"], lvl["IsCancelledAfterVAR"] == False)
```
And here we have our ranking. We will dive into it in the next section.
## Rankings and VAR impact
Color rules were added to show teams that benefited from the VAR in green and those who lost championship points because of it in red.
```
m["Difference in points"] = m["Actual Points"] - m["Points"]
session.visualize()
```
More than half of the teams have had their points total impacted by VAR.
Though it does not affect the top teams, it definitely has an impact on the ranking of many teams: Manchester United would have lost 2 ranks and Tottenham 4, for example!
We could also visualize the difference of points in a more graphical way:
```
session.visualize()
```
Since the rankings are computed from the goal level, we can perform any kind of simulation we want using simple UI filters.
You can filter the pivot table above to see what would happen if we only kept the first half of the games, or only the matches played at home. And if we filtered out Vardy, would Leicester lose some places?
Note that if you filter out VAR-refused goals, the `Points` measure takes the same value as `Actual Points`.
## Evolution of the rankings over time
Atoti also enables you to define cumulative sums over a hierarchy, we will use that to see how the team rankings evolved during the season.
```
m["Points cumulative sum"] = tt.agg.sum(
m["Actual Points"], scope=tt.scope.cumulative(lvl["Day"])
)
session.visualize()
```
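A cumulative sum over an ordered hierarchy behaves like `itertools.accumulate` applied day by day; a small sketch with made-up point values:

```python
from itertools import accumulate

# Hypothetical points earned on each match day.
daily_points = [3, 1, 0, 3]
running_total = list(accumulate(daily_points))
print(running_total)  # [3, 4, 4, 7]
```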
We can notice that data is missing for the 28th match of Manchester City. This is because the game was delayed due to weather, and then never played because of the COVID-19 pandemic.
## Players most impacted by the VAR
Until now we have mostly looked at results at the team level, but since the data exists at the goal level, we can also look at which players are most impacted by VAR.
```
m["Valid player goals"] = tt.filter(
m["Team Goals"], lvl["IsCancelledAfterVAR"] == False
)
session.visualize()
```
Unsurprisingly Mané is the most impacted player. He is also one of the top scorers, with only Vardy scoring more goals (you can sort on the Team Goals column to verify).
More surprisingly, Boly has had all of his goals this season cancelled by VAR, and Antonio half of them.
## Simulation of a different scoring system
Although we are all used to a scoring system giving 3 points for a victory, 1 for a tie and 0 for a lost match, this was not always the case. Before the 1990s many European leagues only gave 2 points per victory; the reason for the change was to encourage teams to score more goals during games.
The Premier League gifts us plenty of goals (take it from someone watching the French Ligue 1), but how different would the results be with the old scoring system?
atoti enables us to simulate this very easily. We simply have to create a new scenario where we replace the number of points given for a victory.
We first setup a simulation on that measure.
```
scoring_system_simulation = cube.create_parameter_simulation(
name="Scoring system simulations",
measures={"Points for victory": 3.0},
base_scenario_name="Current System",
)
```
And create a new scenario where we give it another value
```
scoring_system_simulation += ("Old system", 2.0)
```
And that's it, no need to define anything else, all the measures will be re-computed on demand with the new value in the new scenario.
Let's compare the rankings between the two scoring systems.
```
session.visualize()
session.visualize()
```
Surprisingly, giving only 2 points for a win would only have made Burnley and West Ham lose 2 ranks, with no other real impact on the standings.
<div style="text-align:center"><a href="https://www.atoti.io/?utm_source=gallery&utm_content=premier-league" target="_blank" rel="noopener noreferrer"><img src="https://data.atoti.io/notebooks/banners/discover-try.png" alt="atoti" /></a></div>
Load `SparkContext` and `SparkConf`
```
from pyspark import SparkContext
from pyspark import SparkConf
conf = SparkConf().setAppName('appName').setMaster('local')
sc = SparkContext(conf=conf)
```
First define `mapper1` for M(i,j), which extracts index j as the key;
then define `mapper2` for N(j,k), which also extracts index j as the key.
```
def mapper1(line):
wordlist = line.split(",")
maplist = []
key = wordlist.pop(2)
maplist.append((int(key),wordlist))
return maplist
def mapper2(line):
wordlist = line.split(",")
maplist = []
key = wordlist.pop(1)
maplist.append((int(key),wordlist))
return maplist
```
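Assuming input lines of the form `Matrix,row,col,value`, the mappers behave as follows (plain Python, no Spark needed; the functions are restated here so the example is self-contained):

```python
def mapper1(line):
    wordlist = line.split(",")
    key = wordlist.pop(2)   # j sits at index 2 in 'M,i,j,value'
    return [(int(key), wordlist)]

def mapper2(line):
    wordlist = line.split(",")
    key = wordlist.pop(1)   # j sits at index 1 in 'N,j,k,value'
    return [(int(key), wordlist)]

print(mapper1("M,0,1,5"))  # [(1, ['M', '0', '5'])]
print(mapper2("N,1,2,7"))  # [(1, ['N', '2', '7'])]
```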
Define the reducer, which concatenates the value lists that share the same key
```
def reducer1(x,y):
return x+y
```
Read the input file `2input.txt`
```
data = sc.textFile('2input.txt')
```
First create two empty lists M and N, which will be used to collect the elements of matrices M and N
```
M = []
N = []
```
Convert all the data into Python lists, then use an if-else to separate the elements of the M and N matrices
```
d = data.collect()
for i in range(len(d)):
if 'M' in d[i]:
M.append(d[i])
else :
N.append(d[i])
```
Once separated, convert M and N from lists back into RDDs
```
M
rddm = sc.parallelize(M)
rddn = sc.parallelize(N)
```
Elements of matrix M are transformed by `mapper1` into the form (j, [M, i, m]), taking j out as the key; similarly, elements of matrix N are transformed into (j, [N, k, n]).
```
rddm = rddm.flatMap(mapper1)
rddn = rddn.flatMap(mapper2)
```
Since we later need to multiply the corresponding matrix elements, we convert back to lists after the reduce step.
```
m = rddm.reduceByKey(reducer1).collect()
n = rddn.reduceByKey(reducer1).collect()
m
```
Next, we explain how to turn (j, (M, i, m)) and (j, (N, k, n)) into ((i, k), mn):
(1) First create an empty list named `aggregate` to collect the transformed ((i, k), mn) pairs.
(2) The outer for loop gathers the transformed ((i, k), mn) pairs for each value of j.
(3) The second level contains three loops. The first two rearrange the data in lists m and n into the form [['M','i','m'], ...] so the values are easy to look up later, with a and b serving as temporary containers for the rearranged data. Only the third loop actually performs the multiplication; each product is stored as ((i, k), mn) in the temporary container `list1`, which is appended to `aggregate` at the end of the second-level loop.
(4) This triple loop takes a very long time to run (more than a day), so it is not a good approach; due to the homework deadline we use it for now, and will add a better solution later.
```
aggregate = []
for j in range(len(m)):
    data1 = m[j][1]
    a = []
    for i in range(0, len(data1), 3):
        a.append([data1[i],data1[i+1],data1[i+2]])
    data2 = n[j][1]
    b = []
    for i in range(0, len(data2), 3):
        b.append([data2[i],data2[i+1],data2[i+2]])
    list1 = []
    for i in range(len(a)):
        mul1 = int(a[i][2])
        index1 = a[i][1]
        for k in range(len(b)):
            mul2 = int(b[k][2])
            index2 = b[k][1]
            list1.append(((index1,index2),mul1*mul2))
    aggregate.extend(list1)
```
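As noted in point (4), collecting everything to the driver and running a Python triple loop does not scale. A common alternative is to group both sides by the shared index j and multiply within each group; below is a plain-Python sketch of that idea (hypothetical helper `matmul_mapreduce`, no Spark involved). In Spark the same pattern could be expressed with a `join` on the j key followed by `reduceByKey`.

```python
from collections import defaultdict

def matmul_mapreduce(m_entries, n_entries):
    """Multiply matrices given as (row, col, value) triples, grouping on the shared index j."""
    by_j = defaultdict(lambda: ([], []))
    for i, j, v in m_entries:          # M(i, j), keyed by j
        by_j[j][0].append((i, v))
    for j, k, v in n_entries:          # N(j, k), keyed by j
        by_j[j][1].append((k, v))
    out = defaultdict(int)
    for left, right in by_j.values():  # emit ((i, k), m*n) pairs and sum per key
        for i, mv in left:
            for k, nv in right:
                out[(i, k)] += mv * nv
    return dict(out)

# [[1, 2], [3, 4]] @ [[5, 6], [7, 8]] == [[19, 22], [43, 50]]
m_entries = [(0, 0, 1), (0, 1, 2), (1, 0, 3), (1, 1, 4)]
n_entries = [(0, 0, 5), (0, 1, 6), (1, 0, 7), (1, 1, 8)]
print(matmul_mapreduce(m_entries, n_entries))
```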
`aggregate` now holds the data in the form ((index1, index2), value); all that remains is to convert `aggregate` into an RDD and apply map-reduce to complete the matrix multiplication.
```
rdda = sc.parallelize(aggregate)
```
Use `reduceByKey` to add up all the values sharing the same key (i, k).
```
outcome = rdda.reduceByKey(reducer1).collect()
```
Then sort the result with `sort()`
```
outcome.sort()
```
Open a new output file
```
file = open("mapreduce_output.txt", "w")
file.write("Output\n")
```
Convert each result to a string and write it, one by one, to the file we just created
```
for i in range(len(outcome)):
file.write(str(outcome[i][0][0]))
file.write(',')
file.write(str(outcome[i][0][1]))
file.write(',')
file.write(str(outcome[i][1]))
file.write(' \n')
file.close()
```
```
#v1
#26/10/2018
dataname="epistroma" #should match the value used to train the network, will be used to load the appropriate model
gpuid=0
patch_size=256 #should match the value used to train the network
batch_size=1 #nicer to have a single batch so that we can iteratively view the output, while not consuming too much memory
edge_weight=1
# https://github.com/jvanvugt/pytorch-unet
#torch.multiprocessing.set_start_method("fork")
import random, sys
import cv2
import glob
import math
import matplotlib.pyplot as plt
import numpy as np
import os
import scipy.ndimage
import skimage
import time
import tables
from skimage import io, morphology
from sklearn.metrics import confusion_matrix
from tensorboardX import SummaryWriter
import torch
import torch.nn.functional as F
from torch import nn
from torch.utils.data import DataLoader
from torchvision import transforms
from unet import UNet
import PIL
print(torch.cuda.get_device_properties(gpuid))
torch.cuda.set_device(gpuid)
device = torch.device(f'cuda:{gpuid}' if torch.cuda.is_available() else 'cpu')
checkpoint = torch.load(f"{dataname}_unet_best_model.pth")
#load the model, note that the paramters are coming from the checkpoint, since the architecture of the model needs to exactly match the weights saved
model = UNet(n_classes=checkpoint["n_classes"], in_channels=checkpoint["in_channels"], padding=checkpoint["padding"],depth=checkpoint["depth"],
wf=checkpoint["wf"], up_mode=checkpoint["up_mode"], batch_norm=checkpoint["batch_norm"]).to(device)
print(f"total params: \t{sum([np.prod(p.size()) for p in model.parameters()])}")
model.load_state_dict(checkpoint["model_dict"])
#this defines our dataset class which will be used by the dataloader
class Dataset(object):
def __init__(self, fname ,img_transform=None, mask_transform = None, edge_weight= False):
#nothing special here, just internalizing the constructor parameters
self.fname=fname
self.edge_weight = edge_weight
self.img_transform=img_transform
self.mask_transform = mask_transform
self.tables=tables.open_file(self.fname)
self.numpixels=self.tables.root.numpixels[:]
self.nitems=self.tables.root.img.shape[0]
self.tables.close()
self.img = None
self.mask = None
def __getitem__(self, index):
#opening should be done in __init__ but seems to be
#an issue with multithreading so doing here
if(self.img is None): #open in thread
self.tables=tables.open_file(self.fname)
self.img=self.tables.root.img
self.mask=self.tables.root.mask
#get the requested image and mask from the pytable
img = self.img[index,:,:,:]
mask = self.mask[index,:,:]
#the original U-Net paper assigns increased weights to the edges of the annotated objects
#their method is more sophisticated, but this one is faster: we simply dilate the mask and
#highlight all the pixels which were "added"
if(self.edge_weight):
weight = scipy.ndimage.morphology.binary_dilation(mask==1, iterations =2) & ~mask
else: #otherwise the edge weight is all ones and thus has no effect
weight = np.ones(mask.shape,dtype=mask.dtype)
mask = mask[:,:,None].repeat(3,axis=2) #in order to use the transformations given by torchvision
weight = weight[:,:,None].repeat(3,axis=2) #inputs need to be 3D, so here we convert from 1d to 3d by repetition
img_new = img
mask_new = mask
weight_new = weight
seed = random.randrange(sys.maxsize) #get a random seed so that we can reproducibly do the transformations
if self.img_transform is not None:
random.seed(seed) # apply this seed to img transforms
img_new = self.img_transform(img)
if self.mask_transform is not None:
random.seed(seed)
mask_new = self.mask_transform(mask)
mask_new = np.asarray(mask_new)[:,:,0].squeeze()
random.seed(seed)
weight_new = self.mask_transform(weight)
weight_new = np.asarray(weight_new)[:,:,0].squeeze()
return img_new, mask_new, weight_new
def __len__(self):
return self.nitems
#note that since we need the transformations to be reproducible for both masks and images
#we do the spatial transformations first, and afterwards do any color augmentations
#in the case of using this for output generation, we want to use the original images since they will give a better sense of the expected
#output when used on the rest of the dataset; as a result, we disable all unnecessary augmentation.
#the only component that remains here is the random crop, to ensure that regardless of the size of the image
#in the database, we extract an appropriately sized patch
img_transform = transforms.Compose([
transforms.ToPILImage(),
#transforms.RandomVerticalFlip(),
#transforms.RandomHorizontalFlip(),
transforms.RandomCrop(size=(patch_size,patch_size),pad_if_needed=True), #these need to be in a reproducible order, first affine transforms and then color
#transforms.RandomResizedCrop(size=patch_size),
#transforms.RandomRotation(180),
#transforms.ColorJitter(brightness=0, contrast=0, saturation=0, hue=.5),
#transforms.RandomGrayscale(),
transforms.ToTensor()
])
mask_transform = transforms.Compose([
transforms.ToPILImage(),
#transforms.RandomVerticalFlip(),
#transforms.RandomHorizontalFlip(),
transforms.RandomCrop(size=(patch_size,patch_size),pad_if_needed=True), #these need to be in a reproducible order, first affine transforms and then color
#transforms.RandomResizedCrop(size=patch_size,interpolation=PIL.Image.NEAREST),
#transforms.RandomRotation(180),
])
phases=["val"]
dataset={}
dataLoader={}
for phase in phases:
dataset[phase]=Dataset(f"./{dataname}_{phase}.pytable", img_transform=img_transform , mask_transform = mask_transform ,edge_weight=edge_weight)
dataLoader[phase]=DataLoader(dataset[phase], batch_size=batch_size,
shuffle=True, num_workers=0, pin_memory=True) #,pin_memory=True)
%matplotlib inline
#set the model to evaluation mode, since we're only generating output and not doing any back propagation
model.eval()
for ii , (X, y, y_weight) in enumerate(dataLoader["val"]):
X = X.to(device) # [NBATCH, 3, H, W]
y = y.type('torch.LongTensor').to(device) # [NBATCH, H, W] with class indices (0, 1)
output = model(X) # [NBATCH, 2, H, W]
output=output.detach().squeeze().cpu().numpy() #get output and pull it to CPU
output=np.moveaxis(output,0,-1) #reshape moving last dimension
fig, ax = plt.subplots(1,4, figsize=(10,4)) # 1 row, 2 columns
ax[0].imshow(output[:,:,1])
ax[1].imshow(np.argmax(output,axis=2))
ax[2].imshow(y.detach().squeeze().cpu().numpy())
ax[3].imshow(np.moveaxis(X.detach().squeeze().cpu().numpy(),0,-1))
```
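The edge-weighting trick inside `Dataset.__getitem__` above (dilate the mask, then keep only the pixels the dilation added) can be illustrated in 1-D without scipy; `edge_weight_1d` is a hypothetical helper written only for this sketch:

```python
def edge_weight_1d(mask, iterations=2):
    """Binary-dilate `mask` and return only the newly added border pixels,
    mimicking binary_dilation(mask == 1, iterations=2) & ~mask in 1-D."""
    dilated = list(mask)
    for _ in range(iterations):
        prev = list(dilated)
        for i in range(len(prev)):
            left = i > 0 and prev[i - 1]
            right = i + 1 < len(prev) and prev[i + 1]
            if prev[i] or left or right:
                dilated[i] = 1
    # Keep only pixels that are in the dilated mask but not the original one.
    return [d & (1 - m) for d, m in zip(dilated, mask)]

print(edge_weight_1d([0, 0, 1, 1, 0, 0, 0], iterations=1))  # [0, 1, 0, 0, 1, 0, 0]
```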
# Objective
Perform the exploratory data analysis (EDA) to find insights in the AWS pricing data
# Code
## Load libs
```
import sys
sys.path.append('..')
import random
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from src.data.helpers import load_aws_dataset
```
## Input params
```
interim_dir = '../data/interim'
in_fname = 'step_1_aws_filtered_sample.csv.zip'
compression = 'zip'
# Papermill parameters injection ... do not delete!
```
## Load data
```
file = f'{interim_dir}/{in_fname}'
data = load_aws_dataset(file)
print(data.shape)
data.head()
```
## Data wrangling
Let's find something interesting in the data!
- Look for the most volatile instances, i.e., those with the most price changes, so we can avoid them;
- The least volatile instances;
- The longest times between price updates;
### Data check-up for nulls and missing values
Assumptions:
- Considered region: us-east-1a (Virginia);
- Check for all-null columns (i.e., instances with no price change at all);
```
%%time
df = data.query('AvailabilityZone == "us-east-1a"')\
.drop('AvailabilityZone', axis=1)
print(df.shape)
# Pivot table to change a wide format for the data. Thus, we can remove
# instances that do not have any price update.
# Dropping MultiIndex column 'SpotPrice' as there is no use for it.
pvt = df.pivot_table(index=['Timestamp'],
columns=['InstanceType'])\
.droplevel(0, axis=1)
pvt.head()
# Checking if there is any column with only 'NaN'
# Returns an empty result, meaning that every column has at least one price update
pvt.isna().all(axis=0).loc[lambda x: x.isna()]
# Cross-check to see if this is correct, getting a sample to confirm this
# using instance 'a1.2xlarge'
pvt['a1.2xlarge'].dropna().head()
# Picking random instance and checking if the values are not null
# just for sanity check.
for i in range(5):
rand_instance = random.randint(0, len(pvt.columns))
tmp = pvt.iloc[rand_instance].dropna().head()
print(tmp)
```
### Most volatile instances
```
# Now getting the most volatile instances
most_volatiles = pvt.count().sort_values(ascending=False).nlargest(10)
most_volatiles
# Let's quickly plot to see the pricing trends
# Some normalization is required:
# 1. Remove rows with only NaN (not columns, otherwise it will remove all pricing!);
# 2. There are gaps in the pricing. This happens because if there is no pricing
# update, then there is no price capture. Thus, we can safely use backwards fill
# to fill the missing values
fig, ax = plt.subplots(figsize=(12, 6))
pvt.loc[:, most_volatiles.index.to_list()]\
.dropna(how='all', axis=0)\
.fillna(method='bfill').plot(ax=ax)
ax.set_title('Top 10 most volatile instances')
ax.set_ylabel('Hourly Price (USD)')
ax.legend(loc='lower center', ncol=5, bbox_to_anchor=(0.5, -0.35))
```
### Least volatile instances
```
# Now getting the least volatile instances
least_volatiles = pvt.count().sort_values(ascending=False).nsmallest(10)
least_volatiles
fig, ax = plt.subplots(figsize=(12, 6))
pvt.loc[:, least_volatiles.index.to_list()]\
.dropna(how='all', axis=0)\
.fillna(method='bfill').plot(ax=ax)
ax.set_title('Top 10 least volatile instances')
ax.set_ylabel('Hourly Price (USD)')
ax.legend(loc='lower center', ncol=5, bbox_to_anchor=(0.5, -0.35))
```
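The backward fill used in the plots above can be sketched in plain Python, filling each gap with the next observed price (`backward_fill` is a hypothetical helper mimicking pandas' `bfill`):

```python
def backward_fill(prices):
    """Replace None entries with the next non-None value, like pandas' bfill."""
    filled = list(prices)
    next_seen = None
    for i in range(len(filled) - 1, -1, -1):  # walk from the end backwards
        if filled[i] is None:
            filled[i] = next_seen
        else:
            next_seen = filled[i]
    return filled

print(backward_fill([None, 0.12, None, None, 0.15]))  # [0.12, 0.12, 0.15, 0.15, 0.15]
```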
# CirComPara Pipeline
To demonstrate Dugong's effectiveness in distributing and running bioinformatics tools in alternative computational environments, the CirComPara pipeline was implemented in a Dugong container and tested on different operating systems with the aid of virtual machines (VM) or cloud computing servers.
CirComPara is a computational pipeline to detect, quantify, and correlate the expression of linear and circular RNAs from RNA-seq data. It is a highly complex pipeline that employs a series of bioinformatics tools and was originally designed to run on Ubuntu Server 16.04 LTS (x64).
Although authors provide details regarding the expected versions of each software and their dependency requirements, several problems can still be encountered during CirComPara implementation by inexperienced users.
See documentation for CirComPara installation details: https://github.com/egaffo/CirComPara
-----------------------------------------------------------------------------------------------------------------------
## Pipeline steps
- The test data is already unpacked and available in the path: **/headless/CirComPara/test_circompara/**
- The **meta.csv** and **vars.py** files are already configured to run CirComPara, as documented: https://github.com/egaffo/CirComPara
- Defining the folder for the analysis with the CirComPara of the test data provided by the developers of the tool:
```
from functools import partial
from os import chdir
chdir('/headless/CirComPara/test_circompara/analysis')
```
- Viewing files from /headless/CirComPara/test_circompara/
```
from IPython.display import FileLinks, FileLink
FileLinks('/headless/CirComPara/test_circompara/')
```
- Viewing the contents of the configuration file: vars.py
```
!cat /headless/CirComPara/test_circompara/analysis/vars.py
```
- Viewing the contents of the configuration file: meta.csv
```
!cat /headless/CirComPara/test_circompara/analysis/meta.csv
```
- Running CirComPara with the test data
```
!../../circompara
```
-----------------------------------------------------------------------------------------------------------------------
## Results:
- Viewing output files after running CirComPara:
```
from IPython.display import FileLinks, FileLink
FileLinks('/headless/CirComPara/test_circompara/analysis/')
```
- Viewing graphic files after running CirComPara:
```
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/corr_density_plot-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/cumulative_expression_box-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/show_circrnas_per_method-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circrnas_per_gene-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/correlations_box-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circrnas_2reads_2methods_sample-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_circ_gene_expr-1.png")
from IPython.display import Image
Image("/headless/CirComPara/test_circompara/analysis/circrna_analyze/Figs/plot_gene_expressed_by_sample-1.png")
```
-----------------------------------------------------------------------------------------------------------------------
**NOTE:** This pipeline is just an example of what you can do with Dugong. I
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Load NumPy data
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/load_data/numpy"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/load_data/numpy.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/pt-br/tutorials/load_data/numpy.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/pt-br/tutorials/load_data/numpy.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
This tutorial provides an example of loading data from NumPy arrays into a `tf.data.Dataset`.
This example loads the MNIST dataset from a `.npz` file. However, the source of the NumPy arrays is not important.
## Setup
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
from __future__ import absolute_import, division, print_function, unicode_literals
import numpy as np
import tensorflow as tf
```
### Load from a `.npz` file
```
DATA_URL = 'https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz'
path = tf.keras.utils.get_file('mnist.npz', DATA_URL)
with np.load(path) as data:
train_examples = data['x_train']
train_labels = data['y_train']
test_examples = data['x_test']
test_labels = data['y_test']
```
## Load NumPy arrays with `tf.data.Dataset`
Assuming you have an array of examples and a corresponding array of labels, pass the two arrays as a tuple into `tf.data.Dataset.from_tensor_slices` to create a `tf.data.Dataset`.
```
train_dataset = tf.data.Dataset.from_tensor_slices((train_examples, train_labels))
test_dataset = tf.data.Dataset.from_tensor_slices((test_examples, test_labels))
```
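Conceptually, `from_tensor_slices` on a tuple of arrays yields one (example, label) pair per row, much like Python's `zip`; a plain-Python sketch with toy data:

```python
examples = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
labels = [0, 1, 0]

# Each element of the dataset corresponds to one (example, label) slice.
pairs = list(zip(examples, labels))
print(pairs[0])  # ([0.1, 0.2], 0)
```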
## Use the datasets
### Shuffle and batch the datasets
```
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = 100
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)
```
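Batching slices the dataset into fixed-size chunks; a stdlib sketch of the same idea (the helper name `batched` is ours):

```python
def batched(seq, batch_size):
    """Yield consecutive chunks of `seq` of length `batch_size` (the last may be shorter)."""
    for start in range(0, len(seq), batch_size):
        yield seq[start:start + batch_size]

batches = list(batched(list(range(10)), 4))
print(batches)  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```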
### Build and train a model
```
model = tf.keras.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['sparse_categorical_accuracy'])
model.fit(train_dataset, epochs=10)
model.evaluate(test_dataset)
```
# Generic Integration With Credo AI's Governance App
Lens is primarily a framework for comprehensive assessment of AI models. In addition, it is the primary way to integrate assessment analysis with Credo AI's Governance App.
In this tutorial, we will take a model created and assessed _completely independently of Lens_ and send that data to Credo AI's Governance App
### Find the code
This notebook can be found on [github](https://github.com/credo-ai/credoai_lens/blob/develop/docs/notebooks/integration_demo.ipynb).
## Create an example ML Model
```
import numpy as np
from matplotlib import pyplot as plt
from pprint import pprint
from sklearn.model_selection import train_test_split
from sklearn import datasets
from sklearn.svm import SVC
from sklearn.metrics import classification_report
from sklearn.metrics import precision_recall_curve
```
### Load data and train model
For the purpose of this demonstration, we will be classifying digits after a large amount of noise has been added to each image.
We'll create some charts and assessment metrics to reflect our work.
```
# load data
digits = datasets.load_digits()
# add noise
digits.data += np.random.rand(*digits.data.shape)*16
# split into train and test
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
# create and fit model
clf = SVC(probability=True)
clf.fit(X_train, y_train)
```
### Visualize example images along with predicted label
```
examples_plot = plt.figure()
for i in range(8):
image_data = X_test[i,:]
prediction = digits.target_names[clf.predict(image_data[None,:])[0]]
label = f'Pred: "{prediction}"'
# plot
ax = plt.subplot(2,4,i+1)
ax.imshow(image_data.reshape(8,8), cmap='gray')
ax.set_title(label)
ax.tick_params(labelbottom=False, labelleft=False, length=0)
plt.suptitle('Example Images and Predictions', fontsize=16)
```
### Calculate performance metrics and visualize
As a multiclass classification problem, we can calculate metrics per class or overall. We record the overall metrics, but include figures breaking down performance by class.
```
metrics = classification_report(y_test, clf.predict(X_test), output_dict=True)
overall_metrics = metrics['macro avg']
del overall_metrics['support']
pprint(overall_metrics)
probs = clf.predict_proba(X_test)
pr_curves = plt.figure(figsize=(8,6))
# plot PR curve sper digit
for digit in digits.target_names:
y_true = y_test == digit
y_prob = probs[:,digit]
precisions, recalls, thresholds = precision_recall_curve(y_true, y_prob)
plt.plot(recalls, precisions, lw=3, label=f'Digit: {digit}')
plt.xlabel('Recall', fontsize=16)
plt.ylabel('Precision', fontsize=16)
# plot iso lines
f_scores = np.linspace(0.2, 0.8, num=4)
lines = []
labels = []
for f_score in f_scores:
label = label='ISO f1 curves' if f_score==f_scores[0] else ''
x = np.linspace(0.01, 1)
y = f_score * x / (2 * x - f_score)
l, = plt.plot(x[y >= 0], y[y >= 0], color='gray', alpha=0.2, label=label)
# final touches
plt.xlim([0.5, 1.0])
plt.ylim([0.0, 1.05])
plt.tick_params(labelsize=14)
plt.title('PR Curves per Digit', fontsize=20)
plt.legend(loc='lower left', fontsize=10)
from sklearn.metrics import plot_confusion_matrix
confusion_plot = plt.figure(figsize=(6,6))
plot_confusion_matrix(clf, X_test, y_test, \
normalize='true', ax=plt.gca(), colorbar=False)
plt.tick_params(labelsize=14)
```
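The `macro avg` used above is simply the unweighted mean of per-class scores; a small sketch with hypothetical per-class precision/recall values:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall (0 when both are 0)."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical per-class (precision, recall) pairs.
per_class = [(1.0, 1.0), (0.5, 0.5)]
macro_f1 = sum(f1(p, r) for p, r in per_class) / len(per_class)
print(macro_f1)  # 0.75
```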
## Sending assessment information to Credo AI
Now that we have completed training and assessing the model, we will demonstrate how information can be sent to the Credo AI Governance App. Metrics related to performance, fairness, or other governance considerations are the most important kind of evidence needed for governance.
In addition, figures are often produced that help communicate metrics better, aid understanding of the model, or otherwise contextualize the AI system. Credo AI can ingest those as well.
**Which metrics to record?**
Ideally you will have decided on the most important metrics before building the model. We refer to this stage as `Metric Alignment`. This is the phase where your team explicitly determines how you will measure whether your model can be safely deployed. It is part of the more general `Alignment Stage`, which often requires input from multiple stakeholders outside of the team specifically involved in the development of the AI model.
Of course, you may want to record more metrics than those explicitly determined during `Metric Alignment`.
For instance, in this example let's say that during `Metric Alignment`, the _F1 score_ was chosen as the primary metric used to evaluate model performance. However, we have decided that recall and precision would be helpful supporting metrics. So we will send those three metrics.
To reiterate: You are always free to send more metrics - Credo AI will ingest them. It is you and your team's decision which metrics are tracked specifically for governance purposes.
```
import credoai.integration as ci
from credoai.utils import list_metrics
model_name = 'SVC'
dataset_name = 'sklearn_digits'
```
## Quick reference
Below is all the code needed to record a set of metrics and figures. We will unpack each part below.
```
# metrics
metric_records = ci.record_metrics_from_dict(overall_metrics,
model_label=model_name,
dataset_label=dataset_name)
#figures
example_figure_record = ci.Figure(examples_plot._suptitle.get_text(), examples_plot)
confusion_figure_record = ci.Figure(confusion_plot.axes[0].get_title(), confusion_plot)
pr_curve_caption="""Precision-recall curves are shown for each digit separately.
These are calculated by treating each class as a separate
binary classification problem. The grey lines are
ISO f1 curves - all points on each curve have identical
f1 scores.
"""
pr_curve_figure_record = ci.Figure(pr_curves.axes[0].get_title(),
figure=pr_curves,
caption=pr_curve_caption)
figure_records = ci.MultiRecord([example_figure_record, confusion_figure_record, pr_curve_figure_record])
# export to file
# ci.export_to_file(metric_records, 'metric_records.json')
```
## Metric Record
To record a metric you can either record each one manually or ingest a dictionary of metrics.
### Manually entering individual metrics
```
f1_description = """Harmonic mean of precision and recall scores.
Ranges from 0-1, with 1 being perfect performance."""
f1_record = ci.Metric(metric_type='f1',
value=overall_metrics['f1-score'],
model_label=model_name,
dataset_label=dataset_name)
precision_record = ci.Metric(metric_type='precision',
value=overall_metrics['precision'],
model_label=model_name,
dataset_label=dataset_name)
recall_record = ci.Metric(metric_type='recall',
value=overall_metrics['recall'],
model_label=model_name,
dataset_label=dataset_name)
metrics = [f1_record, precision_record, recall_record]
```
### Convenience to record multiple metrics
Multiple metrics can be recorded in one call as long as they are provided as a dictionary.
```
metric_records = ci.record_metrics_from_dict(overall_metrics,
                                             model_label=model_name,
                                             dataset_label=dataset_name)
```
## Record figures
Credo can accept a path to an image file or a matplotlib figure. Matplotlib figures are converted to PNG images and saved.
A caption can be included for further description. Including a caption is recommended when the image is not self-explanatory, which is most of the time!
```
example_figure_record = ci.Figure(examples_plot._suptitle.get_text(), examples_plot)
confusion_figure_record = ci.Figure(confusion_plot.axes[0].get_title(), confusion_plot)
pr_curve_caption="""Precision-recall curves are shown for each digit separately.
These are calculated by treating each class as a separate
binary classification problem. The grey lines are
ISO f1 curves - all points on each curve have identical
f1 scores.
"""
pr_curve_figure_record = ci.Figure(pr_curves.axes[0].get_title(),
figure=pr_curves,
                                   caption=pr_curve_caption)
figure_records = [example_figure_record, confusion_figure_record, pr_curve_figure_record]
```
## MultiRecords
To send all the information, we bundle the records in a MultiRecord, which groups records of the same type.
```
metric_records = ci.MultiRecord(metric_records)
figure_records = ci.MultiRecord(figure_records)
```
## Export to Credo AI
The json object of the model record can be created by calling `MultiRecord.jsonify()`. The convenience function `export_to_file` can be called to export the json record to a file. This file can then be uploaded to Credo AI's Governance App.
```
# filename is the location to save the json object of the model record
# filename="XXX.json"
# ci.export_to_file(metric_records, filename)
```
MultiRecords can be directly uploaded to Credo AI's Governance App as well. A model (or data) ID must be known to do so. You use `export_to_credo` to accomplish this.
```
# model_id = "XXX"
# ci.export_to_credo(metric_records, model_id)
```
<a href="https://colab.research.google.com/github/araffin/rl-tutorial-jnrr19/blob/master/4_callbacks_hyperparameter_tuning.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Stable Baselines Tutorial - Callbacks and hyperparameter tuning
Github repo: https://github.com/araffin/rl-tutorial-jnrr19
Stable-Baselines: https://github.com/hill-a/stable-baselines
Documentation: https://stable-baselines.readthedocs.io/en/master/
RL Baselines zoo: https://github.com/araffin/rl-baselines-zoo
## Introduction
In this notebook, you will learn how to use *Callbacks*, which allow you to do monitoring, auto-saving, model manipulation, progress bars, and more.
You will also see that finding good hyperparameters is key to success in RL.
## Install Dependencies and Stable Baselines Using Pip
```
# !apt install swig
# !pip install tqdm==4.36.1
# !pip install stable-baselines[mpi]==2.8.0
# # Stable Baselines only supports tensorflow 1.x for now
# %tensorflow_version 1.x
%load_ext autoreload
%autoreload 2
try:
    %tensorflow_version 1.x
except Exception:
    pass
import os
import sys
lib_path = os.path.abspath('../..')
print('inserting the following to path',lib_path)
if lib_path not in sys.path:
sys.path.insert(0,lib_path)
print(sys.path)
#-----------------------------------
# Filter tensorflow version warnings
#-----------------------------------
# https://stackoverflow.com/questions/40426502/is-there-a-way-to-suppress-the-messages-tensorflow-prints/40426709
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' # or any {'0', '1', '2'}
import warnings
# https://stackoverflow.com/questions/15777951/how-to-suppress-pandas-future-warning
warnings.simplefilter(action='ignore', category=FutureWarning)
warnings.simplefilter(action='ignore', category=Warning)
import tensorflow as tf
tf.get_logger().setLevel('INFO')
tf.autograph.set_verbosity(0)
import logging
tf.get_logger().setLevel(logging.ERROR)
# from IPython.core.debugger import set_trace # GK debug
# sys.path.pop(0)
# print(sys.path)
import gym
from stable_baselines import A2C, SAC, PPO2, TD3
```
# The importance of hyperparameter tuning
When compared with Supervised Learning, Deep Reinforcement Learning is far more sensitive to the choice of hyper-parameters such as the learning rate, number of neurons, number of layers, and optimizer.
Poor choice of hyper-parameters can lead to poor/unstable convergence. This challenge is compounded by the variability in performance across random seeds (used to initialize the network weights and the environment).
Here we demonstrate, on a toy example, the [Soft Actor Critic](https://arxiv.org/abs/1801.01290) algorithm applied to the Pendulum environment. Note the change in performance between the default and "tuned" parameters.
```
import numpy as np
def evaluate(model, env, num_episodes=100):
# This function will only work for a single Environment
all_episode_rewards = []
for i in range(num_episodes):
episode_rewards = []
done = False
obs = env.reset()
while not done:
action, _states = model.predict(obs)
obs, reward, done, info = env.step(action)
episode_rewards.append(reward)
all_episode_rewards.append(sum(episode_rewards))
mean_episode_reward = np.mean(all_episode_rewards)
return mean_episode_reward
eval_env = gym.make('Pendulum-v0')
default_model = SAC('MlpPolicy', 'Pendulum-v0', verbose=1).learn(8000)
evaluate(default_model, eval_env, num_episodes=100)
tuned_model = SAC('MlpPolicy', 'Pendulum-v0', batch_size=256, verbose=1, policy_kwargs=dict(layers=[256, 256])).learn(8000)
evaluate(tuned_model, eval_env, num_episodes=100)
```
Exploring hyperparameter tuning is out of the scope (and time schedule) of this tutorial. However, you should know that we provide tuned hyperparameters in the [rl zoo](https://github.com/araffin/rl-baselines-zoo) as well as automatic hyperparameter optimization using [Optuna](https://github.com/pfnet/optuna).
<font color='red'> TODO : have a deeper look at the above links </font>
## Helper functions
This is to help the callbacks store variables (as they are functions), but this could also be done by passing a class method.
```
def get_callback_vars(model, **kwargs):
"""
Helps store variables for the callback functions
:param model: (BaseRLModel)
:param **kwargs: initial values of the callback variables
"""
# save the called attribute in the model
if not hasattr(model, "_callback_vars"):
model._callback_vars = dict(**kwargs)
else: # check all the kwargs are in the callback variables
for (name, val) in kwargs.items():
if name not in model._callback_vars:
model._callback_vars[name] = val
return model._callback_vars # return dict reference (mutable)
```
# Callbacks
## A functional approach
A callback function takes the `locals()` variables and the `globals()` variables from the model, then returns a boolean value for whether or not the training should continue.
Thanks to access to the model's variables, in particular `_locals["self"]`, we can even change the parameters of the model without halting the training or modifying the model's code.
Here we have a simple callback that can only be called twice:
```
def simple_callback(_locals, _globals):
"""
    Callback called at each step (for DQN and others) or every n steps (see ACER or PPO2)
:param _locals: (dict)
:param _globals: (dict)
"""
    # get callback variables, with default values if uninitialized
callback_vars = get_callback_vars(_locals["self"], called=False)
if not callback_vars["called"]:
print("callback - first call")
callback_vars["called"] = True
return True # returns True, training continues.
else:
print("callback - second call")
return False # returns False, training stops.
model = SAC('MlpPolicy', 'Pendulum-v0', verbose=1)
model.learn(8000, callback=simple_callback)
```
## First example: Auto saving best model
In RL, it is quite useful to keep a clean version of a model as you are training, as we can end up with burn-in of a bad policy. This is a typical use case for callbacks, as they can call the save function of the model and observe the training over time.
Using the monitoring wrapper, we can save statistics of the environment, and use them to determine the mean training reward.
This allows us to save the best model while training.
Note that this is not the proper way of evaluating an RL agent: you should create a separate test environment and evaluate the agent's performance in the callback. For simplicity, we will be using the training reward as a proxy.
```
import os
import numpy as np
from stable_baselines.bench import Monitor
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines.results_plotter import load_results, ts2xy
def auto_save_callback(_locals, _globals):
"""
    Callback called at each step (for DQN and others) or every n steps (see ACER or PPO2)
:param _locals: (dict)
:param _globals: (dict)
"""
    # get callback variables, with default values if uninitialized
callback_vars = get_callback_vars(_locals["self"], n_steps=0, best_mean_reward=-np.inf)
# skip every 20 steps
if callback_vars["n_steps"] % 20 == 0:
# Evaluate policy training performance
x, y = ts2xy(load_results(log_dir), 'timesteps')
if len(x) > 0:
mean_reward = np.mean(y[-100:])
# New best model, you could save the agent here
if mean_reward > callback_vars["best_mean_reward"]:
callback_vars["best_mean_reward"] = mean_reward
# Example for saving best model
print("Saving new best model at {} timesteps".format(x[-1]))
_locals['self'].save(log_dir + 'best_model')
callback_vars["n_steps"] += 1
return True
# Create log dir
log_dir = "/tmp/gym"
os.makedirs(log_dir, exist_ok=True)
# Create and wrap the environment
env = gym.make('CartPole-v1')
env = Monitor(env, log_dir, allow_early_resets=True)
env = DummyVecEnv([lambda: env])
model = A2C('MlpPolicy', env, verbose=0)
model.learn(total_timesteps=10000, callback=auto_save_callback)
```
## Second example: Realtime plotting of performance
While training, it is sometimes useful to see how the training progresses over time, in terms of episodic reward.
For this, Stable-Baselines has [Tensorboard support](https://stable-baselines.readthedocs.io/en/master/guide/tensorboard.html); however, this can be very cumbersome, especially in disk space usage.
**NOTE: Unfortunately live plotting does not work out of the box on google colab**
Here, we can use callback again, to plot the episodic reward in realtime, using the monitoring wrapper:
```
import matplotlib.pyplot as plt
import numpy as np
%matplotlib notebook
def plotting_callback(_locals, _globals):
"""
    Callback called at each step (for DQN and others) or every n steps (see ACER or PPO2)
:param _locals: (dict)
:param _globals: (dict)
"""
    # get callback variables, with default values if uninitialized
callback_vars = get_callback_vars(_locals["self"], plot=None)
# get the monitor's data
x, y = ts2xy(load_results(log_dir), 'timesteps')
if callback_vars["plot"] is None: # make the plot
plt.ion()
fig = plt.figure(figsize=(6,3))
ax = fig.add_subplot(111)
line, = ax.plot(x, y)
callback_vars["plot"] = (line, ax, fig)
plt.show()
else: # update and rescale the plot
callback_vars["plot"][0].set_data(x, y)
callback_vars["plot"][-2].relim()
callback_vars["plot"][-2].set_xlim([_locals["total_timesteps"] * -0.02,
_locals["total_timesteps"] * 1.02])
callback_vars["plot"][-2].autoscale_view(True,True,True)
callback_vars["plot"][-1].canvas.draw()
# Create log dir
log_dir = "/tmp/gym/"
os.makedirs(log_dir, exist_ok=True)
# Create and wrap the environment
env = gym.make('MountainCarContinuous-v0')
env = Monitor(env, log_dir, allow_early_resets=True)
env = DummyVecEnv([lambda: env])
model = PPO2('MlpPolicy', env, verbose=0)
model.learn(20000, callback=plotting_callback)
```
## Third example: Progress bar
Quality-of-life improvements are always welcome when developing and using RL. Here, we use [tqdm](https://tqdm.github.io/) to show a progress bar of the training, along with the number of timesteps per second and the estimated time remaining:
```
from tqdm.auto import tqdm
# this callback uses the 'with' block, allowing for correct initialisation and destruction
class progressbar_callback(object):
def __init__(self, total_timesteps): # init object with total timesteps
self.pbar = None
self.total_timesteps = total_timesteps
def __enter__(self): # create the progress bar and callback, return the callback
self.pbar = tqdm(total=self.total_timesteps)
def callback_progressbar(local_, global_):
self.pbar.n = local_["self"].num_timesteps
self.pbar.update(0)
return callback_progressbar
def __exit__(self, exc_type, exc_val, exc_tb): # close the callback
self.pbar.n = self.total_timesteps
self.pbar.update(0)
self.pbar.close()
model = TD3('MlpPolicy', 'Pendulum-v0', verbose=0)
with progressbar_callback(2000) as callback:  # the with block guarantees that the tqdm progress bar closes correctly
model.learn(2000, callback=callback)
```
## Fourth example: Composition
Thanks to the functional nature of callbacks, it is possible to compose several callbacks into a single one. This means we can auto-save our best model, show the progress bar and plot the episodic reward of the training:
```
%matplotlib notebook
def compose_callback(*callback_funcs): # takes a list of functions, and returns the composed function.
def _callback(_locals, _globals):
continue_training = True
for cb_func in callback_funcs:
if cb_func(_locals, _globals) is False: # as a callback can return None for legacy reasons.
continue_training = False
return continue_training
return _callback
# Create log dir
log_dir = "/tmp/gym/"
os.makedirs(log_dir, exist_ok=True)
# Create and wrap the environment
env = gym.make('CartPole-v1')
env = Monitor(env, log_dir, allow_early_resets=True)
env = DummyVecEnv([lambda: env])
model = PPO2('MlpPolicy', env, verbose=0)
with progressbar_callback(10000) as progress_callback:
model.learn(10000, callback=compose_callback(progress_callback, plotting_callback, auto_save_callback))
```
## Exercise: Code your own callback
The previous examples showed the basics of what a callback is and what you can do with it.
The goal of this exercise is to create a callback that will evaluate the model using a test environment and save it if this is the best known model.
To make things easier, we are going to use a class with the magic method `__call__` instead of a plain function.
```
class EvalCallback(object):
"""
Callback for evaluating an agent.
:param eval_env: (gym.Env) The environment used for initialization
:param n_eval_episodes: (int) The number of episodes to test the agent
:param eval_freq: (int) Evaluate the agent every eval_freq call of the callback.
"""
def __init__(self, eval_env, n_eval_episodes=5, eval_freq=20):
super(EvalCallback, self).__init__()
self.eval_env = eval_env
self.n_eval_episodes = n_eval_episodes
self.eval_freq = eval_freq
self.n_calls = 0
self.best_mean_reward = -np.inf
def _evaluate(self,model):
# This function will only work for a single Environment
all_episode_rewards = []
for i in range(self.n_eval_episodes):
episode_rewards = []
done = False
obs = self.eval_env.reset()
while not done:
action, _states = model.predict(obs)
obs, reward, done, info = self.eval_env.step(action)
episode_rewards.append(reward)
all_episode_rewards.append(sum(episode_rewards))
mean_episode_reward = np.mean(all_episode_rewards)
return mean_episode_reward
def __call__(self, locals_, globals_):
"""
        This method will be called by the model. It is the equivalent of the callback function
        used in the previous examples.
:param locals_: (dict)
:param globals_: (dict)
:return: (bool)
"""
# Get the self object of the model
self_ = locals_['self']
if self.n_calls % self.eval_freq == 0:
# === YOUR CODE HERE ===#
# Evaluate the agent:
# you need to do self.n_eval_episodes loop using self.eval_env
# hint: you can use self_.predict(obs)
mean_episode_reward = self._evaluate(self_)
# Save the agent if needed
# and update self.best_mean_reward
if mean_episode_reward > self.best_mean_reward:
self.best_mean_reward = mean_episode_reward
print("Best mean reward: {:.2f}".format(self.best_mean_reward))
self_.save(log_dir + 'best_model')
# ====================== #
self.n_calls += 1
return True
```
### Test your callback
```
# Env used for training
env = gym.make("CartPole-v1")
# Env for evaluating the agent
eval_env = gym.make("CartPole-v1")
# === YOUR CODE HERE ===#
# Create log dir - do I need it ?
log_dir = "/tmp/gym/1"
os.makedirs(log_dir, exist_ok=True)
# Create the callback object - do I need to wrap with Monitor ? else, how do I pass the logdir?
# I could add it to EvallCallback constructor?
# eval_env = Monitor(eval_env, log_dir, allow_early_resets=True)
callback = EvalCallback(eval_env)
# Create the RL model
model = PPO2('MlpPolicy', env, verbose=0)
# ====================== #
# Train the RL model
model.learn(int(100000), callback=callback)
```
# Conclusion
In this notebook we have seen:
- that good hyperparameters are key to the success of RL; you should not expect the default ones to work on every problem
- what a callback is and what you can do with it
- how to create your own callback
# Introduction
<hr style="border:2px solid black"> </hr>
**What?** `__call__` method
# Definition
<hr style="border:2px solid black"> </hr>
- `__call__` is a built-in method which lets you write classes whose instances behave like functions and can be called like a function.
- In practice: `object()` is shorthand for `object.__call__()`
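As a minimal sketch (the `Greeter` class here is purely illustrative):

```python
class Greeter:
    def __call__(self, name):
        return "hello, " + name

g = Greeter()
# calling the instance and calling __call__ explicitly are equivalent
assert g("world") == g.__call__("world") == "hello, world"
```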
# `__call__` vs. `__init__`
<hr style="border:2px solid black"> </hr>
- `__init__()` is the constructor/initializer that builds an instance of a class, whereas `__call__` makes such an instance callable as a function.
- Technically, `__init__` is called once by `__new__` when an object is created, so that the new instance can be initialised.
- But there are many scenarios where you might want to re-initialise an object: say you are done with your object and find a need for a fresh one. With `__call__` you can redefine the same object as if it were new.
# Example #1
<hr style="border:2px solid black"> </hr>
```
class Example():
def __init__(self):
print("Instance created")
# Defining __call__ method
def __call__(self):
print("Instance is called via special method __call__")
e = Example()
e.__init__()
e.__call__()
```
# Example #2
<hr style="border:2px solid black"> </hr>
```
class Product():
def __init__(self):
print("Instance created")
# Defining __call__ method
def __call__(self, a, b):
print("Instance is called via special method __call__")
print(a*b)
p = Product()
p.__init__()
# Is being call like if p was a function
p(2,3)
# The cell above is equivalent to this call
p.__call__(2,3)
```
# Example #3
<hr style="border:2px solid black"> </hr>
```
class Stuff(object):
def __init__(self, x, y, Range):
super(Stuff, self).__init__()
self.x = x
self.y = y
self.Range = Range
def __call__(self, x, y):
self.x = x
self.y = y
print("__call with (%d, %d)" % (self.x, self.y))
    def __del__(self):
del self.x
del self.y
del self.Range
s = Stuff(1, 2, 3)
s.x
s(7,8)
s.x
```
# Example #4
<hr style="border:2px solid black"> </hr>
```
class Sum():
def __init__(self, x, y):
self.x = x
self.y = y
print("__init__ with (%d, %d)" % (self.x, self.y))
def __call__(self, x, y):
self.x = x
self.y = y
print("__call__ with (%d, %d)" % (self.x, self.y))
def sum(self):
return self.x + self.y
sum_1 = Sum(2,2)
sum_1.sum()
sum_1 = Sum(2,2)
sum_1(3,3)
sum_1 = Sum(2,2)
# This is equivalent to
sum_1.__call__(3,3)
# You can also chain construction and call; note that __call__ returns None,
# so sum_1 is set to None here, not to a Sum instance
sum_1 = Sum(2,2)(3,3)
```
# References
<hr style="border:2px solid black"> </hr>
- https://www.geeksforgeeks.org/__call__-in-python/
# TensorFlow
Installing TensorFlow: `conda install -c conda-forge tensorflow`
## 1. Hello Tensor World!
```
import tensorflow as tf
# Create TensorFlow object called tensor
hello_constant = tf.constant('Hello World!')
with tf.Session() as sess:
# Run the tf.constant operation in the session
output = sess.run(hello_constant)
print(output)
```
### a) Tensor
In TensorFlow, data isn’t stored as integers, floats, or strings. These values are encapsulated in an object called a tensor. In the case of `hello_constant = tf.constant('Hello World!')`, `hello_constant` is a 0-dimensional string tensor, but tensors come in a variety of sizes as shown below:
```python
# A is a 0-dimensional int32 tensor
A = tf.constant(1234)
# B is a 1-dimensional int32 tensor
B = tf.constant([123,456,789])
# C is a 2-dimensional int32 tensor
C = tf.constant([ [123,456,789], [222,333,444] ])
```
`tf.constant()` is one of many TensorFlow operations you will use in this lesson. The tensor returned by `tf.constant()` is called a constant tensor, because the value of the tensor never changes.
### b) Session
TensorFlow’s API is built around the idea of a computational graph, a way of visualizing a mathematical process which you learned about in the MiniFlow lesson. Let’s take the TensorFlow code you ran and turn that into a graph:

A "TensorFlow Session", as shown above, is an environment for running a graph. The session is in charge of allocating the operations to GPU(s) and/or CPU(s), including remote machines. Let’s see how you use it.
```
with tf.Session() as sess:
output = sess.run(hello_constant)
print(output)
```
The code has already created the tensor, `hello_constant`, from the previous lines. The next step is to evaluate the tensor in a session.
The code creates a session instance, `sess`, using `tf.Session`. The `sess.run()` function then evaluates the tensor and returns the results.
After you run the above, you will see the following printed out:
```
'Hello World!'
```
## 2. TensorFlow Input
In the last section, you passed a tensor into a session and it returned the result. What if you want to use a **non-constant**? This is where `tf.placeholder()` and `feed_dict` come into play. In this section, you'll go over the basics of feeding data into TensorFlow.
### a) [tf.placeholder() ](https://www.tensorflow.org/api_docs/python/tf/placeholder)
Sadly you can’t just set `x` to your dataset and put it in TensorFlow, because over time you'll want your TensorFlow model to take in different datasets with different parameters. You need `tf.placeholder()`!
`tf.placeholder()` returns a tensor that gets its value from data passed to the `tf.session.run()` function, allowing you to set the input right before the session runs.
### b) Session’s feed_dict
```python
x = tf.placeholder(tf.string)
with tf.Session() as sess:
output = sess.run(x, feed_dict={x: 'Hello World'})
```
Use the `feed_dict` parameter in `tf.session.run()` to set the placeholder tensor. The above example shows the tensor `x` being set to the string `'Hello World'`. It's also possible to set more than one tensor using `feed_dict` as shown below.
```python
x = tf.placeholder(tf.string)
y = tf.placeholder(tf.int32)
z = tf.placeholder(tf.float32)
with tf.Session() as sess:
output = sess.run(x, feed_dict={x: 'Test String', y: 123, z: 45.67})
```
**Note**: If the data passed to the `feed_dict` doesn’t match the tensor type and can’t be cast into the tensor type, you’ll get the error “`ValueError: invalid literal for`...”.
### Quiz
Let's see how well you understand `tf.placeholder()` and `feed_dict`. The code below throws an error, but I want you to make it return the number `123`. Change line 11, so that the code returns the number `123`.
```
import tensorflow as tf
def run():
output = None
x = tf.placeholder(tf.int32)
with tf.Session() as sess:
output = sess.run(x, feed_dict={x: 123})
return output
print(run())
```
## 3. TensorFlow Math
Getting the input is great, but now you need to use it. You're going to use basic math functions that everyone knows and loves - add, subtract, multiply, and divide - with tensors.
### a) Addition
```python
x = tf.add(5, 2) # 7
```
### b) Subtraction and Multiplication
```python
x = tf.subtract(10, 4) # 6
y = tf.multiply(2, 5) # 10
```
The `x` tensor will evaluate to `6`, because `10 - 4 = 6`. The `y` tensor will evaluate to `10`, because `2 * 5 = 10`. That was easy!
### c) Division
`tf.divide(x, y)` performs element-wise division and returns a floating-point result.
### d) Converting types
It may be necessary to convert between types to make certain operators work together. For example, if you tried the following, it would fail with an exception:
```python
tf.subtract(tf.constant(2.0),tf.constant(1)) # Fails with ValueError: Tensor conversion requested dtype float32 for Tensor with dtype int32:
```
That's because the constant `1` is an integer but the constant `2.0` is a floating point value and `subtract` expects them to match.
In cases like these, you can either make sure your data is all of the same type, or you can cast a value to another type. In this case, converting the `2.0` to an integer before subtracting, like so, will give the correct result:
```python
tf.subtract(tf.cast(tf.constant(2.0), tf.int32), tf.constant(1)) # 1
```
### e) Quiz
Let's apply what you learned to convert an algorithm to TensorFlow. The code below is a simple algorithm using division and subtraction. Convert the following algorithm in regular Python to TensorFlow and print the results of the session. You can use `tf.constant()` for the values `10`, `2`, and `1`.
```
import tensorflow as tf
# TODO: Convert the following to TensorFlow:
x = 10
y = 2
z = x/y - 1
x = tf.constant(10)
y = tf.constant(2)
z = tf.subtract(tf.divide(x, y), tf.cast(tf.constant(1), tf.float64))
# TODO: Print z from a session as the variable output
with tf.Session() as sess:
output = sess.run(z)
print(output)
```
## 4. TensorFlow Linear Function
Let’s derive the function `y = Wx + b`. We want to translate our input, `x`, to labels, `y`.
For example, imagine we want to classify images as digits.
x would be our list of pixel values, and `y` would be the logits, one for each digit. Let's take a look at `y = Wx`, where the weights, `W`, determine the influence of `x` at predicting each `y`.

`y = Wx` allows us to segment the data into their respective labels using a line.
However, this line has to pass through the origin, because whenever `x` equals 0, then `y` is also going to equal 0.
We want the ability to shift the line away from the origin to fit more complex data. The simplest solution is to add a number to the function, which we call “bias”.

Our new function becomes `Wx + b`, allowing us to create predictions on linearly separable data. Let’s use a concrete example and calculate the logits.
### a) Matrix Multiplication Quiz
Calculate the logits a and b for the following formula.

answers: a = 0.16, b = 0.06
### b) Transposition
We've been using the `y = Wx + b` function for our linear function.
But there's another function that does the same thing, `y = xW + b`. These functions do the same thing and are interchangeable, except for the dimensions of the matrices involved.
To shift from one function to the other, you simply have to swap the row and column dimensions of each matrix. This is called transposition.
For the rest of this lesson, we actually use `xW + b`, because this is what TensorFlow uses.

The above example is identical to the quiz you just completed, except that the matrices are transposed.
`x` now has the dimensions 1x3, `W` now has the dimensions 3x2, and `b` now has the dimensions 1x2. Calculating this will produce a matrix with the dimension of 1x2.
You'll notice that the elements in this 1x2 matrix are the same as the elements in the 2x1 matrix from the quiz. Again, these matrices are simply transposed.
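If you want to check the equivalence numerically, here is a small NumPy sketch (the values below are made up for illustration, not taken from the quiz):

```python
import numpy as np

# y = Wx + b: W is 2x3, x is 3x1, b is 2x1
W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])
x = np.array([[1.0], [2.0], [3.0]])
b = np.array([[0.01], [0.02]])
y1 = W @ x + b        # shape (2, 1)

# y = xW + b with every matrix transposed: x is 1x3, W is 3x2, b is 1x2
y2 = x.T @ W.T + b.T  # shape (1, 2)

# same logits, just transposed
assert np.allclose(y1, y2.T)
```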

We now have our logits! The columns represent the logits for our two labels.
Now you can learn how to train this function in TensorFlow.
## 5. Weights and Bias in TensorFlow
The goal of training a neural network is to modify weights and biases to best predict the labels. In order to use weights and biases, you'll need a Tensor that can be modified. This leaves out `tf.placeholder()` and `tf.constant()`, since those Tensors can't be modified. This is where the `tf.Variable` class comes in.
### a) tf.Variable()
```python
x = tf.Variable(5)
```
The `tf.Variable` class creates a tensor with an initial value that can be modified, much like a normal Python variable. This tensor stores its state in the session, so you must initialize the state of the tensor manually. You'll use the `tf.global_variables_initializer()` function to initialize the state of all the Variable tensors.
#### Initialization
```python
init = tf.global_variables_initializer()
with tf.Session() as sess:
sess.run(init)
```
The `tf.global_variables_initializer()` call returns an operation that will initialize all TensorFlow variables from the graph. You call the operation using a session to initialize all the variables as shown above. Using the `tf.Variable` class allows us to change the weights and bias, but **an initial value needs to be chosen**.
Initializing the weights with random numbers from a normal distribution is good practice. Randomizing the weights helps keep the model from becoming stuck in the same place every time you train it. You'll learn more about this in the next lesson, when you study gradient descent.
Similarly, choosing weights from a normal distribution prevents any one weight from overwhelming other weights. You'll use the `tf.truncated_normal()` function to generate random numbers from a normal distribution.
### b) tf.truncated_normal()
```python
n_features = 120
n_labels = 5
weights = tf.Variable(tf.truncated_normal((n_features, n_labels)))
```
The `tf.truncated_normal()` function returns a tensor with random values from a normal distribution whose magnitude is no more than 2 standard deviations from the mean.
Since the weights are already helping prevent the model from getting stuck, you don't need to randomize the bias. Let's use the simplest solution, setting the bias to 0.
### c) tf.zeros()
```python
n_labels = 5
bias = tf.Variable(tf.zeros(n_labels))
```
The `tf.zeros()` function returns a tensor with all zeros.
## 6. Linear Classifier Quiz

You'll be classifying the handwritten numbers `0`, `1`, and `2` from the MNIST dataset using TensorFlow. The above is a small sample of the data you'll be training on. Notice how some of the `1`s are written with a serif at the top and at different angles. The similarities and differences will play a part in shaping the weights of the model.

The images above are trained weights for each label (0, 1, and 2). The weights display the unique properties of each digit they have found. Complete this quiz to train your own weights using the MNIST dataset.
### Instructions
1. Open quiz.py.
   1. Implement `get_weights` to return a `tf.Variable` of weights
   2. Implement `get_biases` to return a `tf.Variable` of biases
   3. Implement `xW + b` in the `linear` function
2. Open sandbox.py.
   1. Initialize all weights
Since `xW` in `xW + b` is matrix multiplication, you have to use the `tf.matmul()` function instead of `tf.multiply()`. Don't forget that order matters in matrix multiplication, so `tf.matmul(a,b)` is not the same as `tf.matmul(b,a)`.
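Order matters because matrix multiplication is not commutative; here is a quick check in plain NumPy (illustrative only — the quiz itself uses `tf.matmul`):

```python
import numpy as np

a = np.array([[1, 2],
              [3, 4]])
b = np.array([[0, 1],
              [1, 0]])

# multiplying by b on the right swaps the columns of a;
# multiplying by b on the left swaps the rows of a
assert not np.array_equal(a @ b, b @ a)
```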
```python
import tensorflow as tf

def get_weights(n_features, n_labels):
    """
    Return TensorFlow weights
    :param n_features: Number of features
    :param n_labels: Number of labels
    :return: TensorFlow weights
    """
    # TODO: Return weights
    return tf.Variable(tf.truncated_normal((n_features, n_labels)))

def get_biases(n_labels):
    """
    Return TensorFlow bias
    :param n_labels: Number of labels
    :return: TensorFlow bias
    """
    # TODO: Return biases
    return tf.Variable(tf.zeros(n_labels))

def linear(input, w, b):
    """
    Return linear function in TensorFlow
    :param input: TensorFlow input
    :param w: TensorFlow weights
    :param b: TensorFlow biases
    :return: TensorFlow linear function
    """
    # TODO: Linear Function (xW + b)
    return tf.add(tf.matmul(input, w), b)

from tensorflow.examples.tutorials.mnist import input_data

def mnist_features_labels(n_labels):
    """
    Gets the first <n> labels from the MNIST dataset
    :param n_labels: Number of labels to use
    :return: Tuple of feature list and label list
    """
    mnist_features = []
    mnist_labels = []
    mnist = input_data.read_data_sets('/datasets/mnist', one_hot=True)
    # In order to make quizzes run faster, we're only looking at 10000 images
    for mnist_feature, mnist_label in zip(*mnist.train.next_batch(10000)):
        # Add features and labels if it's for the first <n> labels
        if mnist_label[:n_labels].any():
            mnist_features.append(mnist_feature)
            mnist_labels.append(mnist_label[:n_labels])
    return mnist_features, mnist_labels

# Number of features (28*28 image is 784 features)
n_features = 784
# Number of labels
n_labels = 3

# Features and Labels
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)

# Weights and Biases
w = get_weights(n_features, n_labels)
b = get_biases(n_labels)

# Linear Function xW + b
logits = linear(features, w, b)

# Training data
train_features, train_labels = mnist_features_labels(n_labels)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())

    # Softmax
    prediction = tf.nn.softmax(logits)

    # Cross entropy
    # This quantifies how far off the predictions were.
    # You'll learn more about this in future lessons.
    cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)

    # Training loss
    # You'll learn more about this in future lessons.
    loss = tf.reduce_mean(cross_entropy)

    # Rate at which the weights are changed
    # You'll learn more about this in future lessons.
    learning_rate = 0.08

    # Gradient Descent
    # This is the method used to train the model
    # You'll learn more about this in future lessons.
    optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)

    # Run optimizer and get loss
    _, l = session.run(
        [optimizer, loss],
        feed_dict={features: train_features, labels: train_labels})

# Print loss
print('Loss: {}'.format(l))
```
## 6. Linear Update
You can’t train a neural network on a single sample. Let’s apply n samples of `x` to the function `y = Wx + b`, which becomes `Y = WX + B`.

For every sample of `X` (`X1`, `X2`, `X3`), we get logits for label 1 (`Y1`) and label 2 (`Y2`).
In order to add the bias to the product of `WX`, we had to turn `b` into a matrix of the same shape. This is a bit unnecessary, since the bias is only two numbers. It should really be a vector.
We can take advantage of an operation called broadcasting used in TensorFlow and Numpy. This operation allows arrays of different dimension to be added or multiplied with each other. For example:
```python
import numpy as np
t = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
u = np.array([1, 2, 3])
print(t + u)
```
The code above will print...
```python
[[ 2 4 6]
[ 5 7 9]
[ 8 10 12]
[11 13 15]]
```
This works because the shape of `u`, `(3,)`, matches the last dimension of `t`'s shape, `(4, 3)`, so NumPy broadcasts `u` across each row of `t`.
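Broadcasting works along other axes too, as long as you make the shapes compatible. For example, a column vector of shape `(4, 1)` broadcasts across the columns instead:

```python
import numpy as np

t = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])

# A (4, 1) column vector is broadcast across each column of t
v = np.array([[10], [20], [30], [40]])
print(t + v)
```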
## 7. Softmax

Congratulations on successfully implementing a linear function that outputs logits. You're one step closer to a working classifier.
The next step is to assign a probability to each label, which you can then use to classify the data. Use the softmax function to turn your logits into probabilities.
We can do this using the formula above, which takes the input y values and the mathematical constant e, approximately equal to 2.718. Raising e to the power of any real value always yields a positive number, which handles negative y values. The summation in the denominator adds up all of the e^(input y value) terms, scaling the outputs into probabilities that sum to 1.
### Quiz
For the next quiz, you'll implement a `softmax(x)` function that takes in `x`, a one or two dimensional array of logits.
In the one dimensional case, the array is just a single set of logits. In the two dimensional case, each column in the array is a set of logits. The `softmax(x)` function should return a NumPy array of the same shape as `x`.
For example, given a one-dimensional array:
```python
# logits is a one-dimensional array with 3 elements
logits = [1.0, 2.0, 3.0]
# softmax will return a one-dimensional array with 3 elements
print(softmax(logits))
```
```python
[ 0.09003057 0.24472847 0.66524096]
```
Given a two-dimensional array where each column represents a set of logits:
```python
# logits is a two-dimensional array
logits = np.array([
[1, 2, 3, 6],
[2, 4, 5, 6],
[3, 8, 7, 6]])
# softmax will return a two-dimensional array with the same shape
print(softmax(logits))
```
```python
[
[ 0.09003057 0.00242826 0.01587624 0.33333333]
[ 0.24472847 0.01794253 0.11731043 0.33333333]
[ 0.66524096 0.97962921 0.86681333 0.33333333]
]
```
Implement the softmax function, which is specified by the formula at the top of the page.
The probabilities for each column must sum to 1. Feel free to test your function with the inputs above.
```python
import numpy as np

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    return np.exp(x) / np.sum(np.exp(x), axis=0)

logits = [3.0, 1.0, 0.2]
print(softmax(logits))
```
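One caveat that goes beyond the quiz: `np.exp` overflows for large logits. A common trick is to subtract the per-column maximum first, which leaves the result mathematically unchanged:

```python
import numpy as np

def stable_softmax(x):
    # Subtracting the max logit leaves the result unchanged, because
    # e^(x - m) / sum(e^(x - m)) == e^x / sum(e^x), but it avoids overflow.
    x = np.asarray(x, dtype=float)
    shifted = x - np.max(x, axis=0, keepdims=True)
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=0, keepdims=True)

# The naive version would overflow on these logits; this one doesn't.
print(stable_softmax([1000.0, 1001.0, 1002.0]))
```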
## 8. TensorFlow Softmax
Now that you've built a softmax function from scratch, let's see how softmax is done in TensorFlow.
```python
x = tf.nn.softmax([2.0, 1.0, 0.2])
```
Easy as that! `tf.nn.softmax()` implements the softmax function for you. It takes in logits and returns softmax activations.
### Quiz
Use the softmax function in the quiz below to return the softmax of the logits.

```python
import tensorflow as tf

def run():
    output = None
    logit_data = [2.0, 1.0, 0.1]
    logits = tf.placeholder(tf.float32)
    softmax = tf.nn.softmax(logits)

    with tf.Session() as sess:
        output = sess.run(softmax, feed_dict={logits: logit_data})

    return output
```
## 9. Cross Entropy
### Minimizing Cross Entropy
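The notes here are sparse, so here is a minimal NumPy sketch of the idea: cross entropy D(S, L) = -Σᵢ Lᵢ log(Sᵢ) measures how far the softmax probabilities S are from the one-hot label L, and training minimizes its mean over the data.

```python
import numpy as np

def cross_entropy(probs, one_hot_label):
    # D(S, L) = -sum_i L_i * log(S_i); only the true-class term survives.
    return -np.sum(one_hot_label * np.log(probs))

probs = np.array([0.7, 0.2, 0.1])   # softmax output S
label = np.array([1.0, 0.0, 0.0])   # one-hot label L: the true class is the first
print(cross_entropy(probs, label))  # -log(0.7), about 0.357
```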
## 10. Practical Aspect of Learning
### a) How do you fill image pixels to this classifier?
#### i) Numerical Stability
Adding very small numbers to very big numbers causes floating-point precision issues.
#### ii) Normalized Inputs and Initial Weights
1. Inputs:
    1. Zero Mean
    2. Equal Variance (small)
2. Initial Weights:
    1. Random
    2. Mean = 0
    3. Equal Variance (small)
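For image pixels in the range [0, 255], one common way to get roughly zero mean and small, equal variance is the following sketch (an illustration of the recipe above, not the course's exact code):

```python
import numpy as np

pixels = np.array([0.0, 64.0, 128.0, 192.0, 255.0])

# Shift and scale so the inputs have roughly zero mean and small, equal variance
normalized = (pixels - 128.0) / 128.0
print(normalized)  # values now lie in roughly [-1, 1]
```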
#### iii) Measuring Performance
- Training Set
- Validation Set
- Testing Set
#### iv) Validation and Test Set Size
- Cross-Validation
- Rule of '30'
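A quick arithmetic sketch of the "Rule of 30" heuristic above: if a change must affect at least 30 validation examples before you trust it, then resolving small accuracy changes requires a large validation set.

```python
# If a change must affect at least 30 validation examples before you trust it,
# then resolving accuracy changes of 0.1% (0.001) needs 30 / 0.001 examples.
min_affected = 30
smallest_change = 0.001
needed = min_affected / smallest_change
print(int(needed))  # 30000
```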
### b) Where do you initialize the optimization?
Training Logistic Regression:
- Optimizes an error measure (the loss function)
- Scaling issues
#### i) Stochastic Gradient Descent (S.G.D.)
Computing the gradient takes roughly three times as long as computing the loss function.
Take a very small sliver of the training data, compute the loss and its derivative on it, and use that as the descent direction.
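A toy sketch of the idea (estimating a mean by SGD on mini-batches; illustrative values, not the course's code):

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.normal(3.0, 1.0, size=1000)  # toy data; the loss is mean (w - x)^2

w = 0.0
lr = 0.1
for step in range(200):
    batch = rng.choice(data, size=16)   # a very small sliver of the data
    grad = 2 * (w - batch.mean())       # derivative of the batch loss
    w -= lr * grad                      # step in the (noisy) descent direction

print(abs(w - 3.0) < 0.5)  # True: w lands near the true mean
```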
#### ii) Momentum and Learning Rate Decay
Keep a running average of the gradients, and use that running average instead of the direction of the current batch of data.
Make the learning rate smaller and smaller as you train.
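A minimal sketch of both tricks on a toy quadratic loss (illustrative constants, not the course's code):

```python
w = 0.0
velocity = 0.0
momentum = 0.9
lr0 = 0.1

for step in range(300):
    grad = 2 * (w - 3.0)                   # gradient of the loss (w - 3)^2
    lr = lr0 / (1 + 0.01 * step)           # learning rate shrinks as you train
    velocity = momentum * velocity + grad  # running average of the gradients
    w -= lr * velocity                     # step along the averaged direction

print(abs(w - 3.0) < 0.1)  # True: converged near the minimum at w = 3
```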
#### iii) Hyperparameters
- Initial Learning Rate
- Learning Rate Decay
- Momentum
- Batch Size
- Weight Initialization
**ADAGRAD** Approach
### Quiz 2: Mini-batch
Let's use mini-batching to feed batches of MNIST features and labels into a linear model.
Set the batch size and run the optimizer over all the batches with the `batches` function. The recommended batch size is 128. If you have memory restrictions, feel free to make it smaller.
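The `batches` helper comes from a course-provided `helper` module that isn't shown here; a minimal sketch of what it plausibly does (chunk the features and labels, with a possibly smaller final batch) is:

```python
def batches(batch_size, features, labels):
    """Split features and labels into chunks of at most batch_size."""
    assert len(features) == len(labels)
    out = []
    for start in range(0, len(features), batch_size):
        out.append([features[start:start + batch_size],
                    labels[start:start + batch_size]])
    return out

# 7 samples with batch_size 3 -> batches of sizes 3, 3, 1
example = batches(3, list(range(7)), list('abcdefg'))
print([len(b[0]) for b in example])  # [3, 3, 1]
```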
```python
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
from helper import batches
learning_rate = 0.001
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# Import MNIST data
mnist = input_data.read_data_sets('/datasets/ud730/mnist', one_hot=True)
# The features are already scaled and the data is shuffled
train_features = mnist.train.images
test_features = mnist.test.images
train_labels = mnist.train.labels.astype(np.float32)
test_labels = mnist.test.labels.astype(np.float32)
# Features and Labels
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
# Weights & bias
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))
# Logits - xW + b
logits = tf.add(tf.matmul(features, weights), bias)
# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Calculate accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# TODO: Set batch size
batch_size = 128
assert batch_size is not None, 'You must set the batch size'
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)

    # TODO: Train optimizer on all batches
    for batch_features, batch_labels in batches(batch_size, train_features, train_labels):
        sess.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})

    # Calculate accuracy for test dataset
    test_accuracy = sess.run(
        accuracy,
        feed_dict={features: test_features, labels: test_labels})

print('Test Accuracy: {}'.format(test_accuracy))
```
Outputs:
```python
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting /datasets/ud730/mnist/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting /datasets/ud730/mnist/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting /datasets/ud730/mnist/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting /datasets/ud730/mnist/t10k-labels-idx1-ubyte.gz
Test Accuracy: 0.1454000025987625
```
The accuracy is low, but you can improve it by training on the dataset more than once. You'll go over this in the next section, where we talk about "epochs".
## 11. Epochs
An epoch is a single forward and backward pass of the whole dataset. This is used to increase the accuracy of the model without requiring more data. This section will cover epochs in TensorFlow and how to choose the right number of epochs.
The following TensorFlow code trains a model using 10 epochs.
```python
from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf
import numpy as np
from helper import batches # Helper function created in Mini-batching section
def print_epoch_stats(epoch_i, sess, last_features, last_labels):
    """
    Print cost and validation accuracy of an epoch
    """
    current_cost = sess.run(
        cost,
        feed_dict={features: last_features, labels: last_labels})
    valid_accuracy = sess.run(
        accuracy,
        feed_dict={features: valid_features, labels: valid_labels})
    print('Epoch: {:<4} - Cost: {:<8.3} Valid Accuracy: {:<5.3}'.format(
        epoch_i,
        current_cost,
        valid_accuracy))
n_input = 784 # MNIST data input (img shape: 28*28)
n_classes = 10 # MNIST total classes (0-9 digits)
# Import MNIST data
mnist = input_data.read_data_sets('/datasets/ud730/mnist', one_hot=True)
# The features are already scaled and the data is shuffled
train_features = mnist.train.images
valid_features = mnist.validation.images
test_features = mnist.test.images
train_labels = mnist.train.labels.astype(np.float32)
valid_labels = mnist.validation.labels.astype(np.float32)
test_labels = mnist.test.labels.astype(np.float32)
# Features and Labels
features = tf.placeholder(tf.float32, [None, n_input])
labels = tf.placeholder(tf.float32, [None, n_classes])
# Weights & bias
weights = tf.Variable(tf.random_normal([n_input, n_classes]))
bias = tf.Variable(tf.random_normal([n_classes]))
# Logits - xW + b
logits = tf.add(tf.matmul(features, weights), bias)
# Define loss and optimizer
learning_rate = tf.placeholder(tf.float32)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Calculate accuracy
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(labels, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
init = tf.global_variables_initializer()
batch_size = 128
epochs = 10
learn_rate = 0.001
train_batches = batches(batch_size, train_features, train_labels)
with tf.Session() as sess:
    sess.run(init)

    # Training cycle
    for epoch_i in range(epochs):

        # Loop over all batches
        for batch_features, batch_labels in train_batches:
            train_feed_dict = {
                features: batch_features,
                labels: batch_labels,
                learning_rate: learn_rate}
            sess.run(optimizer, feed_dict=train_feed_dict)

        # Print cost and validation accuracy of an epoch
        print_epoch_stats(epoch_i, sess, batch_features, batch_labels)

    # Calculate accuracy for test dataset
    test_accuracy = sess.run(
        accuracy,
        feed_dict={features: test_features, labels: test_labels})

    print('Test Accuracy: {}'.format(test_accuracy))
```
Running the code will output the following:
```python
Epoch: 0 - Cost: 11.0 Valid Accuracy: 0.204
Epoch: 1 - Cost: 9.95 Valid Accuracy: 0.229
Epoch: 2 - Cost: 9.18 Valid Accuracy: 0.246
Epoch: 3 - Cost: 8.59 Valid Accuracy: 0.264
Epoch: 4 - Cost: 8.13 Valid Accuracy: 0.283
Epoch: 5 - Cost: 7.77 Valid Accuracy: 0.301
Epoch: 6 - Cost: 7.47 Valid Accuracy: 0.316
Epoch: 7 - Cost: 7.2 Valid Accuracy: 0.328
Epoch: 8 - Cost: 6.96 Valid Accuracy: 0.342
Epoch: 9 - Cost: 6.73 Valid Accuracy: 0.36
Test Accuracy: 0.3801000118255615
```
Each epoch attempts to move to a lower cost, leading to better accuracy.
This model continues to improve accuracy up to Epoch 9. Let's increase the number of epochs to 100.
```python
...
Epoch: 79 - Cost: 0.111 Valid Accuracy: 0.86
Epoch: 80 - Cost: 0.11 Valid Accuracy: 0.869
Epoch: 81 - Cost: 0.109 Valid Accuracy: 0.869
....
Epoch: 85 - Cost: 0.107 Valid Accuracy: 0.869
Epoch: 86 - Cost: 0.107 Valid Accuracy: 0.869
Epoch: 87 - Cost: 0.106 Valid Accuracy: 0.869
Epoch: 88 - Cost: 0.106 Valid Accuracy: 0.869
Epoch: 89 - Cost: 0.105 Valid Accuracy: 0.869
Epoch: 90 - Cost: 0.105 Valid Accuracy: 0.869
Epoch: 91 - Cost: 0.104 Valid Accuracy: 0.869
Epoch: 92 - Cost: 0.103 Valid Accuracy: 0.869
Epoch: 93 - Cost: 0.103 Valid Accuracy: 0.869
Epoch: 94 - Cost: 0.102 Valid Accuracy: 0.869
Epoch: 95 - Cost: 0.102 Valid Accuracy: 0.869
Epoch: 96 - Cost: 0.101 Valid Accuracy: 0.869
Epoch: 97 - Cost: 0.101 Valid Accuracy: 0.869
Epoch: 98 - Cost: 0.1 Valid Accuracy: 0.869
Epoch: 99 - Cost: 0.1 Valid Accuracy: 0.869
Test Accuracy: 0.8696000006198883
```
From looking at the output above, you can see the model doesn't increase the validation accuracy after epoch 80. Let's see what happens when we increase the learning rate to `learn_rate = 0.1`.
```python
Epoch: 76 - Cost: 0.214 Valid Accuracy: 0.752
Epoch: 77 - Cost: 0.21 Valid Accuracy: 0.756
Epoch: 78 - Cost: 0.21 Valid Accuracy: 0.756
...
Epoch: 85 - Cost: 0.207 Valid Accuracy: 0.756
Epoch: 86 - Cost: 0.209 Valid Accuracy: 0.756
Epoch: 87 - Cost: 0.205 Valid Accuracy: 0.756
Epoch: 88 - Cost: 0.208 Valid Accuracy: 0.756
Epoch: 89 - Cost: 0.205 Valid Accuracy: 0.756
Epoch: 90 - Cost: 0.202 Valid Accuracy: 0.756
Epoch: 91 - Cost: 0.207 Valid Accuracy: 0.756
Epoch: 92 - Cost: 0.204 Valid Accuracy: 0.756
Epoch: 93 - Cost: 0.206 Valid Accuracy: 0.756
Epoch: 94 - Cost: 0.202 Valid Accuracy: 0.756
Epoch: 95 - Cost: 0.2974 Valid Accuracy: 0.756
Epoch: 96 - Cost: 0.202 Valid Accuracy: 0.756
Epoch: 97 - Cost: 0.2996 Valid Accuracy: 0.756
Epoch: 98 - Cost: 0.203 Valid Accuracy: 0.756
Epoch: 99 - Cost: 0.2987 Valid Accuracy: 0.756
Test Accuracy: 0.7556000053882599
```
Looks like the learning rate was increased too much. The final accuracy was lower, and it stopped improving earlier. Let's stick with the previous learning rate, but change the number of epochs to 80.
```python
Epoch: 65 - Cost: 0.122 Valid Accuracy: 0.868
Epoch: 66 - Cost: 0.121 Valid Accuracy: 0.868
Epoch: 67 - Cost: 0.12 Valid Accuracy: 0.868
Epoch: 68 - Cost: 0.119 Valid Accuracy: 0.868
Epoch: 69 - Cost: 0.118 Valid Accuracy: 0.868
Epoch: 70 - Cost: 0.118 Valid Accuracy: 0.868
Epoch: 71 - Cost: 0.117 Valid Accuracy: 0.868
Epoch: 72 - Cost: 0.116 Valid Accuracy: 0.868
Epoch: 73 - Cost: 0.115 Valid Accuracy: 0.868
Epoch: 74 - Cost: 0.115 Valid Accuracy: 0.868
Epoch: 75 - Cost: 0.114 Valid Accuracy: 0.868
Epoch: 76 - Cost: 0.113 Valid Accuracy: 0.868
Epoch: 77 - Cost: 0.113 Valid Accuracy: 0.868
Epoch: 78 - Cost: 0.112 Valid Accuracy: 0.868
Epoch: 79 - Cost: 0.111 Valid Accuracy: 0.868
Epoch: 80 - Cost: 0.111 Valid Accuracy: 0.869
Test Accuracy: 0.86909999418258667
```
The accuracy only reached 0.86, but that could be because the learning rate was too high. Lowering the learning rate would require more epochs, but could ultimately achieve better accuracy.
In the upcoming TensorFlow Lab, you'll get the opportunity to choose your own learning rate, epoch count, and batch size to improve the model's accuracy.
## 12. TensorFlow Neural Network Lab
[Link](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html)
[THE MNIST DATABASE of handwritten digits](http://yann.lecun.com/exdb/mnist/)

We've prepared a Jupyter notebook that will guide you through the process of creating a single layer neural network in TensorFlow.
### The Notebook
The notebook has 3 problems for you to solve:
- Problem 1: Normalize the features
- Problem 2: Use TensorFlow operations to create features, labels, weight, and biases tensors
- Problem 3: Tune the learning rate, number of steps, and batch size for the best accuracy
This is a self-assessed lab. Compare your answers to the solutions in **solutions.ipynb**. If you have any difficulty completing the lab, Udacity provides a few services to answer any questions you might have.
# Supercritical Steam Cycle Example
This example uses Jupyter Lab or Jupyter Notebook, and demonstrates a supercritical pulverized coal (SCPC) steam cycle model. See ```supercritical_steam_cycle.py``` for more information on how the power plant model flowsheet is assembled; code comments in that file will guide you through the process.
## Model Description
The example model doesn't represent any particular power plant, but should be a reasonable approximation of a typical plant. The gross power output is about 620 MW. The process flow diagram (PFD) can be shown using the code below. The initial PFD contains spaces for model results, to be filled in later.
To get a more detailed look at the model structure, you may find it useful to review ```supercritical_steam_cycle.py``` first. Although there is no detailed boiler model, there are constraints in the model to complete the steam loop through the boiler and calculate boiler heat input to the steam cycle. The efficiency calculation for the steam cycle doesn't account for heat loss in the boiler, which would be a result of a more detailed boiler model.
```
# pkg_resources is used here to get the svg information from the
# installed IDAES package
import pkg_resources
from IPython.display import SVG, display
# Get the contents of the PFD (which is an svg file)
init_pfd = pkg_resources.resource_string(
    "idaes.examples.power_generation.supercritical_steam_cycle",
    "supercritical_steam_cycle.svg"
)
# Make the svg contents into an SVG object and display it.
display(SVG(init_pfd))
```
## Initialize the steam cycle flowsheet
This example is part of the ```idaes``` package, which you should have installed. To run the example, the example flowsheet is imported from the ```idaes``` package. When you write your own model, you can import and run it in whatever way is appropriate for you. The Pyomo environment is also imported as ```pyo```, providing easy access to Pyomo functions and classes.
The supercritical flowsheet example's ```main()``` function returns a Pyomo concrete model (```m```) and a solver object (```solver```). The model is also initialized by ```main()```.
```
import pyomo.environ as pyo
from idaes.examples.power_generation.supercritical_steam_cycle import (
    main,
    create_stream_table_dataframe,
    pfd_result,
)
m, solver = main()
```
Inside the model, there is a subblock ```fs```. This is an IDAES flowsheet model, which contains the supercritical steam cycle model. In the flowsheet, the model called ```turb``` is a multistage turbine model. The turbine model contains an expression for total power, ```power```. In this case the model is steady-state, but all IDAES models allow for dynamic simulation, and contain time indexes. Power is indexed by time, and only the "0" time point exists. By convention, in the IDAES framework, power going into a model is positive, so power produced by the turbine is negative.
The property package used for this model uses SI (mks) units of measure, so the power is in Watts. Here a function is defined which can be used to report power output in MW.
```
# Define a function to report gross power output in MW
def gross_power_mw(model):
    # pyo.value(model.fs.turb.power[0]) is the power consumed in Watts
    return -pyo.value(model.fs.turb.power[0]) / 1e6
# Show the gross power
gross_power_mw(m)
```
## Change the model inputs
The turbine in this example simulates partial arc admission with four arcs, so there are four throttle valves. For this example, we will close one of the valves to 25% open, and observe the result.
```
m.fs.turb.throttle_valve[1].valve_opening[:].value = 0.25
```
Next, we re-solve the model using the solver created by the ```supercritical_steam_cycle.py``` script.
```
solver.solve(m, tee=True)
```
Now we can check the gross power output again.
```
gross_power_mw(m)
```
## Creating a PFD with results and a stream table
A more detailed look at the model results can be obtained by creating a stream table and putting key results on the PFD. Of course, any unit model or stream result can be obtained from the model.
```
# Create a Pandas dataframe with stream results
df = create_stream_table_dataframe(streams=m._streams, orient="index")
# Create a new PFD with simulation results
res_pfd = pfd_result(m, df, svg=init_pfd)
# Display PFD with results.
display(SVG(res_pfd))
# Display the stream table.
df
```
## Observations and Insights
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import numpy as np
from scipy.stats import linregress
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
merged_df = pd.merge(mouse_metadata, study_results, on = "Mouse ID")
# Display the data table for preview
merged_df.head(30)
# Checking the number of mice.
total_mice = merged_df["Mouse ID"].nunique()
total_mice
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint
duplicate_id = merged_df.loc[merged_df.duplicated(subset = ["Mouse ID", "Timepoint"]), "Mouse ID"].unique()
duplicate_id
# Optional: Get all the data for the duplicate mouse ID.
optional_df = merged_df.loc[merged_df["Mouse ID"]=="g989"]
optional_df
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = merged_df.loc[merged_df["Mouse ID"]!="g989"]
clean_df
# Checking the number of mice in the clean DataFrame.
total_mice = clean_df["Mouse ID"].nunique()
total_mice
```
## Summary Statistics
```
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
mean_data = clean_df.groupby("Drug Regimen").mean()["Tumor Volume (mm3)"]
median_data = clean_df.groupby("Drug Regimen").median()["Tumor Volume (mm3)"]
variance_data = clean_df.groupby("Drug Regimen").var()["Tumor Volume (mm3)"]
std_data = clean_df.groupby("Drug Regimen").std()["Tumor Volume (mm3)"]
sem_data = clean_df.groupby("Drug Regimen").sem()["Tumor Volume (mm3)"]
stats_df = pd.DataFrame({"Mean": mean_data,
                         "Median": median_data,
                         "Variance": variance_data,
                         "STD": std_data,
                         "SEM": sem_data})
stats_df
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
summary_df2 = clean_df.groupby("Drug Regimen").agg({"Tumor Volume (mm3)":["mean","median","var","std","sem"]})
# Using the aggregation method, produce the same summary statistics in a single line
summary_df2
```
## Bar and Pie Charts
```
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pandas.
bar_plot = clean_df.groupby(["Drug Regimen"]).count()["Mouse ID"]
bar_plot.plot(kind="bar", figsize=(10,5))
plt.title("Drug Distribution")
plt.xlabel("Drug Regimen")
plt.ylabel("Number of Mice")
plt.tight_layout()
plt.show()
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pyplot.
bar_plot
x_axis= np.arange(0, len(bar_plot))
tick_locations = []
for x in x_axis:
    tick_locations.append(x)
plt.title("Drug Distribution")
plt.xlabel("Drug Regimen")
plt.ylabel("# of Mice")
plt.xlim(0, len(bar_plot)-0.25)
plt.ylim(0, max(bar_plot)+20)
plt.bar(x_axis, bar_plot, facecolor="g", alpha=0.5, align="center")
plt.xticks(tick_locations, ["Capomulin", "Ceftamin", "Infubinol", "Ketapril", "Naftisol", "Placebo", "Propriva", "Ramicane", "Stelasyn", "Zoniferol"], rotation = "vertical")
plt.show()
# Generate a pie plot showing the distribution of female versus male mice using pandas
males = clean_df[clean_df["Sex"]== "Male"]["Mouse ID"].nunique()
females = clean_df[clean_df["Sex"]== "Female"]["Mouse ID"].nunique()
gender_df = pd.DataFrame({"Sex": ["Male", "Female"], "Count": [males, females]})
gender_df_index = gender_df.set_index("Sex")
plot = gender_df_index.plot(kind="pie", y="Count", autopct="%1.1f%%", startangle=120)
plot
# Generate a pie plot showing the distribution of female versus male mice using pyplot
labels = ["Male", "Female"]
sizes = [males, females]  # use the counts computed above instead of hard-coded strings
colors = ["Green", "Yellow"]
plt.pie(sizes, labels=labels, colors=colors,
        autopct="%1.1f%%", shadow=True, startangle=140)
plt.axis("equal")
```
## Quartiles, Outliers and Boxplots
```
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
filt_cap = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin"]
filt_ram = clean_df.loc[clean_df["Drug Regimen"] == "Ramicane"]
filt_infu = clean_df.loc[clean_df["Drug Regimen"] == "Infubinol"]
filt_ceft = clean_df.loc[clean_df["Drug Regimen"] == "Ceftamin"]
# Start by getting the last (greatest) timepoint for each mouse
last_timepoint_cap = filt_cap.groupby("Mouse ID")["Timepoint"].max()
last_timepoint_ram = filt_ram.groupby("Mouse ID")["Timepoint"].max()
last_timepoint_infu = filt_infu.groupby("Mouse ID")["Timepoint"].max()
last_timepoint_ceft = filt_ceft.groupby("Mouse ID")["Timepoint"].max()
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
fin_vol_cap = pd.DataFrame(last_timepoint_cap)
cap_merge = pd.merge(fin_vol_cap, clean_df, on = ("Mouse ID", "Timepoint"), how = "left")
fin_vol_ram = pd.DataFrame(last_timepoint_ram)
ram_merge = pd.merge(fin_vol_ram, clean_df, on = ("Mouse ID", "Timepoint"), how = "left")
fin_vol_infu = pd.DataFrame(last_timepoint_infu)
infu_merge = pd.merge(fin_vol_infu, clean_df, on = ("Mouse ID", "Timepoint"), how = "left")
fin_vol_ceft = pd.DataFrame(last_timepoint_ceft)
ceft_merge = pd.merge(fin_vol_ceft, clean_df, on = ("Mouse ID", "Timepoint"), how = "left")
# Put treatments into a list for for loop (and later for plot labels)
treatments = [cap_merge, ram_merge, infu_merge, ceft_merge]
# Create empty list to fill with tumor vol data (for plotting)
tumor_volume_data_plot = []
for treatment in treatments:
    print(treatment)
# Calculate the IQR and quantitatively determine if there are any potential outliers.
# Determine outliers using upper and lower bounds
#Capomulin
cap_list = cap_merge["Tumor Volume (mm3)"]
quartiles = cap_list.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
tumor_volume_data_plot.append(cap_list)
print(f"Capomulin: values below {lower_bound} or above {upper_bound} could be outliers.")
print (f"Capomulin IQR is {iqr}.")
#Ramicane
ram_list = ram_merge["Tumor Volume (mm3)"]
quartiles = ram_list.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
tumor_volume_data_plot.append(ram_list)
print(f"Ramicane: values below {lower_bound} or above {upper_bound} could be outliers.")
print (f"Ramicane IQR is {iqr}.")
#Infubinol
infu_list = infu_merge["Tumor Volume (mm3)"]
quartiles = infu_list.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
tumor_volume_data_plot.append(infu_list)
print(f"Infubinol: values below {lower_bound} or above {upper_bound} could be outliers.")
print (f"Infubinol IQR is {iqr}.")
#Ceftamin
ceft_list = ceft_merge["Tumor Volume (mm3)"]
quartiles = ceft_list.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq-lowerq
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
tumor_volume_data_plot.append(ceft_list)
print(f"Ceftamin: values below {lower_bound} or above {upper_bound} could be outliers.")
print (f"Ceftamin IQR is {iqr}.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
tumor_volume_data_plot
fig1, ax1 = plt.subplots()
ax1.set_title('Final Tumor Volume of Each Mouse')
ax1.set_ylabel('Final Tumor Volume (mm3)')
ax1.set_xlabel('Drug Regimen')
ax1.boxplot(tumor_volume_data_plot, labels = ["Capomulin", "Ramicane", "Infubinol", "Ceftamin"])
plt.show()
```
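The four near-identical IQR blocks above could also be written as a single loop; a sketch with hypothetical stand-in volumes (illustrative values, not the study's data):

```python
import pandas as pd

# Hypothetical final tumor volumes per regimen (stand-in values for illustration)
final_volumes = {
    "Capomulin": pd.Series([30.0, 32.5, 34.0, 36.5, 38.0]),
    "Ramicane": pd.Series([29.0, 31.0, 33.5, 35.0, 37.5]),
}

for drug, vols in final_volumes.items():
    q1, q3 = vols.quantile(0.25), vols.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = vols[(vols < lower) | (vols > upper)]
    print(f"{drug}: IQR={iqr}, potential outliers={list(outliers)}")
```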
## Line and Scatter Plots
```
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
x_axis = np.arange(0,46,5)
tumor_vol = [45, 45.41, 39.11, 39.77, 36.06, 36.61, 32.91, 30.20, 28.16, 28.48]
plt.xlabel("Time Point")
plt.ylabel("Tumor Volume")
plt.title("Capomulin (x401)")
plt.ylim(25, 50)
plt.xlim(0, 45)
tumor_line, = plt.plot(x_axis, tumor_vol, marker="*", color="blue", linewidth=1, label="Capomulin")
plt.show()
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
drug_df = clean_df.loc[clean_df["Drug Regimen"] == "Capomulin"]
weight_tumor = drug_df.loc[:, ["Mouse ID", "Weight (g)", "Tumor Volume (mm3)"]]
avg_tumor_volume = pd.DataFrame(weight_tumor.groupby(["Mouse ID", "Weight (g)"])["Tumor Volume (mm3)"].mean()).reset_index()
avg_tumor_volume = avg_tumor_volume.set_index("Mouse ID")
avg_tumor_volume.plot(kind="scatter", x="Weight (g)", y="Tumor Volume (mm3)", grid=True, figsize=(8,8), title="Weight vs. Average Tumor Volume for Capomulin")
plt.show()
```
## Correlation and Regression
```
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
mouse_weight = avg_tumor_volume.iloc[:,0]
tumor_volume = avg_tumor_volume.iloc[:,1]
correlation = st.pearsonr(mouse_weight,tumor_volume)
print(f"The correlation between both factors is {round(correlation[0],2)}")
x_values = avg_tumor_volume['Weight (g)']
y_values = avg_tumor_volume['Tumor Volume (mm3)']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(18,36),fontsize=15,color="red")  # place the equation inside the plotted data range, not at (6,10) off-plot
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.title('Linear Regression')
plt.show()
```
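The same slope, intercept, and correlation can be recovered with numpy alone, which makes a handy cross-check on the `scipy` result. A minimal sketch with made-up points (perfectly linear, so the correlation is exactly 1):

```python
import numpy as np

weights = np.array([15.0, 18.0, 21.0, 24.0])   # hypothetical mouse weights (g)
volumes = np.array([35.0, 38.0, 41.0, 44.0])   # hypothetical avg tumor volumes (mm3)

slope, intercept = np.polyfit(weights, volumes, 1)  # degree-1 fit = linear regression
r = np.corrcoef(weights, volumes)[0, 1]             # Pearson correlation coefficient
print(f"y = {slope:.2f}x + {intercept:.2f}, r = {r:.2f}")
```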
# A canonical asset pricing job
Let's estimate, for each firm, for each year, the alpha, beta, and size and value loadings.
So we want a dataset that looks like this:
| Firm | Year | alpha | beta |
| --- | --- | --- | --- |
| GM | 2000 | 0.01 | 1.04 |
| GM | 2001 | -0.005 | 0.98 |
...but it will do this for every firm, every year!
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pandas_datareader as pdr
import seaborn as sns
# import statsmodels.api as sm
```
Load your stock returns. Here, I'll use this dataset, but you can use anything.
The returns don't even have to be firms.
**They can be any asset.** (Portfolios, mutual funds, crypto, ...)
```
crsp = pd.read_stata('https://github.com/LeDataSciFi/ledatascifi-2021/blob/main/data/3firm_ret_1990_2020.dta?raw=true')
crsp['ret'] = crsp['ret']*100 # convert to percentage to match FF's convention on scaling (daily % rets)
```
Then grab the market returns. Here, we will use one of the Fama-French datasets.
```
ff = pdr.get_data_famafrench('F-F_Research_Data_5_Factors_2x3_daily',start=1980,end=2010)[0] # the [0] is because the imported object is a dictionary, and key=0 is the dataframe
ff = ff.reset_index().rename(columns={"Mkt-RF":"mkt_excess", "Date":"date"})
```
Merge the market returns into the stock returns.
```
crsp_ready = pd.merge(left=ff, right=crsp, on='date', how="inner",
indicator=True, validate="one_to_many")
```
So the data's basically ready. Again, the goal is to estimate, for each firm, for each year, the alpha, beta, and size and value loadings.
You caught that right? I have a dataframe, and **for each** firm, and **for each** year, I want to \<do stuff\> (run regressions).
**Pandas + "for each" = groupby!**
So we will _basically_ run `crsp.groupby([firm,year]).runregression()`. Except there is no "runregression" function that applies to pandas groupby objects. Small workaround: `crsp.groupby([firm,year]).apply(<our own reg fcn>)`.
We just need to write a reg function that works on groupby objects.
```
import statsmodels.api as sm
def reg_in_groupby(df,formula="ret_excess ~ mkt_excess + SMB + HML"):
'''
Want to run regressions after groupby?
This will do it!
Note: This defaults to a FF3 model assuming specific variable names. If you
want to run any other regression, just specify your model.
Usage:
df.groupby(<whatever>).apply(reg_in_groupby)
df.groupby(<whatever>).apply(reg_in_groupby,formula=<whatever>)
'''
return pd.Series(sm.formula.ols(formula,data = df).fit().params)
```
Let's apply that to our returns!
```
(
crsp_ready # grab the data
# Two things before the regressions:
# 1. need a year variable (to group on)
# 2. the market returns in FF are excess returns, so
# our stock returns need to be excess as well
.assign(year = crsp_ready.date.dt.year,
ret_excess = crsp_ready.ret - crsp_ready.RF)
# ok, run the regs, so easy!
.groupby(['permno','year']).apply(reg_in_groupby)
# and clean up - with better var names
.rename(columns={'Intercept':'alpha','mkt_excess':'beta'})
.reset_index()
)
```
How cool is that!
## Summary
This is all you need to do:
1. Set up the data like you would have to no matter what:
1. Load your stock prices.
1. Merge in the market returns and any factors you want to include in your model.
1. Make sure your returns are scaled like your factors (e.g., above, I converted to percentages to match the FF convention)
1. Make sure your asset returns and market returns are both excess returns (or both are not excess returns)
1. Create any variables you want to group on (e.g. above, I created a year variable)
2. `df.groupby(<whatever>).apply(reg_in_groupby)`
Holy smokes!
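To see the whole pattern end-to-end without downloading anything, here's a toy version: synthetic returns for two firms over two years, and a groupby-apply regression using `np.polyfit` as an illustrative stand-in for the statsmodels FF3 regression above (all names and numbers here are made up):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({
    'firm': ['GM'] * n + ['F'] * n,
    'year': ([2000] * 50 + [2001] * 50) * 2,
    'mkt_excess': rng.normal(0, 1, 2 * n),
})
# each firm's excess return = beta * market + noise, with beta = 1.2
df['ret_excess'] = 1.2 * df['mkt_excess'] + rng.normal(0, 0.1, 2 * n)

def reg_in_groupby(g):
    # one-factor regression inside each (firm, year) group
    slope, intercept = np.polyfit(g['mkt_excess'], g['ret_excess'], 1)
    return pd.Series({'alpha': intercept, 'beta': slope})

out = df.groupby(['firm', 'year']).apply(reg_in_groupby).reset_index()
print(out)  # one (alpha, beta) row per firm-year
```

Each estimated beta should land near the true 1.2, and each alpha near zero.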
# NYC PLUTO Data and Noise Complaints
Investigating how PLUTO data and zoning characteristics impact the spatial and temporal distribution, and the types, of noise complaints throughout New York City. Specifically looking at noise complaints that are handled by NYC's Department of Environmental Protection (DEP).
All work performed by Zoe Martiniak.
```
import os
import pandas as pd
import numpy as np
import datetime
import urllib
import requests
from sodapy import Socrata
import matplotlib
import matplotlib.pyplot as plt
import pylab as pl
from pandas.plotting import scatter_matrix
%matplotlib inline
%pylab inline
##Geospatial
import shapely
import geopandas as gp
from geopandas import GeoDataFrame
from fiona.crs import from_epsg
from shapely.geometry import Point, MultiPoint
import io
from geopandas.tools import sjoin
from shapely.ops import nearest_points
## Statistical Modelling
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.datasets.longley import load
import sklearn.preprocessing as preprocessing
from sklearn.ensemble import RandomForestRegressor as rfr
from sklearn.model_selection import train_test_split  # sklearn.cross_validation was removed in scikit-learn 0.20
from sklearn.metrics import confusion_matrix
from APPTOKEN import myToken
## Save your Socrata app token as variable myToken in a file titled APPTOKEN.py
## e.g.
## myToken = 'XXXXXXXXXXXXXXXX'
```
# DATA IMPORTING
Applying domain knowledge to read in only the columns of interest, to reduce computing requirements.
### PLUTO csv file
```
pluto = pd.read_csv(os.getenv('MYDATA')+'/pluto_18v2.csv', usecols=['borocode','zonedist1',
'overlay1', 'bldgclass', 'landuse',
'ownertype','lotarea', 'bldgarea', 'comarea',
'resarea', 'officearea', 'retailarea', 'garagearea', 'strgearea',
'factryarea', 'otherarea', 'numfloors',
'unitsres', 'unitstotal', 'proxcode', 'lottype','lotfront',
'lotdepth', 'bldgfront', 'bldgdepth',
'yearalter1',
'assessland', 'yearbuilt','histdist', 'landmark', 'builtfar',
'residfar', 'commfar', 'facilfar','bbl', 'xcoord','ycoord'])
```
### 2010 Census Blocks
```
census = gp.read_file('Data/2010 Census Blocks/geo_export_56edaf68-bbe6-44a7-bd7c-81a898fb6f2e.shp')
```
### Read in 311 Complaints
```
complaints = pd.read_csv('Data/311DEPcomplaints.csv', usecols=['address_type','borough','city',
'closed_date', 'community_board','created_date',
'cross_street_1', 'cross_street_2', 'descriptor', 'due_date',
'facility_type', 'incident_address', 'incident_zip',
'intersection_street_1', 'intersection_street_2', 'latitude',
'location_type', 'longitude', 'resolution_action_updated_date',
'resolution_description', 'status', 'street_name' ])
## Many missing lat/lon values in complaints file
## Is it worth it to manually fill in NaN with geopy-geocoded lat/long?
len(complaints[(complaints.latitude.isna()) | (complaints.longitude.isna())])/len(complaints)
```
### Manually Filling in Missing Lat/Long from Addresses
Very time- and computationally expensive, so this step should be performed on a different machine.
For our purposes, I will just drop rows with missing lat/long.
```
complaints.dropna(subset=['longitude', 'latitude'],inplace=True)
complaints['createdate'] = pd.to_datetime(complaints['created_date'])
complaints = complaints[complaints.createdate >= datetime.datetime(2018,1,1)]
complaints = complaints[complaints.createdate < datetime.datetime(2019,1,1)]
complaints['lonlat']=list(zip(complaints.longitude.astype(float), complaints.latitude.astype(float)))
complaints['geometry']=complaints[['lonlat']].applymap(lambda x:shapely.geometry.Point(x))
crs = {'init':'epsg:4326', 'no_defs': True}
complaints = gp.GeoDataFrame(complaints, crs=crs, geometry=complaints['geometry'])
```
## NYC Zoning Shapefile
```
zoning = gp.GeoDataFrame.from_file('Data/nycgiszoningfeatures_201902shp/nyzd.shp')
zoning.to_crs(epsg=4326, inplace=True)
```
# PLUTO Shapefiles
## Load in PLUTO Shapefiles by Boro
The PLUTO shapefiles are incredibly large. I used ArcMap to separate the PLUTO shapefiles by borough and saved them locally.
My original plan was to perform a spatial join of the complaints to the pluto shapefiles to find the relationship between PLUTO data on the building-scale and noise complaints.
While going through this exploratory analysis, I discovered that the 311 complaints are actually all located in the street and therefore the points do not intersect with the PLUTO shapefiles. This brings up some interesting questions, such as how the lat/long coordinates are assigned by the DEP.
I am including this step to showcase that the complaints do not intersect with the shapefiles, to justify my next step of simply aggregating by zoning type with the zoning shapefiles.
```
## PLUTO SHAPEFILES BY BORO
#files = ! ls Data/PLUTO_Split | grep '.shp'
boros= ['bronx','brooklyn','man','queens','staten']
columns_to_drop = ['FID_pluto_', 'Borough','CT2010', 'CB2010',
'SchoolDist', 'Council', 'FireComp', 'PolicePrct',
'HealthCent', 'HealthArea', 'Sanitboro', 'SanitDistr', 'SanitSub',
'Address','BldgArea', 'ComArea', 'ResArea', 'OfficeArea',
'RetailArea', 'GarageArea', 'StrgeArea', 'FactryArea', 'OtherArea',
'AreaSource','LotFront', 'LotDepth', 'BldgFront', 'BldgDepth', 'Ext', 'ProxCode',
'IrrLotCode', 'BsmtCode', 'AssessLand', 'AssessTot',
'ExemptLand', 'ExemptTot','ResidFAR', 'CommFAR', 'FacilFAR',
'BoroCode','CondoNo','XCoord', 'YCoord', 'ZMCode', 'Sanborn', 'TaxMap', 'EDesigNum', 'APPBBL',
'APPDate', 'PLUTOMapID', 'FIRM07_FLA', 'PFIRM15_FL', 'Version','BoroCode_1', 'BoroName']
bx_shp = gp.GeoDataFrame.from_file('Data/PLUTO_Split/Pluto_bronx.shp')
bx_311 = complaints[complaints.borough == 'BRONX']
bx_shp.to_crs(epsg=4326, inplace=True)
bx_shp.drop(columns_to_drop, axis=1, inplace=True)
```
## Mapping
```
f, ax = plt.subplots(figsize=(15,15))
#ax.get_xaxis().set_visible(False)
#ax.get_yaxis().set_visible(False)
ax.set_xlim(-73.91, -73.9)
ax.set_ylim(40.852, 40.86)
bx_shp.plot(ax=ax, color = 'w', edgecolor='k',alpha=0.5, legend=True)
plt.title("2018 Bronx Noise Complaints", size=20)
bx_311.plot(ax=ax,marker='.', color='red')#, markersize=.4, alpha=.4)
#fname = 'Bronx2018zoomed.png'
#plt.savefig(fname)
plt.show()
```
**Fig1:** This figure shows that the complaint points are located in the street, and therefore do not intersect with a tax lot. Therefore we cannot perform a spatial join on the two shapefiles.
# Data Cleaning & Simplifying
Here we apply our domain knowledge of zoning and Pluto data to do a bit of cleaning. This includes simplifying the zoning districts to extract the first letter, which can be one of the following five options:<br />
B: Ball Field, BPC<br />
P: Public Place, Park, Playground (all public areas)<br />
C: Commercial<br />
R: Residential<br />
M: Manufacturing<br />
```
print(len(zoning.ZONEDIST.unique()))
print(len(pluto.zonedist1.unique()))
def simplifying_zone(x):
if x in ['PLAYGROUND','PARK','PUBLIC PLACE','BALL FIELD' ,'BPC']:
return 'P'
if '/' in x:
return 'O'
if x[:3] == 'R10':
return x[:3]
else:
return x[:2]
def condensed_simple(x):
    # NOTE: 'R10' must be handled before the prefix checks,
    # since 'R10'[:2] == 'R1' would wrongly land in 'R1-R4'.
    if x == 'R10':
        return 'R8-R10'
    if x[:2] in ['R1','R2', 'R3','R4']:
        return 'R1-R4'
    if x[:2] in ['R5','R6', 'R7']:
        return 'R5-R7'
    if x[:2] in ['R8','R9']:
        return 'R8-R10'
    if x[:2] in ['C1','C2']:
        return 'C1-C2'
    if x[:2] in ['C5','C6']:
        return 'C5-C6'
    if x[:2] in ['C3','C4','C7','C8']:
        return 'C'
    if x[:1] =='M':
        return 'M'
    else:
        return x[:2]
cols_to_tidy = []
notcommon = []
for c in pluto.columns:
if type(pluto[c].mode()[0]) == str:
cols_to_tidy.append(c)
for c in cols_to_tidy:
pluto[c].fillna('U',inplace=True)
pluto.fillna(0,inplace=True)
pluto['bldgclass'] = pluto['bldgclass'].map(lambda x: x[0])
pluto['overlay1'] = pluto['overlay1'].map(lambda x: x[:2])
pluto['simple_zone'] = pluto['zonedist1'].map(simplifying_zone)
pluto['condensed'] = pluto['simple_zone'].map(condensed_simple)
```
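A quick sanity check of the two mapping functions on a handful of representative district codes (restated standalone here; note that R10 has to be recognized before the generic two-character prefix, since `'R10'[:2] == 'R1'`):

```python
def simplifying_zone(x):
    if x in ['PLAYGROUND', 'PARK', 'PUBLIC PLACE', 'BALL FIELD', 'BPC']:
        return 'P'
    if '/' in x:
        return 'O'          # mixed districts, e.g. 'M1-1/R5'
    if x[:3] == 'R10':
        return 'R10'
    return x[:2]

def condensed_simple(x):
    if x == 'R10':          # must precede the 'R1' prefix check
        return 'R8-R10'
    if x[:2] in ['R1', 'R2', 'R3', 'R4']:
        return 'R1-R4'
    if x[:2] in ['R5', 'R6', 'R7']:
        return 'R5-R7'
    if x[:2] in ['R8', 'R9']:
        return 'R8-R10'
    if x[:2] in ['C1', 'C2']:
        return 'C1-C2'
    if x[:2] in ['C5', 'C6']:
        return 'C5-C6'
    if x[:2] in ['C3', 'C4', 'C7', 'C8']:
        return 'C'
    if x[:1] == 'M':
        return 'M'
    return x[:2]

# spot-check a few district codes
for dist, expected in [('PARK', 'P'), ('M1-1/R5', 'O'), ('R10A', 'R10'),
                       ('R6B', 'R6'), ('C4-4', 'C4'), ('M2-1', 'M2')]:
    assert simplifying_zone(dist) == expected
assert condensed_simple('R10') == 'R8-R10'
assert condensed_simple('C4') == 'C'
assert condensed_simple('M2') == 'M'
```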
```
# keep only columns that were actually read in via usecols above (and drop duplicates)
zoning_analysis = pluto[['lotarea', 'bldgarea', 'comarea',
       'resarea', 'officearea', 'retailarea', 'garagearea', 'strgearea',
       'factryarea', 'otherarea', 'numfloors',
       'unitsres', 'unitstotal', 'lotfront', 'lotdepth', 'bldgfront',
       'bldgdepth', 'yearbuilt',
       'yearalter1', 'builtfar', 'simple_zone']].copy()
zoning_analysis.dropna(inplace=True)
## Cleaning the Complaint file for easier 1-hot-encoding
def TOD_shifts(x):
if x.hour <=7:
return 'M'
if x.hour >7 and x.hour<18:
return 'D'
if x.hour >= 18:
return 'E'
def DOW_(x):
weekdays = ['mon','tues','weds','thurs','fri','sat','sun']
for i in range(7):
if x.dayofweek == i:
return weekdays[i]
def resolution_(x):
    # Map each raw resolution description to one of four simplified categories.
    descriptions = complaints.resolution_description.unique()
    groups = {
        'violation': [1],
        'valid_no_vio': [2, 3, 4, 5, 11, 12, 14, 17, 20, 23, 25],
        'further_investigation': [0, 6, 10, 16, 19, 21, 24],
        'access_issue': [7, 8, 9, 13, 15, 18, 22],
    }
    for label, idxs in groups.items():
        if any(x == descriptions[i] for i in idxs):
            return label
```
#### SIMPLIFIED COMPLAINT DESCRIPTIONS
0: Did not observe violation<br/>
1: Violation issued <br/>
No violation issued yet/canceled/resolved because:<br/>
2: Duplicate<br/>
3: Not warranted<br/>
4: Complainant canceled<br/>
5: Not warranted<br/>
6: Investigate further<br/>
7: Closed because complainant didn't respond<br/>
8: Incorrect complainant contact info (phone)<br/>
9: Incorrect complainant contact info (address)<br/>
10: Further investigation<br/>
11: NaN<br/>
12: Status unavailable<br/>
13: Could not gain access to location<br/>
14: NYPD<br/>
15: Sent letter to complainant after calling<br/>
16: Received letter from dog owner<br/>
17: Resolved with complainant<br/>
18: Incorrect address<br/>
19: An inspection is warranted<br/>
20: Hydrant<br/>
21: 2nd inspection<br/>
22: No complainant info<br/>
23: Refer to other agency (not nypd)<br/>
24: Inspection is scheduled<br/>
25: Call 311 for more info<br/>
Violation: [1]
Not warranted/canceled/other agency/duplicate: [2,3,4,5,11,12,14,17,20,23,25]
Complainant/access issue: [7,8,9,13,15,18,22]
Further investigation: [0,6,10,16,19,21,24]
```
complaints['TOD']=complaints.createdate.map(TOD_shifts)
complaints['DOW']=complaints.createdate.map(DOW_)
```
## PLUTO/Zoning Feature Analysis
```
## Obtained this line of code from datascience.stackexchange @ the following link:
## https://datascience.stackexchange.com/questions/10459/calculation-and-visualization-of-correlation-matrix-with-pandas
def drange(start, stop, step):
r = start
while r <= stop:
yield r
r += step
def correlation_matrix(df):
from matplotlib import pyplot as plt
from matplotlib import cm as cm
fig = plt.figure(figsize=(10,10))
ax1 = fig.add_subplot(111)
cmap = cm.get_cmap('jet', 30)
cax = ax1.imshow(df.corr(), interpolation="nearest", cmap=cmap)
ax1.grid(True)
plt.title('PLUTO Correlation', size=20)
labels =[x for x in zoning_analysis.columns ]
ax1.set_yticklabels(labels,fontsize=14)
ax1.set_xticklabels(labels,fontsize=14, rotation='90')
# Add colorbar, make sure to specify tick locations to match desired ticklabels
fig.colorbar(cax, ticks = list(drange(-1, 1, 0.25)))
plt.show()
correlation_matrix(zoning_analysis)
zoning_analysis.sort_values(['simple_zone'],ascending=False, inplace=True)
y = zoning_analysis.groupby('simple_zone').mean()
f, axes = plt.subplots(figsize=(8,25), nrows=6, ncols=1)
cols = ['lotarea', 'bldgarea', 'comarea', 'resarea', 'officearea', 'retailarea']
for colind in range(6):
y[cols[colind]].plot(ax = plt.subplot(6,1,colind+1), kind='bar')
plt.ylabel('Avg. {} Units'.format(cols[colind]))
plt.title(cols[colind])
zoning['simple_zone'] = zoning['ZONEDIST'].map(simplifying_zone)
zoning['condensed'] = zoning['simple_zone'].map(condensed_simple)
zoning = zoning.reset_index().rename(columns={'index':'zdid'})
```
## Perform Spatial Joins
```
## Joining Census group shapefile to PLUTO shapefile
sjoin(census, bx_shp)  # join census blocks to the per-borough PLUTO shapefile loaded above
## Joining the zoning shapefile to complaints
zoning_joined = sjoin(zoning, complaints).reset_index()
zoning_joined.drop('index',axis=1, inplace=True)
print(zoning.shape)
print(complaints.shape)
print(zoning_joined.shape)
zoning_joined.drop(columns=['index_right', 'address_type', 'borough',
'city', 'closed_date', 'community_board', 'created_date',
'cross_street_1', 'cross_street_2', 'due_date',
'facility_type', 'incident_address', 'incident_zip',
'intersection_street_1', 'intersection_street_2',
'location_type', 'resolution_action_updated_date',
'resolution_description', 'status', 'street_name', 'lonlat'], inplace=True)
## Joining each borough PLUTO shapefile to zoning shapefile
bx_shp['centroid_colum'] = bx_shp.centroid
bx_shp = bx_shp.set_geometry('centroid_colum')
pluto_bx = sjoin(zoning, bx_shp).reset_index()
print(zoning.shape)
print(bx_shp.shape)
print(pluto_bx.shape)
pluto_bx = pluto_bx.groupby('zdid')[['LandUse', 'LotArea', 'NumBldgs', 'NumFloors', 'UnitsRes',
       'UnitsTotal', 'LotType', 'YearBuilt', 'YearAlter1', 'YearAlter2', 'BuiltFAR']].mean()
pluto_bx = zoning.merge(pluto_bx, on='zdid')
```
# ANALYSIS
## Visual Analysis
```
x = zoning_joined.groupby('simple_zone')['ZONEDIST'].count().index
y = zoning_joined.groupby('simple_zone')['ZONEDIST'].count()
f, ax = plt.subplots(figsize=(12,9))
plt.bar(x, y)
plt.ylabel('Counts', size=12)
plt.title('Noise Complaints by Zoning Districts (2018)', size=15)
```
**Fig 1** This shows the total counts of complaints by zoning district. Clearly there are more complaints in middle/high-population-density residential zoning districts. There are also many complaints in commercial districts C5 & C6; these commercial districts tend to have a residential overlay.
```
y.sort_values(ascending=False, inplace=True)
x = y.index
descriptors = zoning_joined.descriptor.unique()
df = pd.DataFrame(index=x)
for d in descriptors:
df[d] = zoning_joined[zoning_joined.descriptor == d].groupby('simple_zone')['ZONEDIST'].count()
df = df.div(df.sum(axis=1), axis=0)
ax = df.plot(kind="bar", stacked=True, figsize=(18,12))
df.sum(axis=1).plot(ax=ax, color="k")
plt.title('Noise Complaints by Descriptor', size=20)
plt.xlabel('Simplified Zone District (Decreasing Total Count -->)', size=12)
plt.ylabel('%', size=12)
fname = 'Descriptorpercent.jpeg'
#plt.savefig(fname)
plt.show()
```
**Fig 2** This figure shows the breakdown of the main noise complaint types per zoning district.
```
descriptors
complaints_by_zone = pd.get_dummies(zoning_joined, columns=['TOD','DOW'])
complaints_by_zone = complaints_by_zone.rename(columns={'TOD_D':'Day','TOD_E':'Night',
'TOD_M':'Morning','DOW_fri':'Friday','DOW_mon':'Monday','DOW_sat':'Saturday',
'DOW_sun':'Sunday','DOW_thurs':'Thursday','DOW_tues':'Tuesday','DOW_weds':'Wednesday'})
complaints_by_zone.drop(columns=['descriptor', 'latitude', 'longitude','createdate'],inplace=True)
complaints_by_zone = complaints_by_zone.groupby('zdid').sum()[['Day', 'Night', 'Morning', 'Friday',
'Monday', 'Saturday', 'Sunday', 'Thursday', 'Tuesday', 'Wednesday']].reset_index()
## Creating total counts of complaints by zoning district
complaints_by_zone['Count_TOD'] = (complaints_by_zone.Day +
complaints_by_zone.Night +
complaints_by_zone.Morning)
complaints_by_zone['Count_DOW'] = (complaints_by_zone.Monday +
complaints_by_zone.Tuesday +
complaints_by_zone.Wednesday +
complaints_by_zone.Thursday +
complaints_by_zone.Friday +
complaints_by_zone.Saturday +
complaints_by_zone.Sunday)
## Verifying the counts are the same
complaints_by_zone[complaints_by_zone.Count_TOD != complaints_by_zone.Count_DOW]
print(complaints_by_zone.shape)
print(zoning.shape)
complaints_by_zone = zoning.merge(complaints_by_zone, on='zdid')
print(complaints_by_zone.shape)
f, ax = plt.subplots(1,figsize=(13,13))
ax.set_axis_off()
ax.set_title('Avg # of Complaints',size=15)
complaints_by_zone.plot(ax=ax, column='Count_TOD', cmap='gist_earth', k=3, alpha=0.7, legend=True)
fname = 'AvgComplaintsbyZD.png'
plt.savefig(fname)
plt.show()
complaints_by_zone['Norm_count'] = complaints_by_zone.Count_TOD/complaints_by_zone.Shape_Area*1000000
f, ax = plt.subplots(1,figsize=(13,13))
ax.set_axis_off()
ax.set_title('Complaints Normalized by ZD Area',size=15)
complaints_by_zone[complaints_by_zone.Norm_count < 400].plot(ax=ax, column='Norm_count', cmap='gist_earth', k=3, alpha=0.7, legend=True)
fname = 'NormComplaintsbyZD.png'
plt.savefig(fname)
plt.show()
```
**Fig 3** This figure shows the spread of noise complaint density (complaints per unit area) of each zoning district.
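The time-of-day / day-of-week aggregation used above is a general one-hot-then-sum pattern. A minimal standalone sketch with made-up complaint records (hypothetical `zdid` and `TOD` values, not the 311 data):

```python
import pandas as pd

records = pd.DataFrame({
    'zdid': [1, 1, 1, 2, 2],          # zoning-district id of each complaint
    'TOD':  ['M', 'D', 'D', 'E', 'E'],  # morning / day / evening shift
})
onehot = pd.get_dummies(records, columns=['TOD'])   # one indicator column per shift
by_zone = onehot.groupby('zdid').sum().reset_index()
by_zone['Count_TOD'] = by_zone[['TOD_D', 'TOD_E', 'TOD_M']].sum(axis=1)
print(by_zone)
```

The row sums recover the total complaint count per district, which is the same consistency check performed with `Count_TOD` vs. `Count_DOW` above.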
```
complaints_by_zone.columns
TODcols = ['Day', 'Night', 'Morning']
fig = pl.figure(figsize=(30,20))
for x in range(1, 4):
    ax = fig.add_subplot(1, 3, x)
    ax.set_axis_off()
    ax.set_title(TODcols[x-1], size=28)
    complaints_by_zone.plot(column=TODcols[x-1], cmap='Blues', alpha=1,
                            edgecolor='k', ax=ax, legend=True)
DOWcols = ['Friday', 'Monday', 'Saturday', 'Sunday', 'Thursday', 'Tuesday', 'Wednesday']
fig = pl.figure(figsize=(30,20))
for x in range(1, 8):  # 7 days, so use a 2x4 grid
    ax = fig.add_subplot(2, 4, x)
    ax.set_axis_off()
    ax.set_title(DOWcols[x-1], size=28)
    complaints_by_zone.plot(column=DOWcols[x-1], cmap='gist_stern', alpha=1,
                            ax=ax, legend=True)
```
## Regression
Define lat/long coordinates of zoning centroids for regression
```
complaints_by_zone.shape
complaints_by_zone['centerlong'] = complaints_by_zone.centroid.x
complaints_by_zone['centerlat'] = complaints_by_zone.centroid.y
mod = smf.ols(formula =
'Norm_count ~ centerlat + centerlong', data=complaints_by_zone)
results1 = mod.fit()
results1.summary()
len(complaints_by_zone.ZONEDIST.unique())
mod = smf.ols(formula =
'Norm_count ~ ZONEDIST', data=complaints_by_zone)
results1 = mod.fit()
results1.summary()
len(complaints_by_zone.simple_zone.unique())
mod = smf.ols(formula =
'Norm_count ~ simple_zone', data=complaints_by_zone)
results1 = mod.fit()
results1.summary()
complaints_by_zone.condensed.unique()
mod = smf.ols(formula =
'Norm_count ~ condensed', data=complaints_by_zone)
results1 = mod.fit()
results1.summary()
```
### PLAN
- JOIN ALL ZONE DIST TO PLUTO SHAPEFILES, AGGREGATE FEATURES
- PERFORM REGRESSION
COMPLEX CLASSIFIERS
- DECISION TREE AND CLUSTERING
```
import folium
from folium.plugins import HeatMap
hmap = folium.Map()
hm_wide = HeatMap(list(zip(complaints.latitude, complaints.longitude)))  # HeatMap expects (lat, lon) pairs
f, ax = plt.subplots(figsize=(15,15))
#ax.get_xaxis().set_visible(False)
#ax.get_yaxis().set_visible(False)
## assumes a 'counts' column of complaint totals has been added to `zoning`
zoning.plot(column='counts',ax=ax, cmap='plasma', alpha = 0.9, legend=True)
plt.title("Complaints by Zone", size=20)
```
```
import os
import re
import random
import tensorflow as tf
import tensorflow.python.platform
from tensorflow.python.platform import gfile
import numpy as np
import pandas as pd
import sklearn
from sklearn import metrics
from sklearn import model_selection
import sklearn.linear_model
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.svm import SVC, LinearSVC
import matplotlib.pyplot as plt
%matplotlib inline
import pickle
import scipy.linalg
import sklearn.preprocessing
import sklearn.linear_model
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import BaggingClassifier
```
### Warm-up
(a) In a one-vs-one fashion, for each pair of classes, train a linear SVM classifier using scikit-learn's function LinearSVC, with the default value for the regularization parameter. Compute the multi-class misclassification error obtained using these classifiers trained in a one-vs-one fashion.
```
X_train = pickle.load(open('features_train_all','rb'))
y_train = pickle.load(open('labels_train_all','rb'))
X_train1, X_test1, y_train1, y_test1 = train_test_split(X_train, y_train, test_size=0.2, random_state=42)
LinearSVC_ovo = SVC(C=1.0, kernel='linear', max_iter=1000, decision_function_shape = 'ovo')
LinearSVC_ovo.fit(X_train1, y_train1)
y_lrSVC_ovo = LinearSVC_ovo.predict(X_test1)
accuracy_lrSVC_ovo = accuracy_score(y_test1, y_lrSVC_ovo)
misclassification_error = 1 - accuracy_lrSVC_ovo
print("The multi-class misclassification error obtained using classifiers trained in a one-vs-one fashion is ", + misclassification_error)
```
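On synthetic data the same one-vs-one setup can be exercised end-to-end; with `decision_function_shape='ovo'`, `SVC` exposes one decision-function column per pair of classes, i.e. k(k-1)/2 = 3 columns for 3 classes. A sketch with made-up data rather than the pickled features:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# synthetic 3-class problem standing in for the pickled feature matrices
X, y = make_classification(n_samples=300, n_features=10, n_informative=6,
                           n_classes=3, random_state=42)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=42)

clf = SVC(C=1.0, kernel='linear', decision_function_shape='ovo').fit(Xtr, ytr)
error = 1 - accuracy_score(yte, clf.predict(Xte))
n_pairwise = clf.decision_function(Xte).shape[1]  # k*(k-1)/2 pairwise classifiers
print(f"misclassification error = {error:.3f}, pairwise classifiers = {n_pairwise}")
```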
(b) In a one-vs-rest fashion, for each class, train a linear SVM classifier using scikit-learn's function LinearSVC, with the default value for $\lambda_c$. Compute the multi-class misclassification error obtained using these classifiers trained in a one-vs-rest fashion.
```
linearSVC_ovr = LinearSVC(C=1.0, loss='squared_hinge', penalty='l2',multi_class='ovr')
linearSVC_ovr.fit(X_train1, y_train1)
y_lrSVC_ovr = linearSVC_ovr.predict(X_test1)
accuracy_lrSVC_ovr = accuracy_score(y_test1, y_lrSVC_ovr)
misclassification_error1 = 1 - accuracy_lrSVC_ovr
print("The multi-class misclassification error obtained using classifiers trained in a one-vs-rest fashion is ", + misclassification_error1)
```
(c) Using the option multi_class='crammer_singer' in scikit-learn's function LinearSVC, train a multi-class linear SVM classifier using the default value for the regularization parameter. Compute the multi-class misclassification error obtained using this multi-class linear SVM classifier.
```
linearSVC_cs = LinearSVC(C=1.0, loss='squared_hinge', penalty='l2',multi_class='crammer_singer')
linearSVC_cs.fit(X_train1, y_train1)
y_lrSVC_cs = linearSVC_cs.predict(X_test1)
accuracy_lrSVC_cs = accuracy_score(y_test1, y_lrSVC_cs)
misclassification_error2 = 1 - accuracy_lrSVC_cs
print("The multi-class misclassification error obtained using multi-class linear SVM classifier is ", + misclassification_error2)
```
### Linear SVMs for multi-class classification
- Redo all questions above now tuning the regularization parameters using cross-validation.
```
X_train_sub = X_train[:500]
y_train_sub = y_train[:500]
#Redo Model one: linearSVC with one-vs-one
ovo_svm = SVC(kernel='linear', max_iter=1000, decision_function_shape = 'ovo')
parameters = {'C':[10**i for i in range(-4, 5)]}
clf_ovo = GridSearchCV(ovo_svm, parameters)
clf_ovo.fit(X_train_sub, y_train_sub)
clf_ovo.best_params_
LinearSVC_ovo_opt = SVC(C=0.1, kernel='linear', max_iter=1000, decision_function_shape = 'ovo')
LinearSVC_ovo_opt.fit(X_train1, y_train1)
y_lrSVC_ovo_opt = LinearSVC_ovo_opt.predict(X_test1)
accuracy_lrSVC_ovo_opt = accuracy_score(y_test1, y_lrSVC_ovo_opt)
misclassification_error_opt = 1 - accuracy_lrSVC_ovo_opt
print("The multi-class misclassification error obtained using classifiers trained in a one-vs-one fashion with lambda=0.1 is ", + misclassification_error_opt)
#Redo model 2: LinearSVC with one-vs-rest
ovr_svm = LinearSVC(loss='squared_hinge', penalty='l2',multi_class='ovr')
parameters = {'C':[10**i for i in range(-4, 5)]}
clf_ovr = GridSearchCV(ovr_svm, parameters)
clf_ovr.fit(X_train_sub, y_train_sub)
clf_ovr.best_params_
linearSVC_ovr_opt = LinearSVC(C=0.01, loss='squared_hinge', penalty='l2',multi_class='ovr')
linearSVC_ovr_opt.fit(X_train1, y_train1)
y_lrSVC_ovr_opt = linearSVC_ovr_opt.predict(X_test1)
accuracy_lrSVC_ovr_opt = accuracy_score(y_test1, y_lrSVC_ovr_opt)
misclassification_error1_opt = 1 - accuracy_lrSVC_ovr_opt
print("The multi-class misclassification error obtained using classifiers trained in a one-vs-rest fashion with lambda=0.01 is ", + misclassification_error1_opt)
#Redo model 3: multi-class linear SVM
cs_svm = LinearSVC(loss='squared_hinge', penalty='l2',multi_class='crammer_singer')
parameters = {'C':[10**i for i in range(-4, 5)]}
clf_cs = GridSearchCV(cs_svm, parameters)
clf_cs.fit(X_train_sub, y_train_sub)
clf_cs.best_params_
linearSVC_cs_opt = LinearSVC(C=0.1, loss='squared_hinge', penalty='l2',multi_class='crammer_singer')
linearSVC_cs_opt.fit(X_train1, y_train1)
y_lrSVC_cs_opt = linearSVC_cs_opt.predict(X_test1)
accuracy_lrSVC_cs_opt = accuracy_score(y_test1, y_lrSVC_cs_opt)
misclassification_error2_opt = 1 - accuracy_lrSVC_cs_opt
print("The multi-class misclassification error obtained using multi-class linear SVM with lambda=0.1 is ", + misclassification_error2_opt)
```
### Kernel SVMs for multi-class classification
- Redo all questions above now using the polynomial kernel of order 2 (and tuning the regularization parameters using cross-validation).
```
#Redo Model 1: polynomial kernel SVM of order 2 with one-vs-one
ovo_svm_poly = SVC(kernel='poly', degree=2, max_iter=1000, decision_function_shape = 'ovo')
parameters = {'C':[10**i for i in range(-4, 5)], 'coef0': [0, 1e-1, 1e-2, 1e-3, 1e-4]}
clf_ovo_poly = GridSearchCV(ovo_svm_poly, parameters)
clf_ovo_poly.fit(X_train_sub, y_train_sub)
clf_ovo_poly.best_params_
polySVC_ovo_opt = SVC(C=1000, coef0=0.1, kernel='poly', degree=2, max_iter=1000, decision_function_shape = 'ovo')
polySVC_ovo_opt.fit(X_train1, y_train1)
y_ovo_poly = polySVC_ovo_opt.predict(X_test1)
accuracy_poly_ovo_opt = accuracy_score(y_test1, y_ovo_poly)
misclassification_error_poly1 = 1 - accuracy_poly_ovo_opt
print("The multi-class misclassification error obtained using polynomial kernel SVM in one-vs-one with lambda=1000 is ", + misclassification_error_poly1)
#Redo Model 2: polynomial kernel SVM of order 2 with one-vs-rest
ovr_svm_poly = SVC(kernel='poly', degree=2, max_iter=1000, decision_function_shape = 'ovr')
parameters = {'C':[10**i for i in range(-4, 5)], 'coef0': [0, 1e-1, 1e-2, 1e-3, 1e-4]}
clf_ovr_poly = GridSearchCV(ovr_svm_poly, parameters)
clf_ovr_poly.fit(X_train_sub, y_train_sub)
clf_ovr_poly.best_params_
polySVC_ovr_opt = SVC(C=1000, coef0=0.1, kernel='poly', degree=2, max_iter=1000, decision_function_shape = 'ovr')
polySVC_ovr_opt.fit(X_train1, y_train1)
y_ovr_poly = polySVC_ovr_opt.predict(X_test1)
accuracy_poly_ovr_opt = accuracy_score(y_test1, y_ovr_poly)
misclassification_error_poly2 = 1 - accuracy_poly_ovr_opt
print("The multi-class misclassification error obtained using polynomial kernel SVM in one-vs-rest with lambda=1000 is ", + misclassification_error_poly2)
```