# Part 1: Getting Started with Sionna
This tutorial will guide you through Sionna, from its basic principles to the implementation of a point-to-point link with a 5G NR compliant code and a 3GPP channel model.
You will also learn how to write custom trainable layers by implementing a state of the art neural receiver, and how to train and evaluate end-to-end communication systems.
The tutorial is structured in four notebooks:
- **Part I: Getting started with Sionna**
- Part II: Differentiable Communication Systems
- Part III: Advanced Link-level Simulations
- Part IV: Toward Learned Receivers
The [official documentation](https://nvlabs.github.io/sionna) provides key material on how to use Sionna and how its components are implemented.
* [Imports & Basics](#Imports-&-Basics)
* [Sionna Data-flow and Design Paradigms](#Sionna-Data-flow-and-Design-Paradigms)
* [Hello, Sionna!](#Hello,-Sionna!)
* [Communication Systems as Keras Models](#Communication-Systems-as-Keras-Models)
* [Forward Error Correction](#Forward-Error-Correction-(FEC))
* [Eager vs. Graph Mode](#Eager-vs-Graph-Mode)
* [Exercise](#Exercise)
## Imports & Basics
```
# Import TensorFlow and NumPy
import tensorflow as tf
import numpy as np

# Import Sionna
try:
    import sionna as sn
except ImportError as e:
    # Install Sionna if package is not already installed
    import os
    os.system("pip install sionna")
    import sionna as sn

# For plotting
%matplotlib inline
# also try %matplotlib widget
import matplotlib.pyplot as plt

# for performance measurements
import time

# For the implementation of the Keras models
from tensorflow.keras import Model
```
We can now access Sionna functions within the `sn` namespace.
**Hint**: In Jupyter notebooks, you can run bash commands with `!`.
```
!nvidia-smi
```
## Sionna Data-flow and Design Paradigms
Sionna inherently parallelizes simulations via *batching*, i.e., each element in the batch dimension is simulated independently.
This means the first tensor dimension is always used for *inter-frame* parallelization, similar to an outer *for-loop* in Matlab/NumPy simulations, except that the iterations are executed in parallel.
To keep the dataflow efficient, Sionna follows a few simple design principles:
* Signal-processing components are implemented as individual [Keras layers](https://keras.io/api/layers/).
* `tf.float32` is used as the preferred datatype for real values and `tf.complex64` for complex-valued datatypes, respectively.
This allows simpler re-use of components (e.g., the same scrambling layer can be used for binary inputs and LLR-values).
* `tf.float64`/`tf.complex128` are available when high precision is needed.
* Models can be developed in *eager mode* allowing simple (and fast) modification of system parameters.
* Compute-intensive simulations can be executed in the faster *graph mode*; experimental *XLA* acceleration is available for most components.
* Whenever possible, components are automatically differentiable via [auto-grad](https://www.tensorflow.org/guide/autodiff) to simplify the deep learning design-flow.
* Code is structured into sub-packages for different tasks such as channel coding, mapping,... (see [API documentation](http://nvlabs.github.io/sionna/api/sionna.html) for details).
These paradigms improve the reusability and reliability of our components for a wide range of communications-related applications.
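In plain NumPy terms, batching means the first axis indexes independent link realizations, so one vectorized call replaces the outer for-loop. A toy BPSK-over-AWGN sketch (all names here are our own, not Sionna APIs):

```python
import numpy as np

# Each row of x is an independent "frame"; one vectorized line
# simulates the whole batch at once instead of looping over frames.
rng = np.random.default_rng(0)
batch_size, block_length = 4, 8
x = 1 - 2 * rng.integers(0, 2, (batch_size, block_length)).astype(float)  # BPSK symbols
n = rng.normal(scale=0.1, size=x.shape)                                   # AWGN
y = x + n            # all batch elements processed in parallel
print(y.shape)       # first dimension is the batch dimension
```

Sionna's layers follow the same convention, just with TensorFlow tensors on CPU/GPU.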
## Hello, Sionna!
Let's start with a very simple simulation: Transmitting QAM symbols over an AWGN channel. We will implement the system shown in the figure below.

We will use upper case for naming simulation parameters that are used throughout this notebook.
Every layer needs to be initialized once before it can be used.
**Tip**: Use the [API documentation](http://nvlabs.github.io/sionna/api/sionna.html) to find an overview of all existing components.
You can directly access the signature and the docstring within jupyter via `Shift+TAB`.
*Remark*: Most layers are defined to be complex-valued.
We first need to create a QAM constellation.
```
NUM_BITS_PER_SYMBOL = 2 # QPSK
constellation = sn.mapping.Constellation("qam", NUM_BITS_PER_SYMBOL)
constellation.show();
```
**Task:** Try to change the modulation order, e.g., to 16-QAM.
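Under the hood, a square M-QAM constellation is just a normalized grid of points. A minimal NumPy sketch (the helper name is our own, and Gray labeling is omitted; Sionna's `Constellation` handles all of that for you):

```python
import numpy as np

# Sketch: the 2^k points of a square M-QAM grid, normalized to unit
# average symbol energy. Labeling details are ignored here.
def qam_points(num_bits_per_symbol):
    m = 2 ** (num_bits_per_symbol // 2)           # points per dimension
    pam = np.arange(-(m - 1), m, 2, dtype=float)  # e.g. [-3, -1, 1, 3] for 16-QAM
    pts = (pam[:, None] + 1j * pam[None, :]).ravel()
    return pts / np.sqrt((np.abs(pts) ** 2).mean())  # normalize energy

pts = qam_points(4)  # 16-QAM
print(len(pts), (np.abs(pts) ** 2).mean())
```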
We then need to setup a mapper to map bits into constellation points. The mapper takes as parameter the constellation.
We also need to setup a corresponding demapper to compute log-likelihood ratios (LLRs) from received noisy samples.
```
mapper = sn.mapping.Mapper(constellation=constellation)
# The demapper uses the same constellation object as the mapper
demapper = sn.mapping.Demapper("app", constellation=constellation)
```
**Tip**: You can access the signature+docstring via `?` command and print the complete class definition via `??` operator.
Obviously, you can also access the source code via [https://github.com/nvlabs/sionna/](https://github.com/nvlabs/sionna/).
```
# print class definition of the Constellation class
sn.mapping.Mapper??
```
As can be seen, the `Mapper` class inherits from `Layer`, i.e., implements a Keras layer.
This makes it simple to build complex systems by using the [Keras functional API](https://keras.io/guides/functional_api/) to stack layers.
Sionna provides a binary source utility to sample uniform i.i.d. bits.
```
binary_source = sn.utils.BinarySource()
```
Finally, we need the AWGN channel.
```
awgn_channel = sn.channel.AWGN()
```
Sionna provides a utility function to compute the noise power spectral density $N_0$ from the energy per bit to noise power spectral density ratio $E_b/N_0$ in dB and a variety of parameters such as the coderate and the number of bits per symbol.
```
no = sn.utils.ebnodb2no(ebno_db=10.0,
                        num_bits_per_symbol=NUM_BITS_PER_SYMBOL,
                        coderate=1.0) # Coderate set to 1 as we do uncoded transmission here
```
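The conversion itself is simple arithmetic. A sketch of the underlying math (the function name is our own; the real utility can additionally account for resource-grid overheads in OFDM setups):

```python
import numpy as np

# Es/N0 = Eb/N0 * coderate * num_bits_per_symbol, and with unit symbol
# energy Es = 1 it follows that N0 = 1 / (Es/N0).
def ebnodb2no_sketch(ebno_db, num_bits_per_symbol, coderate):
    ebno = 10.0 ** (ebno_db / 10.0)  # dB -> linear
    return 1.0 / (ebno * coderate * num_bits_per_symbol)

print(ebnodb2no_sketch(10.0, 2, 1.0))  # same parameters as the cell above
```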
We now have all the components we need to transmit QAM symbols over an AWGN channel.
Sionna natively supports multi-dimensional tensors.
Most layers operate on the last dimension and can have arbitrary input shapes (which are preserved at the output).
```
BATCH_SIZE = 64 # How many examples are processed by Sionna in parallel
bits = binary_source([BATCH_SIZE,
                      1024]) # Blocklength
print("Shape of bits: ", bits.shape)
x = mapper(bits)
print("Shape of x: ", x.shape)
y = awgn_channel([x, no])
print("Shape of y: ", y.shape)
llr = demapper([y, no])
print("Shape of llr: ", llr.shape)
```
In *Eager* mode, we can directly access the values of each tensor. This simplifies debugging.
```
num_samples = 8 # how many samples shall be printed
num_symbols = int(num_samples/NUM_BITS_PER_SYMBOL)
print(f"First {num_samples} transmitted bits: {bits[0,:num_samples]}")
print(f"First {num_symbols} transmitted symbols: {np.round(x[0,:num_symbols], 2)}")
print(f"First {num_symbols} received symbols: {np.round(y[0,:num_symbols], 2)}")
print(f"First {num_samples} demapped llrs: {np.round(llr[0,:num_samples], 2)}")
```
Let's visualize the received noisy samples.
```
plt.figure(figsize=(8,8))
plt.axes().set_aspect(1)
plt.grid(True)
plt.title('Channel output')
plt.xlabel('Real Part')
plt.ylabel('Imaginary Part')
plt.scatter(tf.math.real(y), tf.math.imag(y))
plt.tight_layout()
```
**Task:** Play with the SNR to visualize the impact on the received samples.
**Advanced Task:** Compare the LLR distribution for "app" demapping with "maxlog" demapping.
The [Bit-Interleaved Coded Modulation](https://nvlabs.github.io/sionna/examples/Bit_Interleaved_Coded_Modulation.html) example notebook can be helpful for this task.
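To get a feeling for what "maxlog" demapping computes, here is a small NumPy sketch over an arbitrary labeled constellation. All names are our own, and the sign convention (positive LLR favors bit 0) is chosen for illustration; Sionna's `Demapper` may use the opposite convention:

```python
import numpy as np

# Max-log LLR: for each bit position, compare the squared distance to
# the closest constellation point labeled 0 vs. the closest labeled 1.
def maxlog_llr(y, points, bit_labels, no):
    """y: complex sample, points: constellation points,
    bit_labels: (num_points, num_bits) 0/1 array, no: noise power N0."""
    d2 = np.abs(y - points) ** 2  # squared distances to all points
    llrs = []
    for b in range(bit_labels.shape[1]):
        d0 = d2[bit_labels[:, b] == 0].min()  # closest point with bit=0
        d1 = d2[bit_labels[:, b] == 1].min()  # closest point with bit=1
        llrs.append((d1 - d0) / no)           # > 0 favors bit 0 here
    return np.array(llrs)

# QPSK with an assumed Gray labeling (for illustration only)
pts = np.array([1+1j, -1+1j, 1-1j, -1-1j]) / np.sqrt(2)
labels = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])
llrs = maxlog_llr(0.9 + 0.8j, pts, labels, no=0.1)
print(np.round(llrs, 2))  # sample close to the point labeled 00
```

"app" demapping replaces the two `min` operations with a log-sum-exp over all points, which is more accurate but slightly more expensive.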
## Communication Systems as Keras Models
It is typically more convenient to wrap a Sionna-based communication system into a [Keras model](https://keras.io/api/models/model/).
These models can be simply built by using the [Keras functional API](https://keras.io/guides/functional_api/) to stack layers.
The following cell implements the previous system as a Keras model.
The key methods that need to be defined are `__init__()`, which instantiates the required components, and `__call__()`, which performs the forward pass through the end-to-end system.
```
class UncodedSystemAWGN(Model): # Inherits from Keras Model
    def __init__(self, num_bits_per_symbol, block_length):
        """
        A Keras model of an uncoded transmission over the AWGN channel.

        Parameters
        ----------
        num_bits_per_symbol: int
            The number of bits per constellation symbol, e.g., 4 for QAM16.

        block_length: int
            The number of bits per transmitted message block (will be the codeword length later).

        Input
        -----
        batch_size: int
            The batch_size of the Monte-Carlo simulation.

        ebno_db: float
            The `Eb/No` value (=rate-adjusted SNR) in dB.

        Output
        ------
        (bits, llr):
            Tuple:

        bits: tf.float32
            A tensor of shape `[batch_size, block_length]` of 0s and 1s
            containing the transmitted information bits.

        llr: tf.float32
            A tensor of shape `[batch_size, block_length]` containing the
            received log-likelihood-ratio (LLR) values.
        """
        super().__init__() # Must call the Keras model initializer

        self.num_bits_per_symbol = num_bits_per_symbol
        self.block_length = block_length
        self.constellation = sn.mapping.Constellation("qam", self.num_bits_per_symbol)
        self.mapper = sn.mapping.Mapper(constellation=self.constellation)
        self.demapper = sn.mapping.Demapper("app", constellation=self.constellation)
        self.binary_source = sn.utils.BinarySource()
        self.awgn_channel = sn.channel.AWGN()

    # @tf.function # Enable graph execution to speed things up
    def __call__(self, batch_size, ebno_db):
        # No channel coding used; we set coderate=1.0
        no = sn.utils.ebnodb2no(ebno_db,
                                num_bits_per_symbol=self.num_bits_per_symbol,
                                coderate=1.0)
        bits = self.binary_source([batch_size, self.block_length])
        x = self.mapper(bits)
        y = self.awgn_channel([x, no])
        llr = self.demapper([y, no])
        return bits, llr
```
We first need to instantiate the model.
```
model_uncoded_awgn = UncodedSystemAWGN(num_bits_per_symbol=NUM_BITS_PER_SYMBOL, block_length=1024)
```
Sionna provides a utility to easily compute and plot the bit error rate (BER).
```
EBN0_DB_MIN = -3.0 # Minimum value of Eb/N0 [dB] for simulations
EBN0_DB_MAX = 5.0 # Maximum value of Eb/N0 [dB] for simulations
BATCH_SIZE = 2000 # How many examples are processed by Sionna in parallel
ber_plots = sn.utils.PlotBER("AWGN")
ber_plots.simulate(model_uncoded_awgn,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 20),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=100, # simulate until 100 block errors occurred
                   legend="Uncoded",
                   soft_estimates=True,
                   max_mc_iter=100, # run at most 100 Monte-Carlo iterations (each with batch_size samples)
                   show_fig=True);
```
The `sn.utils.PlotBER` object stores the results and allows adding additional simulations to the previous curves.
*Remark*: In Sionna, a block error occurs if two tensors differ in at least one position of the last dimension (i.e., at least one bit per codeword is wrongly received).
The bit error rate is the total number of erroneous positions divided by the total number of transmitted bits.
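These two definitions can be written down directly. A small sketch (the function name and variables are our own):

```python
import numpy as np

# BER: fraction of wrong bits. BLER: fraction of blocks with >= 1 wrong
# bit, where a block is one slice along the last dimension.
def ber_bler(bits, bits_hat):
    errors = bits != bits_hat             # elementwise bit errors
    ber = errors.mean()                   # wrong bits / total bits
    bler = errors.any(axis=-1).mean()     # block error if any bit differs
    return ber, bler

bits     = np.array([[0, 1, 1, 0], [1, 0, 0, 1]])
bits_hat = np.array([[0, 1, 0, 0], [1, 0, 0, 1]])  # one bit error in block 0
ber, bler = ber_bler(bits, bits_hat)
print(ber, bler)
```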
## Forward Error Correction (FEC)
We now add channel coding to our transceiver to make it more robust against transmission errors. For this, we will use [5G compliant low-density parity-check (LDPC) codes and Polar codes](https://portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=3214).
You can find more detailed information in the notebooks [Bit-Interleaved Coded Modulation (BICM)](https://nvlabs.github.io/sionna/examples/Bit_Interleaved_Coded_Modulation.html) and [5G Channel Coding and Rate-Matching: Polar vs. LDPC Codes](https://nvlabs.github.io/sionna/examples/5G_Channel_Coding_Polar_vs_LDPC_Codes.html).
```
k = 12
n = 20
encoder = sn.fec.ldpc.LDPC5GEncoder(k, n)
decoder = sn.fec.ldpc.LDPC5GDecoder(encoder, hard_out=True)
```
Let us encode some random input bits.
```
BATCH_SIZE = 1 # one codeword in parallel
u = binary_source([BATCH_SIZE, k])
print("Input bits are: \n", u.numpy())
c = encoder(u)
print("Encoded bits are: \n", c.numpy())
```
One of the fundamental paradigms of Sionna is batch-processing.
Thus, the example above could be executed for arbitrary batch sizes to simulate `batch_size` codewords in parallel.
However, Sionna can do more - it supports *N*-dimensional input tensors and, thereby, allows the processing of multiple samples for multiple users and several antennas with a single command.
Let's say we want to encode `batch_size` codewords of length `n` for each of the `num_users` connected to each of the `num_basestations`.
This means in total we transmit `batch_size` * `n` * `num_users` * `num_basestations` bits.
```
BATCH_SIZE = 10 # samples per scenario
num_basestations = 4
num_users = 5 # users per basestation
n = 1000 # codeword length per transmitted codeword
coderate = 0.5 # coderate
k = int(coderate * n) # number of info bits per codeword
# instantiate a new encoder for codewords of length n
encoder = sn.fec.ldpc.LDPC5GEncoder(k, n)
# the decoder must be linked to the encoder (to know the exact code parameters used for encoding)
decoder = sn.fec.ldpc.LDPC5GDecoder(encoder,
                                    hard_out=True, # binary output or provide soft-estimates
                                    return_infobits=True, # or also return (decoded) parity bits
                                    num_iter=20, # number of decoding iterations
                                    cn_type="boxplus-phi") # also try "minsum" decoding
# draw random bits to encode
u = binary_source([BATCH_SIZE, num_basestations, num_users, k])
print("Shape of u: ", u.shape)
# We can immediately encode u for all users, basestations and samples
# This all happens with a single line of code
c = encoder(u)
print("Shape of c: ", c.shape)
print("Total number of processed bits: ", np.prod(c.shape))
```
This works for arbitrary dimensions and allows a simple extension of the designed system to multi-user or multi-antenna scenarios.
Let us now replace the LDPC code with a Polar code. The API remains similar.
```
k = 64
n = 128
encoder = sn.fec.polar.Polar5GEncoder(k, n)
decoder = sn.fec.polar.Polar5GDecoder(encoder,
                                      dec_type="SCL") # you can also use "SC"
```
*Advanced Remark:* The 5G Polar encoder/decoder class directly applies rate-matching and the additional CRC concatenation.
This is all done internally and transparent to the user.
In case you want to access low-level features of the Polar codes, please use `sionna.fec.polar.PolarEncoder` and the desired decoder (`sionna.fec.polar.PolarSCDecoder`, `sionna.fec.polar.PolarSCLDecoder` or `sionna.fec.polar.PolarBPDecoder`).
Further details can be found in the tutorial notebook on [5G Channel Coding and Rate-Matching: Polar vs. LDPC Codes](https://nvlabs.github.io/sionna/examples/5G_Channel_Coding_Polar_vs_LDPC_Codes.html).

```
class CodedSystemAWGN(Model): # Inherits from Keras Model
    def __init__(self, num_bits_per_symbol, n, coderate):
        super().__init__() # Must call the Keras model initializer

        self.num_bits_per_symbol = num_bits_per_symbol
        self.n = n
        self.k = int(n*coderate)
        self.coderate = coderate
        self.constellation = sn.mapping.Constellation("qam", self.num_bits_per_symbol)
        self.mapper = sn.mapping.Mapper(constellation=self.constellation)
        self.demapper = sn.mapping.Demapper("app", constellation=self.constellation)
        self.binary_source = sn.utils.BinarySource()
        self.awgn_channel = sn.channel.AWGN()
        self.encoder = sn.fec.ldpc.LDPC5GEncoder(self.k, self.n)
        self.decoder = sn.fec.ldpc.LDPC5GDecoder(self.encoder, hard_out=True)

    #@tf.function # activate graph execution to speed things up
    def __call__(self, batch_size, ebno_db):
        no = sn.utils.ebnodb2no(ebno_db, num_bits_per_symbol=self.num_bits_per_symbol, coderate=self.coderate)
        bits = self.binary_source([batch_size, self.k])
        codewords = self.encoder(bits)
        x = self.mapper(codewords)
        y = self.awgn_channel([x, no])
        llr = self.demapper([y, no])
        bits_hat = self.decoder(llr)
        return bits, bits_hat

CODERATE = 0.5
BATCH_SIZE = 2000

model_coded_awgn = CodedSystemAWGN(num_bits_per_symbol=NUM_BITS_PER_SYMBOL,
                                   n=2048,
                                   coderate=CODERATE)
ber_plots.simulate(model_coded_awgn,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 15),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=500,
                   legend="Coded",
                   soft_estimates=False,
                   max_mc_iter=15,
                   show_fig=True,
                   forward_keyboard_interrupt=False);
```
As can be seen, the `PlotBER` class uses multiple stopping conditions and stops the simulation at a specific SNR point once no more errors occur.
**Task**: Replace the coding scheme by a Polar encoder/decoder or a convolutional code with Viterbi decoding.
## Eager vs Graph Mode
So far, we have executed the example in *eager* mode.
This allows us to run TensorFlow ops as if they were NumPy code, and simplifies development and debugging.
However, to unleash Sionna's full performance, we need to activate *graph* mode which can be enabled with the function decorator *@tf.function()*.
We refer to [TensorFlow Functions](https://www.tensorflow.org/guide/function) for further details.
```
@tf.function() # enables graph-mode of the following function
def run_graph(batch_size, ebno_db):
    # all code inside this function will be executed in graph mode, also calls of other functions
    print(f"Tracing run_graph for values batch_size={batch_size} and ebno_db={ebno_db}.") # print whenever this function is traced
    return model_coded_awgn(batch_size, ebno_db)

batch_size = 10 # try also different batch sizes
ebno_db = 1.5

# run twice - how does the output change?
run_graph(batch_size, ebno_db)
```
In graph mode, Python code (i.e., *non-TensorFlow code*) is only executed whenever the function is *traced*.
This happens whenever the input signature changes.
As can be seen above, the print statement was executed, i.e., the graph was traced again.
To avoid this re-tracing for different inputs, we now pass tensors as inputs.
You can see that the function is then traced only once for input tensors of the same dtype.
See [TensorFlow Rules of Tracing](https://www.tensorflow.org/guide/function#rules_of_tracing) for details.
**Task:** change the code above such that tensors are used as input and execute the code with different input values. Understand when re-tracing happens.
*Remark*: if the input to a function is a tensor, re-tracing is triggered only when its *signature* changes and not *just* its value. For example, the input could have a different size or datatype.
For efficient code execution, we usually want to avoid re-tracing of the code if not required.
```
# You can print the cached signatures with
print(run_graph.pretty_printed_concrete_signatures())
```
We now compare the throughput of the different modes.
```
repetitions = 4 # average over multiple runs
batch_size = BATCH_SIZE # try also different batch sizes
ebno_db = 1.5

# --- eager mode ---
t_start = time.perf_counter()
for _ in range(repetitions):
    bits, bits_hat = model_coded_awgn(tf.constant(batch_size, tf.int32),
                                      tf.constant(ebno_db, tf.float32))
t_stop = time.perf_counter()
# throughput in Mbit/s
throughput_eager = np.size(bits.numpy())*repetitions / (t_stop - t_start) / 1e6
print(f"Throughput in Eager mode: {throughput_eager :.3f} Mbit/s")

# --- graph mode ---
# run once to trace graph (ignored for throughput)
run_graph(tf.constant(batch_size, tf.int32),
          tf.constant(ebno_db, tf.float32))

t_start = time.perf_counter()
for _ in range(repetitions):
    bits, bits_hat = run_graph(tf.constant(batch_size, tf.int32),
                               tf.constant(ebno_db, tf.float32))
t_stop = time.perf_counter()
# throughput in Mbit/s
throughput_graph = np.size(bits.numpy())*repetitions / (t_stop - t_start) / 1e6
print(f"Throughput in graph mode: {throughput_graph :.3f} Mbit/s")
```
Let's run the same simulation as above in graph mode.
```
ber_plots.simulate(run_graph,
                   ebno_dbs=np.linspace(EBN0_DB_MIN, EBN0_DB_MAX, 12),
                   batch_size=BATCH_SIZE,
                   num_target_block_errors=500,
                   legend="Coded (Graph mode)",
                   soft_estimates=True,
                   max_mc_iter=100,
                   show_fig=True,
                   forward_keyboard_interrupt=False);
```
**Task:** TensorFlow allows *compiling* graphs with [XLA](https://www.tensorflow.org/xla). Try to further accelerate the code with XLA (`@tf.function(jit_compile=True)`).
*Remark*: XLA is still an experimental feature and not all TensorFlow (and, thus, Sionna) functions support XLA.
**Task 2:** Check the GPU load with `!nvidia-smi`. Find the best tradeoff between batch-size and throughput for your specific GPU architecture.
## Exercise
Simulate the coded bit error rate (BER) for Polar-coded 64-QAM modulation.
Assume a codeword length of n = 200 and coderate = 0.5.
**Hint**: For Polar codes, successive cancellation list decoding (SCL) gives the best BER performance.
However, successive cancellation (SC) decoding (without a list) is less complex.
```
n = 200
coderate = 0.5
# *You can implement your code here*
```
<a href="https://colab.research.google.com/github/MikelKN/Standford_NLU_Spring_2021_CS224U/blob/master/tutorial_pytorch.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Tutorial: PyTorch
```
from google.colab import drive
drive.mount('/content/drive')
__author__ = "Ignacio Cases"
__version__ = "CS224u, Stanford, Spring 2021"
```
## Contents
1. [Motivation](#Motivation)
1. [Importing PyTorch](#Importing-PyTorch)
1. [Tensors](#Tensors)
1. [Tensor creation](#Tensor-creation)
1. [Operations on tensors](#Operations-on-tensors)
1. [GPU computation](#GPU-computation)
1. [Neural network foundations](#Neural-network-foundations)
1. [Automatic differentiation](#Automatic-differentiation)
1. [Modules](#Modules)
1. [Sequential](#Sequential)
1. [Criteria and loss functions](#Criteria-and-loss-functions)
1. [Optimization](#Optimization)
1. [Training a simple model](#Training-a-simple-model)
1. [Reproducibility](#Reproducibility)
1. [References](#References)
## Motivation
PyTorch is a Python package designed to carry out scientific computation. We use PyTorch in a range of different environments: local model development, large-scale deployments on big clusters, and even _inference_ in embedded, low-power systems. While similar in many aspects to NumPy, PyTorch enables us to perform fast and efficient training of deep learning and reinforcement learning models not only on the CPU but also on a GPU or other ASICs (Application Specific Integrated Circuits) for AI, such as Tensor Processing Units (TPU).
## Importing PyTorch
This tutorial assumes a working installation of PyTorch using your `nlu` environment, but the content applies to any regular installation of PyTorch. If you don't have a working installation of PyTorch, please follow the instructions in [the setup notebook](setup.ipynb).
To get started working with PyTorch we simply begin by importing the torch module:
```
import torch
```
**Side note**: why not `import pytorch`? The name of the package is `torch` for historical reasons: `torch` is the original name of the ancestor of the PyTorch library that got started back in 2002 as a C library with Lua scripting. It was only much later that the original `torch` was ported to Python. The PyTorch project decided to prefix the Py to make clear that this library refers to the Python version, as it was confusing back then to know which `torch` one was referring to. All the internal references to the library use just `torch`. It's possible that PyTorch will be renamed at some point, as the original `torch` is no longer maintained and there is no longer confusion.
We can see the version installed and determine whether or not we have a GPU-enabled PyTorch install by issuing
```
print("PyTorch version {}".format(torch.__version__))
print("GPU-enabled installation? {}".format(torch.cuda.is_available()))
```
PyTorch has good [documentation](https://pytorch.org/docs/stable/index.html) but it can take some time to familiarize oneself with the structure of the package; it's worth the effort to do so!
We will also make use of other imports:
```
import numpy as np
```
## Tensors
Tensors are collections of numbers represented as an array, and are the basic building blocks in PyTorch.
You are probably already familiar with several types of tensors:
- A scalar, a single number, is a zero-th order tensor.
- A column vector $v$ of dimensionality $d_c \times 1$ is a tensor of order 1.
- A row vector $x$ of dimensionality $1 \times d_r$ is a tensor of order 1.
- A matrix $A$ of dimensionality $d_r \times d_c$ is a tensor of order 2.
- A cube $T$ of dimensionality $d_r \times d_c \times d_d$ is a tensor of order 3.
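A quick check of these orders with NumPy arrays, which are also a representation of tensors (`ndim` returns the order; note that in a 1-D array the row/column distinction disappears):

```python
import numpy as np

# The orders listed above, expressed as array shapes:
s = np.array(3.14)        # order 0: a scalar
v = np.zeros(4)           # order 1: a vector
A = np.zeros((4, 3))      # order 2: a matrix
T = np.zeros((4, 3, 2))   # order 3: a "cube"
print(s.ndim, v.ndim, A.ndim, T.ndim)  # prints: 0 1 2 3
```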
Tensors are the fundamental blocks that carry information in our mathematical models, and they are composed using several operations to create mathematical graphs in which information can flow (propagate) forward (functional application) and backwards (using the chain rule).
We have seen multidimensional arrays in NumPy. These NumPy objects are also a representation of tensors.
**Side note**: what is a tensor __really__? Tensors are important mathematical objects with applications in multiple domains in mathematics and physics. The term "tensor" comes from the usage of these mathematical objects to describe the stretching of a volume of matter under *tension*. They are central objects of study in a subfield of mathematics known as differential geometry, which deals with the geometry of continuous vector spaces. As a very high-level summary (and as first approximation), tensors are defined as multi-linear "machines" that have a number of slots (their order, a.k.a. rank), taking a number of "column" vectors and "row" vectors *to produce a scalar*. For example, a tensor $\mathbf{A}$ (represented by a matrix with rows and columns that you could write on a sheet of paper) can be thought of having two slots. So when $\mathbf{A}$ acts upon a column vector $\mathbf{v}$ and a row vector $\mathbf{x}$, it returns a scalar:
$$\mathbf{A}(\mathbf{x}, \mathbf{v}) = s$$
If $\mathbf{A}$ only acts on the column vector, for example, the result will be another column tensor $\mathbf{u}$ of one order less than the order of $\mathbf{A}$. Thus, acting on $\mathbf{v}$ is similar to "removing" one of $\mathbf{A}$'s slots:
$$\mathbf{u} = \mathbf{A}(\mathbf{v})$$
The resulting $\mathbf{u}$ can later interact with another row vector to produce a scalar or be used in any other way.
This can be a very powerful way of thinking about tensors, as their slots can guide you when writing code, especially given that PyTorch has a _functional_ approach to modules in which this view is very much highlighted. As we will see below, these simple equations above have a completely straightforward representation in the code. In the end, most of what our models will do is to process the input using this type of functional application so that we end up having a tensor output and a scalar value that measures how good our output is with respect to the real output value in the dataset.
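The slot picture maps directly to code. A NumPy sketch of the two equations above (the same lines work with `torch` tensors and the `@` operator):

```python
import numpy as np

# A matrix A has two "slots": filling one slot with a column vector v
# yields an order-1 result; filling both slots yields a scalar.
A = np.arange(9.0).reshape(3, 3)
v = np.ones((3, 1))     # column vector
x = np.ones((1, 3))     # row vector

u = A @ v               # A(v): one slot filled -> column vector (3x1)
s = x @ A @ v           # A(x, v): both slots filled -> scalar (1x1)
print(u.shape, s.shape)
```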
### Tensor creation
Let's get started with tensors in PyTorch. The framework supports eight different types ([Lapan 2018](#References)):
- 3 float types (16-bit, 32-bit, 64-bit): `torch.FloatTensor` is the class name for the commonly used 32-bit tensor.
- 5 integer types (signed 8-bit, unsigned 8-bit, 16-bit, 32-bit, 64-bit): common tensors of these types are the 8-bit unsigned tensor `torch.ByteTensor` and the 64-bit `torch.LongTensor`.
There are three fundamental ways to create tensors in PyTorch ([Lapan 2018](#References)):
- Call a tensor constructor of a given type, which will create a non-initialized tensor. So we then need to fill this tensor later to be able to use it.
- Call a built-in method in the `torch` module that returns a tensor that is already initialized.
- Use the PyTorch–NumPy bridge.
#### Calling the constructor
Let's first create a 2 x 3 dimensional tensor of the type `float`:
```
t = torch.FloatTensor(2, 3)
print(t)
print(t.size())
```
Note that we specified the dimensions as the arguments to the constructor by passing the numbers directly – and not a list or a tuple, which would have very different outcomes as we will see below! We can always inspect the size of the tensor using the `size()` method.
The constructor method allocates space in memory for this tensor. However, the tensor is *non-initialized*. In order to initialize it, we need to call any of the tensor initialization methods of the basic tensor types. For example, the tensor we just created has a built-in method `zero_()`:
```
t.zero_()
```
The underscore after the method name is important: it means that the operation happens _in place_: the returned object is the same object but now with different content. A very handy way to construct a tensor using the constructor happens when we have available the content we want to put in the tensor in the form of a Python iterable. In this case, we just pass it as the argument to the constructor:
```
torch.FloatTensor([[1, 2, 3], [4, 5, 6]])
```
#### Calling a method in the torch module
A very convenient way to create tensors, in addition to using the constructor method, is to use one of the multiple methods provided in the `torch` module. In particular, the `tensor` method allows us to pass a number or iterable as the argument to get the appropriately typed tensor:
```
tl = torch.tensor([1, 2, 3])
t = torch.tensor([1., 2., 3.])
print("A 64-bit integer tensor: {}, {}".format(tl, tl.type()))
print("A 32-bit float tensor: {}, {}".format(t, t.type()))
```
We can create a similar 2x3 tensor to the one above by using the `torch.zeros()` method, passing a sequence of dimensions to it:
```
t = torch.zeros(2, 3)
print(t)
```
There are many methods for creating tensors. We list some useful ones:
```
t_zeros = torch.zeros_like(t) # zeros_like returns a new tensor
t_ones = torch.ones(2, 3) # creates a tensor with 1s
t_fives = torch.empty(2, 3).fill_(5) # creates a non-initialized tensor and fills it with 5
t_random = torch.rand(2, 3) # creates a uniform random tensor
t_normal = torch.randn(2, 3) # creates a normal random tensor
print(t_zeros)
print(t_ones)
print(t_fives)
print(t_random)
print(t_normal)
```
We now see two important PyTorch paradigms emerging. The _imperative_ approach to performing operations, using _inplace_ methods, is in marked contrast with an additional paradigm also used in PyTorch, the _functional_ approach, where the returned object is a copy of the original object. Both paradigms have their specific use cases as we will see below. The rule of thumb is that _inplace_ methods are faster and don't require extra memory allocation in general, but they can be tricky to understand (keep this in mind regarding the computational graph that we will see below). _Functional_ methods make the code referentially transparent, which is a highly desired property that makes it easier to understand the underlying math, but we rely on the efficiency of the implementation:
```
# creates a new copy of the tensor that is still linked to
# the computational graph (see below)
t1 = torch.clone(t)
assert id(t) != id(t1), 'Functional methods create a new copy of the tensor'
# To create a new _independent_ copy, we do need to detach
# from the graph
t1 = torch.clone(t).detach()
```
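The distinction can be illustrated in NumPy terms as well (PyTorch flags in-place methods with a trailing underscore, e.g. `t.zero_()`; the variable names here are our own):

```python
import numpy as np

# Imperative (in-place) vs. functional (copy-returning) operations:
t = np.ones(3)
t *= 2                # in-place: t itself is modified, no new allocation
u = t * 2             # functional: t is untouched, u is a fresh array
print(t, u)
```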
#### Using the PyTorch–NumPy bridge
A quite useful feature of PyTorch is its almost seamless integration with NumPy, which allows us to perform operations on NumPy and interact from PyTorch with the large number of NumPy libraries as well. Converting a NumPy multi-dimensional array into a PyTorch tensor is very simple: we only need to call the `tensor` method with NumPy objects as the argument:
```
# Create a new multi-dimensional array in NumPy (default datatype np.float64)
a = np.array([1., 2., 3.])
# Convert the array to a torch tensor
t = torch.tensor(a)
print("NumPy array: {}, type: {}".format(a, a.dtype))
print("Torch tensor: {}, type: {}".format(t, t.dtype))
```
We can also seamlessly convert a PyTorch tensor into a NumPy array:
```
t.numpy()
```
**Side note**: why not `torch.from_numpy(a)`? The `from_numpy()` method is deprecated in favor of `tensor()`, which is a more capable method in the torch package. `from_numpy()` is only there for backwards compatibility. It can be a little bit quirky, so I recommend using the newer method in PyTorch >= 0.4.
#### Indexing
Indexing works as expected with NumPy:
```
t = torch.randn(2, 3)
t
t[ : , 0]
```
PyTorch also supports indexing using long tensors, for example:
```
t = torch.randn(5, 6)
print("1. Here is the 5 x 6 tensor:\n", t)
i = torch.tensor([1, 3])
j = torch.tensor([4, 5])
print("\n2. Rows 1 and 3 selected from the tensor above:\n", t[i]) # selects rows 1 and 3
print("\n3. Elements (1, 4) and (3, 5):\n", t[i, j]) # pairs the indices in i and j elementwise
```
#### Type conversion
Each tensor has a set of convenient methods to convert types. For example, if we want to convert the tensor above to a 32-bit float tensor, we use the method `.float()`:
```
t = t.float() # converts to 32-bit float
print(t)
t = t.double() # converts to 64-bit float
print(t)
t = t.byte() # converts to unsigned 8-bit integer
print(t)
```
### Operations on tensors
Now that we know how to create tensors, let's create some of the fundamental tensors and see some common operations on them:
```
# Scalar: creates a zeroth-order tensor,
# i.e. just a number
s = torch.tensor(42)
print(s)
```
**Tip**: a very convenient way to access the value of a scalar is with `.item()`:
```
s.item()
```
Let's see higher-order tensors – remember we can always inspect the dimensionality of a tensor using the `.size()` method:
```
# Row vector
x = torch.randn(1,3)
print("Row vector:\n{}\nwith size: {}\n".format(x, x.size()))
# Column vector
v = torch.randn(3,1)
print("Column vector:\n{}\nwith size: {}\n".format(v, v.size()))
# Matrix
A = torch.randn(3, 3)
print("Matrix:\n{}\nwith size: {}".format(A, A.size()))
```
A common operation is matrix-vector multiplication (and in general tensor-tensor multiplication). For example, the product $\mathbf{A}\mathbf{v} + \mathbf{b}$ is as follows:
```
u = torch.matmul(A, v)
print(u)
b = torch.randn(3,1)
y = u + b # we can also do torch.add(u, b)
print('\n',y)
```
where we retrieve the expected result (a column vector of dimensions 3x1). We can of course compose operations:
```
s = torch.matmul(x, torch.matmul(A, v))
print(s.item())
```
There are many functions implemented for every tensor, and we encourage you to study the documentation. Some of the most common ones:
```
# common tensor methods (they also have the counterpart in
# the torch package, e.g. as torch.sum(t))
t = torch.randn(2,3)
t.sum(dim=0)
t.t() # transpose
t.numel() # number of elements in tensor
t.nonzero() # indices of non-zero elements
t.view(-1, 2) # reorganizes the tensor to these dimensions
t.squeeze() # removes size 1 dimensions
t.unsqueeze(0) # inserts a dimension
# operations in the package
torch.arange(0, 10) # tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
torch.eye(3, 3) # creates a 3x3 matrix with 1s in the diagonal (identity in this case)
t = torch.arange(0, 3)
torch.cat((t, t)) # tensor([0, 1, 2, 0, 1, 2])
torch.stack((t, t)) # tensor([[0, 1, 2],
# [0, 1, 2]])
```
## GPU computation
Deep learning frameworks take advantage of the powerful computational capabilities of modern graphics processing units (GPUs). GPUs were originally designed to perform the operations most frequent in graphics, such as linear algebra, very efficiently, which makes them ideal for our purposes. PyTorch makes it very easy to use the GPU: the common scenario is either to instantiate a tensor with a type that makes it a GPU tensor, or to move a given CPU tensor to the GPU. All the tensors we have seen above are CPU tensors; PyTorch provides their GPU counterparts in the `torch.cuda` module. Let's see how this works.
A common way to explicitly declare the tensor type as a GPU tensor is through the use of the constructor method for tensor creation inside the `torch.cuda` module:
```
try:
    t_gpu = torch.cuda.FloatTensor(3, 3) # creation of a GPU tensor
    t_gpu.zero_() # initialization to zero
except TypeError as err:
    print(err)
```
However, a more flexible and more common approach is through the use of devices. A device in PyTorch refers either to the CPU (indicated by the string `"cpu"`) or to one of the GPU cards in the machine (indicated by the string `"cuda:n"`, where `n` is the index of the card). Let's create a random Gaussian matrix using a method from the `torch` package, and set the computational device to the GPU by specifying `device="cuda:0"`, the first GPU card in our machine (this code will fail if you don't have a GPU, but we will work around that below):
```
try:
    t_gpu = torch.randn(3, 3, device="cuda:0")
except AssertionError as err:
    print(err)
    t_gpu = None
t_gpu
```
As you can notice, the tensor now has the explicit device set to be a CUDA device, not a CPU device. Let's now create a tensor in the CPU and move it to the GPU:
```
# we could also state explicitly the device to be the
# CPU with torch.randn(3,3,device="cpu")
t = torch.randn(3, 3)
t
```
In this case the device is the CPU, but PyTorch does not say so explicitly because it is the default. To copy the tensor to the GPU we use the `.to()` method that every tensor implements, passing the device as an argument. This method creates a copy on the specified device or, if the tensor already resides on that device, returns the original tensor ([Lapan 2018](#References)):
```
try:
    t_gpu = t.to("cuda:0") # copies the tensor from CPU to GPU
    # note that calling t_gpu.to("cuda:0") again would simply
    # return the same tensor, as it already resides on the GPU
    print(t_gpu)
    print(t_gpu.device)
except AssertionError as err:
    print(err)
```
**Tip**: When we program PyTorch models, we will have to specify the device in several places (not many, but definitely more than once). A good practice that keeps the implementation consistent and the code portable is to declare a `device` variable early in the code by querying the framework for an available GPU:
```
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print(device)
```
We can then use `device` as an argument of the `.to()` method in the rest of our code:
```
# moves t to the device (this code will **not** fail if the
# local machine has not access to a GPU)
t.to(device)
```
**Side note**: good GPU backend support is a critical aspect of a deep learning framework; some models depend crucially on performing their computations on a GPU. Most frameworks, including PyTorch, only provide good support for GPUs manufactured by NVIDIA. This is mostly due to the heavy investment that company made in CUDA (Compute Unified Device Architecture), the underlying parallel computing platform that enables this type of scientific computing (and the reason for the device label), with implementations specifically targeting deep neural networks, such as cuDNN. Other GPU manufacturers, most notably AMD, are making efforts towards enabling ML computing on their cards, but their support is still partial.
## Neural network foundations
Computing gradients is a crucial feature in deep learning, given that the training procedure of neural networks relies on optimization techniques that update the parameters of the model by using the gradient information of a scalar magnitude – the loss function. How is it possible to compute the derivatives? There are different methods, namely
- **Symbolic Differentiation**: given a symbolic expression, the software provides the derivative by performing symbolic transformations (e.g. Wolfram Alpha). The benefits are clear, but it is not always possible to compute an analytical expression.
- **Numerical Differentiation**: computes the derivatives using expressions that are suitable to be evaluated numerically, using the finite differences method to several orders of approximation. A big drawback is that these methods are slow.
- **Automatic Differentiation**: the library pairs each of its functional primitives with an implementation of its derivative. Thus, if the library contains the function $\sin(x)$, it also implements its derivative, $\frac{d}{dx}\sin(x) = \cos(x)$. Then, given a composition of functions, the library can compute the derivative with respect to a variable by successive application of the chain rule, a method known in deep learning as backpropagation.
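We can verify the $\sin \to \cos$ example with a minimal sketch in PyTorch:

```python
import torch

x = torch.tensor(1.0, requires_grad=True)
y = torch.sin(x)
y.backward()  # applies the chain rule back to x

# d/dx sin(x) = cos(x)
assert torch.isclose(x.grad, torch.cos(torch.tensor(1.0)))
```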
### Automatic differentiation
Modern deep learning libraries are capable of performing automatic differentiation. The two main approaches to computing the graph are _static_ and _dynamic_ processing ([Lapan 2018](#References)):
- **Static graphs**: the deep learning framework converts the computational graph into a static representation that cannot be modified. This allows the library developers to apply very aggressive optimizations to this static graph ahead of computation time, pruning some areas and transforming others, so that the final product is highly optimized and fast. The drawback is that some models can be really hard to implement with this approach. **TensorFlow 1.x is the best-known example of a framework built on static graphs.**
- **Dynamic graphs**: the framework does not create a graph ahead of computation, but records the operations as they are performed, which can differ for different inputs. When it is time to compute the gradients, it unrolls the recorded graph and performs the computations. A major benefit of this approach is that complex models can be easier to implement. **This flexibility comes at the expense of the major drawback of this approach: speed.** Dynamic graphs cannot leverage the same level of ahead-of-time optimization as static graphs, which makes them slower. *PyTorch uses dynamic graphs as the underlying paradigm for gradient computation.*
Here is a simple graph to compute $y = wx + b$ (from [Rao and McMahan 2019](#References)):
<img src="https://github.com/MikelKN/Standford_NLU_Spring_2021_CS224U/blob/master/fig/simple_computation_graph.png?raw=1" width=500 />
PyTorch computes the graph using the **Autograd system**. Autograd records a graph when performing the forward pass (function application), keeping track of all the tensors defined as inputs. These are the leaves of the graph. The output tensors are the roots of the graph. By navigating this graph from root to leaves, the gradients are automatically computed using the chain rule. In summary,
- The forward pass (the successive function application) goes from leaves to root. In PyTorch we trigger it simply by calling the module on its input.
- Once the forward pass is completed, Autograd has recorded the graph and the backward pass (chain rule) can be run by calling the method `backward()` on the root of the graph.
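As a sketch, here is the $y = wx + b$ graph from the figure above, recorded and differentiated by Autograd:

```python
import torch

w = torch.tensor(2.0, requires_grad=True)  # leaf of the graph
b = torch.tensor(1.0, requires_grad=True)  # leaf of the graph
x = torch.tensor(3.0)                      # plain input, no gradient needed

y = w * x + b   # forward pass: Autograd records the graph
y.backward()    # backward pass from the root y

assert w.grad.item() == 3.0  # dy/dw = x
assert b.grad.item() == 1.0  # dy/db = 1
```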
### Modules
The base implementation for all neural network models in PyTorch is the class `Module` in the package `torch.nn`:
```
import torch.nn as nn
```
All our models subclass this base `nn.Module` class, which provides an interface to important methods used for constructing and working with our models, and which contains sensible initializations for our models. Modules can contain other modules (and usually do).
Let's see a simple, custom implementation of a multi-layer feed forward network. In the example below, our simple mathematical model is
$$\mathbf{y} = \mathbf{U}(f(\mathbf{W}(\mathbf{x})))$$
where $f$ is a non-linear function (a `ReLU`), and it translates directly into a similar expression in PyTorch. To do that, we simply subclass `nn.Module`, register the two affine transformations and the non-linearity, and implement their composition within the `forward` method:
```
class MyCustomModule(nn.Module):
    def __init__(self, n_inputs, n_hidden, n_output_classes):
        # call super to initialize the parent class
        super(MyCustomModule, self).__init__()
        # first affine transformation
        self.W = nn.Linear(n_inputs, n_hidden)
        # non-linearity (here it is also a layer!)
        self.f = nn.ReLU()
        # final affine transformation
        self.U = nn.Linear(n_hidden, n_output_classes)

    def forward(self, x):
        y = self.U(self.f(self.W(x)))
        return y
```
Then, we can use our new module as follows:
```
# set the network's architectural parameters
n_inputs = 3
n_hidden= 4
n_output_classes = 2
# instantiate the model
model = MyCustomModule(n_inputs, n_hidden, n_output_classes)
# create a simple input tensor
# size is [1,3]: a mini-batch of one example,
# this example having dimension 3
x = torch.FloatTensor([[0.3, 0.8, -0.4]])
# compute the model output by **applying** the input to the module
y = model(x)
# inspect the output
print(y)
```
As we see, the output is a tensor with its gradient function attached – Autograd tracks it for us.
**Tip**: modules override the `__call__()` method, where the framework performs some extra bookkeeping. Thus, instead of calling the `forward()` method directly, we apply the input to the model.
### Sequential
A powerful class in the `nn` package is `Sequential`, which allows us to express the code above more succinctly:
```
class MyCustomModule(nn.Module):
    def __init__(self, n_inputs, n_hidden, n_output_classes):
        super(MyCustomModule, self).__init__()
        self.network = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_output_classes))

    def forward(self, x):
        y = self.network(x)
        return y
```
As you can imagine, this can be handy when we have a large number of layers for which the actual names are not that meaningful. It also improves readability:
```
class MyCustomModule(nn.Module):
    def __init__(self, n_inputs, n_hidden, n_output_classes):
        super(MyCustomModule, self).__init__()
        self.p_keep = 0.7
        self.network = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, 2*n_hidden),
            nn.ReLU(),
            nn.Linear(2*n_hidden, n_output_classes),
            # the dropout argument is the probability of dropping
            nn.Dropout(1 - self.p_keep),
            # applies softmax in the data dimension
            nn.Softmax(dim=1)
        )

    def forward(self, x):
        y = self.network(x)
        return y
```
**Side note**: Another important module is `torch.nn.functional`, typically imported as `F`. It **contains many useful functions, from non-linear activations to convolutional, dropout, and even distance functions**. Many of these functions have counterpart implementations as layers in the `nn` package so that they can be easily used in pipelines like the one above built with `nn.Sequential`.
```
import torch.nn.functional as F
y = F.relu(torch.FloatTensor([[-5, -1, 0, 5]]))
y
```
### Criteria and loss functions
PyTorch implements the most common criteria in the `torch.nn` package. You may notice that, as with many other functions, there are two implementations of the loss functions: the reference functions in `torch.nn.functional` and the practical classes in `torch.nn`, which are the ones we typically use. The two most common ones are probably ([Lapan 2018](#References)):
- `nn.MSELoss` (mean squared error): squared $L_2$ norm used for regression.
- `nn.CrossEntropyLoss`: the criterion used for classification, combining `nn.LogSoftmax()` and `nn.NLLLoss()` (negative log likelihood) and operating directly on the raw input scores. When possible, we recommend using this class instead of a softmax layer plus a log conversion and `nn.NLLLoss`, given that the combined `LogSoftmax` implementation guards against common numerical errors, resulting in fewer instabilities.
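This equivalence can be checked directly (a minimal sketch with made-up scores for three classes):

```python
import torch
import torch.nn as nn

scores = torch.tensor([[1.0, 2.0, 0.5]])  # raw scores (logits) for 3 classes
target = torch.tensor([1])                # the true class

# CrossEntropyLoss is LogSoftmax followed by NLLLoss
ce = nn.CrossEntropyLoss()(scores, target)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(scores), target)
assert torch.isclose(ce, nll)
```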
Once our model produces a prediction, we pass it to the criteria to obtain a measure of the loss:
```
# the true label (in this case, class 1) from our dataset wrapped
# as a tensor with a minibatch size of 1
y_gold = torch.tensor([1])
# our simple classification criterion for this simple example
criterion = nn.CrossEntropyLoss()
# forward pass of our model (remember, we call the model rather than forward directly)
y = model(x)
# apply the criterion to get the loss corresponding to the pair (x, y)
# with respect to the real y (y_gold)
loss = criterion(y, y_gold)
# the loss contains a gradient function that we can use to compute
# the gradient dL/dw (gradient with respect to the parameters
# for a given fixed input)
print(loss)
```
### Optimization
Once we have computed the loss for a training example or minibatch of examples, we update the parameters of the model guided by the information in the gradient. Updating the parameters is the role of the optimizer, and PyTorch ships with a number of implementations ready to use; if you don't find your preferred optimizer in the library, chances are an implementation exists elsewhere, and coding your own optimizer is quite easy in PyTorch.
**Side note**: The following is a summary of the most common optimizers, intended to serve as a reference (I use this table myself quite a lot). In practice, most people pick an optimizer that has been proven to behave well in a given domain, but optimizers are also a very active area of research, so it is a good idea to pay some attention to this subfield. **We recommend using second-order dynamics with an adaptive step size**:
- First-order dynamics
    - Search direction only: `optim.SGD`
    - Adaptive: `optim.RMSprop`, `optim.Adagrad`, `optim.Adadelta`
- Second-order dynamics
    - Search direction only: Momentum `optim.SGD(momentum=0.9)`, Nesterov `optim.SGD(nesterov=True)`
    - ***Adaptive***: `optim.Adam`, `optim.Adamax` (Adam with $L_\infty$)
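Whichever optimizer we choose, the usage pattern is the same: zero the gradients, backpropagate, and step. A minimal sketch with a single parameter and a toy quadratic loss:

```python
import torch

w = torch.tensor(5.0, requires_grad=True)
optimizer = torch.optim.SGD([w], lr=0.1)

loss = (w - 3.0) ** 2   # toy loss, minimized at w = 3
optimizer.zero_grad()   # clear any stale gradients
loss.backward()         # dloss/dw = 2*(w - 3) = 4
optimizer.step()        # w <- w - lr * grad = 5 - 0.1*4 = 4.6

assert abs(w.item() - 4.6) < 1e-4
```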
### Training a simple model
In order to illustrate the different concepts and techniques above, let's put them together in a very simple example: our objective will be to fit a very simple non-linear function, a sine wave:
$$y = a \sin(x + \phi)$$
where $a, \phi$ are the given amplitude and phase of the sine function. Our objective is to learn to fit this function with a feed-forward network, that is:
$$ \hat{y} = f(x)$$
such that the error between $y$ and $\hat{y}$ is minimal according to our criterion. A natural criterion is to minimize the squared distance between the actual value of the sine wave and the value predicted by our function approximator, measured using the $L_2$ norm.
**Side Note**: Although this example is easy, simple variations of this setting can pose a big challenge, and are used currently to illustrate difficult problems in learning, especially in a very active subfield known as meta-learning.
Let's import all the modules that we are going to need:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import numpy as np
import matplotlib.pyplot as plt
import math
```
Early in the code, we define the device that we want to use:
```
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
```
Let's fix $a=1$, $\phi=0$ and generate training data in the interval $x \in [0,2\pi)$ using NumPy:
```
M = 1200
# sample from the x axis M points
x = np.random.rand(M) * 2*math.pi
# add noise
eta = np.random.rand(M) * 0.01
# compute the function
y = np.sin(x) + eta
# plot
_ = plt.scatter(x,y)
x_train = torch.tensor(x[0:1000]) # creates a tensor with 1000 rows
x_train[:10] # let's observe the first 10 elements of the tensor
x_train = torch.tensor(x[0:1000]).float() # converts the tensor to 32-bit floats
x_train[:10]
x_train[:10].view(-1, 1) # works a little like np.reshape
# for more info on view(-1), see https://stackoverflow.com/questions/50792316/what-does-1-mean-in-pytorch-view
x_train[:10].view(-1, 1).to(device) # moves a tensor to a device (the CUDA GPU in this case)
# use the NumPy-PyTorch bridge
x_train = torch.tensor(x[0:1000]).float().view(-1, 1).to(device)
y_train = torch.tensor(y[0:1000]).float().view(-1, 1).to(device)
x_test = torch.tensor(x[1000:]).float().view(-1, 1).to(device)
y_test = torch.tensor(y[1000:]).float().view(-1, 1).to(device)
class SineDataset(data.Dataset):
    def __init__(self, x, y):
        super(SineDataset, self).__init__()
        assert x.shape[0] == y.shape[0]
        self.x = x
        self.y = y

    def __len__(self):
        return self.y.shape[0]

    def __getitem__(self, index):
        return self.x[index], self.y[index]
sine_dataset = SineDataset(x_train, y_train)    # the training set
sine_dataset_test = SineDataset(x_test, y_test) # the test set
# DataLoaders represent a Python iterable over a dataset,
# i.e. they feed the data to the model in small chunks (of size 32 here)
sine_loader = torch.utils.data.DataLoader(
sine_dataset, batch_size=32, shuffle=True)
sine_loader_test = torch.utils.data.DataLoader(
sine_dataset_test, batch_size=32)
# Building our Sequential Model
class SineModel(nn.Module):
    def __init__(self):
        super(SineModel, self).__init__()
        self.network = nn.Sequential(
            nn.Linear(1, 5),
            nn.ReLU(),
            nn.Linear(5, 5),
            nn.ReLU(),
            nn.Linear(5, 5),
            nn.ReLU(),
            nn.Linear(5, 1))

    def forward(self, x):
        return self.network(x)
# declare the model
model = SineModel().to(device)
# define the criterion
criterion = nn.MSELoss()
# select the optimizer and pass to it the parameters of the model it will optimize
optimizer = torch.optim.Adam(model.parameters(), lr = 0.01)
epochs = 1000
# training loop
for epoch in range(epochs):
    for i, (x_i, y_i) in enumerate(sine_loader):
        y_hat_i = model(x_i)            # forward pass
        loss = criterion(y_hat_i, y_i)  # compute the loss
        optimizer.zero_grad()           # clean the gradients
        loss.backward()                 # compute the gradients
        optimizer.step()                # update the parameters
    if epoch % 20 == 0:  # plot the current predictions every 20 epochs
        plt.scatter(x_i.data.cpu().numpy(), y_hat_i.data.cpu().numpy())
```
**Side note**:
- `model.eval()` notifies all your layers that you are in eval mode; that way, batchnorm or dropout layers will work in eval mode instead of training mode.
- `torch.no_grad()` deactivates the **autograd engine**. This reduces memory usage and speeds up computations, but you won't be able to backprop (which you don't want in an eval script). It is also the recommended way to perform validation in PyTorch.

[Source](https://discuss.pytorch.org/t/model-eval-vs-with-torch-no-grad/19615)
```
# testing
with torch.no_grad():
    model.eval()
    total_loss = 0.
    for k, (x_k, y_k) in enumerate(sine_loader_test):
        y_hat_k = model(x_k)
        loss_test = criterion(y_hat_k, y_k)
        total_loss += float(loss_test)
    print(total_loss)
print(total_loss)
```
## Reproducibility
```
def enforce_reproducibility(seed=42):
    # sets the seed manually for both CPU and CUDA
    torch.manual_seed(seed)
    # for atomic operations there is currently
    # no simple way to enforce determinism, as
    # the order of parallel operations is not known
    #
    # CUDNN
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # system-based
    np.random.seed(seed)

enforce_reproducibility()
```
The function `utils.fix_random_seeds()` extends the above to the random seeds for NumPy and the Python `random` library.
## References
Lapan, Maxim (2018) *Deep Reinforcement Learning Hands-On*. Birmingham: Packt Publishing
Rao, Delip and Brian McMahan (2019) *Natural Language Processing with PyTorch*. Sebastopol, CA: O'Reilly Media
# Division by right shift
A number can be represented in any base. When we use base ten, we express a number as a combination of powers of 10:
$$35041 = \sum_{i=0}^{\infty}{c_i \cdot 10^i} = 3 \cdot 10^4 + 5 \cdot 10^3 + 0 \cdot 10^2 + 4 \cdot 10^1 + 1 \cdot 10^0$$
and the combination is expressed by coefficients $c_i$ in the range $[0,9]$.
The same can be done for any arbitrary base.
For example, $25_{10} = 221_{3}$:
$$25 = \sum_{i=0}^{\infty}{c_i \cdot 3^i} = 2 \cdot 3^2 + 2 \cdot 3^1 + 1 \cdot 3^0$$
When we represent a number in base 3, the coefficients $c_i$ lie in the range $[0,2]$.
In general:
$$n = \sum_{i=0}^{\infty}{c_i \cdot b^i} \,\, , \,\, c_i \in [0,b-1]$$
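The coefficients $c_i$ can be recovered with repeated division (a small sketch; `to_base` is a helper defined here for illustration):

```python
def to_base(n, b):
    """Return the digits of n in base b, most significant first."""
    digits = []
    while n > 0:
        digits.append(n % b)  # the current coefficient c_i
        n //= b
    return digits[::-1] or [0]

assert to_base(25, 3) == [2, 2, 1]           # 25 = 221 in base 3
assert to_base(44, 2) == [1, 0, 1, 1, 0, 0]  # 44 = 101100 in binary
```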
When we represent a number in _binary_ (*base 2*), the coefficients $c_i$ lie in the range $[0,1]$ and express the combination of the powers $2^i$:
$$44_{10} = 101100_{2} = \sum_{i=0}^{\infty}{c_i \cdot 2^i} = 1 \cdot 2^5 + 0 \cdot 2^4 + 1 \cdot 2^3 + 1 \cdot 2^2 + 0 \cdot 2^1 + 0 \cdot 2^0$$
## Representation of 64-bit integers
The following function prints an integer in binary (64 bits):
```
def printbinary(i):
    print(f'{i:064b}')
```
Let's see an example:
```
n = 2**5 + 2**3 + 2**2
print(n)
printbinary(n)
```
## Division by right shift
A shift of one bit to the right is equivalent to dividing by 2:
```
print(n//2)
print(n >> 1)
printbinary(n//2)
printbinary(n >> 1)
```
A shift of $k$ bits to the right is equivalent to dividing by $2^k$:
```
print(n//4)
print(n >> 2)
printbinary(n//4)
printbinary(n >> 2)
```
## When the right shift stops being equivalent...
In many programming languages (C, C++, ...) there is a _small_ difference between the right shift and division by $2^k$. This is not the case in Python, so we will use shell calls, which do exhibit this behavior:
```
!echo $(( 44 >> 1))
!echo $(( 44 >> 2))
!echo $(( 44 >> 3))
!echo $(( 44 >> 4))
!echo $(( 44 >> 5))
!echo $(( 44 >> 6))
!echo $(( 44 >> 7))
```
The problem arises when the shift amount is greater than or equal to the number of bits (64 in our case):
```
!echo $(( 44 >> 62))
!echo $(( 44 >> 63))
!echo $(( 44 >> 64))
!echo $(( 44 >> 65))
!echo $(( 44 >> 66))
!echo $(( 44 >> 67))
!echo $(( 44 >> 68))
!echo $(( 44 >> 69))
!echo $(( 44 >> 70))
!echo $(( 44 >> 71))
```
## Understanding the divergence
When the shift amount is larger than the size of the operand, the behavior is undefined (the result is not determined by the language specification). On almost all platforms,
the _effective_ shift is the shift amount modulo the operand size `T` (only the low bits of the shift amount are honored):
`a >> b : a >> (b mod T)`
For 64-bit integers:
`a >> b : a >> (b mod 64)`
`a >> 64 : a >> 0`
`a >> 65 : a >> 1`
`a >> 66 : a >> 2`
Hence
$$a >> k \ne a / 2^k \,\, , \,\, k > 63$$
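We can simulate this typical platform behavior in Python (a sketch; `hw_shift_right_64` is a helper defined here for illustration, not part of any standard library):

```python
def hw_shift_right_64(a, k):
    # typical 64-bit hardware only looks at the low 6 bits of the
    # shift amount, i.e. it effectively shifts by k mod 64
    return a >> (k % 64)

assert hw_shift_right_64(44, 1) == 22   # matches the exact division
assert hw_shift_right_64(44, 64) == 44  # 64 mod 64 = 0: no shift!
assert hw_shift_right_64(44, 65) == 22  # 65 mod 64 = 1
assert 44 >> 65 == 0                    # Python itself shifts exactly
```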
# Support Vector Machines
Support vector machines (SVMs) are a particularly powerful and flexible class of supervised algorithms for both classification and regression. In this section, we will develop the intuition behind support vector machines and their use in classification problems.
We begin with the standard imports:
```
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
# use seaborn plotting defaults
import seaborn as sns; sns.set()
```
## Motivating Support Vector Machines
As part of our discussion of Bayesian classification (see In Depth: Naive Bayes Classification), we learned a simple model describing the distribution of each underlying class, and used these generative models to probabilistically determine labels for new points. That was an example of generative classification; here we will consider instead discriminative classification: rather than modeling each class, we simply find a line or curve (in two dimensions) or manifold (in multiple dimensions) that divides the classes from each other.
As an example of this, consider the simple case of a classification task, in which the two classes of points are well separated:
```
from sklearn.datasets import make_blobs
X, y = make_blobs(n_samples=50, centers=2,
random_state=0, cluster_std=0.60)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
```
A linear discriminative classifier would attempt to draw a straight line separating the two sets of data, and thereby create a model for classification. For two dimensional data like that shown here, this is a task we could do by hand. But immediately we see a problem: there is more than one possible dividing line that can perfectly discriminate between the two classes!
We can draw them as follows:
```
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plt.plot([0.6], [2.1], 'x', color='red', markeredgewidth=2, markersize=10)
for m, b in [(1, 0.65), (0.5, 1.6), (-0.2, 2.9)]:
    plt.plot(xfit, m * xfit + b, '-k')
plt.xlim(-1, 3.5);
```
These are three very different separators which, nevertheless, perfectly discriminate between these samples. Depending on which you choose, a new data point (e.g., the one marked by the "X" in this plot) will be assigned a different label! Evidently our simple intuition of "drawing a line between classes" is not enough, and we need to think a bit deeper.
## Support Vector Machines: Maximizing the Margin
Support vector machines offer one way to improve on this. The intuition is this: rather than simply drawing a zero-width line between the classes, we can draw around each line a margin of some width, up to the nearest point. Here is an example of how this might look:
```
xfit = np.linspace(-1, 3.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
for m, b, d in [(1, 0.65, 0.33), (0.5, 1.6, 0.55), (-0.2, 2.9, 0.2)]:
    yfit = m * xfit + b
    plt.plot(xfit, yfit, '-k')
    plt.fill_between(xfit, yfit - d, yfit + d, edgecolor='none',
                     color='#AAAAAA', alpha=0.4)
plt.xlim(-1, 3.5);
```
In support vector machines, the line that maximizes this margin is the one we will choose as the optimal model. Support vector machines are an example of such a maximum margin estimator.
## Fitting the Support Vector Machine
Let's see the result of an actual fit to this data: we will use Scikit-Learn's support vector classifier to train an SVM model on this data. For the time being, we will use a linear kernel and set the `C` parameter to a very large number (we'll discuss the meaning of these in more depth momentarily).
```
from sklearn.svm import SVC # "Support vector classifier"
model = SVC(kernel='linear', C=1E10)
model.fit(X, y)
```
To better visualize what is going on, let's create a function that plots the decision function for us.
```
def plot_svc_decision_function(model, ax=None, plot_support=True):
    """Plot the decision function for a 2D SVC"""
    if ax is None:
        ax = plt.gca()
    xlim = ax.get_xlim()
    ylim = ax.get_ylim()
    # create grid to evaluate model
    x = np.linspace(xlim[0], xlim[1], 30)
    y = np.linspace(ylim[0], ylim[1], 30)
    Y, X = np.meshgrid(y, x)
    xy = np.vstack([X.ravel(), Y.ravel()]).T
    P = model.decision_function(xy).reshape(X.shape)
    # plot decision boundary and margins
    ax.contour(X, Y, P, colors='k',
               levels=[-1, 0, 1], alpha=0.5,
               linestyles=['--', '-', '--'])
    # plot support vectors
    if plot_support:
        ax.scatter(model.support_vectors_[:, 0],
                   model.support_vectors_[:, 1],
                   s=300, linewidth=1, facecolors='none')
    ax.set_xlim(xlim)
    ax.set_ylim(ylim)

plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model);
```
This is the dividing line that maximizes the margin between the two sets of points. Notice that a few of the training points just touch the margin: they are indicated by the black circles in this figure. These points are the pivotal elements of this fit; they are known as the support vectors, and they give the algorithm its name. In Scikit-Learn, the identities of these points are stored in the `support_vectors_` attribute of the classifier:
```
model.support_vectors_
```
A key to this classifier's success is that for the fit, only the position of the support vectors matter; any points further from the margin which are on the correct side do not modify the fit! Technically, this is because these points do not contribute to the loss function used to fit the model, so their position and number do not matter so long as they do not cross the margin.
We can see this, for example, if we plot the model learned from the first 60 points and first 120 points of this dataset:
```
def plot_svm(N=10, ax=None):
    X, y = make_blobs(n_samples=200, centers=2,
                      random_state=0, cluster_std=0.60)
    X = X[:N]
    y = y[:N]
    model = SVC(kernel='linear', C=1E10)
    model.fit(X, y)
    ax = ax or plt.gca()
    ax.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
    ax.set_xlim(-1, 4)
    ax.set_ylim(-1, 6)
    plot_svc_decision_function(model, ax)

fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for axi, N in zip(ax, [60, 120]):
    plot_svm(N, axi)
    axi.set_title('N = {0}'.format(N))

from ipywidgets import interact, fixed
interact(plot_svm, N=[10, 200], ax=fixed(None));
```
# Beyond Linear Boundaries: Kernel SVMs
Where SVM becomes extremely powerful is when it is combined with kernels. We have seen a version of kernels before, in the basis function regressions of In Depth: Linear Regression. There we projected our data into higher-dimensional space defined by polynomials and Gaussian basis functions, and thereby were able to fit for nonlinear relationships with a linear classifier.
In SVM models, we can use a version of the same idea. To motivate the need for kernels, let's look at some data that is not linearly separable:
```
from sklearn.datasets import make_circles
X, y = make_circles(100, factor=.1, noise=.1)
clf = SVC(kernel='linear').fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf, plot_support=False);
```
It is clear that no linear discrimination will ever be able to separate this data. But we can draw a lesson from the basis function regressions in In Depth: Linear Regression, and think about how we might project the data into a higher dimension such that a linear separator would be sufficient. For example, one simple projection we could use would be to compute a radial basis function centered on the middle clump:
```
r = np.exp(-(X ** 2).sum(1))
from mpl_toolkits import mplot3d
def plot_3D(elev=30, azim=30, X=X, y=y):
ax = plt.subplot(projection='3d')
ax.scatter3D(X[:, 0], X[:, 1], r, c=y, s=50, cmap='autumn')
ax.view_init(elev=elev, azim=azim)
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('r')
interact(plot_3D, elev=[-90, 90], azim=(-180, 180),
X=fixed(X), y=fixed(y));
```
We can see that with this additional dimension, the data becomes trivially linearly separable, by drawing a separating plane at, say, r=0.7.
Here we had to choose and carefully tune our projection: if we had not centered our radial basis function in the right location, we would not have seen such clean, linearly separable results. In general, the need to make such a choice is a problem: we would like to somehow automatically find the best basis functions to use.
One strategy to this end is to compute a basis function centered at every point in the dataset, and let the SVM algorithm sift through the results. This type of basis function transformation is known as a kernel transformation, as it is based on a similarity relationship (or kernel) between each pair of points.
A potential problem with this strategy—projecting $N$ points into $N$ dimensions—is that it might become very computationally intensive as $N$ grows large. However, because of a neat little procedure known as the kernel trick, a fit on kernel-transformed data can be done implicitly—that is, without ever building the full $N$-dimensional representation of the kernel projection! This kernel trick is built into the SVM, and is one of the reasons the method is so powerful.
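As a plain-NumPy illustration (not how scikit-learn implements it), the "basis function centered at every point" construction above produces exactly the $N \times N$ matrix of pairwise RBF similarities — the kernel matrix that the kernel trick lets the SVM work with directly, without any explicit feature map:

```python
import numpy as np

def rbf_feature_map(X, centers, gamma=1.0):
    # Explicit projection: one Gaussian basis function per center,
    # giving an (n_samples, n_centers) design matrix.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def rbf_kernel_matrix(X, gamma=1.0):
    # The same pairwise similarities, computed directly as a kernel:
    # K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))

# Centering one basis function at every data point makes the explicit
# N-points-into-N-dimensions projection coincide with the kernel matrix.
Phi = rbf_feature_map(X, X)
K = rbf_kernel_matrix(X)
print(np.allclose(Phi, K))  # True
```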
In Scikit-Learn, we can apply kernelized SVM simply by changing our linear kernel to an RBF (radial basis function) kernel, using the kernel model hyperparameter:
```
clf = SVC(kernel='rbf', C=1E6)
clf.fit(X, y)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(clf)
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
```
Using this kernelized support vector machine, we learn a suitable nonlinear decision boundary. This kernel transformation strategy is used often in machine learning to turn fast linear methods into fast nonlinear methods, especially for models in which the kernel trick can be used.
## Tuning the SVM: Softening Margins
Our discussion thus far has centered around very clean datasets, in which a perfect decision boundary exists. But what if your data has some amount of overlap? For example, you may have data like this:
```
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=1.2)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn');
```
To handle this case, the SVM implementation has a bit of a fudge-factor which "softens" the margin: that is, it allows some of the points to creep into the margin if that allows a better fit. The hardness of the margin is controlled by a tuning parameter, most often known as $C$. For very large $C$, the margin is hard, and points cannot lie in it. For smaller $C$, the margin is softer, and can grow to encompass some points.
The plot shown below gives a visual picture of how a changing $C$ parameter affects the final fit, via the softening of the margin:
```
X, y = make_blobs(n_samples=100, centers=2,
random_state=0, cluster_std=0.8)
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for axi, C in zip(ax, [10.0, 0.1]):
model = SVC(kernel='linear', C=C).fit(X, y)
axi.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap='autumn')
plot_svc_decision_function(model, axi)
axi.scatter(model.support_vectors_[:, 0],
model.support_vectors_[:, 1],
s=300, lw=1, facecolors='none');
axi.set_title('C = {0:.1f}'.format(C), size=14)
```
The optimal value of the $C$ parameter will depend on your dataset, and should be tuned using cross-validation or a similar procedure (refer back to Hyperparameters and Model Validation).
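As a sketch of that tuning step (the dataset and grid values here are illustrative), we can cross-validate $C$ over a logarithmic grid:

```python
from sklearn.datasets import make_blobs
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2,
                  random_state=0, cluster_std=1.2)

# Search C over several orders of magnitude; the best value is data-dependent.
grid = GridSearchCV(SVC(kernel='linear'),
                    {'C': [0.01, 0.1, 1, 10, 100]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```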
# Example: Face Recognition
As an example of support vector machines in action, let's take a look at the facial recognition problem. We will use the Labeled Faces in the Wild dataset, which consists of several thousand collated photos of various public figures. A fetcher for the dataset is built into Scikit-Learn:
```
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
```
Let's plot a few of these faces to see what we are working with:
```
fig, ax = plt.subplots(3, 5)
for i, axi in enumerate(ax.flat):
axi.imshow(faces.images[i], cmap='bone')
axi.set(xticks=[], yticks=[],
xlabel=faces.target_names[faces.target[i]])
```
Each image contains [62×47] or nearly 3,000 pixels. We could proceed by simply using each pixel value as a feature, but often it is more effective to use some sort of preprocessor to extract more meaningful features; here we will use a principal component analysis (see In Depth: Principal Component Analysis) to extract 150 fundamental components to feed into our support vector machine classifier. We can do this most straightforwardly by packaging the preprocessor and the classifier into a single pipeline:
```
from sklearn.svm import SVC
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
pca = PCA(n_components=150, whiten=True, svd_solver='randomized', random_state=42)
svc = SVC(kernel='rbf', class_weight='balanced')
model = make_pipeline(pca, svc)
```
For the sake of testing our classifier output, we will split the data into a training and testing set:
```
from sklearn.model_selection import train_test_split
Xtrain, Xtest, ytrain, ytest = train_test_split(faces.data, faces.target,
random_state=42)
```
Finally, we can use a grid search cross-validation to explore combinations of parameters. Here we will adjust C (which controls the margin hardness) and gamma (which controls the size of the radial basis function kernel), and determine the best model:
```
from sklearn.model_selection import GridSearchCV
param_grid = {'svc__C': [1, 5, 10, 50],
'svc__gamma': [0.0001, 0.0005, 0.001, 0.005]}
grid = GridSearchCV(model, param_grid)
%time grid.fit(Xtrain, ytrain)
print(grid.best_params_)
```
The optimal values fall toward the middle of our grid; if they fell at the edges, we would want to expand the grid to make sure we have found the true optimum.
Now with this cross-validated model, we can predict the labels for the test data, which the model has not yet seen:
```
model = grid.best_estimator_
yfit = model.predict(Xtest)
```
Let's take a look at a few of the test images along with their predicted values:
```
fig, ax = plt.subplots(4, 6)
for i, axi in enumerate(ax.flat):
axi.imshow(Xtest[i].reshape(62, 47), cmap='bone')
axi.set(xticks=[], yticks=[])
axi.set_ylabel(faces.target_names[yfit[i]].split()[-1],
color='black' if yfit[i] == ytest[i] else 'red')
fig.suptitle('Predicted Names; Incorrect Labels in Red', size=14);
```
Out of this small sample, our optimal estimator mislabeled only a single face (Bush’s face in the bottom row was mislabeled as Blair). We can get a better sense of our estimator's performance using the classification report, which lists recovery statistics label by label:
```
from sklearn.metrics import classification_report
print(classification_report(ytest, yfit,
target_names=faces.target_names))
```
We might also display the confusion matrix between these classes:
```
from sklearn.metrics import confusion_matrix
import seaborn as sns
mat = confusion_matrix(ytest, yfit)
sns.heatmap(mat.T, square=True, annot=True, fmt='d', cbar=False,
xticklabels=faces.target_names,
yticklabels=faces.target_names)
plt.xlabel('true label')
plt.ylabel('predicted label');
```
This helps us get a sense of which labels are likely to be confused by the estimator.
For a real-world facial recognition task, in which the photos do not come pre-cropped into nice grids, the only difference in the facial classification scheme is the feature selection: you would need to use a more sophisticated algorithm to find the faces, and extract features that are independent of the pixellation. For this kind of application, one good option is to make use of OpenCV, which, among other things, includes pre-trained implementations of state-of-the-art feature extraction tools for images in general and faces in particular.
# Support Vector Machine Summary
We have seen here a brief intuitive introduction to the principles behind support vector machines. These are powerful classifiers for a number of reasons:
- Their dependence on relatively few support vectors means that they are very compact models, and take up very little memory.
- Once the model is trained, the prediction phase is very fast.
- Because they are affected only by points near the margin, they work well with high-dimensional data—even data with more dimensions than samples, which is a challenging regime for other algorithms.
- Their integration with kernel methods makes them very versatile, able to adapt to many types of data.
However, SVMs have several disadvantages as well:
- The scaling with the number of samples $N$ is $\mathcal{O}[N^3]$ at worst, or $\mathcal{O}[N^2]$ for efficient implementations. For large numbers of training samples, this computational cost can be prohibitive.
- The results are strongly dependent on a suitable choice for the softening parameter $C$. This must be carefully chosen via cross-validation, which can be expensive as datasets grow in size.
- The results do not have a direct probabilistic interpretation. This can be estimated via an internal cross-validation (see the probability parameter of SVC), but this extra estimation is costly.
With those traits in mind, I generally only turn to SVMs once other simpler, faster, and less tuning-intensive methods have been shown to be insufficient for my needs. Nevertheless, if you have the CPU cycles to commit to training and cross-validating an SVM on your data, the method can lead to excellent results.
# Gap Framework - Natural Language Processing
## Syntax Module
<b>[GitHub](https://github.com/andrewferlitsch/gap)</b>
# Document Preparation for NLP with Gap (Session 2)
Let's dig deeper into the basics. We will be using the <b style='color: saddlebrown'>SYNTAX</b> module in the **Gap** framework.
## <span style='color: saddlebrown'>Words</span> Object
Let's directly use the <b style='color: saddlebrown'>Words</b> object to control how the text is NLP preprocessed. We will cover the following:
- Syntax Preprocessing
- Text Reduction (Stopwords)
- Named Entity Recognition
- Parts of Speech Tagging
- De-Identification
- Measurement Extraction
```
import os
os.chdir("../")
!cd
#!ls #on linux
# import the Words class
from gapml.syntax import Words
```
### Syntax Preprocessing
The <b style='color: saddlebrown'>SYNTAX</b> module supports various keyword parameters to configure how the text is NLP preprocessed. We will cover just a few in this code-along. When the text is preprocessed, an ordered sequential list of <b style='color: saddlebrown'>Word</b> objects is generated, each consisting of a set of key/value pairs.
In *bare* mode, all the text and punctuation is preserved, and no tagging, parts of speech (POS), stemming, lemmatization, named entity recognition (NER) or stopword removal is performed.
#### Bare
Let's look at the preprocessing of a simple sentence in *bare* mode.
```
# Process this well-known typing phrase which contains all 26 letters of the alphabet
w = Words('The quick brown fox jumped over the lazy dog.', bare=True)
print(w.words)
```
As you can see, the *words* property displays a list, where each entry is an object consisting of a word and tag key value pair. I know you don't know what the integer values of the tags mean (see Vocabulary.py). In bare mode, all words are tagged as UNTAGGED (0) and punctuation as PUNCT (23).
Note how in bare mode, all words are kept, their capitalization, order and punctuation.
#### Stopwords and Stemming
Let's do some text reduction. In NLP, a lot of things add very little to the understanding of the text, such as common words like 'the', 'and', 'a', and punctuation. Removing these common words is called stopword removal. There are several lists for doing this, the most common being the *Porter* list.
Additionally, we can make it easier to match words if we lowercase all the words and remove word endings, such as plurals and 'ing'; this is called stemming. Let's give it a try with the same sentence.
Note how words like 'the', and 'over' have been removed, the punctuation has been removed, words have been lowercased and 'jumped' has been stemmed to its root word 'jump'.
```
# Stem words using the NLTK Porter stemmer
w = Words('The quick brown fox jumped over the lazy dog.', stem='porter')
print(w.words)
```
Stemmers sometimes reduce words to something that isn't the root: 'riding' could end up as 'rid' after cutting off 'ing'. Note above how the *NLTK Porter* stemmer changed 'lazy' into 'lazi'.
Different stemmers make different errors. These can be corrected using lemmatization. Let's repeat the above, but use the **Gap** stemmer, which has a lemmatizer correction.
```
# Stem words using the Gap stemmer
w = Words('The quick brown fox jumped over the lazy dog.', stem='gap')
print(w.words)
```
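To see what stemming is doing (and why it can over-cut, as with 'lazy' → 'lazi'), here is a deliberately naive suffix-stripping sketch — an illustration only, not the actual Porter algorithm used by NLTK or Gap:

```python
# A toy suffix-stripping "stemmer". Real stemmers like Porter apply many
# ordered, condition-guarded rules; this simplification shows the idea and
# the kind of over-stemming errors ('running' -> 'runn') the text describes.
def toy_stem(word):
    word = word.lower()
    for suffix in ('ing', 'ed', 'es', 's'):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: -len(suffix)]
    return word

print([toy_stem(w) for w in ['jumped', 'dogs', 'running', 'quick']])
# ['jump', 'dog', 'runn', 'quick']
```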
#### Gender Recognition
The <b style='color: saddlebrown'>Words</b> object will also recognize gender specific words. We will preprocess four different ways of saying 'father'. In each case, the tag will be set to MALE (15) and each word will be replaced (reduced) with its common equivalent 'father'.
```
# Let's recognize various forms of father
w = Words("dad daddy father papa", gender=True)
w.words
```
Let's now try a variety of words indicating the gender FEMALE (16). Note how 'mom' and 'mother' got reduced to the common equivalent 'mother', and the slang 'auntie' and 'sis' got reduced to 'aunt' and 'sister', respectively.
```
w = Words("girl lady mother mom auntie sis", gender=True)
w.words
```
#### NER (Named Entity Recognition)
The <b style='color: saddlebrown'>SYNTAX</b> module will recognize a wide variety of proper names, places and identification, such as a person's name (11), a social security number (9), a title (33), and a geographic location.
```
# Let's look at a string with a name, social security number, and title.
w = Words("Patient: Jim Jones, SSN: 123-12-1234. Dr. Nancy Lou", stopwords=True)
# Let's print the word list. Note that jim and jones are tagged 11 (Proper Name), 123121234 is tagged 9 (SSN), and
# Dr is tagged 33 (Title)
w.words
```
Let's now try an address. In this example we recognized (tagged) a street number (27), street direction (28), street name (29), street type (30), a secondary address unit (36), a city (31), a state (32) and a postal code (34).
Both US and Canadian street and postal addresses are recognized. Note how the state name "Oregon" got replaced with its ISO international standard code.
```
w = Words("124 NE Main Ave, Apt #6, Portland, OR 97221", address=True)
w.words
```
#### De-Identification
The <b style='color: saddlebrown'>SYNTAX</b> module supports de-identification of the text. One can remove names, dates of birth, gender, social security number, telephone numbers and addresses.
```
# Let's remove any names and SSN from our text
w = Words("Patient: Jim Jones, SSN: 123-12-1234", name=False, ssn=False)
w.words
```
#### Measurements
The <b style='color: saddlebrown'>SYNTAX</b> module supports extracting measurement units, such as height, weight, speed, volume and quantity (38). You can also configure to convert measurements (25) to Standard or Metric system. A wide variety of acronyms and formats are recognized. Note that numbers are tagged as 1.
```
# Let's do height using ' for foot and " for inches
w = Words("Height: 5'7\"", stopwords=True)
w.words
# Let's do height using the acronym ft and in.
w = Words("Height: 5 ft 7 in", stopwords=True)
w.words
# Let's do height using the acronym ft and in, with no space between the value and unit
w = Words("Height: 5ft 7in", stopwords=True)
w.words
# Let's do an example in Standard and convert to Metric system.
w = Words("Weight is 120lbs", stopwords=True, metric=True)
w.words
```
## THAT'S ALL FOR SESSION 2
Look forward to seeing everyone again on session 3 where we will do some data preparation for computer vision.
```
import csv
import os.path
import configparser as ConfigParser
import pandas as pd
import numpy as np
import requests
import csv
import urllib.parse
```
# Read Files
```
# These should be the full file paths or at least relative to the cwd
file_cfg = ConfigParser.ConfigParser()
file_cfg.read('C:\\Users\\asd\\python-workspace\\zucchini\\config\\zucchini.ini')
main_dir = os.path.dirname(os.getcwd())
param_file = os.path.join(main_dir, file_cfg.get("Input", "input_file"))
param_file
main_dir
mapping_file = os.path.join(main_dir, file_cfg.get("Input", "mapping_file"))
cols_params = ['Parameter', 'Abbreviation','Unit','ID parameter']
df_mapping = pd.read_csv(mapping_file, header=None, delimiter="\t", encoding='ISO-8859-1',names = ["NativeUOM", "UCUM"])
df_mapping.head()
df_mapping.info()
df_mapping.nunique()
pd.concat(g for _, g in df_mapping.groupby("UCUM") if len(g) > 1).head()
df = pd.read_csv(param_file, header=0, usecols=cols_params, delimiter="\t", encoding='utf-8')
df.head()
df.info()
#remove lines with no parameter
df = df[df.Parameter != '-']
df.head(2)
'nan' in df.Unit.values  # checks only for the literal string 'nan'; true missing values are checked with isnull() below
df.info()
#List unique values each cols
df.nunique()
df.Unit.unique()[:20]
df.Unit.isnull().values.any()
list_of_uom= df.Unit.dropna().unique()
df["QuantityKind"] = np.nan
df.head(2)
```
# Clean UoM based on the mapping file
```
matched_uom = []
def validate_units(uom):
valid_uom = np.nan
new_uom = df_mapping.loc[df_mapping['NativeUOM'] == uom, 'UCUM']
if not new_uom.empty:
valid_uom = new_uom.iloc[0]
matched_uom.append(uom)
return valid_uom
for uom_ori in list_of_uom:
uom_ucum = validate_units(uom_ori)
if uom_ucum is not None:
df.loc[df.Unit == uom_ori, 'Unit'] = uom_ucum
total_exist_mapping = df_mapping.NativeUOM.nunique()
print('Total existing mappings: ', total_exist_mapping)
print('Total matched units: ',len(matched_uom), len(set(matched_uom)))
print('unused uom mappings:')
list(set(df_mapping.NativeUOM.unique()) - set(matched_uom))
df.head(3)
```
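The per-unit loop above works, but the same substitution can be sketched as a single vectorized pass with `Series.map`. The toy frames below stand in for `df_mapping` and `df`; only the column names are taken from the real data:

```python
import pandas as pd

# Toy stand-ins for the real mapping and parameter frames (illustrative values).
mapping = pd.DataFrame({'NativeUOM': ['deg C', 'percent', 'mg/l'],
                        'UCUM': ['Cel', '%', 'mg/L']})
data = pd.DataFrame({'Unit': ['deg C', 'percent', 'fathom']})

# One vectorized pass: map each unit through a NativeUOM -> UCUM lookup;
# units with no mapping fall back to their original spelling.
lookup = mapping.set_index('NativeUOM')['UCUM']
data['Unit'] = data['Unit'].map(lookup).fillna(data['Unit'])
print(data['Unit'].tolist())  # ['Cel', '%', 'fathom']
```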
# Update Quantity Kind
```
uom_not_found =[]
uom_empty_result =[]
quantity_null=[]
list_of_uom_updated= df.Unit.dropna().unique()
for u in list_of_uom_updated:
q = 'http://dataportals.pangaea.de/test/ucum/?ucum='+ str(u)
resp = requests.get(q)
if '"input":' in resp.text:
data = resp.json()
if 'error:' in resp.text:
uom_not_found.append(u)
else:
if 'Quantities' in data:
quantity = data['Quantities']
else:
quantity = data['quantities']
if(quantity):
quantity_str = ','.join(quantity)
df.loc[df.Unit == u, 'QuantityKind'] = quantity_str
else:
quantity_null.append(u)
else:
uom_empty_result.append(u)
len(uom_not_found),len(set(uom_not_found)),uom_not_found[:6]
len(uom_empty_result),uom_empty_result[:6]
len(quantity_null), quantity_null[:6]
df.head(5)
df.count()
len(df.QuantityKind.dropna().unique())
def write_data_tofile(datalist, filename):
with open(filename,'w') as outfile:
for entries in datalist:
outfile.write(entries)
outfile.write("\n")
write_data_tofile(uom_not_found,os.path.join(main_dir, 'output\\uom_not_found.csv'))
write_data_tofile(uom_empty_result,os.path.join(main_dir, 'output\\uom_empty_result.csv'))
write_data_tofile(quantity_null,os.path.join(main_dir, 'output\\quantity_null.csv'))
```
# Validate Units with UCUM Service
```
ucum_compliant_uom =[]
ucum_non_compliant_uom =[]
ucum_rh = df_mapping.UCUM.dropna().unique()
for u in ucum_rh:
#quoting special characters and appropriately encoding non-ASCII text
unit_encoded = urllib.parse.quote(u)
q = 'https://ucum.nlm.nih.gov/ucum-service/v1/isValidUCUM/'+ unit_encoded
resp = requests.get(q)
if resp.status_code == 200 and resp.text == 'true':
ucum_compliant_uom.append(u)
else:
ucum_non_compliant_uom.append(u)
print(len(set(ucum_compliant_uom)),len(ucum_compliant_uom))
print(len(set(ucum_non_compliant_uom)),len(ucum_non_compliant_uom))
ucum_non_compliant_uom[:10]
```
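The `urllib.parse.quote` call above is what makes unit strings safe to embed in the request path; a quick illustration with hypothetical unit strings:

```python
import urllib.parse

# quote() percent-encodes characters that are unsafe in a URL path. Note that
# '/' is left alone by default (safe='/'), which suits unit codes like 'm/s'.
print(urllib.parse.quote('%'))        # '%25'
print(urllib.parse.quote('deg C'))    # 'deg%20C'
print(urllib.parse.quote('m/s'))      # 'm/s'
```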
# Elastic Query
# Planar data classification with one hidden layer
Welcome to your week 3 programming assignment. It's time to build your first neural network, which will have a hidden layer. You will see a big difference between this model and the one you implemented using logistic regression.
**You will learn how to:**
- Implement a 2-class classification neural network with a single hidden layer
- Use units with a non-linear activation function, such as tanh
- Compute the cross entropy loss
- Implement forward and backward propagation
## 1 - Packages ##
Let's first import all the packages that you will need during this assignment.
- [numpy](http://www.numpy.org) is the fundamental package for scientific computing with Python.
- [sklearn](http://scikit-learn.org/stable/) provides simple and efficient tools for data mining and data analysis.
- [matplotlib](http://matplotlib.org) is a library for plotting graphs in Python.
- testCases provides some test examples to assess the correctness of your functions
- planar_utils provide various useful functions used in this assignment
```
# Package imports
import numpy as np
import matplotlib.pyplot as plt
from testCases import *
import sklearn
import sklearn.datasets
import sklearn.linear_model
from planar_utils import plot_decision_boundary, sigmoid, load_planar_dataset, load_extra_datasets
%matplotlib inline
np.random.seed(1) # set a seed so that the results are consistent
```
## 2 - Dataset ##
First, let's get the dataset you will work on. The following code will load a "flower" 2-class dataset into variables `X` and `Y`.
```
X, Y = load_planar_dataset()
```
Visualize the dataset using matplotlib. The data looks like a "flower" with some red (label y=0) and some blue (y=1) points. Your goal is to build a model to fit this data.
```
# Visualize the data:
plt.scatter(X[0, :], X[1, :], c=np.squeeze(Y), s=40, cmap=plt.cm.Spectral);
```
You have:
- a numpy-array (matrix) X that contains your features (x1, x2)
- a numpy-array (vector) Y that contains your labels (red:0, blue:1).
Let's first get a better sense of what our data looks like.
**Exercise**: How many training examples do you have? In addition, what is the `shape` of the variables `X` and `Y`?
**Hint**: How do you get the shape of a numpy array? [(help)](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html)
```
### START CODE HERE ### (≈ 3 lines of code)
shape_X = X.shape
shape_Y = Y.shape
m = shape_X[1] # training set size
### END CODE HERE ###
print ('The shape of X is: ' + str(shape_X))
print ('The shape of Y is: ' + str(shape_Y))
print ('I have m = %d training examples!' % (m))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**shape of X**</td>
<td> (2, 400) </td>
</tr>
<tr>
<td>**shape of Y**</td>
<td>(1, 400) </td>
</tr>
<tr>
<td>**m**</td>
<td> 400 </td>
</tr>
</table>
## 3 - Simple Logistic Regression
Before building a full neural network, lets first see how logistic regression performs on this problem. You can use sklearn's built-in functions to do that. Run the code below to train a logistic regression classifier on the dataset.
```
# Train the logistic regression classifier
clf = sklearn.linear_model.LogisticRegressionCV(cv=5);
clf.fit(X.T, Y.T.ravel());
```
You can now plot the decision boundary of these models. Run the code below.
```
# Plot the decision boundary for logistic regression
plot_decision_boundary(lambda x: clf.predict(x), X, Y)
plt.title("Logistic Regression")
# Print accuracy
LR_predictions = clf.predict(X.T)
print ('Accuracy of logistic regression: %d ' % float((np.dot(Y,LR_predictions) + np.dot(1-Y,1-LR_predictions))/float(Y.size)*100) +
'% ' + "(percentage of correctly labelled datapoints)")
```
**Expected Output**:
<table style="width:20%">
<tr>
<td>**Accuracy**</td>
<td> 47% </td>
</tr>
</table>
**Interpretation**: The dataset is not linearly separable, so logistic regression doesn't perform well. Hopefully a neural network will do better. Let's try this now!
## 4 - Neural Network model
Logistic regression did not work well on the "flower dataset". You are going to train a Neural Network with a single hidden layer.
**Here is our model**:
<img src="images/classification_kiank.png" style="width:600px;height:300px;">
**Mathematically**:
For one example $x^{(i)}$:
$$z^{[1] (i)} = W^{[1]} x^{(i)} + b^{[1] (i)}\tag{1}$$
$$a^{[1] (i)} = \tanh(z^{[1] (i)})\tag{2}$$
$$z^{[2] (i)} = W^{[2]} a^{[1] (i)} + b^{[2] (i)}\tag{3}$$
$$\hat{y}^{(i)} = a^{[2] (i)} = \sigma(z^{ [2] (i)})\tag{4}$$
$$y^{(i)}_{prediction} = \begin{cases} 1 & \mbox{if } a^{[2](i)} > 0.5 \\ 0 & \mbox{otherwise } \end{cases}\tag{5}$$
Given the predictions on all the examples, you can also compute the cost $J$ as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large\left(\small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large \right) \small \tag{6}$$
**Reminder**: The general methodology to build a Neural Network is to:
1. Define the neural network structure ( # of input units, # of hidden units, etc).
2. Initialize the model's parameters
3. Loop:
- Implement forward propagation
- Compute loss
- Implement backward propagation to get the gradients
- Update parameters (gradient descent)
You often build helper functions to compute steps 1-3 and then merge them into one function we call `nn_model()`. Once you've built `nn_model()` and learnt the right parameters, you can make predictions on new data.
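Those steps can also be read as pseudocode for a single self-contained NumPy training loop. The sketch below is a hedged illustration of the methodology, not the graded `nn_model()` you will build from the helper functions in this notebook, and its smoke-test data is made up:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def nn_model_sketch(X, Y, n_h=4, num_iterations=3000, learning_rate=1.0):
    """Define the structure, initialize, then loop over forward propagation,
    cost, backward propagation, and gradient descent."""
    np.random.seed(2)
    n_x, m = X.shape
    n_y = Y.shape[0]
    # Steps 1-2: structure and initialization (small random weights, zero biases)
    W1 = np.random.randn(n_h, n_x) * 0.01
    b1 = np.zeros((n_h, 1))
    W2 = np.random.randn(n_y, n_h) * 0.01
    b2 = np.zeros((n_y, 1))
    for _ in range(num_iterations):
        # Forward propagation (equations (1)-(4))
        A1 = np.tanh(W1 @ X + b1)
        A2 = sigmoid(W2 @ A1 + b2)
        # Cross-entropy cost; clip to avoid log(0) for numerical safety
        A2c = np.clip(A2, 1e-10, 1 - 1e-10)
        cost = -np.mean(Y * np.log(A2c) + (1 - Y) * np.log(1 - A2c))
        # Backward propagation
        dZ2 = A2 - Y
        dW2 = dZ2 @ A1.T / m
        db2 = dZ2.mean(axis=1, keepdims=True)
        dZ1 = (W2.T @ dZ2) * (1 - A1 ** 2)
        dW1 = dZ1 @ X.T / m
        db1 = dZ1.mean(axis=1, keepdims=True)
        # Gradient descent update
        W1 -= learning_rate * dW1
        b1 -= learning_rate * db1
        W2 -= learning_rate * dW2
        b2 -= learning_rate * db2
    return {"W1": W1, "b1": b1, "W2": W2, "b2": b2}, cost

# Smoke test on a linearly separable toy problem: label = sign of x1.
np.random.seed(0)
X_toy = np.random.randn(2, 200)
Y_toy = (X_toy[0:1, :] > 0).astype(float)
params, final_cost = nn_model_sketch(X_toy, Y_toy)
print(final_cost)  # should end up well below the chance-level cost of ~0.693
```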
### 4.1 - Defining the neural network structure ####
**Exercise**: Define three variables:
- n_x: the size of the input layer
- n_h: the size of the hidden layer (set this to 4)
- n_y: the size of the output layer
**Hint**: Use shapes of X and Y to find n_x and n_y. Also, hard code the hidden layer size to be 4.
```
# GRADED FUNCTION: layer_sizes
def layer_sizes(X, Y):
"""
Arguments:
X -- input dataset of shape (input size, number of examples)
Y -- labels of shape (output size, number of examples)
Returns:
n_x -- the size of the input layer
n_h -- the size of the hidden layer
n_y -- the size of the output layer
"""
### START CODE HERE ### (≈ 3 lines of code)
n_x = X.shape[0] # size of input layer
n_h = 4
n_y = Y.shape[0] # size of output layer
### END CODE HERE ###
return (n_x, n_h, n_y)
X_assess, Y_assess = layer_sizes_test_case()
(n_x, n_h, n_y) = layer_sizes(X_assess, Y_assess)
print("The size of the input layer is: n_x = " + str(n_x))
print("The size of the hidden layer is: n_h = " + str(n_h))
print("The size of the output layer is: n_y = " + str(n_y))
```
**Expected Output** (these are not the sizes you will use for your network, they are just used to assess the function you've just coded).
<table style="width:20%">
<tr>
<td>**n_x**</td>
<td> 5 </td>
</tr>
<tr>
<td>**n_h**</td>
<td> 4 </td>
</tr>
<tr>
<td>**n_y**</td>
<td> 2 </td>
</tr>
</table>
### 4.2 - Initialize the model's parameters ####
**Exercise**: Implement the function `initialize_parameters()`.
**Instructions**:
- Make sure your parameters' sizes are right. Refer to the neural network figure above if needed.
- You will initialize the weights matrices with random values.
- Use: `np.random.randn(a,b) * 0.01` to randomly initialize a matrix of shape (a,b).
- You will initialize the bias vectors as zeros.
- Use: `np.zeros((a,b))` to initialize a matrix of shape (a,b) with zeros.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
params -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(2) # we set up a seed so that your output matches ours although the initialization is random.
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
### END CODE HERE ###
assert (W1.shape == (n_h, n_x))
assert (b1.shape == (n_h, 1))
assert (W2.shape == (n_y, n_h))
assert (b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
n_x, n_h, n_y = initialize_parameters_test_case()
parameters = initialize_parameters(n_x, n_h, n_y)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-0.00416758 -0.00056267]
[-0.02136196 0.01640271]
[-0.01793436 -0.00841747]
[ 0.00502881 -0.01245288]] </td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 0.]
[ 0.]
[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01057952 -0.00909008 0.00551454 0.02292208]]</td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.]] </td>
</tr>
</table>
### 4.3 - The Loop ####
**Question**: Implement `forward_propagation()`.
**Instructions**:
- Look above at the mathematical representation of your classifier.
- You can use the function `sigmoid()`. It is built-in (imported) in the notebook.
- You can use the function `np.tanh()`. It is part of the numpy library.
- The steps you have to implement are:
1. Retrieve each parameter from the dictionary "parameters" (which is the output of `initialize_parameters()`) by using `parameters[".."]`.
2. Implement Forward Propagation. Compute $Z^{[1]}, A^{[1]}, Z^{[2]}$ and $A^{[2]}$ (the vector of all your predictions on all the examples in the training set).
- Values needed in the backpropagation are stored in "`cache`". The `cache` will be given as an input to the backpropagation function.
```
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
"""
Argument:
X -- input data of size (n_x, m)
parameters -- python dictionary containing your parameters (output of initialization function)
Returns:
A2 -- The sigmoid output of the second activation
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2"
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Implement Forward Propagation to calculate A2 (probabilities)
### START CODE HERE ### (≈ 4 lines of code)
Z1 = np.dot(W1, X) + b1
A1 = np.tanh(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)
### END CODE HERE ###
assert(A2.shape == (1, X.shape[1]))
cache = {"Z1": Z1,
"A1": A1,
"Z2": Z2,
"A2": A2}
return A2, cache
X_assess, parameters = forward_propagation_test_case()
A2, cache = forward_propagation(X_assess, parameters)
# Note: we use the mean here just to make sure that your output matches ours.
print(np.mean(cache['Z1']) ,np.mean(cache['A1']),np.mean(cache['Z2']),np.mean(cache['A2']))
```
**Expected Output**:
<table style="width:55%">
<tr>
<td> -0.0004997557777419902 -0.000496963353231779 0.00043818745095914653 0.500109546852431 </td>
</tr>
</table>
Now that you have computed $A^{[2]}$ (in the Python variable "`A2`"), which contains $a^{[2](i)}$ for every example, you can compute the cost function as follows:
$$J = - \frac{1}{m} \sum\limits_{i = 1}^{m} \large{(} \small y^{(i)}\log\left(a^{[2] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[2] (i)}\right) \large{)} \small\tag{13}$$
**Exercise**: Implement `compute_cost()` to compute the value of the cost $J$.
**Instructions**:
- There are many ways to implement the cross-entropy loss. To help you, here is how we would implement
$- \sum\limits_{i=1}^{m} y^{(i)}\log(a^{[2](i)})$:
```python
logprobs = np.multiply(np.log(A2),Y)
cost = - np.sum(logprobs) # no need to use a for loop!
```
(you can use either `np.multiply()` and then `np.sum()` or directly `np.dot()`).
```
# GRADED FUNCTION: compute_cost
def compute_cost(A2, Y, parameters):
"""
Computes the cross-entropy cost given in equation (13)
Arguments:
A2 -- The sigmoid output of the second activation, of shape (1, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
parameters -- python dictionary containing your parameters W1, b1, W2 and b2
Returns:
cost -- cross-entropy cost given equation (13)
"""
    m = Y.shape[1] # number of examples
# Compute the cross-entropy cost
### START CODE HERE ### (≈ 2 lines of code)
logprobs = Y * np.log(A2) + (1 - Y) * np.log(1 - A2)
cost = -(1/m) * np.sum(logprobs)
### END CODE HERE ###
cost = np.squeeze(cost) # makes sure cost is the dimension we expect.
# E.g., turns [[17]] into 17
assert(isinstance(cost, float))
return cost
A2, Y_assess, parameters = compute_cost_test_case()
print("cost = " + str(compute_cost(A2, Y_assess, parameters)))
```
**Expected Output**:
<table style="width:50%">
<tr>
<td>**cost**</td>
<td> 0.6929198937761265 </td>
</tr>
</table>
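As the instructions note, `np.dot()` can replace the elementwise multiply-and-sum; a sketch of that variant (the helper name is ours, for illustration — both forms agree up to floating-point rounding):

```python
import numpy as np

def compute_cost_dot(A2, Y):
    """Cross-entropy cost computed with np.dot over the example axis."""
    m = Y.shape[1]
    cost = -(1 / m) * (np.dot(Y, np.log(A2).T) + np.dot(1 - Y, np.log(1 - A2).T))
    return float(np.squeeze(cost))
```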
Using the cache computed during forward propagation, you can now implement backward propagation.
**Question**: Implement the function `backward_propagation()`.
**Instructions**:
Backpropagation is usually the hardest (most mathematical) part in deep learning. To help you, here again is the slide from the lecture on backpropagation. You'll want to use the six equations on the right of this slide, since you are building a vectorized implementation.
<img src="images/grad_summary.png" style="width:600px;height:300px;">
<!--
$\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } = \frac{1}{m} (a^{[2](i)} - y^{(i)})$
$\frac{\partial \mathcal{J} }{ \partial W_2 } = \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } a^{[1] (i) T} $
$\frac{\partial \mathcal{J} }{ \partial b_2 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)}}}$
$\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } = W_2^T \frac{\partial \mathcal{J} }{ \partial z_{2}^{(i)} } * ( 1 - a^{[1] (i) 2}) $
$\frac{\partial \mathcal{J} }{ \partial W_1 } = \frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)} } X^T $
$\frac{\partial \mathcal{J} _i }{ \partial b_1 } = \sum_i{\frac{\partial \mathcal{J} }{ \partial z_{1}^{(i)}}}$
- Note that $*$ denotes elementwise multiplication.
- The notation you will use is common in deep learning coding:
- dW1 = $\frac{\partial \mathcal{J} }{ \partial W_1 }$
- db1 = $\frac{\partial \mathcal{J} }{ \partial b_1 }$
- dW2 = $\frac{\partial \mathcal{J} }{ \partial W_2 }$
- db2 = $\frac{\partial \mathcal{J} }{ \partial b_2 }$
-->
- Tips:
- To compute dZ1 you'll need to compute $g^{[1]'}(Z^{[1]})$. Since $g^{[1]}(.)$ is the tanh activation function, if $a = g^{[1]}(z)$ then $g^{[1]'}(z) = 1-a^2$. So you can compute
$g^{[1]'}(Z^{[1]})$ using `(1 - np.power(A1, 2))`.
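A quick numerical sanity check of this identity, comparing `(1 - np.power(A1, 2))` against a central finite difference of `np.tanh`:

```python
import numpy as np

z = np.linspace(-2, 2, 9)
analytic = 1 - np.power(np.tanh(z), 2)                  # the (1 - A1**2) term used for dZ1
h = 1e-6
numeric = (np.tanh(z + h) - np.tanh(z - h)) / (2 * h)   # central difference approximation
print(np.allclose(analytic, numeric, atol=1e-8))        # prints True
```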
```
# GRADED FUNCTION: backward_propagation
def backward_propagation(parameters, cache, X, Y):
"""
Implement the backward propagation using the instructions above.
Arguments:
parameters -- python dictionary containing our parameters
cache -- a dictionary containing "Z1", "A1", "Z2" and "A2".
X -- input data of shape (2, number of examples)
Y -- "true" labels vector of shape (1, number of examples)
Returns:
grads -- python dictionary containing your gradients with respect to different parameters
"""
m = X.shape[1]
# First, retrieve W1 and W2 from the dictionary "parameters".
### START CODE HERE ### (≈ 2 lines of code)
W1 = parameters["W1"]
W2 = parameters["W2"]
### END CODE HERE ###
# Retrieve also A1 and A2 from dictionary "cache".
### START CODE HERE ### (≈ 2 lines of code)
A1 = cache['A1']
A2 = cache['A2']
### END CODE HERE ###
# Backward propagation: calculate dW1, db1, dW2, db2.
### START CODE HERE ### (≈ 6 lines of code, corresponding to 6 equations on slide above)
dZ2 = A2 - Y
dW2 = (1/m) * np.dot(dZ2, A1.T)
db2 = (1/m) * np.sum(dZ2, axis = 1, keepdims = True)
dZ1 = np.dot(W2.T, dZ2) * (1 - np.power(A1, 2))
dW1 = (1/m) * np.dot(dZ1, X.T)
db1 = (1/m) * np.sum(dZ1, axis = 1, keepdims = True)
### END CODE HERE ###
grads = {"dW1": dW1,
"db1": db1,
"dW2": dW2,
"db2": db2}
return grads
parameters, cache, X_assess, Y_assess = backward_propagation_test_case()
grads = backward_propagation(parameters, cache, X_assess, Y_assess)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("db2 = "+ str(grads["db2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td>**dW1**</td>
<td> [[ 0.01018708 -0.00708701]
[ 0.00873447 -0.0060768 ]
[-0.00530847 0.00369379]
[-0.02206365 0.01535126]] </td>
</tr>
<tr>
<td>**db1**</td>
<td> [[-0.00069728]
[-0.00060606]
[ 0.000364 ]
[ 0.00151207]] </td>
</tr>
<tr>
<td>**dW2**</td>
<td> [[ 0.00363613 0.03153604 0.01162914 -0.01318316]] </td>
</tr>
<tr>
<td>**db2**</td>
<td> [[ 0.06589489]] </td>
</tr>
</table>
**Question**: Implement the update rule. Use gradient descent. You have to use (dW1, db1, dW2, db2) in order to update (W1, b1, W2, b2).
**General gradient descent rule**: $ \theta = \theta - \alpha \frac{\partial J }{ \partial \theta }$ where $\alpha$ is the learning rate and $\theta$ represents a parameter.
**Illustration**: The gradient descent algorithm with a good learning rate (converging) and a bad learning rate (diverging). Images courtesy of Adam Harley.
<img src="images/sgd.gif" style="width:400;height:400;"> <img src="images/sgd_bad.gif" style="width:400;height:400;">
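A toy illustration of the rule (minimizing $J(\theta) = \theta^2$, our own example, so $\frac{\partial J}{\partial \theta} = 2\theta$): a small learning rate converges toward the minimum at 0, while a large one diverges.

```python
def gradient_descent(theta, alpha, steps=50):
    """One-parameter gradient descent on J(theta) = theta**2."""
    for _ in range(steps):
        theta = theta - alpha * 2 * theta  # dJ/dtheta = 2*theta
    return theta

print(abs(gradient_descent(5.0, alpha=0.1)) < 1e-3)   # small alpha: converges
print(abs(gradient_descent(5.0, alpha=1.1)) > 5.0)    # large alpha: diverges
```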
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate = 1.2):
"""
Updates parameters using the gradient descent update rule given above
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients
Returns:
parameters -- python dictionary containing your updated parameters
"""
# Retrieve each parameter from the dictionary "parameters"
### START CODE HERE ### (≈ 4 lines of code)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Retrieve each gradient from the dictionary "grads"
### START CODE HERE ### (≈ 4 lines of code)
dW1 = grads["dW1"]
db1 = grads["db1"]
dW2 = grads["dW2"]
db2 = grads["db2"]
    ### END CODE HERE ###
# Update rule for each parameter
### START CODE HERE ### (≈ 4 lines of code)
W1 -= learning_rate * dW1
b1 -= learning_rate * db1
W2 -= learning_rate * dW2
b2 -= learning_rate * db2
### END CODE HERE ###
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:80%">
<tr>
<td>**W1**</td>
<td> [[-0.00643025 0.01936718]
[-0.02410458 0.03978052]
[-0.01653973 -0.02096177]
[ 0.01046864 -0.05990141]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ -1.02420756e-06]
[ 1.27373948e-05]
[ 8.32996807e-07]
[ -3.20136836e-06]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-0.01041081 -0.04463285 0.01758031 0.04747113]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[ 0.00010457]] </td>
</tr>
</table>
### 4.4 - Integrate parts 4.1, 4.2 and 4.3 in nn_model() ####
**Question**: Build your neural network model in `nn_model()`.
**Instructions**: The neural network model has to use the previous functions in the right order.
```
# GRADED FUNCTION: nn_model
def nn_model(X, Y, n_h, num_iterations = 10000, print_cost=False):
"""
Arguments:
X -- dataset of shape (2, number of examples)
Y -- labels of shape (1, number of examples)
n_h -- size of the hidden layer
num_iterations -- Number of iterations in gradient descent loop
print_cost -- if True, print the cost every 1000 iterations
Returns:
parameters -- parameters learnt by the model. They can then be used to predict.
"""
np.random.seed(3)
n_x = layer_sizes(X, Y)[0]
n_y = layer_sizes(X, Y)[2]
# Initialize parameters, then retrieve W1, b1, W2, b2. Inputs: "n_x, n_h, n_y". Outputs = "W1, b1, W2, b2, parameters".
### START CODE HERE ### (≈ 5 lines of code)
parameters = initialize_parameters(n_x, n_h, n_y)
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
### END CODE HERE ###
# Loop (gradient descent)
for i in range(0, num_iterations):
### START CODE HERE ### (≈ 4 lines of code)
# Forward propagation. Inputs: "X, parameters". Outputs: "A2, cache".
A2, cache = forward_propagation(X, parameters)
# Cost function. Inputs: "A2, Y, parameters". Outputs: "cost".
cost = compute_cost(A2, Y, parameters)
# Backpropagation. Inputs: "parameters, cache, X, Y". Outputs: "grads".
grads = backward_propagation(parameters, cache, X, Y)
# Gradient descent parameter update. Inputs: "parameters, grads". Outputs: "parameters".
parameters = update_parameters(parameters, grads)
### END CODE HERE ###
# Print the cost every 1000 iterations
if print_cost and i % 1000 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
return parameters
X_assess, Y_assess = nn_model_test_case()
parameters = nn_model(X_assess, Y_assess, 4, num_iterations=10000, print_cost=False)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td>**W1**</td>
<td> [[-4.18494056 5.33220609]
[-7.52989382 1.24306181]
[-4.1929459 5.32632331]
[ 7.52983719 -1.24309422]]</td>
</tr>
<tr>
<td>**b1**</td>
<td> [[ 2.32926819]
[ 3.79458998]
[ 2.33002577]
[-3.79468846]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[-6033.83672146 -6008.12980822 -6033.10095287 6008.06637269]] </td>
</tr>
<tr>
<td>**b2**</td>
<td> [[-52.66607724]] </td>
</tr>
</table>
### 4.5 Predictions
**Question**: Use your model to predict by building predict().
Use forward propagation to predict results.
**Reminder**: predictions = $y_{prediction} = \mathbb{1}_{\{activation > 0.5\}} = \begin{cases}
1 & \text{if}\ activation > 0.5 \\
0 & \text{otherwise}
\end{cases}$
As an example, if you would like to set the entries of a matrix X to 0 and 1 based on a threshold you would do: ```X_new = (X > threshold)```
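Note that the comparison yields a boolean array; NumPy treats `True`/`False` as 1/0 in arithmetic, and you can cast explicitly when integers are needed:

```python
import numpy as np

X = np.array([[0.2, 0.7], [0.9, 0.4]])
X_new = (X > 0.5)            # boolean mask
print(X_new.astype(int))     # [[0 1], [1 0]]
```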
```
# GRADED FUNCTION: predict
def predict(parameters, X):
"""
Using the learned parameters, predicts a class for each example in X
Arguments:
parameters -- python dictionary containing your parameters
X -- input data of size (n_x, m)
Returns
predictions -- vector of predictions of our model (red: 0 / blue: 1)
"""
# Computes probabilities using forward propagation, and classifies to 0/1 using 0.5 as the threshold.
### START CODE HERE ### (≈ 2 lines of code)
A2, cache = forward_propagation(X, parameters)
predictions = (A2 > 0.5)
### END CODE HERE ###
return predictions
parameters, X_assess = predict_test_case()
predictions = predict(parameters, X_assess)
print("predictions mean = " + str(np.mean(predictions)))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**predictions mean**</td>
<td> 0.6666666666666666 </td>
</tr>
</table>
It is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.
```
# Build a model with a n_h-dimensional hidden layer
parameters = nn_model(X, Y, n_h = 4, num_iterations = 10000, print_cost=True)
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
plt.title("Decision Boundary for hidden layer size " + str(4))
```
**Expected Output**:
<table style="width:40%">
<tr>
<td>**Cost after iteration 9000**</td>
<td> 0.218607 </td>
</tr>
</table>
```
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
```
**Expected Output**:
<table style="width:15%">
<tr>
<td>**Accuracy**</td>
<td> 90% </td>
</tr>
</table>
Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression.
Now, let's try out several hidden layer sizes.
### 4.6 - Tuning hidden layer size (optional/ungraded exercise) ###
Run the following code. It may take 1-2 minutes. You will observe different behaviors of the model for various hidden layer sizes.
```
# This may take about 2 minutes to run
plt.figure(figsize=(16, 32))
hidden_layer_sizes = [1, 2, 3, 4, 5, 10, 20]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y)
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
```
**Interpretation**:
- The larger models (with more hidden units) are able to fit the training set better, until eventually the largest models overfit the data.
- The best hidden layer size seems to be around n_h = 5. Indeed, a value around here seems to fit the data well without incurring noticeable overfitting.
- You will also learn later about regularization, which lets you use very large models (such as n_h = 50) without much overfitting.
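As a preview of regularization (taught in detail later; the L2 penalty form below is the standard one, used here as an illustrative assumption), the idea is to add a weight penalty to the cost so large models don't overfit:

```python
import numpy as np

def compute_cost_l2(A2, Y, W1, W2, lambd):
    """Cross-entropy cost plus an L2 penalty on the weight matrices."""
    m = Y.shape[1]
    cross_entropy = -(1 / m) * np.sum(Y * np.log(A2) + (1 - Y) * np.log(1 - A2))
    l2_penalty = (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
    return cross_entropy + l2_penalty
```

With `lambd=0` this reduces to the plain cost from Section 4.3.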
**Optional questions**:
Some optional/ungraded questions that you can explore if you wish:
- What happens when you change the tanh activation for a sigmoid activation or a ReLU activation?
- Play with the learning_rate. What happens?
- What if we change the dataset? (See part 5 below!)
<font color='blue'>
**You've learnt to:**
- Build a complete neural network with a hidden layer
- Make good use of a non-linear unit
- Implement forward propagation and backpropagation, and train a neural network
- See the impact of varying the hidden layer size, including overfitting
Nice work!
## 5) Performance on other datasets
If you want, you can rerun the whole notebook (minus the dataset part) for each of the following datasets.
```
# Datasets
noisy_circles, noisy_moons, blobs, gaussian_quantiles, no_structure = load_extra_datasets()
datasets = {"noisy_circles": noisy_circles,
"noisy_moons": noisy_moons,
"blobs": blobs,
"gaussian_quantiles": gaussian_quantiles,
"no_structure": no_structure}
### START CODE HERE ### (choose your dataset)
dataset = "noisy_moons"
### END CODE HERE ###
X, Y = datasets[dataset]
X, Y = X.T, Y.reshape(1, Y.shape[0])
# make blobs binary
if dataset == "blobs":
Y = Y%2
# Visualize the data
plt.scatter(X[0, :], X[1, :], c=np.squeeze(Y), s=40, cmap=plt.cm.Spectral);
```
Congrats on finishing this Programming Assignment!
Saturation curves for SM-omics and ST<br>
Input files are generated by counting the number of unique molecules and the number of annotated reads per annotated region, after adjusting for sequencing depth, in downsampled fastq files (proportions 0.001, 0.01, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1) processed using the ST-pipeline.<br>
```
%matplotlib inline
import os
import numpy
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import glob
import warnings
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
warnings.filterwarnings('ignore')
def condition(row):
""" Takes row in pandas df as input and returns type of condition
"""
# The samples are run in triplicate based on condition
condition = ['HE', 'DAPI', 'Nestin']
if row['Name'] in ['10015CN108fl_D1', '10015CN108fl_D2', '10015CN108flfl_E2']:
return condition[2]
elif row['Name'] in ['10015CN90_C2', '10015CN90_D2', '10015CN90_E2']:
return condition[1]
elif row['Name'] in ['10015CN108_C2', '10015CN108_D2', '10015CN108_E1']:
return condition[0]
# Load input files
path = '../../smomics_data'
stats_list = []
samples_list = ['10015CN108fl_D2',
'10015CN108flfl_E2',
'10015CN108fl_D1',
'10015CN90_C2',
'10015CN90_D2',
'10015CN90_E2',
'10015CN108_C2',
'10015CN108_D2',
'10015CN108_E1']
prop_list = [0.001, 0.01, 0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1]
for filename in samples_list:
cond_file = pd.read_csv(os.path.join(path, filename + '_umi_after_seq_depth_in_spots_under_outside_tissue.txt'), sep = '\t')
print(cond_file)
cond_file.sort_values(by='Num reads', inplace=True)
cond_file['Prop_annot_reads'] = prop_list
cond_file['Condition'] = cond_file.apply(lambda row: condition(row), axis = 1)
cond_file['norm uniq mol inside'] = cond_file['UMI inside']
cond_file['norm uniq mol outside'] = cond_file['UMI outside']
stats_list.append(cond_file)
# Concat all files
cond_merge = pd.concat(stats_list)
#Plot
fig = plt.figure(figsize=(20, 10))
x="Prop_annot_reads"
y="norm uniq mol inside"
#y="Genes"
hue='Condition'
################ LINE PLOT
ax = sns.lineplot(x=x, y=y, data=cond_merge,hue=hue,
palette = ['mediumorchid', 'goldenrod', 'blue'], hue_order = ['HE', 'DAPI', 'Nestin'],ci=95)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_color('k')
ax.spines['left'].set_color('k')
# X and y label size
ax.set_xlabel("Proportion annotated reads", fontsize=15)
ax.set_ylabel("Number of unique molecules under tissue", fontsize=15)
# Set ticks size
ax.tick_params(axis='y', labelsize=15)
ax.tick_params(axis='x', labelsize=15)
# change background color
back_c = 'white'
ax.set_facecolor(back_c)
ax.grid(False)
# Thousand seprator on y axis
ax.get_yaxis().set_major_formatter(
matplotlib.ticker.FuncFormatter(lambda x, p: format(int(x), ',')))
# LEGEND
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles=handles[0:], labels=['HE', 'DAPI', 'Nestin'],loc='upper left', ncol=2, fontsize=20)
fig.set_size_inches(20, 10)
# plt.savefig("../../figures/saturation_sm_stainings_saturation.pdf", transparent=True, bbox_inches = 'tight',
# pad_inches = 0, dpi=1200)
plt.show()
cond_file['Prop_annot_reads'] = 100*cond_file['Prop_annot_reads']
#cond_merge.to_csv('../../smomics_data/sm_stainings_unique_molecules_under_outside_tissue.csv')
```
<a href="https://colab.research.google.com/github/gptix/DS-Unit-3-Sprint-2-SQL-and-Databases/blob/master/Copy_of_Titanic_PG_DB.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
# Connect to postgres server in cloud
!pip install psycopg2-binary
import psycopg2
dbname = 'sjnmahbj'
user = 'sjnmahbj'
password = ''
port = '5432'
host = 'rajje.db.elephantsql.com' # default port (5432) is used
pg_conn = psycopg2.connect(dbname=dbname, user=user,
password=password, host=host)
# pg_conn
# retrieve Titanic data file with pandas
import pandas as pd
# ingest CSV data
titanic_url = 'https://raw.githubusercontent.com/gptix/DS-Unit-3-Sprint-2-SQL-and-Databases/master/module2-sql-for-analysis/titanic.csv'
titanic_df = pd.read_csv(titanic_url)
# Clean data
# titanic_df.head()
# titanic_df[titanic_df.isna().any(axis=1)]
# # None!
# titanic_df.describe()
# clean data
# titanic_df_cleaned = clean(titanic_df)
titanic_df_cols = list(titanic_df.columns)
titanic_df_cols
create_enum_sex = """
CREATE TYPE SEX AS ENUM ('female', 'male');
"""
pg_conn = psycopg2.connect(dbname=dbname, user=user,
password=password, host=host)
pg_curs = pg_conn.cursor()
# pg_curs
pg_curs.execute(create_enum_sex)
pg_conn.commit()
# Create table in Postgres db
# Column names from CSV file
# ['Survived',
# 'Pclass',
# 'Name',
# 'Sex',
# 'Age',
# 'Siblings/Spouses Aboard',
# 'Parents/Children Aboard',
# 'Fare']
create_titanic_table_SQL = """
CREATE TABLE titanic (
id SERIAL PRIMARY KEY,
Survived BOOLEAN NOT NULL,
Pclass INT NOT NULL,
Name VARCHAR(50),
Sex SEX,
Age INT NOT NULL,
Siblings_Spouses_Aboard INT NOT NULL,
Parents_Children_Aboard INT NOT NULL,
Fare REAL NOT NULL
);"""
# print(create_titanic_table_SQL)
pg_conn = psycopg2.connect(dbname=dbname, user=user,
password=password, host=host)
pg_curs = pg_conn.cursor()
# pg_curs
pg_curs.execute(create_titanic_table_SQL)
pg_conn.commit()
titanic_csv = titanic_df.to_csv()
# print(titanic_csv)
one_row = titanic_df.iloc[0]
one_row

def make_insert_row(row):
    """Format one DataFrame row as a SQL VALUES tuple (quoting strings,
    mapping the CSV's 0/1 Survived flag to a Postgres BOOLEAN)."""
    values = []
    for col in titanic_df_cols:
        val = row[col]
        if col == 'Survived':
            values.append('TRUE' if val else 'FALSE')
        elif isinstance(val, str):
            values.append("'" + val.replace("'", "''") + "'")
        else:
            values.append(str(val))
    return '(' + ', '.join(values) + ')'

make_insert_row(one_row)
# id is SERIAL, so Postgres generates it; insert only the CSV-derived columns
col_list = "Survived, Pclass, Name, Sex, Age, Siblings_Spouses_Aboard, Parents_Children_Aboard, Fare"
table_name = "titanic"
insert_statement = ("INSERT INTO " + table_name + " (" + col_list + ")\nVALUES\n"
                    + ",\n".join(make_insert_row(row) for _, row in titanic_df.iterrows())
                    + ";")
# print(insert_statement)
# Populate the table
pg_curs.execute(insert_statement)
pg_conn.commit()
```
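Building SQL by string concatenation is fragile (quoting, injection risk); psycopg2's parameterized queries are the idiomatic alternative. Below is a sketch (the helper names are ours; the `executemany` call is commented out because it needs the live connection from above):

```python
import pandas as pd

COLS = ["Survived", "Pclass", "Name", "Sex", "Age",
        "Siblings_Spouses_Aboard", "Parents_Children_Aboard", "Fare"]

def build_insert_sql(table="titanic", cols=COLS):
    """Parameterized INSERT template; psycopg2 fills %s placeholders safely."""
    placeholders = ", ".join(["%s"] * len(cols))
    return f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders});"

def df_to_rows(df):
    """One tuple per row; bool() maps the CSV's 0/1 Survived flag to BOOLEAN."""
    return [(bool(r["Survived"]), int(r["Pclass"]), r["Name"], r["Sex"],
             int(r["Age"]), int(r["Siblings/Spouses Aboard"]),
             int(r["Parents/Children Aboard"]), float(r["Fare"]))
            for _, r in df.iterrows()]

# With the live connection from above:
# pg_curs.executemany(build_insert_sql(), df_to_rows(titanic_df))
# pg_conn.commit()
```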
### Instructions
In this assignment we will use the Boston dataset, where the task is to predict housing prices from various characteristics of the location (air pollution, proximity to roads, etc.). More details on the features are available at https://archive.ics.uci.edu/ml/machine-learning-databases/housing/
1. Load the Boston dataset with sklearn.datasets.load_boston(). The call returns an object whose features are stored in the data field and whose target vector is stored in the target field.
2. Bring the features to a common scale with sklearn.preprocessing.scale.
3. Try different values of the metric parameter p over a grid from 1 to 10, with a step chosen so that exactly 200 values are tested (use numpy.linspace). Use KNeighborsRegressor with n_neighbors=5 and weights='distance' — this option weights neighbors by their distance to the query point. Use mean squared error as the quality metric (pass scoring='mean_squared_error' to cross_val_score; with scikit-learn version 0.18.1 and above, use scoring='neg_mean_squared_error'). As in the previous assignment, estimate quality with 5-fold cross-validation with random_state = 42, and don't forget to shuffle the data (shuffle=True).
4. Determine the value of p at which cross-validation quality is optimal. Note that cross_val_score returns an array of per-fold quality scores; you need to maximize the mean of these scores. This parameter value is the answer to the task.
If the answer is not an integer, separate the integer and fractional parts with a period, e.g., 0.4. Round the fractional part to one decimal place if necessary.
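The grid of 200 values for p requested in step 3 can be generated as follows:

```python
import numpy as np

p_grid = np.linspace(1, 10, 200)  # 200 evenly spaced values, both endpoints included
print(len(p_grid), p_grid[0], p_grid[-1])  # 200 1.0 10.0
```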
```
import pandas as pd
from sklearn.datasets import load_boston
boston = load_boston()
attrs = pd.DataFrame(boston.data)
target = pd.DataFrame(boston.target)
attrs.head(10)
from sklearn.preprocessing import scale
attrs = scale(attrs)
import numpy as np
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor
max_MSE = 0
best_p = 0
for test_p in np.linspace(1, 10, 200):
kf = KFold(n_splits=5, shuffle=True, random_state=42)
neigh = KNeighborsRegressor(
n_neighbors=5,
weights='distance',
metric='minkowski',
p=test_p
)
scores = cross_val_score(
neigh,
attrs,
target,
cv=kf,
scoring='neg_mean_squared_error'
)
cur_MSE = scores.mean()
# print(scores)
if max_MSE < cur_MSE or max_MSE == 0:
max_MSE = cur_MSE
best_p = test_p
print(max_MSE, best_p)
file = open('/home/topcoder2k/HSE_ML_week2_answers/best_p.txt', 'w')
file.write(str(round(best_p, 1)))
file.close()
```
# Federated Learning Training Plan: Host Plan & Model
Here we load Plan and Model params created earlier in "Create Plan" notebook, host them to PyGrid,
and run sample syft.js app that executes them.
```
%load_ext autoreload
%autoreload 2
import websockets
import json
import base64
import requests
import torch
import syft as sy
from syft.grid.grid_client import GridClient
from syft.serde import protobuf
from syft_proto.execution.v1.plan_pb2 import Plan as PlanPB
from syft_proto.execution.v1.state_pb2 import State as StatePB
sy.make_hook(globals())
# force protobuf serialization for tensors
hook.local_worker.framework = None
async def sendWsMessage(data):
async with websockets.connect('ws://' + gatewayWsUrl) as websocket:
await websocket.send(json.dumps(data))
message = await websocket.recv()
return json.loads(message)
def deserializeFromBin(worker, filename, pb):
with open(filename, "rb") as f:
bin = f.read()
pb.ParseFromString(bin)
return protobuf.serde._unbufferize(worker, pb)
```
## Step 4a: Host in PyGrid
Here we load "ops list" Plan.
PyGrid should translate it to other types (e.g. torchscript) automatically.
```
# Load files with protobuf created in "Create Plan" notebook.
training_plan = deserializeFromBin(hook.local_worker, "tp_full.pb", PlanPB())
model_params_state = deserializeFromBin(hook.local_worker, "model_params.pb", StatePB())
```
Follow PyGrid README.md to build `openmined/grid-gateway` image from the latest `dev` branch
and spin up PyGrid using `docker-compose up --build`.
```
# Default gateway address when running locally
gatewayWsUrl = "127.0.0.1:5000"
grid = GridClient(id="test", address=gatewayWsUrl, secure=False)
grid.connect()
```
Define name, version, configs.
```
# These name/version you use in worker
name = "mnist"
version = "1.0.0"
client_config = {
"name": name,
"version": version,
"batch_size": 64,
"lr": 0.01,
"max_updates": 100 # custom syft.js option that limits number of training loops per worker
}
server_config = {
    "min_workers": 3,  # temporarily this plays the role of "min # of workers' diffs" for triggering the cycle-end event
"max_workers": 3,
"pool_selection": "random",
"num_cycles": 5,
"do_not_reuse_workers_until_cycle": 4,
"cycle_length": 28800,
"minimum_upload_speed": 0,
"minimum_download_speed": 0
}
```
Shoot!
If everything's good, success is returned.
If the name/version already exists in PyGrid, change them above or cleanup PyGrid db by re-creating docker containers (e.g. `docker-compose up --force-recreate`).
```
response = grid.host_federated_training(
model=model_params_state,
client_plans={'training_plan': training_plan},
client_protocols={},
server_averaging_plan=None,
client_config=client_config,
server_config=server_config
)
print("Host response:", response)
```
Let's double-check that data is loaded by requesting a cycle.
(Request is made directly, will be methods on grid client in the future)
```
auth_request = {
"type": "federated/authenticate",
"data": {}
}
auth_response = await sendWsMessage(auth_request)
print('Auth response: ', json.dumps(auth_response, indent=2))
cycle_request = {
"type": "federated/cycle-request",
"data": {
"worker_id": auth_response['data']['worker_id'],
"model": name,
"version": version,
"ping": 1,
"download": 10000,
"upload": 10000,
}
}
cycle_response = await sendWsMessage(cycle_request)
print('Cycle response:', json.dumps(cycle_response, indent=2))
worker_id = auth_response['data']['worker_id']
request_key = cycle_response['data']['request_key']
model_id = cycle_response['data']['model_id']
training_plan_id = cycle_response['data']['plans']['training_plan']
```
Let's download model and plan (both versions) and check they are actually workable.
```
# Model
req = requests.get(f"http://{gatewayWsUrl}/federated/get-model?worker_id={worker_id}&request_key={request_key}&model_id={model_id}")
model_data = req.content
pb = StatePB()
pb.ParseFromString(req.content)
model_params_downloaded = protobuf.serde._unbufferize(hook.local_worker, pb)
print(model_params_downloaded)
# Plan "list of ops"
req = requests.get(f"http://{gatewayWsUrl}/federated/get-plan?worker_id={worker_id}&request_key={request_key}&plan_id={training_plan_id}&receive_operations_as=list")
pb = PlanPB()
pb.ParseFromString(req.content)
plan_ops = protobuf.serde._unbufferize(hook.local_worker, pb)
print(plan_ops.role.actions)
print(plan_ops.torchscript)
# Plan "torchscript"
req = requests.get(f"http://{gatewayWsUrl}/federated/get-plan?worker_id={worker_id}&request_key={request_key}&plan_id={training_plan_id}&receive_operations_as=torchscript")
pb = PlanPB()
pb.ParseFromString(req.content)
plan_ts = protobuf.serde._unbufferize(hook.local_worker, pb)
print(plan_ts.role.actions)
print(plan_ts.torchscript.code)
```
## Step 5a: Train
Start and open "with-grid" example in syft.js project (http://localhost:8080 by default),
enter model name and version and start FL training.
## Step 6a: Submit diff
This emulates submitting worker's diff (created earlier in Execute Plan notebook) to PyGrid.
After several diffs submitted, PyGrid will end the cycle and create new model checkpoint and cycle.
(Request is made directly, will be methods on grid client in the future)
```
with open("diff.pb", "rb") as f:
diff = f.read()
report_request = {
"type": "federated/report",
"data": {
"worker_id": auth_response['data']['worker_id'],
"request_key": cycle_response['data']['request_key'],
"diff": base64.b64encode(diff).decode("utf-8")
}
}
report_response = await sendWsMessage(report_request)
print('Report response:', json.dumps(report_response, indent=2))
```
## Prerequisites
This notebook contains examples which are expected *to be run with exactly 4 MPI processes*; not because they wouldn't work otherwise, but simply because it's what their description assumes. For this, you need to:
* Install an MPI distribution on your system, such as OpenMPI, MPICH, or Intel MPI (if not already available).
* Install some optional dependencies, including `mpi4py` and `ipyparallel`; from the root Devito directory, run
```
pip install -r requirements-optional.txt
```
* Create an `ipyparallel` MPI profile, by running our simple setup script. From the root directory, run
```
./scripts/create_ipyparallel_mpi_profile.sh
```
## Launch and connect to an ipyparallel cluster
We're finally ready to launch an ipyparallel cluster. Open a new terminal and run the following command
```
ipcluster start --profile=mpi -n 4
```
Once the engines have started successfully, we can connect to the cluster
```
import ipyparallel as ipp
c = ipp.Client(profile='mpi')
```
In this tutorial, to run commands in parallel over the engines, we will use the `%%px` cell magic.
```
%%px --group-outputs=engine
from mpi4py import MPI
print(f"Hi, I'm rank {MPI.COMM_WORLD.rank}.")
```
## Overview of MPI in Devito
Distributed-memory parallelism via MPI is designed so that users can "think sequentially" for as much as possible. The few things requested to the user are:
* Like any other MPI program, run with `mpirun -np X python ...`
* Some pre- and/or post-processing may be rank-specific (e.g., we may want to plot on a given MPI rank only, even though this might be hidden away in the next Devito releases, when newer support APIs will be provided.
* Parallel I/O (if and when necessary) to populate the MPI-distributed datasets in input to a Devito Operator. If a shared file system is available, there are a few simple alternatives to pick from, such as NumPy’s memory-mapped arrays.
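As a concrete illustration of the last bullet, here is a hedged sketch (not a Devito API) of how a rank might read only its own row block from a shared file using NumPy's memory-mapped arrays. The file path, global shape, and the rank/size stand-ins are all assumptions for the example.

```python
import os
import tempfile
import numpy as np

# Stand-ins for MPI.COMM_WORLD.size and MPI.COMM_WORLD.rank.
nranks, rank = 4, 0
global_shape = (4, 4)
path = os.path.join(tempfile.mkdtemp(), "field.dat")

# A pre-processing step would normally create this file once.
full = np.memmap(path, dtype=np.float32, mode="w+", shape=global_shape)
full[:] = np.arange(16, dtype=np.float32).reshape(global_shape)
full.flush()

# Each rank maps the same file read-only and touches only its own row block.
rows = global_shape[0] // nranks
view = np.memmap(path, dtype=np.float32, mode="r", shape=global_shape)
local = np.array(view[rank * rows:(rank + 1) * rows])
print(local)
```

The memory map never loads the full array on any rank; each process only pages in the rows it actually indexes.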
To enable MPI, users have two options. Either export the environment variable `DEVITO_MPI=1` or, programmatically:
```
%%px --group-outputs=engine
from devito import configuration
configuration['mpi'] = True
%%px --block --group-outputs=engine
# Keep generated code as simple as possible
configuration['openmp'] = False
# Fix platform so that this notebook can be tested by py.test --nbval
configuration['platform'] = 'knl7210'
```
An `Operator` will then generate MPI code, including sends/receives for halo exchanges. Below, we introduce a running example through which we explain how domain decomposition as well as data access (read/write) and distribution work. Performance optimizations are discussed [in a later section](#Performance-optimizations).
Let's start by creating a `TimeFunction`.
```
%%px --group-outputs=engine
from devito import Grid, TimeFunction, Eq, Operator
grid = Grid(shape=(4, 4))
u = TimeFunction(name="u", grid=grid, space_order=2, time_order=0)
```
Domain decomposition is performed when creating a `Grid`. Users may supply their own domain decomposition, but this is not shown in this notebook. Devito exploits the MPI Cartesian topology abstraction to logically split the `Grid` over the available MPI processes. Since `u` is defined over a decomposed `Grid`, its data get distributed too.
```
%%px --group-outputs=engine
u.data
```
Globally, `u` consists of 4x4 points -- this is what users "see". But locally, as shown above, each rank has a 2x2 subdomain. The key point is: **for the user, the fact that `u.data` is distributed is completely abstracted away -- the perception is that of indexing into a classic NumPy array, regardless of whether MPI is enabled or not**. All sorts of NumPy indexing schemes (basic, slicing, etc.) are supported. For example, we can write into a slice-generated view of our data.
```
%%px --group-outputs=engine
u.data[0, 1:-1, 1:-1] = 1.
%%px --group-outputs=engine
u.data
```
The only limitation, currently, is that a data access cannot require a direct data exchange among two or more processes (e.g., the assignment `u.data[0, 0] = u.data[3, 3]` will raise an exception unless both entries belong to the same MPI rank).
We can finally write out a trivial `Operator` to try running something.
```
%%px --group-outputs=engine
#NBVAL_IGNORE_OUTPUT
op = Operator(Eq(u.forward, u + 1))
summary = op.apply(time_M=0)
```
And we can now check again the (distributed) content of our `u.data`
```
%%px --group-outputs=engine
u.data
```
Everything as expected. We could also peek at the generated code, because we may be curious to see what sort of MPI calls Devito has generated...
```
%%px --targets 0
print(op)
```
Hang on. There's nothing MPI-specific here! At least apart from the header file `#include "mpi.h"`. What's going on? Well, it's simple. Devito was smart enough to realize that this trivial `Operator` doesn't even need any sort of halo exchange -- the `Eq` implements a pure "map computation" (i.e., fully parallel), so it can just let each MPI process do its job without ever synchronizing with halo exchanges. We might want to try again with a proper stencil `Eq`.
```
%%px --targets 0
op = Operator(Eq(u.forward, u.dx + 1))
print(op)
```
Uh-oh -- now the generated code looks more complicated than before, though it still is pretty much human-readable. We can spot the following routines:
* `haloupdate0` performs a blocking halo exchange, relying on three additional functions, `gather0`, `sendrecv0`, and `scatter0`;
* `gather0` copies the (generally non-contiguous) boundary data into a contiguous buffer;
* `sendrecv0` takes the buffered data and sends it to one or more neighboring processes; then it waits until all data from the neighboring processes is received;
* `scatter0` copies the received data into the proper array locations.
This is the simplest halo exchange scheme available in Devito. There are a few, and some of them apply aggressive optimizations, [as shown later on](#Performance-optimizations).
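To make the gather/sendrecv/scatter pattern concrete, here is a toy, single-process emulation in plain NumPy. This is purely illustrative: real Devito code generates C with actual MPI calls, and the array layout here is a made-up 1-D example.

```python
import numpy as np

# Two 1-D subdomains; the 99s mark the (stale) halo slots.
left = np.array([0., 1., 2., 99.])    # last slot is the halo
right = np.array([99., 3., 4., 5.])   # first slot is the halo

# gather: copy boundary data into contiguous send buffers
send_from_left = left[2:3].copy()
send_from_right = right[1:2].copy()
# sendrecv: in real generated code this is an MPI send/receive pair
recv_on_left, recv_on_right = send_from_right, send_from_left
# scatter: copy the received data into the halo locations
left[3:4] = recv_on_left
right[0:1] = recv_on_right
print(left, right)
```

After the exchange, each subdomain's halo holds its neighbor's boundary value, which is exactly what a stencil needs before it can update points near the subdomain edge.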
Before looking at other scenarios and performance optimizations, there is one last thing it is worth discussing -- the `data_with_halo` view.
```
%%px --group-outputs=engine
u.data_with_halo
```
This is again a global data view. The shown *with_halo* is the "true" halo surrounding the physical domain, **not** the halo used for the MPI halo exchanges (often referred to as "ghost region"). This makes it trivial for a user to initialize the "true" halo region (which is typically read by a stencil `Eq` when an `Operator` iterates in proximity of the domain boundary).
```
%%px --group-outputs=engine
u.data_with_halo[:] = 1.
%%px --group-outputs=engine
u.data_with_halo
```
## MPI and SparseFunction
A `SparseFunction` represents a sparse set of points which are generically unaligned with the `Grid`. A sparse point could be anywhere within a grid, and therefore carries its own coordinates. Given a sparse point, Devito looks at its coordinates and, based on the domain decomposition, **logically** assigns it to a given MPI process; this is purely logical ownership, as in Python-land, before running an Operator, the sparse point physically lives on the MPI rank which created it. Within `op.apply`, right before jumping to C-land, the sparse points are scattered to their logical owners; upon returning to Python-land, the sparse points are gathered back to their original location.
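To illustrate logical ownership, here is a hypothetical sketch that assigns each sparse point to a rank in an assumed 2x2 row-major decomposition of a (3.0, 3.0) domain, anticipating the four points used below. Note this toy convention picks a single owner per point, whereas (as discussed later) points lying on a decomposition boundary are in practice duplicated across the neighboring ranks.

```python
import numpy as np

# The four sparse points and domain extent used in the example below.
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
extent = (3.0, 3.0)

def logical_owner(x, y):
    # Assumed rank layout over the Cartesian topology (row-major):
    #   0 1
    #   2 3
    row = 0 if x < extent[0] / 2 else 1
    col = 0 if y < extent[1] / 2 else 1
    return 2 * row + col

owners = [logical_owner(x, y) for x, y in coords]
print(owners)
```

With this single-owner convention, the three points sitting exactly on the decomposition boundary all fall to the bottom-right rank, which is precisely why real implementations duplicate boundary points instead.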
In the following example, we attempt injection of four sparse points into the neighboring grid points via linear interpolation.
```
%%px --group-outputs=engine
from devito import Function, SparseFunction
grid = Grid(shape=(4, 4), extent=(3.0, 3.0))
x, y = grid.dimensions
f = Function(name='f', grid=grid)
coords = [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]
sf = SparseFunction(name='sf', grid=grid, npoint=len(coords), coordinates=coords)
```
Let:
* O be a grid point
* x be a halo point
* A, B, C, D be the sparse points
We show the global view, that is what the user "sees".
```
O --- O --- O --- O
| A | | |
O --- O --- O --- O
| | C | B |
O --- O --- O --- O
| | D | |
O --- O --- O --- O
```
And now the local view, that is what the MPI ranks own when jumping to C-land.
```
Rank 0 Rank 1
O --- O --- x x --- O --- O
| A | | | | |
O --- O --- x x --- O --- O
| | C | | C | B |
x --- x --- x x --- x --- x
Rank 2 Rank 3
x --- x --- x x --- x --- x
| | C | | C | B |
O --- O --- x x --- O --- O
| | D | | D | |
O --- O --- x x --- O --- O
```
We observe that the sparse points along the boundary of two or more MPI ranks are _duplicated_ and thus redundantly computed over multiple processes. However, the contributions from these points to the neighboring halo points are naturally discarded, so the final result of the interpolation is as expected. Let's convince ourselves that this is the case. We assign a value of $5$ to each sparse point. Since we are using linear interpolation and all points are placed at the exact center of a grid quadrant, we expect that the contribution of each sparse point to a neighboring grid point will be $5 * 0.25 = 1.25$. Based on the global view above, we eventually expect `f` to look as follows:
```
1.25 --- 1.25 --- 0.00 --- 0.00
| | | |
1.25 --- 2.50 --- 2.50 --- 1.25
| | | |
0.00 --- 2.50 --- 3.75 --- 1.25
| | | |
0.00 --- 1.25 --- 1.25 --- 0.00
```
```
Let's check this out.
```
%%px
#NBVAL_IGNORE_OUTPUT
sf.data[:] = 5.
op = Operator(sf.inject(field=f, expr=sf))
summary = op.apply()
%%px --group-outputs=engine
f.data
```
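The values above can also be reproduced with a plain-NumPy sketch of bilinear injection. This is an illustration of the interpolation arithmetic, not Devito's actual implementation; the unit grid spacing follows from the (3.0, 3.0) extent over 4 points per dimension.

```python
import numpy as np

f = np.zeros((4, 4))
spacing = 1.0  # extent (3.0, 3.0) over 4 grid points per dimension
for (px, py) in [(0.5, 0.5), (1.5, 2.5), (1.5, 1.5), (2.5, 1.5)]:
    i, j = int(px // spacing), int(py // spacing)
    fx, fy = px - i, py - j
    # bilinear weights over the four surrounding grid points
    f[i, j]         += 5 * (1 - fx) * (1 - fy)
    f[i, j + 1]     += 5 * (1 - fx) * fy
    f[i + 1, j]     += 5 * fx * (1 - fy)
    f[i + 1, j + 1] += 5 * fx * fy
print(f)
```

Since every point sits at a quadrant center, all four weights are $0.25$ and each grid neighbor receives $1.25$, reproducing the expected field.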
## Performance optimizations
The Devito compiler applies several optimizations before generating code.
* Redundant halo exchanges are identified and removed. A halo exchange is redundant if a prior halo exchange carries out the same `Function` update and the data is not “dirty” yet.
* Computation/communication overlap, with explicit prodding of the asynchronous progress engine to make sure that non-blocking communications execute in background during the compute part.
* Halo exchanges could also be reshuffled to maximize the extension of the computation/communication overlap region.
To run with all these optimizations enabled, instead of `DEVITO_MPI=1`, users should set `DEVITO_MPI=full`, or, equivalently
```
%%px --group-outputs=engine
configuration['mpi'] = 'full'
```
We could now peek at the generated code to see that things now look different.
```
%%px --group-outputs=engine
op = Operator(Eq(u.forward, u.dx + 1))
# Uncomment below to show code (it's quite verbose)
# print(op)
```
The body of the time-stepping loop has changed, as it now implements a classic computation/communication overlap scheme:
* `haloupdate0` triggers non-blocking communications;
* `compute0` executes the core domain region, that is the sub-region which doesn't require reading from halo data to be computed;
* `halowait0` waits for and terminates the non-blocking communications;
* `remainder0`, which internally calls `compute0`, computes the boundary region requiring the now up-to-date halo data.
| github_jupyter |
```
import os
import jieba
import re
import pandas as pd
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.feature_extraction.text import CountVectorizer
```
## Read the data
```
df=pd.read_csv('./data/biquge_2500.csv',encoding='UTF-8-sig')
df.head()
```
## Get the ids and the title-to-id dictionaries
```
df['id']=np.array([df['link'][i][-9:-5] for i in range(len(df['link']))])
#df.drop(columns=['link'],inplace=True)
df.head()
t2id={}
id2t={}
for i in range(len(df)):
    t2id[df.loc[i]['title']] = df.loc[i]['id']
    id2t[df.loc[i]['id']] = df.loc[i]['title']
t2id['重生之极品赘婿']
df.set_index('id',inplace=True)
df.head()
```
## Tokenize the descriptions and remove stopwords
```
def get_stopwords_list():
    stopwords = [line.strip() for line in open('./stopwords.txt', encoding='UTF-8-sig').readlines()]
    return stopwords
stopwords_list = get_stopwords_list()
stopwords_list[:5]
def remove_digits(input_str):
    punc = u'0123456789.'
    output_str = re.sub(r'[{}]+'.format(punc), '', input_str)
    return output_str
def move_stopwords(sentence_list, stopwords_list):
    out_list = []  # results
    for word in sentence_list:
        if word not in stopwords_list:
            if not remove_digits(word):
                continue
            if word != '\t':
                out_list.append(word)
    return ' '.join(out_list)
#df['split_title']=df['title'].map(lambda x:move_stopwords(list(jieba.cut(x)),stopwords_list))
df['discription']=df['discription'].astype('str')
df.head()
df['discription']=df['discription'].map(lambda x:move_stopwords(list(jieba.cut(x.replace(u'\xa0', u''))),stopwords_list))
df.head()
df['discription'][0]
```
## Vectorize & compute cosine similarity
```
count = CountVectorizer()
count_matrix = count.fit_transform(df['discription'])
cosine_sim = cosine_similarity(count_matrix, count_matrix)
cosine_sim
indices = pd.Series(df.index)
indices[:5]
```
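The vectorize-then-cosine step above can be sketched by hand in plain NumPy, which makes the arithmetic explicit: bag-of-words counts per document, then normalized dot products between the count vectors. The toy corpus here is made up for illustration.

```python
import numpy as np

# Identical token distributions give similarity 1; disjoint ones give 0.
docs = [["sword", "hero", "adventure"],
        ["hero", "adventure", "sword"],
        ["modern", "city", "romance"]]
vocab = sorted({w for d in docs for w in d})
counts = np.array([[d.count(w) for w in vocab] for d in docs], dtype=float)
norms = np.linalg.norm(counts, axis=1, keepdims=True)
sim = (counts / norms) @ (counts / norms).T
print(np.round(sim, 2))
```

This is exactly what `CountVectorizer` followed by `cosine_similarity` computes on the description column, just without the sparse-matrix machinery.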
## Select 10 novels worth recommending
```
def recommendations(title, cosine_sim=cosine_sim):
    recommended_movies = []
    # getting the index of the novel that matches the title (here, its id)
    idx = indices[indices == title].index[0]
    # creating a Series with the similarity scores in descending order
    score_series = pd.Series(cosine_sim[idx]).sort_values(ascending=False)
    # getting the indexes of the 10 most similar novels (skip index 0, the novel itself)
    top_10_indexes = list(score_series.iloc[1:11].index)
    # populating the list with the ids of the best 10 matching novels
    for i in top_10_indexes:
        recommended_movies.append(list(df.index)[i])
    return recommended_movies
rec_list=recommendations('7260')
print('Novels recommended for 《' + id2t['7260'] + '》:')
print()
for v in rec_list:
    print(id2t[v])
df.head()
df['rec']=df.index.map(lambda x:' '.join([id2t[y] for y in recommendations(x)]))
df.head()
df.drop(columns=['discription'],inplace=True)
```
# Save the data
```
df.to_csv("./results/biquge2500_results.csv",index=False, sep=',',encoding='utf-8')
```
| github_jupyter |
```
# USAGE
# python cnn_regression.py --dataset Houses-dataset/Houses\ Dataset/
# import the necessary packages
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
from pyimagesearch import datasets
from pyimagesearch import models
import numpy as np
import argparse
import locale
import os
# construct the argument parser and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-d", "--dataset", type=str, required=True, help="path to input dataset of house images")
args = vars(ap.parse_args())
# construct the path to the input .txt file that contains information
# on each house in the dataset and then load the dataset
print("[INFO] loading house attributes...")
inputPath = os.path.sep.join([args["dataset"], "HousesInfo.txt"])
df = datasets.load_house_attributes(inputPath)
# load the house images and then scale the pixel intensities to the
# range [0, 1]
print("[INFO] loading house images...")
images = datasets.load_house_images(df, args["dataset"])
images = images / 255.0
# partition the data into training and testing splits using 75% of
# the data for training and the remaining 25% for testing
split = train_test_split(df, images, test_size=0.25, random_state=42)
(trainAttrX, testAttrX, trainImagesX, testImagesX) = split
# find the largest house price in the training set and use it to
# scale our house prices to the range [0, 1] (will lead to better
# training and convergence)
maxPrice = trainAttrX["price"].max()
trainY = trainAttrX["price"] / maxPrice
testY = testAttrX["price"] / maxPrice
# create our Convolutional Neural Network and then compile the model
# using mean absolute percentage error as our loss, implying that we
# seek to minimize the absolute percentage difference between our
# price *predictions* and the *actual prices*
model = models.create_cnn(64, 64, 3, regress=True)
opt = Adam(lr=1e-3, decay=1e-3 / 200)
model.compile(loss="mean_absolute_percentage_error", optimizer=opt)
# train the model
print("[INFO] training model...")
model.fit(trainImagesX, trainY, validation_data=(testImagesX, testY), epochs=200, batch_size=8)
# make predictions on the testing data
print("[INFO] predicting house prices...")
preds = model.predict(testImagesX)
# compute the difference between the *predicted* house prices and the
# *actual* house prices, then compute the percentage difference and
# the absolute percentage difference
diff = preds.flatten() - testY
percentDiff = (diff / testY) * 100
absPercentDiff = np.abs(percentDiff)
# compute the mean and standard deviation of the absolute percentage
# difference
mean = np.mean(absPercentDiff)
std = np.std(absPercentDiff)
# finally, show some statistics on our model
locale.setlocale(locale.LC_ALL, "en_US.UTF-8")
print("[INFO] avg. house price: {}, std house price: {}".format(locale.currency(df["price"].mean(), grouping=True),locale.currency(df["price"].std(), grouping=True)))
print("[INFO] mean: {:.2f}%, std: {:.2f}%".format(mean, std))
```
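The percentage-difference statistics at the end of the script boil down to a few lines, sketched here with made-up predictions and targets in place of the model output:

```python
import numpy as np

preds = np.array([1.10, 0.45, 0.80])   # hypothetical scaled predictions
testY = np.array([1.00, 0.50, 0.80])   # hypothetical scaled true prices

percent_diff = (preds - testY) / testY * 100
abs_percent_diff = np.abs(percent_diff)
print(abs_percent_diff.mean(), abs_percent_diff.std())
```

The mean of the absolute percentage differences is the same quantity the model minimizes as its `mean_absolute_percentage_error` loss, so the reported statistic is directly comparable to the training objective.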
| github_jupyter |
# Programming with Python
## Episode 1b - Introduction to Plotting
Teaching: 60 min,
Exercises: 30 min
Objectives
- Perform operations on arrays of data.
- Plot simple graphs from data.
### Array operations
Often, we want to do more than add, subtract, multiply, and divide array elements. NumPy knows how to do more complex operations, too. If we want to find the average inflammation for all patients on all days, for example, we can ask NumPy to compute data's mean value:
```
print(numpy.mean(data))
```
```
import numpy
data = numpy.loadtxt(fname = 'data/inflammation-01.csv', delimiter =',')
print(numpy.mean(data))
```
`mean()` is a function that takes an array as an argument.
However, not all functions have input.
Generally, a function uses inputs to produce outputs. However, some functions produce outputs without needing any input. For example, checking the current time doesn't require any input.
```
import time
print(time.ctime())
```
```
import time
print(time.ctime())
from datetime import datetime
print(datetime.today().strftime('%Y%m%d'))
```
For functions that don't take in any arguments, we still need parentheses `()` to tell Python to go and do something for us.
NumPy has lots of useful functions that take an array as input. Let's use three of those functions to get some descriptive values about the dataset. We'll also use *multiple assignment*, a convenient Python feature that will enable us to do this all in one line.
```
maxval, minval, stdval = numpy.max(data), numpy.min(data), numpy.std(data)
```
```
maxval, minval, stdval = numpy.max(data), numpy.min(data), numpy.std(data)
```
Here we've assigned the return value from `numpy.max(data)` to the variable `maxval`, the return value from `numpy.min(data)` to `minval`, and so on.
Let's have a look at the results:
```
print('maximum inflammation:', maxval)
print('minimum inflammation:', minval)
print('standard deviation:', stdval)
```
```
print('maximum inflammation:', maxval)
print('minimum inflammation:', minval)
print('standard deviation:', stdval)
```
#### Mystery Functions in IPython
How did we know what functions NumPy has and how to use them?
If you are working in IPython or in a Jupyter Notebook (which we are), there is an easy way to find out. If you type the name of something followed by a dot `.`, then you can use `Tab` completion (e.g. type `numpy.` and then press `tab`) to see a list of all functions and attributes that you can use.
```
numpy.cumprod?
```
After selecting one, you can also add a question mark `?` (e.g. `numpy.cumprod?`), and IPython will return an explanation of the method!
This is the same as running `help(numpy.cumprod)`.
```
help(numpy.cumprod)
```
When analysing data, though, we often want to look at variations in statistical values, such as the maximum inflammation per patient or the average inflammation per day. One way to do this is to create a new temporary array of the data we want, then ask it to do the calculation:
```
patient_0 = data[0, :] # Comment: 0 on the first axis (rows), everything on the second (columns)
print('maximum inflammation for patient 0:', numpy.max(patient_0))
```
```
patient_0 = data[0, :] # Comment: 0 on the first axis (rows), everything on the second axis (columns)
print('maximum inflammation for patient 0: ', numpy.max(patient_0))
```
Everything in a line of code following the `#` symbol is a comment that is ignored by Python. Comments allow programmers to leave explanatory notes for other programmers or their future selves.
```
# Test comment
```
We don't actually need to store the row in a variable of its own. Instead, we can combine the selection and the function call:
```
print('maximum inflammation for patient 2:', numpy.max(data[2, :]))
```
```
print('maximum inflammation for patient 2: ', numpy.max(data[2, :]))
```
#### Operations Across Axes
What if we need the maximum inflammation for each patient over all days, or the average for each day? In other words, we want to perform the operation across a different axis.
To support this functionality, most array functions allow us to specify the axis we want to work on. If we ask for the average across axis 0 (rows in our 2D example), we get:
```
print(numpy.mean(data, axis=0))
```
```
print(numpy.mean(data, axis = 0))
```
As a quick check, we can ask this array what its shape is:
```
print(numpy.mean(data, axis=0).shape)
```
The result `(40,)` tells us we have an N×1 vector, so this is the average inflammation per day across all patients. If we average across axis 1 (columns in our example), we use:
```
print(numpy.mean(data, axis=1))
```
```
print(numpy.mean(data, axis=0).shape)
print(numpy.mean(data, axis = 1).shape)
```
which is the average inflammation per patient across all days.
And if you are now confused, here's a simpler example:
```
tiny = [[1, 2, 3, 4],
[10, 20, 30, 40],
[100, 200, 300, 400]]
print(tiny)
print('Sum the entire matrix: ', numpy.sum(tiny))
```
```
tiny = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
print(tiny)
print('Sum the entire matrix: ', numpy.sum(tiny))
```
Now let's add the rows (first axis, i.e. zeroth)
```
print('Sum the columns (i.e. add the rows): ', numpy.sum(tiny, axis=0))
```
and now on the other dimension (axis=1, i.e. the second dimension)
```
print('Sum the rows (i.e. add the columns): ', numpy.sum(tiny, axis=1))
```
Here's a diagram to demonstrate how array axes work in NumPy:

- `numpy.sum(data)` --> Sum all elements in data
- `numpy.sum(data, axis=0)` --> Sum vertically (down, axis=0)
- `numpy.sum(data, axis=1)` --> Sum horizontally (across, axis=1)
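A quick sanity check of the three summation modes (an illustrative addition, using a tiny array rather than the lesson's data file):

```python
import numpy

data = numpy.array([[1, 2],
                    [3, 4]])
print(numpy.sum(data))          # whole matrix: 10
print(numpy.sum(data, axis=0))  # down the columns: [4 6]
print(numpy.sum(data, axis=1))  # across the rows: [3 7]
```

Notice that each axis-wise sum removes the axis it sums over, so a 2x2 array collapses to a length-2 vector either way.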
### Visualising data
The mathematician Richard Hamming once said, “The purpose of computing is insight, not numbers,” and the best way to develop insight is often to visualise data.
Visualisation deserves an entire workshop of its own, but we can explore a few features of Python's `matplotlib` library here. While there is no official plotting library, `matplotlib` is the de facto standard. First, we will import the `pyplot` module from `matplotlib` and use two of its functions to create and display a heat map of our data:
```
import matplotlib.pyplot
plot = matplotlib.pyplot.imshow(data)
```
```
import matplotlib.pyplot
plot = matplotlib.pyplot.imshow(data)
```
#### Heatmap of the Data
Blue pixels in this heat map represent low values, while yellow pixels represent high values. As we can see, inflammation rises and falls over a 40-day period.
#### Some IPython Magic
If you're using a Jupyter notebook, you'll need to execute the following command in order for your matplotlib images to appear in the notebook when show() is called:
```
%matplotlib inline
```
The `%` indicates an IPython magic function - a function that is only valid within the notebook environment. Note that you only have to execute this function once per notebook.
Let's take a look at the average inflammation over time:
```
ave_inflammation = numpy.mean(data, axis=0)
ave_plot = matplotlib.pyplot.plot(ave_inflammation)
```
Here, we have put the average per day across all patients in the variable `ave_inflammation`, then asked `matplotlib.pyplot` to create and display a line graph of those values. The result is a roughly linear rise and fall, which is suspicious: we might instead expect a sharper rise and slower fall.
Let's have a look at two other statistics, the maximum inflammation of all the patients each day:
```
max_plot = matplotlib.pyplot.plot(numpy.max(data, axis=0))
```
... and the minimum inflammation across all patients each day ...
```
min_plot = matplotlib.pyplot.plot(numpy.min(data, axis=0))
matplotlib.pyplot.show()
```
The maximum value rises and falls smoothly, while the minimum seems to be a step function. Neither trend seems particularly likely, so either there's a mistake in our calculations or something is wrong with our data. This insight would have been difficult to reach by examining the numbers themselves without visualisation tools.
### Grouping plots
You can group similar plots in a single figure using subplots. This script below uses a number of new commands. The function `matplotlib.pyplot.figure()` creates a space into which we will place all of our plots. The parameter `figsize` tells Python how big to make this space.
Each subplot is placed into the figure using its `add_subplot` method. The `add_subplot` method takes 3 parameters. The first denotes how many total rows of subplots there are, the second parameter refers to the total number of subplot columns, and the final parameter denotes which subplot your variable is referencing (left-to-right, top-to-bottom). Each subplot is stored in a different variable (`axes1`, `axes2`, `axes3`).
Once a subplot is created, the axes can be labelled using the `set_xlabel()` command (or `set_ylabel()`). Here are our three plots side by side:
```
import numpy
import matplotlib.pyplot
data = numpy.loadtxt(fname='data/inflammation-01.csv', delimiter=',')
fig = matplotlib.pyplot.figure(figsize=(15.0, 5.0))
axes1 = fig.add_subplot(1, 3, 1)
axes2 = fig.add_subplot(1, 3, 2)
axes3 = fig.add_subplot(1, 3, 3)
axes1.set_ylabel('average')
plot = axes1.plot(numpy.mean(data, axis=0))
axes2.set_ylabel('max')
plot = axes2.plot(numpy.max(data, axis=0))
axes3.set_ylabel('min')
axes3.plot(numpy.min(data, axis=0))
fig.tight_layout()
```
##### The Previous Plots as Subplots
The call to `loadtxt` reads our data, and the rest of the program tells the plotting library how large we want the figure to be, that we're creating three subplots, what to draw for each one, and that we want a tight layout. (If we leave out that call to `fig.tight_layout()`, the graphs will actually be squeezed together more closely.)
Exercise: See if you can add the label `Days` to the X-Axis of each subplot
##### Scientists Dislike Typing.
We will always use the syntax `import numpy` to import NumPy. However, in order to save typing, it is often suggested to make a shortcut like so: `import numpy as np`. If you ever see Python code online using a NumPy function with np (for example, `np.loadtxt(...))`, it's because they've used this shortcut. When working with other people, it is important to agree on a convention of how common libraries are imported.
In other words:
```
import numpy
numpy.random.rand()
```
is the same as:
```
import numpy as np
np.random.rand()
```
## Exercises
### Plot Scaling
Why do all of our plots stop just short of the upper end of our graph?
Solution:
If we want to change this, we can use the `set_ylim(min, max)` method of each ‘axes’, for example:
```
axes3.set_ylim(0,6)
```
Update your plotting code to automatically set a more appropriate scale. (Hint: you can make use of the max and min methods to help.)
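One possible solution sketch, using synthetic stand-in data so it runs on its own (the headless `Agg` backend and the random data are assumptions, not part of the lesson):

```python
import numpy
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot

# Synthetic stand-in for the inflammation data (60 patients x 40 days).
data = numpy.random.rand(60, 40) * 6
fig = matplotlib.pyplot.figure(figsize=(5.0, 5.0))
axes = fig.add_subplot(1, 1, 1)
axes.set_ylabel('max')
axes.plot(numpy.max(data, axis=0))
# derive the scale from the data itself, with a little headroom at the top
axes.set_ylim(numpy.min(data), numpy.max(data) + 0.1)
```

Because the limits come from `numpy.min` and `numpy.max` of the data, the plot no longer clips near the top whatever the dataset contains.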
### Drawing Straight Lines
In the centre and right subplots above, we expect all lines to look like step functions because non-integer values are not realistic for the minimum and maximum values. However, you can see that the lines are not always vertical or horizontal, and in particular the step function in the subplot on the right looks slanted. Why is this?
Try adding a `drawstyle` parameter to your plotting:
```
axes2.set_ylabel('average')
axes2.plot(numpy.mean(data, axis=0), drawstyle='steps-mid')
```
Solution:
### Make Your Own Plot
Create a plot showing the standard deviation (using `numpy.std`) of the inflammation data for each day across all patients.
### Moving Plots Around
Modify the program to display the three plots vertically rather than side by side.
### Stacking Arrays
Arrays can be concatenated and stacked on top of one another, using NumPy’s `vstack` and `hstack` functions for vertical and horizontal stacking, respectively.
Run the following code to view `A`, `B` and `C`
```
import numpy
A = numpy.array([[1,2,3], [4,5,6], [7, 8, 9]])
print('A = ')
print(A)
B = numpy.hstack([A, A])
print('B = ')
print(B)
C = numpy.vstack([A, A])
print('C = ')
print(C)
```
Write some additional code that slices the first and last columns of `A`,
and stacks them into a 3x2 array. Make sure to print the results to verify your solution.
```
print(A[:, 0])   # all rows from the first column
print(A[:, -1])  # all rows from the last column
# one possible solution: stack the two columns side by side as 3x1 slices
result = numpy.hstack([A[:, :1], A[:, -1:]])
print(result)
```
### Change In Inflammation
This patient data is longitudinal in the sense that each row represents a series of observations relating to one individual. This means that the change in inflammation over time is a meaningful concept.
The `numpy.diff()` function takes a NumPy array and returns the differences between two successive values along a specified axis. For example, with the following `numpy.array`:
```
npdiff = numpy.array([ 0, 2, 5, 9, 14])
```
Calling `numpy.diff(npdiff)` would do the following calculations
`2 - 0`, `5 - 2`, `9 - 5`, `14 - 9`
and produce the following array.
`[2, 3, 4, 5]`
```
npdiff = numpy.array([ 0, 2, 5, 9, 14])
numpy.diff(npdiff)
```
In our `data`, along which axis would it make sense to use this function?
Solution
If the shape of an individual data file is (60, 40) (60 rows and 40 columns), what would the shape of the array be after you run the diff() function and why?
Solution
How would you find the largest change in inflammation for each patient? Does it matter if the change in inflammation is an increase or a decrease? Hint: NumPy has a function called `numpy.absolute()`.
Solution:
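A possible solution sketch, with synthetic stand-in data in place of the inflammation file:

```python
import numpy

# Stand-in for the (60 patients x 40 days) inflammation data.
data = numpy.random.rand(60, 40)
# diff along axis=1 (the days); the shape shrinks to (60, 39)
changes = numpy.diff(data, axis=1)
# absolute values so decreases count as much as increases,
# then the maximum per patient
largest_change = numpy.max(numpy.absolute(changes), axis=1)
print(largest_change.shape)
```

Taking the absolute value before the maximum matters: without it, a large drop in inflammation would never be reported as the "largest change".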
## Key Points
Use `numpy.mean(array)`, `numpy.max(array)`, and `numpy.min(array)` to calculate simple statistics.
Use `numpy.mean(array, axis=0)` or `numpy.mean(array, axis=1)` to calculate statistics across the specified axis.
Use the `pyplot` library from `matplotlib` for creating simple visualizations.
# Save, and version control your changes
- save your work: `File -> Save`
- add all your changes to your local repository: `Terminal -> git add .`
- commit your updates a new Git version: `Terminal -> git commit -m "End of Episode 1b"`
- push your latest commits to GitHub: `Terminal -> git push`
| github_jupyter |
```
# Imports
import numpy as np
import torch
from phimal_utilities.data import Dataset
from phimal_utilities.data.burgers import BurgersDelta
from phimal_utilities.analysis import load_tensorboard
from DeePyMoD_SBL.deepymod_torch.library_functions import library_1D_in
from DeePyMoD_SBL.deepymod_torch.DeepMod import DeepModDynamic
from DeePyMoD_SBL.deepymod_torch.training import train_dynamic
from sklearn.linear_model import LassoLarsIC, LassoCV
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format = 'svg'
if torch.cuda.is_available():
    torch.set_default_tensor_type('torch.cuda.FloatTensor')
np.random.seed(42)
torch.manual_seed(42)
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
v = 0.1
A = 1.0
# Making grid
x = np.linspace(-3, 4, 100)
t = np.linspace(0.5, 5.0, 50)
x_grid, t_grid = np.meshgrid(x, t, indexing='ij')
dataset = Dataset(BurgersDelta, v=v, A=A)
X_train, y_train, rand_idx = dataset.create_dataset(x_grid.reshape(-1, 1), t_grid.reshape(-1, 1), n_samples=2000, noise=0.05, random=True, return_idx=True)
# Making data
estimator = LassoLarsIC(fit_intercept=False)
#estimator = LassoCV()
config = {'n_in': 2, 'hidden_dims': [30, 30, 30, 30, 30], 'n_out': 1, 'library_function':library_1D_in, 'library_args':{'poly_order':2, 'diff_order': 3}, 'sparsity_estimator': estimator}
model = DeepModDynamic(**config)
optimizer = torch.optim.Adam(model.parameters(), betas=(0.99, 0.999), amsgrad=True)
train_dynamic(model, X_train, y_train, optimizer, 5000, log_dir='runs/testing/')
df_part1 = load_tensorboard('runs/testing/')
plt.semilogy(df_part1.index, df_part1['MSE_0'])
plt.semilogy(df_part1.index, df_part1['Regression_0'])
df_part1.keys()
plt.plot(df_part1.index, df_part1['L1_0'])
plt.ylim([1.3, 6])
coeff_keys = [key for key in df_part1.keys() if key[:5]=='coeff']
scaled_coeff_keys = [key for key in df_part1.keys() if key[:6]=='scaled']
coeff_keys
plt.plot(df_part1[coeff_keys])
plt.ylim([-2, 2])
plt.plot(df_part1[scaled_coeff_keys])
plt.ylim([-2, 2])
active_map = df_part1[coeff_keys][::100]
active_map[active_map!=0.0] = 1.0
sns.heatmap(active_map)
df_part1['L1_0'].idxmin()
model.constraints.sparsity_mask
model.constraints.coeff_vector
model.sparsity_estimator.coef_
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2)
kmeans.fit_predict(np.abs(model.sparsity_estimator.coef_[:, None]))
train_dynamic(model, X_train, y_train, optimizer, 1000, loss_func_args={'patience': 50, 'initial_idx': 200}, log_dir='runs/part_3/')
model.constraints.sparsity_mask
model.constraints.coeff_vector
df_part2 = load_tensorboard('runs/part_3/')
#plt.semilogy(df_part2.index, df_part2['MSE_0'])
plt.semilogy(df_part2.index, df_part2['Regression_0'])
plt.plot(df_part2.index, df_part2['L1_0'])
```
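The `KMeans` step near the end clusters coefficient magnitudes to separate active from inactive library terms. Here is a standalone sketch of that idea with made-up coefficients (the values are illustrative, not from the model above):

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up sparse-regression coefficients: two clearly active terms.
coeffs = np.array([0.01, 0.002, 1.2, 0.95, 0.0])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.abs(coeffs)[:, None])
# the cluster containing the largest-magnitude coefficient is "active"
active = labels == labels[np.argmax(np.abs(coeffs))]
print(active)
```

Clustering magnitudes avoids having to pick a hard threshold by hand: the gap between the small and large coefficients determines the split.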
| github_jupyter |
<img src="images/dask_horizontal.svg" align="right" width="30%">
# Distributed, Advanced
## Distributed futures
```
from dask.distributed import Client
c = Client(n_workers=4)
c.cluster
```
In the previous chapter, we showed that executing a calculation (created using delayed) with the distributed executor is identical to any other executor. However, we now have access to additional functionality, and control over what data is held in memory.
To begin, the `futures` interface (derived from the built-in `concurrent.futures`) allows map-reduce-like functionality. We can submit individual functions for evaluation with one set of inputs using `submit()`, or evaluate a function over a sequence of inputs with `map()`. Notice that the call returns immediately, giving one or more *futures*, whose status begins as "pending" and later becomes "finished". There is no blocking of the local Python session.
Here is the simplest example of `submit` in action:
```
def inc(x):
    return x + 1
fut = c.submit(inc, 1)
fut
```
We can re-execute the following cell as often as we want as a way to poll the status of the future. This could of course be done in a loop, pausing for a short time on each iteration. We could continue with our work, or view a progress bar of work still going on, or force a wait until the future is ready.
In the meantime, the `status` dashboard (link above next to the Cluster widget) has gained a new element in the task stream, indicating that `inc()` has completed, and the progress section at the bottom shows one task complete and held in memory.
```
fut
```
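Since the futures interface mirrors the built-in `concurrent.futures` module, the polling pattern can be sketched with the standard library alone (a minimal sketch; the `ThreadPoolExecutor` here stands in for the distributed client, and the sleep durations are arbitrary):

```python
from concurrent.futures import ThreadPoolExecutor
import time

def inc(x):
    time.sleep(0.2)
    return x + 1

# submit() returns immediately with a pending Future, just like c.submit()
with ThreadPoolExecutor(max_workers=2) as pool:
    fut = pool.submit(inc, 1)
    # poll the future's status in a loop instead of blocking outright
    while not fut.done():
        time.sleep(0.05)
    print(fut.result())  # 2
```

With a Dask client, inspecting `fut.status` plays the same role as `fut.done()` here, and the same non-blocking behaviour applies.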
Possible alternatives you could investigate:
```python
from dask.distributed import wait, progress
progress(fut)
```
would show a progress bar in *this* notebook, rather than having to go to the dashboard. This progress bar is also asynchronous, and doesn't block the execution of other code in the meanwhile.
```python
wait(fut)
```
would block and force the notebook to wait until the computation pointed to by `fut` was done. However, note that since the result of `inc()` is sitting on the cluster, it would take **no time** to execute the computation now, because Dask notices that we are asking for the result of a computation it already knows about. More on this later.
```
# grab the information back - this blocks if fut is not ready
c.gather(fut)
# equivalent action when only considering a single future
# fut.result()
```
Here we see an alternative way to execute work on the cluster: when you submit or map with the inputs as futures, the *computation moves to the data* rather than the other way around, and the client, in the local Python session, need never see the intermediate values. This is similar to building the graph using delayed, and indeed, delayed can be used in conjunction with futures. Here we use the delayed object `total` from before.
```
# Some trivial work that takes time
# repeated from the Distributed chapter.
from dask import delayed
import time
def inc(x):
    time.sleep(5)
    return x + 1

def dec(x):
    time.sleep(3)
    return x - 1

def add(x, y):
    time.sleep(7)
    return x + y
x = delayed(inc)(1)
y = delayed(dec)(2)
total = delayed(add)(x, y)
# notice the difference from total.compute()
# notice that this cell completes immediately
fut = c.compute(total)
fut
c.gather(fut) # waits until result is ready
```
### `Client.submit`
`submit` takes a function and arguments, pushes these to the cluster, returning a *Future* representing the result to be computed. The function is passed to a worker process for evaluation. Note that this cell returns immediately, while computation may still be ongoing on the cluster.
```
fut = c.submit(inc, 1)
fut
```
This looks a lot like doing `compute()`, above, except now we are passing the function and arguments directly to the cluster. To anyone used to `concurrent.futures`, this will look familiar. This new `fut` behaves the same way as the one above. Note that we have now overwritten the previous definition of `fut`, which will get garbage-collected; as a result, that previous result is released by the cluster.
### Exercise: Rebuild the above delayed computation using `Client.submit` instead
The arguments passed to `submit` can be futures from other submit operations or delayed objects. The former, in particular, demonstrates the concept of *moving the computation to the data*, which is one of the most powerful elements of programming with Dask.
```
# Your code here
x = c.submit(inc, 1)
y = c.submit(dec, 2)
total = c.submit(add, x, y)
print(total) # This is still a future
c.gather(total) # This blocks until the computation has finished
```
Each future represents a result held by, or being evaluated on, the cluster. Thus we can control caching of intermediate values: when a future is no longer referenced, its value is forgotten. In the solution above, futures are held for each of the function calls. These results would not need to be re-evaluated if we chose to submit more work that needed them.
We can explicitly pass data from our local session into the cluster using `scatter()`, but it is usually better to construct functions that do the loading of data within the workers themselves, so that there is no need to serialise and communicate the data. Most of the loading functions within Dask, such as `dd.read_csv`, work this way. Similarly, we normally don't want to `gather()` results that are too large to fit in local memory.
The [full API](http://distributed.readthedocs.io/en/latest/api.html) of the distributed scheduler gives details of interacting with the cluster, which remember, can be on your local machine or possibly on a massive computational resource.
The futures API offers a work submission style that can easily emulate the map/reduce paradigm (see `c.map()`) that may be familiar to many people. The intermediate results, represented by futures, can be passed to new tasks without having to pull them locally from the cluster, and new work can be assigned to operate on the output of previous jobs that haven't even begun yet.
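The map/reduce style itself can be sketched with the built-in `concurrent.futures` API that Dask mirrors (a minimal sketch; with a distributed client you would call `c.map()` and `c.submit()` instead, and the intermediate futures would stay on the cluster):

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # map step: one task per input, evaluated in parallel
    squares = list(pool.map(square, range(5)))
    # reduce step: combine the intermediate results
    total = sum(squares)
print(total)  # 30
```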
Generally, any Dask operation that is executed using `.compute()` can be submitted for asynchronous execution using `c.compute()` instead, and this applies to all collections. Here is an example with the calculation previously seen in the Bag chapter. We have replaced the `.compute()` method there with the distributed client version, so, again, we could continue to submit more work (perhaps based on the result of the calculation), or, in the next cell, follow the progress of the computation. A similar progress-bar appears in the monitoring UI page.
```
%run prep.py -d accounts
import dask.bag as db
import os
import json
filename = os.path.join('data', 'accounts.*.json.gz')
lines = db.read_text(filename)
js = lines.map(json.loads)
f = c.compute(js.filter(lambda record: record['name'] == 'Alice')
               .pluck('transactions')
               .flatten()
               .pluck('amount')
               .mean())
from dask.distributed import progress
# note that progress must be the last line of a cell
# in order to show up
progress(f)
# get result.
c.gather(f)
# release values by deleting the futures
del f, fut, x, y, total
```
### Persist
Considering which data should be loaded by the workers, as opposed to passed, and which intermediate values to persist in worker memory, will in many cases determine the computational efficiency of a process.
In the example here, we repeat a calculation from the Array chapter - notice that each call to `compute()` is roughly the same speed, because the loading of the data is included every time.
```
%run prep.py -d random
import h5py
import os
f = h5py.File(os.path.join('data', 'random.hdf5'), mode='r')
dset = f['/x']
import dask.array as da
x = da.from_array(dset, chunks=(1000000,))
%time x.sum().compute()
%time x.sum().compute()
```
If, instead, we persist the data to RAM up front (this takes a few seconds to complete - we could `wait()` on this process), then further computations will be much faster.
```
# changes x from a set of delayed prescriptions
# to a set of futures pointing to data in RAM
# See this on the UI dashboard.
x = c.persist(x)
%time x.sum().compute()
%time x.sum().compute()
```
Naturally, persisting every intermediate along the way is a bad idea, because this will tend to fill up all available RAM and make the whole system slow (or break!). The ideal persist point is often at the end of a set of data cleaning steps, when the data is in a form which will get queried often.
**Exercise**: how is the memory associated with `x` released, once we know we are done with it?
## Asynchronous computation
<img style="float: right;" src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/32/Rosenbrock_function.svg/450px-Rosenbrock_function.svg.png" height=200 width=200>
One benefit of using the futures API is that you can have dynamic computations that adjust as things progress. Here we implement a simple naive search by looping through results as they come in, and submit new points to compute as others are still running.
Watching the [diagnostics dashboard](../../9002/status) as this runs you can see computations are being concurrently run while more are being submitted. This flexibility can be useful for parallel algorithms that require some level of synchronization.
Let's perform a very simple minimization using dynamic task submission. The function of interest is known as the Rosenbrock function:
```
# a simple function with interesting minima
import time
def rosenbrock(point):
    """Compute the Rosenbrock function and return the point and result."""
    time.sleep(0.1)
    score = (1 - point[0])**2 + 2 * (point[1] - point[0]**2)**2
    return point, score
```
Initial setup, including creating a graphical figure. We use Bokeh for this, which allows for dynamic update of the figure as results come in.
```
from bokeh.io import output_notebook, push_notebook
from bokeh.models.sources import ColumnDataSource
from bokeh.plotting import figure, show
import numpy as np
output_notebook()
# set up plot background
N = 500
x = np.linspace(-5, 5, N)
y = np.linspace(-5, 5, N)
xx, yy = np.meshgrid(x, y)
d = (1 - xx)**2 + 2 * (yy - xx**2)**2
d = np.log(d)
p = figure(x_range=(-5, 5), y_range=(-5, 5))
p.image(image=[d], x=-5, y=-5, dw=10, dh=10, palette="Spectral11");
```
We start off with a point at (0, 0), and randomly scatter test points around it. Each evaluation takes ~100ms, and as results come in, we test to see if we have a new best point, and choose random points around that new best point as the search box shrinks.
We print the function value and current best location each time we have a new best value.
```
from dask.distributed import as_completed
from random import uniform
scale = 5 # Initial random perturbation scale
best_point = (0, 0) # Initial guess
best_score = float('inf') # Best score so far
startx = [uniform(-scale, scale) for _ in range(10)]
starty = [uniform(-scale, scale) for _ in range(10)]
# set up plot
source = ColumnDataSource({'x': startx, 'y': starty, 'c': ['grey'] * 10})
p.circle(source=source, x='x', y='y', color='c')
t = show(p, notebook_handle=True)
# initial 10 random points
futures = [c.submit(rosenbrock, (x, y)) for x, y in zip(startx, starty)]
iterator = as_completed(futures)
for res in iterator:
    # take a completed point; is it an improvement?
    point, score = res.result()
    if score < best_score:
        best_score, best_point = score, point
        print(score, point)
    x, y = best_point
    newx, newy = (x + uniform(-scale, scale), y + uniform(-scale, scale))
    # update plot
    source.stream({'x': [newx], 'y': [newy], 'c': ['grey']}, rollover=20)
    push_notebook(document=t)
    # add new point, dynamically, to work on the cluster
    new_point = c.submit(rosenbrock, (newx, newy))
    iterator.add(new_point)  # start tracking the new task as well
    # narrow the search and consider stopping
    scale *= 0.99
    if scale < 0.001:
        break
point
```
## Debugging
When something goes wrong in a distributed job, it is hard to figure out what the problem was and what to do about it. When a task raises an exception, the exception will show up when that result, or any other result that depends upon it, is gathered.
Consider the following delayed calculation to be computed by the cluster. As usual, we get back a future, which the cluster is working on to compute (this happens very slowly for the trivial procedure).
```
@delayed
def ratio(a, b):
    return a // b
ina = [5, 25, 30]
inb = [5, 5, 6]
out = delayed(sum)([ratio(a, b) for (a, b) in zip(ina, inb)])
f = c.compute(out)
f
```
We only get to know what happened when we gather the result (this is also true for `out.compute()`, except we could not have done other stuff in the meantime). For the first set of inputs, it works fine.
```
c.gather(f)
```
But if we introduce bad input, an exception is raised. The exception happens in `ratio`, but only comes to our attention when calculating the sum.
```
ina = [5, 25, 30]
inb = [5, 0, 6]
out = delayed(sum)([ratio(a, b) for (a, b) in zip(ina, inb)])
f = c.compute(out)
c.gather(f)
```
The display in this case makes the origin of the exception obvious, but this is not always the case. How should this be debugged, how would we go about finding out the exact conditions that caused the exception?
The first step, of course, is to write well-tested code which makes appropriate assertions about its input and clear warnings and error messages when something goes wrong. This applies to all code.
The most typical thing to do is to execute some portion of the computation in the local thread, so that we can run the Python debugger and query the state of things at the time that the exception happened. Obviously, this cannot be performed on the whole data-set when dealing with Big Data on a cluster, but a suitable sample will probably do even then.
```
import dask
with dask.config.set(scheduler="sync"):
    # do NOT use c.compute(out) here - we specifically do not
    # want the distributed scheduler
    out.compute()
# uncomment to enter post-mortem debugger
# %debug
```
The trouble with this approach is that Dask is meant for the execution of large datasets/computations - you probably can't simply run the whole thing in one local thread, else you wouldn't have used Dask in the first place. So the code above should only be used on a small part of the data that also exhibits the error.
Furthermore, the method will not work when you are dealing with futures (such as `f`, above, or after persisting) instead of delayed-based computations.
As an alternative, you can ask the scheduler to analyze your calculation and find the specific sub-task responsible for the error, and pull only it and its dependencies locally for execution.
```
c.recreate_error_locally(f)
# uncomment to enter post-mortem debugger
# %debug
```
Finally, there are errors other than exceptions, when we need to look at the state of the scheduler/workers. In the standard "LocalCluster" we started, we
have direct access to these.
```
[(k, v.state) for k, v in c.cluster.scheduler.tasks.items() if v.exception is not None]
```
| github_jupyter |
```
"""
A randomly connected network learning a sequence
This example contains a reservoir network of 500 neurons.
400 neurons are excitatory and 100 neurons are inhibitory.
The weights are initialized randomly, based on a log-normal distribution.
The network activity is stimulated with three different inputs (A, B, C).
The inputs are given in a row (A -> B -> C -> A -> ...)
The experiment is defined in 'pelenet/experiments/sequence.py' file.
A log file, parameters, and plot figures are stored in the 'log' folder for every run of the simulation.
NOTE: The main README file contains some more information about the structure of pelenet
"""
# Load pelenet modules
from pelenet.utils import Utils
from pelenet.experiments.sequence import SequenceExperiment
# Official modules
import numpy as np
import matplotlib.pyplot as plt
# Overwrite default parameters (pelenet/parameters/ and pelenet/experiments/sequence.py)
parameters = {
    # Experiment
    'seed': 1,  # Random seed
    'trials': 10,  # Number of trials
    'stepsPerTrial': 60,  # Number of simulation steps for every trial
    # Neurons
    'refractoryDelay': 2,  # Refractory period
    'voltageTau': 100,  # Voltage time constant
    'currentTau': 5,  # Current time constant
    'thresholdMant': 1200,  # Spiking threshold for membrane potential
    # Network
    'reservoirExSize': 400,  # Number of excitatory neurons
    'reservoirConnPerNeuron': 35,  # Number of connections per neuron
    'isLearningRule': True,  # Apply a learning rule
    'learningRule': '2^-2*x1*y0 - 2^-2*y1*x0 + 2^-4*x1*y1*y0 - 2^-3*y0*w*w',  # Defines the learning rule
    # Input
    'inputIsSequence': True,  # Activates sequence input
    'inputSequenceSize': 3,  # Number of input clusters in sequence
    'inputSteps': 20,  # Number of steps the trace input should drive the network
    'inputGenSpikeProb': 0.8,  # Probability of spike for the generator
    'inputNumTargetNeurons': 40,  # Number of neurons activated by the input
    # Probes
    'isExSpikeProbe': True,  # Probe excitatory spikes
    'isInSpikeProbe': True,  # Probe inhibitory spikes
    'isWeightProbe': True  # Probe weight matrix at the end of the simulation
}
# Initializes the experiment; also initializes the log
# Creating a new object results in a new log entry in the 'log' folder
# The name is optional, it is extended to the folder in the log directory
exp = SequenceExperiment(name='random-network-sequence-learning', parameters=parameters)
# Instantiate the utils singleton
utils = Utils.instance()
# Build the network, in this function the weight matrix, inputs, probes, etc. are defined and created
exp.build()
# Run the network simulation, afterwards the probes are postprocessed to nice arrays
exp.run()
# Weight matrix before learning (randomly initialized)
exp.net.plot.initialExWeightMatrix()
# Plot distribution of weights
exp.net.plot.initialExWeightDistribution(figsize=(12,3))
# Plot spike trains of the excitatory (red) and inhibitory (blue) neurons
exp.net.plot.reservoirSpikeTrain(figsize=(12,6), to=600)
# Weight matrix after learning
exp.net.plot.trainedExWeightMatrix()
# Sorted weight matrix after learning
supportMask = utils.getSupportWeightsMask(exp.net.trainedWeightsExex)
exp.net.plot.weightsSortedBySupport(supportMask)
```
| github_jupyter |
# Grover's Algorithm
Suppose we are given a function $F:X\rightarrow \{0, 1\}$.
If there is only a single argument $x\in X$ for which $F(x)=1$, then on a classical computer we need $\mathcal{O}(|X|)$ checks to find it. That is, in the worst case we inspect every value $x$ in $X$, and the one we are looking for turns out to be the last one we try. Even if there are several such arguments, finding at least one of them still takes linear time.
In quantum systems we can define a function $\tilde{F}$ so that it marks the desired arguments with a phase factor of $(-1)$. Recall that in quantum systems a function can be called on a superposition of [all] arguments. So if we call $\tilde{F}(H^{\otimes N})$, then in the resulting superposition every state that does not satisfy the function will have amplitude $+1*\frac{1}{\sqrt{2^N}}$, while those that do will have $-1*\frac{1}{\sqrt{2^N}}$.
When we mark arguments with a minus sign, we do exactly what the `inverse()` function did in the exercise on inversion about the mean.
Using a specially prepared $\tilde{F}$, we can find the desired $x$ in $\mathcal{O}(\sqrt{|X|})$ steps. And even if there are several such arguments, we will find one of them.
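For comparison, the classical linear-time search can be sketched directly (a minimal sketch; the search space of 16 values and the marked element `13` are illustrative assumptions):

```python
def find_marked(X, F):
    """Linear scan: in the worst case this makes |X| oracle calls."""
    calls = 0
    for x in X:
        calls += 1
        if F(x):
            return x, calls
    return None, calls

# hypothetical example: 16 candidates, the marked one is x == 13
x_found, calls = find_marked(range(16), lambda x: x == 13)
print(x_found, calls)  # 13 14
```

Grover's algorithm brings the number of oracle calls down from order $|X|$ to order $\sqrt{|X|}$.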
The main steps of the algorithm:
1. Turn an ordinary oracle into a phase oracle
2. Implement the Grover diffuser
3. Choose the number of iterations of the algorithm. Repeat.
Let's start with the first step.
## Constructing the Grover (Phase) Oracle
The details of this method are described in the [Qiskit textbook](https://qiskit.org/textbook/ch-algorithms/grover.html). It shows how to convert a "classical" oracle function $F$ into the (phase) form we need. The mechanism this method relies on is *phase kickback*.
Recall that **phase kickback** changes the phase of the amplitude of the **controlling** qubit (or even an entire register) when the controlled qubit is in an eigenstate of the controlled operator.
In the case of the oracle, it is assumed that you can represent it in the following form:

The eigenvectors and eigenvalues of the operator $X$ are already well known to us. So we prepare $|-\rangle$ in the controlled qubit... and then forget about it.

Let's convert the oracle whose form is already familiar to us
```
if (|x> >= 6) {
    |y> = X|y>
}
```
into phase form.
First, let's write it out in full:
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from qiskit.visualization import plot_histogram
from qiskit import execute, BasicAer
def get_oracle():
    x = QuantumRegister(4, 'x')
    y = QuantumRegister(1, 'y')
    qc = QuantumCircuit(x, y, name="oracle")
    #######
    ## x -= 4
    qc.x(x[2])
    qc.cx(x[2], x[3])
    ## x -= 2
    qc.x(x[1])
    qc.cx(x[1], x[2])
    qc.ccx(x[1], x[2], x[3])
    qc.x(x[3])
    qc.cx(x[3], y[0])
    qc.x(x[3])
    ## x += 2
    qc.ccx(x[1], x[2], x[3])
    qc.cx(x[1], x[2])
    qc.x(x[1])
    #######
    ## x += 4
    qc.cx(x[2], x[3])
    qc.x(x[2])
    return qc
```
Let's check that it works "classically".
```
qc = QuantumCircuit(5)
qc.h(range(3))
qc.append(get_oracle(), range(5))
qc.measure_all()
job = execute(qc, BasicAer.get_backend('qasm_simulator'), shots=100)
counts = job.result().get_counts(qc)
plot_histogram(counts)
```
Now let's see what it looks like in phase form:
```
import numpy as np
qc = QuantumCircuit(5)
qc.h(range(3))
# prepare the controlled qubit in |->
qc.x(4)
qc.h(4)
qc.append(get_oracle(), range(5))
# return it to 0
qc.h(4)
qc.x(4)
job = execute(qc, BasicAer.get_backend('statevector_simulator'))
vector = job.result().get_statevector()
print(np.round_(vector.real, 4))
```
## The Number of Iterations
Earlier we saw that inversion about the mean yields a periodic function. At the peak of that function we can capture the most "favorable" gap between all the elements and the "marked" ones. But how do we catch such a peak?
If in a list of $ N $ elements (= quantum states) only one is marked (= one quantum state), then the marked element (state) will have the largest amplitude after $ \pi \dfrac{\sqrt{N}}{4} $ iterations of Grover's algorithm.
If $k$ elements are marked, the optimal values are reached after $ \pi \dfrac{\sqrt{\frac{N}{k}}}{4} $ repetitions.
If $k$ is unknown, we have to search for it. One way to do this is to run the algorithm for $ \pi \dfrac{\sqrt{\frac{N}{1}}}{4}, \pi \dfrac{\sqrt{\frac{N}{2}}}{4}, \pi \dfrac{\sqrt{\frac{N}{4}}}{4}, \pi \dfrac{\sqrt{\frac{N}{8}}}{4}, \ldots $ iterations in turn.
Even in that case the number of repetitions of the algorithm is proportional to $ \pi \dfrac{\sqrt{N}}{4} $: $ O \Big( \pi \dfrac{\sqrt{N}}{4} \Big) $. It is therefore asymptotically faster than plain brute-force search.
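The iteration counts above are easy to compute (a minimal sketch; `grover_iterations` is a helper name introduced here):

```python
import math

def grover_iterations(N, k=1):
    """Optimal number of Grover iterations for k marked items out of N."""
    return int((math.pi / 4) * math.sqrt(N / k))

print(grover_iterations(8, 2))   # 1
print(grover_iterations(1024))   # 25
```

The first call matches the case used in the circuit below (N = 2**3 states, k = 2 marked).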
Finally, let's implement this as an algorithm and run it on a real quantum computer!
## A Complete Implementation of Grover's Algorithm
```
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit
from qiskit.visualization import plot_histogram
from qiskit import execute, BasicAer
from qiskit.circuit.library import ZGate
ccz = ZGate().control(2)
def get_diffusor():
    qdiff = QuantumRegister(3)
    diffusor = QuantumCircuit(qdiff, name="IoM")
    diffusor.h(qdiff)
    diffusor.x(qdiff)
    diffusor.append(ccz, qdiff)
    diffusor.x(qdiff)
    diffusor.h(qdiff)
    return diffusor
print(get_diffusor())
data = QuantumRegister(3, name="data")
sign = QuantumRegister(1, name="sign")
ancilla = QuantumRegister(1, name="ancilla")
result = ClassicalRegister(3, name="result")
qc = QuantumCircuit(data, sign, ancilla, result)
oracle = get_oracle()
diffusor = get_diffusor()
# prepare a superposition of the non-negative integers 0..7
qc.h(data)
# prepare |-> in the ancilla qubit
qc.x(ancilla)
qc.h(ancilla)
qc.barrier()
# how many iterations do we need?
k = 2
N = 2 ** 3
N_ITERATIONS = int((np.pi / 4) * (N / k) ** .5)
print("Iterations:", N_ITERATIONS)
for i in range(N_ITERATIONS):
    # append the oracle
    qc.append(oracle, data[:] + sign[:] + ancilla[:])
    qc.barrier()
    # append the inversion about the mean
    qc.append(diffusor, data)
    qc.barrier()
qc.measure(data, result)
qc.draw()
job = execute(qc, BasicAer.get_backend('qasm_simulator'), shots=100)
counts = job.result().get_counts(qc)
plot_histogram(counts)
```
## Let's Discuss the Result
1. How should we interpret this result?
2. How many elementary operations did we actually perform?
3. How many times did we run the quantum program?
4. What happens if we run it only once?
```
from qiskit.compiler import transpile
# run this line to see which primitive
# operations the simulator actually performed:
qct = transpile(qc, BasicAer.get_backend('qasm_simulator'))
print("Circuit depth:", qct.depth())
qct.draw(output='mpl')
```
## ... and Finally, Run It on a Real Quantum Computer!
If you haven't done so yet, go to https://quantum-computing.ibm.com/account, register, and obtain a token.
```
from qiskit import IBMQ
# if this is your first run, paste your token; after that you can remove this line
# IBMQ.save_account("TOKEN")
IBMQ.load_account()
provider = IBMQ.get_provider('ibm-q')
available_cloud_backends = provider.backends()
for backend in available_cloud_backends:
    status = backend.status()
    nqubits = backend.configuration().n_qubits
    is_operational = status.operational
    jobs_in_queue = status.pending_jobs
    if is_operational and 'ibmq_' in str(backend):
        print(f"{nqubits} Qubits {backend} has a queue={jobs_in_queue}")
```
Choose the least busy 5-qubit computer. When I ran this code, it was `ibmq_belem`.
```
backend = provider.get_backend('ibmq_belem')
# what will our program look like on this computer?
qct = transpile(qc, backend)
print(qct.depth())
qct.draw(output='mpl')
# submit the job to the computer; it will be validated and placed in the queue
job = backend.run(qct)
from qiskit.tools.monitor import job_monitor
# the monitor will wait for a successful status
job_monitor(job)
print(job.status())
# show the histogram!
counts = job.result().get_counts()
plot_histogram(counts)
```
| github_jupyter |
<img src="https://cellstrat2.s3.amazonaws.com/PlatformAssets/bluewhitelogo.svg" alt="drawing" width="200"/>
# ML Tuesdays - Session 2
## Machine Learning Track
### Diabetes Classification Exercise (Solution)
### Guidelines
1. The notebook has been split into multiple steps with fine-grained instructions for each step. Use the instructions for each code cell to complete the code.
2. You can refer the Logistic Regression Module in the Machine Learning Pack from CellStrat Hub.
3. Make use of the docstrings of the functions and classes using the `shift+tab` shortcut key.
4. Refer the internet for the explanation of any algorithm.
## About the Dataset
The Pima Indians Diabetes Dataset involves predicting the onset of diabetes within 5 years in Pima Indians given medical details.
It is a binary (2-class) classification problem. The number of observations for each class is not balanced. There are 768 observations with 8 input variables and 1 output variable. Missing values are believed to be encoded with zero values. The variable names are as follows:
- Number of times pregnant.
- Plasma glucose concentration at 2 hours in an oral glucose tolerance test.
- Diastolic blood pressure (mm Hg).
- Triceps skinfold thickness (mm).
- 2-Hour serum insulin (mu U/ml).
- Body mass index (weight in kg/(height in m)^2).
- Diabetes pedigree function.
- Age (years).
- Class variable (0 or 1).
```
import pandas as pd
dataset = pd.read_csv('pima-indians-diabetes.csv')
dataset
```
## Data Preprocessing
1. Split to X and y data
2. Perform Train Test Split
3. Feature Scaling (Use Standard or Normalization)
```
X_data = dataset.iloc[:, :-1]
y_data = dataset.iloc[:, -1]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_data, y_data, test_size=0.2, random_state=0)
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```
## Training
You need to train 4 different models i.e.,
1. LogisticRegression
2. K Nearest Neighbours (KNN)
3. Decision Tree
4. Random Forest
Make optimal use of the scikit-learn documentation and Google to understand each algorithm and apply it.
```
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
logistic_model = LogisticRegression()
knn = KNeighborsClassifier()
tree_model = DecisionTreeClassifier()
forest_model = RandomForestClassifier()
logistic_model.fit(X_train, y_train)
knn.fit(X_train, y_train)
tree_model.fit(X_train, y_train)
forest_model.fit(X_train, y_train)
```
## Evaluation
1. Evaluate the results of each model using the `classification_report` function in `sklearn.metrics`.
2. Check which model has the best results on the train and test set.
3. Have some models overfitted?
```
from sklearn.metrics import classification_report
def evaluate(model, X, y):
    return classification_report(y, model.predict(X))
print('Train Results with Logistic Regression')
print(evaluate(logistic_model, X_train, y_train))
print('\nTest Results with Logistic Regression')
print(evaluate(logistic_model, X_test, y_test))
print('Train Results with KNN')
print(evaluate(knn, X_train, y_train))
print('\nTest Results with KNN')
print(evaluate(knn, X_test, y_test))
print('Train Results with Decision Tree Classifier')
print(evaluate(tree_model, X_train, y_train))
print('\nTest Results with Decision Tree Classifier')
print(evaluate(tree_model, X_test, y_test))
print('Train Results with Random Forest Classification')
print(evaluate(forest_model, X_train, y_train))
print('\nTest Results with Random Forest Classification')
print(evaluate(forest_model, X_test, y_test))
```
The tree-based algorithms seem to have overfitted. Logistic Regression has the best performance among these four.
| github_jupyter |
# Source Transformation
Jupyter Notebook developed by [Gustavo S.S.](https://github.com/GSimas)
**Source transformation is the process of replacing a voltage source vs in series with a resistor R by a current source is in parallel with a resistor R, or vice versa.**
As with the wye-delta transformation, a source transformation does not affect the rest of the circuit.

Therefore, source transformation requires that
\begin{align}
{\Large v_s = i_sR}
\\
\\{\Large i_s = \frac{v_s}{R}}
\end{align}
Source transformation also applies to dependent sources, provided we handle the dependent variable properly.

**Example 4.6**
Use source transformation to find vo in the circuit of Figure 4.17.

```
print("Example 4.6")
# transform source 1 (current -> voltage)
# vs1 = is*R = 12V
# series Req between 4 and 2
# Req1 = 4 + 2 = 6
# transform source 2 (voltage -> current)
# is2 = 12/3 = 4A
# transform source 1 (voltage -> current)
# is1 = 12/6 = 2A
# parallel Req between 6 and 3
# Req2 = 6*3/(6 + 3) = 2
# resulting source
# ir = is2 - is1 = 4 - 2 = 2A
# transform source 2 (current -> voltage)
# vs2 = Req2*ir = 2 * 2 = 4V
# voltage divider
# v0 = vs2*8/(8 + Req2)
v0 = 4*8/(8 + 2)
print("Voltage v0:", v0, "V")
```
**Practice Problem 4.6**
Find io in the circuit of Figure 4.19 using source transformation.

```
print("Practice Problem 4.6")
# series Req of 4 and 1 = 5
# parallel Req of 6 and 3 = 2
# transform source 1 (current -> voltage)
# vs1 = R*is1 = 5*2 = 10V
# add sources 1 and 2 = 5 + 10 = 15V
# transform the combined source (voltage -> current)
# is = 15/2 = 7.5A
# parallel Req of 5 and 2 = 10/7
# add current sources = 7.5 + 3 = 10.5 A
# current divider
i0 = 10.5*(10/7)/((10/7) + 7)
print("Current i0:", i0, "A")
```
**Example 4.7**
Find vx in Figure 4.20 using source transformation.

```
print("Example 4.7")
# transform source 1 (voltage -> current)
# is1 = 6/2 = 3 A
# transform the dependent source (current -> voltage)
# vs_dep = 0.25Vx * 4 = Vx
# add the dependent source and source 2 = 18 + Vx
# parallel Req of 2 and 2 = 1
# transform the combined sources (voltage -> current)
# is_soma = 18/4 + Vx/4
# add sources = 18/4 + Vx/4 + 3 = 30/4 + Vx/4 = (30 + Vx)/4
# transform the combined sources (current -> voltage)
# resulting source = ((30 + Vx)/4)*4 = 30 + Vx
# KVL
# (30 + Vx) - 4*ix - Vx = 0
# ix = (30 + Vx)/5 = 6 + Vx/5
# 30 - 24 - 4Vx/5 = 0
vx = 6*5/4
print("Voltage Vx:", vx, "V")
```
**Practice Problem 4.7**
Use source transformation to find ix in the circuit shown in Figure 4.22.

```
print("Practice Problem 4.7")
# transform the dependent source (voltage -> current)
# is_dep = 2ix/5
# add sources = 0.024 - 2ix
# current divider
# ix = (24m - 2ix)*5/(5 + 10)
# ix = (0.12 - 10ix)/15
# ix + 2ix/3 = 0.008
# 5ix/3 = 0.008
ix = 0.008*3/5
print("Current ix:", ix, "A")
```
# Thévenin's Theorem
**Thévenin's theorem states that a linear two-terminal circuit can be replaced
by an equivalent circuit consisting of a voltage source VTh in series with a
resistor RTh, where VTh is the open-circuit voltage at the terminals and RTh
is the input (or equivalent) resistance at the terminals when the independent
sources are turned off.**
Thévenin's theorem is very important in circuit analysis because it helps
simplify a circuit: a large circuit can be replaced by a single independent
voltage source and a single resistor.

To see this, suppose the two circuits in Figure 4.23 are equivalent; two
circuits are said to be equivalent if they have the same voltage-current
relation at their terminals. If terminals a-b are made an open circuit
(by removing the load), no current will flow, so the voltage across terminals
a-b in Figure 4.23a must equal the voltage source VTh in Figure 4.23b, since
the two circuits are equivalent. Thus:
\begin{align}
{\Large V_{Th} = v_{oc}}
\end{align}
The input resistance (or equivalent resistance) of the dead circuit at
terminals a-b in Figure 4.23a must equal RTh in Figure 4.23b, since the two
circuits are equivalent. Therefore, RTh is the input resistance at the
terminals when the independent sources are turned off. Thus:
\begin{align}
{\Large R_{Th} = R_{oc}}
\end{align}

- **Case 1:** If the network has no dependent sources, we **turn off all independent sources**. RTh is the input resistance of the network looking between terminals a and b.
- **Case 2:** If the network has dependent sources, we **turn off all independent sources**. The dependent sources must not be turned off, because they are controlled by circuit variables. We apply a voltage vo across terminals a and b and determine the resulting current io; then RTh = vo/io. Alternatively, we could insert a current source io at terminals a and b, as in Figure 4.25b, and find the terminal voltage vo. Again, RTh = vo/io. Either method leads to the same result. In both methods we may assume any value of vo and io; for example, we could use vo = 1 V or io = 1 A, or even unspecified values of vo or io.

It can often happen that RTh takes a negative value; in that case, the
negative resistance (v = -iR) implies that the circuit is **supplying
energy.**
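Once VTh and RTh are known, the current through any load RL attached to the equivalent follows directly; a minimal sketch (hypothetical helper name):

```python
def thevenin_load_current(vth, rth, rl):
    # Current through a load rl connected to the Thevenin equivalent:
    # IL = VTh / (RTh + RL)
    return vth / (rth + rl)

# VTh = 30 V, RTh = 4, RL = 6 (the numbers of the next example)
print(thevenin_load_current(30, 4, 6))  # 3.0
```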
**Example 4.8**
Determine the Thévenin equivalent of the circuit shown in Figure 4.27, to the
left of terminals a-b. Then determine the current through RL = 6 Ω, 16 Ω,
and 36 Ω.

```
print("Example 4.8")
# Req1 = 4*12/(4 + 12) = 48/16 = 3
# Rth = 3 + 1 = 4
# transform source 1 (voltage -> current)
# is1 = 32/4 = 8 A
# sum of sources = 8 + 2 = 10 A
# ix = 10*4/(4 + 12) = 40/16 = 5/2
# Vab = 12*(5/2) = 30 = Vth
Vth = 30
Rth = 4
Rl = 6
Il = Vth/(Rl + Rth)
print("For RL = 6, current:", Il, "A")
Rl = 16
Il = Vth/(Rl + Rth)
print("For RL = 16, current:", Il, "A")
Rl = 36
Il = Vth/(Rl + Rth)
print("For RL = 36, current:", Il, "A")
```
**Practice Problem 4.8**
Using Thévenin's theorem, determine the equivalent circuit to the left of the terminals in the circuit of Figure 4.30. Then determine I.

```
print("Practice Problem 4.8")
# Req1 = 6 + 6 = 12
# Rth = Req1*4/(Req1 + 4) = 48/16 = 3
Rth = 3
# Superposition, voltage source
# Vab1 = Vs*4/(4 + 6 + 6) = 12*4/16 = 3 V
# Superposition, current source
# Iab = Is*6/(4 + 6 + 6) = 2*6/16 = 3/4
# Vab2 = Iab*4 = 3 V
# Vth = Vab1 + Vab2
Vth = 6
I = Vth/(Rth + 1)
print("Voltage Vth:", Vth, "V")
print("Resistance Rth:", Rth)
print("Current I:", I, "A")
```
**Example 4.9**
Determine the Thévenin equivalent of the circuit in Figure 4.31.

```
print("Example 4.9")
import numpy as np
# Find Rth: turn off independent sources; dependent sources stay on
# Apply an arbitrary voltage vo across terminals a-b
# vo = 1 V
# Mesh analysis
# -2Vx + 2(i1 - i2) = 0
# Vx = i1 - i2
# Vx = -4i2
# i1 + 3i2 = 0
# -Vx + 2(i2 - i1) + 6(i2 - i3) = 0
# 2i2 - 2i1 + 6i2 - 6i3 = Vx
# -3i1 + 9i2 - 6i3 = 0
# -i1 + 3i2 - 2i3 = 0
# Vo + 6(i3 - i2) + 2i3 = 0
# 6i3 - 6i2 + 2i3 = -1
# -6i2 + 8i3 = -1
coef = np.array([[1, 3, 0], [-1, 3, -2], [0, -6, 8]])
res = np.array([0, 0, -1])
I = np.linalg.solve(coef, res)
# i3 = -io
io = -I[2]
# Rth = Vo/io
Rth = 1/io
print("Resistance Rth:", float(Rth))
# Find Vth
# Open-circuit voltage across terminals a-b
# Mesh analysis
# i1 = 5 A
# -2Vx + 2(i2 - i3) = 0
# Vx = i2 - i3
# Vx = 4(5 - i3) = 20 - 4i3
# i2 + 3i3 = 20
# 4(i3 - 5) + 2(i3 - i2) + 6i3 = 0
# 4i3 + 2i3 - 2i2 + 6i3 = 20
# -2i2 + 12i3 = 20
# -i2 + 6i3 = 10
coef = np.array([[1, 3], [-1, 6]])
res = np.array([20, 10])
I = np.linalg.solve(coef, res)
Vth = 6*I[1]
print("Voltage Vth:", float(Vth), "V")
```
**Practice Problem 4.9**
Determine the Thévenin equivalent of the circuit in Figure 4.34 to the left of the terminals.

```
print("Practice Problem 4.9")
import numpy as np
# Find Rth
# Vo = 1 V
# Nodal analysis
# i1 - Ix/2 = 0
# v1/5 - Ix/2 = 0
# Ix = (v1 - 1)/3
# v1/5 - (v1 - 1)/6 = 0
# v1/5 - v1/6 = -1/6
# v1/30 = -1/6
# v1 = -5
# Ix = (v1 - 1)/3 = -6/3 = -2 A
# i2 = 1/4 A
# io = -Ix + i2 = 9/4 A
# Rth = 1/(9/4) = 4/9
Rth = 4/9
print("Resistance Rth:", Rth)
# Find Vth
# Mesh analysis
# -6 + 5i1 + 3Ix + 4Ix = 0
# 5i1 + 7Ix = 6
# 3Ix/2 + i1 = Ix
# Ix/2 + i1 = 0
# 2i1 + Ix = 0
coef = np.array([[5, 7], [2, 1]])
res = np.array([6, 0])
I = np.linalg.solve(coef, res)
Ix = float(I[1])
Vth = 4*Ix
print("Voltage Vth:", Vth, "V")
```
**Example 4.10**
Determine the Thévenin equivalent of the circuit of Figure 4.35a at terminals a-b.

```
print("Example 4.10")
# vab = -vo = -1 V
# i1 = 1/4 A
# ix = 1/2 A
# io = 2ix - ix - i1 = 1 - 1/2 - 1/4 = 1/4 A
# Rth = -1/(1/4) = -4
Rth = -4
print("Resistance Rth:", Rth)
print("Voltage Vth:", 0, "V")
```
**Practice Problem 4.10**
Obtain the Thévenin equivalent of the circuit of Figure 4.36.

```
print("Practice Problem 4.10")
# iab = 1 A
# -vx + 10i1 + 4vx + 15(i1 - iab) = 0
# 3vx + 25i1 - 15iab = 0
# vx = -5i1
# -15i1 + 25i1 = 15
# 10i1 = 15
# i1 = 1.5 A = 3/2 A
# vx = -5i1 = -7.5 V = -15/2 V
# vdep = 4*vx = -30 V
# vab = vo = 15(i1 - iab) = 15/2 = 7.5 V
# Rth = vo/(-iab) = -7.5
Rth = -7.5
print("Voltage Vth:", 0, "V")
print("Resistance Rth:", Rth)
```
# Computing the mean of a bunch of images:
```
# computing statistics:
import torch
from torchvision import transforms, datasets
import time

unlab_ddset = datasets.ImageFolder('./surrogate_dataset/unlab_dataset_055/train_set/',
                                   transform=transforms.Compose([transforms.ToTensor()]))
unlab_loader = torch.utils.data.DataLoader(unlab_ddset,
                                           batch_size=20,
                                           shuffle=True)  # iterating over the DataLoader gives the tuple (input, target)

def compute_mean_std(loader):
    # Average the per-image channel means/stds, batch by batch
    mean = [0.0, 0.0, 0.0]
    std = [0.0, 0.0, 0.0]
    n_batches = 0
    for images, targets in loader:
        batch_mean = [0.0, 0.0, 0.0]
        batch_std = [0.0, 0.0, 0.0]
        for t in images:
            for c in range(3):
                batch_mean[c] += t[c].mean().item()
                batch_std[c] += t[c].std().item()
        for c in range(3):
            # divide by the batch size, not the last enumerate index (off-by-one)
            mean[c] += batch_mean[c] / len(images)
            std[c] += batch_std[c] / len(images)
        n_batches += 1
    return [x / n_batches for x in mean], [x / n_batches for x in std]

st = time.time()
mean, std = compute_mean_std(unlab_loader)
end = time.time()
print('Time to compute the statistics: ' + str(end - st))
print("Mean of xxx random images transformed 100 each:")
print(mean)
print(std)

# computing statistics (mean only) for a second dataset:
unlab_ddset = datasets.ImageFolder('./surrogate_dataset/unlab_dataset007/data/',
                                   transform=transforms.Compose([transforms.ToTensor()]))
unlab_loader = torch.utils.data.DataLoader(unlab_ddset,
                                           batch_size=20,
                                           shuffle=True)

def compute_mean(loader):
    mean = [0.0, 0.0, 0.0]
    n_batches = 0
    for images, targets in loader:
        batch_mean = [0.0, 0.0, 0.0]
        for t in images:
            for c in range(3):
                batch_mean[c] += t[c].mean().item()
        for c in range(3):
            mean[c] += batch_mean[c] / len(images)
        n_batches += 1
    return [x / n_batches for x in mean]

st = time.time()
mean = compute_mean(unlab_loader)
end = time.time()
print('Time to compute the statistics: ' + str(end - st))
print("Mean of xxx random images transformed 100 each:")
print(mean)
```
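Averaging per-image means only approximates the true dataset statistics when batch sizes differ. For comparison, here is a NumPy sketch of the exact per-channel computation over an in-memory stack of images of shape `(N, C, H, W)` (the shapes and values here are made up):

```python
import numpy as np

def exact_channel_stats(images):
    # images: array of shape (N, C, H, W)
    # Pool all pixels of all images per channel before computing stats
    flat = images.transpose(1, 0, 2, 3).reshape(images.shape[1], -1)
    return flat.mean(axis=1), flat.std(axis=1)

imgs = np.random.rand(8, 3, 16, 16)  # hypothetical small stack
mean, std = exact_channel_stats(imgs)
print(mean.shape, std.shape)  # (3,) (3,)
```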
# Checking how the normalization affects the images:
```
import torch
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import os

experiment = '002_6'
path = '../saving_model/alexNet' + str(experiment) + '.pth.tar'
# print(path)
normalize = transforms.Normalize(mean=[0.6128879173491645, 0.6060359745417173, 0.5640660479324938],
                                 std=[1, 1, 1])
batch_size = 100

# First: load a batch WITHOUT normalization
unlab_ddset = datasets.ImageFolder('./surrogate_dataset/unlab_train/',
                                   transform=transforms.Compose([transforms.ToTensor()]))
unlab_loader = torch.utils.data.DataLoader(unlab_ddset,
                                           batch_size=batch_size,
                                           shuffle=True)
data = next(iter(unlab_loader))  # grab a single batch
# data loaded with the pytorch loader and no normalization
print(type(data[0]), type(data[1]), data[1][5], data[0][5].max(), data[0][5].min(), data[0][5].mean())

# Second: load a batch WITH normalization
unlab_ddset = datasets.ImageFolder('./surrogate_dataset/unlab_train/',
                                   transform=transforms.Compose([transforms.ToTensor(), normalize]))
unlab_loader = torch.utils.data.DataLoader(unlab_ddset,
                                           batch_size=batch_size,
                                           shuffle=True)
data = next(iter(unlab_loader))
# data loaded with the pytorch loader and normalization like follows:
# (mean = [0.6128879173491645, 0.6060359745417173, 0.5640660479324938], std=[1, 1, 1])
print(type(data[0]), type(data[1]), data[1][5], data[0][5].max(), data[0][5].min(), data[0][5].mean())
```
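Under the hood, `transforms.Normalize` performs per-channel standardization; a NumPy sketch of the same operation (made-up image values):

```python
import numpy as np

def normalize_channels(img, mean, std):
    # img: array of shape (C, H, W); output[c] = (img[c] - mean[c]) / std[c]
    mean = np.asarray(mean).reshape(-1, 1, 1)
    std = np.asarray(std).reshape(-1, 1, 1)
    return (img - mean) / std

img = np.full((3, 4, 4), 0.6)
out = normalize_channels(img, mean=[0.6, 0.6, 0.6], std=[1, 1, 1])
print(out.mean())  # 0.0
```

With `std=[1, 1, 1]`, as in the cells above, the operation only shifts each channel by its mean.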
### Bonus: Difference of proportions
Another simple way to find distinctive words in two texts is to calculate the words with the highest and lowest difference of proportions. In theory, frequent words like 'the' and 'of' will have a small difference; in practice, this doesn't happen.
To demonstrate this we will run a difference of proportions calculation on *Pride and Prejudice* and *A Garland for Girls*.
To get the text in shape for scikit-learn we need to create a list object with each novel as an element in the list. We'll use the append method to do this.
```
import pandas
from sklearn.feature_extraction.text import CountVectorizer
text_list = []
#open and read the novels, save them as variables
austen_string = open('../Data/Austen_PrideAndPrejudice.txt', encoding='utf-8').read()
alcott_string = open('../Data/Alcott_GarlandForGirls.txt', encoding='utf-8').read()
#append each novel to the list
text_list.append(austen_string)
text_list.append(alcott_string)
print(text_list[0][:100])
```
Create a DTM from these two novels, force it into a pandas DF, and inspect the output:
```
countvec = CountVectorizer()
novels_df = pandas.DataFrame(countvec.fit_transform(text_list).toarray(), columns=countvec.get_feature_names_out())
novels_df
```
Notice the number of rows and columns.
Question: What does this mean?
Next, we need to get a word frequency count for each novel, which we can do by summing across the entire row. Note how the syntax is different here compared to when we summed one column across all rows.
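The `axis` argument is what controls the direction of the sum; a tiny sketch with made-up data:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
row_sums = df.sum(axis=1)  # one value per row (sums across columns)
col_sums = df.sum(axis=0)  # one value per column (sums across rows)
print(list(row_sums))  # [4, 6]
print(list(col_sums))  # [3, 7]
```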
```
novels_df['word_count'] = novels_df.sum(axis=1)
novels_df
```
Next we divide each frequency cell by the word count. This syntax gets a bit tricky, so let's walk through it.
```
novels_df = novels_df.iloc[:,:].div(novels_df.word_count, axis=0)
novels_df
```
Finally, we subtract one row from another, and add the output as a third row.
```
novels_df.loc[2] = novels_df.loc[0] - novels_df.loc[1]
novels_df
```
We can sort based on the values of this row:
```
novels_df.loc[2].sort_values(ascending=False)
```
Stop words are still in there. Why?
We can, of course, manually remove stop words. This does successfully identify distinctive content words.
We can do this in the CountVectorizer step, by setting the correct option.
```
#change stop_words option to 'english'
countvec_sw = CountVectorizer(stop_words="english")
#same as code above
novels_df_sw = pandas.DataFrame(countvec_sw.fit_transform(text_list).toarray(), columns=countvec_sw.get_feature_names_out())
novels_df_sw['word_count'] = novels_df_sw.sum(axis=1)
novels_df_sw = novels_df_sw.iloc[:,0:].div(novels_df_sw.word_count, axis=0)
novels_df_sw.loc[2] = novels_df_sw.loc[0] - novels_df_sw.loc[1]
novels_df_sw.loc[2].sort_values(axis=0, ascending=False)
```
We can also do this by setting the max_df option (maximum document frequency) to either an absolute value, or a decimal between 0 and 1. An absolute value indicates that if the word occurs in more documents than the stated value, that word **will not** be included in the DTM. A decimal value will do the same, but based on the proportion of documents.
Question: In the case of this corpus, what does setting the max_df value to 1 do? What output do you expect?
```
#Change max_df option to 1
countvec_freq = CountVectorizer(max_df=1)
#same as the code above
novels_df_freq = pandas.DataFrame(countvec_freq.fit_transform(text_list).toarray(), columns=countvec_freq.get_feature_names_out())
novels_df_freq['word_count'] = novels_df_freq.sum(axis=1)
novels_df_freq = novels_df_freq.iloc[:,0:].div(novels_df_freq.word_count, axis=0)
novels_df_freq.loc[2] = novels_df_freq.loc[0] - novels_df_freq.loc[1]
novels_df_freq.loc[2].sort_values(axis=0, ascending=False)
```
Question: What would happen if we set the max_df to 2, in this case?
Question: What might we do for the music reviews dataset?
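The document-frequency filtering behind an integer max_df can be sketched in pure Python (hypothetical helper name; scikit-learn's actual implementation differs):

```python
from collections import Counter

def vocab_with_max_df(tokenized_docs, max_df):
    # Keep only words that appear in at most max_df documents
    df = Counter()
    for doc in tokenized_docs:
        df.update(set(doc))  # count each word once per document
    return {w for w, n in df.items() if n <= max_df}

docs = [['the', 'cat', 'sat'], ['the', 'dog', 'ran']]
print(sorted(vocab_with_max_df(docs, 1)))  # ['cat', 'dog', 'ran', 'sat']
```

With max_df=1, 'the' is dropped because it occurs in both documents.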
### Exercise:
Use the difference of proportions calculation to compare two genres, or two artists, in the music reviews dataset. There are many ways you can do this. Think through the problem in steps.
# Experimental design and pattern estimation
This week's lab will be about the basics of pattern analysis of (f)MRI data. We assume that you've worked through the two Nilearn tutorials already.
Functional MRI data are most often stored as 4D data, with 3 spatial dimensions ($X$, $Y$, and $Z$) and 1 temporal dimension ($T$). But most pattern analyses assume that data are formatted in 2D: trials ($N$) by patterns (often a subset of $X$, $Y$, and $Z$). Where did the time dimension ($T$) go? And how do we "extract" the patterns of the $N$ trials? In this lab, we'll take a look at various methods to estimate patterns from fMRI time series. Because these methods often depend on your experimental design (and your research question, of course), the first part of this lab will discuss some experimental design considerations. After this more theoretical part, we'll dive into how to estimate patterns from fMRI data.
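The 4D-to-2D reorganization can be sketched with a toy NumPy array (the dimensions here are made up):

```python
import numpy as np

# Hypothetical 4D fMRI array: X x Y x Z x T
data_4d = np.random.randn(4, 5, 6, 10)
# Flatten the spatial dimensions and move time to the front:
# one row per time point, one column per voxel
data_2d = data_4d.reshape(-1, data_4d.shape[-1]).T
print(data_2d.shape)  # (10, 120)
```

Going from one row per *time point* to one row per *trial* is exactly the pattern estimation step discussed later in this lab.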
**What you'll learn**: At the end of this tutorial, you ...
* Understand the most important experimental design factors for pattern analyses;
* Understand and are able to implement different pattern estimation techniques
**Estimated time needed to complete**: 8-12 hours
```
# We need to limit the amount of threads numpy can use, otherwise
# it tends to hog all the CPUs available when using Nilearn
import os
os.environ['MKL_NUM_THREADS'] = '1'
os.environ['OPENBLAS_NUM_THREADS'] = '1'
import numpy as np
```
## Experimental design
Before you can do any fancy machine learning or representational similarity analysis (or any other pattern analysis), there are several decisions you need to make and steps to take in terms of study design, (pre)processing, and structuring your data. Roughly, there are three steps to take:
1. Design your study in a way that's appropriate to answer your question through a pattern analysis; this, of course, needs to be done *before* data acquisition!
2. Estimate/extract your patterns from the (functional) MRI data;
3. Structure and preprocess your data appropriately for pattern analyses;
While we won't go into all the design factors that make for an *efficient* pattern analysis (see [this article](http://www.sciencedirect.com/science/article/pii/S105381191400768X) for a good review), we will now discuss/demonstrate some design considerations and how they impact the rest of the MVPA pipeline.
### Within-subject vs. between-subject analyses
As always, your experimental design depends on your specific research question. If, for example, you're trying to predict schizophrenia patients from healthy controls based on structural MRI, your experimental design is going to be different than when you, for example, are comparing fMRI activity patterns in the amygdala between trials targeted to induce different emotions. Crucially, with *design* we mean the factors that you as a researcher control: e.g., which schizophrenia patients and healthy control to scan in the former example and which emotion trials to present at what time. These two examples indicate that experimental design considerations are quite different when you are trying to model a factor that varies *between subjects* (the schizophrenia vs. healthy control example) versus a factor that varies *within subjects* (the emotion trials example).
<div class='alert alert-warning'>
<b>ToDo/ToThink</b> (1.5 points): before continuing, let's practice a bit. For the three articles below, determine whether they used a within-subject or between-subject design.<br>
<ol>
<li><a href="https://www.nature.com/articles/nn1444">https://www.nature.com/articles/nn1444</a> (machine learning based)</li>
<li><a href="http://www.jneurosci.org/content/33/47/18597.short">http://www.jneurosci.org/content/33/47/18597.short</a> (RSA based)</li>
<li><a href="https://www.sciencedirect.com/science/article/pii/S1053811913000074">https://www.sciencedirect.com/science/article/pii/S1053811913000074</a> (machine learning based)</li>
</ol>
Assign either 'within' or 'between' to the variables corresponding to the studies above (i.e., <tt>study_1</tt>, <tt>study_2</tt>, <tt>study_3</tt>).
</div>
```
''' Implement the ToDo here. '''
study_1 = '' # fill in 'within' or 'between'
study_2 = '' # fill in 'within' or 'between'
study_3 = '' # fill in 'within' or 'between'
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
for this_study in [study_1, study_2, study_3]:
if not this_study: # if empty string
raise ValueError("You haven't filled in anything!")
else:
if this_study not in ['within', 'between']:
raise ValueError("Fill in either 'within' or 'between'!")
print("Your answer will be graded by hidden tests.")
```
Note that, while we think it is a useful way to think about different types of studies, it is possible to use "hybrid" designs and analyses. For example, you could compare patterns from a particular condition (within-subject) across different participants (between-subject). This is, to our knowledge, not very common though, so we won't discuss it here.
<div class='alert alert-info'>
<b>ToThink</b> (1 point)<br>
Suppose a researcher wants to implement a decoding analysis in which he/she aims to predict schizophrenia (vs. healthy control) from gray-matter density patterns in the orbitofrontal cortex. Is this an example of a within-subject or between-subject pattern analysis? Can it be either one? Why (not)?
</div>
YOUR ANSWER HERE
That said, let's talk about something that is not only important for univariate MRI analyses, but also for pattern-based multivariate MRI analyses: confounds.
### Confounds
For most task-based MRI analyses, we try to relate features from our experiment (stimuli, responses, participant characteristics; let's call these $\mathbf{S}$) to brain features (this is not restricted to "activity patterns"; let's call these $\mathbf{R}$\*). Ideally, we have designed our experiment that any association between our experimental factor of interest ($\mathbf{S}$) and brain data ($\mathbf{R}$) can *only* be due to our experimental factor, not something else.
If another factor besides our experimental factor of interest can explain this association, this "other factor" may be a *confound* (let's call this $\mathbf{C}$). If we care to conclude anything about our experimental factor of interest and its relation to our brain data, we should try to minimize any confounding factors in our design.
---
\* Note that the notation for experimental variables ($\mathbf{S}$) and brain features ($\mathbf{R}$) is different from what we used in the previous course, in which we used $\mathbf{X}$ for experimental variables and $\mathbf{y}$ for brain signals. We did this to conform to the convention to use $\mathbf{X}$ for the set of independent variables and $\mathbf{y}$ for dependent variables. In some pattern analyses (such as RSA), however, this independent/dependent variable distintion does not really apply, so that's why we'll stick to the more generic $\mathbf{R}$ (for brain features) and $\mathbf{S}$ (for experimental features) terms.
<div class='alert alert-success'>
<b>Note</b>: In some situations, you may only be interested in maximizing your explanatory/predictive power; in that case, you could argue that confounds are not a problem. The article by <a href="https://www.sciencedirect.com/science/article/pii/S1053811917306523"> Hebart & Baker (2018)</a> provides an excellent overview of this issue.
</div>
Statistically speaking, you should design your experiment in such a way that there are no associations (correlations) between $\mathbf{S}$ and $\mathbf{C}$, such that any association between $\mathbf{S}$ and $\mathbf{R}$ can *only* be due to $\mathbf{S}$. Note that this is not trivial, because this presumes that you (1) know which factors might confound your study and (2) if you know these factors, that they are measured properly ([Westfall & Yarkoni, 2016](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0152719)).
Minimizing confounds in between-subject studies is notably harder than in within-subject designs, especially when dealing with clinical populations that are hard to acquire, because it is simply easier to experimentally control within-subject factors (especially when they are stimulus- rather than response-based). There are ways to deal with confounds post-hoc, but ideally you prevent confounds in the first place. For an overview of confounds in (multivariate/decoding) neuroimaging analyses and a proposed post-hoc correction method, see [this article](https://www.sciencedirect.com/science/article/pii/S1053811918319463) (apologies for the shameless self-promotion) and [this follow-up article](https://www.biorxiv.org/content/10.1101/2020.08.17.255034v1.abstract).
In sum, as with *any* (neuroimaging) analysis, a good experimental design is one that minimizes the possibilities of confounds, i.e., associations between factors that are not of interest ($\mathbf{C}$) and experimental factors that *are* of interest ($\mathbf{S}$).
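As a hypothetical sanity check of a design, one can inspect the correlation between the factor of interest and a measured candidate confound (all variable names and values below are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
S = rng.integers(0, 2, size=100).astype(float)  # e.g. condition labels
C = rng.normal(size=100)                        # e.g. a measured candidate confound
# Pearson correlation between design factor and confound;
# values far from zero suggest a confounded design
r_sc = np.corrcoef(S, C)[0, 1]
print(r_sc)
```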
<div class='alert alert-info'>
<b>ToThink</b> (0 points): Suppose that you are interested in the neural correlates of ADHD. You want to compare multivariate resting-state fMRI networks between ADHD patients and healthy controls. What is the experimental factor ($\mathbf{S}$)? And can you think of a factor that, when unaccounted for, presents a major confound ($\mathbf{C}$) in this study/analysis?
</div>
<div class='alert alert-info'>
<b>ToThink</b> (1 point): Suppose that you're interested in the neural representation of "cognitive effort". You think of an experimental design in which you show participants either easy arithmetic problems, which involve only single-digit addition/subtraction (e.g., $2+5-4$) or hard(er) arithmetic problems, which involve two-digit addition/subtraction and multiplication (e.g., $12\times4-2\times11$), for which they have to respond whether the solution is odd (press left) or even (press right) as fast as possible. You then compare patterns during the between easy and hard trials. What is the experimental factor of interest ($\mathbf{S}$) here? And what are <em>possible</em> confounds ($\mathbf{C}$) in this design? Name at least two. (Note: this is a separate hypothetical experimental from the previous ToThink.)
</div>
YOUR ANSWER HERE
### What makes up a "pattern"?
So far, we talked a lot about "patterns", but what do we mean with that term? There are different options with regard to *what you choose as your unit of measurement* that makes up your pattern. The far majority of pattern analyses in functional MRI use patterns of *activity estimates*, i.e., the same unit of measurement — relative (de)activation — as is common in standard mass-univariate analyses. For example, decoding object category (e.g., images of faces vs. images of houses) from fMRI activity patterns in inferotemporal cortex is an example of a pattern analysis that uses *activity estimates* as its unit of measurement.
However, you are definitely not limited to using *activity estimates* for your patterns. For example, you could apply pattern analyses to structural data (e.g., patterns of voxelwise gray-matter volume values, like in [voxel-based morphometry](https://en.wikipedia.org/wiki/Voxel-based_morphometry)) or to functional connectivity data (e.g., patterns of time series correlations between voxels, or even topological properties of brain networks). (In fact, the connectivity examples from the Nilearn tutorial represents a way to estimate these connectivity features, which can be used in pattern analyses.) In short, pattern analyses can be applied to patterns composed of *any* type of measurement or metric!
Now, let's get a little more technical. Usually, as mentioned in the beginning, pattern analyses represent the data as a 2D array of brain patterns. Let's call this $\mathbf{R}$. The rows of $\mathbf{R}$ represent different instances of patterns (sometimes called "samples" or "observations") and the columns represent different brain features (e.g., voxels; sometimes simply called "features"). Note that we thus lose all spatial information by "flattening" our patterns into 1D rows!
Let's call the number of samples $N$ and the number of brain features $K$. We can thus represent $\mathbf{R}$ as a $N\times K$ matrix (2D array):
\begin{align}
\mathbf{R} =
\begin{bmatrix}
R_{1,1} & R_{1,2} & R_{1,3} & \dots & R_{1,K}\\
R_{2,1} & R_{2,2} & R_{2,3} & \dots & R_{2,K}\\
R_{3,1} & R_{3,2} & R_{3,3} & \dots & R_{3,K}\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
R_{N,1} & R_{N,2} & R_{N,3} & \dots & R_{N,K}\\
\end{bmatrix}
\end{align}
As discussed before, the values themselves (e.g., $R_{1,1}$, $R_{1,2}$, $R_{3,6}$) represent whatever you chose for your patterns (fMRI activity, connectivity estimates, VBM, etc.). What is represented by the rows (samples/observations) of $\mathbf{R}$ depends on your study design: in between-subject studies, these are usually participants, while in within-subject studies, these samples represent trials (or averages of trials or sometimes runs). The columns of $\mathbf{R}$ represent the different (brain) features in your pattern; for example, these may be different voxels (or sensors/magnetometers in EEG/MEG), vertices (when working with cortical surfaces), edges in functional brain networks, etc. etc.
Let's make it a little bit more concrete. We'll make up some random data below that represents a typical data array in pattern analyses:
```
import numpy as np
N = 100 # e.g. trials
K = 250 # e.g. voxels
R = np.random.normal(0, 1, size=(N, K))
R
```
Let's visualize this:
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(12, 4))
plt.imshow(R, aspect='auto')
plt.xlabel('Brain features', fontsize=15)
plt.ylabel('Samples', fontsize=15)
plt.title(r'$\mathbf{R}_{N\times K}$', fontsize=20)
cbar = plt.colorbar()
cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=10)
plt.show()
```
<div class='alert alert-warning'>
<b>ToDo</b> (1 point): Extract the pattern of the 42nd trial and store it in a variable called <tt>trial42</tt>. Then, extract the values of the 187th brain feature across all trials and store it in a variable called <tt>feat187</tt>. Lastly, extract the feature value of the 60th trial and the 221st feature and store it in a variable called <tt>t60_f221</tt>. Remember: Python uses zero-based indexing (the first value in an array is indexed by 0)!
</div>
```
''' Implement the ToDo here.'''
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
from niedu.tests.nipa.week_1 import test_R_indexing
test_R_indexing(R, trial42, feat187, t60_f221)
```
Alright, to practice a little bit more. We included whole-brain VBM data for 20 subjects in the `vbm/` subfolder:
```
import os
sorted(os.listdir('vbm'))
```
The VBM data represents spatially normalized (to MNI152, 2mm), whole-brain voxelwise gray matter volume estimates (read more about VBM [here](https://en.wikipedia.org/wiki/Voxel-based_morphometry)).
Let's inspect the data from a single subject:
```
import os
import nibabel as nib
from nilearn import plotting
sub_01_vbm_path = os.path.join('vbm', 'sub-01.nii.gz')
sub_01_vbm = nib.load(sub_01_vbm_path)
print("Shape of Nifti file: ", sub_01_vbm.shape)
# Let's plot it as well
plotting.plot_anat(sub_01_vbm)
plt.show()
```
As you can see, the VBM data is a 3D array of shape 91 ($X$) $\times$ 109 ($Y$) $\times$ 91 ($Z$) (representing voxels). These are the spatial dimensions associated with the standard MNI152 (2 mm) template provided by FSL. As VBM is structural (not functional!) data, there is no time dimension ($T$).
Now, suppose that we want to do a pattern analysis on the data of all 20 subjects. We should then create a 2D array of shape 20 (subjects) $\times\ K$ (number of voxels, i.e., $91 \times 109 \times 91$). To do so, we need to create a loop over all files, load them in, "flatten" the data, and ultimately stack them into a 2D array.
Before you'll implement this as part of the next ToDo, we will show you a neat Python function called `glob`, which allows you to simply find files using "[wildcards](https://en.wikipedia.org/wiki/Wildcard_character)":
```
from glob import glob
```
It works as follows:
```
list_of_files = glob('path/with/subdirectories/*/*.nii.gz')
```
Importantly, the string you pass to `glob` can contain one or more wildcard characters (such as `?` or `*`). Also, *the returned list is not sorted*! Let's try to get all our VBM subject data into a list using this function:
```
# Let's define a "search string"; we'll use the os.path.join function
# to make sure this works both on Linux/Mac and Windows
search_str = os.path.join('vbm', 'sub-*.nii.gz')
vbm_files = glob(search_str)
# this is also possible: vbm_files = glob(os.path.join('vbm', 'sub-*.nii.gz'))
# Let's print the returned list
print(vbm_files)
```
As you can see, *the list is not alphabetically sorted*, so let's fix that with the `sorted` function:
```
vbm_files = sorted(vbm_files)
print(vbm_files)
# Note that we could have done that with a single statement
# vbm_files = sorted(glob(os.path.join('vbm', 'sub-*.nii.gz')))
# But also remember: shorter code is not always better!
```
<div class='alert alert-warning'>
<b>ToDo</b> (2 points): Create a 2D array with the vertically stacked subject-specific (flattened) VBM patterns, in which the first subject should be the first row. You may want to pre-allocate this array before starting your loop (using, e.g., <tt>np.zeros</tt>). Also, the <tt>enumerate</tt> function may be useful when writing your loop. Try to google how to flatten an N-dimensional array into a single vector. Store the final 2D array in a variable named <tt>R_vbm</tt>.
</div>
```
''' Implement the ToDo here. '''
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
from niedu.tests.nipa.week_1 import test_R_vbm_loop
test_R_vbm_loop(R_vbm)
```
<div class='alert alert-success'>
<b>Tip</b>: While it is a good exercise to load in the data yourself, you can also easily load in and concatenate a set of Nifti files using Nilearn's <a href="https://nilearn.github.io/modules/generated/nilearn.image.concat_imgs.html">concat_imgs</a> function (which returns a 4D <tt>Nifti1Image</tt>, with the different patterns as the fourth dimension). You'd still have to reorganize this data into a 2D array, though.
</div>
```
# Run this cell after you're done with the ToDo
# This will remove all numpy arrays from memory,
# clearing up RAM for the next sections
%reset -f array
```
### Patterns as "points in space"
Before we continue with the topic of pattern estimation, there is one idea that we'd like to introduce: thinking of patterns as points (i.e., coordinates) in space. Thinking of patterns this way is helpful for understanding both machine learning based analyses and representational similarity analysis. While this idea might sound trivial to some, we believe it's worth going over anyway. Now, let's make this idea more concrete.
Suppose we have estimated fMRI activity patterns for 20 trials (rows of $\mathbf{R}$). Now, we will also assume that those patterns consist of only two features (e.g., voxels; columns of $\mathbf{R}$), because this will make visualizing patterns as points in space easier than when we choose a larger number of features.
Alright, let's simulate and visualize the data (as a 2D array):
```
K = 2 # features (voxels)
N = 20 # samples (trials)
R = np.random.multivariate_normal(np.zeros(K), np.eye(K), size=N)
print("Shape of R:", R.shape)
# Plot 2D array as heatmap
fig, ax = plt.subplots(figsize=(2, 10))
mapp = ax.imshow(R)
cbar = fig.colorbar(mapp, pad=0.1)
cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=15)
ax.set_yticks(np.arange(N))
ax.set_xticks(np.arange(K))
ax.set_title(r"$\mathbf{R}$", fontsize=20)
ax.set_xlabel('Voxels', fontsize=15)
ax.set_ylabel('Trials', fontsize=15)
plt.show()
```
Now, we mentioned that each pattern (row of $\mathbf{R}$, i.e., $\mathbf{R}_{i}$) can be interpreted as a point in 2D space. With space, here, we mean a space where each feature (e.g., voxel; column of $\mathbf{R}$, i.e., $\mathbf{R}_{j}$) represents a separate axis. In our simulated data, we have two features (e.g., voxel 1 and voxel 2), so our space will have two axes:
```
plt.figure(figsize=(5, 5))
plt.title("A two-dimensional space", fontsize=15)
plt.grid()
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.xlabel('Activity voxel 1', fontsize=13)
plt.ylabel('Activity voxel 2', fontsize=13)
plt.show()
```
Within this space, each of our patterns (samples) represents a point. The values of each pattern represent the *coordinates* of its location in this space. For example, the coordinates of the first pattern are:
```
print(R[0, :])
```
As such, we can plot this pattern as a point in space:
```
plt.figure(figsize=(5, 5))
plt.title("A two-dimensional space", fontsize=15)
plt.grid()
# We use the "scatter" function to plot this point, but
# we could also have used plt.plot(R[0, 0], R[0, 1], marker='o')
plt.scatter(R[0, 0], R[0, 1], marker='o', s=75)
plt.axhline(0, c='k')
plt.axvline(0, c='k')
plt.xlabel('Activity voxel 1', fontsize=13)
plt.ylabel('Activity voxel 2', fontsize=13)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.show()
```
If we do this for all patterns, we get an ordinary scatter plot of the data:
```
plt.figure(figsize=(5, 5))
plt.title("A two-dimensional space", fontsize=15)
plt.grid()
# This time, we pass all rows of R to "scatter" at once
plt.scatter(R[:, 0], R[:, 1], marker='o', s=75)
plt.axhline(0, c='k')
plt.axvline(0, c='k')
plt.xlabel('Activity voxel 1', fontsize=13)
plt.ylabel('Activity voxel 2', fontsize=13)
plt.xlim(-3, 3)
plt.ylim(-3, 3)
plt.show()
```
It is important to realize that both perspectives — as a 2D array and as a set of points in $K$-dimensional space — represent the same data! Practically, pattern analysis algorithms usually expect the data as a 2D array, but (in our experience) the operations and mechanisms implemented by those algorithms are easiest to explain and to understand from the "points in space" perspective.
You might think, "but how does this work for data with more than two features?" Well, the idea of patterns as points in space remains the same: each feature represents a new dimension (or "axis"). For three features, this means that a pattern represents a point in 3D (X, Y, Z) space; for four features, a pattern represents a point in 4D space (which you could picture as a point moving through 3D space) ... but what about a pattern with 14 features? Or 500? Such spaces are impossible to visualize or even make sense of mentally. As the famous artificial intelligence researcher Geoffrey Hinton put it:
> "To deal with ... a 14 dimensional space, visualize a 3D space and say 'fourteen' very loudly. Everyone does it." (Geoffrey Hinton)
The important thing to understand, though, is that most operations, computations, and algorithms that deal with patterns do not care about whether your data is 2D (two features) or 14D (fourteen features) — we just have to trust the mathematicians that whatever we do on 2D data will generalize to $K$-dimensional data :-)
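As a concrete illustration of this point: the Euclidean distance between two patterns — an operation used all over pattern analysis — is computed by exactly the same expression whether the patterns have 2 or 14 features. A minimal sketch (the helper function name is our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def euclidean_distance(p1, p2):
    # Same formula for any number of features (dimensions)
    return np.sqrt(np.sum((p1 - p2) ** 2))

# Two patterns with 2 features ...
d2 = euclidean_distance(rng.standard_normal(2), rng.standard_normal(2))
# ... and two patterns with 14 features: identical code, no changes needed
d14 = euclidean_distance(rng.standard_normal(14), rng.standard_normal(14))
print(d2, d14)
```

Sanity check: for the classic 3-4-5 right triangle in 2D, the function returns 5.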
That said, people still try to visualize >2D data using *dimensionality reduction* techniques. These techniques try to project data to a lower-dimensional space. For example, you can transform a dataset with 500 features (i.e., a 500-dimensional dataset) into a 2D dataset using techniques such as principal component analysis (PCA), multidimensional scaling (MDS), and t-SNE. For example, PCA tries to derive a small set of uncorrelated lower-dimensional features (e.g., 2) from linear combinations of the high-dimensional features (e.g., 4) that still represent as much variance of the high-dimensional data as possible. We'll show you an example below using an implementation of PCA from the machine learning library [scikit-learn](https://scikit-learn.org/stable/), which we'll use extensively in next week's lab:
```
from sklearn.decomposition import PCA
# Let's create a dataset with 100 samples and 4 features
R4D = np.random.normal(0, 1, size=(100, 4))
print("Shape R4D:", R4D.shape)
# We'll instantiate a PCA object that will
# transform our data into 2 components
pca = PCA(n_components=2)
# Fit and transform the data from 4D to 2D
R2D = pca.fit_transform(R4D)
print("Shape R2D:", R2D.shape)
# Plot the result
plt.figure(figsize=(5, 5))
plt.scatter(R2D[:, 0], R2D[:, 1], marker='o', s=75)
plt.axhline(0, c='k')
plt.axvline(0, c='k')
plt.xlabel('PCA component 1', fontsize=13)
plt.ylabel('PCA component 2', fontsize=13)
plt.grid()
plt.xlim(-4, 4)
plt.ylim(-4, 4)
plt.show()
```
<div class='alert alert-warning'>
<b>ToDo</b> (optional): As discussed, PCA is a specific dimensionality reduction technique that uses linear combinations of features to project the data to a lower-dimensional space with fewer "components". Linear combinations are simply weighted sums of high-dimensional features. In a 4D space that is projected to 2D, PCA component 1 might be computed as $\mathbf{R}_{j=1}\theta_{1}+\mathbf{R}_{j=2}\theta_{2}+\mathbf{R}_{j=3}\theta_{3}+\mathbf{R}_{j=4}\theta_{4}$, where $\mathbf{R}_{j=1}$ represents the first feature of $\mathbf{R}$ and $\theta_{1}$ represents the <em>weight</em> for the first feature.
The weights of the fitted PCA model can be accessed by, confusingly, <tt>pca.components_</tt> (shape: $K_{lower} \times K_{higher}$). Using these weights, can you recompute the lower-dimensional features from the higher-dimensional features yourself? Try to plot it like the figure above and check whether it matches.
</div>
```
''' Implement the (optional) ToDo here. '''
# YOUR CODE HERE
raise NotImplementedError()
```
Note that dimensionality reduction is often used for visualization, but it can also be used as a preprocessing step in pattern analyses. We'll take a look at this in more detail next week.
Alright, back to the topic of pattern extraction/estimation. You saw that preparing VBM data for (between-subject) pattern analyses is actually quite straightforward, but unfortunately, preparing functional MRI data for pattern analysis is a little more complicated. The reason is that we are dealing with time series in which different trials ($N$) are "embedded". The next section discusses different methods to "extract" (estimate) these trial-wise patterns.
## Estimating patterns
As we mentioned before, we should prepare our data as an $N$ (samples) $\times$ $K$ (features) array. With fMRI data, our data is formatted as a $X \times Y \times Z \times T$ array; we can flatten the $X$, $Y$, and $Z$ dimensions, but we still have to find a way to "extract" patterns for our $N$ trials from the time series (i.e., the $T$ dimension).
### Important side note: single trials vs. (runwise) average trials
In this section, we often assume that our "samples" refer to different *trials*, i.e., single instances of a stimulus or response (or another experimentally-related factor). This is, however, not the only option. Sometimes, researchers choose to treat multiple repetitions of a trial as a single sample, or multiple trials within a condition as a single sample. For example, suppose you design a simple passive-viewing experiment with images belonging to one of three conditions: faces, houses, and chairs. Each condition has ten exemplars (face1, face2, ..., face10, house1, house2, ..., house10, chair1, chair2, ..., chair10) and each exemplar/item is repeated six times. So, in total there are 3 (conditions) $\times$ 10 (exemplars) $\times$ 6 (repetitions) = 180 trials. Because you don't want to bore the participant to death, you split the 180 trials into two runs (90 each).
Now, there are different ways to define your samples. One is to treat every single trial as a sample (so you'll have 180 samples). Another way is to treat each exemplar as a sample. If you do so, you'll have to "pool" the pattern estimates across all 6 repetitions (so you'll have $10 \times 3 = 30$ samples). And yet another way is to treat each condition as a sample, so you'll have to pool the pattern estimates across all 6 repetitions and 10 exemplars per condition (so you'll end up with only 3 samples). Lastly, with respect to the latter two approaches, you may choose to only average repetitions and/or exemplars *within* runs. So, for two runs, you end up with either $10 \times 3 \times 2 = 60$ samples (when averaging across repetitions only) or $3 \times 2 = 6$ samples (when averaging across exemplars and repetitions).
Whether you should perform your pattern analysis on the trial, exemplar, or condition level, and whether you should estimate these patterns across runs or within runs, depends on your research question and analysis technique. For example, if you want to decode exemplars from each other, you obviously should not average across exemplars. Also, some experiments may not have different exemplars per condition (or do not have categorical conditions at all). With respect to the importance of analysis technique: when applying machine learning analyses to fMRI data, people often prefer to split their trials across many (short) runs and — if using a categorical design — prefer to estimate a single pattern per run. This is because samples from different runs are not temporally autocorrelated, and (approximately) independent samples are an important assumption in machine learning based analyses. Lastly, for any pattern analysis, averaging across different trials will increase the signal-to-noise ratio (SNR) of each sample (because you average out noise), but will decrease the statistical power of the analysis (because you have fewer samples).
Long story short: what you treat as a sample — single trials, (runwise) exemplars, or (runwise) conditions — depends on your design, research question, and analysis technique. In the rest of the tutorial, we will usually refer to samples as "trials", as this scenario is easiest to simulate and visualize, but remember that this term may equally well refer to (runwise) exemplar-average or condition-average patterns.
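The "pooling" described above is, computationally, just averaging rows of the $N \times K$ pattern array per label. A small sketch with hypothetical single-trial patterns and condition labels (all names and numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-trial patterns: 180 trials x 50 voxels
R_trials = rng.standard_normal((180, 50))
# Condition label for each trial (3 conditions, 60 trials each, interleaved)
conditions = np.tile(['face', 'house', 'chair'], 60)

# Pool (average) trials into one pattern per condition: 3 x 50
unique_conds = np.unique(conditions)
R_cond = np.vstack([R_trials[conditions == c].mean(axis=0)
                    for c in unique_conds])
print(R_cond.shape)  # (3, 50)
```

Averaging per exemplar, or per condition *within* runs, follows the same recipe with different label arrays.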
---
To make the issue of estimating patterns from time series a little more concrete, let's simulate some signals. We'll assume that we have a very simple experiment with two conditions (A, B) with ten trials each (interleaved, i.e., ABABAB...AB), a trial duration of 1 second, spaced evenly within a single run of 200 seconds (with a TR of 2 seconds, so 100 timepoints). Note that you are not necessarily limited to discrete categorical designs for all pattern analyses! While for machine learning-based methods (topic of week 2) it is common to have a design with a single categorical feature of interest (or sometimes a single continuous one), representational similarity analyses (topic of week 3) are often applied to data with more "rich" designs (i.e., designs that include many, often continuously varying, factors of interest). Also, using twenty trials is probably way too few for any pattern analysis, but it'll make the examples (and visualizations) in this section easier to understand.
Alright, let's get to it.
```
TR = 2
N = 20 # 2 x 10 trials
T = 200 # duration in seconds
# t_pad is a little baseline at the
# start and end of the run
t_pad = 10
onsets = np.linspace(t_pad, T - t_pad, N, endpoint=False)
durations = np.ones(onsets.size)
conditions = ['A', 'B'] * (N // 2)
print("Onsets:", onsets, end='\n\n')
print("Conditions:", conditions)
```
We'll use the `simulate_signal` function used in the introductory course to simulate the data. This function is like a GLM in reverse: it assumes that a signal ($R$) is generated as a linear combination between (HRF-convolved) experimental features ($\mathbf{S}$) weighted by some parameters ( $\beta$ ) plus some additive noise ($\epsilon$), and simulates the signal accordingly (you can check out the function by running `simulate_signal??` in a new code cell).
Because we simulate the signal, we can use "ground-truth" activation parameters ( $\beta$ ). In this simulation, we'll determine that the signal responds more strongly to trials of condition A ($\beta = 0.8$) than trials of condition B ($\beta = 0.2$) in *even* voxels (voxel 0, 2, etc.) and vice versa for *odd* voxels (voxel 1, 3, etc.):
```
params_even = np.array([0.8, 0.2])
params_odd = 1 - params_even
```
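The "GLM in reverse" idea behind `simulate_signal` can be sketched in a few lines: build stick predictors from the onsets, convolve them with an HRF, weight them by the $\beta$ values, and add noise. Note that this is a simplified sketch, not the actual `simulate_signal` implementation — in particular, the single-gamma HRF below is only a crude approximation of the canonical HRF, and the onsets are hypothetical:

```python
import numpy as np
from scipy.stats import gamma

# Simplified generative model: signal = (HRF-convolved design) @ betas + noise
TR, T = 2, 200
n_vols = T // TR
t_hrf = np.arange(0, 32, TR)
hrf = gamma.pdf(t_hrf, a=6)  # crude HRF approximation, peaks around 5 s

onsets_A = np.arange(10, 190, 18)  # hypothetical onsets, condition A
onsets_B = onsets_A + 9            # condition B, interleaved

# Stick predictors, one column per condition
S = np.zeros((n_vols, 2))
S[(onsets_A // TR).astype(int), 0] = 1
S[(onsets_B // TR).astype(int), 1] = 1
# Convolve each column with the HRF (truncate to run length)
S = np.apply_along_axis(lambda s: np.convolve(s, hrf)[:n_vols], 0, S)

betas = np.array([0.8, 0.2])  # condition amplitudes, as for "even" voxels
rng = np.random.default_rng(2)
signal = S @ betas + rng.normal(0, 0.25, size=n_vols)
print(signal.shape)  # (100,)
```

Fitting a GLM to such a signal should recover parameters close to the `betas` used to generate it.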
<div class='alert alert-info'>
<b>ToThink</b> (0 points): Given these simulation parameters, how do you think that the corresponding $N\times K$ pattern array ($\mathbf{R}$) would roughly look like visually (assuming an efficient pattern estimation method)?
</div>
Alright, let's simulate some data for, say, four voxels ($K = 4$). (Again, you'll usually perform pattern analyses on many more voxels.)
```
from niedu.utils.nii import simulate_signal
K = 4
ts = []
for i in range(K):
    # Google "Python modulo" to figure out
    # what the line below does!
    is_even = (i % 2) == 0
    sig, _ = simulate_signal(
        onsets,
        conditions,
        duration=T,
        plot=False,
        std_noise=0.25,
        params_canon=params_even if is_even else params_odd
    )
    ts.append(sig[:, np.newaxis])
# ts = timeseries
ts = np.hstack(ts)
print("Shape of simulated signals: ", ts.shape)
```
And let's plot these voxels. We'll show the trial onsets as arrows (red = condition A, orange = condition B):
```
import seaborn as sns
fig, axes = plt.subplots(ncols=K, sharex=True, sharey=True, figsize=(10, 12))
t = np.arange(ts.shape[0])
for i, ax in enumerate(axes.flatten()):
    # Plot signal
    ax.plot(ts[:, i], t, marker='o', ms=4, c='tab:blue')
    # Plot trial onsets (as arrows)
    for ii, to in enumerate(onsets):
        color = 'tab:red' if ii % 2 == 0 else 'tab:orange'
        ax.arrow(-1.5, to / TR, dx=0.5, dy=0, color=color, head_width=0.75, head_length=0.25)
    ax.set_xlim(-1.5, 2)
    ax.set_ylim(0, ts.shape[0])
    ax.grid(True)
    ax.set_title(f'Voxel {i+1}', fontsize=15)
    ax.invert_yaxis()
    if i == 0:
        ax.set_ylabel("Time (volumes)", fontsize=20)
# Common axis labels
fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20)
fig.tight_layout()
sns.despine()
plt.show()
```
<div class='alert alert-success'>
<b>Tip</b>: Matplotlib is a very flexible plotting package, but arguably at the expense of how fast you can implement something. <a href="https://seaborn.pydata.org/">Seaborn</a> is a great package (built on top of Matplotlib) that offers some neat functionality that makes your life easier when plotting in Python. For example, we used the <tt>despine</tt> function to remove the top and right spines to make our plot a little nicer. In this course, we'll mostly use Matplotlib, but we just wanted to make you aware of this awesome package.
</div>
Alright, now we can start discussing methods for pattern estimation! Unfortunately, as pattern analyses are relatively new, there is no consensus yet about the "best" method for pattern estimation. In fact, there exist many different methods, which we can roughly divide into two types:
1. Timepoint-based methods (for lack of a better name) and
2. GLM-based methods
We'll discuss both of them, but spend a little more time on the latter set of methods as they are more complicated (and are more popular).
### Timepoint-based methods
Timepoint-based methods "extract" patterns by simply using a single timepoint (e.g., 6 seconds after stimulus presentation) or (an average of) multiple timepoints (e.g., 4, 6, and 8 seconds after stimulus presentation).
Below, we visualize what a single-timepoint method would look like (assuming that we'd want to extract the timepoint 6 seconds after stimulus presentation, i.e., around the assumed peak of the BOLD response). The stars represent the values that we would extract (red for condition A, orange for condition B). Note that we only plot the first 50 volumes.
```
fig, axes = plt.subplots(ncols=4, sharex=True, sharey=True, figsize=(10, 12))
t_fmri = np.linspace(0, T, ts.shape[0], endpoint=False)
t = np.arange(ts.shape[0])
for i, ax in enumerate(axes.flatten()):
    # Plot signal
    ax.plot(ts[:, i], t, marker='o', ms=4, c='tab:blue')
    # Plot trial onsets (as arrows) and extracted values (as stars)
    for ii, to in enumerate(onsets):
        plus6 = np.interp(to + 6, t_fmri, ts[:, i])
        color = 'tab:red' if ii % 2 == 0 else 'tab:orange'
        ax.arrow(-1.5, to / TR, dx=0.5, dy=0, color=color, head_width=0.75, head_length=0.25)
        ax.plot([plus6, plus6], [(to + 6) / TR, (to + 6) / TR], marker='*', ms=15, c=color)
    ax.set_xlim(-1.5, 2)
    ax.set_ylim(0, ts.shape[0] // 2)
    ax.grid(True)
    ax.set_title(f'Voxel {i+1}', fontsize=15)
    ax.invert_yaxis()
    if i == 0:
        ax.set_ylabel("Time (volumes)", fontsize=20)
# Common axis labels
fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20)
fig.tight_layout()
sns.despine()
plt.show()
```
Now, extracting these timepoints 6 seconds after stimulus presentation is easy when this timepoint is a multiple of the scan's TR (here: 2 seconds). For example, to extract the value for the first trial (onset: 10 seconds), we simply take the value at index 8 of our time series (using 0-based indexing), because $(10 + 6) / 2 = 8$. But what if our trial onset + 6 seconds is *not* a multiple of the TR, such as with trial 2 (onset: 19 seconds)? Well, we can interpolate this value! We will use the same function for this operation as we did for slice-timing correction (in the previous course): `interp1d` from the `scipy.interpolate` module.
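The index arithmetic for the TR-aligned case, and the interpolation needed for the misaligned case, can be sketched with NumPy's `np.interp` (a simpler one-off alternative to the `interp1d` approach used in this notebook). The time series below is hypothetical:

```python
import numpy as np

TR = 2
ts_1vox = np.sin(np.arange(100) * 0.3)  # hypothetical single-voxel time series
t_fmri = np.arange(100) * TR            # acquisition times: 0, 2, ..., 198 s

# Onset 10 s: 10 + 6 = 16 s is a multiple of the TR, so plain indexing works
val_trial1 = ts_1vox[(10 + 6) // TR]

# Onset 19 s: 19 + 6 = 25 s falls between the volumes at 24 s and 26 s,
# so we linearly interpolate between those two values
val_trial2 = np.interp(19 + 6, t_fmri, ts_1vox)
print(val_trial1, val_trial2)
```

Because 25 s lies exactly halfway between two acquisitions, the interpolated value here is simply the average of the two surrounding samples.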
To refresh your memory: this function takes the timepoints associated with the values (or "frame_times" in Nilearn lingo) and the values itself to generate a new object which we'll later use to do the actual (linear) interpolation. First, let's define the timepoints:
```
t_fmri = np.linspace(0, T, ts.shape[0], endpoint=False)
```
<div class='alert alert-warning'>
<b>ToDo</b> (1 point): The above timepoints assume that all data was acquired at the onset of the volume acquisition ($t=0$, $t=2$, etc.). Suppose that we actually slice-time corrected our data to the middle slice, i.e., the 18th slice (out of 36 slices) — create a new array (using <tt>np.linspace</tt> with timepoints that reflect these slice-time corrected acquisition onsets) and store it in a variable named <tt>t_fmri_middle_slice</tt>.
</div>
```
''' Implement your ToDo here. '''
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
from niedu.tests.nipa.week_1 import test_frame_times_stc
test_frame_times_stc(TR, T, ts.shape[0], t_fmri_middle_slice)
```
For now, let's assume that all data was actually acquired at the start of the volume ($t=0$, $t=2$, etc.). We can "initialize" our interpolator by giving it both the timepoints (`t_fmri`) and the data (`ts`). Note that `ts` is not a single time series, but a 2D array with time series for four voxels (across different columns). By specifying `axis=0`, we tell `interp1d` that the first axis represents the axis that we want to interpolate later:
```
from scipy.interpolate import interp1d
interpolator = interp1d(t_fmri, ts, axis=0)
```
Now, we can give the `interpolator` object any set of timepoints and it will return the linearly interpolated values associated with these timepoints for all four voxels. Let's do this for our trial onsets plus six seconds:
```
onsets_plus_6 = onsets + 6
R_plus6 = interpolator(onsets_plus_6)
print("Shape extracted pattern:", R_plus6.shape)
fig, ax = plt.subplots(figsize=(2, 10))
mapp = ax.imshow(R_plus6)
cbar = fig.colorbar(mapp)
cbar.set_label('Feature value', fontsize=13, rotation=270, labelpad=15)
ax.set_yticks(np.arange(N))
ax.set_xticks(np.arange(K))
ax.set_title(r"$\mathbf{R}$", fontsize=20)
ax.set_xlabel('Voxels', fontsize=15)
ax.set_ylabel('Trials', fontsize=15)
plt.show()
```
Yay, we have extracted our first pattern! Does it look like what you expected given the known mean amplitude of the trials from the two conditions ($\beta_{\mathrm{A,even}} = 0.8, \beta_{\mathrm{B,even}} = 0.2$ and vice versa for odd voxels)?
<div class='alert alert-warning'>
<b>ToDo</b> (3 points): An alternative to the single-timepoint method is to extract, per trial, the <em>average</em> activity within a particular time window, for example 5-7 seconds post-stimulus. One way to do this is by performing interpolation in steps of (for example) 0.1 within the 5-7 post-stimulus time window (i.e., $5.0, 5.1, 5.2, \dots , 6.8, 6.9, 7.0$) and subsequently averaging these values, per trial, into a single activity estimate. Below, we defined these different steps (<tt>t_post_stimulus</tt>) for you already. Use the <tt>interpolator</tt> object to extract the timepoints for these different post-stimulus times relative to our onsets (<tt>onsets</tt> variable) from our data (<tt>ts</tt> variable). Store the extracted patterns in a new variable called <tt>R_av</tt>.
Note: this is a relatively difficult ToDo! Consider skipping it if it takes too long.
</div>
```
''' Implement your ToDo here. '''
t_post_stimulus = np.linspace(5, 7, 21, endpoint=True)
print(t_post_stimulus)
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
from niedu.tests.nipa.week_1 import test_average_extraction
test_average_extraction(onsets, ts, t_post_stimulus, interpolator, R_av)
```
These timepoint-based methods are relatively simple to implement and computationally efficient. Another variation that you might see in the literature is that extracted (averages of) timepoints are baseline-subtracted ($\mathbf{R}_{i} - \mathrm{baseline}_{i}$) or baseline-normalized ($\frac{\mathbf{R}_{i}}{\mathrm{baseline}_{i}}$), where the baseline is usually chosen to be at the stimulus onset or a small window before the stimulus onset. This technique is, as far as we know, not very popular, so we won't discuss it any further in this lab.
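For completeness, the baseline-subtraction variant mentioned above can be sketched as follows: per trial, subtract the interpolated activity at stimulus onset from the activity 6 seconds post-stimulus. This is a toy sketch with simulated data and hypothetical onsets, not code from the lab itself:

```python
import numpy as np
from scipy.interpolate import interp1d

rng = np.random.default_rng(3)
ts_sim = rng.standard_normal((100, 4))  # hypothetical time series, 4 voxels
t_fmri = np.arange(100) * 2.0           # TR = 2 s
onsets_sim = np.linspace(10, 180, 20)   # hypothetical trial onsets

interp = interp1d(t_fmri, ts_sim, axis=0)

# Baseline-subtracted patterns: activity 6 s post-stimulus minus
# activity at stimulus onset, per trial and per voxel
R_peak = interp(onsets_sim + 6)
R_base = interp(onsets_sim)
R_bs = R_peak - R_base
print(R_bs.shape)  # (20, 4)
```

Baseline *normalization* would divide by `R_base` instead of subtracting it.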
### GLM-based methods
One big disadvantage of timepoint-based methods is that they cannot disentangle activity due to different sources (such as trials that are close in time), which is a major problem for fast (event-related) designs. For example, if you present a trial at $t=10$ and another at $t=12$ and subsequently extract the pattern six seconds post-stimulus (at $t=18$ for the second trial), then the activity estimate for the second trial is definitely going to contain activity due to the first trial because of the sluggishness of the HRF.
As such, nowadays GLM-based pattern estimation techniques, which *can* disentangle the contribution of different sources, are more popular than timepoint-based methods. (Although, technically, you can use timepoint-based methods using the GLM with FIR-based designs, but that's beyond the scope of this course.) Again, there are multiple flavors of GLM-based pattern estimation, of which we'll discuss the two most popular ones.
#### Least-squares all (LSA)
The most straightforward GLM-based pattern estimation technique is to fit a single GLM with a design matrix that contains one or more regressors for each sample that you want to estimate (in addition to any confound regressors). The estimated parameters ($\hat{\beta}$) corresponding to our samples from this GLM — representing the relative (de)activation of each voxel for each trial — will then represent our patterns!
This technique is often referred to as "least-squares all" (LSA). Note that, as explained before, a sample can refer to either a single trial, a set of repetitions of a particular exemplar, or even a single condition. For now, we'll assume that samples refer to single trials. Often, each sample is modelled by a single (canonical) HRF-convolved regressor (but you could also use more than one regressor, e.g., using a basis set with temporal/dispersion derivatives or a FIR-based basis set), so we'll focus on this approach.
Let's go back to our simulated data. We have a single run containing 20 trials, so ultimately our design matrix should contain twenty columns: one for every trial. We can use the `make_first_level_design_matrix` function from Nilearn to create the design matrix. Importantly, we should make sure to give each trial a separate, unique "trial_type" value. If we don't do this (e.g., if we set the trial type to the trial condition: "A" or "B"), then Nilearn won't create separate regressors for our trials.
```
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix
# We have to create a dataframe with onsets/durations/trial_types
# No need for modulation!
events_sim = pd.DataFrame(onsets, columns=['onset'])
events_sim.loc[:, 'duration'] = 1
events_sim.loc[:, 'trial_type'] = ['trial_' + str(i).zfill(2) for i in range(1, N+1)]
# lsa_dm = least squares all design matrix
lsa_dm = make_first_level_design_matrix(
frame_times=t_fmri, # we defined this earlier for interpolation!
events=events_sim,
hrf_model='glover',
drift_model=None # assume data is already high-pass filtered
)
# Check out the created design matrix
# Note that the index represents the frame times
lsa_dm
```
Note that the design matrix contains 21 regressors: 20 trialwise regressors and an intercept (the last column). Let's also plot it using Nilearn:
```
from nilearn.plotting import plot_design_matrix
plot_design_matrix(lsa_dm);
```
And, while we're at it, plot it as time series (rather than a heatmap):
```
fig, ax = plt.subplots(figsize=(12, 12))
for i in range(lsa_dm.shape[1]):
    ax.plot(i + lsa_dm.iloc[:, i], np.arange(ts.shape[0]))
ax.set_title("LSA design matrix", fontsize=20)
ax.set_ylim(0, lsa_dm.shape[0]-1)
ax.set_xlabel('')
ax.set_xticks(np.arange(N+1))
ax.set_xticklabels(['trial ' + str(i+1) for i in range(N)] + ['icept'], rotation=-90)
ax.invert_yaxis()
ax.grid()
ax.set_ylabel("Time (volumes)", fontsize=15)
plt.show()
```
<div class='alert alert-warning'>
<b>ToDo/ToThink</b> (2 points): One "problem" with LSA-type design matrices, especially in fast event-related designs, is that they are not very statistically <em>efficient</em>, i.e., they lead to relatively high-variance estimates of your parameters ($\hat{\beta}$), mainly due to relatively high predictor variance. Because we used a fixed inter-trial interval (here: 9 seconds), the correlations between "adjacent" trials are (approximately) the same. <br>
Compute the correlation between, for example, the predictors associated with trial 1 and trial 2, using the <tt>pearsonr</tt> function imported below, and store it in a variable named <tt>corr_t1t2</tt> (1 point). Then, try to think of a way to improve the efficiency of this particular LSA design and write it down in the cell below the test cell.
</div>
```
''' Implement your ToDO here. '''
# For more info about the `pearsonr` function, check
# https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.pearsonr.html
# Want a challenge? Try to compute the correlation from scratch!
from scipy.stats import pearsonr
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the ToDo above. '''
from niedu.tests.nipa.week_1 import test_t1t2_corr
test_t1t2_corr(lsa_dm, corr_t1t2)
```
YOUR ANSWER HERE
Alright, let's actually fit the model! When dealing with real fMRI data, we'd use Nilearn to fit our GLM, but for now, we'll just use our own implementation of an (OLS) GLM. Note that we can actually fit a *single* GLM for all voxels at the same time by using `ts` (a $T \times K$ matrix) as our dependent variable due to the magic of linear algebra. In other words, we can run $K$ OLS models at once!
```
# Let's use 'X', because it's shorter
X = lsa_dm.values
# Note we can fit our GLM for all K voxels at
# the same time! As such, betas is not a vector,
# but an n_regressor x k_voxel matrix!
beta_hat_all = np.linalg.inv(X.T @ X) @ X.T @ ts
print("Shape beta_hat_all:", beta_hat_all.shape)
# Ah, the beta for the intercept is still in there
# Let's remove it
beta_icept = beta_hat_all[-1, :]
beta_hat = beta_hat_all[:-1, :]
print("Shape beta_hat (intercept removed):", beta_hat.shape)
```
Alright, let's visualize the estimated parameters ($\hat{\beta}$). We'll do this by plotting the scaled regressors (i.e., $X_{j}\hat{\beta}_{j}$) on top of the original signal. Each differently colored line represents a different regressor (so a different trial):
```
fig, axes = plt.subplots(ncols=4, sharex=True, sharey=True, figsize=(10, 12))
t = np.arange(ts.shape[0])
for i, ax in enumerate(axes.flatten()):
    # Plot signal
    ax.plot(ts[:, i], t, marker='o', ms=4, lw=0.5, c='tab:blue')
    # Plot trial onsets (as arrows)
    for ii, to in enumerate(onsets):
        color = 'tab:red' if ii % 2 == 0 else 'tab:orange'
        ax.arrow(-1.5, to / TR, dx=0.5, dy=0, color=color, head_width=0.75, head_length=0.25)
    # Compute x*beta for icept only
    scaled_icept = lsa_dm.iloc[:, -1].values * beta_icept[i]
    for ii in range(N):
        this_x = lsa_dm.iloc[:, ii].values
        # Compute x*beta for this particular trial (ii)
        xb = scaled_icept + this_x * beta_hat[ii, i]
        ax.plot(xb, t, lw=2)
    ax.set_xlim(-1.5, 2)
    ax.set_ylim(0, ts.shape[0] // 2)
    ax.grid(True)
    ax.set_title(f'Voxel {i+1}', fontsize=15)
    ax.invert_yaxis()
    if i == 0:
        ax.set_ylabel("Time (volumes)", fontsize=20)
# Common axis labels
fig.text(0.425, -.03, "Activation (A.U.)", fontsize=20)
fig.tight_layout()
sns.despine()
plt.show()
```
Ultimately, though, the estimated GLM parameters are just another way to estimate our pattern array ($\mathbf{R}$) — this time, we just estimated it using a different method (GLM-based) than before (timepoint-based). Therefore, let's visualize this array as we did with the other methods:
```
fig, ax = plt.subplots(figsize=(2, 10))
mapp = ax.imshow(beta_hat)
cbar = fig.colorbar(mapp)
cbar.set_label(r'$\hat{\beta}$', fontsize=25, rotation=0, labelpad=10)
ax.set_yticks(np.arange(N))
ax.set_xticks(np.arange(K))
ax.set_title(r"$\mathbf{R}$", fontsize=20)
ax.set_xlabel('Voxels', fontsize=15)
ax.set_ylabel('Trials', fontsize=15)
plt.show()
```
<div class='alert alert-warning'>
<b>ToDo</b> (optional, 0 points): It would be nice to visualize the patterns, but this is very hard because we have four dimensions (because we have four voxels)! <br><br>PCA to the rescue! Run PCA on the estimated patterns (<tt>beta_hat</tt>) and store the PCA-transformed array (shape: $20 \times 2$) in a variable named <tt>beta_hat_2d</tt>. Then, try to plot the first two components as a scatterplot. Make it even nicer by plotting the trials from condition A as red points and trials from condition B as orange points.
</div>
```
# YOUR CODE HERE
raise NotImplementedError()
from niedu.tests.nipa.week_1 import test_pca_beta_hat
test_pca_beta_hat(beta_hat, beta_hat_2d)
```
#### Noise normalization
One commonly used preprocessing step for pattern analyses (using GLM-estimation methods) is "noise normalization" of the estimated patterns. There are two flavours: "univariate" and "multivariate" noise normalization. In univariate noise normalization, the estimated parameters ($\hat{\beta}$) are divided (normalized) by the standard deviation of the estimated parameters — which you might recognize as the formula for $t$-values (for a contrast against baseline)!
\begin{align}
t_{c\hat{\beta}} = \frac{c\hat{\beta}}{\sqrt{\hat{\sigma}^{2}c(X^{T}X)^{-1}c^{T}}}
\end{align}
where $\hat{\sigma}^{2}$ is the estimate of the error variance (sum of squared errors divided by the degrees of freedom) and $c(X^{T}X)^{-1}c^{T}$ is the "design variance". Sometimes people disregard the design variance and the degrees of freedom (DF) and instead only use the standard deviation of the noise:
\begin{align}
t_{c\hat{\beta}} \approx \frac{c\hat{\beta}}{\sqrt{\sum (y_{i} - X_{i}\hat{\beta})^{2}}}
\end{align}
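To make the first formula concrete, here is a small self-contained NumPy sketch (with made-up data, not the simulation above) that computes a $t$-value for a contrast against baseline, including the design-variance term $c(X^{T}X)^{-1}c^{T}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy design: 100 timepoints, 2 predictors + intercept (made-up data)
X = np.column_stack([rng.normal(size=(100, 2)), np.ones(100)])
y = rng.normal(size=100)

# OLS parameter estimates
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

# Error variance estimate: sum of squared errors / degrees of freedom
resid = y - X @ beta_hat
dof = X.shape[0] - X.shape[1]
sigma_hat_sq = np.sum(resid ** 2) / dof

# Contrast of the first predictor against baseline
c = np.array([1.0, 0.0, 0.0])
design_var = c @ np.linalg.inv(X.T @ X) @ c
t_val = (c @ beta_hat) / np.sqrt(sigma_hat_sq * design_var)
print(t_val)
```

The approximate version below simply drops `design_var` and `dof` and keeps only the standard deviation of the residuals.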
<div class='alert alert-info'>
<b>ToThink</b> (1 point): When experiments use a fixed ISI (in the context of single-trial GLMs), the omission of the design variance in univariate noise normalization is warranted. Explain why.
</div>
YOUR ANSWER HERE
Either way, this univariate noise normalization is a way to "down-weight" the uncertain (noisy) parameter estimates. Although this type of univariate noise normalization seems to lead to better results in both decoding and RSA analyses (e.g., [Misaki et al., 2010](https://www.ncbi.nlm.nih.gov/pubmed/20580933)), the jury is still out on this issue.
Multivariate noise normalization will be discussed in week 3 (RSA), so let's focus for now on the implementation of univariate noise normalization using the approximate method (which disregards design variance). To compute the standard deviation of the noise ($\sqrt{\sum (y_{i} - X_{i}\hat{\beta})^{2}}$), we first need to compute the noise, i.e., the unexplained variance ($y - X\hat{\beta}$) also known as the residuals:
```
residuals = ts - X @ beta_hat_all
print("Shape residuals:", residuals.shape)
```
So, for each voxel ($K=4$), we have a timeseries ($T=100$) with unexplained variance ("noise"). Now, to get the standard deviation across all voxels, we can do the following:
```
std_noise = np.std(residuals, axis=0)
print("Shape noise std:", std_noise.shape)
```
To do the actual normalization step, we simply divide the columns of the pattern matrix (`beta_hat`, which we estimated before) by the estimated noise standard deviation:
```
# unn = univariate noise normalization
# Note that we don't have to do this for each trial (row) separately
# due to Numpy broadcasting!
R_unn = beta_hat / std_noise
print("Shape R_unn:", R_unn.shape)
```
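As a quick aside, the broadcasting trick used above can be verified on a tiny made-up array: dividing an $(N, K)$ array by a length-$K$ vector divides every row elementwise, which is exactly the per-voxel normalization we need:

```python
import numpy as np

A = np.arange(8.0).reshape(4, 2)  # 4 "trials" x 2 "voxels" (made-up)
s = np.array([1.0, 2.0])          # per-"voxel" standard deviation

# NumPy broadcasting stretches s along the trial axis,
# so each column of A is divided by its own scalar
out = A / s
print(out.shape)  # (4, 2)
```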
And let's visualize it:
```
fig, ax = plt.subplots(figsize=(2, 10))
mapp = ax.imshow(R_unn)
cbar = fig.colorbar(mapp)
cbar.set_label(r'$t$', fontsize=25, rotation=0, labelpad=10)
ax.set_yticks(np.arange(N))
ax.set_xticks(np.arange(K))
ax.set_title(r"$\mathbf{R}_{unn}$", fontsize=20)
ax.set_xlabel('Voxels', fontsize=15)
ax.set_ylabel('Trials', fontsize=15)
plt.show()
```
<div class='alert alert-info'>
<b>ToThink</b> (1 point): In fact, univariate noise normalization didn't really change the pattern matrix much. Why do you think this is the case for our simulation data? Hint: check out the parameters for the simulation.
</div>
YOUR ANSWER HERE
#### LSA on real data
Alright, enough with all that fake data — let's work with some real data! We'll use the face perception task data from the *NI-edu* dataset, which we briefly mentioned in the fMRI-introduction course.
In the face perception task, participants were presented with images of faces (from the publicly available [Face Research Lab London Set](https://figshare.com/articles/Face_Research_Lab_London_Set/5047666)). In total, frontal face images from 40 different people ("identities") were used, which were either without expression ("neutral") or were smiling. Each face image (from in total 80 faces, i.e., 40 identities $\times$ 2, neutral/smiling) was shown, per participant, 6 times across the 12 runs (3 times per session).
<div class='alert alert-info'>
<b>Mini ToThink</b> (0 points): Why do you think we show the same image multiple times?
</div>
Identities were counterbalanced in terms of biological sex (male vs. female) and ethnicity (Caucasian vs. East-Asian vs. Black). The Face Research Lab London Set also contains the age of the people in the stimulus dataset and (average) attractiveness ratings for all faces from an independent set of raters. In addition, we also had our own participants rate the faces on perceived attractiveness, dominance, and trustworthiness after each session (rating each face, on each dimension, four times in total for robustness). The stimuli were chosen such that we have many different attributes that we could use to model brain responses (e.g., identity, expression, ethnicity, age, average attractiveness, and subjective/personal perceived attractiveness/dominance/trustworthiness).
In this paradigm, stimuli were presented for 1.25 seconds and had a fixed interstimulus interval (ISI) of 3.75 seconds. While sub-optimal for univariate "detection-based" analyses, we used a fixed ISI — rather than jittered — to make sure it can also be used for "single-trial" multivariate analyses. Each run contained 40 stimulus presentations. To keep the participants attentive, a random selection of 5 stimuli (out of 40) were followed by a rating on either perceived attractiveness, dominance, or trustworthiness using a button-box with eight buttons (four per hand) lasting 2.5 seconds. After the rating, a regular ISI of 3.75 seconds followed. See the figure below for a visualization of the paradigm.

First, let's set up all the data that we need for our LSA model. Let's see where our data is located:
```
import os
data_dir = os.path.join(os.path.expanduser('~'), 'NI-edu-data')
print("Downloading Fmriprep data (+- 175MB) ...\n")
!aws s3 sync --no-sign-request s3://openneuro.org/ds003477 {data_dir} --exclude "*" --include "sub-03/ses-1/func/*task-face*run-1*events.tsv"
!aws s3 sync --no-sign-request s3://openneuro.org/ds003477 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*space-T1w*bold.nii.gz"
!aws s3 sync --no-sign-request s3://openneuro.org/ds003477 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*space-T1w*mask.nii.gz"
!aws s3 sync --no-sign-request s3://openneuro.org/ds003477 {data_dir} --exclude "*" --include "derivatives/fmriprep/sub-03/ses-1/func/*task-face*run-1*confounds_timeseries.tsv"
print("\nDone!")
```
As you can see, it contains both "raw" (not-preprocessed) subject data (e.g., sub-03) and derivatives, which include Fmriprep-preprocessed data:
```
fprep_sub03 = os.path.join(data_dir, 'derivatives', 'fmriprep', 'sub-03')
print("Contents derivatives/fmriprep/sub-03:", os.listdir(fprep_sub03))
```
There is preprocessed anatomical data and session-specific functional data:
```
fprep_sub03_ses1_func = os.path.join(fprep_sub03, 'ses-1', 'func')
contents = sorted(os.listdir(fprep_sub03_ses1_func))
print("Contents ses-1/func:", '\n'.join(contents))
```
That's a lot of data! Importantly, we will only use the "face" data ("task-face") in T1 space ("space-T1w"), meaning that this data has not been normalized to a common template (unlike the "space-MNI152NLin2009cAsym" data). Here, we'll only analyze the first-run ("run-1") data. Let's define the functional data, the associated functional brain mask (a binary image indicating which voxels are brain and which are not), and the file with timepoint-by-timepoint confounds (such as motion parameters):
```
func = os.path.join(fprep_sub03_ses1_func, 'sub-03_ses-1_task-face_run-1_space-T1w_desc-preproc_bold.nii.gz')
# Notice this neat little trick: we use the string method "replace" to define
# the functional brain mask
func_mask = func.replace('desc-preproc_bold', 'desc-brain_mask')
confs = os.path.join(fprep_sub03_ses1_func, 'sub-03_ses-1_task-face_run-1_desc-confounds_timeseries.tsv')
confs_df = pd.read_csv(confs, sep='\t')
confs_df
```
Finally, we need the events-file with onsets, durations, and trial-types for this particular run:
```
events = os.path.join(data_dir, 'sub-03', 'ses-1', 'func', 'sub-03_ses-1_task-face_run-1_events.tsv')
events_df = pd.read_csv(events, sep='\t')
events_df.query("trial_type != 'rating' and trial_type != 'response'")
```
Now, it's up to you to use this data to fit an LSA model!
<div class='alert alert-warning'>
<b>ToDo</b> (2 points): in this first ToDo, you define your events and the confounds you want to include.<br>
1. Remove all columns except "onset", "duration", and "trial_type". You should end up with a DataFrame with 40 rows and 3 columns. You can check this with the <tt>.shape</tt> attribute of the DataFrame. (Note that, technically, you could model the response and rating-related events as well! For now, we'll exclude them.) Name this filtered DataFrame <tt>events_df_filt</tt>.
2. You also need to select specific columns from the confounds DataFrame, as we don't want to include <em>all</em> confounds! For now, include only the motion parameters (<tt>trans_x, trans_y, trans_z, rot_x, rot_y, rot_z</tt>). You should end up with a confounds DataFrame with 342 rows and 6 columns. Name this filtered DataFrame <tt>confs_df_filt</tt>.
</div>
```
''' Implement your ToDo here. '''
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
assert(events_df_filt.shape == (40, 3))
assert(events_df_filt.columns.tolist() == ['onset', 'duration', 'trial_type'])
assert(confs_df_filt.shape == (confs_df.shape[0], 6))
assert(all('trans' in col or 'rot' in col for col in confs_df_filt.columns))
print("Well done!")
```
<div class='alert alert-warning'>
<b>ToDo</b> (2 points): in this Todo, you'll fit your model! Define a <tt>FirstLevelModel</tt> object, name this <tt>flm_todo</tt> and make sure you do the following:<br>
1. Set the correct TR (this is 0.7)
2. Set the slice time reference to 0.5
3. Set the mask image to the one we defined before
4. Use a "glover" HRF
5. Use a "cosine" drift model with a cutoff of 0.01 Hz
6. Do not apply any smoothing
7. Set minimize_memory to true
8. Use an "ols" noise model
Then, fit your model using the functional data (<tt>func</tt>), filtered confounds, and filtered events we defined before.
</div>
```
''' Implement your ToDo here. '''
# Ignore the DeprecationWarning!
from nilearn.glm.first_level import FirstLevelModel
# YOUR CODE HERE
raise NotImplementedError()
""" Tests the above ToDo. """
from niedu.tests.nipa.week_1 import test_lsa_flm
test_lsa_flm(flm_todo, func_mask, func, events_df_filt, confs_df_filt)
```
<div class='alert alert-warning'>
<b>ToDo</b> (2 points): in this Todo, you'll run the single-trial contrasts ("against baseline"). To do so, write a for-loop in which you call the <tt>compute_contrast</tt> method every iteration with a new contrast definition for a new trial. Make sure to output the "betas" (by using <tt>output_type='effect_size'</tt>).
Note that the <tt>compute_contrast</tt> method returns the "unmasked" results (i.e., from all voxels). Make sure that, for each trial, you mask the results using the <tt>func_mask</tt> variable and the <tt>apply_mask</tt> function from Nilearn. Save these masked results (which should be patterns of 66298 voxels) for each trial. After the loop, stack all results in a 2D array with the different trials in different rows and the (flattened) voxels in columns. This array should be of shape 40 (trials) by 65643 (nr. of masked voxels). The variable name of this array should be <tt>R_todo</tt>.
</div>
```
''' Implement your ToDo here. '''
from nilearn.masking import apply_mask
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
from niedu.tests.nipa.week_1 import test_lsa_R
test_lsa_R(R_todo, events_df_filt, flm_todo, func_mask)
```
<div class='alert alert-success'>
<b>Disclaimer</b>: In this ToDo, we asked you <em>not</em> to spatially smooth the data. This is often recommended for pattern analyses, as they arguably use information that is encoded in finely distributed patterns. However, several studies have shown that smoothing may sometimes benefit pattern analyses (e.g., <a href="https://www.frontiersin.org/articles/10.3389/fneur.2017.00222/full">Hendriks et al., 2017</a>). In general, in line with the <a href="https://en.wikipedia.org/wiki/Matched_filter">matched filter theorem</a>, we recommend smoothing your data with a kernel matched to how fine-grained you think your experimental feature is encoded in the brain patterns.
</div>
## Dealing with trial correlations
When working with single-trial experimental designs (such as the LSA designs discussed previously), a common problem is correlation between trial predictors and, consequently, between their resulting estimates. Trial correlations in such designs occur when the inter-stimulus interval (ISI) is short enough that trial predictors overlap in time and thus correlate. This, in turn, leads to relatively unstable (high-variance) pattern estimates and, as we will see later in this section, trial patterns that correlate with each other (which is sometimes called [pattern drift](https://www.biorxiv.org/content/10.1101/032391v2)).
This is also the case in our data from the NI-edu dataset. In the "face" task, stimuli were presented for 1.25 seconds, followed by a 3.75-second ISI, which causes a slightly positive correlation between a given trial ($i$) and the next trial ($i + 1$), and a slightly negative correlation with the trial after that ($i + 2$). We'll show this below by visualizing the correlation matrix of the design matrix:
```
dm_todo = pd.read_csv('dm_todo.tsv', sep='\t')
dm_todo = dm_todo.iloc[:, :40]
fig, ax = plt.subplots(figsize=(8, 8))
# Slightly exaggerate by setting the limits to (-.3, .3)
mapp = ax.imshow(dm_todo.corr(), vmin=-0.3, vmax=0.3)
# Some styling
ax.set_xticks(range(dm_todo.shape[1]))
ax.set_xticklabels(dm_todo.columns, rotation=90)
ax.set_yticks(range(dm_todo.shape[1]))
ax.set_yticklabels(dm_todo.columns)
cbar = plt.colorbar(mapp, shrink=0.825)
cbar.ax.set_ylabel('Correlation', fontsize=15, rotation=-90)
plt.show()
```
<div class='alert alert-info'>
<b>ToThink</b> (1 point): Explain why trials (at index $i$) correlate slightly <em>negatively</em> with the second trial coming after them (at index $i + 2$). Hint: try to plot it!
</div>
YOUR ANSWER HERE
The trial-by-trial correlation structure in the design leads to a trial-by-trial correlation structure in the estimated patterns as well (as explained by [Soch et al., 2020](https://www.sciencedirect.com/science/article/pii/S1053811919310407)). We show this below by computing and visualizing the $N \times N$ correlation matrix of the patterns:
```
# Load in R_todo if you didn't manage to do the
# previous ToDo
R_todo = np.load('R_todo.npy')
# Compute the NxN correlation matrix
R_corr = np.corrcoef(R_todo)
fig, ax = plt.subplots(figsize=(8, 8))
mapp = ax.imshow(R_corr, vmin=-1, vmax=1)
# Some styling
ax.set_xticks(range(dm_todo.shape[1]))
ax.set_xticklabels(dm_todo.columns, rotation=90)
ax.set_yticks(range(dm_todo.shape[1]))
ax.set_yticklabels(dm_todo.columns)
cbar = plt.colorbar(mapp, shrink=0.825)
cbar.ax.set_ylabel('Correlation', fontsize=15, rotation=-90)
plt.show()
```
This correlation structure across trials poses a problem for representational similarity analysis (the topic of week 3) especially. Although this issue is still debated and far from solved, in this section we highlight two possible solutions to this problem: least-squares separate designs and temporal "uncorrelation".
### Least-squares separate (LSS)
The least-squares separate (LSS) design is a slight modification of the LSA design ([Mumford et al., 2014](https://www.sciencedirect.com/science/article/pii/S105381191400768X)). In LSS, you fit a separate model per trial. Each model contains one regressor for the trial that you want to estimate and, for each condition in your experimental design (in case of a categorical design), another regressor containing all other trials.
So, suppose you have a run with 30 trials across 3 conditions (A, B, and C); using an LSS approach, you'd fit 30 different models, each containing four regressors (one for the single trial, one for all (other) trials of condition A, one for all (other) trials of condition B, and one for all (other) trials of condition C). The apparent upside of this is that it strongly reduces the collinearity of trials close in time, which in turn makes the trial parameters more efficient to estimate.
<div class='alert alert-info'>
<b>ToThink</b> (1 point): Suppose my experiment contains 90 stimuli which all belong to their own condition (i.e., there are 90 conditions). Explain why LSS provides no improvement over LSA in this case.
</div>
YOUR ANSWER HERE
We'll show this for our example data. It's a bit complicated (and not necessarily the best/fastest/clearest way), but the comments explain each step. Essentially, for each trial, we extract that trial's regressor from a standard LSA design and, for each condition, create a single regressor by summing all other single-trial regressors from that condition together.
```
# First, we'll make a standard LSA design matrix
lsa_dm = make_first_level_design_matrix(
frame_times=t_fmri, # we defined this earlier for interpolation!
events=events_sim,
hrf_model='glover',
drift_model=None # assume data is already high-pass filtered
)
# Then, we will loop across trials, making a single GLM
lss_dms = [] # we'll store the design matrices here
# Do not include last column, the intercept, in the loop
for i, col in enumerate(lsa_dm.columns[:-1]):
# Extract the single-trial predictor
single_trial_reg = lsa_dm.loc[:, col]
# Now, we need to create a predictor per condition
# (one for A, one for B). We'll store these in "other_regs"
other_regs = []
# Loop across unique conditions ("A" and "B")
for con in np.unique(conditions):
# Which columns belong to the current condition?
idx = con == np.array(conditions)
# Make sure NOT to include the trial we're currently estimating!
idx[i] = False
# Also, exclude the intercept (last column)
idx = np.append(idx, False)
# Now, extract all N-1 regressors
con_regs = lsa_dm.loc[:, idx]
# And sum them together!
# This creates a single predictor for the current
# condition
con_reg_all = con_regs.sum(axis=1)
# Save for later
other_regs.append(con_reg_all)
# Concatenate the condition regressors (one for A, one for B)
other_regs = pd.concat(other_regs, axis=1)
# Concatenate the single-trial regressor and two condition regressors
this_dm = pd.concat((single_trial_reg, other_regs), axis=1)
# Add back an intercept!
this_dm.loc[:, 'intercept'] = 1
# Give it sensible column names
this_dm.columns = ['trial_to_estimate'] + list(set(conditions)) + ['intercept']
# Save for later
lss_dms.append(this_dm)
print("We have created %i design matrices!" % len(lss_dms))
```
Alright, now let's check out the first five design matrices, which should estimate the first five trials and contain 4 regressors each (one for the single trial, two for the separate conditions, and one for the intercept):
```
fig, axes = plt.subplots(ncols=5, figsize=(15, 10))
for i, ax in enumerate(axes.flatten()):
plot_design_matrix(lss_dms[i], ax=ax)
ax.set_title("Design for trial %i" % (i+1), fontsize=20)
plt.tight_layout()
plt.show()
```
<div class='alert alert-warning'>
<b>ToDo</b> (optional; 1 bonus point): Can you implement an LSS approach to estimate our patterns on the real data? You can reuse the <tt>flm_todo</tt> you created earlier; the only thing you need to change each time is the design matrix. Because we have 40 trials, you need to fit 40 different models (which takes a while). Note that our experimental design does not necessarily have discrete categories, so your LSS design matrices should only have 3 columns: one for the trial to estimate, one for all other trials, and one for the intercept. After fitting each model, compute the trial-against-baseline contrast for the single trial and save the parameter ("beta") map. Then, after the loop, create the same pattern matrix as the previous ToDo, which should also have the same shape, but name it this time <tt>R_todo_lss</tt>. Note, this is a <em>very</em> hard ToDo, but a great way to test your programming skills :-)
</div>
```
''' Implement your ToDo here. Note that we already created the LSA design matrix for you. '''
func_img = nib.load(func)
n_vol = func_img.shape[-1]
lsa_dm = make_first_level_design_matrix(
frame_times=np.linspace(0, n_vol * 0.7, num=n_vol, endpoint=False),
events=events_df_filt,
drift_model=None
)
# YOUR CODE HERE
raise NotImplementedError()
''' Tests the above ToDo. '''
from niedu.tests.nipa.week_1 import test_lss
test_lss(R_todo_lss, func, flm_todo, lsa_dm, confs_df_filt)
```
<div class='alert alert-success'>
<b>Tip</b>: Programming your own pattern estimation pipeline allows you to be very flexible and is a great way to practice your programming skills, but if you want a more "pre-packaged" tool, I recommend the <a href="https://nibetaseries.readthedocs.io/en/stable/">nibetaseries</a> package. The package's name is derived from a specific analysis technique called "beta-series correlation", which is a type of analysis that allows for resting-state like connectivity analyses of task-based fMRI data (which we won't discuss in this course). For this technique, you need to estimate single-trial activity patterns — just like we need to do for pattern analyses! I've used this package to estimate patterns for pattern analysis and I highly recommend it!
</div>
### Temporal uncorrelation
Another method to deal with trial-by-trial correlations is the "uncorrelation" method by [Soch and colleagues (2020)](https://www.sciencedirect.com/science/article/pii/S1053811919310407). As opposed to the LSS method, the uncorrelation approach takes care of the correlation structure in the data in a post-hoc manner. It does so, in essence, by "removing" the correlations in the data that are due to the correlations in the design in a way that is similar to what prewhitening does in generalized least squares.
Formally, the "uncorrelated" patterns ($R_{\mathrm{unc}}$) are estimated by (matrix) multiplying the square root ($^{\frac{1}{2}}$) of covariance matrix of the LSA design matrix ($X^{T}X$) with the patterns ($R$):
\begin{align}
R_{\mathrm{unc}} = (X^{T}X)^{\frac{1}{2}}R
\end{align}
Here, $(X^{T}X)^{\frac{1}{2}}$ represents the "whitening" matrix which uncorrelates the patterns. Let's implement this in code. Note that we can use the `sqrtm` function from the `scipy.linalg` package to take the square root of a matrix:
```
from scipy.linalg import sqrtm
# Design matrix
X = dm_todo.to_numpy()
# Whiten the patterns with the matrix square root of the design covariance
R_unc = sqrtm(X.T @ X) @ R_todo
```
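As a quick sanity check on `sqrtm` (using made-up data): the matrix square root of a symmetric positive-definite matrix, multiplied by itself, recovers the original matrix, which is what makes it act as a "whitening" factor:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 5))
cov = X.T @ X                  # symmetric positive-definite (5 x 5)

W = sqrtm(cov)                 # "whitening" matrix used above
# sqrtm may return a complex array; for SPD input the
# imaginary part is numerically zero
W = np.real(W)

print(np.allclose(W @ W, cov))  # should print True
```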
This uncorrelation technique is something we'll see again in week 3 when we'll talk about multivariate noise normalization!
Alright, that was it for this lab! We have covered the basics of experimental design and pattern estimation techniques for fMRI data. Note that there are many other (more advanced) things related to pattern estimation that we haven't discussed, such as standardization of patterns, multivariate noise normalization, [hyperalignment](https://www.sciencedirect.com/science/article/pii/S0896627311007811), etc. etc. Some of these topics will be discussed in week 2 (decoding) or week 3 (RSA).
Deep Learning
=============
Assignment 3
------------
Previously, in `2_fullyconnected.ipynb`, you trained a logistic regression and a neural network model.
The goal of this assignment is to explore regularization techniques.
```
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle
```
First, let's reload the data we generated in `1_notmnist.ipynb`.
```
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
save = pickle.load(f)
train_dataset = save['train_dataset']
train_labels = save['train_labels']
valid_dataset = save['valid_dataset']
valid_labels = save['valid_labels']
test_dataset = save['test_dataset']
test_labels = save['test_labels']
del save # hint to help gc free up memory
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
```
Reformat into a shape that's more adapted to the models we're going to train:
- data as a flat matrix,
- labels as float 1-hot encodings.
```
image_size = 28
num_labels = 10
def reformat(dataset, labels):
dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
# Map 1 to [0.0, 1.0, 0.0 ...], 2 to [0.0, 0.0, 1.0 ...]
labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)
def accuracy(predictions, labels):
return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
/ predictions.shape[0])
```
---
Problem 1
---------
Introduce and tune L2 regularization for both the logistic and neural network models. Remember that L2 amounts to adding a penalty on the norm of the weights to the loss. In TensorFlow, you can compute the L2 loss for a tensor `t` using `nn.l2_loss(t)`. The right amount of regularization should improve your validation / test accuracy.
---
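As a framework-agnostic sketch of what the penalty does (NumPy, with made-up data; in TensorFlow the penalty term would simply be `beta * tf.nn.l2_loss(weights)` added to the loss):

```python
import numpy as np

def l2_regularized_loss(w, X, y, beta):
    """Mean squared error plus an L2 penalty on the weights."""
    data_loss = np.mean((X @ w - y) ** 2)
    # tf.nn.l2_loss(t) computes sum(t ** 2) / 2, so we mirror that here
    penalty = beta * 0.5 * np.sum(w ** 2)
    return data_loss + penalty

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = rng.normal(size=32)
w = rng.normal(size=4)

# For the same weights, a larger beta means a larger total loss
print(l2_regularized_loss(w, X, y, beta=1e-3) < l2_regularized_loss(w, X, y, beta=1e-1))
```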
---
Problem 2
---------
Let's demonstrate an extreme case of overfitting. Restrict your training data to just a few batches. What happens?
---
---
Problem 3
---------
Introduce Dropout on the hidden layer of the neural network. Remember: Dropout should only be introduced during training, not evaluation; otherwise your evaluation results would be stochastic as well. TensorFlow provides `nn.dropout()` for that, but you have to make sure it's only inserted during training.
What happens to our extreme overfitting case?
---
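A minimal NumPy sketch of "inverted" dropout, which is the behavior `tf.nn.dropout` implements: active only during training, with surviving activations rescaled by `1 / keep_prob` so the expected value is unchanged, and a plain identity at evaluation time:

```python
import numpy as np

def dropout(h, keep_prob, training, rng):
    """Inverted dropout: zero units with probability (1 - keep_prob)
    and rescale survivors by 1 / keep_prob; identity at evaluation."""
    if not training:
        return h
    mask = rng.random(h.shape) < keep_prob
    return h * mask / keep_prob

rng = np.random.default_rng(0)
h = np.ones((4, 8))
train_out = dropout(h, keep_prob=0.5, training=True, rng=rng)
eval_out = dropout(h, keep_prob=0.5, training=False, rng=rng)

print(np.array_equal(eval_out, h))              # no-op at evaluation
print(set(np.unique(train_out)) <= {0.0, 2.0})  # survivors scaled to 2.0
```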
---
Problem 4
---------
Try to get the best performance you can using a multi-layer model! The best reported test accuracy using a deep network is [97.1%](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html?showComment=1391023266211#c8758720086795711595).
One avenue you can explore is to add multiple layers.
Another one is to use learning rate decay:
global_step = tf.Variable(0)  # count the number of steps taken
learning_rate = tf.train.exponential_decay(0.5, global_step, ...)
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss, global_step=global_step)
---
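The decay schedule above can be reproduced in plain NumPy; in its default (non-staircase) mode, `tf.train.exponential_decay` computes `initial_rate * decay_rate ** (step / decay_steps)`:

```python
import numpy as np

def exponential_decay(initial_rate, step, decay_steps, decay_rate):
    # Same formula tf.train.exponential_decay uses (non-staircase mode)
    return initial_rate * decay_rate ** (step / decay_steps)

steps = np.arange(0, 4000, 1000)
rates = exponential_decay(0.5, steps, decay_steps=1000, decay_rate=0.9)
print(rates)  # starts at 0.5, shrinks by a factor of 0.9 every 1000 steps
```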
# Verifying the hypotheses related to the mean review score
## Verifying hypotheses 0, 1, and 5
Hypothesis 0: if an order is canceled, its review score is lower \
Hypothesis 1: if an order is delivered late, its review score is lower \
Hypothesis 5: if an order is late, its review score is below three
### Defining the dataframe
```
from pyspark.sql import SparkSession, functions as F
spark = SparkSession.builder.getOrCreate()
df_reviews = spark.read \
.option('escape', '\"') \
.csv('./dataset/olist_order_reviews_dataset.csv', header=True, multiLine=True, inferSchema=True)
df_orders = spark.read \
.option('escape', '\"') \
.csv('./dataset/olist_orders_dataset.csv', header=True, multiLine=True, inferSchema=True)
df = df_orders.join(df_reviews, df_orders.order_id == df_reviews.order_id)
df.printSchema()
```
### Computing the overall mean review score
```
df.select(F.mean('review_score')).show()
```
### Computing the mean score of canceled orders
```
df_canceled = df.filter(F.col('order_status')=='canceled')
df_canceled.select(F.mean('review_score')).show()
```
### Computing the mean score of late orders
```
df_late = df.filter(F.col('order_delivered_customer_date') > F.col('order_estimated_delivery_date'))
df_late.select(F.mean('review_score')).show()
```
### Computing the number of late orders with a score of 3 or higher (hypothesis 5)
```
print("The number of late orders with a score >= 3 is", df_late.filter(F.col('review_score')>=3).count())
print("Percentage of late orders with a score >= 3:",
      round(df_late.filter(F.col('review_score')>=3).count() / df_late.count() * 100, 2))
```
## Tests
```
import pandas as pd
import matplotlib.pyplot as plt
df_new = df.groupBy(F.month('order_purchase_timestamp').alias('month'),F.year('order_purchase_timestamp') \
.alias('year')).count() \
.orderBy(F.col('year'),F.col('month'))
from pyspark.sql import functions as sf
df_new = df_new.withColumn('month_year',
sf.concat(sf.col('month'),sf.lit('/'), sf.col('year')))
df_new = df_new.selectExpr('month', 'year', 'count as demand', 'month_year')
df_new.show()
df_new.show(50)
df_new.toPandas().plot(x ='month_year', y='demand', kind = 'line')
df_new_2 = df.select(F.month('order_purchase_timestamp').alias('month'),F.year('order_purchase_timestamp').alias('year'),F.col('review_score')) \
.orderBy(F.col('year'),F.col('month'))
df_new_2.show()
from pyspark.sql import functions as sf
df_new_2 = df_new_2.withColumn('month_year',
sf.concat(sf.col('month'),sf.lit('/'), sf.col('year')))
df_new_2.show()
import seaborn as sns
import pandas as pd
sns.set(style="whitegrid")
ax = sns.boxplot(x='month_year', y='review_score',color="g", data=df_new_2.toPandas())
ax2 = ax.twinx()
ax2 = sns.lineplot(x='month_year', y='demand',color='.0', label='demand', data=df_new.toPandas())
```
## Conclusions
### Hypothesis 0 is valid, since the mean score across all orders is higher than the mean score of canceled orders
### Hypothesis 1 is valid, since the mean score across all orders is higher than the mean score of late orders
### Hypothesis 5 is invalid, since there are late orders with a score of 3 or higher: 3572 orders have these characteristics
# Learning Embeddings with Continuous Bag of Words (CBOW)
## Imports
```
import os
from argparse import Namespace
from collections import Counter
import json
import re
import string
import numpy as np
import pandas as pd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from tqdm import tqdm_notebook
import os
dir_base='/Users/thchang/Documents/dev/git/ml-tools/ml/nlp/pytorch/PyTorchNLPBook'
dir_chapter=f'{dir_base}/chapters/chapter_5'
dir_data=f'{dir_base}/data'
os.chdir(dir_chapter)
print(f'Loaded successfully: {os.getcwd()}')
```
## Data Vectorization classes
### The Vocabulary
```
class Vocabulary(object):
"""Class to process text and extract vocabulary for mapping"""
def __init__(self, token_to_idx=None, mask_token="<MASK>", add_unk=True, unk_token="<UNK>"):
"""
Args:
token_to_idx (dict): a pre-existing map of tokens to indices
mask_token (str): the MASK token to add into the Vocabulary; indicates
a position that will not be used in updating the model's parameters
add_unk (bool): a flag that indicates whether to add the UNK token
unk_token (str): the UNK token to add into the Vocabulary
"""
if token_to_idx is None:
token_to_idx = {}
self._token_to_idx = token_to_idx
self._idx_to_token = {idx: token
for token, idx in self._token_to_idx.items()}
self._add_unk = add_unk
self._unk_token = unk_token
self._mask_token = mask_token
self.mask_index = self.add_token(self._mask_token)
self.unk_index = -1
if add_unk:
self.unk_index = self.add_token(unk_token)
def to_serializable(self):
""" returns a dictionary that can be serialized """
return {'token_to_idx': self._token_to_idx,
'add_unk': self._add_unk,
'unk_token': self._unk_token,
'mask_token': self._mask_token}
@classmethod
def from_serializable(cls, contents):
""" instantiates the Vocabulary from a serialized dictionary """
return cls(**contents)
def add_token(self, token):
"""Update mapping dicts based on the token.
Args:
token (str): the item to add into the Vocabulary
Returns:
index (int): the integer corresponding to the token
"""
if token in self._token_to_idx:
index = self._token_to_idx[token]
else:
index = len(self._token_to_idx)
self._token_to_idx[token] = index
self._idx_to_token[index] = token
return index
def add_many(self, tokens):
"""Add a list of tokens into the Vocabulary
Args:
tokens (list): a list of string tokens
Returns:
indices (list): a list of indices corresponding to the tokens
"""
return [self.add_token(token) for token in tokens]
def lookup_token(self, token):
"""Retrieve the index associated with the token
or the UNK index if token isn't present.
Args:
token (str): the token to look up
Returns:
index (int): the index corresponding to the token
Notes:
`unk_index` needs to be >=0 (having been added into the Vocabulary)
for the UNK functionality
"""
if self.unk_index >= 0:
return self._token_to_idx.get(token, self.unk_index)
else:
return self._token_to_idx[token]
def lookup_index(self, index):
"""Return the token associated with the index
Args:
index (int): the index to look up
Returns:
token (str): the token corresponding to the index
Raises:
KeyError: if the index is not in the Vocabulary
"""
if index not in self._idx_to_token:
raise KeyError("the index (%d) is not in the Vocabulary" % index)
return self._idx_to_token[index]
def __str__(self):
return "<Vocabulary(size=%d)>" % len(self)
def __len__(self):
return len(self._token_to_idx)
class CBOWVectorizer(object):
""" The Vectorizer which coordinates the Vocabularies and puts them to use"""
def __init__(self, cbow_vocab):
"""
Args:
cbow_vocab (Vocabulary): maps words to integers
"""
self.cbow_vocab = cbow_vocab
def vectorize(self, context, vector_length=-1):
"""
Args:
context (str): the string of words separated by a space
vector_length (int): an argument for forcing the length of index vector
"""
indices = [self.cbow_vocab.lookup_token(token) for token in context.split(' ')]
if vector_length < 0:
vector_length = len(indices)
out_vector = np.zeros(vector_length, dtype=np.int64)
out_vector[:len(indices)] = indices
out_vector[len(indices):] = self.cbow_vocab.mask_index
return out_vector
@classmethod
def from_dataframe(cls, cbow_df):
"""Instantiate the vectorizer from the dataset dataframe
Args:
cbow_df (pandas.DataFrame): the target dataset
Returns:
an instance of the CBOWVectorizer
"""
cbow_vocab = Vocabulary()
for index, row in cbow_df.iterrows():
for token in row.context.split(' '):
cbow_vocab.add_token(token)
cbow_vocab.add_token(row.target)
return cls(cbow_vocab)
@classmethod
def from_serializable(cls, contents):
cbow_vocab = \
Vocabulary.from_serializable(contents['cbow_vocab'])
return cls(cbow_vocab=cbow_vocab)
def to_serializable(self):
return {'cbow_vocab': self.cbow_vocab.to_serializable()}
```
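The mechanics of `vectorize` above — look up each context token, then pad the remainder of a fixed-length vector with the mask index — can be sketched standalone with a toy vocabulary (hypothetical tokens, not the Frankenstein data):

```python
import numpy as np

# Toy token-to-index map; index 0 plays the role of the <MASK> token,
# matching the Vocabulary class, which adds the mask token first.
token_to_idx = {"<MASK>": 0, "the": 1, "monster": 2, "spoke": 3}
mask_index = token_to_idx["<MASK>"]

def vectorize(context, vector_length):
    # Look up each token, then pad the tail with the mask index,
    # mirroring CBOWVectorizer.vectorize
    indices = [token_to_idx[token] for token in context.split(" ")]
    out = np.zeros(vector_length, dtype=np.int64)
    out[:len(indices)] = indices
    out[len(indices):] = mask_index
    return out

vec = vectorize("the monster spoke", vector_length=5)
print(vec)  # the two trailing positions are mask padding
```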
### The Dataset
```
class CBOWDataset(Dataset):
def __init__(self, cbow_df, vectorizer):
"""
Args:
cbow_df (pandas.DataFrame): the dataset
            vectorizer (CBOWVectorizer): vectorizer instantiated from the dataset
"""
self.cbow_df = cbow_df
self._vectorizer = vectorizer
measure_len = lambda context: len(context.split(" "))
self._max_seq_length = max(map(measure_len, cbow_df.context))
self.train_df = self.cbow_df[self.cbow_df.split=='train']
self.train_size = len(self.train_df)
self.val_df = self.cbow_df[self.cbow_df.split=='val']
self.validation_size = len(self.val_df)
self.test_df = self.cbow_df[self.cbow_df.split=='test']
self.test_size = len(self.test_df)
self._lookup_dict = {'train': (self.train_df, self.train_size),
'val': (self.val_df, self.validation_size),
'test': (self.test_df, self.test_size)}
self.set_split('train')
@classmethod
def load_dataset_and_make_vectorizer(cls, cbow_csv):
"""Load dataset and make a new vectorizer from scratch
Args:
cbow_csv (str): location of the dataset
Returns:
an instance of CBOWDataset
"""
cbow_df = pd.read_csv(cbow_csv)
train_cbow_df = cbow_df[cbow_df.split=='train']
return cls(cbow_df, CBOWVectorizer.from_dataframe(train_cbow_df))
@classmethod
def load_dataset_and_load_vectorizer(cls, cbow_csv, vectorizer_filepath):
"""Load dataset and the corresponding vectorizer.
        Used when the vectorizer has been cached for re-use.
Args:
cbow_csv (str): location of the dataset
vectorizer_filepath (str): location of the saved vectorizer
Returns:
an instance of CBOWDataset
"""
cbow_df = pd.read_csv(cbow_csv)
vectorizer = cls.load_vectorizer_only(vectorizer_filepath)
return cls(cbow_df, vectorizer)
@staticmethod
def load_vectorizer_only(vectorizer_filepath):
"""a static method for loading the vectorizer from file
Args:
vectorizer_filepath (str): the location of the serialized vectorizer
Returns:
an instance of CBOWVectorizer
"""
with open(vectorizer_filepath) as fp:
return CBOWVectorizer.from_serializable(json.load(fp))
def save_vectorizer(self, vectorizer_filepath):
"""saves the vectorizer to disk using json
Args:
vectorizer_filepath (str): the location to save the vectorizer
"""
with open(vectorizer_filepath, "w") as fp:
json.dump(self._vectorizer.to_serializable(), fp)
def get_vectorizer(self):
""" returns the vectorizer """
return self._vectorizer
def set_split(self, split="train"):
""" selects the splits in the dataset using a column in the dataframe """
self._target_split = split
self._target_df, self._target_size = self._lookup_dict[split]
def __len__(self):
return self._target_size
def __getitem__(self, index):
"""the primary entry point method for PyTorch datasets
Args:
index (int): the index to the data point
Returns:
a dictionary holding the data point's features (x_data) and label (y_target)
"""
row = self._target_df.iloc[index]
context_vector = \
self._vectorizer.vectorize(row.context, self._max_seq_length)
target_index = self._vectorizer.cbow_vocab.lookup_token(row.target)
return {'x_data': context_vector,
'y_target': target_index}
def get_num_batches(self, batch_size):
"""Given a batch size, return the number of batches in the dataset
Args:
batch_size (int)
Returns:
number of batches in the dataset
"""
return len(self) // batch_size
def generate_batches(dataset, batch_size, shuffle=True,
drop_last=True, device="cpu"):
"""
    A generator function which wraps the PyTorch DataLoader. It will
    ensure each tensor is on the right device.
"""
dataloader = DataLoader(dataset=dataset, batch_size=batch_size,
shuffle=shuffle, drop_last=drop_last)
for data_dict in dataloader:
out_data_dict = {}
for name, tensor in data_dict.items():
out_data_dict[name] = data_dict[name].to(device)
yield out_data_dict
```
## The Model: CBOW
```
class CBOWClassifier(nn.Module): # Simplified cbow Model
def __init__(self, vocabulary_size, embedding_size, padding_idx=0):
"""
Args:
vocabulary_size (int): number of vocabulary items, controls the
number of embeddings and prediction vector size
embedding_size (int): size of the embeddings
padding_idx (int): default 0; Embedding will not use this index
"""
super(CBOWClassifier, self).__init__()
self.embedding = nn.Embedding(num_embeddings=vocabulary_size,
embedding_dim=embedding_size,
padding_idx=padding_idx)
self.fc1 = nn.Linear(in_features=embedding_size,
out_features=vocabulary_size)
def forward(self, x_in, apply_softmax=False):
"""The forward pass of the classifier
Args:
x_in (torch.Tensor): an input data tensor.
x_in.shape should be (batch, input_dim)
apply_softmax (bool): a flag for the softmax activation
should be false if used with the Cross Entropy losses
Returns:
the resulting tensor. tensor.shape should be (batch, output_dim)
"""
        # apply dropout only during training; F.dropout defaults to training=True,
        # which would otherwise drop units at evaluation time too
        x_embedded_sum = F.dropout(self.embedding(x_in).sum(dim=1),
                                   p=0.3, training=self.training)
y_out = self.fc1(x_embedded_sum)
if apply_softmax:
y_out = F.softmax(y_out, dim=1)
return y_out
```
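Shape-wise, the forward pass above is an embedding lookup, a sum over the context positions, and a linear projection (dropout omitted here for clarity). A small numpy sketch with made-up sizes makes that explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_size = 6, 4

# toy stand-ins for the nn.Embedding and nn.Linear weights (hypothetical sizes)
embedding = rng.normal(size=(vocab_size, embed_size))
W = rng.normal(size=(vocab_size, embed_size))
b = np.zeros(vocab_size)

x_in = np.array([[1, 2, 3], [4, 5, 0]])   # (batch, context_len) token indices
x_embedded = embedding[x_in]              # (batch, context_len, embed_size)
x_summed = x_embedded.sum(axis=1)         # (batch, embed_size): the "bag" of words
y_out = x_summed @ W.T + b                # (batch, vocab_size) logits

print(y_out.shape)  # (2, 6)
```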
## Training Routine
### Helper functions
```
def make_train_state(args):
return {'stop_early': False,
'early_stopping_step': 0,
'early_stopping_best_val': 1e8,
'learning_rate': args.learning_rate,
'epoch_index': 0,
'train_loss': [],
'train_acc': [],
'val_loss': [],
'val_acc': [],
'test_loss': -1,
'test_acc': -1,
'model_filename': args.model_state_file}
def update_train_state(args, model, train_state):
"""Handle the training state updates.
Components:
- Early Stopping: Prevent overfitting.
- Model Checkpoint: Model is saved if the model is better
:param args: main arguments
:param model: model to train
:param train_state: a dictionary representing the training state values
:returns:
a new train_state
"""
# Save one model at least
if train_state['epoch_index'] == 0:
torch.save(model.state_dict(), train_state['model_filename'])
train_state['stop_early'] = False
# Save model if performance improved
elif train_state['epoch_index'] >= 1:
loss_tm1, loss_t = train_state['val_loss'][-2:]
# If loss worsened
if loss_t >= train_state['early_stopping_best_val']:
# Update step
train_state['early_stopping_step'] += 1
# Loss decreased
else:
# Save the best model
if loss_t < train_state['early_stopping_best_val']:
torch.save(model.state_dict(), train_state['model_filename'])
# Reset early stopping step
train_state['early_stopping_step'] = 0
# Stop early ?
train_state['stop_early'] = \
train_state['early_stopping_step'] >= args.early_stopping_criteria
return train_state
def compute_accuracy(y_pred, y_target):
_, y_pred_indices = y_pred.max(dim=1)
n_correct = torch.eq(y_pred_indices, y_target).sum().item()
return n_correct / len(y_pred_indices) * 100
```
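The `running_loss += (loss_t - running_loss) / (batch_index + 1)` update used in the training loop is an incremental mean: after k batches it equals the plain average of the first k values. A quick check with hypothetical losses:

```python
import numpy as np

losses = [0.9, 0.7, 0.8, 0.4]  # hypothetical per-batch loss values

running_loss = 0.0
for batch_index, loss_t in enumerate(losses):
    # same update rule as the training loop
    running_loss += (loss_t - running_loss) / (batch_index + 1)

print(running_loss)     # both are the same value,
print(np.mean(losses))  # approximately 0.7
```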
#### general utilities
```
def set_seed_everywhere(seed, cuda):
np.random.seed(seed)
torch.manual_seed(seed)
if cuda:
torch.cuda.manual_seed_all(seed)
def handle_dirs(dirpath):
if not os.path.exists(dirpath):
os.makedirs(dirpath)
```
### Settings and some prep work
```
from argparse import Namespace

args = Namespace(
# Data and Path information
cbow_csv=f"{dir_data}/books/frankenstein_with_splits.csv",
vectorizer_file="vectorizer.json",
model_state_file="model.pth",
save_dir="model_storage/ch5/cbow",
# Model hyper parameters
embedding_size=50,
# Training hyper parameters
seed=1337,
num_epochs=100,
learning_rate=0.0001,
batch_size=32,
early_stopping_criteria=5,
# Runtime options
cuda=True,
catch_keyboard_interrupt=True,
reload_from_files=False,
expand_filepaths_to_save_dir=True
)
if args.expand_filepaths_to_save_dir:
args.vectorizer_file = os.path.join(args.save_dir,
args.vectorizer_file)
args.model_state_file = os.path.join(args.save_dir,
args.model_state_file)
print("Expanded filepaths: ")
print("\t{}".format(args.vectorizer_file))
print("\t{}".format(args.model_state_file))
# Check CUDA
if not torch.cuda.is_available():
args.cuda = False
args.device = torch.device("cuda" if args.cuda else "cpu")
print("Using CUDA: {}".format(args.cuda))
# Set seed for reproducibility
set_seed_everywhere(args.seed, args.cuda)
# handle dirs
handle_dirs(args.save_dir)
```
### Initializations
```
if args.reload_from_files:
print("Loading dataset and loading vectorizer")
dataset = CBOWDataset.load_dataset_and_load_vectorizer(args.cbow_csv,
args.vectorizer_file)
else:
print("Loading dataset and creating vectorizer")
dataset = CBOWDataset.load_dataset_and_make_vectorizer(args.cbow_csv)
dataset.save_vectorizer(args.vectorizer_file)
vectorizer = dataset.get_vectorizer()
classifier = CBOWClassifier(vocabulary_size=len(vectorizer.cbow_vocab),
embedding_size=args.embedding_size)
```
### Training loop
```
classifier = classifier.to(args.device)
loss_func = nn.CrossEntropyLoss()
optimizer = optim.Adam(classifier.parameters(), lr=args.learning_rate)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer=optimizer,
mode='min', factor=0.5,
patience=1)
train_state = make_train_state(args)
epoch_bar = tqdm_notebook(desc='training routine',
total=args.num_epochs,
position=0)
dataset.set_split('train')
train_bar = tqdm_notebook(desc='split=train',
total=dataset.get_num_batches(args.batch_size),
position=1,
leave=True)
dataset.set_split('val')
val_bar = tqdm_notebook(desc='split=val',
total=dataset.get_num_batches(args.batch_size),
position=1,
leave=True)
try:
for epoch_index in range(args.num_epochs):
train_state['epoch_index'] = epoch_index
# Iterate over training dataset
# setup: batch generator, set loss and acc to 0, set train mode on
dataset.set_split('train')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.0
running_acc = 0.0
classifier.train()
for batch_index, batch_dict in enumerate(batch_generator):
# the training routine is these 5 steps:
# --------------------------------------
# step 1. zero the gradients
optimizer.zero_grad()
# step 2. compute the output
y_pred = classifier(x_in=batch_dict['x_data'])
# step 3. compute the loss
loss = loss_func(y_pred, batch_dict['y_target'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# step 4. use loss to produce gradients
loss.backward()
# step 5. use optimizer to take gradient step
optimizer.step()
# -----------------------------------------
# compute the accuracy
acc_t = compute_accuracy(y_pred, batch_dict['y_target'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
# update bar
train_bar.set_postfix(loss=running_loss, acc=running_acc,
epoch=epoch_index)
train_bar.update()
train_state['train_loss'].append(running_loss)
train_state['train_acc'].append(running_acc)
# Iterate over val dataset
# setup: batch generator, set loss and acc to 0; set eval mode on
dataset.set_split('val')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.
running_acc = 0.
classifier.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = classifier(x_in=batch_dict['x_data'])
# step 3. compute the loss
loss = loss_func(y_pred, batch_dict['y_target'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = compute_accuracy(y_pred, batch_dict['y_target'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
val_bar.set_postfix(loss=running_loss, acc=running_acc,
epoch=epoch_index)
val_bar.update()
train_state['val_loss'].append(running_loss)
train_state['val_acc'].append(running_acc)
train_state = update_train_state(args=args, model=classifier,
train_state=train_state)
scheduler.step(train_state['val_loss'][-1])
if train_state['stop_early']:
break
train_bar.n = 0
val_bar.n = 0
epoch_bar.update()
except KeyboardInterrupt:
print("Exiting loop")
# compute the loss & accuracy on the test set using the best available model
classifier.load_state_dict(torch.load(train_state['model_filename']))
classifier = classifier.to(args.device)
loss_func = nn.CrossEntropyLoss()
dataset.set_split('test')
batch_generator = generate_batches(dataset,
batch_size=args.batch_size,
device=args.device)
running_loss = 0.
running_acc = 0.
classifier.eval()
for batch_index, batch_dict in enumerate(batch_generator):
# compute the output
y_pred = classifier(x_in=batch_dict['x_data'])
# compute the loss
loss = loss_func(y_pred, batch_dict['y_target'])
loss_t = loss.item()
running_loss += (loss_t - running_loss) / (batch_index + 1)
# compute the accuracy
acc_t = compute_accuracy(y_pred, batch_dict['y_target'])
running_acc += (acc_t - running_acc) / (batch_index + 1)
train_state['test_loss'] = running_loss
train_state['test_acc'] = running_acc
print("Test loss: {};".format(train_state['test_loss']))
print("Test Accuracy: {}".format(train_state['test_acc']))
```
### Trained Embeddings
```
def pretty_print(results):
"""
Pretty print embedding results.
"""
for item in results:
print ("...[%.2f] - %s"%(item[1], item[0]))
def get_closest(target_word, word_to_idx, embeddings, n=5):
"""
Get the n closest
words to your word.
"""
# Calculate distances to all other words
word_embedding = embeddings[word_to_idx[target_word.lower()]]
distances = []
for word, index in word_to_idx.items():
if word == "<MASK>" or word == target_word:
continue
distances.append((word, torch.dist(word_embedding, embeddings[index])))
    # keep the n nearest neighbors (the target word itself was already skipped)
    results = sorted(distances, key=lambda x: x[1])[:n]
return results
word = input('Enter a word: ')
embeddings = classifier.embedding.weight.data
word_to_idx = vectorizer.cbow_vocab._token_to_idx
pretty_print(get_closest(word, word_to_idx, embeddings, n=5))
target_words = ['frankenstein', 'monster', 'science', 'sickness', 'lonely', 'happy']
embeddings = classifier.embedding.weight.data
word_to_idx = vectorizer.cbow_vocab._token_to_idx
for target_word in target_words:
print(f"======={target_word}=======")
if target_word not in word_to_idx:
print("Not in vocabulary")
continue
pretty_print(get_closest(target_word, word_to_idx, embeddings, n=5))
```
| github_jupyter |
# Policy compared to Covid-19 Case Rate
All the typical caveats apply...
- for example, testing increases through time, so the case rate is skewed through time
## Bring in df and aggregate to index by date for all of UK
```
import numpy as np
import pandas as pd
df = pd.read_csv('cases_analysis.csv')
df.drop(columns = ('Unnamed: 0'), inplace = True)
# make wrapped line plots of cases through time
# so to get uk overall, may need to do several aggregations and merge
# for now let's drop category data and will reattach later
#df = df[df.columns[0:13]]
#df = df.drop(df.iloc[:, 0:3], inplace = True, axis = 1)
#df1 = df.groupby('date').sum()
#df1.drop(df1.iloc[:, 1:2], inplace = True, axis=1)
#df1.drop(columns = ['days_since_first', 'case_rate_in_2_weeks', 'case_rate_on_day', ''])
# nope just gonna start from scratch wayyy easier that way
df = pd.read_csv('UK_cases.csv')
df.drop(columns=['Area name', 'Area code', 'Area type'], inplace=True)
df['Date'] = pd.to_datetime(df['Date'])
df = df.sort_values(by='Date')
# create daily lab confirmed cases
df['Cumulative lab-confirmed cases rate'] = df['Cumulative lab-confirmed cases rate'].astype(float)
df['Daily_lab_confirmed_case_rate'] = df['Cumulative lab-confirmed cases rate'].diff()
df = df.dropna()
# now plot date anf Daily lab_confirmed_case_rate
df = df.set_index('Date')
df['Daily_lab_confirmed_case_rate'].plot()
# create 7 day rolling ave
df['7_day_rolling_ave'] = df['Daily_lab_confirmed_case_rate'].rolling(window=7).mean()
df = df.dropna()
df['7_day_rolling_ave'] = df['7_day_rolling_ave'].astype(float)
df['7_day_rolling_ave'].plot()
# looks nice and smooth now
# now want to put on a zero scale i.e. from one day to the next what is the difference?
# does case rate go up or down? (on 7 day rolling ave)
# I want the % change not the difference, that way it's relative
df['pctchange_in_case_rate'] = df['7_day_rolling_ave'].pct_change()
df = df.dropna()# drop first row
df = df.drop(df.index[0])
# so - means case rate goes down, + means case rate goes up
# now want a % change from current day rolling ave to rolling ave in 2 weeks
df1 = df[['7_day_rolling_ave']]
df['pctchange_in_case_rate'].plot()
# + means cases going up
# then adapt this to a column to line up with 2 weeks in the future
# add date for 2 weeks in future
df1 = df1.reset_index()
from datetime import timedelta
df1['Date_in_2weeks'] = df1['Date'] + timedelta(days=14)
temp = df1[['Date', '7_day_rolling_ave']].copy()  # copy to avoid a SettingWithCopyWarning
temp.rename(columns = {'7_day_rolling_ave': '7_day_rolling_ave_in_2weeks'}, inplace=True)
df1 = pd.merge(df1, temp, how='left', left_on=('Date_in_2weeks'), right_on=('Date'))
#what's the % change in rolling ave case rate today and two weeks from now
# two weeks from now did case rate go up or down
df1 = df1.dropna()
# now % change in rolling average from current day to two weeks from now
#df1['pct_change_between_now_and_two_weeks'] = df1[['7_day_rolling_ave', '7_day_rolling_ave_in_2weeks']].pct_change(axis=1)
df1['pct_change_between_now_and_two_weeks'] = df1[['7_day_rolling_ave', '7_day_rolling_ave_in_2weeks']].apply(lambda row: (row.iloc[0]-row.iloc[1])/row.iloc[0]*100, axis=1)
# make % change + means increase in number of cases - means decrease in number of cases
df1['pct_change_between_now_and_two_weeks'] = df1['pct_change_between_now_and_two_weeks'] * -1
# making sure 2week movement worked
df1[df1['Date_x'] == '2020-03-16']
# so basically a massive increase in percent change increase at start and after april may start to get the first dips in
#covid cases i.e. if cases in two weeks were to be less than today there would be a negative - number
# percent change between day and two weeks from day
# so do covid cases go up or down based on policies
df1 = df1[['Date_x', 'Date_in_2weeks', '7_day_rolling_ave_in_2weeks','pct_change_between_now_and_two_weeks']]
df = df.reset_index()
df = pd.merge(df, df1, how='left', left_on=('Date'), right_on='Date_x')
df = df.drop(columns = ['Date_x'])
policy = pd.read_csv('Policy_for_analysis.csv')
policy = policy.drop(columns = ['Introduced by'])
policy['Date'] = pd.to_datetime(policy['Date'])
policy = policy.rename(columns = {'Date':'Date_1'})
df = pd.merge(df, policy, how='left', left_on=('Date'), right_on=('Date_1'))
#df = df.drop(columns = ['Date_y'])
# if want just set rows then needs to
policy2 = policy.drop_duplicates(['Date_1'])
df_left = pd.merge(df,policy2.drop_duplicates(),how='left', left_on=('Date'), right_on=('Date_1'))
df
df.isnull().sum()
# so the last two weeks, plus 118 days on which no policies were enacted
# there's also the issue that if two or more policies fall on the same day, only the first is counted
```
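As a sanity check on the sign convention used above (the lambda computes `(today - future) / today * 100`, and the `* -1` flips it so that positive means cases rose), here is the same arithmetic on made-up numbers:

```python
# Hypothetical rolling averages: 100 cases today, 120 cases two weeks later
today, in_two_weeks = 100.0, 120.0

pct = (today - in_two_weeks) / today * 100   # -20.0, the lambda's raw output
pct *= -1                                    # flip sign: +20.0 means cases rose 20%

print(pct)  # 20.0
```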
# What the XGBoost Model Showed
We'll use these F-values to determine the policies to examine, as they have the greatest correlation with predicting COVID cases.
1. Deemed significant by policy tracker
2. Social Distancing Measures
3. Testing, surveillance and contact tracing
4. Infection prevention and control
```
df.Category.unique()
# get ave % change in case rate for each catergory
SDM = df[df['Category']=='Social distancing measures']
TSC = df[df['Category']=='Testing, surveillance and contact tracing']
IPC = df[df['Category']=='Infection prevention and control']
# i.e. with such and such policy what is the percent change in confirmed COVID case rate
```
### Deemed significant by tracker
### Social Distancing Measures
```
print('Number of policies', len(SDM))
print(SDM['pct_change_between_now_and_two_weeks'].describe()) # so still an average increase in case rate
SDM['pct_change_between_now_and_two_weeks'].median() # but in general these policies lead to a decrease in case rate
# 23% reduction in case rate if a policy of this type is enacted
Temp = SDM[['Date','Policy','pct_change_between_now_and_two_weeks']].nsmallest(10, 'pct_change_between_now_and_two_weeks')
Policy_list = list(Temp['Policy'].values)
print(Policy_list)
print("key dates seem to be the 21st and 22nd as one group, 03/04 as another, and 13/14th")
# seem to be clusters of dates, could do some unsupervised clustering to get key dates of policy
Temp
```
### Testing, surveillance and contact tracing
```
print('Number of policies', len(TSC))
print(TSC['pct_change_between_now_and_two_weeks'].describe()) # so still an average increase in case rate #ave is higher
TSC['pct_change_between_now_and_two_weeks'].median() # but in general these policies lead to a decrease in case rate
# 26% reduction in case rate if a policy of this type is enacted
# so what were the actual policies?
# can check here to see if other policies on that day
# can also throw in policies from day before and after as resonable to assume influential / relevent
# Top ten policies correlated with drop in case rate
#TSC.nsmallest(10, 'pct_change_between_now_and_two_weeks')
Temp = TSC[['Date','Policy','pct_change_between_now_and_two_weeks']].nsmallest(10, 'pct_change_between_now_and_two_weeks')
Policy_list = list(Temp['Policy'].values)
print(Policy_list)
print("key dates seem to be the 21st and 22nd as one group, 03/04 as another, and 13/14th")
# seem to be clusters of dates, could do some unsupervised clustering to get key dates of policy
Temp
```
### Infection Prevention and Control
```
print('Number of policies', len(IPC))
print(IPC['pct_change_between_now_and_two_weeks'].describe()) # so still an average increase in case rate #ave is higher
IPC['pct_change_between_now_and_two_weeks'].median()
# fairly minimal reduction of 8%
Temp = IPC[['Date','Policy','pct_change_between_now_and_two_weeks']].nsmallest(10, 'pct_change_between_now_and_two_weeks')
Policy_list = list(Temp['Policy'].values)
print(Policy_list)
print("key dates seem to be the 21st and 22nd as one group, 03/04 as another, and 13/14th")
# seem to be clusters of dates, could do some unsupervised clustering to get key dates of policy
Temp
```
### What were the main dates in curbing the spread?
```
df_single = df.drop_duplicates(['Date'])
df_single[['Date', 'pct_change_between_now_and_two_weeks']].nsmallest(30, 'pct_change_between_now_and_two_weeks')
```
Key Dates of Decrease
1. 2020-05-19th to 27th
2. 2020-05-13th to 19th
3. 2020-05-03rd to 5th
4. 2020-06-15th to 19th
5. 2020-04-26th to 30th
```
# corresponding policies
df_sig1 = df[(df['Date'] >= '2020-05-13') & (df['Date'] <= '2020-05-27')]
print(df_sig1.shape)
df_sig2 = df[(df['Date'] >= '2020-05-03') & (df['Date'] <= '2020-05-05')]
print(df_sig2.shape)
df_sig3 = df[(df['Date'] >= '2020-06-15') & (df['Date'] <= '2020-06-19')]
print(df_sig3.shape)
df_sig4 = df[(df['Date'] >= '2020-06-26') & (df['Date'] <= '2020-06-30')]
print(df_sig4.shape)
df_sig = pd.concat([df_sig1, df_sig2, df_sig3, df_sig4], ignore_index=True)
SDM_main = SDM.nsmallest(10, ['pct_change_between_now_and_two_weeks'])
TSC_main = TSC.nsmallest(10, ['pct_change_between_now_and_two_weeks'])
IPC_main = IPC.nsmallest(10, ['pct_change_between_now_and_two_weeks'])
df_sig = pd.concat([df_sig, SDM_main, TSC_main, IPC_main])
df_sig.drop_duplicates(['Policy'], inplace = True)
df_sig.Category.value_counts()
```
### Is There a sig difference between these event types and the norm?
```
# now do ANOVA and compar to norm
# do ANOVA of three events and the remaining values
from scipy.stats import f_oneway
import seaborn as sns
# sketch: one-way ANOVA across the three policy categories
col = 'pct_change_between_now_and_two_weeks'
f_stat, p_value = f_oneway(SDM[col].dropna(), TSC[col].dropna(), IPC[col].dropna())
print('F =', f_stat, 'p =', p_value)
sns.boxplot(x='Category', y=col, data=df.dropna(subset=['Category']))
```
## Significant Policy List
```
# make csv, with date, policy, category, and order of importance
df_sig.to_csv('significant_policies.csv')
```
```
##### FRONT MATTER #####
import numpy as np
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
# USER INPUT path to polcurve file and wavelength range (note plot names will need updating by hand)
prefix = '/HabWorlds20192020Karalidi/VSTAR-DAP_data/NewRunsKennyPaper/ClearForest/clearupf_p3tp6um_'
start = 300
stop = 600
step = 100
####NOTE: if you change the step in PA you'll need to change the np.zeros size and the number you * the wl by for that dimension (both 44 right now)
##### DEFINED SUBROUTINES #####
######################################
def extract_data(filename):
infile = open(filename, "r")
data = []
for line in infile:
words = line.split()
data.append(words[:])
infile.close()
return data
###### BUILD AND CHECK CUBE ######
#loop through wavelengths to make the file name and 1D arrays of
listofiles = []
wllist = []
cube = np.zeros([36,7])
for wavelen in range(start,stop+step,step): #NOTE! The end always needs to be one step up from the last wl for stupid programmer reasons
slice = extract_data('./' + prefix + str(wavelen) + '_polcurvenew.dat')
a_slice = np.asarray(slice) #turn that into an array so you can append it to empty array
wlrow = [wavelen] * 36 # add a column with the wl
wlcol = np.transpose(np.asarray(wlrow))
    print('wavelength column', wlcol)
    print('dimensions of wl column', wlcol.shape)
    print('size of a_slice:', a_slice.shape)
    print('2nd col, 2nd row in a_slice', a_slice[1,1]) #good to here...
    a_slice = np.insert(a_slice, 0, wlcol, axis=1)
    print('NEW SLICE WITH WAVELENGTH:', a_slice)
    cube = np.dstack((cube, a_slice)) #append each slice into a cube
print('2nd col, 2nd row, 2nd slice in a 3D cube', cube[1,1,2]) #it's already 2D so this is 3D
print(cube.shape)
cube = cube.astype('float64')
print(cube[0,:,:]) #this is zero degrees PA for all wls (a slice)
print(cube[:,0,:]) #col of all PAs for all WL (slice)
print(cube[:,:,0]) # slice at a given WL (so typical polcurve output)
cube = np.delete(cube, 0, 2) #skimming the zeros off
print(cube[0,:,:]) #this is zero degrees PA for all wls (a slice)
print(cube[:,0,:]) #col of all PAs for all WL (slice)
print(cube[:,:,0]) # slice at a given WL (so typical polcurve output)
print('cube:', cube)
print(cube.shape)
print('PA?', cube[:,1,0].shape, cube[:,1,0]) #
print('wl?', cube[0,0,:].shape, cube[0,0,:]) #
print('slice of all col 7?', cube[:,6,:].shape, cube[:,6,:]) #think so, see ipad notes
################## WHICH COLUMN IN THE CUBE SHOULD BE PLOTTED (Z) ##################
X, Y = np.meshgrid(cube[0,0,:], cube[:,1,0])
Z = cube[:,6,:]
# Columns in polcurve (remember to use num bc a col is added with wl (so index not at zero):
# 1:iff 2:100*iq/ii 3:100*iu/ii 4:refl 5:pfpfs 6:ipfpfs (new 7: average theta)
##### MAKE 3D PLOT #####
%matplotlib widget
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='viridis', edgecolor='none')
ax.set_xlabel('WL nm')
ax.set_ylabel('PA')
ax.set_zlabel('ipfpfs')
plt.ticklabel_format(axis='z', style='sci', scilimits=(4,4))
ax.set_title('Cloud Forest'+' RPA2=1, 22.5 RPIX');
plt.show()
```
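The cube-building pattern above — seed with zeros, `np.dstack` each wavelength slice along axis 2, then delete the seed — can be sketched on tiny hypothetical arrays:

```python
import numpy as np

# Two hypothetical "polcurve" slices: 3 phase angles x 2 columns each
slice_300 = np.arange(6).reshape(3, 2)
slice_400 = np.arange(6).reshape(3, 2) + 10

cube = np.zeros((3, 2))             # seed, as in the script above
for s in (slice_300, slice_400):
    cube = np.dstack((cube, s))     # stack each wavelength slice along axis 2
cube = np.delete(cube, 0, 2)        # skim off the zero seed slice

print(cube.shape)       # (3, 2, 2): (phase angle, column, wavelength)
print(cube[:, :, 0])    # the slice at the first wavelength
```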
<h1>Logistic Regression</h1>
Notebook Goals
* Learn how to create a logistic regression model using scikit-learn
<h2> What are some advantages of logistic regression?</h2>
How do you create a logistic regression model using Scikit-Learn? The first thing to know is that, despite its name containing the word regression, logistic regression is a model used for classification. Classification models can be used for tasks like classifying flower species or image recognition. All of this, of course, depends on the availability and quality of your data. Logistic regression has some advantages, including:
* Model training and predictions are relatively fast
* No tuning is usually needed for logistic regression unless you want to regularize your model.
* Finally, it can perform well with a small number of observations.
<h2> Import Libraries</h2>
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
from IPython.display import Video
from matplotlib.ticker import FormatStrFormatter
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn import metrics
```
## Load the Dataset
The Iris dataset is one of the datasets scikit-learn ships with, so it does not require downloading any file from an external website. Here, however, we load a modified two-class version of it from a local CSV file.
```
df = pd.read_csv('data/modifiedIris2Classes.csv')
df.head()
```
<h2> Remove Missing or Impute Values </h2>
If you want to build models with your data, null values are (almost) never allowed. It is important to always see how many samples have missing values and for which columns.
```
# Look at the shape of the dataframe
df.shape
# There is a missing value in the Length column which is a feature
df.isnull().sum()
```
<h2> Train Test Split </h2>
```
X_train, X_test, y_train, y_test = train_test_split(df[['petal length (cm)']], df['target'], random_state=0)
```
<h2> Standardize the Data</h2>
Logistic Regression is affected by scale, so you need to scale the features in the data before using Logistic Regression. You can transform the data onto unit scale (mean = 0 and variance = 1) for better performance. Scikit-Learn's `StandardScaler` helps standardize the dataset’s features. Note that you fit on the training set and transform both the training and test sets.
```
scaler = StandardScaler()
# Fit on training set only.
scaler.fit(X_train)
# Apply transform to both the training set and the test set.
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
```
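What `StandardScaler` does under the hood is subtract the training-set mean and divide by the training-set standard deviation, and those same training statistics are reused for the test set. A numpy sketch with toy numbers (not the iris data):

```python
import numpy as np

X_train_toy = np.array([[1.0], [3.0], [5.0]])   # hypothetical 1-feature training set
X_test_toy = np.array([[3.0], [7.0]])

mu = X_train_toy.mean(axis=0)     # what scaler.fit(X_train) learns
sigma = X_train_toy.std(axis=0)   # population std, as StandardScaler uses

X_train_std = (X_train_toy - mu) / sigma
X_test_std = (X_test_toy - mu) / sigma   # training statistics, not the test set's

print(X_train_std.ravel())   # zero mean, unit variance
print(X_test_std.ravel())
```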
<h2>Logistic Regression</h2>
<b>Step 1:</b> Import the model you want to use
In sklearn, all machine learning models are implemented as Python classes
```
# This was already imported earlier in the notebook so commenting out
#from sklearn.linear_model import LogisticRegression
```
<b>Step 2:</b> Make an instance of the Model
This is where we can tune the hyperparameters of a model. For logistic regression you typically tune `C`, the inverse of regularization strength.
```
clf = LogisticRegression()
```
<b>Step 3:</b> Training the model on the data, storing the information learned from the data
The model is learning the relationship between X (the feature, petal length) and y (the labels: which species of iris).
```
clf.fit(X_train, y_train)
```
<b>Step 4:</b> Predict the labels of new data (new flowers)
Logistic regression also lets you see prediction probabilities in addition to a hard prediction, which some other classification algorithms that only return a class label do not.
```
# One observation's petal length after standardization
X_test[0].reshape(1,-1)
print('prediction', clf.predict(X_test[0].reshape(1,-1))[0])
print('probability', clf.predict_proba(X_test[0].reshape(1,-1)))
```
If this is unclear, let's visualize how logistic regression makes predictions by looking at our test data!
```
example_df = pd.DataFrame()
example_df.loc[:, 'petal length (cm)'] = X_test.reshape(-1)
example_df.loc[:, 'target'] = y_test.values
example_df['logistic_preds'] = pd.DataFrame(clf.predict_proba(X_test))[1]
example_df.head()
fig, ax = plt.subplots(nrows = 1, ncols = 1, figsize = (10,7));
virginicaFilter = example_df['target'] == 1
versicolorFilter = example_df['target'] == 0
ax.scatter(example_df.loc[virginicaFilter, 'petal length (cm)'].values,
example_df.loc[virginicaFilter, 'logistic_preds'].values,
color = 'g',
s = 60,
label = 'virginica')
ax.scatter(example_df.loc[versicolorFilter, 'petal length (cm)'].values,
example_df.loc[versicolorFilter, 'logistic_preds'].values,
color = 'b',
s = 60,
label = 'versicolor')
ax.axhline(y = .5, c = 'y')
ax.axhspan(.5, 1, alpha=0.05, color='green')
ax.axhspan(0, .4999, alpha=0.05, color='blue')
ax.text(0.5, .6, 'Classified as virginica', fontsize = 16)
ax.text(0.5, .4, 'Classified as versicolor', fontsize = 16)
ax.set_ylim(0,1)
ax.legend(loc = 'lower right', markerscale = 1.0, fontsize = 12)
ax.tick_params(labelsize = 18)
ax.set_xlabel('petal length (cm)', fontsize = 24)
ax.set_ylabel('probability of virginica', fontsize = 24)
ax.set_title('Logistic Regression Predictions', fontsize = 24)
fig.tight_layout()
```
<h2> Measuring Model Performance</h2>
While there are other ways of measuring model performance (precision, recall, F1 score, ROC curve, etc.), let's keep this simple and use accuracy as our metric.
To do this, we are going to see how the model performs on new data (the test set).
Accuracy is defined as the fraction of correct predictions:
correct predictions / total number of data points
```
score = clf.score(X_test, y_test)
print(score)
```
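`clf.score` is just that fraction of correct predictions; you can compute the same number by hand. A sketch with hypothetical labels and predictions:

```python
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]  # hypothetical model output

# Accuracy = correct predictions / total number of data points
correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 6 of 8 correct -> 0.75
```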
Accuracy is one metric, but it doesn't give much insight into what went wrong. Let's look at a confusion matrix.
```
cm = metrics.confusion_matrix(y_test, clf.predict(X_test))
plt.figure(figsize=(9,9))
sns.heatmap(cm, annot=True,
fmt=".0f",
linewidths=.5,
square = True,
cmap = 'Blues');
plt.ylabel('Actual label', fontsize = 17);
plt.xlabel('Predicted label', fontsize = 17);
plt.title('Accuracy Score: {}'.format(score), size = 17);
plt.tick_params(labelsize= 15)
```
<h2>What went wrong with the confusion matrix? It looks bad!</h2>
With some matplotlib versions (notably 3.1.1), seaborn heatmaps come out with the top and bottom rows cut in half. The extra `ylim` lines at the end of the next cell work around this.
```
cm = metrics.confusion_matrix(y_test, clf.predict(X_test))
plt.figure(figsize=(9,9))
sns.heatmap(cm, annot=True,
fmt=".0f",
linewidths=.5,
square = True,
cmap = 'Blues');
plt.ylabel('Actual label', fontsize = 17);
plt.xlabel('Predicted label', fontsize = 17);
plt.title('Accuracy Score: {}'.format(score), size = 17);
plt.tick_params(labelsize= 15)
# You can comment out the next 4 lines if you like
b, t = plt.ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
plt.ylim(b, t) # update the ylim(bottom, top) values
```
Let's look at the same information in a clearer, labeled table.
```
# ignore this code
modified_cm = []
for index, value in enumerate(cm):
    if index == 0:
        modified_cm.append(['TN = ' + str(value[0]), 'FP = ' + str(value[1])])
    if index == 1:
        modified_cm.append(['FN = ' + str(value[0]), 'TP = ' + str(value[1])])
plt.figure(figsize=(9,9))
sns.heatmap(cm, annot=np.array(modified_cm),
fmt="",
annot_kws={"size": 20},
linewidths=.5,
square = True,
cmap = 'Blues',
xticklabels = ['versicolor', 'virginica'],
yticklabels = ['versicolor', 'virginica'],
);
plt.ylabel('Actual label', fontsize = 17);
plt.xlabel('Predicted label', fontsize = 17);
plt.title('Accuracy Score: {:.3f}'.format(score), size = 17);
plt.tick_params(labelsize= 15)
# You can comment out the next 4 lines if you like
b, t = plt.ylim() # discover the values for bottom and top
b += 0.5 # Add 0.5 to the bottom
t -= 0.5 # Subtract 0.5 from the top
plt.ylim(b, t) # update the ylim(bottom, top) values
```
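The TN/FP/FN/TP cells above are exactly what precision, recall, and F1 are built from. A sketch computing them from a hypothetical 2x2 confusion matrix (rows = actual, columns = predicted, matching the layout above):

```python
# Hypothetical confusion matrix counts
tn, fp = 13, 1
fn, tp = 2, 12

precision = tp / (tp + fp)   # of predicted positives, how many were right
recall = tp / (tp + fn)      # of actual positives, how many we caught
f1 = 2 * precision * recall / (precision + recall)
accuracy = (tp + tn) / (tn + fp + fn + tp)

print(round(precision, 3), round(recall, 3), round(f1, 3), round(accuracy, 3))
# 0.923 0.857 0.889 0.893
```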
Accuracy only tells part of the story; the labeled confusion matrix above shows exactly where the model's mistakes are. One way to get a better score would be to include more features in the feature matrix.
## Common questions
<h3>What would happen if you changed the prediction threshold from .5 when picking the positive class?</h3>
By default, and with respect to the underlying assumptions of logistic regression, we predict a positive class when the probability of the class is greater than .5 and predict a negative class otherwise.
If you changed the prediction threshold from .5 to .2, you would predict more true positives but fewer true negatives. You can see this clearly using <a href="http://mfviz.com/binary-predictions/">this visual by Michael Freeman.</a>
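To see the threshold effect concretely: with predicted probabilities in hand you can classify at any cutoff you like, not just 0.5. A sketch on hypothetical probabilities (in sklearn you would threshold `clf.predict_proba(X_test)[:, 1]` the same way):

```python
probs = [0.95, 0.62, 0.48, 0.30, 0.25, 0.10]  # hypothetical P(positive) per sample

preds_default = [int(p > 0.5) for p in probs]  # threshold 0.5
preds_lowered = [int(p > 0.2) for p in probs]  # threshold 0.2: positives are predicted more often

print(sum(preds_default), sum(preds_lowered))  # 2 5
```

Lowering the cutoff converts some would-be negatives into positives, which is why true positives go up while true negatives go down.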
<h3>What is the effect of changing the hyperparameter C?</h3>
Let's look at the effect of increasing `C`; smaller values specify stronger regularization. The code below shows this for the Wisconsin breast cancer dataset, in an effort to mimic Michael Freeman's visualization.
See the following file for an animation of the effect of changing `C`.
```
#Video('imagesanimation/effectOfCLogisticRegression.mp4')
df = pd.read_csv('data/wisconsinBreastCancer.csv')
# Same code as earlier in the notebook, repeated here for clarity
# The rest of this section is just the code I used to make the animation above
col_names = ['worst_concave_points']
X = df[col_names].values.reshape(-1,1)
y = df['diagnosis']
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X,
y,
random_state = 0)
# Standardize Data
scaler = StandardScaler()
# Fit on training set only.
scaler.fit(X_train)
# Apply transform to both the training set and the test set.
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
for index, c in enumerate(np.linspace(-3, 3, num = 25)):
    c_value = 10**c
    c_value_str = "{0:0.3f}".format(c_value)
    # Keep in mind that the default penalty is l2, like we have for ridge regression
    logreg = LogisticRegression(C = c_value)
    logreg.fit(X_train, y_train)
    example_df = pd.DataFrame()
    example_df.loc[:, 'worst_concave_points'] = X_train.reshape(-1)
    example_df.loc[:, 'diagnosis'] = y_train.values
    example_df['logistic_preds'] = pd.DataFrame(logreg.predict_proba(X_train))[1]
    example_df = example_df.sort_values(['logistic_preds'])
    plt.scatter(example_df['worst_concave_points'], example_df['diagnosis'])
    plt.plot(example_df['worst_concave_points'], example_df['logistic_preds'].values, color='red')
    plt.ylabel('malignant (1) or benign (0)', fontsize = 13)
    plt.xlabel('worst_concave_points', fontsize = 13)
    plt.title("Logistic Regression (L2) C = " + str(c_value_str), fontsize = 15)
    plt.savefig('imagesanimation/' + 'initial' + str(index).zfill(4) + '.png', dpi = 100)
    plt.cla()
```
<h3>What is the effect of regularization on accuracy?</h3>
You can look at the video imagesanimation2/logisticRegularizationEffectAccuracy.mp4
```
# Same code was earlier in notebook, but here for clarity
col_names = ['worst_concave_points']
X = df[col_names].values.reshape(-1,1)
y = df['diagnosis']
# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X,
y,
random_state = 0)
# Standardize Data
scaler = StandardScaler()
# Fit on training set only.
scaler.fit(X_train)
# Apply transform to both the training set and the test set.
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
model_list, coef_list, c_value_list, accuracy_list, example_df_list = [], [], [], [], []
for index, c in enumerate(np.linspace(-3, 3, num = 25)):
    c_value = 10**c
    c_value_str = "{0:0.3f}".format(c_value)
    # Here we explicitly use the l1 penalty (the sklearn default is l2, as with ridge regression)
    logreg = LogisticRegression(C = c_value,
                                penalty = 'l1',
                                solver = 'saga',
                                max_iter = 100000)
    logreg.fit(X_train, y_train)
    # Subplot (top)
    example_df = pd.DataFrame()
    example_df.loc[:, 'worst_concave_points'] = X_train.reshape(-1)
    example_df.loc[:, 'diagnosis'] = y_train.values
    example_df['logistic_preds'] = pd.DataFrame(logreg.predict_proba(X_train))[1]
    example_df = example_df.sort_values(['logistic_preds'])
    example_df_list.append(example_df)
    model_list.append(logreg)
    accuracy_list.append(logreg.score(X_test, y_test))
    coef_list.append(logreg.coef_[0])
    c_value_list.append(c_value)
temp_df = pd.DataFrame(coef_list, index = c_value_list, columns = col_names)
temp_df.loc[:, 'model'] = model_list
# Giving the index a name (it is not a column)
temp_df.index.name = 'C (Inverse of Regularization Strength)'
for index, (c_value, example_df) in enumerate(zip(c_value_list, example_df_list)):
    c_value_str = "{0:0.3f}".format(c_value)
    fig, axes = plt.subplots(nrows = 2,
                             ncols = 1,
                             figsize = (12, 7));
    # Just formatting, not relevant for this class
    fig.subplots_adjust(wspace=0.1, hspace = .55)
    """
    fig.suptitle("Logistic Regression (L1) C = " + str(c_value_str),
                 fontsize = 15,
                 y=.94)
    """
    # Code is just to make it so you have different colors in the "title"
    # https://stackoverflow.com/questions/9350171/figure-title-with-several-colors-in-matplotlib
    fig.text(0.45,
             0.92,
             "Logistic Regression (L1) C = ",
             ha="center",
             va="bottom",
             size=20,
             color="black")
    fig.text(0.68,
             0.92,
             str(c_value_str),
             ha="center",
             va="bottom",
             size=20,
             color="purple",)
    axes[0].scatter(example_df['worst_concave_points'], example_df['diagnosis'])
    axes[0].plot(example_df['worst_concave_points'], example_df['logistic_preds'].values, color='red')
    axes[0].set_ylabel('malignant (1) or benign (0)', fontsize = 13)
    axes[0].set_xlabel('worst_concave_points', fontsize = 11)
    axes[1].plot(temp_df.index,
                 temp_df.loc[:, 'worst_concave_points'],
                 label = 'worst_concave_points',
                 color = 'purple');
    axes[1].axvspan(c_value - c_value/10, c_value + c_value/10, color='orange', alpha=0.3, zorder = 1);
    # Restrict to the coefficient columns (the 'model' column holds fitted estimators, not numbers)
    coefLimits = temp_df[col_names].min().min(), temp_df[col_names].max().max()
    accuracyLimits = min(accuracy_list), max(accuracy_list)
    axes[1].tick_params('y', colors='purple');
    axes[1].set_ylim(coefLimits)
    axes[1].set_yticks(np.linspace(coefLimits[0], coefLimits[1], 11))
    axes[1].set_xscale('log')
    axes[1].yaxis.set_major_formatter(FormatStrFormatter('%.1f'))
    axes[1].set_ylabel('weights', color='purple', fontsize = 13)
    axes[1].set_xlabel('C', fontsize = 11)
    axesTwin = axes[1].twinx()
    axesTwin.plot(temp_df.index, accuracy_list, color = 'g')
    axesTwin.tick_params('y', colors='g');
    axesTwin.set_ylim(accuracyLimits)
    axesTwin.set_yticks(np.linspace(accuracyLimits[0], accuracyLimits[1], 11))
    axesTwin.set_ylabel('Accuracy', color='g', fontsize = 13);
    axes[1].grid();
    ###
    fig.savefig('imagesanimation2/' + 'initial' + str(index).zfill(4) + '.png', dpi = 100)
# If you are really curious, I can share how this works.
#!ffmpeg -framerate 1 -i 'initial%04d.png' -c:v libx264 -r 30 -pix_fmt yuv420p initial_002.mp4
```
## <font color=black>sutils</font>
**Change default region**:
Use the `sutils.reset_profiles()` method; a prompt will appear with options asking you to select a default region and AMI.
Use the `price_increase` argument to set the maximum bid for each instance. This number multiplies the lowest spot-instance cost, so for a spot-instance with a base price of \$0.30 and `price_increase=1.15`, the maximum bid for that instance type would be set at \$0.345.
Sometimes your price will still be too low; in that case, run `sutils.reset_profiles()` again with a higher price increase.
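The bid arithmetic from the example above, as a one-line sketch (the numbers are the hypothetical ones from the text):

```python
base_price = 0.30        # lowest spot price for the instance type
price_increase = 1.15    # multiplier passed to reset_profiles
max_bid = base_price * price_increase
print(round(max_bid, 3))  # 0.345
```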
```
from spot_connect import sutils
first_profile_b4_change = sutils.load_profiles()['t2.micro']
# Use the reset_profiles command to change the default region and AMI.
sutils.reset_profiles(price_increase=1.15)
print('\nFirst profile before change')
print(first_profile_b4_change)
print('\nFirst profile after change')
print(sutils.load_profiles()['t2.micro'])
```
**Show all profiles**:
```
sutils.load_profiles()
```
## spotted
**SpotInstance class**
The spot instance class is the main feature in `spotted`. Use the class to specify instance configurations and bid on, launch, and connect to spot-instances.
<font color=red>**Warning: the following examples will create live instances on your account. These examples are cheap but they are not free, make sure to terminate them at the end of this notebook.**</font>
```
from spot_connect import spotted
instance = spotted.SpotInstance('monitor', profile='t2.micro')
```
You should now be able to see a live instance in your console as shown in the image below (in the photo the instance name is "instance1"). The `spot-connect` module automatically creates a new security group for each instance with that instance's name. This is so you can identify the instance name quickly (red square below).
<img src='media/live_instance.png'>
Try connecting a live prompt to that instance. Open a command prompt and use `spot_connect monitor` (substituting whatever name you gave the instance) and you will be connected to the instance you just created. This spot-instance is cheap but it is not free. Don't forget to terminate it when you're done.
## instance_manager
**InstanceManager class**:
The instance manager class lets you handle spot instances and access other module functionality directly.
```
from spot_connect import instance_manager
im = instance_manager.InstanceManager()
```
**Launch/re-connect to instance:**
Launch instances directly using the `InstanceManager`. If you use the `launch_instance` command with the name of an instance that is already online it will simply reconnect to that instance as well as add it to `InstanceManager.instances`.
```
im.launch_instance('monitor', profile='t2.micro')
```
**You can now find this instance in the instance manager's list of instances**:
```
im.show_instances()
```
**Run commands**:
We can use the `run` command to execute commands on any instance from the notebook. Use the `cmd=True` option to submit a command.
```
im.instances['monitor'].run('pwd', cmd=True)
```
**Terminate an instance**:
```
im.terminate('monitor')
```
```
# Be sure to run this cell
from IPython.display import Image
from functools import reduce
```
1. average rate of change, 2. instantaneous slope, 3. differentiation, 4. determinant, 5. inverse matrix, 6. prior probability, 7. posterior probability, 8. descriptive statistics (representative values), 9. interquartile range, 10. outlier, 11. variance, 12. covariance, 13. correlation coefficient, 14. discrete random variable, 15. product of each value of the random variable and its probability, 16. continuous probability distribution, 17. integration, 18. standard deviation, 19. normalization (standardization also accepted), 20. standard normal distribution
```
Image("img1.png")  # Keep the quiz file and the image files in the same directory
```
(1. ) is the slope of the straight line connecting the two points A and B.
As point B approaches point A, the red dotted line coincides with the tangent line at A; the slope of that tangent is the (2. ) at point A, and the tool used to find it is called (3. ).
For the inverse of matrix A to exist, its (4. ) must be nonzero.
If the (5. ) of A does not exist, the matrix identity AB = BA does not hold.
```
Image("img.png")  # Keep the quiz file and the image files in the same directory
```
Bayes' theorem can be written as the following formula.
Here, P(A) is called the (6. ): the probability of event A before event B occurs. Once event B occurs, that information is incorporated and the probability of A changes to P(A|B), called the (7. ). The posterior probability is obtained by multiplying the original probability by the factor P(B|A)/P(B).
In statistics, a good way to describe a distribution is to compute numbers that capture its characteristics and summarize the distribution with them.
These are called (8. ); representative examples are the mean, median, and mode.
In a distribution, the (9. ) is the gap between the third and first quartiles and is used to detect (10. ).
(11. ) measures how spread out the data are.
(12. ) expresses the degree of association between two variables, with its sign giving the direction; normalizing it by each variable's standard deviation, to remove the effect of the units of x and y, gives the (13. ).
A probability distribution over separated values, such as integers, is called a (14. ); the distribution of a die roll is a typical example, and its probabilities are expressed as the (15. ).
Conversely, a distribution over continuous values, such as real numbers, is called a (16. ); the normal distribution is a typical example, and a probability under it equals the result of (17. ) applied to the distribution.
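A tiny numeric sketch of that update rule, with made-up probabilities:

```python
# Hypothetical values: prior P(A), likelihood P(B|A), evidence P(B)
p_a = 0.10
p_b_given_a = 0.90
p_b = 0.30

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)  # the prior 0.10 rises to a posterior of 0.30
```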
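The claim in statement 17, that a probability under a continuous distribution is an integral of its density, can be checked numerically: integrating the standard normal density over [-1, 1] gives roughly 0.683, the familiar one-sigma probability. A trapezoid-rule sketch:

```python
import math

def normal_pdf(x):
    # Density of the standard normal distribution
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Trapezoid rule over [-1, 1]
n = 10_000
a, b = -1.0, 1.0
h = (b - a) / n
area = sum(h * (normal_pdf(a + i * h) + normal_pdf(a + (i + 1) * h)) / 2 for i in range(n))

print(round(area, 4))  # about 0.6827
```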
The shape of a normal distribution is determined by its mean and (18. ): the smaller the (18. ), the narrower the bell around the mean; the larger it is, the wider the bell.
Therefore, to compare probability distributions with differently shaped normal curves, you must perform (19. ).
The standard method is to compute (value - mean)/standard deviation (the z-score) so that the data follow the (20. ), with mean 0 and standard deviation 1.
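The z-score transformation from statement 20, sketched on made-up data; after the transform the mean is 0 and the standard deviation is 1:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]
mean = statistics.mean(data)     # 5
stdev = statistics.pstdev(data)  # 2 (population standard deviation)

z_scores = [(x - mean) / stdev for x in data]
print(statistics.mean(z_scores), statistics.pstdev(z_scores))  # 0.0 and 1.0
```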
<h4>21. Fill in the #TO DO below so that word_list produces the output shown</h4>
```
word_list=['i hope', 'to see you guys ', 'soon']
#TO DO sort
word_list.sort()
word_list=" ".join(word_list)
word_list
```
'people think python is easy but truly it is not'
<h4>Using the users data given below, fill in the TO DO so that the results shown are produced</h4>
<h4>Be sure to use a control statement or a function :^)</h4>
```
users = [{'mail': 'gregorythomas@gmail.com', 'name': 'Brett Holland', 'sex': 'M', 'age': 73},
         {'mail': 'hintoncynthia@hotmail.com', 'name': 'Madison Martinez', 'sex': 'F', 'age': 29},
         {'mail': 'wwagner@gmail.com', 'name': 'Michael Jenkins', 'sex': 'M', 'age': 51},
         {'mail': 'daniel79@gmail.com', 'name': 'Karen Rodriguez', 'sex': 'F', 'age': 32},
         {'mail': 'ujackson@gmail.com', 'name': 'Amber Rhodes', 'sex': 'F', 'age': 42}]
#22. TO DO: write code that computes the sum of the users' ages
sum(age["age"] for age in users )
reduce(lambda x, y: x + y["age"], users,0)
sum(tuple(map(lambda x: x["age"], users)))
```
Result: 227
```
#23. TO DO: write code that outputs a list containing each user's name
[user["name"] for user in users]
list(map(lambda x: x["name"], users))
reduce(lambda x, y: x + [y["name"]], users,[])
```
Result: ['Brett Holland',
'Madison Martinez',
'Michael Jenkins',
'Karen Rodriguez',
'Amber Rhodes']
```
#24. TO DO: write code that outputs the list of records for male users with gmail addresses
list(filter(lambda x : (x["mail"][-9 : ] == 'gmail.com') & (x["sex"]=="M"),users))
# One of the students suggested a nice alternative, so sharing it here: endswith
list(filter(lambda x: (x["mail"].endswith("@gmail.com"))&(x["sex"] == "M"), users))
```
Result: [{'mail': 'gregorythomas@gmail.com',
'name': 'Brett Holland',
'sex': 'M',
'age': 73},
{'mail': 'wwagner@gmail.com',
'name': 'Michael Jenkins',
'sex': 'M',
'age': 51}]
<h4>25. Add a who() method to the student class below that prints the student's name, age, and sex.</h4>
```
class student:
    def __init__(self, name, age, sex):
        self.name = name
        self.age = age
        self.sex = sex

    def who(self):
        print("Name: {} Age: {} Sex: {}".format(self.name, self.age, self.sex))

itsme = student("최영환", 20, "male")
itsme.who()
# The instance name here is just an example
```
Result: Name: xxx Age: xx Sex: xx
```
import os
import pandas as pd
filepath_old = '/media/sf_VBox_Shared/Arabic/Fiqh/2018-04-24-Fiqh/Fiqh'
filepath_new = '/media/sf_VBox_Shared/Arabic/Fiqh/2018-06-08-Fiqh/'
def get_metadata(filepath):
    metadata_dict = {}
    for filename in os.listdir(filepath):
        with open(os.path.join(filepath, filename)) as f:
            metadata = {}
            for line in f.readlines():
                # TODO: metadata is sometimes inconsistent (missing # before META,
                # and fields not separated by :: but a single :)
                if line.startswith('#META#'):
                    splitted = line.split(u'::')
                    if len(splitted) == 2:
                        name, value = splitted
                        value = value.strip()
                        name = name.strip()
                        # only save metadata that has a value
                        #if value != 'NODATA':
                        _, name = name.split(u' ', 1)
                        name = name.replace(u' ', u'_')
                        # remove right-to-left mark
                        name = name.replace(u"\u200F", u'')
                        name = name.split(u'.')[-1]
                        metadata[name] = value
        metadata_dict[filename] = metadata
    return metadata_dict
metadata_old = get_metadata(filepath_old)
metadata_new = get_metadata(filepath_new)
metadata_old_df = pd.DataFrame.from_dict(metadata_old, orient='index')
metadata_new_df = pd.DataFrame.from_dict(metadata_new, orient='index')
metadata_old_df.index.name = 'filename'
metadata_new_df.index.name = 'filename'
metadata_old_df.to_csv(os.path.join(filepath_old, 'metadata-from-files.csv'))
metadata_new_df.to_csv(os.path.join(filepath_new, 'metadata-from-files.csv'))
metadata_old_df.info()
metadata_new_df.info()
metadata_old_df.head()
metadata_new_df.head()
for col in metadata_old_df.columns:
    if col in metadata_new_df.columns:
        print(col, len(set(metadata_old_df[col]).intersection(set(metadata_new_df[col]))))
metadata_old_df.LibURI.value_counts().head()
metadata_new_df.LibURI.value_counts().head()
metadata_old_df.SortField.value_counts().head(), metadata_new_df.SortField.value_counts().head()
metadata_old_df.groupby(['SortField', 'BookTITLE']).size().sort_values(ascending=False).head()
title_to_filename = metadata_old_df[['SortField', 'BookTITLE']].reset_index()
print(len(title_to_filename))
print(title_to_filename[['SortField', 'BookTITLE']].drop_duplicates().shape)
merged = metadata_new_df.reset_index().merge(title_to_filename, left_on=['SortField', 'BookTITLE'], right_on=['SortField', 'BookTITLE'], how='left', suffixes=('_new', '_old'))
merged['filename_old']
merged.filename_old.value_counts().head()
merged[merged.filename_old.isnull()]
merged = merged.dropna(subset=['filename_old'])
merged.to_csv('/media/sf_VBox_Shared/Arabic/Fiqh/merged_metadata.csv', index=False)
```
```
import requests
import shutil
import os
import getpass
from urllib.parse import urlparse
import json
import arrow
import glob
import time
import subprocess
def mkyr(blogpath):
    raw = arrow.now()
    if raw.strftime("%Y") not in os.listdir(blogpath + '/galleries'):
        os.mkdir(blogpath + '/galleries/' + raw.strftime("%Y"))
        #return(raw.strftime("%Y"))
    else:
        return('ERROR: Year already exists')

def mkmth(blogpath):
    raw = arrow.now()
    if raw.strftime("%m") not in os.listdir(blogpath + '/galleries/' + raw.strftime("%Y")):
        os.mkdir(blogpath + '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m"))
        return(raw.strftime("%m"))
    else:
        return('ERROR: Month already exists')

def mkday(blogpath):
    raw = arrow.now()
    if raw.strftime("%d") not in os.listdir(blogpath + '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m")):
        os.mkdir(blogpath + '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m") + '/' + raw.strftime('%d'))
        return(raw.strftime('%d'))
    else:
        return('ERROR: Day already exists')

def cpdayimg(orginpath, blogpath):
    # copies images from origin folder to blog folder
    raw = arrow.now()
    files = os.listdir(orginpath)
    for f in files:
        shutil.copy(orginpath + '/' + f, blogpath + '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m") + '/' + raw.strftime('%d'))
    return(os.listdir(blogpath + '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m") + '/' + raw.strftime('%d')))
cpdayimg('/home/wcmckee/imgtest', '/home/wcmckee/artctrl-test/')
def mkblogpost(blogpath, postname, tagblog):
    raw = arrow.now()
    fultim = raw.datetime
    if postname + '.md' not in os.listdir(blogpath + '/posts'):
        with open(blogpath + '/posts/' + postname + '.meta', 'w') as daympo:
            daympo.write('.. title: {}\n.. slug: {}\n.. date: {}\n.. tags: {}\n.. link:\n.. description:\n.. type: text'.format(postname, postname, fultim, tagblog))
        with open(blogpath + '/posts/' + postname + '.md', 'w') as daymark:
            for toar in os.listdir(blogpath + '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m") + '/' + raw.strftime('%d')):
                # NOTE: the format placeholders were lost in extraction; reconstructed as a markdown image link
                daymark.write('\n\n![{}]({}{})'.format(toar.replace('.png', ''), '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m") + '/' + raw.strftime('%d') + '/', toar))
    else:
        return('ERROR: post already exists')
import awscli
awscli.EnvironmentVariables
def syncblogpost():
    # rsync the galleries and post files to various services - aws and digitalocean bucket.
    pass
```
Folder of images: create a request that builds a JSON of image name, id, and text description; this is the text that goes after each image.
```
def imgjstru(blogpath, postname):
    sampdict = dict()
    raw = arrow.now()
    for osli in os.listdir(blogpath + '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m") + '/' + raw.strftime('%d')):
        #print(osli)
        sampdict.update({osli : dict({'id' : 'one', 'text' : 'sampletext'})})
    #os.listdir(blogpath + '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m") + '/' + raw.strftime('%d'))
    return(sampdict)
def singimg(blogpath, postname, imgname, imgtxt):
    raw = arrow.now()
    with open(blogpath + '/posts/' + postname + '.md', 'a') as daymark:
        # NOTE: the format placeholders were lost in extraction; reconstructed as a markdown image plus caption text
        daymark.write('\n\n![{}]({}{})\n\n{}\n'.format(imgname.replace('.png', ''), '/galleries/' + raw.strftime("%Y") + '/' + raw.strftime("%m") + '/' + raw.strftime('%d') + '/', imgname, imgtxt))
singimg('/home/wcmckee/artctrl-test', 'omgz', 'hello.png', 'this is lame')
singimg('/home/wcmckee/artctrl-test', 'omgz', 'never.png', 'well this didnt look good')
imgjstru('/home/wcmckee/artctrl-test', 'testing')
```
# USGS Historical Earthquake Events
I'm querying the US Geological Service Common Catalog (ComCat) through their API [here](https://github.com/usgs/libcomcat). It works with bounding boxes, not particular counties, so I ran three different downloads for bounding boxes around the Lower 48 states, Alaska, and Hawaii. I'll combine them into a single dataframe, and then filter them by county.
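Since ComCat filters by bounding box rather than by administrative boundary, the coarse pre-filter is just a lat/lon range check; the precise county assignment happens later with the spatial join. A sketch of the box test, using the Lower-48 coordinates from the queries below:

```python
def in_bbox(lon, lat, lon_range, lat_range):
    """Return True if (lon, lat) falls inside the bounding box."""
    return lon_range[0] <= lon <= lon_range[1] and lat_range[0] <= lat <= lat_range[1]

lower48_lon = (-125.0011, -66.9326)
lower48_lat = (24.9493, 49.5904)

print(in_bbox(-122.42, 37.77, lower48_lon, lower48_lat))  # San Francisco -> True
print(in_bbox(-149.90, 61.22, lower48_lon, lower48_lat))  # Anchorage -> False
```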
```
import pandas as pd
import geopandas
import matplotlib.pyplot as plt
```
# API Queries
```
## Query generator
# Lower 48 USA
lon = (-125.0011,-66.9326)
lat = (24.9493, 49.5904)
start = '1996-01-01'
end = '2019-01-01'
name = 'lower48.csv'
minmag = 4
maxmag = 9.9
# Remove '-x' to download results, leave it there to get record count.
print(f'getcsv {name} -b {lon[0]} {lon[1]} {lat[0]} {lat[1]} -s {start} -e {end} -f csv -x -m {minmag} {maxmag}')
## Query generator
# Alaska
lon = (-179.1505,-129.9795)
lat = (51.2097, 71.4410)
start = '1996-01-01'
end = '2019-01-01'
name = 'alaska.csv'
minmag = 4
maxmag = 9.9
# Remove '-x' to download results, leave it there to get record count.
print(f'getcsv {name} -b {lon[0]} {lon[1]} {lat[0]} {lat[1]} -s {start} -e {end} -f csv -x -m {minmag} {maxmag}')
## Query generator
# Hawaii
lon = (-160.2471,-154.8066)
lat = (18.9117, 22.2356)
start = '1996-01-01'
end = '2019-01-01'
name = 'hawaii.csv'
minmag = 4
maxmag = 9.9
# Remove '-x' to download results, leave it there to get record count.
print(f'getcsv {name} -b {lon[0]} {lon[1]} {lat[0]} {lat[1]} -s {start} -e {end} -f csv -x -m {minmag} {maxmag}')
```
## Combining datasets
```
lower48 = pd.read_csv('../data_input/5_USGS_quakes/lower48.csv')
alaska = pd.read_csv('../data_input/5_USGS_quakes/alaska.csv')
hawaii = pd.read_csv('../data_input/5_USGS_quakes/hawaii.csv')
print(lower48.shape, alaska.shape, hawaii.shape)
quakes = pd.concat([lower48, alaska, hawaii], ignore_index=True)
print(quakes.shape)
quakes.head(2)
```
# Map earthquakes to counties
That is, assign the correct county to each earthquake.
```
# Import a shape file with all the counties in the US.
# Note how it doesn't include all the same territories as the
# quake contour map.
counties = geopandas.read_file('../data_input/1_USCounties/')
# Turn state codes from strings to integers
for col in ['STATE_FIPS', 'CNTY_FIPS', 'FIPS']:
    counties[col] = counties[col].astype(int)
print(counties.shape)
counties.head()
# Create geoDF of all the points
quakes_coords = geopandas.GeoDataFrame(
    quakes, geometry=geopandas.points_from_xy(quakes.longitude, quakes.latitude))
print(quakes_coords.shape)
quakes_coords.head(2)
# Mark those points with their respective counties, keeping the point coordinates
quakes_county = geopandas.sjoin(quakes_coords, counties, how='left', op='within').dropna()
print(quakes_county.shape)
quakes_county.head(2)
# Make FIPS codes back into integers
quakes_county['FIPS'] = quakes_county['FIPS'].astype(int)
# Extract year as its own column
quakes_county['year'] = [t.year for t in pd.to_datetime(quakes_county['time'])]
# Trim unnecessary columns
quakes_county = quakes_county[['FIPS','year','magnitude','geometry']]
print(quakes_county.shape)
quakes_county.head(2)
```
# Plots
```
# These are all the earthquakes in the bounding boxes for the lower 48, Alaska, and Hawaii
fig, ax = plt.subplots(figsize=(20,20))
counties.plot(ax=ax, color='white', edgecolor='black');
quakes_coords.plot(ax=ax, marker='o')
plt.show()
#And the same, but trimmed to only show the earthquakes that happened within the county boundaries.
fig, ax = plt.subplots(figsize=(20,20))
counties.plot(ax=ax, color='white', edgecolor='black');
quakes_county.plot(ax=ax, marker='o')
plt.show()
# And just California
fig, ax = plt.subplots(figsize=(10,10))
counties.plot(ax=ax, color='white', edgecolor='black');
quakes_county.plot(ax=ax, marker='o')
ax.set_xlim(-125,-114)
ax.set_ylim(32,42.1)
plt.show()
# Write to shape file
quakes_county.to_file("../data_output/5__USGS_quakes/quakes1.geojson",
driver='GeoJSON')
```
# Map counties to earthquakes
That is, count how many occurred in each county and add it to the pre-existing list of natural disasters by county (the NOAA dataset).
```
noaa = pd.read_csv('../data_output/5__NOAA/noaa_2.csv')
print(noaa.shape)
noaa.head()
# Organize our earthquake events into a table indexed by FIPS and year.
df1 = quakes_county.drop(columns='geometry').groupby(['FIPS','year']).count().unstack(fill_value=0)
print(df1.shape)
df1.head()
# Merge with the official county list so that all the counties are
# represented in the index even if no earthquakes happened there in
# a given year. Unstack and reset index
official_county_list = sorted(counties['FIPS'].tolist())
df2 = df1.reindex(official_county_list, fill_value=0)
print(df2.shape)
df2.head()
# Unstack and reset the index, so that we turn the indexes into columns
# and fill in all the missing values.
df3 = df2.unstack().reset_index()
# Cleanup columns and their names
df3 = df3.rename(columns={0:'earthquakes'}).drop(columns='level_0')
print(df3.shape)
df3.head()
# Now the data are organized by year and FIPS in the same order as they were
# in the NOAA dataset, so that they can be integrated seamlessly.
noaa['earthquakes'] = df3['earthquakes']
print(noaa.shape)
noaa.head()
# Here are the total events for the last 5 years.
noaa.groupby(['year']).sum().tail()
# Export!
noaa.to_csv('../data_output/5_USGS__quakes/noaa_plus_quakes.csv', index=False)
```
# Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
## Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
```
## Explore the Data
Play around with view_sentence_range to view different parts of the data.
```
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
```
## Implement Preprocessing Function
### Text to Word Ids
As you did with other RNNs, you must turn the text into numbers so the computer can understand it. In the function `text_to_ids()`, you'll turn `source_text` and `target_text` from words to ids. However, you need to add the `<EOS>` word id at the end of `target_text`. This will help the neural network predict when the sentence should end.
You can get the `<EOS>` word id by doing:
```python
target_vocab_to_int['<EOS>']
```
You can get other word ids using `source_vocab_to_int` and `target_vocab_to_int`.
```
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
    """
    Convert source and target text to proper word ids
    :param source_text: String that contains all the source text.
    :param target_text: String that contains all the target text.
    :param source_vocab_to_int: Dictionary to go from the source words to an id
    :param target_vocab_to_int: Dictionary to go from the target words to an id
    :return: A tuple of lists (source_id_text, target_id_text)
    """
    # Implement Function
    source_id_text = []
    target_id_text = []
    # Getting lines from source and target
    for source_line, target_line in zip(source_text.split('\n'), target_text.split('\n')):
        # Append ids for source text
        source_id_text.append([source_vocab_to_int.get(word, source_vocab_to_int['<UNK>']) for word in source_line.split(' ')])
        # Append ids for target text, with <EOS> at the end of each sentence
        target_id_text.append([target_vocab_to_int.get(word, target_vocab_to_int['<UNK>']) for word in target_line.split(' ')] + [target_vocab_to_int['<EOS>']])
    return source_id_text, target_id_text
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_text_to_ids(text_to_ids)
```
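As a sanity check, the behavior of `text_to_ids` can be illustrated on a toy example. The vocabularies below are hypothetical (not the project's), and the mapping is re-implemented inline so the snippet stands alone:

```python
# Toy vocabularies, for illustration only (hypothetical ids)
source_vocab = {'<UNK>': 0, 'new': 1, 'jersey': 2}
target_vocab = {'<UNK>': 0, '<EOS>': 1, 'new': 2, 'jersey': 3}

def toy_text_to_ids(source_text, target_text):
    # Map each word to its id, falling back to <UNK>; append <EOS> to targets only
    src = [[source_vocab.get(w, source_vocab['<UNK>']) for w in line.split()]
           for line in source_text.split('\n')]
    tgt = [[target_vocab.get(w, target_vocab['<UNK>']) for w in line.split()]
           + [target_vocab['<EOS>']]
           for line in target_text.split('\n')]
    return src, tgt

src_ids, tgt_ids = toy_text_to_ids('new jersey', 'new jersey')
print(src_ids)  # [[1, 2]]
print(tgt_ids)  # [[2, 3, 1]]  -- note the trailing <EOS> id on the target only
```

The key point is that only the target sequences receive the `<EOS>` id; the source sequences are left unchanged.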
### Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
```
# Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
```
### Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
```
## Build the Neural Network
You'll build the components necessary for a Sequence-to-Sequence model by implementing the following functions:
- `model_inputs`
- `process_decoder_input`
- `encoding_layer`
- `decoding_layer_train`
- `decoding_layer_infer`
- `decoding_layer`
- `seq2seq_model`
### Input
Implement the `model_inputs()` function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
- Targets placeholder with rank 2.
- Learning rate placeholder with rank 0.
- Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
- Target sequence length placeholder named "target_sequence_length" with rank 1
- Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
- Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following tuple: (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
```
def model_inputs():
"""
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='target')
learning_rate = tf.placeholder(tf.float32, [],name='learning_rate')
keep_prob = tf.placeholder(tf.float32, [],name='keep_prob')
target_seq_length = tf.placeholder(tf.int32, [None], name='target_sequence_length')
max_target_seq = tf.reduce_max(target_seq_length, name='max_target_sequence')
source_seq_len = tf.placeholder(tf.int32, [None], name='source_sequence_length')
return inputs, targets, learning_rate, keep_prob, target_seq_length, max_target_seq, source_seq_len
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
```
### Process Decoder Input
Implement `process_decoder_input` by removing the last word id from each batch in `target_data` and concatenating the GO ID to the beginning of each batch.
```
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
"""
Preprocess target data for encoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
"""
# TODO: Implement Function
sliced = tf.strided_slice(target_data, [0,0], [batch_size, -1], [1,1])
return tf.concat([tf.fill([batch_size, 1],target_vocab_to_int['<GO>']), sliced],1)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_process_encoding_input(process_decoder_input)
```
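The slice-and-concat above can be mimicked in NumPy to see exactly what it does to a batch of target ids (toy values; assume `<GO>` has id 1):

```python
import numpy as np

# Toy batch of target ids: 2 sequences of length 4 (hypothetical values)
target_data = np.array([[4, 5, 6, 2],
                        [7, 8, 2, 0]])
go_id = 1  # assumed id for <GO>

# Drop the last column (mirrors tf.strided_slice(..., [batch_size, -1], ...))
sliced = target_data[:, :-1]
# Prepend a column of <GO> ids (mirrors tf.concat with tf.fill)
dec_input = np.concatenate(
    [np.full((target_data.shape[0], 1), go_id), sliced], axis=1)
print(dec_input)
# [[1 4 5 6]
#  [1 7 8 2]]
```

The decoder input is the target sequence shifted right by one position, so at each step the decoder is trained to predict the next word given the previous one.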
### Encoding
Implement `encoding_layer()` to create an Encoder RNN layer:
* Embed the encoder input using [`tf.contrib.layers.embed_sequence`](https://www.tensorflow.org/api_docs/python/tf/contrib/layers/embed_sequence)
* Construct a [stacked](https://github.com/tensorflow/tensorflow/blob/6947f65a374ebf29e74bb71e36fd82760056d82c/tensorflow/docs_src/tutorials/recurrent.md#stacking-multiple-lstms) [`tf.contrib.rnn.LSTMCell`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/LSTMCell) wrapped in a [`tf.contrib.rnn.DropoutWrapper`](https://www.tensorflow.org/api_docs/python/tf/contrib/rnn/DropoutWrapper)
* Pass cell and embedded input to [`tf.nn.dynamic_rnn()`](https://www.tensorflow.org/api_docs/python/tf/nn/dynamic_rnn)
```
from imp import reload
reload(tests)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
"""
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
"""
# Implement Function
encod_embed = tf.contrib.layers.embed_sequence(rnn_inputs, vocab_size=source_vocab_size,embed_dim=encoding_embedding_size)
def lstm_cell():
rnn_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
drop = tf.contrib.rnn.DropoutWrapper(rnn_cell)
return drop
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
outputs, final_state = tf.nn.dynamic_rnn(stacked_lstm, encod_embed, sequence_length=source_sequence_length,dtype=tf.float32)
return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_encoding_layer(encoding_layer)
```
### Decoding - Training
Create a training decoding layer:
* Create a [`tf.contrib.seq2seq.TrainingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/TrainingHelper)
* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)
* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
```
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
"""
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
"""
# TODO: Implement Function
training_helper = tf.contrib.seq2seq.TrainingHelper(dec_embed_input, target_sequence_length)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, training_helper, encoder_state, output_layer)
outputs, states, sequence_lengths = tf.contrib.seq2seq.dynamic_decode(decoder)
return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_train(decoding_layer_train)
```
### Decoding - Inference
Create inference decoder:
* Create a [`tf.contrib.seq2seq.GreedyEmbeddingHelper`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/GreedyEmbeddingHelper)
* Create a [`tf.contrib.seq2seq.BasicDecoder`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/BasicDecoder)
* Obtain the decoder outputs from [`tf.contrib.seq2seq.dynamic_decode`](https://www.tensorflow.org/api_docs/python/tf/contrib/seq2seq/dynamic_decode)
```
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
"""
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
"""
# TODO: Implement Function
start_token = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32), [batch_size])
helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(dec_embeddings, start_token, end_of_sequence_id)
decoder = tf.contrib.seq2seq.BasicDecoder(dec_cell, helper, encoder_state, output_layer)
outputs, states, sequence_lengths = tf.contrib.seq2seq.dynamic_decode(decoder)
return outputs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer_infer(decoding_layer_infer)
```
### Build the Decoding Layer
Implement `decoding_layer()` to create a Decoder RNN layer.
* Embed the target sequences
* Construct the decoder LSTM cell (just like you constructed the encoder cell above)
* Create an output layer to map the outputs of the decoder to the elements of our vocabulary
* Use your `decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob)` function to get the training logits.
* Use your `decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob)` function to get the inference logits.
Note: You'll need to use [tf.variable_scope](https://www.tensorflow.org/api_docs/python/tf/variable_scope) to share variables between training and inference.
```
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
"""
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
def lstm_cell():
rnn_cell = tf.contrib.rnn.LSTMCell(rnn_size,
initializer=tf.random_uniform_initializer(-0.1, 0.1, seed=2))
drop = tf.contrib.rnn.DropoutWrapper(rnn_cell)
return drop
start_sequence_id = target_vocab_to_int['<GO>']
end_sequence_id = target_vocab_to_int['<EOS>']
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
stacked_lstm = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(num_layers)])
output_layer = Dense(target_vocab_size,
kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1))
with tf.variable_scope('decode') as decoding_scope:
training_decoder_output = decoding_layer_train(encoder_state,
stacked_lstm,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
inference_decoder_output = decoding_layer_infer(encoder_state,
stacked_lstm,
dec_embeddings,
start_sequence_id,
end_sequence_id,
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_decoding_layer(decoding_layer)
```
### Build the Neural Network
Apply the functions you implemented above to:
- Encode the input using your `encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size)`.
- Process target data using your `process_decoder_input(target_data, target_vocab_to_int, batch_size)` function.
- Decode the encoded input using your `decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size)` function.
```
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
"""
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Encoder embedding size
:param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
"""
# TODO: Implement Function
_, enc_state = encoding_layer(input_data,
rnn_size,
num_layers,
keep_prob,
source_sequence_length,
source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data, target_vocab_to_int, batch_size)
training_decoder_output, inference_decoder_output = decoding_layer(dec_input,
enc_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
return training_decoder_output, inference_decoder_output
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_seq2seq_model(seq2seq_model)
```
## Neural Network Training
### Hyperparameters
Tune the following parameters:
- Set `epochs` to the number of epochs.
- Set `batch_size` to the batch size.
- Set `rnn_size` to the size of the RNNs.
- Set `num_layers` to the number of layers.
- Set `encoding_embedding_size` to the size of the embedding for the encoder.
- Set `decoding_embedding_size` to the size of the embedding for the decoder.
- Set `learning_rate` to the learning rate.
- Set `keep_probability` to the Dropout keep probability
- Set `display_step` to state how many steps between each debug output statement
```
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 200
# RNN Size
rnn_size = 256
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 256
decoding_embedding_size = 256
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.75
display_step = 100
```
### Build the Graph
Build the graph using the neural network you implemented.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
```
Batch and pad the source and target sequences
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def pad_sentence_batch(sentence_batch, pad_int):
"""Pad sentences with <PAD> so that each sentence of a batch has the same length"""
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
"""Batch targets, sources, and the lengths of their sentences together"""
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
```
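The padding helper above can be illustrated with plain Python lists (re-implemented here so the snippet stands alone; the `<PAD>` id is assumed to be 0):

```python
def pad_sentence_batch(sentence_batch, pad_int):
    # Pad every sentence to the length of the longest one in the batch
    max_sentence = max(len(sentence) for sentence in sentence_batch)
    return [sentence + [pad_int] * (max_sentence - len(sentence))
            for sentence in sentence_batch]

batch = [[5, 6, 7], [8, 9], [10]]
padded = pad_sentence_batch(batch, 0)
print(padded)  # [[5, 6, 7], [8, 9, 0], [10, 0, 0]]
```

Padding only to the longest sentence *within* each batch (rather than in the whole dataset) keeps the tensors as small as possible, which is why `get_batches` also yields the true lengths for the `sequence_length` parameters.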
### Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
def get_accuracy(target, logits):
"""
Calculate accuracy
"""
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
```
### Save Parameters
Save the `batch_size` and `save_path` parameters for inference.
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params(save_path)
```
# Checkpoint
```
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
```
## Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function `sentence_to_seq()` to preprocess new sentences.
- Convert the sentence to lowercase
- Convert words into ids using `vocab_to_int`
- Convert words not in the vocabulary, to the `<UNK>` word id.
```
def sentence_to_seq(sentence, vocab_to_int):
"""
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
"""
# TODO: Implement Function
word_list = sentence.lower().split(' ')
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in word_list]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_sentence_to_seq(sentence_to_seq)
```
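On a toy vocabulary (hypothetical ids), the `<UNK>` fallback behaves like this:

```python
# Hypothetical source vocabulary, for illustration only
vocab_to_int = {'<UNK>': 2, 'he': 10, 'saw': 11, 'a': 12, 'truck': 13}

def sentence_to_seq(sentence, vocab_to_int):
    # Lowercase, split on spaces, and map out-of-vocabulary words to <UNK>
    return [vocab_to_int.get(word, vocab_to_int['<UNK>'])
            for word in sentence.lower().split(' ')]

print(sentence_to_seq('He saw a zeppelin', vocab_to_int))  # [10, 11, 12, 2]
```

Note that `'zeppelin'` is not in the vocabulary, so it maps to the `<UNK>` id rather than raising a `KeyError`.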
## Translate
This will translate `translate_sentence` from English to French.
```
translate_sentence = 'he saw a old yellow truck .'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
```
## Imperfect Translation
You might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words out of the thousands used in everyday English, you're only going to see good results using these words. For this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.
You can train on the [WMT10 French-English corpus](http://www.statmt.org/wmt10/training-giga-fren.tar). This dataset has a larger vocabulary and is richer in the topics discussed. However, it will take days to train on, so make sure you have a GPU and that the neural network is performing well on the dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.
## Submitting This Project
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_language_translation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.
## PA Approval and Date
Here, we will look at how the PA approval rate varies with the date.
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import matplotlib.axes
from sklearn.model_selection import train_test_split
from pandas import Timestamp
import datetime
```
First, we create a training set with the same seed as in our other files (10475).
```
cmm = pd.read_csv("Data/CMM.csv")
cmm_pa = cmm[cmm['dim_pa_id'].notna()]
cmm_pa_train, cmm_pa_test = train_test_split(cmm_pa.copy(), test_size = 0.2,
random_state = 10475, shuffle = True,
stratify = cmm_pa.pa_approved)
cmm_pa_train.head()
cmm_pa_train[cmm_pa_train["day_of_week"] == 1]
cmm.calendar_year.unique()
# An auxiliary function to find the percentage of df[column1 == val1] given that df[column2 == val2].
def percentage_given(df, column1, val1, column2, val2):
intersection = np.sum(np.logical_and(df[column1] == val1,df[column2] == val2));
total = np.sum(df[column2] == val2);
return np.round(100*intersection/total,3);
```
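On synthetic data, `percentage_given` computes the conditional percentage as follows (the helper is re-implemented here so the snippet stands alone; all values are made up):

```python
import numpy as np
import pandas as pd

def percentage_given(df, column1, val1, column2, val2):
    # P(column1 == val1 | column2 == val2), expressed as a percentage
    intersection = np.sum(np.logical_and(df[column1] == val1, df[column2] == val2))
    total = np.sum(df[column2] == val2)
    return np.round(100 * intersection / total, 3)

# Synthetic data: 4 rows on day 2 (Monday), 3 of them approved
toy = pd.DataFrame({'pa_approved': [1, 1, 1, 0, 1, 0],
                    'day_of_week': [2, 2, 2, 2, 3, 3]})
print(percentage_given(toy, 'pa_approved', 1, 'day_of_week', 2))  # 75.0
```

Three of the four Monday rows are approved, so the helper reports 75.0.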
## pa_approved vs day of the week
```
plt.figure()
sns.histplot(cmm_pa_train, x='day_of_week', hue = 'pa_approved',multiple = 'stack',discrete=True)
plt.xticks(ticks=[1,2,3,4,5,6,7], labels=["Sun","Mon","Tue","Wed","Thu","Fri","Sat"],fontsize=18)
plt.title("Day of the week and Approval",fontsize = 30)
plt.xlabel("Day of the week",fontsize=25)
plt.ylabel("Count",fontsize = 25)
plt.legend(["Yes", "No"],title="PA Approved",fontsize=13,title_fontsize=15)
plt.show()
days = cmm_pa_train.day_of_week.unique()
days = np.sort(days)
for day in days:
print("The percentage of people whose PA is approved given that they apply on the ", day, "-th day of the week is : "
, percentage_given(cmm_pa_train,'pa_approved',1,'day_of_week',day))
```
## pa_approved vs month
```
plt.figure(figsize=(12,8))
sns.histplot(cmm_pa_train, x='calendar_month', hue = 'pa_approved',multiple = 'stack',discrete=True)
plt.xticks(ticks=range(1,13), labels=["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"],fontsize=18)
plt.title("Calendar month and Approval",fontsize = 30)
plt.xlabel("Month",fontsize=25)
plt.ylabel("Count",fontsize = 25)
plt.legend(["Yes", "No"],title="PA Approved",fontsize=13,title_fontsize=15)
plt.show()
months = cmm_pa_train.calendar_month.unique()
months = np.sort(months)
month_name=["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
for month in months:
print("The percentage of people whose PA is approved given that they apply in", month_name[month-1], "is: "
, percentage_given(cmm_pa_train,'pa_approved',1,'calendar_month',month))
```
## pa_approved vs is_holiday
```
plt.figure()
sns.histplot(cmm_pa_train, x='is_holiday', hue = 'pa_approved',multiple = 'stack',discrete=True)
plt.xticks(ticks=[0,1], labels=["no","yes"],fontsize=18)
plt.title("Holidays and Approval",fontsize = 30)
plt.xlabel("Holidays",fontsize=25)
plt.ylabel("Count",fontsize = 25)
plt.legend(["Yes", "No"],title="PA Approved",fontsize=13,title_fontsize=15)
plt.show()
print("The percentage of people whose PA is approved on a holiday is: "
, percentage_given(cmm_pa_train,'pa_approved',1,'is_holiday',1))
print("The percentage of people whose PA is approved on a non-holiday is: "
, percentage_given(cmm_pa_train,'pa_approved',1,'is_holiday',0))
```
## pa_approved vs calendar_day
```
plt.figure(figsize=(15,7))
sns.histplot(cmm_pa_train, x='calendar_day', hue = 'pa_approved',multiple = 'stack',discrete=True)
plt.xticks(ticks=range(1,32),fontsize=10)
plt.title("Calendar day and Approval",fontsize = 30)
plt.xlabel("Calendar Day",fontsize=25)
plt.ylabel("Count",fontsize = 25)
plt.legend(["Yes", "No"],title="PA Approved",fontsize=13,title_fontsize=15)
plt.show()
days = cmm_pa_train.calendar_day.unique()
days= np.sort(days)
for day in days:
print("The percentage of people whose PA is approved given that they apply on the ",day,"-th day of a month is : "
, percentage_given(cmm_pa_train,'pa_approved',1,'calendar_day',day))
#Adding a datetime column to cmm_pa_train
cmm_pa_train['calendar_date'] = cmm_pa_train['date_val'].apply(Timestamp)
# List of all the dates in a sorted manner
date_list = cmm_pa_train.calendar_date.unique()
date_list = np.sort(date_list)
count_PA_forms = [0]*len(date_list);
count_PA_approved = [0]*len(date_list);
#Number of entries(i.e. PA forms submitted) and number of PA forms approved on each of these dates
i = 0;
for date in date_list:
count_PA_forms[i] = np.sum(cmm_pa_train['calendar_date'] == date)
count_PA_approved[i] = np.sum(np.logical_and(cmm_pa_train['calendar_date'] == date,cmm_pa_train['pa_approved'] == 1))
i += 1;
PA_approved_percentage = np.divide(count_PA_approved,count_PA_forms)
datePA_df=pd.DataFrame(np.transpose([date_list,count_PA_forms,count_PA_approved,PA_approved_percentage]), columns=('Date','PA_count','PA_approved_count','PA_approved_percent'))
datePA_df['count_rolling'] = datePA_df.iloc[:,1].rolling(window=7).mean()
datePA_df['approved_rolling']=datePA_df.iloc[:,2].rolling(window=7).mean()
datePA_df['percent_rolling']=datePA_df.iloc[:,3].rolling(window=7).mean()
datePA_df.head(15)
plt.figure(figsize=(15,8))
plt.plot(date_list,count_PA_forms,'b')
plt.plot(date_list,count_PA_approved,'r')
plt.plot(date_list,datePA_df.count_rolling,'w')
plt.plot(date_list,datePA_df.approved_rolling,'k')
plt.legend(['PA forms submitted','PA forms approved','7 day rolling avg submitted','7 day rolling avg approved'])
plt.show()
plt.figure(figsize=(15,8))
plt.plot(date_list,PA_approved_percentage,'g')
plt.plot(date_list,datePA_df.percent_rolling,'k')
plt.ylabel("Percentage of PA forms approved")
plt.legend(["Daily percentage approved","7 day rolling average approved"])
plt.show()
```
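The 7-day rolling averages used above smooth out day-to-day noise (weekday/weekend swings, in particular). On a short synthetic series, with a 3-day window to keep the example small, the idea looks like:

```python
import pandas as pd

# Synthetic daily counts (made-up values)
daily = pd.Series([10, 20, 30, 40, 50])
# Each entry is the mean of the current value and the 2 preceding ones;
# the first window-1 entries are NaN because the window is incomplete
rolling = daily.rolling(window=3).mean()
print(rolling.tolist())  # [nan, nan, 20.0, 30.0, 40.0]
```

This is why the rolling-average curves in the plots above start 6 days later than the raw daily curves: `rolling(window=7)` produces NaN until it has seen a full week of data.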
## PA approval vs time depending on the reject code
In this section, we see if the reject code influences the above graph.
```
## date_list has the list of all the dates.
## We do similar computation as before, this time we make an array to keep track of the rejection codes.
## The zero-th index is 70, first index is 75 and the second index is 76
## To store the number and percentage of PA forms approved on each day with a specific reject code.
count_PA_forms = np.zeros((3, len(date_list)))
count_PA_approved = np.zeros((3, len(date_list)))
reject_code_list = [70, 75, 76]
# Number of entries (i.e. PA forms submitted) and number of PA forms approved on each of these dates
for i, date in enumerate(date_list):
    for j in range(3):
        # Approved PA forms
        list_PA_forms = np.logical_and(cmm_pa_train['calendar_date'] == date, cmm_pa_train['reject_code'] == reject_code_list[j])
        list_PA_approved = np.logical_and(list_PA_forms, cmm_pa_train['pa_approved'] == 1)
        count_PA_forms[j, i] = np.sum(list_PA_forms)
        count_PA_approved[j, i] = np.sum(list_PA_approved)
PA_approved_percentage = np.divide(count_PA_approved,count_PA_forms)
# A dataframe for each reject code
datePA_df = [0] * 3  # Initialize the list of dataframes
for j in range(3):
    datePA_df[j] = pd.DataFrame(np.transpose([date_list, count_PA_forms[j], count_PA_approved[j], PA_approved_percentage[j]]), columns=('Date', 'PA_count', 'PA_approved_count', 'PA_approved_percent'))
    datePA_df[j]['count_rolling'] = datePA_df[j].iloc[:, 1].rolling(window=7).mean()
    datePA_df[j]['approved_rolling'] = datePA_df[j].iloc[:, 2].rolling(window=7).mean()
    datePA_df[j]['percent_rolling'] = datePA_df[j].iloc[:, 3].rolling(window=7).mean()
```
### Reject Code 70
```
plt.figure(figsize=(15,8))
plt.plot(date_list,count_PA_forms[0],'b')
plt.plot(date_list,count_PA_approved[0],'r')
plt.plot(date_list,datePA_df[0].count_rolling,'w')
plt.plot(date_list,datePA_df[0].approved_rolling,'k')
plt.legend(['PA forms submitted','PA forms approved','7 day rolling avg submitted','7 day rolling avg approved'])
plt.title("PA form submission and acceptance rate for reject code 70")
plt.show()
```
### Reject code 75
```
plt.figure(figsize=(15,8))
plt.plot(date_list,count_PA_forms[1],'b')
plt.plot(date_list,count_PA_approved[1],'r')
plt.plot(date_list,datePA_df[1].count_rolling,'w')
plt.plot(date_list,datePA_df[1].approved_rolling,'k')
plt.legend(['PA forms submitted','PA forms approved','7 day rolling avg submitted','7 day rolling avg approved'])
plt.title("PA form submission and acceptance rate for reject code 75")
plt.show()
```
### Reject code 76
```
plt.figure(figsize=(15,8))
plt.plot(date_list,count_PA_forms[2],'b')
plt.plot(date_list,count_PA_approved[2],'r')
plt.plot(date_list,datePA_df[2].count_rolling,'w')
plt.plot(date_list,datePA_df[2].approved_rolling,'k')
plt.legend(['PA forms submitted','PA forms approved','7 day rolling avg submitted','7 day rolling avg approved'])
plt.title("PA form submission and acceptance rate for reject code 76")
plt.show()
```
# VacationPy
----
#### Note
* Keep an eye on your API usage. Use https://developers.google.com/maps/reporting/gmp-reporting as reference for how to monitor your usage and billing.
* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
```
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
# Import API key
from api_keys import g_key
```
### Store Part I results into DataFrame
* Load the csv exported in Part I to a DataFrame
```
VacDF = pd.read_csv("CityOutput.csv",index_col = 0)
VacDF.head()
```
### Humidity Heatmap
* Configure gmaps.
* Use the Lat and Lng as locations and Humidity as the weight.
* Add Heatmap layer to map.
```
gmaps.configure(api_key=g_key)
locations = VacDF[["Lat","Lng"]].astype(float)
weights = VacDF["Humidity"]
figure_layout = {
'width': '800px',
'height': '600px',
'border': '1px solid black',
'padding': '1px',
'margin': '0 auto 0 auto'
}
fig = gmaps.figure(layout = figure_layout)
heat_layer = gmaps.heatmap_layer(locations, weights)
fig.add_layer(heat_layer)
fig
```
### Create new DataFrame fitting weather criteria
* Narrow down the cities to fit weather conditions.
* Drop any rows with null values.
```
# Miami Beach Weather
MiamiDF = VacDF.loc[VacDF["Max Temp"].between(80,90)]
MiamiDF = MiamiDF.loc[MiamiDF["Humidity"].between(0,75)]
MiamiDF = MiamiDF.loc[MiamiDF["Cloudiness"].between(0,15)]
MiamiDF = MiamiDF.loc[MiamiDF["Wind Speed"].between(0,10)]
MiamiDF = MiamiDF.dropna()
MiamiDF.head()
```
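The chained `.loc` filters above can equivalently be combined into a single boolean mask with `&`. A minimal self-contained sketch on stand-in data (the real notebook filters `VacDF`):

```python
import pandas as pd

df = pd.DataFrame({
    "Max Temp": [85, 95, 82],
    "Humidity": [60, 50, 80],
    "Cloudiness": [10, 5, 0],
    "Wind Speed": [5, 3, 12],
})

mask = (
    df["Max Temp"].between(80, 90)
    & df["Humidity"].between(0, 75)
    & df["Cloudiness"].between(0, 15)
    & df["Wind Speed"].between(0, 10)
)
ideal = df[mask].dropna()  # only the first row satisfies every condition
```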
### Hotel Map
* Store into variable named `hotel_df`.
* Add a "Hotel Name" column to the DataFrame.
* Set parameters to search for hotels within 5000 meters.
* Hit the Google Places API for each city's coordinates.
* Store the first Hotel result into the DataFrame.
* Plot markers on top of the heatmap.
```
hotel_df = MiamiDF.copy()  # copy so the search results don't mutate MiamiDF
hotel_df["Hotel Name"] = ""
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {
# "radius":"5000",
"rankby": "distance",
"keyword": "hotel",
"key": g_key,
}
for index, row in hotel_df.iterrows():
params["location"] = f"{row['Lat']},{row['Lng']}"
try:
response = requests.get(base_url, params=params).json()
hotel_df.loc[index, 'Hotel Name'] = response["results"][0]["name"]
except:
hotel_df.loc[index, 'Hotel Name']=""
print(hotel_df["Hotel Name"])
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in hotel_df.iterrows()]
locations = hotel_df[["Lat", "Lng"]]
# Add marker layer ontop of heat map
markers = gmaps.marker_layer(locations,info_box_content = hotel_info)
fig.add_layer(markers)
# Display figure
fig
```
```
import copy
from collections import deque
from rdkit.Chem.Draw import IPythonConsole
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter
from rdkit import Chem
from rdkit.Chem import RWMol
from enviroment.ChemEnv import ChemEnv
from enviroment.Utils import mol_to_graph_full
from Rewards.rewards import SizeReward, SingleReward, FinalRewardModule
from models import BaseLine
from rdkit import Chem
from rdkit.Chem import rdBase
from rdkit.Chem import Draw
from rdkit.Chem.Draw import rdMolDraw2D
from rdkit.Chem.Draw import IPythonConsole
from IPython.display import SVG
import networkx as nx
from networkx.readwrite import cytoscape_data
import cyjupyter
from cyjupyter import Cytoscape
from rdkit.Chem import AllChem
from rdkit.Chem.Scaffolds import rdScaffoldNetwork
from urllib import parse
writer = SummaryWriter(f'./logs_test/_logs/tb')
reward_module = FinalRewardModule(writer, [SizeReward()])
env = ChemEnv(54,reward_module,mol_to_graph_full,writer)
class MolTree():
"""Class for holding molecular iteration tree
"""
    def __init__(self, root_mol: Chem.RWMol, idx: int):
        """init fn
        Args:
            root_mol (Chem.RWMol): the molecule to use as the node
            idx (int): index of this node in the tree
        """
self.root_mol = root_mol
self.idx = idx
self.children = []
    def addChild(self, mol: Chem.RWMol, idx: int):
        """add a child molecule
        Args:
            mol (Chem.RWMol): mol to add as a child
            idx (int): index for the new child node
        """
        child = MolTree(mol, idx)
        self.children.append(child)
def addChildren(self, mols: 'list[Chem.RWMol]', i: int):
"""adds children molecules
Args:
mols (list[Chem.RWMol]): mols to add
i (int): starting idx for node numbering
Returns:
int: number of children added
"""
# self.children += list(map(lambda mol: MolTree(mol), mols))
for j,mol in enumerate(mols):
self.children.append(MolTree(mol,i+j))
return len(self.children)
class Handler():
"""Class for handling model. inference and that sort of stuff"""
def __init__(self, path: str, model: nn.Module, env: ChemEnv):
        """create handler instance
Args:
path (str): path to saved model
model (nn.Module): model for params to be loaded into
env ([type]): Chem environment
"""
self.model = model
# self.model.load(path)
self.env = env
def __get_n_best(self,mol: Chem.RWMol, n: int):
"""gets the top n most likely actions given mol
Args:
mol (Chem.RWMol): mol to set as state
n (int): number of actions to return
Returns:
Torch.tensor: tensor containing the actions
"""
# mol = Chem.RWMol(Chem.MolFromSmiles('CC-N'))
self.env.assignMol(mol)
obs = self.env.getObs()
predictions = self.model(*obs)
_, actions = torch.topk(predictions,n)
return actions
def __run_actions(self, mol: Chem.RWMol, actions: 'list[int]'):
"""calculates new mols updated by actions
Args:
mol (Chem.RWMol): starting structure
actions (list[int]): actions to take
Returns:
list[Chem.RWMol]: newly generated molecules
"""
new_mols = []
for action in torch.squeeze(actions):
action_int = int(action)
mol_copy = copy.deepcopy(mol)
self.env.assignMol(mol_copy)
_,_,_,reward_dict = self.env.step(action_int)
if reward_dict['step_reward'] > 0:
new_mols.append(self.env.StateSpace)
return new_mols
def iterate(self, mol, n):
"""Expands the passed molecule by one step
Args:
mol (Chem.RWMol): base molecule to iterate on
n (int): How many different one step iterations to make
Returns:
list[Chem.RWMol]: The mutated molecules
"""
actions = self.__get_n_best(mol, n)
mols = self.__run_actions(mol,actions)
return mols
def treeSearch(self,initial_mol: Chem.RWMol, width: int, size: int):
"""search chemical space around the initial molecule
Args:
initial_mol (Chem.RWMol): starting
width (int): how many branches to make at each step
size (int): total size of the tree
Returns:
[type]: [description]
"""
molTree = MolTree(initial_mol,0)
queue = deque([molTree])
i = 1
while queue:
if size <= 0:
break
mol_node = queue.pop()
children = self.iterate(mol_node.root_mol, width)
j = mol_node.addChildren(children,i)
i = i+j
for child in mol_node.children:
print(Chem.MolToSmiles(child.root_mol))
queue.appendleft(child)
size -= 1
return molTree
def inference():
pass
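# Hedged aside, for illustration only (not part of the original notebook):
# treeSearch above is a breadth-first expansion bounded by `size`. The same
# deque pattern on plain integers (node n gets children 2n and 2n + 1) makes
# the visit order easy to see:
def bfs_expand(root, width, size):
    from collections import deque as _deque
    order = []
    queue = _deque([root])
    while queue and size > 0:
        node = queue.pop()          # take from the right...
        order.append(node)
        for child in [node * 2, node * 2 + 1][:width]:
            queue.appendleft(child)  # ...push on the left: FIFO, i.e. BFS
        size -= 1
    return order
# bfs_expand(1, 2, 5) visits [1, 2, 3, 4, 5], i.e. level by level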
def smi2svg(mol):
try:
Chem.rdmolops.Kekulize(mol)
except:
pass
drawer = rdMolDraw2D.MolDraw2DSVG(690, 400)
AllChem.Compute2DCoords(mol)
drawer.DrawMolecule(mol)
drawer.FinishDrawing()
svg = drawer.GetDrawingText().replace("svg:", "")
return svg
mol = Chem.MolFromSmiles("CC-N")
drawer = rdMolDraw2D.MolDraw2DSVG(690, 400)
drawer.DrawMolecule(mol)
drawer.FinishDrawing()
SVG(drawer.GetDrawingText())
def GraphFromMolTree(mol: MolTree):
"""Function for transforming a Molecule Tree to a nx Graph for use with cytoscape
Args:
mol (MolTree): Tree to be converted
Returns:
nx.graph.Graph: converted graph
"""
g = nx.graph.Graph()
queue = deque([mol])
while queue:
print('s')
mol_tree = queue.pop()
mol = mol_tree.root_mol
if g.number_of_nodes() == 0:
print('X')
g.add_node(mol_tree.idx,mol = Chem.MolToSmiles(mol))#, img=smi2svg(mol), hac=mol.GetNumAtoms())
for child in mol_tree.children:
child_mol = child.root_mol
g.add_node(child.idx, mol = Chem.MolToSmiles(child_mol))#, img = smi2svg(mol))
g.add_edge(mol_tree.idx, child.idx)
queue.appendleft(child)
return g
model = BaseLine(54,300,17)
handler = Handler('af',model,env)
mol = Chem.RWMol(Chem.MolFromSmiles('CC-N'))
tree = handler.treeSearch(mol,3,12)
graph = GraphFromMolTree(tree)
cy_g = cytoscape_data(graph)
stobj=[
{'style': [{'css': {
'shape' : 'circle',
'width':100,
'height':100,
# 'border-color': 'rgb(0,0,0)',
# 'border-opacity': .5,
# 'border-width': 0.0,
# 'color': '#4579e8',
'label': 'data(mol)',
'font-size' : 40,
'layout': {'name' : 'grid'}
# 'background-fit':'contain'
},
'selector': 'node'},
{'css': {
'width': 10.0,
"target-arrow-shape": "triangle",
"line-color": "#9dbaea",
"target-arrow-color": "#9dbaea",
"curve-style": "bezier"
},
'selector': 'edge'}
],
}]
cyobj=Cytoscape(data=cy_g, visual_style=stobj[0]['style'])
cyobj
Chem.MolFromSmiles('N1C2SC12')
cy_g
cyg = {'data' : [],
'directed': True,
'multigraph': False,
'elements': {
'nodes': [
      { 'data': { 'id': 0 } },
      { 'data': { 'id': 1 } },
      { 'data': { 'id': 2 } },
      { 'data': { 'id': 3 } },
      { 'data': { 'id': 4 } },
      { 'data': { 'id': 5 } },
      { 'data': { 'id': 6 } },
      { 'data': { 'id': 7 } },
      { 'data': { 'id': 8 } },
      { 'data': { 'id': 9 } },
      { 'data': { 'id': 10 } },
      { 'data': { 'id': 11 } },
      { 'data': { 'id': 12 } },
      { 'data': { 'id': 13 } },
      { 'data': { 'id': 14 } },
      { 'data': { 'id': 15 } },
      { 'data': { 'id': 16 } }
],
'edges': [
{ 'data': { 'source': 0, 'target': 1 } },
{ 'data': { 'source': 1, 'target': 2 } },
{ 'data': { 'source': 1, 'target': 3 } },
{ 'data': { 'source': 4, 'target': 5 } },
{ 'data': { 'source': 4, 'target': 6 } },
{ 'data': { 'source': 6, 'target': 7 } },
{ 'data': { 'source': 6, 'target': 8 } },
{ 'data': { 'source': 8, 'target': 9 } },
{ 'data': { 'source': 8, 'target': 10 } },
{ 'data': { 'source': 11, 'target': 12 } },
{ 'data': { 'source': 12, 'target': 13 } },
{ 'data': { 'source': 13, 'target': 14 } },
{ 'data': { 'source': 13, 'target': 15 } }
]
}}
cyobj=Cytoscape(data=cyg, visual_style=stobj[0]['style'])#, layout_name='circle')
cyobj
```
[Reinforcement Learning TF-Agents](https://colab.research.google.com/drive/1FXh1BQgMI5xE1yIV1CQ25TyRVcxvqlbH?usp=sharing)
```
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib as mpl
from matplotlib import pyplot as plt
# nice plot figures
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
import matplotlib.animation as animation
# smooth animations
mpl.rc('animation', html='jshtml')
import PIL
import os
import gym
import tf_agents
from tf_agents.environments import suite_atari, suite_gym
from tf_agents.environments.atari_preprocessing import AtariPreprocessing
from tf_agents.environments.atari_wrappers import FrameStack4
from tf_agents.environments.tf_py_environment import TFPyEnvironment
from tf_agents.networks.q_network import QNetwork
from tf_agents.agents.dqn.dqn_agent import DqnAgent
from tf_agents.replay_buffers.tf_uniform_replay_buffer import TFUniformReplayBuffer
from tf_agents.metrics import tf_metrics
from tf_agents.drivers.dynamic_step_driver import DynamicStepDriver
from tf_agents.policies.random_tf_policy import RandomTFPolicy
from tf_agents.utils.common import function
# functions to plot animations on a per frame basis
def update_scene(num, frames, patch):
patch.set_data(frames[num])
return patch,
def plot_animation(frames, repeat=False, interval=40):
fig = plt.figure()
patch = plt.imshow(frames[0])
plt.axis('off')
anim = animation.FuncAnimation(
fig, update_scene, fargs=(frames, patch),
frames=len(frames), repeat=repeat, interval=interval)
plt.close()
return anim
# save an agent's demo (after training)
saved_frames = []
def save_frames(trajectory):
global saved_frames
saved_frames.append(tf_env.pyenv.envs[0].render(mode="rgb_array"))
def play_game_demo(tf_env, the_agent, obs_list, n_steps):
watch_driver = DynamicStepDriver(
tf_env,
the_agent.policy,
observers=[save_frames] + obs_list,
num_steps=n_steps)
final_time_step, final_policy_state = watch_driver.run()
def save_animated_gif(frames): # saved_frames is passed in
image_path = os.path.join("images", "rl", "breakout.gif")
frame_images = [PIL.Image.fromarray(frame) for frame in frames[:150]]
frame_images[0].save(image_path, format='GIF',
append_images=frame_images[1:],
save_all=True,
duration=30,
loop=0)
# %%html
# <img src="images/rl/breakout.gif" /> runs the gif in a jupyter/colab environment
# 8
# install this dependency for LunarLander
# pip install gym[box2d]
test_env = gym.make("LunarLander-v2")
test_env # seems like there is a time limit
test_env.reset() # 8 values from each observation
```
From the source code, we can see that the 8 components of each observation (x, y, h, v, a, w, l, r) correspond to:
+ x,y: the coordinates of the spaceship. It starts at a random location near (0, 1.4) and must land near the target at (0, 0).
+ h,v: the horizontal and vertical speed of the spaceship. It starts with a small random speed.
+ a,w: the spaceship's angle and angular velocity.
+ l,r: whether the left or right leg touches the ground (1.0) or not (0.0).
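For readability, the 8 components can be unpacked into named fields. A small sketch (the field order follows the description above; `LanderObs` is our own name, not part of Gym):

```python
from collections import namedtuple

# Named view of an 8D LunarLander observation, in the order described above
LanderObs = namedtuple("LanderObs", ["x", "y", "h", "v", "a", "w", "l", "r"])

obs = LanderObs(*[0.0, 1.4, 0.0, -0.1, 0.0, 0.0, 0.0, 0.0])  # e.g. just after reset
```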
```
print(test_env.observation_space) #
print(test_env.action_space, test_env.action_space.n) # 4 possible values
```
Looking at the https://gym.openai.com/envs/LunarLander-v2/, these actions are:
+ do nothing
+ fire left orientation engine
+ fire main engine
+ fire right orientation engine
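A tiny helper mapping action indices to the names above can make logs easier to read (our own convenience, not part of the Gym API):

```python
# Names for LunarLander-v2's four discrete actions, per the list above
ACTION_NAMES = {
    0: "do nothing",
    1: "fire left orientation engine",
    2: "fire main engine",
    3: "fire right orientation engine",
}

def describe_action(action: int) -> str:
    return ACTION_NAMES[action]
```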
```
# PG REINFORCE algorithm
keras.backend.clear_session()
np.random.seed(42)
tf.random.set_seed(42)
n_inputs = test_env.observation_space.shape[0]
n_outputs = test_env.action_space.n
model = keras.models.Sequential([
keras.layers.Dense(32, activation="relu", input_shape=[n_inputs]),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dense(32, activation='relu'),
keras.layers.Dense(n_outputs, activation="softmax")
])
# play multiple episodes, exploring the environment randomly and recording
# gradients and rewards
def play_one_step(env, obs, model, loss_fn):
    with tf.GradientTape() as tape:
        probas = model(obs[np.newaxis])
        logits = tf.math.log(probas + keras.backend.epsilon())
        action = tf.random.categorical(logits, num_samples=1)
        loss = tf.reduce_mean(loss_fn(action, probas))
    grads = tape.gradient(loss, model.trainable_variables)
    obs, reward, done, info = env.step(action[0, 0].numpy())  # actually take the step (was missing)
    return obs, reward, done, grads
def play_multiple_episodes(env, n_episodes, n_max_steps, model, loss_fn):
all_grads, all_rewards = [], []
for episode in range(n_episodes):
current_grads, current_rewards = [], []
obs = env.reset()
for step in range(n_max_steps):
obs, reward, done, grads = play_one_step(env, obs, model, loss_fn)
current_rewards.append(reward)
current_grads.append(grads)
if done:
break
all_grads.append(current_grads)
all_rewards.append(current_rewards)
return all_rewards, all_grads
# compute sum of future discounted rewards and standardize to differentiate
# good and bad decisions
def discount_rewards(rewards, discount_rate):
    discounted = np.array(rewards, dtype=float)
    for step in range(len(discounted) - 2, -1, -1):
        discounted[step] += discounted[step + 1] * discount_rate
    return discounted
def discount_and_normalize_rewards(all_rewards, discount_rate):
discounted_rewards = [discount_rewards(reward, discount_rate) for reward in all_rewards]
flattened_rewards = np.concatenate(discounted_rewards)
rewards_mean = flattened_rewards.mean()
rewards_stddev = flattened_rewards.std()
return [(reward - rewards_mean) / rewards_stddev for reward in discounted_rewards]
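# Worked example (hedged, standalone check of the discounting recurrence):
# discounting [10, 0, -50] at rate 0.8 walks backwards through the rewards:
#   step 1: 0 + 0.8 * (-50) = -40
#   step 0: 10 + 0.8 * (-40) = -22
_demo = np.array([10.0, 0.0, -50.0])
for _step in range(len(_demo) - 2, -1, -1):
    _demo[_step] += _demo[_step + 1] * 0.8
# _demo is now [-22., -40., -50.]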
n_iterations = 200
n_episodes_per_update = 16
n_max_steps = 1000
discount_rate = 0.99
env = gym.make("LunarLander-v2")
optimizer = keras.optimizers.Nadam(learning_rate=0.005)
loss_fn = keras.losses.sparse_categorical_crossentropy
# the model outputs probabilities for each class so we use categorical_crossentropy
# and the action is just 1 value (not a 1 hot vector so we use sparse_categorical_crossentropy)
env.seed(42)
# this will take very long, so I'm not calling it for the sake of my computer's mental health
def train(n_iterations, env, n_episodes_per_update, n_max_steps, model, loss_fn, discount_rate):
    mean_rewards = []  # collect per-iteration mean rewards for the learning curve
    for iteration in range(n_iterations):
all_rewards, all_grads = play_multiple_episodes(env, n_episodes_per_update, n_max_steps, model, loss_fn)
# for plotting the learning curve with undiscounted rewards
# alternatively, just use a reduce_sum from tf and extract the numpy scalar value using .numpy()
mean_reward = sum(map(sum, all_rewards)) / n_episodes_per_update
print("\rIteration: {}/{}, mean reward: {:.1f} ".format( # \r means that it will not return a new line, it will just replace the current line
iteration + 1, n_iterations, mean_reward), end="")
mean_rewards.append(mean_reward)
all_discounted_rewards = discount_and_normalize_rewards(all_rewards, discount_rate)
all_mean_grads = []
for var_index in range(len(model.trainable_variables)):
mean_grads = tf.reduce_mean(
[final_reward * all_grads[episode_index][step][var_index]
for episode_index, final_rewards in enumerate(all_discounted_rewards)
for step, final_reward in enumerate(final_rewards)], axis=0)
all_mean_grads.append(mean_grads)
optimizer.apply_gradients(zip(all_mean_grads, model.trainable_variables))
# 9 TF-Agents SpaceInvaders-v4
environment_name = "SpaceInvaders-v4"
env = suite_atari.load(
environment_name,
max_episode_steps=27000,
gym_env_wrappers=[AtariPreprocessing, FrameStack4]
)
env
```
+ environment ✓
+ driver ✓
+ observer(s) ✓
+ replay buffer ✓
+ dataset ✓
+ agent with collect policy ✓
+ DQN ✓
+ training loop ✓
```
# environment officially built
tf_env = TFPyEnvironment(env)
dropout_params = [0.4]
fc_params = [512]
conv_params = [(32, (8, 8), 5),
(64, (4, 4), 4),
(64, (3, 3), 1),]
preprocessing_layer = keras.layers.Lambda(lambda obs: tf.cast(obs, np.float32) / 255.) # uint8 beforehand
dqn = QNetwork(
tf_env.observation_spec(),
tf_env.action_spec(),
preprocessing_layers=preprocessing_layer,
conv_layer_params=conv_params,
fc_layer_params=fc_params,
dropout_layer_params=dropout_params,
activation_fn=keras.activations.relu,
)
# dqn agent with collect policy officially built
update_period = 4
train_step = tf.Variable(0)
epsilon_greedy_policy = keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=1.0,
decay_steps=250000 // update_period,
end_learning_rate=0.01,
)
dqn_agent = DqnAgent(
tf_env.time_step_spec(),
tf_env.action_spec(),
q_network=dqn,
optimizer=keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9, momentum=0.0, epsilon=1e-07, centered=False),
train_step_counter=train_step,
gamma=0.99,
td_errors_loss_fn=keras.losses.Huber(reduction="none"),
target_update_period=2000,
epsilon_greedy=lambda: epsilon_greedy_policy(train_step)
)
dqn_agent.initialize()
# uniform replay buffer officially built
replay_buffer = TFUniformReplayBuffer(
dqn_agent.collect_data_spec,
batch_size = tf_env.batch_size,
max_length=100000,
)
replay_buffer_observer = replay_buffer.add_batch
# observers + metrics officially built
training_metrics = [
tf_metrics.AverageEpisodeLengthMetric(),
tf_metrics.AverageReturnMetric(),
tf_metrics.NumberOfEpisodes(),
tf_metrics.EnvironmentSteps(),
]
class ShowProgress:
def __init__(self, total):
self.counter = 0
self.total = total
def __call__(self, trajectory):
if not trajectory.is_boundary():
self.counter += 1
if self.counter % 100 == 0:
print("\r{}/{}".format(self.counter, self.total), end="")
# driver officially created
driver = DynamicStepDriver(
tf_env,
dqn_agent.collect_policy,
observers = training_metrics + [ShowProgress(2000)],
num_steps=update_period
)
random_policy = RandomTFPolicy(
tf_env.time_step_spec(),
tf_env.action_spec()
)
initial_driver = DynamicStepDriver(
tf_env,
random_policy,
observers = [replay_buffer.add_batch] + [ShowProgress(2000)],
num_steps=update_period
)
final_time_step, final_policy_state = initial_driver.run()
# dataset officially built
dataset = replay_buffer.as_dataset(
sample_batch_size=64,
num_steps=2,
num_parallel_calls=3,
).prefetch(3)
driver.run = function(driver.run)
dqn_agent.train = function(dqn_agent.train)
# I would train it, but my computer suffers from dementia
# training loop officially built
def training(n_iterations, agent, driver, tf_env, dataset):
time_step = None
    policy_state = agent.collect_policy.get_initial_state(tf_env.batch_size)
iterator = iter(dataset) # forgot to do this!
for iteration in range(n_iterations):
time_step, policy_state = driver.run(time_step, policy_state)
trajectories, buffer_info = next(iterator)
train_loss = agent.train(trajectories)
```
# A quick introduction to Blackjax
BlackJAX is an MCMC sampling library based on [JAX](https://github.com/google/jax). BlackJAX provides well-tested and ready to use sampling algorithms. It is also explicitly designed to be modular: it is easy for advanced users to mix-and-match different metrics, integrators, trajectory integrations, etc.
In this notebook we provide a simple example based on basic Hamiltonian Monte Carlo and the NUTS algorithm to showcase the architecture and interfaces of the library.
```
import jax
import jax.numpy as jnp
import jax.scipy.stats as stats
import matplotlib.pyplot as plt
import numpy as np
import blackjax
%load_ext watermark
%watermark -d -m -v -p jax,jaxlib,blackjax
jax.devices()
```
## The problem
We'll generate observations from a normal distribution of known `loc` and `scale` to see if we can recover the parameters in sampling. Let's take a decent-size dataset with 1,000 points:
```
loc, scale = 10, 20
observed = np.random.normal(loc, scale, size=1_000)
def logprob_fn(loc, scale, observed=observed):
"""Univariate Normal"""
logpdf = stats.norm.logpdf(observed, loc, scale)
return jnp.sum(logpdf)
logprob = lambda x: logprob_fn(**x)
```
## HMC
### Sampler parameters
```
inv_mass_matrix = np.array([0.5, 0.5])
num_integration_steps = 60
step_size = 1e-3
hmc = blackjax.hmc(logprob, step_size, inv_mass_matrix, num_integration_steps)
```
### Set the initial state
The initial state of the HMC algorithm requires not only an initial position, but also the potential energy and gradient of the potential energy at this position. BlackJAX provides an `init` function to initialize the state from an initial position.
```
initial_position = {"loc": 1.0, "scale": 2.0}
initial_state = hmc.init(initial_position)
initial_state
```
### Build the kernel and inference loop
The HMC kernel is easy to obtain:
```
%%time
hmc_kernel = jax.jit(hmc.step)
```
BlackJAX does not provide a default inference loop, but it is easy to implement with JAX's `lax.scan`:
```
def inference_loop(rng_key, kernel, initial_state, num_samples):
@jax.jit
def one_step(state, rng_key):
state, _ = kernel(rng_key, state)
return state, state
keys = jax.random.split(rng_key, num_samples)
_, states = jax.lax.scan(one_step, initial_state, keys)
return states
```
### Inference
```
%%time
rng_key = jax.random.PRNGKey(0)
states = inference_loop(rng_key, hmc_kernel, initial_state, 10_000)
loc_samples = states.position["loc"].block_until_ready()
scale_samples = states.position["scale"]
fig, (ax, ax1) = plt.subplots(ncols=2, figsize=(15, 6))
ax.plot(loc_samples)
ax.set_xlabel("Samples")
ax.set_ylabel("loc")
ax1.plot(scale_samples)
ax1.set_xlabel("Samples")
ax1.set_ylabel("scale")
```
## NUTS
NUTS is a *dynamic* algorithm: the number of integration steps is determined at runtime. We still need to specify a step size and a mass matrix:
```
inv_mass_matrix = np.array([0.5, 0.5])
step_size = 1e-3
nuts = blackjax.nuts(logprob, step_size, inv_mass_matrix)
initial_position = {"loc": 1.0, "scale": 2.0}
initial_state = nuts.init(initial_position)
initial_state
%%time
rng_key = jax.random.PRNGKey(0)
states = inference_loop(rng_key, nuts.step, initial_state, 4_000)
loc_samples = states.position["loc"].block_until_ready()
scale_samples = states.position["scale"]
fig, (ax, ax1) = plt.subplots(ncols=2, figsize=(15, 6))
ax.plot(loc_samples)
ax.set_xlabel("Samples")
ax.set_ylabel("loc")
ax1.plot(scale_samples)
ax1.set_xlabel("Samples")
ax1.set_ylabel("scale")
```
### Use Stan's window adaptation
Specifying the step size and inverse mass matrix is cumbersome. We can use Stan's window adaptation to get reasonable values for them so we have, in practice, no parameter to specify.
The adaptation algorithm takes a function that returns a transition kernel given a step size and an inverse mass matrix:
```
%%time
warmup = blackjax.window_adaptation(
blackjax.nuts,
logprob,
1000,
)
state, kernel, _ = warmup.run(
rng_key,
initial_position,
)
```
We can use the obtained parameters to define a new kernel. Note that we do not have to use the same kernel that was used for the adaptation:
```
%%time
states = inference_loop(rng_key, kernel, state, 1_000)
loc_samples = states.position["loc"].block_until_ready()
scale_samples = states.position["scale"]
fig, (ax, ax1) = plt.subplots(ncols=2, figsize=(15, 6))
ax.plot(loc_samples)
ax.set_xlabel("Samples")
ax.set_ylabel("loc")
ax1.plot(scale_samples)
ax1.set_xlabel("Samples")
ax1.set_ylabel("scale")
```
## Sample multiple chains
We can easily sample multiple chains using JAX's `vmap` construct. See the [documentation](https://jax.readthedocs.io/en/latest/jax.html?highlight=vmap#jax.vmap) to understand how the mapping works.
```
num_chains = 4
initial_positions = {"loc": np.ones(num_chains), "scale": 2.0 * np.ones(num_chains)}
initial_states = jax.vmap(nuts.init, in_axes=(0))(initial_positions)
def inference_loop_multiple_chains(
rng_key, kernel, initial_state, num_samples, num_chains
):
def one_step(states, rng_key):
keys = jax.random.split(rng_key, num_chains)
states, _ = jax.vmap(kernel)(keys, states)
return states, states
keys = jax.random.split(rng_key, num_samples)
_, states = jax.lax.scan(one_step, initial_state, keys)
return states
%%time
states = inference_loop_multiple_chains(
rng_key, nuts.step, initial_states, 2_000, num_chains
)
states.position["loc"].block_until_ready()
```
This scales very well to hundreds of chains on CPU, tens of thousand on GPU:
```
%%time
num_chains = 40
initial_positions = {"loc": np.ones(num_chains), "scale": 2.0 * np.ones(num_chains)}
initial_states = jax.vmap(nuts.init, in_axes=(0,))(initial_positions)
states = inference_loop_multiple_chains(
rng_key, nuts.step, initial_states, 1_000, num_chains
)
states.position["loc"].block_until_ready()
```
In this example the result is a dictionary and each entry has shape `(num_samples, num_chains)`. Here's how to access the samples of the second chain for `loc`:
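Selecting one chain is just indexing along the chain axis. A minimal NumPy stand-in for `states.position["loc"]` (JAX arrays index the same way):

```python
import numpy as np

# Stand-in for states.position["loc"], shape (num_samples, num_chains)
loc_samples = np.array([[0, 1, 2, 3],
                        [4, 5, 6, 7],
                        [8, 9, 10, 11]])

second_chain = loc_samples[:, 1]  # every sample of the second chain
```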
# Introductory Python course: data processing and analysis
The best way to learn to program is by doing something useful, so this introduction to Python is built around a common task: _data analysis_. In this hands-on workshop we will briefly review the basic concepts of programming in order to automate processes, covering Python syntax (together with NumPy and matplotlib). To do so, we will follow the materials from [Software-Carpentry](https://software-carpentry.org/) ([see notes](http://swcarpentry.github.io/python-novice-inflammation/)).
__Our fundamental working tool is the Jupyter Notebook__; you will learn more about it in the coming classes. Over the course you will become familiar with it and learn to use it (this document was generated from a notebook).
In this first session, we will cover the steps to follow so that __you can install Python and start learning at your own pace.__
## Steps to follow:
### 1. Downloading Python.
Installing Python, the Notebook, and all the packages we will use one by one can be an arduous and exhausting task, but don't worry: someone has already done the hard work!
__[Anaconda](https://continuum.io/anaconda/) is a Python distribution that bundles many of the libraries needed for scientific computing__ and, certainly, all the ones we will need in this course. It also __includes tools for programming in Python, such as [Jupyter Notebook](http://jupyter.org/) or [Spyder](https://github.com/spyder-ide/spyder#spyder---the-scientific-python-development-environment)__ (a MATLAB-style IDE).
All you need to do is:
* Go to the [Anaconda download page](http://continuum.io/downloads).
* Select your operating system (Windows, OSX, Linux).
* Download Anaconda (we will use Python 3.X).
<img src="../images/download_anaconda.png" alt="Download" />
### 2. Installing Python.
Check the __[installation instructions](http://docs.continuum.io/anaconda/install.html)__ for Anaconda on your operating system. On Windows and OS X you will find the typical graphical installers you are already used to. If you are on Linux, you will have to run the installation script from the command line, so remember to check that you have bash installed and to give the script execution permissions.
__Important: make sure you install Anaconda only for your user and without administrator privileges; they are not needed and can cause problems later if you do not always have access rights.__
Great! It is now installed, but where?
* On __Windows__, from `Start > Anaconda` you will see a series of tools now at your disposal. Don't be afraid to open them!
* On __OS X__, you can access a launcher with the same tools from the `anaconda` folder inside your home folder.
* On __Linux__, given the large number of combinations of distributions and desktops, you will not have those graphical shortcuts (which does not mean you cannot create them yourself later), but as you will see, they are not needed at all and are not part of how we work in this course.
Now let's __update Anaconda__ to make sure our Python distribution and all its packages are up to date. Open a __command window__ (Command Prompt on Windows or a terminal on OS X) and run the following update commands (confirming if new packages have to be installed):
```
conda update conda
conda update --all
```
Si experimentas cualquier clase de problema durante este proceso, [desinstala tu distribución de Anaconda](http://docs.continuum.io/anaconda/install.html) y vuelve a instalarla donde puedas asegurarte de tener una conexión a internet estable.
Por último, comprueba que Jupyter Notebook funciona correctamente. Escribe esto en una ventana de comandos y espera a que se abra el navegador.
```
jupyter notebook
```
Deberías ver [esta interfaz](https://try.jupyter.org/) (aunque sin archivos).
Ya tenemos nuestra distribución de Python con todos los paquetes que necesitemos (y prácticamente todos los que en un futuro podamos necesitar).
En caso de que tengas cualquier caso de duda durante el proceso, pregúntanos y recuerda que __¡los buscadores de internet son tus mejores amigos!__
_¡A trabajar!_
---
Clase en vídeo, parte del [Curso de Python para científicos e ingenieros](http://cacheme.org/curso-online-python-cientifico-ingenieros/) grabado en la Escuela Politécnica Superior de la Universidad de Alicante.
```
from IPython.display import YouTubeVideo
YouTubeVideo("x4xegDME5C0", width=560, height=315, list="PLGBbVX_WvN7as_DnOGcpkSsUyXB1G_wqb")
```
---
###### Este material es un resumen actualizado del magnífico [Curso de AeroPython](https://github.com/AeroPython/Curso_AeroPython) realizado por: Juan Luis Cano, Mabel Delgado y Álex Sáez
<br/>
##### <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es"><img alt="Licencia Creative Commons" style="border-width:0" src="http://i.creativecommons.org/l/by/4.0/88x31.png" /></a><br /><span xmlns:dct="http://purl.org/dc/terms/" property="dct:title">(*) El Curso AeroPython</span> por <span xmlns:cc="http://creativecommons.org/ns#" property="cc:attributionName">Juan Luis Cano Rodriguez y Alejandro Sáez Mollejo</span> se distribuye bajo una <a rel="license" href="http://creativecommons.org/licenses/by/4.0/deed.es">Licencia Creative Commons Atribución 4.0 Internacional</a>.
# CSV2VCF_v2
```
# Requirements:
# 1) The first column must contain Gene_symbol.
#    If your CSV file doesn't have a gene name or Gene_symbol,
#    that is fine: the report document will inform you that the gene symbols
#    of your variants are wrong, but the conversion will be performed anyway.
# 2) The second column must contain the coordinates of your variants, like this: 3:987654321.
#    Call this column hg19 if you are working with that build, or
#    call it hg38 if you are working with the hg38 build.
# 3) The third column must contain your variant ID.
# Let's import all the modules used in this script.
# The next lines import the modules needed; if some module is not installed,
# the except branch will install the package(s) and import the module(s).
# This has been tested: when executing the script in new envs, it works.
modules = ['imp','sys', 'requests', 'chardet', 'json', 'pandas', 'numpy', 're','httplib2', 'liftover']
import os
import subprocess
for library in modules:
try:
exec("import {module}".format(module=library))
except:
subprocess.call(['pip', 'install', library])
import pandas as pd
import numpy as np
from liftover import get_lifter
from liftover import ChainFile
from datetime import date
# For a reason I can't explain, requests module did not work sometimes when testing our script in a new envs
# To avoid this, this is imported again.
import requests
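# For reference, a minimal, hypothetical example of an input table that meets
# the requirements above (all column values below are made up for illustration):
example_input = pd.DataFrame({
    'Gene_symbol': ['BRCA2', 'TP53'],
    'hg19': ['13:32906729', '17:7577121'],
    'Variant': ['rs80359550', 'NM_000546.5:c.524G>A'],
})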
# The ultimate goal of this program is to be executable without any interaction with the terminal,
# since the motivation is to create a program that can be used
# by users without a strong bioinformatics background.
# To achieve this, the script lives in a folder called CSV2VCF,
# which contains the script and two folders: one called input and a second called output.
# Ideally, the user only has to put the input file (a CSV file) in the input folder
# and gets the output in the output folder.
# The next lines make this work from any location on the user's computer.
# This reads the file located in the input folder.
actual_path = os.getcwd()
folder_input = '/input/'
input_file_name = os.listdir(actual_path+folder_input)
input_file_name= ''.join(input_file_name)
CSV_input = actual_path+folder_input+input_file_name
# This is to avoid UnicodeDecodeError: 'utf-8'.
# If the Excel document is saved as CSV, no utf-8 errors should occur.
# However, if the user doesn't do this and some special characters appear in the input,
# this partially avoids the issue.
with open(CSV_input, 'rb') as f:
result = chardet.detect(f.read())
# Now we can read and convert the input into a pandas DataFrame
df_input = pd.read_csv(CSV_input, encoding=result['encoding'])
# For future manipulation, we will need two copies of the input
df = df_input.copy()
df_draft = df_input.copy()
# Let's obtain the coordinates of the variant in both genome builds.
# Because we don't know which build the user will use,
# one requirement is to name the second column
# hg38 or hg19,
# so that we can generate the coordinates for both builds now.
if df.columns[1] == 'hg38':
# I need to convert the coordinates to hg19
# Let's get the coordinates from the input file
df["chro"] = df["hg38"].apply(lambda x: x.split(":")[0])
df["location_hg38"] = df["hg38"].apply(lambda x: x.split(":")[1])
df['location_hg38'] = df['location_hg38'].apply(pd.to_numeric)
Chrom2Pos_hg19 = dict()
converter = get_lifter('hg38', 'hg19')
df['chro_N'] = ''
df['location_hg19'] = ''
for index, chro,location_hg38 in zip(df.index,df.chro, df.location_hg38):
df.loc[index, ['chro_N']] = [converter.query(chro, location_hg38)[0][0]]
df.loc[index, ['location_hg19']] = [converter.query(chro, location_hg38)[0][1]]
else:
#The opposite
df["chro"] = df["hg19"].apply(lambda x: x.split(":")[0])
df["location_hg19"] = df["hg19"].apply(lambda x: x.split(":")[1])
df['location_hg19'] = df['location_hg19'].apply(pd.to_numeric)
Chrom2Pos_hg19 = dict()
converter = get_lifter('hg19', 'hg38')
df['chro_N'] = ''
df['location_hg38'] = ''
for index, chro,location_hg19 in zip(df.index,df.chro, df.location_hg19):
df.loc[index, ['chro_N']] = [converter.query(chro, location_hg19)[0][0]]
df.loc[index, ['location_hg38']] = [converter.query(chro, location_hg19)[0][1]]
df
# I will need the ref and the alt to request the API
df['ref:c.'] = ''
df['alt:c.'] = ''
for idx, value in df.iloc[:,2].iteritems():
if ':c.' in value and ">" in value:
df.loc[idx, ['ref:c.']] = list(value.split(">")[0])[-1]
df.loc[idx, ['alt:c.']] = value.split(">")[1]
df['ref:g.'] = df['ref:c.'].replace({"C": "G", "G": "C", "A":"T", "T":"A"})
df['alt:g.'] = df['alt:c.'].replace({"C": "G", "G": "C", "A":"T", "T":"A"})
if ':g.' in value and ">" in value:
df.loc[idx, ['ref:g.']] = list(value.split(">")[0])[-1]
df.loc[idx, ['alt:g.']] = value.split(">")[1]
df['ref:c.'] = df['ref:g.'].replace({"C": "G", "G": "C", "A":"T", "T":"A"})
df['alt:c.'] = df['alt:g.'].replace({"C": "G", "G": "C", "A":"T", "T":"A"})
df
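# Illustrative sanity check: Series.replace with a dict maps each whole value to
# its complement base, which is what the ref/alt strand flips above rely on.
demo_bases = pd.Series(["A", "C", "G", "T"]).replace({"C": "G", "G": "C", "A": "T", "T": "A"})
# demo_bases now holds T, G, C, A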
# I will also need the location of del and ins
df['ins'] = ''
df['insertion:g'] = ''
for idx, value in df.iloc[:,2].iteritems():
if 'ins' in value:
df.loc[idx, ['ins']] = len(list(value.split("ins")[1]))
df.loc[idx, ['insertion:g']] = 'ins' + df.iloc[idx,[2]].apply(lambda x: x.split("ins")[1])[0]
#df.loc[idx, ['insertion:g']] = df.loc[idx, ['insertion:g']].str.replace({"C": "G", "G": "C", "A":"T", "T":"A"})
else:
df.loc[idx, ['ins']] = 0
df['del'] = ''
df['deletion'] = ''
for idx, value in df.iloc[:,2].iteritems():
if 'del' in value:
df.loc[idx, ['del']] = len(list(value.split("del")[1]))
df.loc[idx, ['deletion']] = 'del' + df.iloc[idx,[2]].apply(lambda x: x.split("del")[1])[0]
else:
df.loc[idx, ['del']] = 0
df['sum_ins'] = df['location_hg19'].astype(int) + df['ins'].astype(int)
df['sum_del'] = df['location_hg19'].astype(int) + df['del'].astype(int)
df['sum_ins'] = df['sum_ins'].astype(str)
df['sum_del'] = df['sum_del'].astype(str)
df['location_hg19'] = df['location_hg19'].astype(str)
df['location_ins'] = df[['location_hg19', 'sum_ins']].agg('_'.join, axis=1)
df['location_del'] = df[['location_hg19', 'sum_del']].agg('_'.join, axis=1)
df
# I am going to create the columns where I am going to store the info I want from the API
df['API_REF'] = ''
df['API_ALT'] = ''
df['API_GENE_SYM'] = ''
df['API_LOCATION'] = ''
# Insertion example: chr2:g.17142_17143insA
# Deletion example:  chrMT:g.8271_8279del
df_draft = df_draft.applymap(str)
rs = df_draft[df_draft.apply(lambda x:x.str.contains("rs\d+"))].dropna(how='all').dropna(axis=1, how='all')
if rs.empty == False:
ind = rs.index.to_list()
vals = list(rs.stack().values)
row2rs = dict(zip(ind, vals))
# I have seen occasionally how users save more than one variant ID in the same row.
# To avoid repetition, I delete the row
for index, rs in row2rs.items():
# This will be done in df_draft
df_draft = df_draft.drop(index)
# Second kind of variant ID is insertion
insertion = df_draft[df_draft.apply(lambda x:x.str.contains("ins"))].dropna(how='all').dropna(axis=1, how='all')
if insertion.empty == False:
ind = insertion.index.to_list()
vals = list(insertion.stack().values)
row2insertion = dict(zip(ind, vals))
for index, insertion in row2insertion.items():
df_draft = df_draft.drop(index)
# Third kind of variant ID is deletion
deletion = df_draft[df_draft.apply(lambda x:x.str.contains("del"))].dropna(how='all').dropna(axis=1, how='all')
if deletion.empty == False:
ind = deletion.index.to_list()
vals = list(deletion.stack().values)
row2deletion = dict(zip(ind, vals))
for index, deletion in row2deletion.items():
df_draft = df_draft.drop(index)
subtitution = df_draft[df_draft.apply(lambda x:x.str.contains(">"))].dropna(how='all').dropna(axis=1, how='all')
if subtitution.empty == False:
ind = subtitution.index.to_list()
vals = list(subtitution.stack().values)
row2subtitution = dict(zip(ind, vals))
for index, subtitution in row2subtitution.items():
df_draft = df_draft.drop(index)
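# Illustrative sketch of the pattern matching used above: str.contains with a
# regex flags values whose variant ID looks like a dbSNP accession (rs + digits).
demo_ids = pd.Series(["rs12345", "c.524G>A", "g.100_101insA", "g.200del"])
demo_is_rs = demo_ids.str.contains(r"rs\d+")
# only the first entry matches the dbSNP pattern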
######################################
##### Its time for requesting API#####
######################################
#### If the Variant ID is dbnsp RS123 ####
row2gene_symbol = dict()
row2CHROM = dict()
row2POS = dict()
row2ALT = dict()
row2REF = dict()
server = "http://myvariant.info/v1/query?q="
for row, rs in row2rs.items():
# Request API here
r = requests.get(server+rs)
if not r.ok:
r.raise_for_status()
sys.exit()
decoded = r.json()
# We select the info we want and store it in dict
# For row2gene_symbol
row2gene_symbol[row]= decoded['hits'][0]['dbsnp']['gene']['symbol']
# For row2CHROM
row2CHROM[row] = decoded['hits'][0]['chrom']
# For row2POS
row2POS[row] = decoded['hits'][0]['vcf']['position']
# For row2ALT
row2ALT[row] = decoded['hits'][0]['vcf']['alt']
# For row2REF
row2REF[row] = decoded['hits'][0]['vcf']['ref']
# Next idea, to provide the citations
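# A note on the two endpoint shapes used in this script (strings only, no
# request is made here; the rs number is hypothetical):
# dbSNP IDs go through the query endpoint, while HGVS g. descriptions go
# through the variant endpoint.
example_query_url = "http://myvariant.info/v1/query?q=" + "rs80359550"
example_hgvs_url = "http://myvariant.info/v1/variant/" + "chr1" + ":g." + "35367" + "G" + ">" + "A"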
#### If the Variant ID is a SUBSTITUTION ####
g = ':g.'
R_S_A = '>'
server = 'http://myvariant.info/v1/variant/'
for idx, value in df.iloc[:,2].iteritems():
if ':c.' in value and ">" in value:
API_input = server + df.loc[idx, ['chro_N']][0] + g + df.loc[idx, ['location_hg19']][0] + df.loc[idx, ['ref:g.']][0] + R_S_A + df.loc[idx, ['alt:g.']][0]
r = requests.get(API_input)
if r.status_code == 200:
decoded = r.json()
row2gene_symbol[idx]= decoded['cadd']['gene']['genename']
row2ALT[idx] = decoded['vcf']['alt']
row2REF[idx] = decoded['vcf']['ref']
row2POS[idx] = decoded['vcf']['position']
else:
row2gene_symbol[idx] = "Variant not found in API"
row2ALT[idx] = "Variant not found in API"
row2REF[idx] = "Variant not found in API"
row2POS[idx] = "Variant not found in API"
#### If the Variant ID is a DELETION ####
for idx, value in df.iloc[:,2].iteritems():
if 'del' in value :
API_input = server + df.loc[idx, ['chro_N']][0] + g + df.loc[idx, ['location_del']][0] + 'del'
r = requests.get(API_input)
if r.status_code == 200:
decoded = r.json()
row2gene_symbol[idx]= decoded['dbsnp']['gene']['symbol']
row2ALT[idx] = decoded['vcf']['alt']
row2REF[idx] = decoded['vcf']['ref']
row2POS[idx] = decoded['vcf']['position']
else:
print(API_input)
row2gene_symbol[idx] = "Variant not found in API"
row2ALT[idx] = "Variant not found in API"
row2REF[idx] = "Variant not found in API"
row2POS[idx] = "Variant not found in API"
#### If the Variant ID is a INSERTION ####
for idx, value in df.iloc[:,2].iteritems():
if 'ins' in value :
API_input = server + df.loc[idx, ['chro_N']][0] + g + df.loc[idx, ['location_ins']][0] + 'ins'
r = requests.get(API_input)
if r.status_code == 200:
decoded = r.json()
row2gene_symbol[idx]= decoded['dbsnp']['gene']['symbol']
row2ALT[idx] = decoded['vcf']['alt']
row2REF[idx] = decoded['vcf']['ref']
row2POS[idx] = decoded['vcf']['position']
else:
print(API_input)
row2gene_symbol[idx] = "Variant not found in API"
row2ALT[idx] = "Variant not found in API"
row2REF[idx] = "Variant not found in API"
row2POS[idx] = "Variant not found in API"
# Debugging leftovers: inspect the last assembled insertion string and the columns
df.loc[idx, ['chro_N']][0] + g + df.loc[idx, ['location_ins']][0] + 'ins'
df.columns
# Insertion format example: chr2:g.17142_17143insA
# If the variant ID is dbsnp (i.e. rs00001)
# The only thing I have to do is to request this variant ID directly in My.variant API
row2gene_symbol = dict()
row2CHROM = dict()
row2POS = dict()
row2ALT = dict()
row2REF = dict()
server = "http://myvariant.info/v1/query?q="
for row, rs in row2rs.items():
# Request API here
r = requests.get(server+rs)
if not r.ok:
r.raise_for_status()
sys.exit()
decoded = r.json()
# We select the info we want and store it in dict
# For row2gene_symbol
row2gene_symbol[row]= decoded['hits'][0]['dbsnp']['gene']['symbol']
# For row2CHROM
row2CHROM[row] = decoded['hits'][0]['chrom']
# For row2POS
row2POS[row] = decoded['hits'][0]['vcf']['position']
# For row2ALT
row2ALT[row] = decoded['hits'][0]['vcf']['alt']
# For row2REF
row2REF[row] = decoded['hits'][0]['vcf']['ref']
# Next idea, to provide the citations
# Now, we do the same with other kind of variant IDs
# The dbSNP variant IDs were very easy;
# however, with other kinds of ID things become more complicated.
# I still want to use the myvariant.info API.
# So far this API only works with hg19,
# which means we need to convert the location if the user uses hg38.
# I want to ask the user which genome build they are going to use.
# To know which genome build the user is working with,
# we ask them (in the requirements) to name the second column
# "hg19" or "hg38".
df
if df.columns[1] == 'hg38':
# I need to convert the coordinates to hg19
# Let's get the coordinates from the input file
df["chro"] = df["hg38"].apply(lambda x: x.split(":")[0])
df["location_hg38"] = df["hg38"].apply(lambda x: x.split(":")[1])
df['location_hg38'] = df['location_hg38'].apply(pd.to_numeric)
converter = get_lifter('hg38', 'hg19')
Chrom2Pos_hg19 = dict()
Chrom2Pos_hg38 = dict(zip(df.chro, df.location_hg38))
for index, chro,location_hg38 in zip(df.index,df.chro, df.location_hg38):
Chrom = chro
pos = location_hg38
Chrom2Pos_hg19[converter.query(chro, location_hg38)[0][0]]= converter.query(chro, location_hg38)[0][1]
df[['chro_N','location_hg_19']] = pd.DataFrame(list(Chrom2Pos_hg19.items()))
df
#for_test2 = pd.DataFrame(list(Chrom2Pos_hg19.items()),columns = ['CHROM','POS'])
# Now we do and save the convertion
#for Chrom,Pos in Chrom2Pos_hg38.items():
# chrom = Chrom
# pos = Pos
# converter[chrom][pos]
# Chrom2Pos_hg19[converter.query(chrom, pos)[0][0]]= converter.query(chrom, pos)[0][1]
# Chrom2Pos_hg38 = dict(zip(for_test.CHROM, for_test.POS))
server = 'http://myvariant.info/v1/variant/'
for index, chro_N,location_hg_19 in zip(df.index,df.chro_N, df.location_hg_19):
# Request API here
    r = requests.get(server+str(chro_N)+':g.'+str(location_hg_19))
decoded = r.json()
# We select the info we want and store it in dict
# For row2gene_symbol
row2gene_symbol[index]= decoded['hits'][0]['dbsnp']['gene']['symbol']
# For row2CHROM
#row2CHROM[index] = decoded['hits'][0]['chrom']
# For row2POS
#row2POS[index] = decoded['hits'][0]['vcf']['position']
# For row2ALT
#row2ALT[index] = decoded['hits'][0]['vcf']['alt']
# For row2REF
#row2REF[index] = decoded['hits'][0]['vcf']['ref']
server+str(chro_N)+str(location_hg_19)
decoded
# Example of a full variant URL: http://myvariant.info/v1/variant/chr1:g.35367G>A
df
import random
inp = [{'c1':10, 'c2':100}, {'c1':11,'c2':110}, {'c1':12,'c2':120}]
df = pd.DataFrame(inp)
df['newColumn'] = ""
yourCondition = True
for i in range(len(df)):
# put your condition here
if (yourCondition):
# now you can update what you want
df['newColumn'].values[i] = random.randint(0,9)
print(df)
# First, let identify what column contains the Variant ID
#df['rs'] = df.astype(str).applymap(lambda x: 'rs' in x).any(1)
#df['del'] = df.astype(str).applymap(lambda x: 'del' in x).any(1)
#df['ins'] = df.astype(str).applymap(lambda x: 'ins' in x).any(1)
#df['>'] = df.astype(str).applymap(lambda x: '>' in x).any(1)
#df['<'] = df.astype(str).applymap(lambda x: '<' in x).any(1)
#df['type']=''
#df.loc[df['rs'] == True, 'type'] = 'dbsnp'
#df.loc[df['del'] == True, 'type'] = 'deletion'
#df.loc[df['ins'] == True, 'type'] = 'insertion'
#df.loc[df['>'] == True, 'type'] = 'substitution'
#df.loc[df['<'] == True, 'type'] = 'substitution'
for index, value in df.iterrows():
print(index)
# if df.iloc[index,2].str.contains(">"):
# print(value)
#df["alt"] = df.iloc[:,2].apply(lambda x: x.split(">")[0])
#df
#df[df['A'].str.contains("hello")]
#df.iloc[:,2]
```
# Learning avalanche problems by meteorological factors
```
import pandas as pd
import numpy as np
import json
import graphviz
import matplotlib.pyplot as plt
from sklearn import tree
from sklearn.preprocessing import LabelEncoder
from pprint import pprint
pd.set_option("display.max_rows",6)
%matplotlib inline
```
## Split into test and training data to run a prediction
We use the avalanche forecasts from _Nordvestlandet_ including the forecasting regions _Trollheimen_, _Romsdalen_ and _Sunnmøre_. We keep only the parameters provided by the mountain weather forecast. Besides the weather data for the current day we add the precipitation from the previous day as an additional parameter.
We use 75% of the data for training the model and the remaining 25% to test the model afterwards.
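The random 75/25 split described above can be sketched as follows; the toy DataFrame stands in for the forecast data (illustrative only):

```python
import numpy as np
import pandas as pd

# Toy stand-in for the forecast table
df = pd.DataFrame({"x": range(100)})

# Shuffle the index, then carve off the first quarter as the test set
rng = np.random.default_rng(0)
shuffled = rng.permutation(df.index)
cutoff = len(df) // 4
df_test = df.loc[shuffled[:cutoff]]
df_train = df.loc[shuffled[cutoff:]]
```

Because both sets are taken from one permutation, they are disjoint and together cover the whole dataset.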
```
df_numdata = pd.read_csv('varsel_nordvestlandet_17_18.csv', index_col=0)
### Remove the "2|" in column Rainfall_Average
df_numdata = df_numdata[df_numdata['Rainfall_Average'] != '2|']
### create new data columns with previous days weather data
df_numdata['Rainfall_Most_exposed_area_-1day'] = 0 # precip on the day before - be aware that missing index/day will set previous day to zero
for index, row in df_numdata.iterrows():
try:
df_numdata.loc[index, 'Rainfall_Most_exposed_area_-1day'] = df_numdata.loc[index-1, 'Rainfall_Most_exposed_area']
except KeyError:
print(index-1)
### Randomly shuffle the index of the dataframe.
random_indices = np.random.permutation(df_numdata.index)
### Set a cutoff for how many items we want in the test set (in this case 1/4 of the items)
test_cutoff = int(np.floor(len(df_numdata)/4))
print(test_cutoff)
### Generate the test set by taking the first 1/4 of the randomly shuffled indices.
df_test = df_numdata.loc[random_indices[:test_cutoff]]
### Generate the train set with the rest of the data.
df_train = df_numdata.loc[random_indices[test_cutoff:]]
### Keep only the columns containing weather data...
df_train_target = df_train.filter(['AvalancheProblems_0_Class_AvalancheProblemTypeId'], axis=1)
df_train_input = df_train.filter(['Rainfall_Most_exposed_area',
'Rainfall_Average',
'Wind_Speed_Num',
'Wind_Direction_Num',
'Temperature_Min',
'Temperature_Max',
'Temperature_masl',
'Freezing_Level_masl',
'Rainfall_Most_exposed_area_-1day'], axis=1)
### ...and split between input and target
df_test_target = df_test.filter(['AvalancheProblems_0_Class_AvalancheProblemTypeId'], axis=1)
df_test_input = df_test.filter(['Rainfall_Most_exposed_area',
'Rainfall_Average',
'Wind_Speed_Num',
'Wind_Direction_Num',
'Temperature_Min',
'Temperature_Max',
'Temperature_masl',
'Freezing_Level_masl',
'Rainfall_Most_exposed_area_-1day'], axis=1)
### get the correct target labels
with open(r'../config/snoskred_keys.json') as jdata:
snoskred_keys = json.load(jdata)
enc = LabelEncoder()
label_encoder = enc.fit(df_train_target['AvalancheProblems_0_Class_AvalancheProblemTypeId'])
print ("Categorical classes:", label_encoder.classes_)
class_names2 = []
for l in label_encoder.classes_:
class_names2.append(snoskred_keys['Class_AvalancheProblemTypeName'][str(l)])
print(class_names2)
###
train_input = np.array(df_train_input.values, dtype=float)
train_target = np.array(df_train_target.values, dtype=float)
clf2 = tree.DecisionTreeClassifier(min_samples_leaf=8)
clf2 = clf2.fit(train_input, train_target)
### could also use
#clf2 = clf2.fit(df_train_input.values, df_train_target.values)
dot_data2 = tree.export_graphviz(clf2, out_file=None,
feature_names = df_train_input.columns.values,
class_names = class_names2,
#proportion = True, # show precentages instead of members
label = "root",
filled=True, rounded=True, special_characters=True
)
graph2 = graphviz.Source(dot_data2)
graph2.render("avalanche_problem_meteo_train")
```
We can now compare the prediction by the model to the given target values in the test dataset.
```
test_input = np.array(df_test_input.values, dtype=float)
test_target = np.array(df_test_target.values, dtype=float)
y = clf2.predict(test_input)
s = clf2.score(test_input, test_target)
i = np.arange(len(y))
fig = plt.figure(figsize=(15,10))
ax = fig.gca()
ax.scatter(i, np.squeeze(test_target), label='Truth')
ax.scatter(i, y, label='Prediction')
plt.xlabel('Index')
#ax = fig.gca()
#index_labels = ax.get_yticklabels()
#named_labels = [snoskred_keys['Class_AvalancheProblemTypeName'][l] for l in index_labels]
#print(list(index_labels))#, named_labels)
named_labels = ["Loose dry", "Loose wet", "Glide avalanche", "Wet slab", "Storm slab", "Wind slab", "Persistent slab",]
ax.set_yticklabels(named_labels)
plt.title('Trained on {2} cases\nTesting {0} cases\nClassification score = {1:0.2f}'.format(len(test_target), s, len(train_target)))
plt.legend()
plt.savefig('nordvestlandet_prediction.pdf')
```
### Investigating the metrics of the model
```
from sklearn import metrics
def measure_performance(X,y,clf, show_accuracy=True, show_classification_report=True, show_confusion_matrix=True):
y_pred=clf.predict(X)
if show_accuracy:
print ("Accuracy:{0:.3f}".format(metrics.accuracy_score(y,y_pred)),"\n")
if show_classification_report:
print ("Classification report")
print (metrics.classification_report(y,y_pred),"\n")
if show_confusion_matrix:
print ("Confusion matrix")
print (metrics.confusion_matrix(y,y_pred),"\n")
measure_performance(test_input, test_target,clf2)#, show_classification_report=False, show_confusion_matrix=False)
```
<a href="https://colab.research.google.com/github/roupenminassian/Freelance/blob/main/NLP%20(Logistic_Regression)%20for%20Twitter%20Event%20Prediction.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Embedding, LSTM, Dense, Bidirectional
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
import nltk
from nltk.corpus import twitter_samples
import matplotlib.pyplot as plt
import random
nltk.download('stopwords')
import re
import string
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import TweetTokenizer
from google.colab import drive
drive.mount('/content/drive')
df = pd.read_csv('/content/drive/MyDrive/nlp-getting-started/train.csv')
df2 = pd.read_csv('/content/drive/MyDrive/nlp-getting-started/test.csv')
df2.head()
tweet_test = df2['text']
tweet = df['text']
label = df['target']
tweet2 = []
for i in tweet:
i = re.sub('#', '', i)
i = re.sub(r'https?:\/\/.*[\r\n]*', '', i)
tweet2.append(i)
label_final = []
for i in label:
label_final.append(i)
tokenizer = Tokenizer()
tokenizer.fit_on_texts(tweet2)
stopwords_english = stopwords.words('english')
stemmer = PorterStemmer()
# Clean each tweet word by word (each loop element is a whole tweet, so we
# split it first), then stem the surviving words and rejoin them.
tweets_clean = []
tweets_final = []
for tw in tweet2:
    words = [w for w in tw.split()
             if w not in stopwords_english      # remove stopwords
             and w not in string.punctuation]   # remove punctuation
    tweets_clean.append(" ".join(words))
    tweets_final.append(" ".join(stemmer.stem(w) for w in words))  # stemming
def process_tweet(tweet):
stemmer = PorterStemmer()
stopwords_english = stopwords.words('english')
tweet = re.sub(r'\$\w*', '', tweet)
tweet = re.sub(r'^RT[\s]+', '', tweet)
tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet)
tweet = re.sub(r'#', '', tweet)
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,reduce_len=True)
tweet_tokens = tokenizer.tokenize(tweet)
tweets_clean = []
for word in tweet_tokens:
if (word not in stopwords_english and
word not in string.punctuation):
stem_word = stemmer.stem(word) # stemming word
tweets_clean.append(stem_word)
return tweets_clean
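# Illustrative check of the regex cleanup steps used in process_tweet above
# (hypothetical tweet text; pure `re`, no NLTK resources needed):
demo_tweet = "RT @user: Flooding in #Texas https://t.co/abc"
demo_tweet = re.sub(r'^RT[\s]+', '', demo_tweet)                 # drop retweet marker
demo_tweet = re.sub(r'https?:\/\/.*[\r\n]*', '', demo_tweet)     # drop hyperlinks
demo_tweet = re.sub(r'#', '', demo_tweet)                        # drop the hash sign
# demo_tweet is now "@user: Flooding in Texas "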
def build_freqs(tweets, ys):
yslist = np.squeeze(ys).tolist()
freqs = {}
for y, tweet in zip(yslist, tweets):
for word in process_tweet(tweet):
pair = (word, y)
if pair in freqs:
freqs[pair] += 1
else:
freqs[pair] = 1
return freqs
freqs = build_freqs(tweets_final, label_final)
keys = ['earthquak', 'forest', 'flood', 'evacu', 'disast', 'accident', 'wildfir', '...', 'peopl']
data = []
for word in keys:
pos = 0
neg = 0
if (word, 1) in freqs:
pos = freqs[(word, 1)]
if (word, 0) in freqs:
neg = freqs[(word, 0)]
data.append([word, pos, neg])
data
fig, ax = plt.subplots(figsize = (8, 8))
x = np.log([x[1] + 1 for x in data])
y = np.log([x[2] + 1 for x in data])
ax.scatter(x, y)
plt.xlabel("Log Positive count")
plt.ylabel("Log Negative count")
for i in range(0, len(data)):
ax.annotate(data[i][0], (x[i], y[i]), fontsize=12)
ax.plot([0, 9], [0, 9], color = 'red')
plt.show()
print(freqs)
from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
train_x = vectorizer.fit_transform(tweets_final).toarray()
test_x = vectorizer.transform(tweet_test).toarray()
lr = 0.004
n_epochs = 12000
weights_vector = np.random.random(train_x.shape[1])
def cost_function(pred,truth):
return -truth*np.log(pred) - (1-truth)*np.log(1-pred)
def sigmoid(x):
return 1.0/(1+np.exp(-x))
def lin_mul(x,weights):
return np.dot(x,weights)
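# A quick numeric sanity check of the logistic pieces defined above
# (illustrative values only):
sig_of_zero = 1.0/(1+np.exp(-0.0))   # sigmoid(0) should be exactly 0.5
cheap_cost = -1*np.log(0.9)          # cost of predicting 0.9 when the truth is 1
# cheap_cost is about 0.105: confident correct predictions are nearly free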
for ep in range(n_epochs):
avg_cost = 0
for i in range(len(train_x)):
data_point = train_x[i]
label = label_final[i]
pred_prob = sigmoid(lin_mul(weights_vector,data_point))
avg_cost += cost_function(pred_prob,label)
weights_vector = weights_vector - lr*(pred_prob - label)* data_point
if ep%100==0:
        print ("Epoch {} has finished. Error is {}".format(ep+1,avg_cost/len(train_x)))
preds = np.where(sigmoid(lin_mul(test_x,weights_vector))>.75,1,0)
print(preds)
id = pd.DataFrame(df2['id'])
preds = pd.DataFrame(preds, columns=['target'])
final = pd.concat([id, preds], axis=1)
final.to_csv('final_submission.csv',index=False)
```
<a href="https://colab.research.google.com/github/yvishyst/DS-Unit-1-Sprint-2-Data-Wrangling-and-Storytelling/blob/master/module4-sequence-your-narrative/LS_DS_124_Sequence_your_narrative_Assignment.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
_Lambda School Data Science_
# Sequence Your Narrative - Assignment
Today we will create a sequence of visualizations inspired by [Hans Rosling's 200 Countries, 200 Years, 4 Minutes](https://www.youtube.com/watch?v=jbkSRLYSojo).
Using this [data from Gapminder](https://github.com/open-numbers/ddf--gapminder--systema_globalis/):
- [Income Per Person (GDP Per Capital, Inflation Adjusted) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv)
- [Life Expectancy (in Years) by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv)
- [Population Totals, by Geo & Time](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv)
- [Entities](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv)
- [Concepts](https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv)
Objectives
- sequence multiple visualizations
- combine qualitative anecdotes with quantitative aggregates
Links
- [Hans Rosling’s TED talks](https://www.ted.com/speakers/hans_rosling)
- [Spiralling global temperatures from 1850-2016](https://twitter.com/ed_hawkins/status/729753441459945474)
- "[The Pudding](https://pudding.cool/) explains ideas debated in culture with visual essays."
- [A Data Point Walks Into a Bar](https://lisacharlotterost.github.io/2016/12/27/datapoint-in-bar/): a thoughtful blog post about emotion and empathy in data storytelling
# ASSIGNMENT
1. Replicate the Lesson Code
2. Take it further by using the same gapminder dataset to create a sequence of visualizations that combined tell a story of your choosing.
Get creative! Use text annotations to call out specific countries, maybe: change how the points are colored, change the opacity of the points, change their sized, pick a specific time window. Maybe only work with a subset of countries, change fonts, change background colors, etc. make it your own!
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
income = pd.read_csv("https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--income_per_person_gdppercapita_ppp_inflation_adjusted--by--geo--time.csv")
print(income.shape)
income.head()
lifespan = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--life_expectancy_years--by--geo--time.csv')
print(lifespan.shape)
lifespan.head()
population = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--datapoints--population_total--by--geo--time.csv')
print(population.shape)
population.head()
entities = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--entities--geo--country.csv')
print(entities.shape)
entities.head()
concepts = pd.read_csv('https://raw.githubusercontent.com/open-numbers/ddf--gapminder--systema_globalis/master/ddf--concepts.csv')
print(concepts.shape)
concepts.head()
#merging the dataframes to get the desired dataframe
#merging income, population and lifespan
inc_pop_life = income.merge(population).merge(lifespan)
print(inc_pop_life.shape)
inc_pop_life.head()
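# How the chained merge above works, on toy frames (illustrative data only):
# with no `on=` argument, merge() joins on all shared column names,
# here 'geo' and 'time'.
demo_a = pd.DataFrame({'geo': ['usa', 'usa'], 'time': [2000, 2001], 'income': [1, 2]})
demo_b = pd.DataFrame({'geo': ['usa', 'usa'], 'time': [2000, 2001], 'population': [10, 20]})
demo_ab = demo_a.merge(demo_b)
# demo_ab keeps one row per (geo, time) pair with both value columns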
entities.columns
#merging with relevant entities data
world_data = inc_pop_life.merge(entities[['country','name','world_6region','world_4region']],left_on='geo',right_on='country').drop(columns='geo')
print(world_data.shape)
world_data = world_data.rename(columns = {
'country': 'country_code',
'time': 'year',
'income_per_person_gdppercapita_ppp_inflation_adjusted': 'income',
'life_expectancy_years': 'lifespan',
'population_total': 'population',
'name': 'country',
'world_6region': '6region',
'world_4region': '4region'
})
world_data.head()
sns.relplot(x='income', y='lifespan', data=world_data);
sns.relplot(x='income', y='lifespan', data=world_data[world_data.year==2018]);
sns.relplot(x='income', y='population', data=world_data);
world_data[world_data.country_code=='ind']
sns.relplot(x='income', y='lifespan', data=world_data[world_data.year>2005], hue='6region', sizes=(500,5000), col='year')
plt.suptitle("Income vs. lifespan for various years")
```
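As a hedged starting point for the customization prompt above, here is one possible sketch: annotate specific countries, size points by population, and lighten them with alpha. The `demo` rows are stand-ins; in the notebook you would slice `world_data` instead.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this also runs outside a notebook
import matplotlib.pyplot as plt
import pandas as pd

# Stand-in rows; the real frame is world_data built earlier
demo = pd.DataFrame({
    "country": ["India", "Norway"],
    "income": [6800, 65000],
    "lifespan": [69.1, 82.5],
    "population": [1.35e9, 5.3e6],
})
fig, ax = plt.subplots()
# Size points by population, lighten them with alpha
ax.scatter(demo.income, demo.lifespan, s=demo.population / 1e7, alpha=0.5)
# Text annotations calling out specific countries
for _, row in demo.iterrows():
    ax.annotate(row.country, (row.income, row.lifespan))
ax.set_xlabel("Income per person (PPP $)")
ax.set_ylabel("Life expectancy (years)")
```

The same pattern extends to fonts, background colors, or a year filter on the real data.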
# STRETCH OPTIONS
## 1. Animate!
- [How to Create Animated Graphs in Python](https://towardsdatascience.com/how-to-create-animated-graphs-in-python-bb619cc2dec1)
- Try using [Plotly](https://plot.ly/python/animations/)!
- [The Ultimate Day of Chicago Bikeshare](https://chrisluedtke.github.io/divvy-data.html) (Lambda School Data Science student)
- [Using Phoebe for animations in Google Colab](https://colab.research.google.com/github/phoebe-project/phoebe2-docs/blob/2.1/tutorials/animations.ipynb)
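Beyond the links above, a minimal sketch with matplotlib's `FuncAnimation` is one way in; the random arrays here are stand-ins for selecting `world_data[world_data.year == year]` each frame.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
import numpy as np

rng = np.random.default_rng(0)
fig, ax = plt.subplots()
ax.set_xlim(0, 1e5)
ax.set_ylim(0, 90)
scat = ax.scatter([], [])

def update(year):
    # In the notebook this would select world_data[world_data.year == year]
    x = rng.random(10) * 1e5   # stand-in for income
    y = rng.random(10) * 90    # stand-in for lifespan
    scat.set_offsets(np.c_[x, y])
    ax.set_title(f"Year {year}")
    return (scat,)

anim = FuncAnimation(fig, update, frames=range(1960, 2019), interval=100)
```

`anim.save("gapminder.gif")` (with a writer such as Pillow installed) would export the result.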
## 2. Study for the Sprint Challenge
- Concatenate DataFrames
- Merge DataFrames
- Reshape data with `pivot_table()` and `.melt()`
- Be able to reproduce a FiveThirtyEight graph using Matplotlib or Seaborn.
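As a quick refresher for the reshaping items above, a toy round trip between long and wide format (the column names are illustrative):

```python
import pandas as pd

# Toy long-format frame to practice reshaping
long_df = pd.DataFrame({
    "country": ["usa", "usa", "ind", "ind"],
    "year": [2017, 2018, 2017, 2018],
    "income": [54000, 55000, 6400, 6800],
})
# Wide: one row per country, one column per year
wide = long_df.pivot_table(index="country", columns="year", values="income")
# And back to long with melt
back = wide.reset_index().melt(id_vars="country", value_name="income")
print(wide.shape, back.shape)  # (2, 2) (4, 3)
```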
## 3. Work on anything related to your portfolio site / Data Storytelling Project
```
# TODO
```
```
# List of high schools
high_schools = ["Hernandez High School", "Figueroa High School",
"Wilson High School","Wright High School"]
high_schools
for school in high_schools:
print(school)
# A list of dictionaries of high schools and their school types.
high_school_types = [{"High School": "Griffin", "Type":"District"},
{"High School": "Figueroa", "Type": "District"},
{"High School": "Wilson", "Type": "Charter"},
{"High School": "Wright", "Type": "Charter"}]
high_school_types
for school in high_school_types:
print(f"{school['High School']} High School is a {school['Type']} school")
# List of high schools
high_schools = ["Huang High School", "Figueroa High School", "Shelton High School", "Hernandez High School","Griffin High School","Wilson High School", "Cabrera High School", "Bailey High School", "Holden High School", "Pena High School", "Wright High School","Rodriguez High School", "Johnson High School", "Ford High School", "Thomas High School"]
high_schools
# Add the Pandas dependency.
import pandas as pd
# Create a Pandas Series from a list.
school_series = pd.Series(high_schools)
school_series
for index in range(0,len(school_series)):
print(school_series[index])
# A dictionary of high schools
high_school_dicts = [{"School ID": 0, "school_name": "Huang High School", "type": "District"},
{"School ID": 1, "school_name": "Figueroa High School", "type": "District"},
{"School ID": 2, "school_name":"Shelton High School", "type": "Charter"},
{"School ID": 3, "school_name":"Hernandez High School", "type": "District"},
{"School ID": 4, "school_name":"Griffin High School", "type": "Charter"}]
school_df =pd.DataFrame(high_school_dicts)
school_df
# Three separate lists of information on high schools
school_id = [0, 1, 2, 3, 4]
school_name = ["Huang High School", "Figueroa High School",
"Shelton High School", "Hernandez High School","Griffin High School"]
type_of_school = ["District", "District", "Charter", "District","Charter"]
# Initialize a new DataFrame.
schools_df = pd.DataFrame()
# Add the list to a new DataFrame.
schools_df["School ID"] = school_id
# Print the DataFrame.
schools_df
schools_df["school_name"] = school_name
schools_df["Type"] = type_of_school
schools_df
school_df.columns
school_df.index
school_df.values
# Skill Drill 4.3.5
School_dict = {
"school_ID": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,15],
"school_name": ["Huang High School", "Figueroa High School",
"Shelton High School", "Hernandez High School",
"Griffin High School","Wilson High School",
"Cabrera High School", "Bailey High School",
                    "Holden High School", "Pena High School",
                    "Wright High School", "Rodriguez High School",
"Johnson High School","Ford High School",
"Thomas High School"],
"type": ["D", "D", "C", "D","C", "C","C","D","C","C","C","D","D","D","C"]
}
pd.DataFrame(School_dict)
```
# NLP model creation and training
```
from fastai.gen_doc.nbdoc import *
from fastai.text import *
```
The main thing here is [`RNNLearner`](/text.learner.html#RNNLearner). There are also some utility functions to help create and update text models.
## Quickly get a learner
```
show_doc(language_model_learner)
```
`bptt` (for backpropagation through time) is the number of words we will store the gradient for, and use for the optimization step.
The model used is an [AWD-LSTM](https://arxiv.org/abs/1708.02182) that is built with embeddings of size `emb_sz`, a hidden size of `nh`, and `nl` layers (the `vocab_size` is inferred from the [`data`](/text.data.html#text.data)). All the dropouts are set to values that we found worked pretty well, and you can control their strength by adjusting `drop_mult`. If <code>qrnn</code> is True, the model uses [QRNN cells](https://arxiv.org/abs/1611.01576) instead of LSTMs. The flag `tied_weights` controls whether we should use the same weights for the encoder and the decoder; the flag `bias` controls whether the last linear layer (the decoder) has a bias or not.
You can specify `pretrained_model` if you want to use the weights of a pretrained model. If you have your own set of weights and the corresponding dictionary, you can pass them in `pretrained_fnames`. This should be a list of the name of the weight file and the name of the corresponding dictionary. The dictionary is needed because the function will internally convert the embeddings of the pretrained model to match the dictionary of the [`data`](/text.data.html#text.data) passed (a word may have a different id in the pretrained model). Those two files should be in the models directory of `data.path`.
```
path = untar_data(URLs.IMDB_SAMPLE)
data = TextLMDataBunch.from_csv(path, 'texts.csv')
learn = language_model_learner(data, pretrained_model=URLs.WT103, drop_mult=0.5)
show_doc(text_classifier_learner)
```
`bptt` (for backpropagation through time) is the number of words we will store the gradient for, and use for the optimization step.
The model used is the encoder of an [AWD-LSTM](https://arxiv.org/abs/1708.02182) that is built with embeddings of size `emb_sz`, a hidden size of `nh`, and `nl` layers (the `vocab_size` is inferred from the [`data`](/text.data.html#text.data)). All the dropouts are set to values that we found worked pretty well, and you can control their strength by adjusting `drop_mult`. If <code>qrnn</code> is True, the model uses [QRNN cells](https://arxiv.org/abs/1611.01576) instead of LSTMs.
The input texts are fed into that model in chunks of `bptt` tokens, and only the last `max_len` activations are considered. This gives us the backbone of our model. The head then consists of:
- a layer that concatenates the final outputs of the RNN with the maximum and average of all the intermediate outputs (on the sequence length dimension),
- blocks of ([`nn.BatchNorm1d`](https://pytorch.org/docs/stable/nn.html#torch.nn.BatchNorm1d), [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout), [`nn.Linear`](https://pytorch.org/docs/stable/nn.html#torch.nn.Linear), [`nn.ReLU`](https://pytorch.org/docs/stable/nn.html#torch.nn.ReLU)) layers.
The blocks are defined by the `lin_ftrs` and `drops` arguments. Specifically, the first block has a number of inputs inferred from the backbone architecture, and the last one has a number of outputs equal to `data.c` (which contains the number of classes of the data); the intermediate blocks have a number of inputs/outputs determined by `lin_ftrs` (of course, a block has a number of inputs equal to the number of outputs of the previous block). The dropouts all take the same value `ps` if you pass a float, or the corresponding values if you pass a list. The default is an intermediate hidden size of 50 (which makes two blocks: model_activation -> 50 -> n_classes) with a dropout of 0.1.
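The concat-pooling step of the head described above can be sketched with plain NumPy (an illustration of the idea, not fastai's actual code): the final RNN output is concatenated with the max and the average over the sequence-length dimension.

```python
import numpy as np

# (batch, seq_len, hidden) activations standing in for the RNN backbone output
acts = np.random.rand(4, 10, 8)
last = acts[:, -1, :]        # final output
mx = acts.max(axis=1)        # max over the sequence length dimension
avg = acts.mean(axis=1)      # average over the sequence length dimension
pooled = np.concatenate([last, mx, avg], axis=1)
print(pooled.shape)  # (4, 24): three pools of hidden size 8 concatenated
```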
```
jekyll_note("Using QRNN requires CUDA to be installed (same version as PyTorch is using).")
path = untar_data(URLs.IMDB_SAMPLE)
data = TextClasDataBunch.from_csv(path, 'texts.csv')
learn = text_classifier_learner(data, drop_mult=0.5)
show_doc(RNNLearner)
```
Handles the whole creation from <code>data</code> and a `model` for text data using a certain `bptt`. The `split_func` is used to properly split the model in different groups for gradual unfreezing and differential learning rates. Gradient clipping of `clip` is optionally applied. `adjust`, `alpha` and `beta` are all passed to create an instance of [`RNNTrainer`](/callbacks.rnn.html#RNNTrainer). Can be used for a language model or an RNN classifier. It also handles the conversion of weights from a pretrained model as well as saving or loading the encoder.
```
show_doc(RNNLearner.get_preds)
```
If `ordered=True`, returns the predictions in the order of the dataset, otherwise they will be ordered by the sampler (from the longest text to the shortest). The other arguments are passed to [`Learner.get_preds`](/basic_train.html#Learner.get_preds).
### Loading and saving
```
show_doc(RNNLearner.load_encoder)
show_doc(RNNLearner.save_encoder)
show_doc(RNNLearner.load_pretrained)
```
Opens the weights in the `wgts_fname` of `self.model_dir` and the dictionary in `itos_fname` then adapts the pretrained weights to the vocabulary of the <code>data</code>. The two files should be in the models directory of the `learner.path`.
## Utility functions
```
show_doc(lm_split)
show_doc(rnn_classifier_split)
show_doc(convert_weights)
```
Uses the dictionary `stoi_wgts` (mapping of word to id) of the weights to map them to a new dictionary `itos_new` (mapping id to word).
## Get predictions
```
show_doc(LanguageLearner, title_level=3)
show_doc(LanguageLearner.predict)
```
If `no_unk=True` the unknown token is never picked. Words are taken randomly with the distribution of probabilities returned by the model. If `min_p` is not `None`, that value is the minimum probability to be considered in the pool of words. Lowering `temperature` will make the texts less randomized.
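The effect of `temperature` can be illustrated with a small self-contained sketch (not fastai's actual code): dividing the logits by the temperature before the softmax sharpens the distribution as the temperature goes to zero, so sampling approaches the argmax.

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    # Illustrative temperature sampling: scale logits, softmax, then draw
    rng = rng or np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

logits = [2.0, 1.0, 0.1]
# As temperature -> 0, sampling approaches argmax of the logits
print(sample_with_temperature(logits, temperature=0.01))
```

A `min_p` filter would simply zero out entries of `probs` below the threshold before renormalizing.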
## Undocumented Methods - Methods moved below this line will intentionally be hidden
```
show_doc(RNNLearner.get_preds)
show_doc(LanguageLearner.show_results)
```
## New Methods - Please document or move to the undocumented section
## KMEANS CLUSTERING
This project follows the CRISP-DM process while analyzing the data.
PROBLEM:
PREDICT THE CLUSTER OF CUSTOMERS BASED ON ANNUAL INCOME AND SPENDING SCORE TO BRING VALUABLE INSIGHTS FOR THE MALL.
## Questions :
## 1. Which cluster has both a good spending score and a good income?
## 2. On which cluster should the company concentrate to increase sales?
## 3. Which cluster has the maximum probability of moving into a high spending score?
# IMPORTING THE DATASET AND LIBRARIES
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import LabelEncoder,OneHotEncoder
# Importing the dataset
dataset = pd.read_csv(r'C:\Users\neeraj\OneDrive\Desktop\data challenge\Mall_Customers.csv')
X=dataset.iloc[:,:].values
```
## Explore the Dataset
```
dataset.head()
dataset.info()
dataset.isnull().sum()
```
## Check for categories in object variable(categorical variable)
```
dataset['Genre'].value_counts()
```
## Replace categories by label encoding
Here this method works fine as there are only 2 categories in the object variable
```
labelencoder_X=LabelEncoder()
X[:,1]= labelencoder_X.fit_transform(X[:,1])
Data=pd.DataFrame(X)
```
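As an alternative sketch, `pd.get_dummies` handles the same binary column without the manual `LabelEncoder` bookkeeping; the `Genre` values below mirror the dataset's two categories.

```python
import pandas as pd

# 'Genre' stands in for the dataset's two-category column
genre = pd.Series(["Male", "Female", "Female", "Male"], name="Genre")
dummies = pd.get_dummies(genre, drop_first=True)  # one 0/1 column suffices
print(dummies.columns.tolist())  # ['Male']
```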
## Now check for categorical values if any
```
Data.head()
```
## loading data (test and train)
```
x= dataset.iloc[:, [3,4]].values
Final=pd.DataFrame(x)
Final.head()
```
## USING ELBOW METHOD FOR OPTIMAL CLUSTERS
Here I have used a loop that takes the candidate number of clusters 'i' and plots 'i' against the WCSS (the sum of squared distances within clusters)
```
from sklearn.cluster import KMeans
wcss = []
for i in range(1, 11):
kmeans = KMeans(n_clusters = i, init = 'k-means++', random_state = 42)
kmeans.fit(x)
wcss.append(kmeans.inertia_)
plt.plot(range(1, 11), wcss)
plt.title('The Elbow Method')
plt.xlabel('Number of clusters')
plt.ylabel('WCSS')
plt.show()
```
## Training the model
```
kmeans = KMeans(n_clusters = 5, init = 'k-means++', random_state = 42)
y_kmeans = kmeans.fit_predict(x)
print(y_kmeans)
```
## LETS VISUALISE OUR RESULT
```
# Visualising the clusters
plt.scatter(x[y_kmeans == 0, 0], x[y_kmeans == 0, 1], s = 100, c = 'red', label = 'Cluster 1')
plt.scatter(x[y_kmeans == 1, 0], x[y_kmeans == 1, 1], s = 100, c = 'blue', label = 'Cluster 2')
plt.scatter(x[y_kmeans == 2, 0], x[y_kmeans == 2, 1], s = 100, c = 'green', label = 'Cluster 3')
plt.scatter(x[y_kmeans == 3, 0], x[y_kmeans == 3, 1], s = 100, c = 'cyan', label = 'Cluster 4')
plt.scatter(x[y_kmeans == 4, 0], x[y_kmeans == 4, 1], s = 100, c = 'magenta', label = 'Cluster 5')
plt.scatter(kmeans.cluster_centers_[:, 0], kmeans.cluster_centers_[:, 1], s = 300, c = 'yellow', label = 'Centroids')
plt.title('Clusters of customers')
plt.xlabel('Annual Income (k$)')
plt.ylabel('Spending Score (1-100)')
plt.legend()
plt.show()
```
## Our insights are :
## 1. Cluster 1 has an average income of 60k and an average spending score of 50.
## 2. Cluster 2 has an average income of 90k and an average spending score of 18.
## 3. Cluster 3 has an average income of 30k and an average spending score of 20.
## 4. Cluster 4 has an average income of 30k and an average spending score of 80.
## 5. Cluster 5 has an average income of 85k and an average spending score of 80.
## Deeper intution(Answering our questions)
## Customers belonging to clusters 4 and 5 have good spending scores and are valuable for our mall (they can be given special cards, discounts, etc.)
## Customers belonging to cluster 2 have high income but a low spending score; the company should concentrate on this type of customer to increase profits.
## Customers belonging to cluster 1 should be given discounts to increase their spending score
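The per-cluster averages behind these insights can be checked with a simple groupby; the toy arrays below stand in for the notebook's `x` and `y_kmeans`.

```python
import numpy as np
import pandas as pd

# Toy stand-ins for x (income, spending score) and the fitted cluster labels
x_toy = np.array([[15, 39], [16, 81], [86, 82], [87, 13]])
labels_toy = np.array([0, 1, 2, 3])
clustered = pd.DataFrame(x_toy, columns=["annual_income", "spending_score"])
clustered["cluster"] = labels_toy
means = clustered.groupby("cluster").mean()
print(means)
```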
# Starbucks Capstone Challenge
### Introduction
This data set contains simulated data that mimics customer behavior on the Starbucks rewards mobile app. Once every few days, Starbucks sends out an offer to users of the mobile app. An offer can be merely an advertisement for a drink or an actual offer such as a discount or BOGO (buy one get one free). Some users might not receive any offer during certain weeks.
Not all users receive the same offer, and that is the challenge to solve with this data set.
The task is to combine transaction, demographic and offer data to determine which demographic groups respond best to which offer type. This data set is a simplified version of the real Starbucks app because the underlying simulator only has one product whereas Starbucks actually sells dozens of products.
Every offer has a validity period before the offer expires. As an example, a BOGO offer might be valid for only 5 days. You'll see in the data set that informational offers have a validity period even though these ads are merely providing information about a product; for example, if an informational offer has 7 days of validity, you can assume the customer is feeling the influence of the offer for 7 days after receiving the advertisement.
Transactional data is given showing user purchases made on the app including the timestamp of purchase and the amount of money spent on a purchase. This transactional data also has a record for each offer that a user receives as well as a record for when a user actually views the offer. There are also records for when a user completes an offer.
This project is part of the Capstone project for Udacity's Data Science Nanodegree.
# Data Sets
The data is contained in three files:
* portfolio.json - containing offer ids and meta data about each offer (duration, type, etc.)
* profile.json - demographic data for each customer
* transcript.json - records for transactions, offers received, offers viewed, and offers completed
Here is the schema and explanation of each variable in the files:
**portfolio.json**
* id (string) - offer id
* offer_type (string) - type of offer ie BOGO, discount, informational
* difficulty (int) - minimum required spend to complete an offer
* reward (int) - reward given for completing an offer
* duration (int) - time for offer to be open, in days
* channels (list of strings)
**profile.json**
* age (int) - age of the customer
* became_member_on (int) - date when customer created an app account
* gender (str) - gender of the customer (note some entries contain 'O' for other rather than M or F)
* id (str) - customer id
* income (float) - customer's income
**transcript.json**
* event (str) - record description (ie transaction, offer received, offer viewed, etc.)
* person (str) - customer id
* time (int) - time in hours since start of test. The data begins at time t=0
* value - (dict of strings) - either an offer id or transaction amount depending on the record
# Approach:
Overall, the following big tasks are identified for this exercise:
1. EDA and cleaning data. This includes combining the portfolio, profile and transcript data.
2. Preprocess data for modeling
3. Run model and assess result
The following brainstorming questions may serve to inform the decisions in data processing for model input:
* Does the number of channels affect effectiveness of offer?
* Does the type of channels affect effectiveness of offer?
* What kind of customers are impacted by offers (non-frequent customers)
* How to identify frequent and non-frequent customers? (can we use std of time deltas between transactions?)
* How to identify transactions within and without offer effectiveness range?
* Can we use the ratio of transactions within and outside the offer effectiveness range to determine whether a customer is susceptible to offers?
```
import pandas as pd
import numpy as np
import math
import json
import matplotlib
from matplotlib import pyplot as plt
%matplotlib inline
# read in the json files
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
profile = pd.read_json('data/profile.json', orient='records', lines=True)
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
portfolio.head()
```
From the above, it seems the *channels* column needs to be processed
```
profile.head()
transcript.head()
```
From the above, it is clear that the *value* column needs some cleaning.
Specifically, the column contains:
* Offer IDs in case the event is related to an offer (either received, viewed or completed)
* Sales amount in case of transactions
We start with the exploration of the profiles of the customers
```
fig, ax1 = plt.subplots()
ax1.bar(profile.gender.value_counts().index, profile.gender.value_counts().values);
ax1.set_title("Customer distribution by gender")
ax1.set_xlabel("Gender")
plt.show()
```
A look at the distribution of gender shows that male customers hold the majority in the customer profile (somewhat surprising, given that Starbucks does not come across as a male-focused coffee brand)
```
fig, ax2 = plt.subplots(figsize=(10,12))
ax2.scatter(profile.age.values, profile.income.values);
ax2.set_title("Distribution of customer by age and income")
ax2.set_xlabel("Age")
ax2.set_ylabel("Income")
plt.show();
```
An attempt at looking at the correspondence between age and income shows that it is hard to identify groupings based on these two features.
However, it can be clearly seen that customers are separated into income brackets that correspond to ages, which is not surprising.
Also not surprising is the fact that most of the customers belong to the sub-80,000 income bracket.
The shape of the data is most likely due to it being generated based on Starbucks data rather than being the actual data.
```
age_female = profile[(profile.age != 118) & (profile.gender=="F")].age.values
age_male = profile[(profile.age != 118) & (profile.gender=="M")].age.values
fig, ax1 = plt.subplots()
ax1.hist(age_male, bins=20, label="male");
ax1.hist(age_female, bins = 20, alpha=0.5, label="female")
ax1.set_title("Distribution of customers by age");
ax1.legend();
print("Female median age: {}".format(np.median(age_female)))
print("Male median age: {}".format(np.median(age_male)))
```
**Observation**:
1. Female members seem to skew toward the middle-age group, with comparatively lower percentages in the low age brackets
2. Male members have a much higher percentage of members in the lower age brackets (defined as below 40 years old)
```
income_female = profile[profile.gender == "F"].income.dropna().values
income_male = profile[profile.gender == "M"].income.dropna().values
fig, ax = plt.subplots()
ax.hist(income_male, bins=20, label="male");
ax.hist(income_female, bins=20, alpha=0.5, label="female")
ax.set_title("Distribution of customers by income");
ax.legend();
print("Female median income: {}".format(np.median(income_female)))
print("Male median income: {}".format(np.median(income_male)))
```
**Observations:**
1. In the low to middle income groups (<=70K/year), there is a large gap between the male member counts and female member counts (which makes sense, since there are many more male members).
2. Interestingly, in the high income bracket (>70K/year), there is a slight advantage for female members.
```
fig, ax = plt.subplots()
ax.bar(transcript.event.value_counts().index, transcript.event.value_counts().values);
ax.set_title("Count of different transcript events")
```
Just a tiny peek at the transaction data reveals a big imbalance between the number of transactions and the number of offers received.
More insights can be gained after the data has been processed and combined.
## Data cleaning
The steps for cleaning data is as follows:
For **transcript.json**:
1. Change time from hours to days
2. One-hot encode where possible
3. Get the offer ids from the value columns
4. Separating time of each event in the cycle
5. Separate offer and transaction
```
transcript = pd.read_json('data/transcript.json', orient='records', lines=True)
transcript["time_in_days"] = transcript["time"]/24
def encode_one_hot(df, col):
"""
Create one-hot encoded columns
df: Target dataframe
col: target column name (must be string)
Return: Dataframe with encoded columns
"""
categories = df[col].value_counts().index
for cat in categories:
df[cat] = df[col].apply(lambda x: 1 if x==cat else 0)
return df
encode_one_hot(transcript, "event")
def clean_transcript_value(df):
# The "value" column of transcript data has value in the form of dictionaries
# Since there are similar but not identical keys, standardize them by popping the value of the old key and reassigning it
# to the target key with the function below
def clean_offer_id(dic):
try:
dic["offer_id"] = dic.pop("offer id")
return dic
except:
return dic
cleaned_values = df["value"].apply(clean_offer_id)
# Get unique list of offer type keys for spliting columns
offer_type_keys = cleaned_values.apply(lambda x: x.keys()).value_counts().index
offer_type_keys_unique = list(set([key for keys in offer_type_keys for key in keys]))
# Create new columns in dataframe from the value column
for key in offer_type_keys_unique:
df[key] = df["value"].apply(lambda x: x[key] if key in x.keys() else np.nan)
return df
transcript_cleaned = clean_transcript_value(transcript)
transcript_cleaned = transcript_cleaned.drop(["time","value"], axis=1)
transcript_cleaned = transcript_cleaned.reset_index()
# Separating timing of different actions
time_event = transcript_cleaned.pivot(index="index", columns="event", values="time_in_days")
time_event.columns = ["time_" + col.replace(" ","_") for col in time_event.columns]
transcript_cleaned = transcript_cleaned.merge(time_event, how="left", left_on="index", right_on="index")
# Separating offer data from transaction data
transcript_cleaned["key"] = transcript_cleaned["person"] + "|" + transcript_cleaned["offer_id"]
transcript_offer = transcript_cleaned[transcript_cleaned.transaction==0].drop(["amount"], axis=1)
transcript_transaction = transcript_cleaned[transcript_cleaned.transaction==1].drop(["reward"],axis=1)
```
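The pivot that splits event timings into separate columns can be seen on a tiny hypothetical frame (the column names mirror the cleaning code above but the rows are made up):

```python
import pandas as pd

# Hypothetical three-event frame mirroring the pivot step above
events = pd.DataFrame({
    "index": [0, 1, 2],
    "event": ["offer received", "offer viewed", "offer completed"],
    "time_in_days": [0.0, 1.5, 3.0],
})
wide = events.pivot(index="index", columns="event", values="time_in_days")
wide.columns = ["time_" + c.replace(" ", "_") for c in wide.columns]
print(sorted(wide.columns))
```

Each row keeps its timing only in the column matching its event; the other columns are NaN, which is why the merge back uses `how="left"`.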
For **portfolio.json**:
1. Get count of channels for each offer
2. One-hot encode channel column and offer type
3. Drop cleaned columns, keeping only clean outputs
```
portfolio = pd.read_json('data/portfolio.json', orient='records', lines=True)
def clean_portfolio(df):
df["channel_count"] = df["channels"].apply(lambda x: len(x))
channels_unique = set([channel for channels in df.channels.values for channel in channels])
for channel in channels_unique:
df[channel] = df["channels"].apply(lambda x: 1 if channel in x else 0)
encode_one_hot(df, "offer_type")
return df
portfolio_cleaned = clean_portfolio(portfolio)
portfolio_cleaned.drop(["channels", "offer_type"], axis=1, inplace=True)
portfolio_cleaned.set_index("id", inplace=True)
portfolio_cleaned.head()
```
For **customer profiles**, the following cleaning steps are done:
1. Scaling membership age (defined as the amount of time passed since customer becomes member, relative to the newest member)
2. Scaling member age
3. One-hot encoding gender
4. Scaling income
5. Dropping processed columns
Afterwards, BIRCH clustering is used to try to find customer clusters.
```
profile.gender = profile.gender.fillna("Unknown")
encode_one_hot(profile, "gender");
profile["membership_age_scaled"] = profile.became_member_on.apply(lambda x: pd.to_datetime(x, format="%Y%m%d").value / 10**9)
def scaler(df, col_name):
return (df[col_name] - min(df[col_name])) / (max(df[col_name]) - min(df[col_name]))
#scaling membership age
profile["membership_age_scaled"] = scaler(profile, "membership_age_scaled")
# Imputing age with mean age and scale age
# avg_age = np.round(np.mean(profile[profile.age != 118].age.values))
# profile.age.apply(lambda x: avg_age if x == 118 else x)
profile["age_scaled"] = scaler(profile, "age")
# Imputing income with mean income and scale income
profile["income_scaled"] = profile.income.fillna(np.mean(profile.income))
profile["income_scaled"] = scaler(profile, "income_scaled")
profile.set_index("id", inplace=True)
profile_cleaned = profile.drop(labels=["age","became_member_on","gender","income"], axis=1)
profile_cleaned.head()
```
## Testing of clustering
Since the problem statement calls for identifying effective coupons for certain customer groups, it is a nice occasion to try out some clustering algorithm.
The algorithm chosen this time is **BIRCH** (which stands for "balanced iterative reducing and clustering using hierarchies").
BIRCH is a popular unsupervised clustering algorithm for large datasets. It clusters by continually building a tree structure of clustering features, separating the data into subclusters, before performing clustering on these subclusters. BIRCH's initial output of subclusters can also be used as input for other clustering algorithm to reduce time.
While the current data set is not large, BIRCH has shown to give good result, and therefore a good candidate for this exercise.
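Before running BIRCH on the profiles, a minimal sanity-check sketch on synthetic data (assuming scikit-learn is available) shows the interface used below:

```python
import numpy as np
from sklearn.cluster import Birch

# Two well-separated synthetic blobs; BIRCH should recover them
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.2, (50, 2)),
                 rng.normal(5.0, 0.2, (50, 2))])
labels = Birch(n_clusters=2).fit_predict(pts)
print(len(set(labels)))  # 2
```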
```
#Clustering using BIRCH
from sklearn.cluster import Birch
birch = Birch()
birch.fit(profile_cleaned)
cluster_labels = birch.predict(profile_cleaned)
```
The cluster_labels variable contains the labels of the data points in profile.
Below is an attempt to create 3 clusters of data and visualize the result in terms of distribution of income
```
fig, axes = plt.subplots(1,3, figsize=(15,5))
for i in set(cluster_labels):
filt = cluster_labels == i
axes[i].hist(profile_cleaned[filt].income_scaled, bins=20)
axes[i].set_title("group_{}".format(i+1))
plt.show()
```
It can be seen above that the BIRCH algorithm gives quite accurate results: it correctly identifies the customer group with unknown income as a separate group (middle).
The single x-value of that group comes from the unknown incomes being replaced with the mean income in the input data.
```
fig, ax = plt.subplots()
plt.scatter(profile.age, profile.income, c=cluster_labels, cmap="rainbow", alpha=0.7)
ax.set_title("Colored distribution of age vs income")
```
Since we have some more info related to the customer's interaction history, let's add this info to the profile and re-run the clustering algorithm
```
#Aggregate transaction history to create feature list
history_agg = transcript.groupby("person")[["transaction", "offer received", "offer viewed", "offer completed"]].sum()
history_agg_scaled = history_agg.copy()
for col in history_agg_scaled.columns:
history_agg_scaled[col] = scaler(history_agg_scaled, col)
profile_cleaned_with_hist = profile_cleaned.merge(history_agg_scaled, how="left", left_index=True, right_index=True)
#Clustering using BIRCH
from sklearn.cluster import Birch
birch = Birch(n_clusters=4)
birch.fit(profile_cleaned_with_hist)
cluster_labels = birch.predict(profile_cleaned_with_hist)
fig, axes = plt.subplots(1,4, figsize=(15,5))
for i in set(cluster_labels):
filt = cluster_labels == i
axes[i].hist(profile_cleaned[filt & (profile_cleaned["M"]==1)].income_scaled, bins=20, alpha=0.8, label="male")
axes[i].hist(profile_cleaned[filt & (profile_cleaned["F"]==1)].income_scaled, bins=20, alpha = 0.5, label="female")
    axes[i].hist(profile_cleaned[filt & (profile_cleaned["O"]==1)].income_scaled, bins=20, alpha = 0.5, label="other")
axes[i].hist(profile_cleaned[filt & (profile_cleaned["Unknown"]==1)].income_scaled, bins=20, alpha=0.5, label="Unknown")
axes[i].legend()
plt.tight_layout()
```
**Observations:**
BIRCH clustering seems to have separated the group mostly based on gender, which is not surprising, since the other metrics do not produce any meaningful groupings.
Finally, let's manually create the groupings based on age and income, and have a look at the activities done per group
```
# Create new aggregated table of a person's offer history
person_history = transcript_offer[["person","event","offer_id"]].merge(portfolio[['email', 'mobile','social',
'web', 'discount', 'bogo', 'informational']],
how="left",
left_on="offer_id",
right_index=True)
person_history = person_history.pivot_table(index="person", columns="event", values=["email","mobile","social","web","discount","bogo","informational"], aggfunc="sum")
person_history.columns = person_history.columns.swaplevel(1,0)
person_history.sort_index(axis=1, level=0, inplace=True)
person_history = person_history.fillna(0)
# Create new aggregated profile with transaction history
profile_agg = profile.merge(person_history, how="left", left_index=True, right_index=True)
profile_agg["income_group"] = profile_agg.income.apply(lambda x: "unknown" if pd.isnull(x)
else "<40000" if x < 40000
else "40000~80000" if x <= 80000
else ">80000")
profile_agg["age_group"] = profile_agg.age.apply(lambda x: "unknown" if x == 118
else "<18" if x<18
else "18~44" if x<45
else "45~64" if x<65
else ">65")
gender_groups = list(set(profile_agg.gender.values))
age_groups = list(set(profile_agg.age_group.values))
income_groups = list(set(profile_agg.income_group.values))
groups_dict = {"gender": gender_groups,
"age_group": age_groups,
"income_group": income_groups}
offer_event_types = ["offer received", "offer viewed", "offer completed"]
labels_offer_type = ['bogo', 'discount', 'informational']
labels_channel = ['email','mobile', 'social', 'web']
def draw_grid(group_1_label, group_2_label):
group_1 = groups_dict[group_1_label]
group_2 = groups_dict[group_2_label]
fig, axes = plt.subplots(len(group_1), len(group_2), figsize=(20,20))
for i in range(len(group_1)):
for j in range(len(group_2)):
ax = axes[i][j]
val1 = group_1[i]
val2 = group_2[j]
temp = profile_agg[(profile_agg[group_1_label]==val1) & (profile_agg[group_2_label]==val2)]
val_list = []
for event in offer_event_types:
val_cols = [col for col in temp.columns if event in col and col[1] in labels_offer_type and type(col) is not str]
vals = temp[val_cols].sum().values
val_list.append(vals)
if len(val_list) <=1:
ax.bar(labels_offer_type, vals, label=event)
else:
ax.bar(labels_offer_type, vals, bottom = np.sum(val_list[:-1], axis=0), label=event)
ax.set_xticklabels(labels_offer_type, rotation=45, ha='right')
ax.set_title(val1 + " & " + val2)
ax.legend()
plt.tight_layout()
draw_grid("gender", "age_group")
```
# Data preprocessing for model
Based on the given data, the problem can be framed as a **Regression** problem.
An offer's "effectiveness" can be measured by how much in sales it generates versus how much it rewards the customer.
The level of effectiveness can therefore be reframed as a continuous value: the ratio between sales from transactions and rewards from offers.
Of course, even after discounting offers that are completed without being viewed (and are therefore "ineffective"), it is reasonable to assume that not all viewed offers get completed.
In that case, while there is no "actual reward", we can still use the "potential reward" (the reward had the customer completed the offer) as a reasonable baseline against any sales generated from the offer.
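As a minimal sketch of the ratio described above (the fallback to 0 when neither an actual nor a potential reward exists is an assumption made explicit here):

```python
def effectiveness_ratio(sales_amount, actual_reward, potential_reward):
    """Ratio of sales generated to the reward given, falling back to the
    potential reward when the offer was never completed."""
    if actual_reward != 0:
        return sales_amount / actual_reward
    if potential_reward != 0:
        return sales_amount / potential_reward
    return 0.0  # no reward baseline available

# e.g. $30 of sales against a $5 actual reward
effectiveness_ratio(30, 5, 10)  # -> 6.0
```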
### Definition of "effective" offers:
* Offers that are viewed and completed
* Offers that are viewed, and not completed, but still generate transactions
The following actions need to be done:
1. Determine which transactions happen within an offer cycle (either between when an offer is viewed and completed, or within the validity period of the offer) and which happen outside one
2. Assign offer IDs to transactions that can be associated with an offer
3. Calculate the "effectiveness" ratio
No. 1 can be achieved by separating the timing of the different actions in an offer cycle.
Afterwards, if a transaction's timing falls within a given slot, we can assign that transaction the corresponding offer id.
In order to identify offer cycles, some assumptions need to be made:
1. A customer always has to receive an offer before completing it.
2. Offer activities always happen in the following order per offer cycle: offer received -> offer viewed -> offer completed
```
# Identify offer cycles
transcript_offer = transcript_offer.sort_values(by=["key","time_in_days"])
#create special id for each offer completion cycle
cycle_ids = []
initial_id = 0
for i in range(transcript_offer.shape[0]):
    # A new "offer received" event starts a new cycle
    if transcript_offer["offer received"].iloc[i] == 1:
        initial_id += 1
    cycle_ids.append(initial_id)
transcript_offer["cycle_id"] = cycle_ids
#Make a different table to calculate time elapsed between each stage of the offer cycle
time_elapse_table = transcript_offer.groupby(["cycle_id","key"])[["offer received", "offer viewed", "offer completed", "time_offer_received", "time_offer_viewed", "time_offer_completed", "reward"]].sum()
time_elapse_table["time_from_received_to_viewed"] = ((time_elapse_table["time_offer_viewed"] - time_elapse_table["time_offer_received"])
*time_elapse_table["offer viewed"]*time_elapse_table["offer received"])
time_elapse_table["time_from_viewed_to_completed"] = ((time_elapse_table["time_offer_completed"] - time_elapse_table["time_offer_viewed"]).apply(lambda x: max(x, 0))
                                                     *time_elapse_table["offer completed"]*time_elapse_table["offer viewed"])
time_elapse_table = time_elapse_table.reset_index()
time_elapse_table["offer_id"] = time_elapse_table["key"].apply(lambda x: x.split("|")[1])
time_elapse_table["customer_id"] = time_elapse_table["key"].apply(lambda x: x.split("|")[0])
time_elapse_table = time_elapse_table.merge(portfolio_cleaned, how="left", left_on="offer_id", right_index=True)
time_elapse_table["time_offer_expires"] = time_elapse_table["time_offer_received"] + time_elapse_table["duration"]
# time_elapse_table["reward"] = time_elapse_table["reward"].fillna(0)
time_elapse_table.rename(columns={"reward_x":"actual_reward", "reward_y":"potential_reward"}, inplace=True)
time_elapse_table.head()
#map transaction to corresponding customer/offer key if available
import os
import pickle
from tqdm import tqdm_notebook as tqdm
file_name = "key_list.pkl"
if file_name in os.listdir():
file = open(file_name, "rb")
corresponding_keys = pickle.load(file)
file.close()
else:
corresponding_keys = []
    for i in tqdm(range(transcript_transaction.shape[0])):
        time_transaction = transcript_transaction["time_transaction"].values[i]
        temp = time_elapse_table[time_elapse_table["customer_id"] == transcript_transaction["person"].values[i]]
        key = np.nan  # default when no matching offer cycle exists for this transaction
        for j in range(temp.shape[0]):
            if (time_transaction >= temp["time_offer_viewed"].values[j]
                and temp["offer viewed"].values[j] != 0
                and (time_transaction <= temp["time_offer_completed"].values[j]
                     or time_transaction <= temp["time_offer_expires"].values[j]
                     )
                ):
                key = temp["key"].values[j]
                break
        corresponding_keys.append(key)
    with open(file_name, "wb") as file:
        pickle.dump(corresponding_keys, file)
# Aggregate sales count and amount based on customer-offer key
transcript_transaction["link_offer_cust_key"] = corresponding_keys
transcript_transaction["sales_count"] = 1
influenced_sales = transcript_transaction.groupby("link_offer_cust_key")[["sales_count","amount"]].sum()
```
The input for the model will be a combination of offer event history, characteristics of customers, and characteristics of offers.
To create this, aggregation of the transcript data is needed.
```
#create dictionary to aggregate
def agg_dict_creator(nested_col_list, agg_types):
"""
Create a dictionary for groupby aggregation
Input:
nested_col_list: list of cols to apply aggregation. Works for single agg type for multiple columns,
in which case please put columns with same agg type in inner list
agg_types: list of aggregation methods to use. Should match valid input for groupby.agg() function.
Also length agg_types must equal length of col list
"""
agg_dict = {}
for cols, agg in zip(nested_col_list, agg_types):
if type(cols) is list:
for col in cols:
agg_dict[col] = agg
else:
agg_dict[cols] = agg
return agg_dict
cols_to_sum = ["offer received", "offer viewed", "offer completed", "actual_reward", "potential_reward"]
unneeded_cols = ["cycle_id", "key", 'time_offer_received', 'time_offer_viewed', 'time_offer_completed','time_offer_expires', "offer_id", "customer_id"]
cols_to_avg = [col for col in time_elapse_table.columns if col not in cols_to_sum and col not in unneeded_cols]
agg_dict = agg_dict_creator([cols_to_sum, cols_to_avg], ["sum", "mean"])
transaction_agg = time_elapse_table.groupby("key").agg(agg_dict)
transaction_agg = transaction_agg.merge(influenced_sales, how="left", left_index = True, right_index=True)
transaction_agg["sales_count"] = transaction_agg["sales_count"].fillna(0)
transaction_agg["amount"] = transaction_agg["amount"].fillna(0)
transaction_agg.reset_index(inplace=True)
transaction_agg["customer_id"] = transaction_agg["key"].apply(lambda x: x.split("|")[0])
transaction_agg = transaction_agg.merge(profile_cleaned, how="left", left_on="customer_id", right_index=True)
transaction_agg = transaction_agg.drop("customer_id", axis=1).set_index("key")
# Measure effectiveness as the ratio between sales amount vs customer rewards
effectiveness = []
for i in range(transaction_agg.shape[0]):
sales_amt = transaction_agg["amount"].values[i]
actual_reward = transaction_agg["actual_reward"].values[i]
potential_reward = transaction_agg["potential_reward"].values[i]
if actual_reward != 0:
ratio = sales_amt/actual_reward
elif potential_reward != 0:
ratio = sales_amt/potential_reward
else:
ratio = 0
effectiveness.append(ratio)
# Create ceiling for effectiveness:
effectiveness_ceiled = [val if val <=1 else 1 for val in effectiveness]
# Alternatively assign binary labels to effectiveness
effectiveness_binary = [1 if x >= 1 else 0 for x in effectiveness]
scaled_cols = []
for col in transaction_agg.columns:
if max(transaction_agg[col])>1:
transaction_agg[col + "_scaled"] = scaler(transaction_agg, col)
scaled_cols.append(col + "_scaled")
else:
scaled_cols.append(col)
scaled_cols
model_input = transaction_agg[scaled_cols].values
model_input.shape
transaction_agg_reduced = transaction_agg.drop(["amount", "actual_reward", "potential_reward"],axis=1)
model_input_reduced = transaction_agg_reduced[scaled_cols].values
```
# Running Model
## Model
The model chosen for this exercise is a Random Forest Regressor.
Random Forest is a popular ensemble model.
While it is commonly used for classification tasks, it can also be used for regression tasks via scikit-learn's `RandomForestRegressor`.
## Metrics
The following metrics are used to evaluate the regression model:
* RMSE (Root Mean Squared Error)
* R-squared score
Together, these two metrics give an overall picture of both the absolute error and the relative fit of the model to the data.
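Both can be computed directly with scikit-learn's helpers; a minimal sketch on dummy values (the arrays here are illustrative only):

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([0.0, 0.5, 1.0, 1.0])
y_pred = np.array([0.1, 0.4, 0.9, 1.0])

# RMSE: square root of the mean squared error
rmse = np.sqrt(mean_squared_error(y_true, y_pred))
# R-squared: fraction of target variance explained by the predictions
r2 = r2_score(y_true, y_pred)
```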
```
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split, cross_validate
from sklearn.metrics import r2_score
import numpy as np
np.random.seed(42)  # note: Python's random.seed does not affect scikit-learn; seed NumPy or pass random_state
rfr = RandomForestRegressor(random_state=42)
# K-Fold evaluation using cross_validate
result = cross_validate(rfr, model_input_reduced, effectiveness_ceiled, cv=5, scoring=["neg_mean_squared_error","r2"])
result_df = pd.DataFrame(result)
result_df
print("Max effectiveness: {}\nMin effectiveness: {}".format(max(effectiveness), min(effectiveness)))
```
As can be seen from the above, the model achieves good results in determining the "effectiveness" score of a given customer-offer pair, i.e. whether an offer is suitable for a customer.
Since we use multiple features of both the customer and the offer, this model should still work for new customers or new offers (though a reduced fit is expected).
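Scoring a new customer-offer pair then amounts to building the same feature vector and calling `predict`. A hedged sketch, with synthetic features standing in for the real scaled columns (the feature count and target used here are illustrative assumptions, not the notebook's actual data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
X = rng.random((200, 6))  # stand-in for the scaled customer/offer feature columns
# stand-in effectiveness target, clipped to [0, 1] like the ceiled ratio
y = np.clip(X[:, 0] + 0.1 * rng.standard_normal(200), 0, 1)

rfr = RandomForestRegressor(random_state=42).fit(X, y)

new_pair = rng.random((1, 6))  # feature vector for an unseen customer/offer pair
predicted_effectiveness = rfr.predict(new_pair)[0]
```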
# Hyperparameter tuning
To tune hyperparameters, we use RandomizedSearchCV.
Code and explanation from this [article](https://towardsdatascience.com/hyperparameter-tuning-the-random-forest-in-python-using-scikit-learn-28d2aa77dd74) are referenced.
Since we are using a Random Forest Regressor, the following hyperparameters can be targeted for tuning:
* max_depth: maximum tree depth
* max_features: maximum number of features to consider when splitting nodes
* n_estimators: number of trees
* min_samples_leaf: minimum number of data points in a leaf node
```
# Hyperparameter tuning
# Use RandomizedSearchCV to sample candidate settings instead of an exhaustive grid
from sklearn.model_selection import RandomizedSearchCV, train_test_split
X_train, X_test, y_train, y_test = train_test_split(model_input_reduced, effectiveness_ceiled, test_size=0.2)
depths = [int(i) for i in np.linspace(10, 100, num=10)]
features = ["sqrt", "log2"]  # "auto" was removed from max_features in newer scikit-learn releases
trees = [int(i) for i in np.linspace(200,1000,num=10)]
leaves = [1,2,4]
search_grid = {"max_depth": depths,
"max_features": features,
"n_estimators": trees,
"min_samples_leaf": leaves}
rfr = RandomForestRegressor()
rfr_random_search = RandomizedSearchCV(estimator=rfr, param_distributions=search_grid,
n_iter=50, cv=3, verbose = 1, random_state=42, n_jobs = -1)
base_model = RandomForestRegressor(random_state=42)
def evaluate(y_true, y_pred):
errors_squared = np.square(y_true - y_pred)
rmse = np.sqrt(np.mean(errors_squared))
r2 = r2_score(y_true, y_pred)
print("RMSE :{:0.4f}".format(rmse))
print("R-squared: {:0.2f}".format(r2))
return
rfr_random_search.fit(X_train, y_train)
base_model.fit(X_train, y_train)
y_pred_random = rfr_random_search.predict(X_test)
y_pred_base = base_model.predict(X_test)
print("Random Model Performance:")
evaluate(y_test, y_pred_random)
print("Base Model Performance:")
evaluate(y_test, y_pred_base)
print("Random best params")
print(rfr_random_search.best_params_)
```
| github_jupyter |
# 📝 Exercise M1.03
The goal of this exercise is to compare the performance of our classifier in
the previous notebook (roughly 81% accuracy with `LogisticRegression`) to
some simple baseline classifiers. The simplest baseline classifier is one
that always predicts the same class, irrespective of the input data.
- What would be the score of a model that always predicts `' >50K'`?
- What would be the score of a model that always predicts `' <=50K'`?
- Is 81% or 82% accuracy a good score for this problem?
Use a `DummyClassifier` and do a train-test split to evaluate
its accuracy on the test set. This
[link](https://scikit-learn.org/stable/modules/model_evaluation.html#dummy-estimators)
shows a few examples of how to evaluate the generalization performance of these
baseline models.
```
import pandas as pd
adult_census = pd.read_csv('../datasets/adult-census.csv')
```
We will first split our dataset to have the target separated from the data
used to train our predictive model.
```
target_name = "class"
target = adult_census[target_name]
data = adult_census.drop(columns=[target_name, ])
```
We start by selecting only the numerical columns as seen in the previous
notebook.
```
numerical_columns = [
"age", "capital-gain", "capital-loss", "hours-per-week"]
data_numeric = data[numerical_columns]
```
Split the data and target into a train and test set.
```
from sklearn.model_selection import train_test_split
data_numeric_train, data_numeric_test, target_train, target_test = train_test_split(
data_numeric, target, random_state=42
)
```
Use a `DummyClassifier` such that the resulting classifier will always
predict the class `' >50K'`. What is the accuracy score on the test set?
Repeat the experiment by always predicting the class `' <=50K'`.
Hint: you can set the `strategy` parameter of the `DummyClassifier` to
achieve the desired behavior.
```
from sklearn.dummy import DummyClassifier
class_to_predict = ' >50K'
high_revenue_clf = DummyClassifier(
strategy='constant',
constant=class_to_predict
)
high_revenue_clf.fit(data_numeric_train, target_train)
score = high_revenue_clf.score(data_numeric_test, target_test)
print(f"Accuracy of a model predicting only high revenue: {score:.3f}")
from sklearn.dummy import DummyClassifier
class_to_predict = ' <=50K'
low_revenue_clf = DummyClassifier(
strategy='constant',
constant=class_to_predict
)
low_revenue_clf.fit(data_numeric_train, target_train)
score = low_revenue_clf.score(data_numeric_test, target_test)
print(f"Accuracy of a model predicting only low revenue: {score:.3f}")
adult_census["class"].value_counts()
(target == " <=50K").mean()
most_frequent_clf = DummyClassifier(strategy='most_frequent')
most_frequent_clf.fit(data_numeric_train, target_train)
score = most_frequent_clf.score(data_numeric_test, target_test)
print(f"Accuracy of a model predicting the most frequent class: {score:.3f}")
```
| github_jupyter |
# HyperParameter Tuning
### `keras.wrappers.scikit_learn`
Example adapted from: [mnist_sklearn_wrapper.py](https://github.com/fchollet/keras/blob/master/examples/mnist_sklearn_wrapper.py)
## Problem:
Build simple CNN models on MNIST and use sklearn's GridSearchCV to find the best model.
```
import numpy as np
np.random.seed(1337) # for reproducibility
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.utils import np_utils
from keras.wrappers.scikit_learn import KerasClassifier
from keras import backend as K
from sklearn.model_selection import GridSearchCV
```
# Data Preparation
```
nb_classes = 10
# input image dimensions
img_rows, img_cols = 28, 28
# load training data and do basic data normalization
(X_train, y_train), (X_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
# convert class vectors to binary class matrices
y_train = np_utils.to_categorical(y_train, nb_classes)
y_test = np_utils.to_categorical(y_test, nb_classes)
```
## Build Model
```
def make_model(dense_layer_sizes, filters, kernel_size, pool_size):
    '''Creates a model comprised of 2 convolutional layers followed by dense layers
    dense_layer_sizes: List of layer sizes. This list has one number for each dense layer
    filters: Number of convolutional filters in each convolutional layer
    kernel_size: Convolutional kernel size
    pool_size: Size of pooling area for max pooling
    '''
model = Sequential()
model.add(Conv2D(filters, (kernel_size, kernel_size),
padding='valid', input_shape=input_shape))
model.add(Activation('relu'))
model.add(Conv2D(filters, (kernel_size, kernel_size)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(pool_size, pool_size)))
model.add(Dropout(0.25))
model.add(Flatten())
for layer_size in dense_layer_sizes:
model.add(Dense(layer_size))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy',
optimizer='adadelta',
metrics=['accuracy'])
return model
dense_size_candidates = [[32], [64], [32, 32], [64, 64]]
my_classifier = KerasClassifier(make_model, batch_size=32)
```
## GridSearch HyperParameters
```
validator = GridSearchCV(my_classifier,
param_grid={'dense_layer_sizes': dense_size_candidates,
                                    # 'epochs' is available for tuning even when not
                                    # an argument to the model-building function
'epochs': [3, 6],
'filters': [8],
'kernel_size': [3],
'pool_size': [2]},
scoring='neg_log_loss',
n_jobs=1)
validator.fit(X_train, y_train)
print('The parameters of the best model are: ')
print(validator.best_params_)
# validator.best_estimator_ returns sklearn-wrapped version of best model.
# validator.best_estimator_.model returns the (unwrapped) keras model
best_model = validator.best_estimator_.model
metric_names = best_model.metrics_names
metric_values = best_model.evaluate(X_test, y_test)
for metric, value in zip(metric_names, metric_values):
print(metric, ': ', value)
```
---
# There's more:
The `GridSearchCV` model in scikit-learn performs a complete search, considering **all** the possible combinations of the hyper-parameters we want to optimise.
If we want an optimised, bounded search of the hyper-parameter space, I strongly suggest taking a look at:
* `Keras + hyperopt == hyperas`: [http://maxpumperla.github.io/hyperas/](http://maxpumperla.github.io/hyperas/)
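Short of pulling in hyperopt, scikit-learn's own `RandomizedSearchCV` already gives a bounded search by sampling a fixed number of configurations rather than enumerating all of them. A sketch on a plain sklearn estimator (the estimator and grid here are illustrative; swapping in the `KerasClassifier` above works the same way):

```python
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, random_state=0)

search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_distributions={"max_depth": [2, 4, 8, None],
                         "min_samples_leaf": [1, 2, 4]},
    n_iter=5,  # only 5 of the 12 possible combinations are tried
    cv=3,
    random_state=0)
search.fit(X, y)
search.best_params_
```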
| github_jupyter |
### Keras implementation of Brain CNN
```
import tensorflow as tf
import numpy as np
import sklearn.metrics
from keras.utils.np_utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Activation, Flatten
from keras.layers.convolutional import Convolution2D, Conv2D
from keras.layers.pooling import MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.optimizers import SGD
from keras.optimizers import Adam
from keras.regularizers import l2
from keras.callbacks import ModelCheckpoint
from keras.callbacks import Callback
from keras.callbacks import EarlyStopping
IMG_WIDTH = 64
IMG_HEIGHT = 64
SNAPS = 8
SLICE = 5
CLASSES = 2
CHANNELS = 1
conv1_filter = 4
conv2_filter = 8
conv3_filter = 16
conv4_filter = 32
conv5_filter = 4
conv6_filter = 4
experiment = '1snap3d'
path = '/work/aaung/datasets/' + experiment + '/'
_04847_img = np.load(path + '4847_' + experiment + '-image.npy')
_04799_img = np.load(path + '4799_' + experiment + '-image.npy')
_04820_img = np.load(path + '4820_' + experiment + '-image.npy')
_05675_img = np.load(path + '5675_' + experiment + '-image.npy')
_05680_img = np.load(path + '5680_' + experiment + '-image.npy')
_05710_img = np.load(path + '5710_' + experiment + '-image.npy')
_04847_lbl = np.load(path + '4847_' + experiment + '-label-onehot.npy')
_04799_lbl = np.load(path + '4799_' + experiment + '-label-onehot.npy')
_04820_lbl = np.load(path + '4820_' + experiment + '-label-onehot.npy')
_05675_lbl = np.load(path + '5675_' + experiment + '-label-onehot.npy')
_05680_lbl = np.load(path + '5680_' + experiment + '-label-onehot.npy')
_05710_lbl = np.load(path + '5710_' + experiment + '-label-onehot.npy')
```
### Leave one example out validation
```
n = 24  # examples per subject held out for validation
train_img = np.vstack((_04847_img[n:,], _04799_img[n:,], _04820_img[n:,], _05675_img[n:,], _05680_img[n:,], _05710_img[n:,]))
train_lbl = np.vstack((_04847_lbl[n:,], _04799_lbl[n:,], _04820_lbl[n:,], _05675_lbl[n:,], _05680_lbl[n:,], _05710_lbl[n:,]))
val_img = np.vstack((_04847_img[:n,], _04799_img[:n,], _04820_img[:n,], _05675_img[:n,], _05680_img[:n,], _05710_img[:n,]))
val_lbl = np.vstack((_04847_lbl[:n,], _04799_lbl[:n,], _04820_lbl[:n,], _05675_lbl[:n,], _05680_lbl[:n,], _05710_lbl[:n,]))
# Cross Subject
# train_img = np.vstack((_05710_img, _04847_img, _04799_img, _05675_img, _05680_img))
# train_lbl = np.vstack((_05710_lbl, _04847_lbl, _04799_lbl, _05675_lbl, _05680_lbl))
# val_img = _04820_img
# val_lbl = _04820_lbl
STRIP_HEIGHT = train_img.shape[2]
STRIP_WIDTH = train_img.shape[3]
print(train_img.shape)
print(val_img.shape)
print(train_lbl.shape)
print(val_lbl.shape)
np.random.seed(0)
# shuffle = np.random.permutation(database.shape[0])
# test = database[shuffle[0:100],:]
# val = database[shuffle[100:200],:]
# train = database[shuffle[200:],:]
xtrain = train_img[:,SLICE,:,:]
xtrain = np.reshape(xtrain, (xtrain.shape[0], xtrain.shape[1], xtrain.shape[2], 1))
ytrain = train_lbl
xval = val_img[:,SLICE,:,:]
xval = np.reshape(xval, (xval.shape[0], xval.shape[1], xval.shape[2], 1))
yval = val_lbl
print(xtrain.shape)
print(ytrain.shape)
print(xval.shape)
print(yval.shape)
%matplotlib inline
import matplotlib.pyplot as plt
for i in range(5):
plt.show(plt.imshow(xtrain[i,:,:,0]))
class EvaluateValidation(Callback):
def __init__(self, test_data):
self.test_data = test_data
def on_epoch_end(self, epoch, logs={}):
x, y = self.test_data
loss, acc = self.model.evaluate(x, y, verbose=0)
print('\nValidation loss: {}, acc: {}\n'.format(loss, acc))
### Model ###
model = Sequential()
mde = 0
k_init = 'he_normal'
ridge = 0.0005
model.add(Convolution2D(conv1_filter, kernel_size=(3, 3), strides=(1, 1),
padding='same', data_format="channels_last", activation=None, use_bias=True,
kernel_regularizer=l2(ridge),
kernel_initializer=k_init, bias_initializer='zeros', input_shape=(STRIP_HEIGHT, STRIP_WIDTH, CHANNELS)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Convolution2D(conv2_filter, kernel_size=(5, 5), strides=(1, 1),
padding='same', data_format="channels_last", activation=None, use_bias=True,
kernel_regularizer=l2(ridge),
kernel_initializer=k_init, bias_initializer='zeros'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2), strides=(2, 2), padding='valid'))
model.add(Convolution2D(conv3_filter, kernel_size=(7, 7), strides=(1, 1),
padding='same', data_format="channels_last", activation=None, use_bias=True,
kernel_regularizer=l2(ridge),
kernel_initializer=k_init, bias_initializer='zeros'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Convolution2D(conv4_filter, kernel_size=(9, 9), strides=(1, 1),
padding='same', data_format="channels_last", activation=None, use_bias=True,
kernel_regularizer=l2(ridge),
kernel_initializer=k_init, bias_initializer='zeros'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size = (2, 2), strides=2, padding='valid'))
model.add(Flatten())
model.add(Dense(1024, kernel_initializer=k_init,kernel_regularizer=l2(ridge)))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(512, kernel_initializer=k_init,kernel_regularizer=l2(ridge)))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(256, kernel_initializer=k_init,kernel_regularizer=l2(ridge)))
# model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dense(CLASSES, kernel_initializer=k_init,kernel_regularizer=l2(ridge)))
# model.add(BatchNormalization())
model.add(Activation('softmax'))
Lr = 1e-4
dcy = 1e-5
m = 0.5
batch_sz = 25
epoch = 25
# sgd = SGD(lr=Lr, momentum=m, decay=dcy, nesterov=True)
adam = Adam(lr=Lr, decay=dcy)
model.compile(optimizer = adam, loss = 'categorical_crossentropy', metrics = ['accuracy'])
model.summary()
print('learning rate: %f, decay: %f' %(Lr, dcy))
from keras.backend import get_session
get_session().run(tf.global_variables_initializer())
a = model.fit(xtrain, ytrain, batch_size = batch_sz, epochs= epoch, verbose = 1,
callbacks=[EvaluateValidation((xval, yval))])
loss_and_metrics = model.evaluate(xval, yval, batch_size=batch_sz)
print("Loss and accuracy: ", loss_and_metrics)
y_pred_one_hot = model.predict(xval, batch_size=128)
y_pred = np.argmax(y_pred_one_hot, axis=1)
y_true = np.argmax(yval, axis=1)
print("Test loss: {}".format(loss_and_metrics[0]))
print("Test Acc: {} %".format(loss_and_metrics[1] * 100))
print("Precision", sklearn.metrics.precision_score(y_true, y_pred))
print("Recall", sklearn.metrics.recall_score(y_true, y_pred))
print("f1_score", sklearn.metrics.f1_score(y_true, y_pred))
print("confusion_matrix")
print(sklearn.metrics.confusion_matrix(y_true, y_pred, labels=[0, 1]))
fpr, tpr, thresholds = sklearn.metrics.roc_curve(y_true, y_pred)
ras = sklearn.metrics.auc(fpr, tpr)
roauc_score = sklearn.metrics.roc_auc_score(y_true, y_pred)
print(ras)
print(roauc_score)
print("{}".format(loss_and_metrics[0]))
print("{}".format(loss_and_metrics[1] * 100))
print(sklearn.metrics.precision_score(y_true, y_pred))
print(sklearn.metrics.recall_score(y_true, y_pred))
print(sklearn.metrics.f1_score(y_true, y_pred))
print(sklearn.metrics.confusion_matrix(y_true, y_pred, labels=[0, 1]))
# Compute ROC curve and ROC area for each class
n_classes = 2
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = sklearn.metrics.roc_curve(yval[:, i], y_pred_one_hot[:, i])
roc_auc[i] = sklearn.metrics.auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = sklearn.metrics.roc_curve(yval.ravel(), y_pred_one_hot.ravel())
roc_auc["micro"] = sklearn.metrics.auc(fpr["micro"], tpr["micro"])
roauc_score = sklearn.metrics.roc_auc_score(y_true, y_pred)
print(roauc_score)
plt.figure()
lw = 2
plt.plot(fpr["micro"], tpr["micro"], color='darkorange',
lw=lw, label='ROC curve (area = %0.2f)' % roc_auc["micro"])
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
# plt.title('ROC classifying Faces of class {} (Pure class {} vs Mixed class {})'.format(CLASS, CLASS, CLASS))
plt.legend(loc="lower right")
plt.show()
print(tpr["micro"])
print(fpr["micro"])
```
| github_jupyter |
```
%matplotlib inline
from taxi_pakage import *
taxi = pd.read_csv("edited_train.csv")
# Create list of extreme-weather dates
weather_event = ['20160110', '20160113', '20160117', '20160123', '20160205', '20160208', '20160215', '20160216',
'20160224', '20160225', '20160314', '20160315', '20160328', '20160329', '20160403', '20160404',
'20160530', '20160628']
weather_event = pd.Series(pd.to_datetime(weather_event, format = '%Y%m%d')).dt.date
weather_event = weather_event.astype('<U32')
weather_event = list(weather_event)
taxi["y-m-d"] = pd.to_datetime(taxi["pickup_datetime"]).apply(lambda x: x.strftime("%Y-%m-%d"))
taxi["extreme_weather"] = taxi["y-m-d"].apply(lambda x: 1 if x in weather_event else 0)
taxi["weather_event"] = taxi["extreme_weather"] # weather (1: extreme weather event, 0: no event)
taxi.drop(['y-m-d', 'extreme_weather'], axis=1, inplace=True)
taxi['sqrt_log_dist'] = taxi['dist'].apply(lambda x: np.sqrt(np.log1p(x)))
taxi['log_duration'] = taxi['trip_duration'].apply(lambda x: np.log1p(x))
taxi['velo'] = taxi['dist']/taxi['trip_duration']*3600 # speed (distance per hour)
taxi['no_passenger'] = taxi['passenger_count'].apply(lambda x: 1 if x == 0 else 0)
# Remove outliers
taxi = taxi[taxi['trip_duration'] < 1500000].reset_index(drop=True)
taxi = taxi[taxi['velo']<100]
filtered = taxi[taxi['velo']>2]
# origin data model
model = sm.OLS.from_formula("log_duration ~ scale(hour) +scale(hour**2) +scale(hour**3) + scale(hour**4) +scale(hour**5) +scale(hour**6)+scale(hour**7) + scale(hour**8) + scale(hour**9)", data = taxi)
result2 = model.fit_regularized(alpha=0.01, L1_wt=1)
print(result2.params)
score, result_set = cross_validater("log_duration ~ \
scale(sqrt_log_dist)*C(vendor_id)\
+ scale(sqrt_log_dist)*C(work)\
+ scale(sqrt_log_dist)*scale(weather_event)\
+ scale(weekday)+ scale(weekday**2)\
+ scale(hour)+ scale(hour**6)+ scale(hour**7)\
+ scale(month)+ scale(month**2)\
+ scale(pickup_latitude) + scale(dropoff_latitude)+ scale(pickup_longitude) \
+ 0", taxi, 5, r_seed=3, target_log=True)
result_set
score
```
---
```
t2 = taxi.loc[:1000000]
# regularize
model = sm.OLS.from_formula("log_duration ~ \
scale(sqrt_log_dist)*C(vendor_id)\
+ scale(sqrt_log_dist)*scale(work)\
+ scale(sqrt_log_dist)*scale(month)\
+ C(hour)\
+ C(store_and_fwd_flag)\
+ scale(weather_event)\
+ scale(weekday)+ scale(weekday**2)\
+ scale(pickup_latitude) + scale(dropoff_latitude)+ scale(pickup_longitude) \
+ 0", data = taxi)
result2 = model.fit_regularized(alpha=0.001, L1_wt=1)
print(result2.params)
model1 = sm.OLS.from_formula("log_duration ~ \
scale(sqrt_log_dist)*C(vendor_id)\
+ scale(sqrt_log_dist)*scale(work)\
+ scale(sqrt_log_dist)*scale(month)\
+ C(hour)\
+ C(store_and_fwd_flag)\
+ scale(weather_event)\
+ scale(weekday)+ scale(weekday**2)\
+ scale(dropoff_latitude)+ scale(dropoff_longitude) \
+ 0", data = taxi)
result = model1.fit()
result.summary()
test = pd.read_csv("edited_test.csv")
test['sqrt_log_dist'] = test['dist'].apply(lambda x: np.sqrt(np.log1p(x)))
# Predict y values on the test data
y_hat = result.predict(test)
y_hat = y_hat.apply(lambda x: (round(np.exp(x))))
ans = pd.concat([test['id'], y_hat], axis=1)
ans.rename(columns={'id':'id' , 0:'trip_duration'}, inplace=True)
ans.tail()
ans[ans['trip_duration']>50000]
test.loc[324125]
ans['trip_duration'].loc[324125] = 0
ans['trip_duration'].loc[324125]
ans['trip_duration'] = ans['trip_duration'].apply(lambda x: int(x))
# Kaggle submission file
ans.to_csv('basic_model.csv', index=False)
```
0.48294 - 757/1257
0.48361 - 758/1257 (60%)
```
757/1257
```
---
## location
```
results = pd.DataFrame(columns = ["R-square", "AIC", "BIC", "Cond.No.", "Pb(Fstatics)", "Pb(omnibus)", "Pb(jb)", "Dub-Wat","Remarks"])
# origin data model
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(pickup_latitude) + scale(dropoff_latitude)+ scale(pickup_longitude)+scale(dropoff_longitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + s(location)')
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(dropoff_latitude)+ scale(pickup_longitude)+scale(dropoff_longitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + w/o p_la')
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(pickup_latitude) + scale(pickup_longitude)+scale(dropoff_longitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + w/o d_la')
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(pickup_latitude) + scale(dropoff_latitude)+scale(dropoff_longitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + w/o p_lo')
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(pickup_latitude) + scale(dropoff_latitude)+ scale(pickup_longitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + w/o d_lo')
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(pickup_latitude) + scale(pickup_longitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + pickup')
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(dropoff_latitude)+ scale(dropoff_longitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + dropoff')
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(pickup_latitude) + scale(dropoff_latitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + lati')
model1 = sm.OLS.from_formula("log_duration ~ scale(sqrt_log_dist) + scale(pickup_longitude)+scale(dropoff_longitude)", data = taxi)
result1 = model1.fit()
storage(result1, results, 'sqrt dist + logi')
results
```
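The repeated model/fit/storage blocks above differ only in which location features are kept. A hypothetical sketch (the names `base`, `features`, and `formulas` are illustrative, not from the original) of generating the "drop one feature" formulas programmatically instead of copy-pasting each block:

```python
# Illustrative: build the "drop one feature" formula variants in a loop
base = "log_duration ~ scale(sqrt_log_dist)"
features = ["pickup_latitude", "dropoff_latitude",
            "pickup_longitude", "dropoff_longitude"]

# Full model plus one variant per dropped feature
formulas = {"all": base + " + " + " + ".join(f"scale({c})" for c in features)}
for dropped in features:
    kept = [c for c in features if c != dropped]
    formulas[f"w/o {dropped}"] = base + " + " + " + ".join(f"scale({c})" for c in kept)
```

Each entry could then be fitted and recorded in a single loop, e.g. `sm.OLS.from_formula(formula, data=taxi).fit()` followed by the `storage(...)` call, keeping the labels and formulas in sync by construction.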
# .NET interactive report

Project report for the [.NET interactive repo](), based on [this doc on github](https://github.com/dotnet/interactive/tree/main/samples/notebooks/csharp/Samples).
## Setup
Importing packages and setting up the connection
```
#i "nuget:https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet5/nuget/v3/index.json"
#i "nuget:https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-tools/nuget/v3/index.json"
#r "nuget:NodaTime, 2.4.8"
#r "nuget:Octokit, 0.47.0"
#r "nuget: XPlot.Plotly.Interactive, 4.0.6"
using static Microsoft.DotNet.Interactive.Formatting.PocketViewTags;
using Microsoft.DotNet.Interactive.Formatting;
using Octokit;
using NodaTime;
using NodaTime.Extensions;
using XPlot.Plotly;
var organization = "dotnet";
var repositoryName = "interactive";
var options = new ApiOptions();
var gitHubClient = new GitHubClient(new ProductHeaderValue("notebook"));
```
[Generate a user token](https://help.github.com/en/github/authenticating-to-github/creating-a-personal-access-token-for-the-command-line) to avoid the throttling policies that the public [api](https://github.com/octokit/octokit.net/blob/master/docs/getting-started.md) applies to anonymous users.
```
var tokenAuth = new Credentials("your token");
gitHubClient.Credentials = tokenAuth;
var today = SystemClock.Instance.InUtc().GetCurrentDate();
var startOfTheMonth = today.With(DateAdjusters.StartOfMonth);
var startOfPreviousMonth = today.With(DateAdjusters.StartOfMonth) - Period.FromMonths(1);
var startOfTheYear = new LocalDate(today.Year, 1, 1).AtMidnight();
var currentYearIssuesRequest = new RepositoryIssueRequest {
State = ItemStateFilter.All,
Since = startOfTheYear.ToDateTimeUnspecified()
};
var pullRequestRequest = new PullRequestRequest {
State = ItemStateFilter.All
};
```
Perform GitHub queries
```
#!time
var branches = await gitHubClient.Repository.Branch.GetAll(organization, repositoryName);
var pullRequests = await gitHubClient.Repository.PullRequest.GetAllForRepository(organization, repositoryName, pullRequestRequest);
var forks = await gitHubClient.Repository.Forks.GetAll(organization, repositoryName);
var currentYearIssues = await gitHubClient.Issue.GetAllForRepository(organization, repositoryName, currentYearIssuesRequest);
```
Branch data
Pull request data
```
var pullRequestCreatedThisMonth = pullRequests.Where(pr => pr.CreatedAt > startOfTheMonth.ToDateTimeUnspecified());
var pullRequestClosedThisMonth = pullRequests.Where(pr => (pr.MergedAt != null && pr.MergedAt > startOfTheMonth.ToDateTimeUnspecified()));
var contributorsCount = pullRequestClosedThisMonth.GroupBy(pr => pr.User.Login);
var pullRequestLifespan = pullRequests.GroupBy(pr =>
{
var lifeSpan = (pr.ClosedAt ?? today.ToDateTimeUnspecified()) - pr.CreatedAt;
return Math.Max(0, Math.Ceiling(lifeSpan.TotalDays));
})
.Where(g => g.Key > 0)
.OrderBy(g => g.Key)
.ToDictionary(g => g.Key, g => g.Count());
```
Fork data
```
var forkCreatedThisMonth = forks.Where(fork => fork.CreatedAt >= startOfTheMonth.ToDateTimeUnspecified());
var forkCreatedPreviousMonth = forks.Where(fork => (fork.CreatedAt >= startOfPreviousMonth.ToDateTimeUnspecified()) && (fork.CreatedAt < startOfTheMonth.ToDateTimeUnspecified()));
var forkCreatedByMonth = forks.GroupBy(fork => new DateTime(fork.CreatedAt.Year, fork.CreatedAt.Month, 1));
var forkUpdateByMonth = forks.GroupBy(f => new DateTime(f.UpdatedAt.Year, f.UpdatedAt.Month, 1) ).Select(g => new {Date = g.Key, Count = g.Count()}).OrderBy(g => g.Date).ToArray();
var total = 0;
var forkCountByMonth = forkCreatedByMonth.OrderBy(g => g.Key).Select(g => new {Date = g.Key, Count = total += g.Count()}).ToArray();
```
Issues data
```
bool IsBug(Issue issue){
return issue.Labels.FirstOrDefault(l => l.Name == "bug")!= null;
}
bool TargetsArea(Issue issue){
return issue.Labels.FirstOrDefault(l => l.Name.StartsWith("Area-"))!= null;
}
string GetArea(Issue issue){
return issue.Labels.FirstOrDefault(l => l.Name.StartsWith("Area-"))?.Name;
}
var openIssues = currentYearIssues.Where(IsBug).Where(issue => issue.State == "open");
var closedIssues = currentYearIssues.Where(IsBug).Where(issue => issue.State == "closed");
var oldestIssues = openIssues.OrderBy(issue => today.ToDateTimeUnspecified() - issue.CreatedAt).Take(20);
var createdCurrentMonth = currentYearIssues.Where(IsBug).Where(issue => issue.CreatedAt >= startOfTheMonth.ToDateTimeUnspecified());
var createdPreviousMonth = currentYearIssues.Where(IsBug).Where(issue => (issue.CreatedAt >= startOfPreviousMonth.ToDateTimeUnspecified()) && (issue.CreatedAt < startOfTheMonth.ToDateTimeUnspecified()));
var openFromPreviousMonth = openIssues.Where(issue => (issue.CreatedAt > startOfPreviousMonth.ToDateTimeUnspecified()) && (issue.CreatedAt < startOfTheMonth.ToDateTimeUnspecified()));
var createdByMonth = currentYearIssues.Where(IsBug).GroupBy(issue => new DateTime(issue.CreatedAt.Year, issue.CreatedAt.Month, 1)).OrderBy(g=>g.Key).ToDictionary(g => g.Key, g => g.Count());
var closedByMonth = closedIssues.GroupBy(issue => new DateTime((int) issue.ClosedAt?.Year, (int) issue.ClosedAt?.Month, 1)).OrderBy(g=>g.Key).ToDictionary(g => g.Key, g => g.Count());
var openIssueAge = openIssues.GroupBy(issue => new DateTime(issue.CreatedAt.Year, issue.CreatedAt.Month, issue.CreatedAt.Day)).ToDictionary(g => g.Key, g => g.Max(issue =>Math.Max(0, Math.Ceiling( (today.ToDateTimeUnspecified() - issue.CreatedAt).TotalDays))));
var openByMonth = new Dictionary<DateTime, int>();
var minDate = createdByMonth.Min(g => g.Key);
var maxCreatedAtDate = createdByMonth.Max(g => g.Key);
var maxClosedAtDate = closedByMonth.Max(g => g.Key);
var maxDate = maxCreatedAtDate > maxClosedAtDate ? maxCreatedAtDate : maxClosedAtDate;
var cursor = minDate;
var runningTotal = 0;
var issuesCreatedThisMonthByArea = currentYearIssues.Where(issue => issue.CreatedAt >= startOfTheMonth.ToDateTimeUnspecified()).Where(issue => IsBug(issue) && TargetsArea(issue)).GroupBy(issue => GetArea(issue)).ToDictionary(g => g.Key, g => g.Count());
var openIssueByArea = currentYearIssues.Where(issue => issue.State == "open").Where(issue => IsBug(issue) && TargetsArea(issue)).GroupBy(issue => GetArea(issue)).ToDictionary(g => g.Key, g => g.Count());
while (cursor <= maxDate )
{
createdByMonth.TryGetValue(cursor, out var openCount);
closedByMonth.TryGetValue(cursor, out var closedCount);
runningTotal += (openCount - closedCount);
openByMonth[cursor] = runningTotal;
cursor = cursor.AddMonths(1);
}
var issueLifespan = currentYearIssues.Where(IsBug).GroupBy(issue =>
{
var lifeSpan = (issue.ClosedAt ?? today.ToDateTimeUnspecified()) - issue.CreatedAt;
return Math.Max(0, Math.Round(Math.Ceiling(lifeSpan.TotalDays),0));
})
.Where(g => g.Key > 0)
.OrderBy(g => g.Key)
.ToDictionary(g => g.Key, g => g.Count());
display(new {
    less_than_one_sprint = issueLifespan.Where(i => i.Key < 21).Select(i => i.Value).Sum(),
    less_than_two_sprints = issueLifespan.Where(i => i.Key >= 21 && i.Key < 42).Select(i => i.Value).Sum(),
    more_than_two_sprints = issueLifespan.Where(i => i.Key >= 42).Select(i => i.Value).Sum()
});
```
# Activity dashboard
```
Scattergl ScatterglDict<T1, T2>(Dictionary<T1, T2> dict, string name)
{
return new Scattergl{
name = name,
x = dict.Select(pair => pair.Key),
y = dict.Select(pair => pair.Value)
};
}
var issueChart = Chart.Plot(new[] {
ScatterglDict(createdByMonth, "Created"),
ScatterglDict(openByMonth, "Open"),
ScatterglDict(closedByMonth, "Closed")
});
issueChart.WithTitle("Bugs by month");
display(issueChart);
// Helper function for some of the below charts.
Bar BarIEnumerableKeyValuePair<T1,T2>(IEnumerable<KeyValuePair<T1,T2>> seq, string name, string color)
{
return new Bar
{
name = name,
y = seq.OrderBy(issue => issue.Key).Select(issue => issue.Value),
x = seq.OrderBy(issue => issue.Key).Select(issue => issue.Key),
marker = new Marker{ color = color }
};
}
var issueLifespanChart = Chart.Plot(new[] {
BarIEnumerableKeyValuePair(issueLifespan.Where(issue => issue.Key < 7), "One week old", "green"),
BarIEnumerableKeyValuePair(issueLifespan.Where(issue => issue.Key >= 7 && issue.Key < 21), "One Sprint old", "yellow"),
    BarIEnumerableKeyValuePair(issueLifespan.Where(issue => issue.Key >= 21), "More than a Sprint", "red")
});
issueLifespanChart.WithLayout(new Layout.Layout
{
title = "Bugs by life span",
xaxis = new Xaxis {
title = "Number of days a bug stays open",
showgrid = false,
zeroline = false
},
yaxis = new Yaxis {
showgrid = true,
zeroline = false
}
});
display(issueLifespanChart);
var openIssuesAgeChart = Chart.Plot(new[] {
BarIEnumerableKeyValuePair(openIssueAge.Where(issue => issue.Value < 7), "Closed in a week", "green"),
BarIEnumerableKeyValuePair(openIssueAge.Where(issue => issue.Value >= 7 && issue.Value < 21), "Closed within a sprint", "yellow"),
BarIEnumerableKeyValuePair(openIssueAge.Where(issue => issue.Value >= 21), "Long standing", "red")
});
openIssuesAgeChart.WithLayout(new Layout.Layout
{
title = "Open bugs age",
yaxis = new Yaxis {
title = "Number of days a bug stays open",
showgrid = true,
zeroline = false
}
});
display(openIssuesAgeChart);
var createdThisMonthAreaSeries = new Pie {
values = issuesCreatedThisMonthByArea.Select(e => e.Value),
labels = issuesCreatedThisMonthByArea.Select(e => e.Key),
};
var createdArea = Chart.Plot(new[] {createdThisMonthAreaSeries});
createdArea.WithLayout(new Layout.Layout
{
title = "Bugs created this month by Area",
});
display(createdArea);
var openAreaSeries = new Pie {
values = openIssueByArea.Select(e => e.Value),
labels = openIssueByArea.Select(e => e.Key),
};
var openArea = Chart.Plot(new[] {openAreaSeries});
openArea.WithLayout(new Layout.Layout
{
title = "Open bugs by Area",
});
display(openArea);
var prLifespanChart = Chart.Plot(new[] {
BarIEnumerableKeyValuePair(pullRequestLifespan.Where(issue => issue.Key < 7), "One week", "green"),
BarIEnumerableKeyValuePair(pullRequestLifespan.Where(issue => issue.Key >= 7 && issue.Key < 21), "One Sprint", "yellow"),
BarIEnumerableKeyValuePair(pullRequestLifespan.Where(issue => issue.Key >= 21), "More than a Sprint", "red")
});
prLifespanChart.WithLayout(new Layout.Layout
{
title = "Pull Request by life span",
xaxis = new Xaxis {
title = "Number of days a PR stays open",
showgrid = false,
zeroline = false
},
yaxis = new Yaxis {
title = "Number of PR",
showgrid = true,
zeroline = false
}
});
display(prLifespanChart);
var forkCreationSeries = new Scattergl
{
name = "created by month",
y = forkCreatedByMonth.Select(g => g.Count() ).ToArray(),
x = forkCreatedByMonth.Select(g => g.Key ).ToArray()
};
var forkTotalSeries = new Scattergl
{
name = "running total",
y = forkCountByMonth.Select(g => g.Count ).ToArray(),
x = forkCountByMonth.Select(g => g.Date ).ToArray()
};
var forkUpdateSeries = new Scattergl
{
name = "last update by month",
y = forkUpdateByMonth.Select(g => g.Count ).ToArray(),
x = forkUpdateByMonth.Select(g => g.Date ).ToArray()
};
var chart = Chart.Plot(new[] {forkCreationSeries,forkTotalSeries,forkUpdateSeries});
chart.WithTitle("Fork activity");
display(chart);
```
```
import pandas as pd
import numpy as np
from tqdm.auto import tqdm
import sys
sys.path.insert(1, '../oracle-polimi-contest-2019')
from evaluation_script import read_file
from collections import Counter
import similaripy as sim
from scipy import *
from scipy.sparse import *
import string
import unidecode
def create_name_letters_matrix(df):
df = df[['record_id','name']]
df.name = df.name.astype(str) # convert to string
df.name = df.name.str.lower() # lowercase
df.name = df.name.str.translate(str.maketrans('', '', string.punctuation)) # remove punctuation
# remove accented letters
no_accents = []
for s in df.name:
no_accents.append(unidecode.unidecode(s))
df.name = no_accents
# create return matrix
columns = ['record_id','name','a','b','c','d','e','f','g','h','i','j','k','l',
'm','n','o','p','q','r','s','t','u','v','w','x','y','z']
name_letters_matrix = pd.DataFrame(columns=columns)
name_letters_matrix.record_id = df.record_id.copy()
name_letters_matrix.name = df.name.copy()
    # count the occurrence of each letter and add the columns to the return df
for l in tqdm(['a','b','c','d','e','f','g','h','i','j','k','l','m','n',
'o','p','q','r','s','t','u','v','w','x','y','z']):
new_col = []
for (i,n) in zip(name_letters_matrix.index, name_letters_matrix.name):
new_col.append(n.count(l))
name_letters_matrix[l] = new_col
return name_letters_matrix
def get_mcn_matrix_train(train):
group = train[['name', 'linked_id']].groupby('linked_id').apply(lambda x: list(x['name']))
link_mc_name = {}
for (l, names) in tqdm(zip(group.keys(), group)):
link_mc_name[l] = Counter(names).most_common(1)[0][0]
most_common_name = pd.DataFrame.from_dict(link_mc_name, orient='index', columns=['most_common_name'])
df_train_clean = pd.merge(train, most_common_name, how='left', left_on='linked_id', right_index=True)
df_train_clean = df_train_clean.drop_duplicates(subset=['linked_id','most_common_name']).drop(['record_id', 'name'], axis=1)
df_train_clean = df_train_clean.rename(columns={"linked_id":"record_id", "most_common_name":"name"})
m_train = create_name_letters_matrix(df_train_clean)
m_train = m_train.reset_index(drop=True)
return m_train
def cosine_similarity(m_train, m_test, path='val_cosine', k=10):
m_train_csr = csr_matrix(m_train.drop(['record_id','name'], axis=1))
m_test_csr = csr_matrix(m_test.drop(['record_id','name'], axis=1))
output = sim.cosine(m_test_csr, m_train_csr.T, k=k)
save_npz(path + '.npz', output.tocsr())
return output.tocsr()
def clean_cosine_output(output, df_test, m_train):
output = output.tocsr()
r_nnz = output.nonzero()[0]
c_nnz = output.nonzero()[1]
l = []
for i in tqdm(range(len(r_nnz))):
l.append([output[r_nnz[i], c_nnz[i]],r_nnz[i],c_nnz[i]])
l.sort(key= lambda x: (x[1], -x[0]))
rec_id = [x[1] for x in l]
rec_id = [df_test.at[i,'record_id'] for i in tqdm(rec_id)]
lin_id = [x[2] for x in l]
lin_id = [m_train.at[i,'record_id'] for i in tqdm(lin_id)]
scores = [x[0] for x in l]
df = pd.DataFrame()
df['queried_record_id'] = rec_id
df['predicted_record_id'] = lin_id
df['cosine_score'] = scores
return df
# Splitting Train in Train-Validation set
train = read_file("../dataset/original/train.csv")
train = train.drop(['modification', 'type'], axis=1)
train['name'] = train['name'].str.lower()
from sklearn.model_selection import train_test_split
target = train.linked_id
X_train, X_val, y_train, y_val = train_test_split(train, target, test_size=0.33, random_state=42)
m_train = get_mcn_matrix_train(X_train)
m_train
m_test = create_name_letters_matrix(X_val)
cosine_output = cosine_similarity(m_train, m_test)
X_val = X_val.reset_index(drop=True)
# Extract top10 from cosine similarity and create xgboost skeleton dataframe: validation set becomes xgboost train
xgb_train_df = clean_cosine_output(cosine_output, X_val, m_train)
xgb_train_df
```
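The pipeline above matches names by the cosine similarity of their 26-dimensional letter-count vectors. A minimal NumPy illustration of that measure (the names and helper functions here are illustrative, not the real data):

```python
import numpy as np
from collections import Counter

def letter_counts(name):
    # 26-dim vector of a..z counts, mirroring the columns built above
    c = Counter(ch for ch in name.lower() if ch.isalpha())
    return np.array([c[chr(i)] for i in range(ord('a'), ord('z') + 1)], dtype=float)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A name scores higher against a lightly perturbed variant of itself
# than against an unrelated name
s_same = cosine(letter_counts("john smith"), letter_counts("jon smith"))
s_diff = cosine(letter_counts("john smith"), letter_counts("maria rossi"))
```

Note that letter counts ignore letter order entirely, which is what makes the representation robust to small spelling variations but also blind to anagrams.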
## The same for the real test set
```
test = read_file("../oracle-polimi-contest-2019/test_data.csv")
test = test.drop(['modification', 'type'], axis=1)
test['name'] = test['name'].str.lower()
m_train_full = get_mcn_matrix_train(train)
m_test_full = create_name_letters_matrix(test)
m_train_full.shape
m_test_full.shape
full_cosine_out = cosine_similarity(m_train_full, m_test_full, path='full_cosine_sim')
xgb_test_df = clean_cosine_output(full_cosine_out, test, m_train_full)
xgb_test_df
```
# Extract features
```
def adding_names(xgb_df, m_train, m_test):
    xgb_df = xgb_df.merge(m_train[['record_id', 'name']], left_on='predicted_record_id', right_on='record_id').drop('record_id', axis=1)
xgb_df = xgb_df.rename(columns={'name': 'predicted_record_name'})
xgb_df = xgb_df.merge(m_test[['record_id', 'name']], left_on='queried_record_id', right_on='record_id' ).rename(columns={'name':'queried_name'})
xgb_df = xgb_df.drop('record_id', axis=1)
return xgb_df
def extract_target(predicted, linked):
res = np.empty(len(predicted))
res = np.where(predicted == linked, 1, 0)
return res
def train_target(xgb_df_train, X_val):
xgb_df_train = xgb_df_train.merge(X_val[['record_id', 'linked_id']], left_on='queried_record_id', right_on='record_id')
xgb_df_train = xgb_df_train.drop('record_id', axis=1)
xgb_df_train['linked_id'] = xgb_df_train['linked_id'].astype(int)
xgb_df_train['target'] = extract_target(xgb_df_train.predicted_record_id.values, xgb_df_train.linked_id.values)
return xgb_df_train.drop('linked_id', axis=1)
import editdistance

def extract_editdistance(queried_name, predicted_name):
    res = np.empty(len(queried_name))
    for i in tqdm(range(len(queried_name))):
        res[i] = editdistance.eval(queried_name[i], predicted_name[i])
    return res
xgb_train_df = train_target(xgb_train_df, X_val)
xgb_train_df['editdistance'] = extract_editdistance(xgb_train_df.queried_name.values, xgb_train_df.predicted_record_name.values)
# TODO: finish this part: add the same features to xgb_test_df as well
import xgboost as xgb

group = xgb_train_df.groupby('queried_record_id').size().values
ranker = xgb.XGBRanker()
ranker.fit(xgb_train_df[['predicted_record_id', 'cosine_score', 'editdistance']], xgb_train_df['target'], group=group)
# Get predictions (note: xgb_test_df still needs the editdistance feature added first)
predictions = ranker.predict(xgb_test_df[['predicted_record_id', 'cosine_score', 'editdistance']])
xgb_test_df['predictions'] = predictions
df_predictions = xgb_test_df[['queried_record_id', 'predicted_record_id', 'predictions']]
```
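`XGBRanker` expects `group` to give the number of candidate rows per query, in the same row order as the training frame, and the sizes must sum to the total row count. A toy sketch of that construction (the data here is illustrative):

```python
import pandas as pd

# Toy skeleton: two queries with their candidate matches
toy = pd.DataFrame({
    "queried_record_id": ["q1", "q1", "q1", "q2", "q2"],
    "predicted_record_id": [10, 11, 12, 20, 21],
})
# Number of candidate rows per query, in sorted key order
group = toy.groupby("queried_record_id").size().values
```

One caveat worth verifying in the real pipeline: `groupby` returns sizes in sorted key order, so the fit above implicitly assumes the training rows are also grouped in that order.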
# Extract Submission
```
rec_pred = []
for (r,p) in zip(df_predictions.predicted_record_id, df_predictions.predictions):
rec_pred.append((r, p))
rec_pred
df_predictions['rec_pred'] = rec_pred
group_queried = df_predictions[['queried_record_id', 'rec_pred']].groupby('queried_record_id').apply(lambda x: list(x['rec_pred']))
df_predictions = pd.DataFrame(group_queried).reset_index().rename(columns={0 : 'rec_pred'})
def reorder_preds(preds):
sorted_list = []
for i in range(len(preds)):
l = sorted(preds[i], key=lambda t: t[1], reverse=True)
l = [x[0] for x in l]
sorted_list.append(l)
return sorted_list
df_predictions['ordered_preds'] = reorder_preds(df_predictions.rec_pred.values)
df_predictions = df_predictions[['queried_record_id', 'ordered_preds']].rename(columns={'ordered_preds': 'predicted_record_id'})
new_col = []
for t in tqdm(df_predictions.predicted_record_id):
new_col.append(' '.join([str(x) for x in t]))
new_col
# Adding missing values
missing_values = {'queried_record_id' : ['12026587-TST-MR', '13009531-TST-MR', '12091134-TST-M', '12091134-NV0-TST-CP'],
'predicted_record_id': [10111147, 10111147, 10111147, 10111147]}
missing_df = pd.DataFrame(missing_values)
missing_df
df_predictions.predicted_record_id = new_col
df_predictions = pd.concat([df_predictions, missing_df])
df_predictions.to_csv('xgb_sub2.csv', index=False)
```
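`reorder_preds` above sorts each query's `(record_id, score)` pairs by score, descending, then keeps only the ids, which are finally joined into the space-separated submission field. The same step in isolation, on toy values:

```python
# Toy (record_id, score) pairs for one query; a higher score means a better match
preds = [(101, 0.2), (202, 0.9), (303, 0.5)]

# Sort by score descending, keep only the record ids
ordered = [rec for rec, score in sorted(preds, key=lambda t: t[1], reverse=True)]

# Join into the submission's space-separated field
submission_field = " ".join(str(x) for x in ordered)
```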
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sn
import pylab as pl
data = pd.read_csv('data/cell_samples.csv')
data.head()
data = data.drop(['BareNuc'], axis = 1)
data.head()
data.info()
ax = data[data['Class'] == 4][0:100].plot(kind = 'scatter', x = 'SingEpiSize', y = 'MargAdh', color = 'k', label = 'Malignant')
data[data['Class'] == 2][0:100].plot(kind = 'scatter', x = 'SingEpiSize', y = 'MargAdh', label = 'Benign', color = 'red', ax = ax)
plt.show()
data.columns
x = data[['Clump', 'UnifSize', 'UnifShape', 'MargAdh', 'SingEpiSize',
'BlandChrom', 'NormNucl', 'Mit']]
x.head()
y = data['Class']
y.head()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.3, random_state = 4)
x_train.shape
x_test.shape
y_train.shape
y_test.shape
from sklearn.linear_model import LogisticRegression
regr = LogisticRegression(solver = 'lbfgs', C = 0.01, max_iter = 1000)
regr.fit(x_train, y_train)
y_test[0:10]
yhat = regr.predict(x_test)
yhat[0:20]
a_1 = regr.score(x_train, y_train)
a_1
a_2 = regr.score(x_test, y_test)
a_2
a_3 = regr.score(x_test, yhat)
a_3
yhat_proba = regr.predict_proba(x_test)
yhat_proba[0:5]
from sklearn.metrics import accuracy_score, jaccard_similarity_score
a_4 = accuracy_score(y_test, yhat)
a_4
a_5 = jaccard_similarity_score(y_test, yhat)
a_5
from sklearn import neighbors
knn = neighbors.KNeighborsClassifier(n_neighbors = 1, metric = 'manhattan')
knn.fit(x_train, y_train)
y_test[0:10]
y_pred = knn.predict(x_test)
y_pred[0:10]
b_1 = knn.score(x_train, y_train)
b_1
b_2 = knn.score(x_test, y_test)
b_2
b_3 = knn.score(x_test, y_pred)
b_3
b_4 = accuracy_score(y_test, y_pred)
b_4
b_5 = jaccard_similarity_score(y_test, y_pred)
b_5
from sklearn import svm
clf = svm.SVC(kernel = 'poly', gamma = 'auto', C = 0.001)
clf.fit(x_train, y_train)
y_test[0:10]
yhat_1 = clf.predict(x_test)
yhat_1[0:10]
c_1 = clf.score(x_train, y_train)
c_1
c_2 = clf.score(x_test, y_test)
c_2
c_3 = clf.score(x_test, yhat_1)
c_3
c_4 = accuracy_score(y_test, yhat_1)
c_4
c_5 = jaccard_similarity_score(y_test, yhat_1)
c_5
from sklearn import tree
clf_1 = tree.DecisionTreeClassifier()
clf_1.fit(x_train, y_train)
y_test[0:10]
ypred_1 = clf_1.predict(x_test)
ypred_1[0:10]
d_1 = clf_1.score(x_train, y_train)
d_1
d_2 = clf_1.score(x_test, y_test)
d_2
d_3 = clf_1.score(x_test, ypred_1)
d_3
d_4 = accuracy_score(y_test, ypred_1)
d_4
d_5 = jaccard_similarity_score(y_test, ypred_1)
d_5
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB
gsn = GaussianNB()
gsn.fit(x_train, y_train)
y_test
yhat_2 = gsn.predict(x_test)
yhat_2[0:5]
e_1 = gsn.score(x_train, y_train)
e_1
e_2 = gsn.score(x_test, y_test)
e_2
e_3 = gsn.score(x_test, yhat_2)
e_3
e_4 = accuracy_score(y_test, yhat_2)
e_4
e_5 = jaccard_similarity_score(y_test, yhat_2)
e_5
mul = MultinomialNB()
mul.fit(x_train, y_train)
y_test[0:5]
ypred_2 = mul.predict(x_test)
ypred_2[0:5]
f_1 = mul.score(x_train, y_train)
f_1
f_2 = mul.score(x_test, y_test)
f_2
f_3 = mul.score(x_test, ypred_2)
f_3
f_4 = np.nan
f_4
f_5 = np.nan
f_5
ber = BernoulliNB()
ber.fit(x_train, y_train)
y_test
yhat_3 = ber.predict(x_test)
yhat_3
g_1 = ber.score(x_train, y_train)
g_1
g_2 = ber.score(x_test, y_test)
g_2
g_3 = ber.score(x_test, yhat_3)
g_3
g_4 = accuracy_score(y_test, yhat_3)
g_4
g_5 = jaccard_similarity_score(y_test, yhat_3)
g_5
from sklearn.ensemble import RandomForestClassifier
cl = RandomForestClassifier(n_estimators = 1000)
cl.fit(x_train, y_train)
y_test[0:10]
ypred_3 = cl.predict(x_test)
ypred_3
h_1 = cl.score(x_train, y_train)
h_1
h_2 = cl.score(x_test, y_test)
h_2
h_3 = cl.score(x_test, ypred_3)
h_3
h_4 = accuracy_score(y_test, ypred_3)
h_4
h_5 = jaccard_similarity_score(y_test, ypred_3)
h_5
df = pd.DataFrame({'Training Score' : [a_1, b_1, c_1, d_1, e_1, f_1, g_1, h_1],
'Testing Score' : [a_2, b_2, c_2, d_2, e_2, f_2, g_2, h_2],
'Predicted Score' : [a_3, b_3, c_3, d_3, e_3, f_3, g_3, h_3],
'Accuracy Score' : [a_4, b_4, c_4, d_4, e_4, f_4, g_4, h_4],
'Jaccard Similarity Score' : [a_5, b_5, c_5, d_5, e_5, f_5, g_5, h_5]}, index = ['Logistic Regression', 'KNN', 'SVM', 'Decision Tree', 'Gaussian NB', 'Multinomial NB', 'Bernoulli NB', 'Random Forest'])
df
```
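One caveat about the comparison table above: the "Predicted Score" column is computed as `model.score(x_test, predictions)`, i.e. each model's own predictions are scored against themselves, so for a deterministic model it is 1.0 by construction and carries no information. A tiny illustration of why:

```python
import numpy as np

# Any predictions at all (values are illustrative)
yhat = np.array([2, 4, 2, 4])

# "Accuracy" of predictions measured against themselves is always 1.0
self_accuracy = float(np.mean(yhat == yhat))
```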
```
import pandas as pd
from sqlalchemy import create_engine
import random
from collections import namedtuple
from typing import List
from itertools import combinations
from functools import reduce
rds_connection_string = "root:password@127.0.0.1/test"
engine = create_engine(f'mysql://{rds_connection_string}')
engine.table_names()
pd.read_sql_query('select * from Cards', con=engine).head()
pd.read_sql_query('select * from Deck', con=engine).head()
Card = namedtuple("Card", ['suit', 'face', 'owner'])
faces = list(range(1,11))
suits = ['B', 'O', 'E', 'C']
print(Card)
print(faces)
print(suits)
def make_deck():
return { str(n) + s: Card(suit=s, face=n, owner='') for n in faces for s in suits }
CardStore = make_deck()
print(CardStore)
CardStore
def make_deck_from(card_ids :list):
return { card_id: CardStore[card_id] for card_id in card_ids }
def shuffle(deck :list):
random.shuffle(deck)
return deck
def get_escombinations(cards: set, pivot: str):
combos = set()
#combos.add(frozenset(cards | set([pivot])))
for i in range(len(cards)):
r = len(cards) + 1 - i # combinatorial order (from the 5 choose 'x' where 'x' is order)
combs = combinations(cards | set([pivot]),r)
for combo in combs:
combo_vals = [CardStore[c].face for c in combo ]
if ( pivot in combo # pivot card is the player's card - has to be part of combo
and sum(combo_vals) == 15 # only plays that add to 15 are considered, all other plays are equivalent to laying down card on table
or r > len(cards) ):
combos.add(combo)
return combos
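# Illustrative sanity check of the 15-sum rule above (toy cards, not CardStore):
# a valid scoring play must contain the pivot card and its face values must sum to 15.
from itertools import combinations as _combinations
_table = [('7O', 7), ('5B', 5), ('3E', 3)]
_pivot = ('8C', 8)
_plays = [c for r in range(1, len(_table) + 2)
          for c in _combinations(_table + [_pivot], r)
          if _pivot in c and sum(v for _, v in c) == 15]
print(_plays)  # only the 7 + 8 combination qualifies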
class Deck:
card_store = {}
deck_order = []
def __init__(self,card_store={}):
self.card_store = card_store
self.deck_order = list(card_store.keys())
random.shuffle(self.deck_order)
print("Hello deck {}".format(self.deck_order))
def shuffle(self):
return random.shuffle(self.deck_order)
def deal(self, n=1, owner=''):
d = self.deck_order[:n]
self.deck_order = self.deck_order[n:]
return set(d)
# def update_store(card,store, owner):
# store.owner = owner
# return store[card]
# return [update_store(delt,self.card_store, owner) for delt in d]
def cards(self):
return self.card_store
def order(self):
return self.deck_order
deck = Deck(CardStore)
print(deck.order())
print(deck.deal())
print(deck.cards())
cards = [c for c in deck.cards()]
print(cards)
class Player:
score = 0
hand = set()
name = ''
    def __init__(self, name, hand=None):
        self.name = name
        self.hand = set(hand) if hand else set()
def play_turn(self,deck):
return NotImplemented
def new_hand(self, cards):
self.hand = cards
def award_point(self, points=1):
self.score += points
def get_play(self, playable: list, table_cards=[]):
play = set()
if len(playable) == 0: # never happens
play.add(random.choice(list(self.hand))) # no playable because table_cards were probably empty so play random card
else:
play = playable
#for card in play:
#if card in self.hand:
#self.hand.discard(card)
if len(play) > 1 :
good_hands = [p for p in play if sum([CardStore[c].face for c in p]) == 15 ]
if len(good_hands) > 0:
return random.choice(good_hands)
return random.choice(list(play))
return play.pop()
class Game:
deck = Deck()
pl1 = Player('p1')
pl2 = Player('p2')
table_cards = set()
def __init__(self, pl1, pl2, deck=Deck()):
self.pl1 = pl1
self.pl2 = pl2
self.deck = deck
print("Start game")
#return NotImplemented
def set_card_owner(self, card, owner):
c = self.deck.cards()[card]
self.deck.cards()[card] = Card(suit=c.suit, face=c.face, owner=owner)
def reset_deck(self):
self.deck = Deck(make_deck())
def deal_hand(self):
p1 = set()
p2 = set()
# deal out to p1 and p2 alternating 3 each
for count in range(0,3):
[ p1.add(d) for d in self.deck.deal()]
[ p2.add(d) for d in self.deck.deal()]
return p1,p2
def deal_start(self):
p1,p2 = self.deal_hand()
start_table_cards = self.deck.deal(4)
return p1,p2,start_table_cards
def valid_plays(self, player, table_cards: set):
# visible_cards = [card for card_id, card in self.deck.cards.items() if (card.owner == 'table') ]
plays = set()
for card in player.hand:
if (len(table_cards) > 0):
combo_cards = set(table_cards)
escombinations = get_escombinations(combo_cards,card)
for combo in escombinations:
plays.add(combo)
else:
plays.add(tuple(player.hand))
return plays
def apply_play(self,play, player):
# validate(play)
playable = self.valid_plays(player, self.table_cards)
scored = False
t_cards = self.table_cards
#p_cards = play.pop()
p_cards = play
if p_cards in playable:
if isinstance(p_cards,str):
card_values = [self.deck.card_store[p_cards].face]
else:
card_values = [self.deck.card_store[c].face for c in p_cards ]
# assign card owners
s = sum(card_values)
if ( s == 15):
scored = True
for card in p_cards:
self.set_card_owner(card, player.name)
if card in self.table_cards:
self.table_cards.discard(card)
else:
if isinstance(p_cards, str):
self.table_cards.update({p_cards})
else:
self.table_cards.update(p_cards)
for card in p_cards:
if card in player.hand:
player.hand.discard(card)
if not self.table_cards:
player.award_point() #escoba
print(f"{player.name} Escoba!")
self.print_score()
return scored
def apply_score(self):
p1_total = set()
p1_oros = set()
p1_sevens = set()
p2_total = set()
p2_sevens = set()
p2_oros = set()
for card_id, card in self.deck.cards().items():
if (card.owner == self.pl1.name):
p1_total.add(card_id)
if card.suit == 'O': p1_oros.add(card_id)
if card.face == 7: p1_sevens.add(card_id)
else:
p2_total.add(card_id)
if card.suit == 'O': p2_oros.add(card_id)
if card.face == 7: p2_sevens.add(card_id)
if card_id == '7O':
self.pl1.award_point() if card.owner == self.pl1.name else self.pl2.award_point()
if len(p1_total) > len(p2_total):
self.pl1.award_point()
elif len(p2_total) > len(p1_total):
self.pl2.award_point()
if len(p1_oros) > len(p2_oros):
self.pl1.award_point()
elif len(p2_oros) > len(p1_oros):
self.pl2.award_point()
if len(p1_sevens) > len(p2_sevens):
self.pl1.award_point()
elif len(p2_sevens) > len(p1_sevens):
self.pl2.award_point()
print(f'Points:\tPL1\tPL2\nOros:\t[{len(p1_oros)}]\t[{len(p2_oros)}]\nSevens:\t[{len(p1_sevens)}]\t[{len(p2_sevens)}]\nCards:\t[{len(p1_total)}]\t[{len(p2_total)}]')
def print_score(self):
print("Player 1 score: {}\nPlayer 2 score: {}".format(self.pl1.score, self.pl2.score))
def play_round(self, first_player, second_player):
p1_cards, p2_cards ,table_cards = self.deal_start()
first_player.new_hand(p1_cards)
second_player.new_hand(p2_cards)
self.table_cards = table_cards
last_scored = ''
cards_left = len(self.deck.order())
while len(self.deck.order()) > 0:
if len(first_player.hand) == 0 and len(second_player.hand) == 0:
p1_cards, p2_cards = self.deal_hand()
first_player.new_hand(p1_cards)
second_player.new_hand(p2_cards)
cards_left = len(self.deck.order())
# hand per player
while (len(first_player.hand) + len(second_player.hand) > 0):
if (len(first_player.hand)):
playable = self.valid_plays(first_player,self.table_cards)
play = first_player.get_play(playable)
if self.apply_play(play,first_player): last_scored = first_player.name
if (len(second_player.hand)):
playable = self.valid_plays(second_player,self.table_cards)
play = second_player.get_play(playable)
if self.apply_play(play,second_player): last_scored = second_player.name
# award last_player_to_score remaining cards
[self.set_card_owner(card_id, last_scored) for card_id, card in self.deck.cards().items() if card.owner == '']
self.apply_score()
if (len(second_player.hand)):
playable = self.valid_plays(second_player,self.table_cards)
play = second_player.get_play(playable)
if self.apply_play(play,second_player): last_scored = second_player.name
# award last_player_to_score remaining cards
[self.set_card_owner(card_id, last_scored) for card_id, card in self.deck.cards().items() if card.owner == '']
self.apply_score()
def print_score(self):
print("Player 1 score: {}\nPlayer 2 score: {}".format(self.pl1.score, self.pl2.score))
p1 = Player('player_1')
p2 = Player('player_2')
deck = Deck(make_deck())
g = Game(p1,p2,deck)
rounds = 0
while (p1.score < 15 and p2.score < 15):
rounds += 1
g.reset_deck()
if (rounds % 2 == 1):
g.play_round(p1,p2)
else:
g.play_round(p2,p1)
print("Round {}:\n\tPlayer 1 score: {}\n\tPlayer 2 score: {}".format(rounds, p1.score, p2.score))
```
### Creating superposition states associated with discretized probability distributions
#### Prerequisites
Here are a few things you should be up to speed on before we start:
- [Python fundamentals](https://qiskit.org/textbook/ch-prerequisites/python-and-jupyter-notebooks.html)
- [Programming quantum computers using Qiskit](https://qiskit.org/textbook/ch-prerequisites/qiskit.html)
- [Single qubit gates](https://qiskit.org/textbook/ch-states/single-qubit-gates.html)
Additional resources can be found [here](https://github.com/QForestCommunity/launchpad/blob/master/README.md).
#### Dependencies
We also need a couple of Python packages to build our distribution encoder:
- [Qiskit](https://qiskit.org/)
- [Numpy](https://numpy.org/)
- [SciPy](https://www.scipy.org/)
- [Matplotlib](https://matplotlib.org/)
#### Contributors
[Sashwat Anagolum](https://github.com/SashwatAnagolum)
#### Qiskit Package Versions
```
import qiskit
qiskit.__qiskit_version__
```
#### Introduction
Given a probability distribution $p$, we want to create a quantum state $|\psi\rangle$ such that
$$|\psi\rangle = \sum_{i} \sqrt{p_i} |i\rangle$$
where $|i\rangle$ represents one of an orthonormal set of states.
While we don't know when (for what kinds of distributions) we can do this, we do know that if you can efficiently integrate over a distribution classically, then we can efficiently construct a quantum state associated with a discretized version of that distribution.
It may seem kind of trivial - we can integrate over the distribution classically, so why not just create the mixed state shown here?
$$\sum_i p_i |i\rangle \langle i |$$
If all we needed to do was sample from the distribution, we could use this state - but then if we were efficiently integrating the distribution classically, say using Monte Carlo methods, we might as well sample from the classical distribution as well.
The reason we avoid generating the distribution as a mixed quantum state is that we often need to perform further, uniquely quantum, processing on it after creation - in this case, we cannot use the mixed state approach.
#### Encoding the distribution
If we wanted to create a $N$ region discretization, we would need $n = \log_2 N$ qubits to represent the distribution. Let's look at a super simple case to start off: $N = 2$, so $n = 1$.
We have probabilities $p_{0}^{(1)}$ and $p_1^{(1)}$, of a random variable following the distribution lying in region $0$ and region $1$, respectively, with $p^{(i)}_{j}$ representing the probability of measuring a random variable in region $j$ if it follows the discretized distribution over $i$ qubits.
Since we only use one qubit, all we need to do is integrate over region $0$ to find the probability of a variable lying within it. Let's take a quick look at the Bloch sphere:

If a qubit is rotated about the y-axis by angle $\theta$, then the probability of measuring it as zero is given by $\cos (\frac{\theta}{2})^2$ - so we can figure out how much to rotate a qubit by if we're using it to encode a distribution:
$$ \theta = 2 * \cos^{-1} \left ( \sqrt{p_{0}^{(1)}}\right )$$
$$p_{0}^{(1)} = \int_{x^{(1)}_{0}}^{x_{1}^{(1)}}p(x) dx$$
Where $x^{(1)}_{0}$ and $x_{1}^{(1)}$ are the first and second region boundaries when 1 qubit is used. This leaves us with
$$|\psi \rangle = \sqrt{p_{0}^{(1)}} |0\rangle + \sqrt{p_{1}^{(1)}} |1\rangle$$
Awesome!
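As a quick numerical sanity check (a small NumPy-only sketch, not part of the encoder itself), we can verify that rotating by this angle recovers the target probability:
```
import numpy as np

# Target probability for region 0 (any value in [0, 1] works here)
p0 = 0.3

# Rotation angle from theta = 2 * arccos(sqrt(p0))
theta = 2 * np.arccos(np.sqrt(p0))

# After rotating |0> about the y-axis by theta, the probability of
# measuring 0 is cos(theta / 2) ** 2 -- this should give back p0
recovered = np.cos(theta / 2) ** 2
print(recovered)
```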
Now that we know how to do it for distributions with two regions, let's see if we can expand it to include more regions - i.e., can we convert a quantum state encoding a $N$ region discretization into one encoding a discretization with $2N$ regions?
To get started, let's avoid all the complicated integration stuff we'll need to do later by defining a function $f(i, n)$ such that
$$f(i, n) = \frac{\int_{x_{k}^{(n + 1)}}^{x_{k + 1}^{(n + 1)}} p(x) dx}{\int^{x_{i + 1}^{(n)}}_{x_{i}^{(n)}} p(x) dx}$$
Where $k = 2 * \left ( \frac{i}{2} - \frac{i \% 2}{2} \right ) = 2 \lfloor i/2 \rfloor$. The equation above probably looks a little hopeless, but all it does is compute the conditional probability of a value lying in the left subregion of region $i$ (when we have $N$ regions), given that it lies in region $i$.
Why do we need this?
We're assuming that dividing the distribution into $N$ regions is just an intermediary step in the process of dividing it into the desired $2^{m}$ regions - so $x_{k}^{(n + 1)}$ refers to the same boundary that $x_{i}^{(n)}$ does.
Now that we've defined $f(i, n)$, all we need to do to figure out how much to rotate the $(n + 1)^{th}$ qubit is compute
$$\theta_{i}^{(n + 1)} = 2 * \cos^{-1} \left ( \sqrt{f(i, n)}\right )$$
Now all we need to do is rotate the $(n + 1)^{th}$ qubit by $\theta_{i}^{(n + 1)}$ conditioned on the state $|i\rangle$ represented using $n$ qubits:
$$\sqrt{p_{i}^{(n)}}|i\rangle \rightarrow \sqrt{p^{(n + 1)}_{k}}|k\rangle + \sqrt{p^{(n + 1)}_{k + 1}}|k+1\rangle$$
Since we showed that constructing a state for $n = 1$ is possible, and that a $2^n$ region discretization can be converted into one with $2^{n + 1}$ regions, we have inductively proved that we can construct a superposition state corresponding to a $2^n, n \in \mathbb{N}$ region discretized distribution - pretty cool!
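Before building the circuit, we can sanity-check this recursion entirely classically - a small SciPy sketch (the $N(0, 2)$ density and the boundaries $[-8, -4, 0, 4, 8]$ are just illustrative choices for this example):
```
from numpy import e, pi, sqrt, arccos, cos, sin
from scipy.integrate import quad

def density(x):
    # N(0, 2) density, used only for this classical check
    return e ** (-0.5 * (x / 2) ** 2) / (2 * sqrt(2 * pi))

def mass(lo, hi):
    return quad(density, lo, hi)[0]

bounds = [-8, -4, 0, 4, 8]  # 4 regions -> 2 qubits
total = mass(bounds[0], bounds[-1])

# Level n = 1: split [-8, 8] at 0
theta1 = 2 * arccos(sqrt(mass(-8, 0) / total))

# Level n = 2: split each half, conditioned on lying in that half
theta_left = 2 * arccos(sqrt(mass(-8, -4) / mass(-8, 0)))
theta_right = 2 * arccos(sqrt(mass(0, 4) / mass(0, 8)))

# Amplitudes of |00>, |01>, |10>, |11> after the two rotation levels
amps = [cos(theta1 / 2) * cos(theta_left / 2),
        cos(theta1 / 2) * sin(theta_left / 2),
        sin(theta1 / 2) * cos(theta_right / 2),
        sin(theta1 / 2) * sin(theta_right / 2)]

# Squared amplitudes should match the discretized region probabilities
probs = [mass(bounds[i], bounds[i + 1]) / total for i in range(4)]
for amp, prob in zip(amps, probs):
    print(amp ** 2, prob)
```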
Now that we've gotten the concepts down, let's move on to building our own quantum distribution encoder.
#### Required modules
```
from qiskit import QuantumRegister, ClassicalRegister
from qiskit import Aer, execute, QuantumCircuit
from qiskit.circuit.library.standard_gates import RYGate
from qiskit.tools.visualization import circuit_drawer
from numpy import pi, e, sqrt, arccos, log2
from scipy.integrate import quad
%matplotlib inline
import matplotlib.pyplot as plt
```
Let's define a function representing our distribution, so that we can change it quickly whenever we want to. We'll start off with a super simple function, like $N(0, 2)$:
```
def distribution(x):
"""
Returns the value of a chosen probability distribution at the given value
of x. Mess around with this function to see how the encoder works!
The current distribution being used is N(0, 2).
"""
# Use these with normal distributions
mu = 0
sigma = 2
return (((e ** (-0.5 * ((x - mu) / sigma) ** 2)) / (sigma * sqrt(2 * pi))) / 0.99993665)
```
The 0.99993665 is a normalisation factor used to make sure the sum of probabilities over the regions we've chosen adds up to 1.
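We can recompute a factor like this ourselves with `quad` - a minimal sketch (assuming, as the value suggests, that the mass is taken over $\pm 4\sigma$, i.e. $[-8, 8]$ for $\sigma = 2$):
```
from numpy import e, pi, sqrt
from scipy.integrate import quad

mu, sigma = 0, 2

def unnormalised(x):
    # N(0, 2) density without the correction factor
    return (e ** (-0.5 * ((x - mu) / sigma) ** 2)) / (sigma * sqrt(2 * pi))

# Mass of the density over the chosen interval; dividing the density
# by this makes the discretized probabilities sum to 1
factor, _ = quad(unnormalised, -8, 8)
print(factor)  # ~0.9999367
```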
Next, let's create everything else we need to compute $f(i, n)$:
```
def integrate(dist, lower, upper):
"""
    Perform integration using scipy's quad method. We can use parametrized
distributions as well by using this syntax instead:
quad(integrand, lower, upper, args=(tupleOfArgsForIntegrand))
"""
return quad(dist, lower, upper)[0]
def computeRegionProbability(dist, regBounds, numRegions, j):
"""
Given a distribution dist, a list of adjacent regions regBounds, the
current level of discretization numRegions, a region number j, computes
    the probability that a random variable following dist lies in
region j given that it lies in the larger region made up of regions
[(j // 2) * 2, ((j + 2) // 2) * 2]
"""
totalRegions = len(regBounds) - 1
k = 2 * j
prob = integrate(dist, regBounds[(totalRegions // numRegions) * k],
regBounds[(totalRegions // numRegions) * (k + 1)]) / integrate(
dist, regBounds[(totalRegions // numRegions) * ((k // 2) * 2)],
regBounds[(totalRegions // numRegions) * (((k + 2) // 2) * 2)])
return prob
```
`computeRegionProbability` gives us the value of $f(i, n)$. We're finally ready to start writing the quantum part of our program - let's start by creating the registers and circuit we need:
```
def encodeDist(dist, regBounds):
numQubits = int(log2(len(regBounds) - 1))
a = QuantumRegister(2 * numQubits - 2)
c = ClassicalRegister(numQubits)
qc = QuantumCircuit(a, c)
```
Now we can create the looping construct we need to be able to iteratively divide the distribution into $2^m$ regions, starting from $n = 1$ ($2$ regions), and dividing until $n = \log_2 N$ ($N$ regions). We need to loop over the different regions at the current level of discretization, and compute the value of $f(i, n)$ for each one:
```
for i in range(numQubits):
numRegions = int(2 ** (i + 1))
for j in range(numRegions // 2):
prob = computeRegionProbability(dist, regBounds, numRegions, j)
```
Now we need to apply the controlled rotations - but we also need to write in a special case for $n = 1$, because there are no qubits to condition the rotation on:
```
if not i:
qc.ry(2 * arccos(sqrt(prob)), a[2 * numQubits - 3])
```
Since we'll be using gates with an arbitrary number of control qubits, we build a controlled `RYGate` using its `control` method:
```
else:
cGate = RYGate(2 * arccos(sqrt(prob))).control(i)
```
We know that we need to use the qubits indexed by $[0, 1, ..., i - 1]$ as control qubits, and the $n^{th}$ one as the target - but before we can apply the gate we need to perform a few bit flips to make sure that the $n^{th}$ qubit is rotated only when the control qubits are in the state $|i\rangle$. We can figure out which qubits to flip using this function:
```
def getFlipList(i, j, numQubits):
"""
    Given the desired level of discretization numQubits, the
    current level of discretization i, and a region number j,
returns the binary bit string associated with j in the form of
a list of bits to be flipped.
"""
binString = str(bin(j))[2:]
binString = ("0" * (numQubits - len(binString))) + binString
bitFlips = []
for k in range(numQubits - i, numQubits):
if binString[k] == '0':
bitFlips.append(3 * numQubits - 3 - k - i)
return bitFlips
```
Here the variable j represents the region number, which we convert to binary, and then flip qubits so that the resulting binary string is all ones. After finding out which qubits we need to flip, we can create a controlled gate and append it to the quantum circuit back in $encodeDist$:
```
for k in listOfFlips:
qc.x(a[k])
qubitsUsed = [a[k] for k in
range(2 * numQubits - 2 - i, 2 * numQubits - 2)]
qubitsUsed.append(a[2 * numQubits - 3 - i])
qc.append(cGate, qubitsUsed)
for k in listOfFlips:
qc.x(a[k])
```
All that's left is to return the quantum circuit:
```
return qc, a, c
```
Here's the entire function, so that we can run it in the notebook:
```
def encodeDist(dist, regBounds):
"""
Discretize the distribution dist into multiple regions with boundaries
given by regBounds, and store the associated quantum superposition
state in a new quantum register reg. Please make sure the number of
regions is a power of 2, i.e. len(regBounds) = (2 ** n) + 1.
Additionally, the number of regions is limited to a maximum of
2^(n // 2 + 1), where n is the number of qubits available in the backend
being used - this is due to the requirement of (n - 2) ancilla qubits in
order to perform (n - 1) control operations with minimal possible depth.
Returns a new quantum circuit containing the instructions and registers
needed to create the superposition state, along with the size of the
quantum register.
"""
numQubits = int(log2(len(regBounds) - 1))
a = QuantumRegister(2 * numQubits - 2)
c = ClassicalRegister(numQubits)
qc = QuantumCircuit(a, c)
for i in range(numQubits):
numRegions = int(2 ** (i + 1))
for j in range(numRegions // 2):
prob = computeRegionProbability(dist, regBounds, numRegions, j)
if not i:
qc.ry(2 * arccos(sqrt(prob)), a[2 * numQubits - 3])
else:
cGate = RYGate(2 * arccos(sqrt(prob))).control(i)
listOfFlips = getFlipList(i, j, numQubits)
for k in listOfFlips:
qc.x(a[k])
qubitsUsed = [a[k] for k in
range(2 * numQubits - 2 - i, 2 * numQubits - 2)]
qubitsUsed.append(a[2 * numQubits - 3 - i])
qc.append(cGate, qubitsUsed)
for k in listOfFlips:
qc.x(a[k])
return qc, a, c
```
Finally, we can call our function, and compare the results with those from a classical computer - we also need a helper function that pads bit strings for us, so that we can plot the classical results on the same axis as the quantum ones:
```
def pad(x, numQubits):
"""
Utility function that returns a left padded version of the bit string
passed.
"""
string = str(x)[2:]
string = ('0' * (numQubits - len(string))) + string
return string
regBounds = [i for i in range(-16, 17)]
qc, a, c = encodeDist(distribution, regBounds)
numQubits = (qc.num_qubits + 2) // 2
for i in range(numQubits - 2, 2 * numQubits - 2):
qc.measure(a[i], c[i - (numQubits - 2)])
backend = Aer.get_backend('qasm_simulator')
shots = 100000
job = execute(qc, backend=backend, shots=shots)
results = job.result().get_counts()
resultsX = []
resultsY = []
for i in [pad(bin(x), numQubits) for x in range(2 ** (numQubits))]:
resultsX.append(i)
if i in results.keys():
resultsY.append(results[i])
else:
resultsY.append(0)
truthDisc = [integrate(distribution, regBounds[i], regBounds[i + 1]) * shots for i in range(
len(regBounds) - 1)]
plt.figure(figsize=[16, 9])
plt.plot(resultsX, resultsY)
plt.plot(resultsX, truthDisc, '--')
plt.legend(['quantum estimate', 'classical estimate'])
plt.show()
```
Let's take a look at the quantum circuit:
```
circuit_drawer(qc, output='mpl')
```
#### Things to do next
Looks like we're done - awesome!
Taking all the functions from this notebook and pasting them into a python file will give you a working copy of this program, provided you have all the dependencies installed - if you want a regular python file instead, you can get a copy [here](https://github.com/SashwatAnagolum/DoNew/blob/master/loadProbDist/loadProbDist.py).
A possible next step after getting the hang of encoding distributions is to figure out ways to process the quantum state further, leading to purely quantum transformed versions of the distribution.
Let me know if you figure out any other ways we can work with the quantum state we get using this circuit, or if you have any other questions - you can reach me at [sashwat.anagolum@gmail.com](mailto:sashwat.anagolum@gmail.com)
# Layer wise learning rate settings
In this tutorial, we introduce how to easily select or filter out network layers and set specific learning rate values for transfer learning.
MONAI provides a utility function to achieve these requirements: `generate_param_groups`. For example:
```py
net = Unet(dimensions=3, in_channels=1, out_channels=3, channels=[2, 2, 2], strides=[1, 1, 1])
print(net) # print out network components to select expected items
print(net.named_parameters()) # print out all the named parameters to filter out expected items
params = generate_param_groups(
network=net,
layer_matches=[lambda x: x.model[-1], lambda x: "conv.weight" in x],
match_types=["select", "filter"],
lr_values=[1e-2, 1e-3],
)
optimizer = torch.optim.Adam(params, 1e-4)
```
[](https://colab.research.google.com/github/Project-MONAI/tutorials/blob/master/modules/layer_wise_learning_rate.ipynb)
## Setup environment
```
!python -c "import monai" || pip install -q "monai[pillow, ignite, tqdm]"
!python -c "import matplotlib" || pip install -q matplotlib
%matplotlib inline
from monai.transforms import (
AddChanneld,
Compose,
LoadImaged,
ScaleIntensityd,
ToTensord,
)
from monai.optimizers import generate_param_groups
from monai.networks.nets import densenet121
from monai.inferers import SimpleInferer
from monai.handlers import StatsHandler
from monai.engines import SupervisedTrainer
from monai.data import DataLoader
from monai.config import print_config
from monai.apps import MedNISTDataset
import torch
import matplotlib.pyplot as plt
from ignite.engine import Engine, Events
from ignite.metrics import Accuracy
import tempfile
import sys
import shutil
import os
import logging
```
## Setup imports
```
# Copyright 2020 MONAI Consortium
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
print_config()
```
## Setup data directory
You can specify a directory with the `MONAI_DATA_DIRECTORY` environment variable.
This allows you to save results and reuse downloads.
If not specified a temporary directory will be used.
```
directory = os.environ.get("MONAI_DATA_DIRECTORY")
root_dir = tempfile.mkdtemp() if directory is None else directory
print(root_dir)
```
## Setup logging
```
logging.basicConfig(stream=sys.stdout, level=logging.INFO)
```
## Create training experiment with MedNISTDataset and workflow
The MedNIST dataset was gathered from several sets from [TCIA](https://wiki.cancerimagingarchive.net/display/Public/Data+Usage+Policies+and+Restrictions), [the RSNA Bone Age Challenge](http://rsnachallenges.cloudapp.net/competitions/4), and [the NIH Chest X-ray dataset](https://cloud.google.com/healthcare/docs/resources/public-datasets/nih-chest).
### Set up pre-processing transforms
```
transform = Compose(
[
LoadImaged(keys="image"),
AddChanneld(keys="image"),
ScaleIntensityd(keys="image"),
ToTensord(keys="image"),
]
)
```
### Create MedNISTDataset for training
`MedNISTDataset` inherits from MONAI `CacheDataset` and provides rich parameters to automatically download and extract the dataset, and acts as a normal PyTorch Dataset with a cache mechanism.
```
train_ds = MedNISTDataset(
root_dir=root_dir, transform=transform, section="training", download=True)
# the dataset can work seamlessly with the pytorch native dataset loader,
# but using monai.data.DataLoader has additional benefits of multi-process
# random seeds handling, and the customized collate functions
train_loader = DataLoader(train_ds, batch_size=300,
shuffle=True, num_workers=10)
```
### Pick images from MedNISTDataset to visualize and check
```
plt.subplots(3, 3, figsize=(8, 8))
for i in range(9):
plt.subplot(3, 3, i + 1)
plt.imshow(train_ds[i * 5000]["image"][0].detach().cpu(), cmap="gray")
plt.tight_layout()
plt.show()
```
### Create training components - device, network, loss function
```
device = torch.device("cuda:0")
net = densenet121(pretrained=True, progress=False,
spatial_dims=2, in_channels=1, out_channels=6).to(device)
loss = torch.nn.CrossEntropyLoss()
```
### Set different learning rate values for layers
Please refer to the appendix at the end of this notebook for the layers of `DenseNet121`.
1. Set LR=1e-3 for the selected `class_layers` block.
2. Set LR=1e-4 for convolution layers based on the filter where `conv.weight` is in the layer name.
3. Set LR=1e-5 for all other layers.
```
params = generate_param_groups(
network=net,
layer_matches=[lambda x: x.class_layers, lambda x: "conv.weight" in x],
match_types=["select", "filter"],
lr_values=[1e-3, 1e-4],
)
```
### Define the optimizer based on the parameter groups
```
opt = torch.optim.Adam(params, 1e-5)
```
### Define the simplest training workflow and run it
Use MONAI `SupervisedTrainer` with handlers to quickly set up a training workflow.
```
trainer = SupervisedTrainer(
device=device,
max_epochs=5,
train_data_loader=train_loader,
network=net,
optimizer=opt,
loss_function=loss,
inferer=SimpleInferer(),
key_train_metric={
"train_acc": Accuracy(
output_transform=lambda x: (x["pred"], x["label"]))
},
train_handlers=StatsHandler(
tag_name="train_loss", output_transform=lambda x: x["loss"]),
)
```
### Define an ignite handler to adjust the LR at runtime
```
class LrScheduler:
def attach(self, engine: Engine) -> None:
engine.add_event_handler(Events.EPOCH_COMPLETED, self)
def __call__(self, engine: Engine) -> None:
for i, param_group in enumerate(engine.optimizer.param_groups):
if i == 0:
param_group["lr"] *= 0.1
elif i == 1:
param_group["lr"] *= 0.5
print("LR values of 3 parameter groups: ", [
g["lr"] for g in engine.optimizer.param_groups])
LrScheduler().attach(trainer)
```
### Execute the training
```
trainer.run()
```
## Cleanup data directory
Remove the directory if a temporary one was used.
```
if directory is None:
shutil.rmtree(root_dir)
```
## Appendix: layers of DenseNet 121 network
```
print(net)
```
```
# Install old version of scikit-learn, see https://github.com/SeldonIO/seldon-core/issues/2059
!pip install -UIv scikit-learn==0.20.3
!pip install azure-storage-file-datalake azure-identity azure-storage-blob pandas joblib
### ENTER YOUR DETAILS ###
storage_account_name = ""
client_id = ""
tenant_id = ""
client_secret = "" # client secret value of the service principal
connection_string = "" # blob storage connection string
# run `oc whoami --show-token` to get your token
# do not use quotes
%env OPENSHIFT_TOKEN=
%env STORAGE_ACCOUNT_NAME=
import pandas as pd
from sklearn.linear_model import LogisticRegression
from joblib import dump, load
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient
from azure.core._match_conditions import MatchConditions
from azure.storage.filedatalake._models import ContentSettings
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient, __version__
def initialize_storage_account_ad(storage_account_name, client_id, client_secret, tenant_id):
try:
global service_client
credential = ClientSecretCredential(tenant_id, client_id, client_secret)
service_client = DataLakeServiceClient(account_url="{}://{}.dfs.core.windows.net".format(
"https", storage_account_name), credential=credential)
except Exception as e:
print(e)
def download_file_from_directory(dataset):
try:
file_system_client = service_client.get_file_system_client(file_system="mycontainer")
directory_client = file_system_client.get_directory_client("sample")
local_file = open(dataset,'wb')
file_client = directory_client.get_file_client(dataset)
download = file_client.download_file()
downloaded_bytes = download.readall()
local_file.write(downloaded_bytes)
local_file.close()
except Exception as e:
print(e)
# Initialize and download Iris dataset from Azure Data Lake
initialize_storage_account_ad(storage_account_name, client_id, client_secret, tenant_id)
download_file_from_directory("iris.data")
# Read training data set
train_df = pd.read_csv("iris.data", header=None, names=["sepal_length", "sepal_width", "petal_length", "petal_width", "class"])
y = pd.factorize(train_df["class"])[0]
train_df.pop("class")
X = train_df.values
# Train model
clf = LogisticRegression()
clf.fit(X,y)
# Test model
print(X[0:2])
print(clf.predict(X[0:2]))
# Save model to local disk
dump(clf, 'model.joblib')
# Save model to Azure Blob Storage
local_file_name = "model.joblib"
upload_path = "sklearn/model.joblib"
try:
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
blob_client = blob_service_client.get_blob_client(container="mycontainer", blob=upload_path)
print("\nUploading to Azure Storage as blob:\n\t" + upload_path)
# Upload the created file
with open(local_file_name, "rb") as data:
blob_client.upload_blob(data)
except Exception as ex:
print('Exception:')
print(ex)
%%bash
curl -O https://mirror.openshift.com/pub/openshift-v4/clients/oc/4.6/linux/oc.tar.gz
tar xzf oc.tar.gz
cp oc /opt/app-root/bin/
%%bash
# Test oc
oc login --server https://openshift.default.svc.cluster.local --insecure-skip-tls-verify --token=$OPENSHIFT_TOKEN
# Run model in Seldon
oc apply -n odh -f - <<EOF
apiVersion: machinelearning.seldon.io/v1
kind: SeldonDeployment
metadata:
name: example
spec:
name: iris
predictors:
- graph:
children: []
implementation: SKLEARN_SERVER
modelUri: https://$STORAGE_ACCOUNT_NAME.blob.core.windows.net/mycontainer/sklearn/model.joblib
name: classifier
name: default
replicas: 1
EOF
%%bash
# Test model in Seldon
MODEL_URL=example-default.odh.svc.cluster.local:8000
curl -X POST $MODEL_URL/api/v1.0/predictions \
-H 'Content-Type: application/json' \
-d '{ "data": { "ndarray": [[1,2,3,4]] } }'
```
<table>
<tr>
<td width=15%><img src="./img/UGA.png"></img></td>
<td><center><h1>Introduction to Python for Data Sciences</h1></center></td>
<td width=15%><a href="http://www.iutzeler.org" style="font-size: 16px; font-weight: bold">Franck Iutzeler</a><br/> 2017/2018 </td>
</tr>
</table>
<br/><br/><div id="top"></div>
<center><a style="font-size: 40pt; font-weight: bold">Chap. 3 - Data Visualization with Pandas </a></center>
<br/>
# ``2. Dataframes``
---
<a href="#style"><b>Package check and Styling</b></a><br/><br/><b>Outline</b><br/><br/>
a) <a href="#dataOp"> Operations</a><br/> b) <a href="#dataApp"> Appending, Concatenating, and Merging</a><br/> c) <a href="#dataPre"> Preparing the Data</a><br/> d) <a href="#dataBase"> Basic Statistics </a><br/> e) <a href="#dataGroup"> GroupBy </a><br/> f) <a href="#dataExo"> Exercises </a><br/>
## <a id="dataOp"> a) Operations</a>
<p style="text-align: right; font-size: 10px;"><a href="#top">Go to top</a></p>
```
import numpy as np
import pandas as pd
```
### Numpy operations
If we apply a NumPy function on a Pandas dataframe, the result will be another Pandas dataframe with the indices preserved.
```
df = pd.DataFrame(np.random.randint(0, 10, (3, 4)), columns=['A', 'B', 'C', 'D'])
df
np.cos(df * np.pi/2 ) - 1
```
### Arithmetic operations
Arithmetic operations can also be performed either with <tt>+ - / *</tt> or with the dedicated <tt>add</tt>, <tt>multiply</tt>, etc. methods
```
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB'))
A
B = pd.DataFrame(np.random.randint(0, 10, (3, 3)), columns=list('BAC'))
B
A+B
```
The pandas arithmetic functions also have an option to fill missing values, replacing the missing entries in either of the dataframes by some value.
```
A.add(B, fill_value=0.0)
```
## <a id="dataApp"> b) Appending, Concatenating, and Merging</a>
<p style="text-align: right; font-size: 10px;"><a href="#top">Go to top</a></p>
Thanks to naming, dataframes can be easily added, merged, etc. However, if some entries are missing (columns or indices), the operations may get complicated. Here the most standard situations are covered; take a look at the documentation (notably [this one on merging, appending, and concatenating](https://pandas.pydata.org/pandas-docs/stable/merging.html))
* **Appending** is for adding the lines of one dataframe with another one with the same columns.
```
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB'))
A2 = pd.DataFrame(np.random.randint(0, 20, (3, 2)), columns=list('AB'))
print("A:\n",A,"\nA2:\n",A2)
A.append(A2) # this does not "append to A" but creates a new dataframe
```
Sometimes, indexes do not matter; they can be reset using <tt>ignore_index=True</tt>.
```
A.append(A2,ignore_index=True)
```
* **Concatenating** is for adding lines and/or columns of multiples datasets (it is a generalization of appending)
```
A = pd.DataFrame(np.random.randint(0, 20, (2, 2)), columns=list('AB'))
A2 = pd.DataFrame(np.random.randint(0, 20, (3, 2)), columns=list('AB'))
A3 = pd.DataFrame(np.random.randint(0, 20, (1, 3)), columns=list('CAD'))
print("A:\n",A,"\nA2:\n",A2,"\nA3:\n",A3)
```
The most important settings of the <tt>concat</tt> function are <tt>pd.concat(objs, axis=0, join='outer',ignore_index=False)</tt> where <br/>
. *objs* is the list of dataframes to concatenate <br/>
. *axis* is the axis on which to concatenate 0 (default) for the lines and 1 for the columns <br/>
. *join* is to decide if we keep all columns/indices on the other axis ('outer', default), or the intersection ('inner') <br/>
. *ignore_index* is to decide is we keep the previous names (False, default) or give new ones (True)
For a detailed view see [this doc on merging, appending, and concatenating](https://pandas.pydata.org/pandas-docs/stable/merging.html)
```
pd.concat([A,A2,A3],ignore_index=True)
pd.concat([A,A2,A3],axis=1)
pd.concat([A,A2,A3],axis=1,ignore_index=True,join='inner')
```
* **Merging** is for putting together two dataframes with *hopefully* common data
For a detailed view see [this doc on merging, appending, and concatenating](https://pandas.pydata.org/pandas-docs/stable/merging.html)
```
df1 = pd.DataFrame({'employee': ['Bob', 'Jake', 'Lisa', 'Sue'],
'group': ['Accounting', 'Engineering', 'Engineering', 'HR']})
df1
df2 = pd.DataFrame({'employee': ['Lisa', 'Bob', 'Jake', 'Sue'],
'hire_date': [2004, 2008, 2012, 2014]})
df2
df3 = pd.merge(df1,df2)
df3
df4 = pd.DataFrame({'group': ['Accounting', 'Engineering', 'HR'],
'supervisor': ['Carly', 'Guido', 'Steve']})
df4
pd.merge(df3,df4)
```
## <a id="dataPre"> c) Preparing the Data</a>
<p style="text-align: right; font-size: 10px;"><a href="#top">Go to top</a></p>
Before exploring the data, it is essential to verify its soundness: if it has missing or replicated data, the results of our tests may not be accurate. Pandas provides a collection of methods to verify the sanity of the data (recall that when data is missing for an entry, it is noted as <tt>NaN</tt>, and thus any further operation involving it will be <tt>NaN</tt>).
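As a quick illustration of how <tt>NaN</tt> behaves (a toy sketch, separate from the MovieLens data; note that pandas reductions like <tt>sum</tt> actually skip <tt>NaN</tt> unless told otherwise):
```
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

print(np.nan + 1)           # element-wise arithmetic with NaN stays NaN
print(s + 1)                # the NaN entry remains NaN after the operation
print(s.sum())              # reductions skip NaN by default: 4.0
print(s.sum(skipna=False))  # ...unless skipna=False: nan
```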
To explore some typical problems in a dataset, I messed with a small part of the [*MovieLens*](https://grouplens.org/datasets/movielens/) dataset. The <tt>ratings_mess.csv</tt> file contains 4 columns:
* <tt>userId</tt> id of the user, integer greater than 1
* <tt>movieId</tt> id of the user, integer greater than 1
* <tt>rating</tt> rating of the user to the movie, float between 0.0 and 5.0
* <tt>timestamp</tt> timestamp, integer
and features (man-made!) errors, some of them minor and some of them major.
```
ratings = pd.read_csv('data/ml-small/ratings_mess.csv')
ratings.head(7) # displays the top n rows of a dataframe, 5 by default
```
### Missing values
Pandas provides functions that check if the values are missing:
* ``isnull()``: Generate a boolean mask indicating missing values
* ``notnull()``: Opposite of ``isnull()``
```
ratings.isnull().head(5)
```
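Since ``True`` counts as 1 when summed, chaining ``sum()`` onto the boolean mask counts the missing values per column — a convenient first check, sketched here on a small hand-made frame rather than the ratings file:

```
import numpy as np
import pandas as pd

df = pd.DataFrame({'userId': [1, 2, np.nan, 4],
                   'rating': [3.5, np.nan, np.nan, 5.0]})
print(df.isnull().sum())        # missing values per column
print(df.isnull().sum().sum())  # 3 missing values in total
```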
#### Carefully pruning data
Now we have to prune rows from our data. This is done with ``dropna()``, as in <tt>dataframe.dropna(subset=["col_1","col_2"],inplace=True)</tt>, which drops all rows with at least one missing value in the columns <tt>col_1, col_2</tt> of <tt>dataframe</tt> *in place*, that is, without making a copy.
<div class="warn"> <b>Warning:</b> this function deletes any row with at least <b>one</b> missing value among the selected columns, which is not always desirable. Also, with *inplace=True*, it is applied in place, meaning it modifies the dataframe it is called on; it is thus an <b>irreversible operation</b>. Drop *inplace=True* to work on a copy and see the result before applying it.</div>
For instance here, <tt>userId,movieId,rating</tt> are essential whereas the <tt>timestamp</tt> is not (it can be dropped for the prediction process). Thus, we will delete the rows where one of <tt>userId,movieId,rating</tt> is missing and fill the <tt>timestamp</tt> with 0 when it is missing.
```
ratings.dropna(subset=["userId","movieId","rating"],inplace=True)
ratings.head(5)
```
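As noted in the warning above, ``inplace=True`` is irreversible. With the default ``inplace=False``, ``dropna()`` instead returns a cleaned copy and leaves the original untouched, which lets you inspect the result first; a small sketch on a hand-made frame:

```
import numpy as np
import pandas as pd

df = pd.DataFrame({'userId': [1, np.nan, 3],
                   'rating': [4.0, 3.0, np.nan]})
cleaned = df.dropna(subset=['userId', 'rating'])  # returns a new dataframe
print(len(df), len(cleaned))  # 3 1 : the original keeps all its rows
```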
To fill missing data (in a given column), the recommended way is to use ``fillna()`` through <tt>dataframe["col"].fillna(value,inplace=True)</tt>, which replaces all missing values in the column <tt>col</tt> of <tt>dataframe</tt> by <tt>value</tt> *in place*, that is, without copy (again this is irreversible; to get a copy instead, use inplace=False).
```
ratings["timestamp"].fillna(0,inplace=True)
ratings.head(7)
```
This indeed gives the correct result; however, the row index now has gaps. The index can be reset with <tt>reset_index(inplace=True,drop=True)</tt>
```
ratings.reset_index(inplace=True,drop=True)
ratings.head(7)
```
### Improper values
Even without the missing values, some rows are problematic as they feature values outside of the prescribed ranges (<tt>userId</tt> id of the user, integer greater than 1; <tt>movieId</tt> id of the movie, integer greater than 1; <tt>rating</tt> rating of the user to the movie, float between 0.0 and 5.0; <tt>timestamp</tt> timestamp, integer).
```
ratings[ratings["userId"]<1] # Identifying a problem
```
Now, we drop the corresponding rows with ``drop``, as in <tt>drop(problematic_rows.index, inplace=True)</tt>.
<div class="warn"> <b>Warning:</b> Do not forget <tt>.index</tt> and <tt>inplace=True</tt></div>
```
ratings.drop(ratings[ratings["userId"]<1].index, inplace=True)
ratings.head(7)
pb_rows = ratings[ratings["movieId"]<1]
pb_rows
ratings.drop(pb_rows.index, inplace=True)
```
And finally the ratings.
```
pb_rows = ratings[ratings["rating"]<0]
pb_rows2 = ratings[ratings["rating"]>5]
tot_pb_rows = pd.concat([pb_rows, pb_rows2]) # DataFrame.append was removed in recent pandas; concat works everywhere
tot_pb_rows
ratings.drop(tot_pb_rows.index, inplace=True)
ratings.reset_index(inplace=True,drop=True)
```
We finally have our dataset cured! Let us save it for further use.
<tt>to_csv</tt> saves the dataframe as CSV into the given file; <tt>index=False</tt> omits the index column since we did not give it a meaningful name.
```
ratings.to_csv("data/ml-small/ratings_cured.csv",index=False)
```
## <a id="dataBase"> d) Basic Statistics </a>
<p style="text-align: right; font-size: 10px;"><a href="#top">Go to top</a></p>
With our cured dataset, we can begin exploring.
```
ratings = pd.read_csv('data/ml-small/ratings_cured.csv')
ratings.head()
```
The following table summarizes some other built-in Pandas aggregations:
| Aggregation | Description |
|--------------------------|---------------------------------|
| ``count()`` | Total number of items |
| ``first()``, ``last()`` | First and last item |
| ``mean()``, ``median()`` | Mean and median |
| ``min()``, ``max()`` | Minimum and maximum |
| ``std()``, ``var()`` | Standard deviation and variance |
| ``mad()`` | Mean absolute deviation |
| ``prod()`` | Product of all items |
| ``sum()`` | Sum of all items |
These are all methods of ``DataFrame`` and ``Series`` objects, and ``describe()`` also provides a quick overview.
```
ratings.describe()
```
We see that these statistics do not make sense for all columns. Let us drop the timestamp and examine the ratings.
```
ratings.drop("timestamp",axis=1,inplace=True)
ratings.head()
ratings["rating"].describe()
```
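The aggregations in the table above can also be called one at a time; a quick sketch on a hand-made ``Series``:

```
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])
print(s.count())             # 4
print(s.mean(), s.median())  # 2.5 2.5
print(s.min(), s.max())      # 1.0 4.0
print(s.std())               # sample standard deviation (ddof=1 by default)
print(s.sum())               # 10.0
```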
## <a id="dataGroup"> e) GroupBy </a>
<p style="text-align: right; font-size: 10px;"><a href="#top">Go to top</a></p>
These ratings are linked to users and movies; in order to have a separate view per user/movie, *grouping* has to be used.
The ``GroupBy`` operation (which comes from SQL) proceeds in three steps:
- The *split* step breaks up and groups a ``DataFrame`` depending on the value of the specified key.
- The *apply* step computes some function, usually an aggregate such as a sum, mean, or median, *within the individual groups*.
- The *combine* step merges the results of these operations into an output array.
<img src="img/GroupBy.png">
<p style="text-align: right">Source: [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by Jake VanderPlas</p>
```
ratings.head()
```
So to get the mean of the ratings per user, the command is
```
ratings.groupby("userId")["rating"].mean()
```
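To see what ``groupby`` does under the hood, the split-apply-combine steps can be sketched by hand on a small frame (this is only an illustration — ``groupby`` is both shorter and faster):

```
import pandas as pd

df = pd.DataFrame({'userId': [1, 1, 2, 2, 2],
                   'rating': [4.0, 2.0, 5.0, 3.0, 1.0]})

# split: one sub-frame per key; apply: mean within each; combine: back into a Series
by_hand = pd.Series({uid: df[df['userId'] == uid]['rating'].mean()
                     for uid in df['userId'].unique()})
grouped = df.groupby('userId')['rating'].mean()
print(list(grouped) == list(by_hand.sort_index()))  # True: same result
```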
### Filtering
Filtering is the action of deleting rows depending on a boolean function. For instance, the following removes users who rated only one movie.
```
ratings.groupby("userId")["rating"].count()
def filter_func(x):
return x["rating"].count() >= 2
filtered = ratings.groupby("userId").filter(filter_func)
filtered
filtered.groupby("userId")["rating"].count()
```
### Transformations
Transforming is the action of applying a function to each group that returns a result with the same shape as the input.
For instance, let us normalize the ratings so that they have zero mean for each user.
```
ratings.groupby("userId")["rating"].mean()
def center_ratings(x):
x["rating"] = x["rating"] - x["rating"].mean()
return x
centered = ratings.groupby("userId").apply(center_ratings)
centered.groupby("userId")["rating"].mean()
```
#### Another method using a lambda function [*]
```
centered = pd.DataFrame(ratings)
centered["rating"] = centered.groupby("userId")["rating"].transform(lambda x:x-x.mean())
```
### Aggregations [*]
Aggregations let you apply several aggregation operations at once.
```
ratings.groupby("userId")["rating"].aggregate([min,max,np.mean,np.median,len])
```
## <a id="dataExo"> f) Exercises </a>
<p style="text-align: right; font-size: 10px;"><a href="#top">Go to top</a></p>
<div class="exo"> **Exercise 3.2.1:** Bots Discovery<br/><br/>
In the dataset <tt>ratings_bots.csv</tt>, some users may be bots. To help a movie succeed, they add ratings (often favorable ones). To get better recommendations, we try to remove them by
<ul>
<li> Deleting all users with a mean rating above 4.7/5 (nobody is that nice) and counting them. <br/>
* **hint:** the [nunique](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.nunique.html) function may be helpful to count*</li>
<li> Deleting multiple reviews of a movie by a single user, keeping only the first one. As this is strange behavior, add a column "potential_bot" with a boolean flag: True for them and False for the others. What is the proportion of potential bots amongst the users? <br/>
* **hint:** the <tt>groupby</tt> function can be applied to several columns; also, <tt>reset_index(drop=True)</tt> removes the groupby indexing.* <br/>
* **hint:** remember the <tt>loc</tt> function, e.g. <tt>df.loc[df['userId'] == 128]</tt> returns a dataframe of the rows where the userId is 128; and <tt>df.loc[(df['userId'] == 128) & (df['movieId'] == 3825)]</tt> returns a dataframe of the rows where the userId is 128 <b>and</b> the movieId is 3825.* <br/>
* **hint:** 17 ratings have to be removed, for instance, user 128 has 3 ratings of the movie 3825.*</li>
</ul>
This dataset has around 100 000 ratings so hand picking won't do!
</div>
```
import pandas as pd
import numpy as np
ratings_bots = pd.read_csv('data/ml-small/ratings_bots.csv')
```
<div class="exo"> **Exercise 3.2.2:** Planets discovery <br/><br/>
We will use the Planets dataset, available via the [Seaborn package](http://seaborn.pydata.org/) (see further). It provides information on how astronomers found new planets around stars, *exoplanets*.
<ul>
<li>Display median, mean and quantile information for these planets' orbital periods, masses, and distances.</li>
<li>For each method, display statistics on the years in which planets were discovered using this technique.</li>
<li>Display a table giving the number of planets discovered by each method in each decade (1980s to 2010s)<br/>
* **hint:** the decade can be obtained as a series with <tt>10 * (planets['year'] // 10)</tt>, and this series can be used in a groupby operation on the dataframe even though it is not a column.*</li>
</ul>
</div>
```
import pandas as pd
import numpy as np
planets = pd.read_csv('data/planets.csv')
print(planets.shape)
planets.head()
```
---
<div id="style"></div>
### Package Check and Styling
<p style="text-align: right; font-size: 10px;"><a href="#top">Go to top</a></p>
```
import lib.notebook_setting as nbs
packageList = ['IPython', 'numpy', 'scipy', 'matplotlib', 'cvxopt', 'pandas', 'seaborn', 'sklearn', 'tensorflow']
nbs.packageCheck(packageList)
nbs.cssStyling()
```
```
#python deep_dream.py path_to_your_base_image.jpg prefix_for_results
#python deep_dream.py img/mypic.jpg results/dream
#from __future__ import print_function
from tensorflow import keras
import numpy as np
import argparse
from keras.applications import inception_v3
from keras import backend as K
from keras.preprocessing import image
from keras.applications.inception_v3 import preprocess_input
from keras.applications.inception_v3 import decode_predictions
from keras.models import Model, load_model
import os
os.environ['KERAS_BACKEND'] = 'tensorflow'
```
```
import os
os.environ['KERAS_BACKEND'] = 'tensorflow'
from tensorflow import keras
from keras.applications import inception_v3
from keras.applications.inception_v3 import decode_predictions
from keras.models import Model, load_model
import keras.backend as K
from keras.preprocessing.image import load_img, img_to_array
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
from numpy.linalg import norm
import scipy
import pickle
from os import listdir
from os.path import isfile, join
import operator
from PIL import Image
from keras.preprocessing import image
import os
import math
import PIL.Image
from sklearn.metrics import pairwise
import matplotlib.pyplot as plt
from keras.applications.inception_v3 import preprocess_input
from sklearn import linear_model
from sklearn import metrics
from sklearn.model_selection import train_test_split
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
warnings.filterwarnings("ignore", category=FutureWarning)
import pandas as pd
from scipy import stats
import tensorflow as tf
K.backend()
K.set_learning_phase(0)
model = inception_v3.InceptionV3(weights='imagenet',include_top=False)
dream = model.input
print('Model loaded.')
import os
import cav
working_dir = '/Users/tyler/Desktop/dissertation/programming/tcav_on_azure'
concept = 'horizontal'
cav_dict = {}
layer_names = ['mixed0','mixed1','mixed2','mixed3','mixed4','mixed5','mixed6','mixed7','mixed8','mixed9','mixed10']
#layer_names = ['mixed0']
for layer_name in layer_names:
subpath = concept + '-random500_0-' + layer_name
cav_path = 'cav_dir/' + subpath + '-linear-0.1.pkl'
path = os.path.join(working_dir, cav_path)
this_cav = cav.CAV.load_cav(path)
cav_dict[layer_name] = this_cav.cavs[0]
'''
concept = 'striped_sub_1'
layer_names = ['mixed0','mixed1','mixed2','mixed3','mixed4','mixed5','mixed6','mixed7','mixed8','mixed9','mixed10']
layer_names = ['mixed6']
for layer_name in layer_names:
subpath = concept + '-random500_0-' + layer_name
cav_path = 'cav_dir/' + subpath + '-linear-0.1.pkl'
path = os.path.join(working_dir, cav_path)
this_cav = cav.CAV.load_cav(path)
cav_dict[layer_name] = this_cav.cavs[0]
'''
concept_p = 'grassland_sub_3'
concept_n = 'N_0'
target_class = 'zebra'
split_seed = 1
#cav_dict = {}
replace_these = ['mixed7','mixed8','mixed9','mixed10']
for layer in replace_these:
acts_p,_ = get_acts_for_concept(concept_p,layer)
acts_n,_ = get_acts_for_concept(concept_n,layer)
#_,acts_class = get_acts_for_concept(target_class,layer)
x = np.concatenate((acts_p,acts_n))
y = np.concatenate((np.zeros(500),np.ones(500)))
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.33, stratify=y,random_state=split_seed)
x_train_p_list,x_train_n_list =[],[]
for idx,a in enumerate(x_train):
if y_train[idx] == 0:
x_train_p_list.append(a)
else:
x_train_n_list.append(a)
x_train_p, x_train_n = np.array(x_train_p_list),np.array(x_train_n_list)
mu_p = (x_train_p).mean(axis=0)
mu_n = (x_train_n).mean(axis=0)
cav_params = mu_p - mu_n
cav_dict[layer] = cav_params
#for layer in layer_names:
# if layer not in cav_dict:
# cav_dict[layer] = ''
step = 0.02 # Gradient ascent step size
num_octave = 4 # Number of scales at which to run gradient ascent
octave_scale = 1.4 # Size ratio between scales
iterations = 30 # Number of ascent steps per scale
max_loss = 100000000000
#result_prefix = '/home/tyler/Desktop/tcav_on_azure/results/test'
size_dict = {'mixed0': 313600,'mixed1': 352800,'mixed2': 352800,'mixed3': 221952,'mixed4': 221952,'mixed5': 221952,'mixed6': 221952,'mixed7': 221952,'mixed8': 81920,'mixed9': 131072,'mixed10': 131072}
settings = {
'features': {
#'mixed0': 0,#/313600,
#'mixed1': 1,#/352800,
#'mixed2': 0,#/352800,
#'mixed3': 0,#/221952,
#'mixed4': 0,#/221952,
#'mixed5': 0,#/221952,
#'mixed6': 0,#/221952,
'mixed7': 1,#/221952,
'mixed8': 1,#/81920,
'mixed9': 1,#/131072,
'mixed10': 1#/131072
},}
#cav_dict['mixed9'] = pickle.load(open('mu_great_dane_9','rb'))
#cav_dict['mixed8'] = pickle.load(open('mu_great_dane_8','rb'))
#cav_dict['mixed7'] = pickle.load(open('mu_great_dane_7','rb'))
#cav_dict['mixed6'] = pickle.load(open('mu_great_dane_6','rb'))
layer_dict = dict([(layer.name, layer) for layer in model.layers])
sess = K.get_session()
loss_2 = K.variable(0.)
for layer_name in settings['features']:
coeff = settings['features'][layer_name]
assert layer_name in layer_dict.keys(), 'Layer ' + layer_name + ' not found in model.'
coeff = settings['features'][layer_name]
acts = layer_dict[layer_name].output
flat_acts = K.flatten(acts)
len_of_acts = flat_acts.shape[0]
print(len_of_acts)
layer_cav = K.variable(cav_dict[layer_name].reshape(-1,1))
#layer_cav_slice = K.slice(layer_cav,0,flat_acts.shape[0])
n = layer_cav.shape[0]
print(n, layer_name)
n_tensor = K.constant(n.value/1000)
features_shape = tf.shape(flat_acts)
H = features_shape[0]
#W = features_shape[2]
#layer_cav_slice = K.reshape(layer_cav, shape=[H,1])
print(H)
layer_cav_slice = K.slice(layer_cav,(0,0),(H,1))
flat_acts_slice = K.reshape(flat_acts, shape=[1,H])
print('layer_cav shape is ' + str(layer_cav_slice.shape))
print('acts shape is ' + str(flat_acts_slice.shape))
#loss_2 += coeff * K.dot(K.reshape(acts,(1,n)),layer_cav)
#scaling = K.prod(K.cast(K.shape(acts), 'float32'))
loss_2 += coeff * K.dot(flat_acts_slice,layer_cav_slice) #/ scaling
#loss_2 += coeff * eu_distance(acts,layer_cav)
#loss_2 -= K.sum(K.abs(K.reshape(acts,(n,1))-layer_cav),axis=0,keepdims=False)
#loss_2 += cosine_distance((flat_acts,layer_cav))
#loss_2 += K.dot(K.reshape(acts,(1,n)),layer_cav) / n_tensor
#print(loss_2.shape)
#loss_2 += 1000 * K.sum(K.square(model.input)) / (3 * 299 * 299)
#loss_2 -= 1 * K.sum(K.abs(model.input))
#loss_2 = loss
grads_2 = K.gradients(loss_2, model.input)[0]
grads_2 /= K.maximum(K.mean(K.abs(grads_2)), K.epsilon())
outputs_2 = [loss_2, grads_2, acts]
fetch_loss_and_grads_2 = K.function([model.input], outputs_2)
def eval_loss_and_grads(x):
outs = fetch_loss_and_grads_2([x])
loss_value = outs[0]
grad_values = outs[1]
return loss_value, grad_values
def gradient_ascent(x, iterations, step, max_loss=None):
for i in range(iterations):
jitter = 2*(np.random.random((img.shape[1], img.shape[2], 3)) - 0.5) * jitter_setting
jitter = np.expand_dims(jitter, axis=0)
#x += jitter
loss_value, grad_values = eval_loss_and_grads(x)
if max_loss is not None and loss_value > max_loss:
break
if i % 5 == 0:
print('..Loss value at', i, ':', loss_value)
x += step * grad_values
#x -= jitter
return x
```
## With Scaling
```
base_image_path = os.path.join(working_dir,'concepts/horse_sub_1/img252.jpg')
base_image_path = os.path.join(working_dir,'concepts/noise_white/img1.jpg')
#base_image_path = os.path.join(working_dir,'sky.jpg')
jitter_setting = .1
tf.logging.set_verbosity(0)
img_pic = image.load_img(base_image_path, target_size=(350, 350))
#img = image.img_to_array(img_pic)
img = preprocess_image(base_image_path)
img = resize_img(img,(299,299,3))
#img = np.expand_dims(img, axis=0) / 255
#jitter = .1*(np.random.random((img.shape[1], img.shape[2], 3)) - 0.5) * jitter_setting
#jitter = np.expand_dims(jitter, axis=0)
#img += jitter
if K.image_data_format() == 'channels_first':
original_shape = img.shape[2:]
else:
original_shape = img.shape[1:3]
successive_shapes = [original_shape]
for i in range(1, num_octave):
shape = tuple([int(dim / (octave_scale ** i)) for dim in original_shape])
x,y = shape
if x < 400 and y < 400:
successive_shapes.append(shape)
successive_shapes = successive_shapes[::-1]
original_img = np.copy(img)
shrunk_original_img = resize_img(img, successive_shapes[0])
for shape in successive_shapes:
print('Processing image shape', shape)
img = resize_img(img, shape)
img = gradient_ascent(img,
iterations=iterations,
step=step,
max_loss=max_loss)
upscaled_shrunk_original_img = resize_img(shrunk_original_img, shape)
same_size_original = resize_img(original_img, shape)
lost_detail = same_size_original - upscaled_shrunk_original_img
img += lost_detail
shrunk_original_img = resize_img(original_img, shape)
img -= jitter
save_img(img, fname='results/tmp.png')
#img_path = 'concepts/striped_sub_1/striped_0004.jpg'
img_path = 'results/tmp.png'
show_img = image.load_img(img_path)
show_img
#decode_predictions(preds, top=3)
model = load_model('v3_model.h5')
#get_prediction(prep(img_path))
preds = sess.run(endpoints_v3['prediction'], {endpoints_v3['input']: prep(img_path)})
preds.shape
sess = K.get_session()
endpoints_v3 = dict(
input=model.inputs[0].name,
input_tensor=model.inputs[0],
logit=model.outputs[0].name,
prediction=model.outputs[0].name,
prediction_tensor=model.outputs[0],)
def get_prediction(img):
img = preprocess_input(img)
preds = sess.run(endpoints_v3['prediction'], {endpoints_v3['input']: img})
top = decode_predictions(preds, top=3)
return top
def prep(path):
img_pic = image.load_img(path, target_size=(299, 299))
img = image.img_to_array(img_pic)
img = np.expand_dims(img, axis=0)
img = preprocess_input(img)
return img
#save_img(img, fname='results/zebra/0_1_2_3.png')
#show_img = image.load_img('results/striped_7_8_9.png', target_size=(299, 299))
#show_img
img_pic
#img_in = image.load_img(base_image_path, target_size=(299, 299))
# 1. run model
# 2. run bottlenecks_tensors
# 3. sess = K.get_session()
img = preprocess_image(base_image_path)
bottleneck_name = 'mixed9'
layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{model.input: img})
#layer_9_acts.shape
img.shape
successive_shapes
img = resize_img(img, shape)
img.shape
#img = preprocess_image(base_image_path)
bottleneck_name = 'mixed9'
layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{model.input: img})
layer_9_acts.shape
img.shape
img = gradient_ascent(img,
iterations=iterations,
step=step,
max_loss=max_loss)
x = img
eval_loss_and_grads(x)
layer_cav = K.constant(cav_dict[layer_name].reshape(-1,1))
n = layer_cav.shape[0]
print(n, layer_name)
#n_tensor = K.constant(n.value/1000)
coeff = settings['features'][layer_name]
acts = layer_dict[layer_name].output
#flat_acts = K.variable(K.reshape(acts,(1,n)))
#loss_2 += coeff * K.dot(K.reshape(acts,(1,n)),layer_cav) / n_tensor
acts
acts_sq = K.squeeze(acts,axis = 1)
acts_sq
flat_acts
layer_cav
K.slice(acts)
layer_cav
layer_dict['mixed9'].output
cav_dict['mixed9'].shape[0]
cav_dict['mixed9'].shape[0] / 2048
x / 288
model.layers.o
layer_cav.set_shape(acts.shape)
#image.img_to_array(img_pic)
## No scaling
tf.logging.set_verbosity(0)
base_image_path = 'concepts/striped_sub_1/striped_0004.jpg'
base_image_path = '/home/tyler/Desktop/tcav_on_azure/concepts/noise_white/img1.jpg'
img_pic = image.load_img(base_image_path, target_size=(299, 299))
img = image.img_to_array(img_pic)
img = np.expand_dims(img, axis=0)
img = inception_v3.preprocess_input(img)
jitter = 2*(np.random.random((img.shape[1], img.shape[2], 3)) - 0.5) * .05
jitter = np.expand_dims(jitter, axis=0)
img += jitter
#original_img = np.copy(img)
img = gradient_ascent(img,iterations=iterations,step=step,max_loss=max_loss)
img -= jitter
img_name = 'placeholder'
save_img(img, fname='results/' + img_name + '.png')
#flat_act = np.reshape(np.asarray(acts).squeeze(), -1)
#flat_act_norm = keras.utils.normalize(flat_act)
#loss2 = euclidean_distance(vec_norm(layer_9_cav),flat_act_norm)
#loss_2 += K.sum(K.square(K.reshape(acts,(131072,)) - layer_9_cav_K))
#loss_2 += K.dot(K.reshape(acts,(1,131072)),K.transpose(layer_9_cav_K))
layer_name = 'mixed9'
layer_out = layer_dict[layer_name].output
layer_out
img_in = shrunk_original_img
img_in.shape
new_acts = fetch_loss_and_grads_2([img_in])[0]
new_acts
layer_9_acts[0][5][0]
new_acts[0][5][0]
```
## New Loss
```
def get_loss(this_img):
layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{endpoints_v3['input']: this_img})
flat_act = np.reshape(np.asarray(layer_9_acts).squeeze(), -1)
loss += euclidean_distance(vec_norm(layer_9_cav),vec_norm(flat_act))
return loss
get_loss(original_img)
original_img.shape
sess = K.get_session()
#my_graph = tf.get_default_graph()
#my_graph.get_collection()
sess
model.input
this_img = original_img
loss = K.variable(0.)
layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{model.input: this_img})
flat_act = np.reshape(np.asarray(layer_9_acts).squeeze(), -1)
loss += euclidean_distance(vec_norm(layer_9_cav),vec_norm(flat_act))
#K.clear_session()
layer_9_acts = layer_dict[layer_name].output
layer_9_acts
x.shape
sess.run(bottlenecks_tensors[bottleneck_name],
{self.ends['input']: examples})
#sess.run(bottlenecks_tensors[bottleneck_name],{model.input: img})
#layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{endpoints_v3['input']: img})
#flat_act = np.reshape(np.asarray(layer_9_acts).squeeze(), -1)
#layer_9_acts = sess.run(bottlenecks_tensors[bottleneck_name],{endpoints_v3['input']: x})
#flat_act = np.reshape(np.asarray(layer_9_acts).squeeze(), -1)
#euclidean_distance(vec_norm(layer_9_cav),vec_norm(flat_act))
```
## Static functions
```
def preprocess_image(image_path):
# Util function to open, resize and format pictures
# into appropriate tensors.
img = load_img(image_path)
img = img_to_array(img)
img = np.expand_dims(img, axis=0)
img = inception_v3.preprocess_input(img)
return img
def deprocess_image(x):
# Util function to convert a tensor into a valid image.
if K.image_data_format() == 'channels_first':
x = x.reshape((3, x.shape[2], x.shape[3]))
x = x.transpose((1, 2, 0))
else:
x = x.reshape((x.shape[1], x.shape[2], 3))
x /= 2.
x += 0.5
x *= 255.
x = np.clip(x, 0, 255).astype('uint8')
return x
def resize_img(img, size):
img = np.copy(img)
if K.image_data_format() == 'channels_first':
factors = (1, 1,
float(size[0]) / img.shape[2],
float(size[1]) / img.shape[3])
else:
factors = (1,
float(size[0]) / img.shape[1],
float(size[1]) / img.shape[2],
1)
return scipy.ndimage.zoom(img, factors, order=1)
def euclidean_distance(a,b):
return np.linalg.norm(a-b)
def vec_norm(vec):
return vec / np.linalg.norm(vec)
def get_bottleneck_tensors():
"""Add Inception bottlenecks and their pre-Relu versions to endpoints dict."""
graph = tf.get_default_graph()
bn_endpoints = {}
for op in graph.get_operations():
# change this below string to change which layers are considered bottlenecks
# use 'ConcatV2' for InceptionV3
# use 'MaxPool' for VGG16 (for example)
if 'ConcatV2' in op.type:
name = op.name.split('/')[0]
bn_endpoints[name] = op.outputs[0]
return bn_endpoints
endpoints_v3 = dict(
input=model.inputs[0].name,
input_tensor=model.inputs[0],
logit=model.outputs[0].name,
prediction=model.outputs[0].name,
prediction_tensor=model.outputs[0],
)
bottlenecks_tensors = get_bottleneck_tensors()
bottleneck_name = 'mixed9'
def save_img(img, fname):
pil_img = deprocess_image(np.copy(img))
scipy.misc.imsave(fname, pil_img)
def eu_distance(A,B):
return K.sum(K.abs(A-B),axis=1,keepdims=True)
#Process:
# Load the original image.
# Define a number of processing scales (i.e. image shapes), from smallest to largest.
# Resize the original image to the smallest scale.
# For every scale, starting with the smallest (i.e. current one):
# Run gradient ascent
# Upscale image to the next scale
# Reinject the detail that was lost at upscaling time
# Stop when we are back to the original size.
#To obtain the detail lost during upscaling, we simply take the original image, shrink it down, upscale it,
# and compare the result to the (resized) original image.
def prep2(filename):
shape=(299, 299)
img = np.array(PIL.Image.open(open(filename, 'rb')).convert('RGB').resize(shape, PIL.Image.BILINEAR))
# Normalize pixel values to between 0 and 1.
img = np.float32(img) / 255.0
if not (len(img.shape) == 3 and img.shape[2] == 3):
return None
else:
return img
this_img = np.expand_dims(prep2('concepts/random500_0/ILSVRC2012_val_00001172.JPEG'),axis=0)
def get_acts_for_concept(concept,layer):
concept_dir = os.path.join(working_dir,'concepts/'+concept)
image_list = files_from_dir_ext(concept_dir,'jp')
image_list.sort()
act_path = os.path.join(working_dir,'final_acts/' + concept + '-' + layer + '.pkl')
n = size_dict[layer]
nn = size_dict_orig[layer]
try:
this_dict = pickle.load(open(act_path, 'rb'))
except:
this_dict = {}
#print(nn)
acts_ran = np.zeros((len(image_list),n))
orig = np.zeros((len(image_list),nn[1],nn[2],nn[3]))
for idx,image_path in enumerate(image_list):
if image_path not in this_dict:
img = prep2(os.path.join(concept_dir,image_path))
this_img = np.expand_dims(img, axis=0)
acts_orig = get_acts_for_layer_new(layer,this_img)
acts_ran[idx] = acts_orig.reshape(-1)
orig[idx] = acts_orig
this_dict[image_path] = (acts_orig.reshape(-1),acts_orig)
else:
acts_ran[idx],orig[idx] = this_dict[image_path]
#print('acts already exist')
pickle.dump(this_dict,open(act_path, 'wb'))
return acts_ran,orig
def files_from_dir_ext(a_dir,ext):
onlyfiles = [f for f in os.listdir(a_dir) if os.path.isfile(os.path.join(a_dir, f))]
this_ext = [e for e in onlyfiles if ext in e.lower()]
return this_ext
layer_dict = dict([(layer.name, layer) for layer in model.layers])
sess = K.get_session()
acts_mixed0_f = K.function([model.input],[layer_dict['mixed0'].output])
acts_mixed1_f = K.function([model.input],[layer_dict['mixed1'].output])
acts_mixed2_f = K.function([model.input],[layer_dict['mixed2'].output])
acts_mixed3_f = K.function([model.input],[layer_dict['mixed3'].output])
acts_mixed4_f = K.function([model.input],[layer_dict['mixed4'].output])
acts_mixed5_f = K.function([model.input],[layer_dict['mixed5'].output])
acts_mixed6_f = K.function([model.input],[layer_dict['mixed6'].output])
acts_mixed7_f = K.function([model.input],[layer_dict['mixed7'].output])
acts_mixed8_f = K.function([model.input],[layer_dict['mixed8'].output])
acts_mixed9_f = K.function([model.input],[layer_dict['mixed9'].output])
acts_mixed10_f = K.function([model.input],[layer_dict['mixed10'].output])
def get_acts_for_layer_new(layer_name,input_img):
acts = None
if layer_name=='mixed0':
acts = acts_mixed0_f([input_img])[0]
if layer_name=='mixed1':
acts = acts_mixed1_f([input_img])[0]
if layer_name=='mixed2':
acts = acts_mixed2_f([input_img])[0]
if layer_name=='mixed3':
acts = acts_mixed3_f([input_img])[0]
if layer_name=='mixed4':
acts = acts_mixed4_f([input_img])[0]
if layer_name=='mixed5':
acts = acts_mixed5_f([input_img])[0]
if layer_name=='mixed6':
acts = acts_mixed6_f([input_img])[0]
if layer_name=='mixed7':
acts = acts_mixed7_f([input_img])[0]
if layer_name=='mixed8':
acts = acts_mixed8_f([input_img])[0]
if layer_name=='mixed9':
acts = acts_mixed9_f([input_img])[0]
if layer_name=='mixed10':
acts = acts_mixed10_f([input_img])[0]
return acts
bn_names = ['mixed0','mixed1','mixed2','mixed3','mixed4','mixed5','mixed6','mixed7','mixed8','mixed9','mixed10']
size_dict = {}
for bn in bn_names:
acts_orig = get_acts_for_layer_new(bn,this_img)
size_dict[bn] = acts_orig.reshape(-1).shape[0]
size_dict_orig = {}
for bn in bn_names:
acts_orig = get_acts_for_layer_new(bn,this_img)
size_dict_orig[bn] = acts_orig.shape
```
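The octave process described in the closing comments (shrink the image, run gradient ascent per scale, upscale, reinject the detail lost at upscaling time) can be sketched in isolation. This is only an illustrative skeleton: a nearest-neighbour resize stands in for `resize_img`, and a no-op takes the place of gradient ascent — it is not the notebook's actual code.

```
import numpy as np

def resize(img, shape):
    # nearest-neighbour zoom to (H, W), keeping the channel axis untouched
    rows = (np.arange(shape[0]) * img.shape[0] / shape[0]).astype(int)
    cols = (np.arange(shape[1]) * img.shape[1] / shape[1]).astype(int)
    return img[rows][:, cols]

octave_scale, num_octave = 1.4, 3
img = np.random.rand(140, 140, 3).astype(np.float32)
original = img.copy()

# processing shapes, smallest first
shapes = [tuple(int(d / octave_scale ** i) for d in img.shape[:2])
          for i in range(num_octave)][::-1]
shrunk = resize(original, shapes[0])
for shape in shapes:
    img = resize(img, shape)              # (gradient ascent would modify img here)
    lost_detail = resize(original, shape) - resize(shrunk, shape)
    img = img + lost_detail               # reinject detail lost by the downscale
    shrunk = resize(original, shape)
print(img.shape)  # (140, 140, 3): back to the original size
```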
```
import speech_recognition as sr #recognizes speech
import time #using it to delay the response
import webbrowser #for the urls
import random #to randomly generate a filing for the audio file
import os #to help us utilize the remove function
import playsound #plays the sound directly without using media-player or vlc
from gtts import gTTS #imports the google translate text to speech
def runApp():
#initialize the recognizer
r = sr.Recognizer()
def audio_record(ask = False): #created an argument ask which is not compulsory
with sr.Microphone() as source:#the computer microphone is the source
r.adjust_for_ambient_noise(source)
if ask:
siri_speak(ask)
#make the program the background
audio = r.listen(source) #this basically listens to the source in this case microphone and stores
voice = '' #empty string
try:
voice = r.recognize_google(audio)
except sr.UnknownValueError: #this prints if the audio is not clear
siri_speak("Sorry,what did you say?")
except sr.RequestError:
siri_speak("Sorry, I seem to be experiencing internet downtime") #this prints if there is runtime error
return(voice)
def siri_speak(audio_str): #function that now makes siri speak out
tts = gTTS(audio_str, lang ='en')
r = random.randint(1, 10000) #creates random variables, upto 10000 for the audio files
audio_file = 'audio'+ str(r)+'.mp3' #generating the audio files names
tts.save(audio_file) #saves them then plays them
playsound.playsound(audio_file) #plays them
print(audio_str) #print the audio_str
def respond(voice): #storing the responses
if 'what is your name' in voice:
siri_speak('My name is Siri') #prints
elif 'Show me location' in voice: #checked before 'Show me', which would otherwise match first
location = audio_record("What map do you want displayed?") #opens url
url = 'https://google.nl/maps/place/search?q='+location
webbrowser.get().open(url)
siri_speak('Showing '+location)
elif 'Show me' in voice:
show_me = audio_record("What do you want to see?") #records another audio
url = 'https://google.com/search?q='+show_me #opens urls
webbrowser.get().open(url)
siri_speak('Results for '+show_me)
elif 'Youtube' in voice:
url = 'https://www.youtube.com/watch?v='
webbrowser.get().open(url)
siri_speak('Opening Youtube')
elif 'Siri,am confused' in voice:
siri_speak('Am sorry, I really cannot help you')
elif 'When was the last time i slept?' in voice:
siri_speak('This assignment was harder than you obviously thought but its none of my business')
else:
siri_speak('Oops...')
siri_speak("Hello there, I am Siri, your assistant, how can i help you?")
siri_speak("........Listening.........")
time.sleep(5)
voice = audio_record()
respond(voice)
import tkinter as tk #a library for building GUI
root = tk.Toplevel() #creates the root
root.geometry('1000x600') #says how big the interface
root.title("NLP Search Engine with TTS/STT")
canvas = tk.Canvas(root,height =600, width = 1000, bg='#263D42') #creates a green canvas
canvas.pack()
photo = tk.PhotoImage(file = 'C:\\Users\\ADMIN\\Desktop\\Beautiful-lake-sunset-wallpaper.png')
canvas.create_image(0,0,image = photo, anchor ='nw') #image is at nw
runApp_btn = tk.Button(canvas, text = "Search the web.......",
width = "40", pady = 15,
font = ("Helvetica", 15, "bold"), command = runApp,
bg='white') #creates a white button
runApp_btn.place(x=230,y=300)
root.mainloop() #makes our app run
```
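One side effect of `siri_speak` above is that every call saves a new `audio<N>.mp3` file that is never deleted. A small stdlib-only helper (hypothetical, not part of the original script) can clear them out after a session:

```python
import glob
import os

def cleanup_audio_files(directory="."):
    """Delete the temporary audio<N>.mp3 files that siri_speak leaves behind."""
    removed = []
    for path in glob.glob(os.path.join(directory, "audio*.mp3")):
        os.remove(path)
        removed.append(path)
    return removed

print(cleanup_audio_files())  # list of deleted files (empty if none are present)
```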
| github_jupyter |
# T81-558: Applications of Deep Neural Networks
**Module 14: Other Neural Network Techniques**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 14 Video Material
* Part 14.1: What is AutoML [[Video]](https://www.youtube.com/watch?v=TFUysIR5AB0&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_01_automl.ipynb)
* Part 14.2: Using Denoising AutoEncoders in Keras [[Video]](https://www.youtube.com/watch?v=4bTSu6_fucc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_02_auto_encode.ipynb)
* Part 14.3: Anomaly Detection in Keras [[Video]](https://www.youtube.com/watch?v=1ySn6h2A68I&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_03_anomaly.ipynb)
* **Part 14.4: Training an Intrusion Detection System with KDD99** [[Video]](https://www.youtube.com/watch?v=VgyKQ5MTDFc&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_14_04_ids_kdd99.ipynb)
* Part 14.5: The Deep Learning Technologies I am Excited About [[Video]]() [[Notebook]](t81_558_class_14_05_new_tech.ipynb)
# Part 14.4: Training an Intrusion Detection System with KDD99
The [KDD-99 dataset](http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html) is very famous in the security field and almost a "hello world" of intrusion detection systems in machine learning.
# Read in Raw KDD-99 Dataset
```
import pandas as pd
from tensorflow.keras.utils import get_file
try:
path = get_file('kddcup.data_10_percent.gz', origin='http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz')
except:
print('Error downloading')
raise
print(path)
# This file is a CSV, just no CSV extension or headers
# Download from: http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
df = pd.read_csv(path, header=None)
print("Read {} rows.".format(len(df)))
# df = df.sample(frac=0.1, replace=False) # Uncomment this line to sample only 10% of the dataset
df.dropna(inplace=True,axis=1) # For now, just drop columns with missing values (axis=1 drops columns, not rows)
# The CSV file has no column heads, so add them
df.columns = [
'duration',
'protocol_type',
'service',
'flag',
'src_bytes',
'dst_bytes',
'land',
'wrong_fragment',
'urgent',
'hot',
'num_failed_logins',
'logged_in',
'num_compromised',
'root_shell',
'su_attempted',
'num_root',
'num_file_creations',
'num_shells',
'num_access_files',
'num_outbound_cmds',
'is_host_login',
'is_guest_login',
'count',
'srv_count',
'serror_rate',
'srv_serror_rate',
'rerror_rate',
'srv_rerror_rate',
'same_srv_rate',
'diff_srv_rate',
'srv_diff_host_rate',
'dst_host_count',
'dst_host_srv_count',
'dst_host_same_srv_rate',
'dst_host_diff_srv_rate',
'dst_host_same_src_port_rate',
'dst_host_srv_diff_host_rate',
'dst_host_serror_rate',
'dst_host_srv_serror_rate',
'dst_host_rerror_rate',
'dst_host_srv_rerror_rate',
'outcome'
]
# display 5 rows
df[0:5]
```
# Analyzing a Dataset
The following script can be used to give a high-level overview of how a dataset appears.
```
ENCODING = 'utf-8'
def expand_categories(values):
result = []
s = values.value_counts()
t = float(len(values))
for v in s.index:
result.append("{}:{}%".format(v,round(100*(s[v]/t),2)))
return "[{}]".format(",".join(result))
def analyze(df):
print()
cols = df.columns.values
total = float(len(df))
print("{} rows".format(int(total)))
for col in cols:
uniques = df[col].unique()
unique_count = len(uniques)
if unique_count>100:
print("** {}:{} ({}%)".format(col,unique_count,int(((unique_count)/total)*100)))
else:
print("** {}:{}".format(col,expand_categories(df[col])))
# Analyze KDD-99
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
analyze(df)
```
# Encode the feature vector
Encode every row in the dataset. This is not instant!
```
# Encode a numeric column as zscores
def encode_numeric_zscore(df, name, mean=None, sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name] - mean) / sd
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df, name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = f"{name}-{x}"
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Now encode the feature vector
encode_numeric_zscore(df, 'duration')
encode_text_dummy(df, 'protocol_type')
encode_text_dummy(df, 'service')
encode_text_dummy(df, 'flag')
encode_numeric_zscore(df, 'src_bytes')
encode_numeric_zscore(df, 'dst_bytes')
encode_text_dummy(df, 'land')
encode_numeric_zscore(df, 'wrong_fragment')
encode_numeric_zscore(df, 'urgent')
encode_numeric_zscore(df, 'hot')
encode_numeric_zscore(df, 'num_failed_logins')
encode_text_dummy(df, 'logged_in')
encode_numeric_zscore(df, 'num_compromised')
encode_numeric_zscore(df, 'root_shell')
encode_numeric_zscore(df, 'su_attempted')
encode_numeric_zscore(df, 'num_root')
encode_numeric_zscore(df, 'num_file_creations')
encode_numeric_zscore(df, 'num_shells')
encode_numeric_zscore(df, 'num_access_files')
encode_numeric_zscore(df, 'num_outbound_cmds')
encode_text_dummy(df, 'is_host_login')
encode_text_dummy(df, 'is_guest_login')
encode_numeric_zscore(df, 'count')
encode_numeric_zscore(df, 'srv_count')
encode_numeric_zscore(df, 'serror_rate')
encode_numeric_zscore(df, 'srv_serror_rate')
encode_numeric_zscore(df, 'rerror_rate')
encode_numeric_zscore(df, 'srv_rerror_rate')
encode_numeric_zscore(df, 'same_srv_rate')
encode_numeric_zscore(df, 'diff_srv_rate')
encode_numeric_zscore(df, 'srv_diff_host_rate')
encode_numeric_zscore(df, 'dst_host_count')
encode_numeric_zscore(df, 'dst_host_srv_count')
encode_numeric_zscore(df, 'dst_host_same_srv_rate')
encode_numeric_zscore(df, 'dst_host_diff_srv_rate')
encode_numeric_zscore(df, 'dst_host_same_src_port_rate')
encode_numeric_zscore(df, 'dst_host_srv_diff_host_rate')
encode_numeric_zscore(df, 'dst_host_serror_rate')
encode_numeric_zscore(df, 'dst_host_srv_serror_rate')
encode_numeric_zscore(df, 'dst_host_rerror_rate')
encode_numeric_zscore(df, 'dst_host_srv_rerror_rate')
# display 5 rows
df.dropna(inplace=True,axis=1)
df[0:5]
# This is the numeric feature vector, as it goes to the neural net
# Convert to numpy - Classification
x_columns = df.columns.drop('outcome')
x = df[x_columns].values
dummies = pd.get_dummies(df['outcome']) # Classification
outcomes = dummies.columns
num_classes = len(outcomes)
y = dummies.values
df.groupby('outcome')['outcome'].count()
```
# Train the Neural Network
```
import pandas as pd
import io
import requests
import numpy as np
import os
from sklearn.model_selection import train_test_split
from sklearn import metrics
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.callbacks import EarlyStopping
# Create a test/train split. 25% test
# Split into train/test
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
# Create neural net
model = Sequential()
model.add(Dense(10, input_dim=x.shape[1], activation='relu')) # input_dim is only needed on the first layer
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(y.shape[1],activation='softmax')) # one output per class (a 1-unit bottleneck before softmax would destroy information)
model.compile(loss='categorical_crossentropy', optimizer='adam')
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3,
patience=5, verbose=1, mode='auto')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
callbacks=[monitor],verbose=2,epochs=1000)
# Measure accuracy
pred = model.predict(x_test)
pred = np.argmax(pred,axis=1)
y_eval = np.argmax(y_test,axis=1)
score = metrics.accuracy_score(y_eval, pred)
print("Validation score: {}".format(score))
```
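Overall accuracy can hide how individual attack classes are confused with one another. A sketch of a per-class breakdown with scikit-learn, shown here on toy label arrays standing in for the real `pred`/`y_eval` computed above:

```python
import numpy as np
from sklearn import metrics

# Toy label arrays standing in for the real `y_eval`/`pred` computed above
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])

cm = metrics.confusion_matrix(y_true, y_pred)
print(cm)  # rows = true class, columns = predicted class
print(metrics.classification_report(y_true, y_pred, zero_division=0))
```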
| github_jupyter |
# Assignment for the "Python, git" class
## Task 1
Two strings are given, long_phrase and short_phrase. Write code that checks whether the long phrase long_phrase really is longer than the short one, short_phrase, and prints True or False depending on the result of the comparison.
<pre>
long_phrase = 'Насколько проще было бы писать программы, если бы не заказчики'
short_phrase = '640Кб должно хватить для любых задач. Билл Гейтс (по легенде)'
</pre>
```
def compare_string_length(str1, str2):
'''Compares the lengths of two strings.
Returns True if str1 is longer than str2, otherwise False
'''
return len(str1) > len(str2)
# Input data
long_phrase = 'Насколько проще было бы писать программы, если бы не заказчики'
short_phrase = '640Кб должно хватить для любых задач. Билл Гейтс (по легенде)'
# Check that the function works
print(compare_string_length(long_phrase, short_phrase)) # True
print(compare_string_length(short_phrase, long_phrase)) # False
```
## Task 2
A string text is given. Determine which of two letters occurs in it more often: 'а' or 'и'.
<pre>
text = 'Если программист в 9-00 утра на работе, значит, он там и ночевал'
</pre>
P.S. The replace method may help you.
```
def check_symbol_freq(text, symb1, symb2):
'''Checks which of the letters symb1 or symb2 occurs more often in the string text.
Returns the symbol with the greater number of occurrences
'''
symb1_str_len = len(text.replace(symb1, ''))
symb2_str_len = len(text.replace(symb2, ''))
return symb1 if symb2_str_len > symb1_str_len else symb2
# Input data
text = 'Если программист в 9-00 утра на работе, значит, он там и ночевал'
# Check that the function works (the task asks about 'а' and 'и')
print("Occurs more often: " + check_symbol_freq(text, 'а', 'и')) # 'а'
```
## Task 3
A file size in bytes is given. Convert this value to megabytes and print it in the format: 'File size is 213.68Mb'
```
file_size_bytes = 224059801 # hypothetical input value in bytes
file_size_mb = file_size_bytes / 2**20 # 1 Mb = 1024 * 1024 bytes
print("File size is %.2fMb" % file_size_mb)
```
## Task 4
Print the value of the sine of 30 degrees using the math.sin method.
```
# Import the math module
import math
# math.sin expects radians, so convert 30 degrees first
print('Sine of 30 degrees: ', math.sin(math.radians(30)))
```
## Task 5
In the previous task you most likely did not get the exact value 0.5, due to the finite precision of the sine computation. But why can even simple operations give an inexact result? Try printing the result of the operation 0.1 + 0.2
```
# The inaccuracy comes from how the processor represents fractional numbers (in binary floating point)
print(0.1 + 0.2)
```
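When an exact decimal result is needed, the standard library's `decimal` module sidesteps the binary representation issue:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 or 0.2 exactly
print(0.1 + 0.2)  # 0.30000000000000004

# decimal.Decimal works in base 10, so the same sum is exact
print(Decimal('0.1') + Decimal('0.2'))  # 0.3
```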
## Task 6 (harder)
Variables a and b hold two different numbers. Write code that swaps the values of a and b without using a third variable.
A number in binary notation is given: num=10011. Write an algorithm that converts this number to the familiar decimal system.
You may need a loop over all integers from 0 to m:
<pre>
for n in range(m)
</pre>
```
# Swap the values of two variables without using a third one
# (in idiomatic Python this is simply `x, y = y, x`)
x, y = 3, 4
x += y
y = x - y
x -= y
print(x, y)
def bin_to_dec(numb_bin):
'''Converts a number written with binary digits
into its decimal value
'''
numb_bin_str = str(numb_bin)
numb_bin_len = len(numb_bin_str)
numb_dec = 0
for i in range(numb_bin_len):
numb_dec += (2 ** (numb_bin_len - 1 - i)) * int(numb_bin_str[i])
return numb_dec
# The number from the task
num = 10011
# Quick tests of the function
print(bin_to_dec(0)) # 0
print(bin_to_dec(1)) # 1
print(bin_to_dec(11)) # 3
# Convert the number from the task
print(bin_to_dec(num)) # 19
```
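The hand-rolled conversion can be cross-checked against Python's built-in base handling, `int(s, 2)` and `bin()`:

```python
num = 10011                       # binary digits stored as a decimal literal, as in the task
decimal_value = int(str(num), 2)  # parse the digit string in base 2
print(decimal_value)              # 19
print(bin(decimal_value))         # 0b10011 — round trip back to binary
```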
| github_jupyter |
# Implementing AdaBoost
When the trees in the forest are trees of depth 1 (also known as decision stumps) and we
perform boosting instead of bagging, the resulting algorithm is called AdaBoost.
AdaBoost adjusts the dataset at each iteration by performing the following actions:
- Selecting a decision stump
- Increasing the weighting of cases that the decision stump labeled incorrectly while reducing the weighting of correctly labeled cases
This iterative weight adjustment causes each new classifier in the ensemble to prioritize
training the incorrectly labeled cases. As a result, the model adjusts by targeting
highly weighted data points.
Eventually, the stumps are combined to form a final classifier.
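The weight-update step described above can be sketched in a few lines of NumPy. This is a simplified two-class sketch of a single boosting round, not scikit-learn's exact implementation, and the stump's predictions are assumed to be given:

```python
import numpy as np

def adaboost_round(y_true, y_pred, weights):
    """One AdaBoost reweighting step for labels in {-1, +1}."""
    err = np.sum(weights * (y_pred != y_true)) / np.sum(weights)
    alpha = 0.5 * np.log((1 - err) / err)       # this stump's vote in the final classifier
    weights = weights * np.exp(-alpha * y_true * y_pred)
    return weights / weights.sum(), alpha       # renormalize so the weights sum to 1

y = np.array([1, 1, -1, -1])
pred = np.array([1, -1, -1, -1])                # the stump misclassifies sample 1
w, alpha = adaboost_round(y, pred, np.full(4, 0.25))
print(w)  # the misclassified sample now carries half of the total weight
```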
## Implementing AdaBoost in OpenCV
Although OpenCV provides a very efficient implementation of AdaBoost, it is hidden
under the Haar cascade classifier. Haar cascade classifiers are a very popular tool for face
detection, which we can illustrate through the example of the Lena image:
```
import cv2
img_bgr = cv2.imread('../data/lena.jpg', cv2.IMREAD_COLOR)
img_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
```
After loading the image in both color and grayscale, we load a pretrained Haar cascade:
```
filename = '../data/haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(filename)
```
The classifier will then detect faces present in the image using the following function call:
```
faces = face_cascade.detectMultiScale(img_gray, 1.1, 5)
```
Note that the algorithm operates only on grayscale images. That's why we saved two
pictures of Lena, one to which we can apply the classifier (`img_gray`), and one on which we
can draw the resulting bounding box (`img_bgr`):
```
color = (255, 0, 0)
thickness = 2
for (x, y, w, h) in faces:
cv2.rectangle(img_bgr, (x, y), (x + w, y + h),
color, thickness)
```
Then we can plot the image using the following code:
```
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(10, 6))
plt.imshow(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB));
```
Obviously, this picture contains only a single face. However, the preceding code will work
even on images where multiple faces could be detected. Try it out!
## Implementing AdaBoost in scikit-learn
In scikit-learn, AdaBoost is just another ensemble estimator. We can create an ensemble
from 50 decision stumps as follows:
```
from sklearn.ensemble import AdaBoostClassifier
ada = AdaBoostClassifier(n_estimators=50,
random_state=456)
```
We can load the breast cancer set once more and split it 75-25:
```
from sklearn.datasets import load_breast_cancer
cancer = load_breast_cancer()
X = cancer.data
y = cancer.target
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
X, y, random_state=456
)
```
Then fit and score AdaBoost using the familiar procedure:
```
ada.fit(X_train, y_train)
ada.score(X_test, y_test)
```
The result is remarkable, 96.5% accuracy!
We might want to compare this result to a random forest. However, to be fair, we should
make the trees in the forest all decision stumps. Then we will know the difference between
bagging and boosting:
```
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators=50,
max_depth=1,
random_state=456)
forest.fit(X_train, y_train)
forest.score(X_test, y_test)
```
Of course, if we let the trees be as deep as needed, we might get a better score:
```
forest = RandomForestClassifier(n_estimators=50,
random_state=456)
forest.fit(X_train, y_train)
forest.score(X_test, y_test)
```
A 99.3% accuracy score is impressive once the random forest classifier is allowed to grow as deep as possible.
As a last step in this chapter, let's talk about how to combine different types of models into
an ensemble.
| github_jupyter |
## 1) Importing Necessary Libraries
First off, we need to import several Python libraries such as numpy, pandas, matplotlib and seaborn.
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
```
## 2) Reading in and Exploring the Data
It's time to read in our training and testing data using `pd.read_csv`, and take a first look at the training data using the `describe()` function.
```
train = pd.read_csv("/kaggle/input/titanic/train.csv")
test = pd.read_csv("/kaggle/input/titanic/test.csv")
train.describe(include="all")
```
## 3) Data Analysis
We're going to consider the features in the dataset and how complete they are.
```
print(train.columns)
train.describe(include = "all")
print(train.isnull().sum())
```
**Exploratory Data Analysis (EDA)**
### Sex Feature
```
sns.barplot(x="Sex", y="Survived", data=train)
print("Percentage of females who survived:", train["Survived"][train["Sex"] == 'female'].value_counts(normalize = True)[1]*100)
print("Percentage of males who survived:", train["Survived"][train["Sex"] == 'male'].value_counts(normalize = True)[1]*100)
```
Females had a higher chance of survival than males, so the Sex feature is essential for our predictions.
### Pclass Feature
```
train['Pclass'].value_counts()
sns.barplot(x="Pclass", y="Survived", data=train)
print("Percentage of Pclass = 1 who survived:", train["Survived"][train["Pclass"] == 1].value_counts(normalize = True)[1]*100)
print("Percentage of Pclass = 2 who survived:", train["Survived"][train["Pclass"] == 2].value_counts(normalize = True)[1]*100)
print("Percentage of Pclass = 3 who survived:", train["Survived"][train["Pclass"] == 3].value_counts(normalize = True)[1]*100)
```
### SibSp Feature
```
train['SibSp'].value_counts()
sns.barplot(x="SibSp", y="Survived", data=train)
print("Percentage of SibSp = 0 who survived:", train["Survived"][train["SibSp"] == 0].value_counts(normalize = True)[1]*100)
print("Percentage of SibSp = 1 who survived:", train["Survived"][train["SibSp"] == 1].value_counts(normalize = True)[1]*100)
print("Percentage of SibSp = 2 who survived:", train["Survived"][train["SibSp"] == 2].value_counts(normalize = True)[1]*100)
```
### Parch Feature
```
sns.barplot(x="Parch", y="Survived", data=train)
plt.rcParams['figure.figsize'] = (15, 10)
plt.show()
```
People with fewer than four parents or children aboard are more likely to survive than those with four or more. Likewise, people traveling alone are less likely to survive than those with one to three parents or children aboard.
```
data=[train,test]
for dataset in data:
#complete missing age with the mode (most frequent value)
dataset['Age'].fillna(dataset['Age'].mode()[0], inplace = True)
train.isnull().sum()
```
### Age Feature
```
# train["Age"] = train["Age"].fillna(-0.5)
# test["Age"] = test["Age"].fillna(-0.5)
bins = [ 0, 5, 12, 18, 24, 35, 60, np.inf]
labels = ['Baby', 'Child', 'Teenager', 'Student', 'Young Adult', 'Adult', 'Senior']
train['AgeGroup'] = pd.cut(train["Age"], bins, labels = labels)
test['AgeGroup'] = pd.cut(test["Age"], bins, labels = labels)
sns.barplot(x="AgeGroup", y="Survived", data=train)
plt.rcParams['figure.figsize'] = (22, 15)
plt.show()
```
Babies are more likely to survive than any other age group.
```
train['Age'].isnull().sum()
```
# **CHECKING THE CORRELATION**
```
plt.rcParams['figure.figsize'] = (30, 13)
plt.style.use('ggplot')
sns.heatmap(train.corr(), annot = True, cmap = 'Wistia')
plt.title('Heatmap for the Dataset', fontsize = 20)
plt.show()
corr=train.corr()
corr["Survived"].sort_values(ascending=False)
```
## 5) Cleaning Data
Let's see how our test data looks!
```
test.describe(include="all")
```
### Cabin Feature
```
train = train.drop(['Cabin'], axis = 1)
test = test.drop(['Cabin'], axis = 1)
```
### Ticket Feature
```
train = train.drop(['Ticket'], axis = 1)
test = test.drop(['Ticket'], axis = 1)
```
### Embarked Feature
```
train['Embarked'].value_counts()
print("Number of people embarking in Southampton (S):")
southampton = train[train["Embarked"] == "S"].shape[0]
print(southampton)
print("Number of people embarking in Cherbourg (C):")
cherbourg = train[train["Embarked"] == "C"].shape[0]
print(cherbourg)
print("Number of people embarking in Queenstown (Q):")
queenstown = train[train["Embarked"] == "Q"].shape[0]
print(queenstown)
```
It's clear that the majority of people embarked in Southampton (S). Let's go ahead and fill in the missing values with S.
```
train = train.fillna({"Embarked": "S"})
```
### Age Feature
Next we'll fill in the missing values in the Age feature. Since a higher percentage of values are missing, it would be illogical to fill all of them with the same value (as we did with Embarked). Instead, let's try to find a way to predict the missing ages.
```
#combined group of both datasets
combine = [train, test]
for dataset in combine:
dataset['Title'] = dataset.Name.str.extract(' ([A-Za-z]+)\.', expand=False)
pd.crosstab(train['Title'], train['Sex'])
for dataset in combine:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Capt', 'Col',
'Don', 'Dr', 'Major', 'Rev', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace(['Countess', 'Lady', 'Sir'], 'Royal')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
train[['Title', 'Survived']].groupby(['Title'], as_index=False).mean()
title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Royal": 5, "Rare": 6}
for dataset in combine:
dataset['Title'] = dataset['Title'].map(title_mapping)
dataset['Title'] = dataset['Title'].fillna(0)
train.head()
train.isnull().sum()
mr_age = train[train["Title"] == 1]["AgeGroup"].mode()
mr_age
miss_age = train[train["Title"] == 2]["AgeGroup"].mode()
miss_age
mrs_age = train[train["Title"] == 3]["AgeGroup"].mode()
mrs_age
master_age = train[train["Title"] == 4]["AgeGroup"].mode()
master_age
royal_age = train[train["Title"] == 5]["AgeGroup"].mode()
royal_age
rare_age = train[train["Title"] == 6]["AgeGroup"].mode() #Adult
rare_age
age_title_mapping = {1: "Student", 2: "Student", 3: "Adult", 4: "Baby", 5: "Adult", 6: "Adult"}
for x in range(len(train["AgeGroup"])):
if train["AgeGroup"][x] == "Unknown":
train["AgeGroup"][x] = age_title_mapping[train["Title"][x]]
for x in range(len(test["AgeGroup"])):
if test["AgeGroup"][x] == "Unknown":
test["AgeGroup"][x] = age_title_mapping[test["Title"][x]]
age_mapping = {'Baby': 1, 'Child': 2, 'Teenager': 3, 'Student': 4, 'Young Adult': 5, 'Adult': 6, 'Senior': 7}
train['AgeGroup'] = train['AgeGroup'].map(age_mapping)
test['AgeGroup'] = test['AgeGroup'].map(age_mapping)
train.head()
train = train.drop(['Age'], axis = 1)
test = test.drop(['Age'], axis = 1)
```
### Name Feature
We can drop the name feature now that we've extracted the titles.
```
train = train.drop(['Name'], axis = 1)
test = test.drop(['Name'], axis = 1)
```
### Sex Feature
```
sex_mapping = {"male": 0, "female": 1}
train['Sex'] = train['Sex'].map(sex_mapping)
test['Sex'] = test['Sex'].map(sex_mapping)
train.head()
```
### Embarked Feature
```
embarked_mapping = {"S": 1, "C": 2, "Q": 3}
train['Embarked'] = train['Embarked'].map(embarked_mapping)
test['Embarked'] = test['Embarked'].map(embarked_mapping)
train.head()
```
### Fare Feature
It's time separate the fare values into some logical groups as well as filling in the single missing value in the test dataset.
```
for x in range(len(test["Fare"])):
if pd.isnull(test["Fare"][x]):
pclass = test["Pclass"][x] #Pclass = 3
test["Fare"][x] = round(train[train["Pclass"] == pclass]["Fare"].mean(), 4)
train['FareBand'] = pd.qcut(train['Fare'], 4, labels = [1, 2, 3, 4])
test['FareBand'] = pd.qcut(test['Fare'], 4, labels = [1, 2, 3, 4])
train = train.drop(['Fare'], axis = 1)
test = test.drop(['Fare'], axis = 1)
train.head()
test.head()
```
## 6) CHOOSING THE BEST MODEL
### Splitting the Training Data
We will use part of our training data (11% in this case) to test the accuracy of our different models.
```
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
from sklearn.metrics import roc_curve
from sklearn.model_selection import train_test_split
predictors = train.drop(['Survived', 'PassengerId'], axis=1)
target = train["Survived"]
X_train, X_test, Y_train, Y_test = train_test_split(predictors, target, test_size = 0.11, random_state = 0)
```
# NAIVE BAYES
```
from sklearn.naive_bayes import GaussianNB
nb = GaussianNB()
nb.fit(X_train,Y_train)
Y_pred_nb = nb.predict(X_test)
score_nb = round(accuracy_score(Y_pred_nb,Y_test)*100,2)
print("The accuracy score achieved using Naive Bayes is: "+str(score_nb)+" %")
results = confusion_matrix(Y_test, Y_pred_nb)
results
print(classification_report(Y_pred_nb,Y_test))
```
# LOGISTIC REGRESSION
```
from sklearn.linear_model import LogisticRegression
lr1 = LogisticRegression()
lr1.fit(X_train,Y_train)
Y_pred_lr1 = lr1.predict(X_test)
score_lr1 = round(accuracy_score(Y_pred_lr1,Y_test)*100,2)
print("The accuracy score achieved using Logistic Regression is: "+str(score_lr1)+" %")
results = confusion_matrix(Y_test, Y_pred_lr1)
results
print(classification_report(Y_pred_lr1,Y_test))
```
# DECISION TREE
```
from sklearn.tree import DecisionTreeClassifier
decisiontree = DecisionTreeClassifier()
decisiontree.fit(X_train, Y_train)
y_preddt = decisiontree.predict(X_test)
score_dt = round(accuracy_score(y_preddt,Y_test)*100,2)
print("The accuracy score achieved using Decision Tree is: "+str(score_dt)+" %")
results = confusion_matrix(Y_test, y_preddt)
results
print(classification_report(y_preddt,Y_test))
```
# RANDOM FOREST
```
from sklearn.ensemble import RandomForestClassifier
randomforest = RandomForestClassifier()
randomforest.fit(X_train, Y_train)
y_predrf = randomforest.predict(X_test)
score_rf = round(accuracy_score(y_predrf,Y_test)*100,2)
print("The accuracy score achieved using Random Forest is: "+str(score_rf)+" %")
results = confusion_matrix(Y_test, y_predrf)
results
print(classification_report(y_predrf,Y_test))
```
# K NEAREST NEIGHBORS
```
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, Y_train)
y_predknn = knn.predict(X_test)
score_knn = round(accuracy_score(y_predknn,Y_test)*100,2)
print("The accuracy score achieved using KNN is: "+str(score_knn)+" %")
results = confusion_matrix(Y_test, y_predknn)
results
print(classification_report(y_predknn,Y_test))
```
# GRADIENT BOOSTING CLASSIFIER
```
from sklearn.ensemble import GradientBoostingClassifier
gbk = GradientBoostingClassifier()
gbk.fit(X_train, Y_train)
y_predgbg = gbk.predict(X_test)
score_gbg = round(accuracy_score(y_predgbg,Y_test)*100,2)
print("The accuracy score achieved using Gradient Boosting is: "+str(score_gbg)+" %")
results = confusion_matrix(Y_test, y_predgbg)
results
print(classification_report(y_predgbg,Y_test))
```
| github_jupyter |
# Featurizer #
This notebook demonstrates how to use `pyTigerGraph` for common data processing and feature engineering tasks on graphs stored in `TigerGraph`.
## Connection to Database ##
The `TigerGraphConnection` class represents a connection to the TigerGraph database. Under the hood, it stores the necessary information to communicate with the database. Please see its documentation for details https://pytigergraph.github.io/pyTigerGraph/GettingStarted/.
```
from pyTigerGraph import TigerGraphConnection
conn = TigerGraphConnection(
host="http://127.0.0.1", # Change the address to your database server's
graphname="Cora",
username="tigergraph",
password="tigergraph",
useCert=False
)
# Graph schema and other information.
print(conn.gsql("ls"))
# Number of vertices for every vertex type
conn.getVertexCount('*')
# Number of vertices of a specific type
conn.getVertexCount("Paper")
# Number of edges for every type
conn.getEdgeCount()
# Number of edges of a specific type
conn.getEdgeCount("Cite")
```
## Feature Engineering ##
We added graph algorithms to the workbench to perform feature engineering tasks. The useful functions for extracting features are:
1. `listAlgorithms()` function: If it is given a class of algorithms (e.g., Centrality) as input, it prints the algorithms available for that category; otherwise it prints all available algorithms.
2. `installAlgorithm()` function: Takes the name of an algorithm as input and installs it if it is not already installed.
3. `runAlgorithm()` function: Takes the algorithm name, a schema type (e.g., vertex/edge; vertex by default), an attribute name (if the result should be stored on the schema type), and a list of schema type names (the vertices/edges the attribute should be saved on; all vertices/edges by default).
```
f = conn.gds.featurizer()
f.listAlgorithms()
```
## Examples of running graph algorithms from the GDS library ##
In the following, one example from each class of algorithms is provided. Some algorithms generate a feature per vertex/edge; others compute a number or summary statistics about the graph. For example, the common neighbors algorithm calculates the number of common neighbors between two vertices.
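As a point of reference for what such a topological link-prediction score computes, the common-neighbors count is simply a set intersection over adjacency lists. A plain-Python sketch on a hypothetical toy graph, independent of TigerGraph:

```python
# Adjacency sets for a small hypothetical citation graph (not the Cora data above)
adj = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
}

def common_neighbors(graph, u, v):
    """Number of vertices adjacent to both u and v."""
    return len(graph[u] & graph[v])

print(common_neighbors(adj, "B", "D"))  # A and C are shared, so 2
```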
## Get Pagerank as a feature ##
The PageRank algorithm is available in the GDS library as tg_pagerank, under the class of centrality algorithms: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Centrality/pagerank/global/unweighted/tg_pagerank.gsql.
```
f.installAlgorithm("tg_pagerank")
params = {'v_type':'Paper','e_type':'Cite','max_change':0.001, 'max_iter': 25, 'damping': 0.85,
'top_k': 10, 'print_accum': True, 'result_attr':'','file_path':'','display_edges': False}
f.runAlgorithm('tg_pagerank',params=params,feat_name="pagerank",timeout=2147480,sizeLimit = 2000000)
```
## Run Maximal Independent Set ##
The Maximal Independent Set algorithm is available in GDS library called tg_maximal_indep_set under the class of classification algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Classification/maximal_independent_set/deterministic/tg_maximal_indep_set.gsql.
```
f.installAlgorithm("tg_maximal_indep_set")
params = {'v_type': 'Paper', 'e_type': 'Cite','max_iter': 100,'print_accum': False,'file_path':''}
f.runAlgorithm('tg_maximal_indep_set',params=params)
```
## Get Louvain as a feature ##
The Louvain algorithm is available in GDS library called tg_louvain under the class of community detection algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Community/louvain/tg_louvain.gsql.
```
f.installAlgorithm(query_name='tg_louvain')
params = {'v_type': 'Paper', 'e_type':['Cite','reverse_Cite'],'wt_attr':"",'max_iter':10,'result_attr':"cid",'file_path' :"",'print_info':True}
f.runAlgorithm('tg_louvain',params,feat_name="cid")
```
## Get fastRP as a feature ##
The fastRP algorithm is available in the GDS library as tg_fastRP, under the class of graph embedding algorithms: https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/GraphML/Embeddings/FastRP/tg_fastRP.gsql
```
f.installAlgorithm("tg_fastRP")
params = {'v_type': 'Paper', 'e_type': ['Cite','reverse_Cite'], 'weights': '1,1,2', 'beta': -0.85, 'k': 3, 'reduced_dim': 128,
'sampling_constant': 1, 'random_seed': 42, 'print_accum': False,'result_attr':"",'file_path' :""}
f.runAlgorithm('tg_fastRP',params,feat_name ="fastrp_embedding")
```
## Run Breadth-First Search Algorithm from a single source node ##
The Breadth-First Search algorithm is available in GDS library called tg_bfs under the class of Path algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Path/bfs/tg_bfs.gsql.
```
f.installAlgorithm(query_name='tg_bfs')
params = {'v_type': 'Paper', 'e_type':['Cite','reverse_Cite'],'max_hops':10,"v_start":("2180","Paper"),
'print_accum':False,'result_attr':"",'file_path' :"",'display_edges':False}
f.runAlgorithm('tg_bfs',params,feat_name="bfs")
```
## Calculates the number of common neighbors between two vertices ##
The common neighbors algorithm is available in GDS library called tg_common_neighbors under the class of Topological Link Prediction algorithms https://github.com/tigergraph/gsql-graph-algorithms/blob/master/algorithms/Topological%20Link%20Prediction/common_neighbors/tg_common_neighbors.gsql
```
f.installAlgorithm(query_name='tg_common_neighbors')
params={"a":("2180","Paper"),"b":("431","Paper"),"e_type":"Cite","print_res":True}
f.runAlgorithm('tg_common_neighbors',params)
```
| github_jupyter |
```
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
# http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
from sklearn.decomposition import PCA
# http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html
import boto3
import sagemaker.amazon.common as smac
```
<h2>Kaggle Bike Sharing Demand Dataset Normalization</h2>
Normalize the 'temp', 'atemp', 'humidity', and 'windspeed' columns and store the resulting train and test files.
```
columns = ['count', 'season', 'holiday', 'workingday', 'weather', 'temp',
'atemp', 'humidity', 'windspeed', 'year', 'month', 'day', 'dayofweek','hour']
cols_normalize = ['temp','atemp','humidity','windspeed']
df = pd.read_csv('train.csv', parse_dates=['datetime'])
df_test = pd.read_csv('test.csv', parse_dates=['datetime'])
# We need to convert datetime to numeric for training.
# Let's extract key features into separate numeric columns
def add_features(df):
    df['year'] = df['datetime'].dt.year
    df['month'] = df['datetime'].dt.month
    df['day'] = df['datetime'].dt.day
    df['dayofweek'] = df['datetime'].dt.dayofweek
    df['hour'] = df['datetime'].dt.hour
add_features(df)
add_features(df_test)
df["count"] = df["count"].map(np.log1p)
df.head(2)
df_test.head(2)
# Normalize the dataset
scaler = StandardScaler()
# Normalization parameters based on Training
scaler.fit(df[cols_normalize])
def transform_data(scaler, df, columns):
    transformed_data = scaler.transform(df[columns])
    df_transformed = pd.DataFrame(transformed_data, columns=columns)
    for col in df_transformed.columns:
        df[col] = df_transformed[col]
transform_data(scaler, df, cols_normalize)
transform_data(scaler, df_test, cols_normalize)
df.head(2)
df_test.head(2)
# Store Original train and test data in normalized form
df.to_csv('train_normalized.csv',index=False, columns=columns)
df_test.to_csv('test_normalized.csv',index=False)
# Store only the 4 numeric columns for PCA Training and Test
# Data Needs to be normalized
def write_recordio_file (filename, x, y=None):
    with open(filename, 'wb') as f:
        smac.write_numpy_to_dense_tensor(f, x, y)
# Store All Normalized data as RecordIO File for PCA Training in SageMaker
# Need to pass as an array to create RecordIO file
X = df[['temp','atemp','humidity','windspeed']].to_numpy()  # .as_matrix() was removed in pandas 1.0
write_recordio_file('bike_train_numeric_columns.recordio',X)
# next is biketrain_pca_projection_localmode.ipynb
```
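One detail worth noting in the cells above: the scaler is fit on the training data only and then applied to both splits, so no test-set statistics leak into training. A minimal NumPy sketch of that fit-on-train, transform-both pattern (an illustration, not the notebook's code):

```python
import numpy as np

def fit_standardizer(train):
    # Learn mean and std from the training split only
    mu = train.mean(axis=0)
    sigma = train.std(axis=0)
    return mu, sigma

def standardize(data, mu, sigma):
    # Apply the *training* statistics to any split
    return (data - mu) / sigma

train = np.array([[1.0], [3.0]])   # column mean 2.0, std 1.0
test = np.array([[2.0], [4.0]])

mu, sigma = fit_standardizer(train)
print(standardize(test, mu, sigma).ravel())
```

This is exactly what `scaler.fit(df[cols_normalize])` followed by two `transform_data` calls accomplishes above.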
## 1. The most Nobel of Prizes
<p><img style="float: right;margin:5px 20px 5px 1px; max-width:250px" src="https://s3.amazonaws.com/assets.datacamp.com/production/project_441/img/Nobel_Prize.png"></p>
<p>The Nobel Prize is perhaps the world's best-known scientific award. Besides the honor, prestige, and substantial prize money, the recipient also gets a gold medal showing Alfred Nobel (1833 - 1896), who established the prize. Every year it's given to scientists and scholars in the categories chemistry, literature, physics, physiology or medicine, economics, and peace. The first Nobel Prize was handed out in 1901, and at that time the Prize was very Eurocentric and male-focused, but nowadays it's not biased in any way whatsoever. Surely. Right?</p>
<p>Well, we're going to find out! The Nobel Foundation has made a dataset available of all prize winners from the start of the prize, in 1901, to 2016. Let's load it in and take a look.</p>
```
# Loading in required libraries
import pandas as pd
import seaborn as sns
import numpy as np
# Reading in the Nobel Prize data
nobel = pd.read_csv('datasets/nobel.csv')
# Taking a look at the first several winners
nobel.head(6)
```
## 2. So, who gets the Nobel Prize?
<p>Just looking at the first couple of prize winners, or Nobel laureates as they are also called, we already see a celebrity: Wilhelm Conrad Röntgen, the guy who discovered X-rays. And actually, we see that all of the winners in 1901 were guys that came from Europe. But that was back in 1901. Looking at all winners in the dataset, from 1901 to 2016, which sex and which country are the most commonly represented? </p>
<p>(For <em>country</em>, we will use the <code>birth_country</code> of the winner, as the <code>organization_country</code> is <code>NaN</code> for all shared Nobel Prizes.)</p>
```
# Display the number of (possibly shared) Nobel Prizes handed
# out between 1901 and 2016
display(len(nobel))
# Display the number of prizes won by male and female recipients.
display(nobel['sex'].value_counts())
# Display the number of prizes won by the top 10 nationalities.
nobel['birth_country'].value_counts().head(10)
```
## 3. USA dominance
<p>Not so surprising perhaps: the most common Nobel laureate between 1901 and 2016 was a man born in the United States of America. But in 1901 all the winners were European. When did the USA start to dominate the Nobel Prize charts?</p>
```
# Calculating the proportion of USA born winners per decade
nobel['usa_born_winner'] = nobel['birth_country'].apply(lambda x : True if x=='United States of America' else False)
nobel['decade'] = (np.floor(nobel["year"] / 10) * 10).astype(int)
prop_usa_winners = nobel.groupby(['decade'],as_index=False)['usa_born_winner'].mean()
display(prop_usa_winners)
```
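The `decade` column comes from flooring each year to the nearest multiple of ten. The same arithmetic can be sanity-checked in plain Python (not part of the original project):

```python
def to_decade(year):
    # Same binning as np.floor(nobel["year"] / 10) * 10, for one integer year
    return (year // 10) * 10

print(to_decade(1901), to_decade(1969), to_decade(2016))  # 1900 1960 2010
```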
## 4. USA dominance, visualized
<p>A table is OK, but to <em>see</em> when the USA started to dominate the Nobel charts we need a plot!</p>
```
# Setting the plotting theme
sns.set()
# and setting the size of all plots.
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = [11, 7]
ax = sns.lineplot(x=nobel["decade"], y=nobel["usa_born_winner"])
# Adding %-formatting to the y-axis
from matplotlib.ticker import PercentFormatter
ax.yaxis.set_major_formatter(PercentFormatter())
```
## 5. What is the gender of a typical Nobel Prize winner?
<p>So the USA became the dominating winner of the Nobel Prize first in the 1930s and had kept the leading position ever since. But one group that was in the lead from the start, and never seems to let go, are <em>men</em>. Maybe it shouldn't come as a shock that there is some imbalance between how many male and female prize winners there are, but how significant is this imbalance? And is it better or worse within specific prize categories like physics, medicine, literature, etc.?</p>
```
# Calculating the proportion of female laureates per decade
nobel['female_winner'] = nobel['sex'].apply(lambda x : True if x=='Female' else False)
prop_female_winners = nobel.groupby(['decade','category'],as_index=False)['female_winner'].mean()
ax = sns.lineplot(x="decade", y="female_winner", hue="category", data=prop_female_winners)
```
## 6. The first woman to win the Nobel Prize
<p>The plot above is a bit messy as the lines are overplotting. But it does show some interesting trends and patterns. Overall the imbalance is pretty large with physics, economics, and chemistry having the largest imbalance. Medicine has a somewhat positive trend, and since the 1990s the literature prize is also now more balanced. The big outlier is the peace prize during the 2010s, but keep in mind that this just covers the years 2010 to 2016.</p>
<p>Given this imbalance, who was the first woman to receive a Nobel Prize? And in what category?</p>
```
# Picking out the first woman to win a Nobel Prize
nobel[nobel["sex"] == "Female"].nsmallest(1, "year")
```
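`nsmallest(1, "year")` simply returns the row with the minimum year among the filtered rows; in plain Python it is `min` with a key. A tiny hand-written illustration (the rows here are made up for the example, not taken from the dataset):

```python
rows = [
    {"name": "Marie Curie", "year": 1903},
    {"name": "Bertha von Suttner", "year": 1905},
]
# min by year plays the role of nsmallest(1, "year")
first = min(rows, key=lambda r: r["year"])
print(first["name"])  # Marie Curie
```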
## 7. Repeat laureates
<p>For most scientists/writers/activists a Nobel Prize would be the crowning achievement of a long career. But for some people, one is just not enough, and a few have gotten it more than once. Who are these lucky few? (Having won no Nobel Prize myself, I'll assume it's just about luck.)</p>
```
# Selecting the laureates that have received 2 or more prizes.
nobel.groupby("full_name").filter(lambda x: len(x) >= 2)
```
## 8. How old are you when you get the prize?
<p>The list of repeat winners contains some illustrious names! We again meet Marie Curie, who got the prize in physics for discovering radiation and in chemistry for isolating radium and polonium. John Bardeen got it twice in physics for transistors and superconductivity, Frederick Sanger got it twice in chemistry, and Linus Carl Pauling got it first in chemistry and later in peace for his work in promoting nuclear disarmament. We also learn that organizations can get the prize too, as both the Red Cross and the UNHCR have gotten it twice.</p>
<p>But how old are you generally when you get the prize?</p>
```
# Converting birth_date from String to datetime
nobel['birth_date'] = pd.to_datetime(nobel['birth_date'])
# Calculating the age of Nobel Prize winners
nobel['age'] = nobel["year"] - nobel["birth_date"].dt.year
sns.lmplot(x="year", y="age", data=nobel, lowess=True, aspect=2, line_kws={"color" : "black"})
# Plotting the age of Nobel Prize winners
```
## 9. Age differences between prize categories
<p>The plot above shows us a lot! We see that people used to be around 55 when they received the prize, but nowadays the average is closer to 65. But there is a large spread in the laureates' ages, and while most are 50+, some are very young.</p>
<p>We also see that the density of points is much higher nowadays than in the early 1900s -- nowadays many more of the prizes are shared, and so there are many more winners. We also see that there was a disruption in awarded prizes around the Second World War (1939 - 1945). </p>
<p>Let's look at age trends within different prize categories.</p>
```
# Same plot as above, but separate plots for each type of Nobel Prize
sns.lmplot(x="year", y="age", row='category',data=nobel, lowess=True, aspect=2, line_kws={"color" : "black"})
```
## 10. Oldest and youngest winners
<p>More plots with lots of exciting stuff going on! We see that winners of the chemistry, medicine, and physics prizes have gotten older over time. The trend is strongest for physics: the average age used to be below 50, and now it's almost 70. Literature and economics are more stable. We also see that economics is a newer category. But peace shows an opposite trend where winners are getting younger! </p>
<p>In the peace category we also see a winner around 2010 who seems exceptionally young. This begs the question: who are the oldest and youngest people ever to have won a Nobel Prize?</p>
```
# The oldest winner of a Nobel Prize as of 2016
display(nobel.nlargest(1, "age"))
# The youngest winner of a Nobel Prize as of 2016
nobel.nsmallest(1, "age")
```
## 11. You get a prize!
<p><img style="float: right;margin:20px 20px 20px 20px; max-width:200px" src="https://s3.amazonaws.com/assets.datacamp.com/production/project_441/img/paint_nobel_prize.png"></p>
<p>Hey! You get a prize for making it to the very end of this notebook! It might not be a Nobel Prize, but I made it myself in paint so it should count for something. But don't despair, Leonid Hurwicz was 90 years old when he got his prize, so it might not be too late for you. Who knows.</p>
<p>Before you leave, what was again the name of the youngest winner ever who in 2014 got the prize for "[her] struggle against the suppression of children and young people and for the right of all children to education"?</p>
```
# The name of the youngest winner of the Nobel Prize as of 2016
youngest_winner = 'Malala'
```
# List Comprehensions Lab
Complete the following set of exercises to solidify your knowledge of list comprehensions.
```
import os
import numpy as np
import pandas as pd
```
### 1. Use a list comprehension to create and print a list of consecutive integers starting with 1 and ending with 50.
### 2. Use a list comprehension to create and print a list of even numbers starting with 2 and ending with 200.
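For reference, the first two exercises can be solved with range-based comprehensions; one possible sketch (the lab intentionally leaves the cells blank, so treat this as just one valid answer):

```python
# 1. Consecutive integers from 1 to 50
ones_to_fifty = [i for i in range(1, 51)]

# 2. Even numbers from 2 to 200
evens = [i for i in range(2, 201, 2)]

print(len(ones_to_fifty), ones_to_fifty[-1])  # 50 50
print(len(evens), evens[-1])                  # 100 200
```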
### 3. Use a list comprehension to create and print a list containing all elements of the 10 x 4 Numpy array below.
```
a = np.array([[0.84062117, 0.48006452, 0.7876326 , 0.77109654],
[0.44409793, 0.09014516, 0.81835917, 0.87645456],
[0.7066597 , 0.09610873, 0.41247947, 0.57433389],
[0.29960807, 0.42315023, 0.34452557, 0.4751035 ],
[0.17003563, 0.46843998, 0.92796258, 0.69814654],
[0.41290051, 0.19561071, 0.16284783, 0.97016248],
[0.71725408, 0.87702738, 0.31244595, 0.76615487],
[0.20754036, 0.57871812, 0.07214068, 0.40356048],
[0.12149553, 0.53222417, 0.9976855 , 0.12536346],
[0.80930099, 0.50962849, 0.94555126, 0.33364763]])
```
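Exercise 3 asks for a flat list of a 2-D array's elements; the standard pattern is a nested comprehension that iterates rows, then elements. A small illustration on a toy input (iterating a 2-D NumPy array also yields its rows, so the same pattern applies to `a`):

```python
# A plain nested list stands in for the NumPy array above
toy = [[1, 2], [3, 4]]
flat = [x for row in toy for x in row]
print(flat)  # [1, 2, 3, 4]
```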
### 4. Add a condition to the list comprehension above so that only values greater than or equal to 0.5 are printed.
### 5. Use a list comprehension to create and print a list containing all elements of the 5 x 2 x 3 Numpy array below.
```
b = np.array([[[0.55867166, 0.06210792, 0.08147297],
[0.82579068, 0.91512478, 0.06833034]],
[[0.05440634, 0.65857693, 0.30296619],
[0.06769833, 0.96031863, 0.51293743]],
[[0.09143215, 0.71893382, 0.45850679],
[0.58256464, 0.59005654, 0.56266457]],
[[0.71600294, 0.87392666, 0.11434044],
[0.8694668 , 0.65669313, 0.10708681]],
[[0.07529684, 0.46470767, 0.47984544],
[0.65368638, 0.14901286, 0.23760688]]])
```
### 6. Add a condition to the list comprehension above so that the last value in each subarray is printed, but only if it is less than or equal to 0.5.
### 7. Use a list comprehension to select and print the names of all CSV files in the */data* directory.
### 8. Use a list comprehension and the Pandas `read_csv` and `concat` methods to read all CSV files in the */data* directory and combine them into a single data frame. Display the top 10 rows of the resulting data frame.
### 9. Use a list comprehension to select and print the column numbers for columns from the data set whose median is less than 0.48.
### 10. Use a list comprehension to add a new column (20) to the data frame whose values are the values in column 19 minus 0.1. Display the top 10 rows of the resulting data frame.
### 11. Use a list comprehension to extract and print all values from the data set that are between 0.7 and 0.75.
# An Introduction to FEAST v2.0
FEAST v2.0 is a Python implementation of the Fugitive Emissions Abatement Simulation Toolkit (FEAST) published by the Environmental Assessment and Optimization group at Stanford University. FEAST v2.0 generates similar results to FEAST v1.0 and includes some updates to the code structure to make the model more accessible. Extended documentation of FEAST is available [here](https://github.com/EAOgroup/FEAST/blob/master/Archive/FEAST_v1.0/FEASTDocumentation.pdf).
This tutorial gives an example of how to generate a realization of the default scenario in FEAST v2.0, analyze results, and change settings to generate a custom realization. The tutorial is interactive, so feel free to experiment with the code cells and discover how your changes affect the results.
## Running the default scenario
The default scenario simulates four leak detection and repair (LDAR) programs over a 10 year period. Leak distribution data sets, LDAR parameters and gas field properties are all assumed in order to generate the results.
Producing a single realization of the default scenario requires two lines of code: one to load the function *field_simulation* into the active Python kernel, and the second to call the function. The code cell below illustrates the commands. The optional argument *dir_out* specifies the directory in which to save results from the simulation. It will take about one minute to complete the simulation.
```
from field_simulation import field_simulation
field_simulation(dir_out='../Results')
```
Each new realization is saved under the name "realization0," and the final integer is incremented by one with each new realization generated. The results can be viewed using the built-in plotting functions. There are three plotting functions available. The first produces a time series of the leakage in a single realization file. It is shown in the code cell below.
```
# First the necessary functions are loaded to the active kernel
from GeneralClassesFunctions import plotting_functions
# Then the time series plotting function is called with a path to a
# specific results file
plotting_functions.time_series('../Results/realization0.p')
```
The other two plotting functions accumulate the data from all realizations in a directory. In order to illustrate their utility, multiple realizations should be used. For illustration purposes, four more realizations are generated below. To suppress the time step updates from *field_simulation()*, the optional argument *display_status=False* was added.
```
for ind in range(0,4):
    print("Currently evaluating iteration number " + str(ind))
    field_simulation(display_status=False, dir_out='../Results')
```
Now there are five realizations of the default scenario in the "Results" folder. The *summary_plotter* function compiles results from all five to show the mean net present value, the estimated uncertainty in the sample mean relative to the mean of infinite realizations of the same scenario, and the types of costs and benefits that contributed to the net present value. *summary_plotter* was already loaded to the kernel as part of the *plotting_functions* module, so it is called directly in the cell below.
```
# summary_plotter requires a path to a results directory as an input
plotting_functions.summary_plotter('../Results')
```
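The "estimated uncertainty in the sample mean" reported by *summary_plotter* is, conceptually, the standard error of the mean across realizations. A minimal sketch of that statistic (an illustration of the idea, not FEAST's actual implementation):

```python
import math

def standard_error(values):
    # Standard error of the sample mean: s / sqrt(n),
    # with the sample standard deviation using n - 1
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return math.sqrt(var / n)

npv = [1.0, 2.0, 3.0, 4.0, 5.0]   # e.g. net present values of 5 realizations
print(standard_error(npv))
```

With only five realizations this error is large, which is one reason a bigger sample should be used for rigorous analysis.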
*hist_plotter* allows the leak repair performance of each LDAR program to be evaluated without regard to financial value. The function generates a histogram of the sizes of leaks found by each program. Like *summary_plotter*, *hist_plotter* combines results from all realizations in a directory. Unlike *summary_plotter*, *hist_plotter* generates the plots in separate windows from the notebook by default. An optional *inline=True* argument was added to ensure that the plots appear in this notebook.
```
plotting_functions.hist_plotter('../Results', inline=True)
```
FEAST has the capability to rapidly calculate the value of improving detection technology or changing operating procedures. Users can define any parameters they choose in existing LDAR program simulations, and more ambitious users can create their own LDAR program modules. The cell below illustrates how unique technology instances can be generated and simulated simultaneously for easy comparison. The call to *field_simulation* uses the optional argument *dir_out* to define a directory to place the results in.
```
# This cell compares the performance of three AIR LDAR programs
# with varying camera sensitivities.
# First, the modules needed to create the AIR objects must be
# imported to the kernel
from DetectionModules import ir
from GeneralClassesFunctions import simulation_classes
# The loop is used to generate 5 independent realizations of the
# desired simulation
for ind in range(0,5):
    print("Currently evaluating iteration number " + str(ind))
    # Before creating the LDAR objects, a few properties of the
    # simulation need to be set.
    # The default GasField settings are used
    gas_field = simulation_classes.GasField()
    # A time step of 10 days is specified (instead of the default
    # timestep of 1 day) to speed up the simulation
    time = simulation_classes.Time(delta_t = 10)
    # Each camera is defined below by its noise equivalent
    # temperature difference (netd).
    # In the default scenario, the netd is 0.015 K
    Default_AIR = ir.AIR(time=time, gas_field=gas_field)
    Better_AIR = ir.AIR(time=time, gas_field=gas_field, netd=0.005)
    Best_AIR = ir.AIR(time=time, gas_field=gas_field, netd=0.001)
    # All of the technologies are combined into a dict to be passed
    # to field_simulation()
    tech_dict = {'Default_AIR': Default_AIR, 'Better_AIR': Better_AIR,
                 'Best_AIR': Best_AIR}
    # field_simulation is called with the predefined objects,
    # and an output directory is specified
    field_simulation(time=time, gas_field=gas_field, tech_dict=tech_dict,
                     dir_out='../Results/AIR_Sample', display_status=False)
```
The function *hist_plotter* shows how the improved sensitivity affects the size of leaks detected:
```
plotting_functions.hist_plotter('../Results/AIR_Sample',inline=True)
```
*summary_plotter* is used to illustrate the financial value of improving camera sensitivity.
```
plotting_functions.summary_plotter('../Results/AIR_Sample')
```
The above AIR example gives a glimpse into the possible analyses using FEAST v2.0. Any of the default parameters in FEAST v2.0 can be modified from the command line, stored in an object and used in a gas field simulation. The model is open source and freely available so that code can be customized and new technology modules can be added by private users.
The default parameters in FEAST v2.0 are intended to provide a realistic starting point but should be customized to accurately portray any particular gas field or LDAR program. In this tutorial, a sample size of five realizations was used to demonstrate the plotting functions, but a larger sample size should be used in any rigorous analysis in order to understand the stochastic error in the model.
Please contact chandler.kemp@gmail.com with any questions or suggestions regarding the code contained in FEAST.
# Colab initialization
- install the pipeline in the colab runtime
- download files necessary for this example
```
!pip3 install -U pip > /dev/null
!pip3 install -U "bio-embeddings[all] @ git+https://github.com/sacdallago/bio_embeddings.git" > /dev/null
!wget http://data.bioembeddings.com/public/embeddings/notebooks/pipeline_output_example/disprot/reduced_embeddings_file.h5 --output-document reduced_embeddings_file.h5
!wget http://data.bioembeddings.com/public/embeddings/notebooks/pipeline_output_example/disprot/mapping_file.csv --output-document mapping_file.csv
```
# Reindex the embeddings generated from the pipeline
In order to avoid faulty ids from the FASTA headers, the pipeline automatically generates ids for the sequences passed. At the end of a pipeline run, you might want to attempt to re-index these. The pipeline provides a convenience function that does this in place (it changes the original file!).
```
## This is just to get some logging output in the Notebook
import logging
logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
```
When executing pipeline runs, your input sequences will get assigned a new internal identifier. This identifier corresponds to the md5 hash of the sequence. We do this because, for storing and processing purposes, we need unique strings as identifiers, and unfortunately, some FASTA files contain invalid characters in the header.
Nevertheless, sometimes you may want to convert the keys contained in the h5 files produced by the pipeline back from the internal ids to their original ids as in the FASTA headers of the input sequences.
We produce a mapping_file.csv which shows this mapping (the first, unnamed column represents the sequence's md5 hash, while the column `original_id` represents the extracted id from the input FASTA).
This operation can be dangerous, because if the `original_id` contains invalid characters or is empty, the h5 file will be corrupted.
Nevertheless, we make a helper function available which converts the internal ids back to the original ids **in place**, meaning that the h5 file will be directly modified (this is meant to avoid duplicating large h5 files, but comes with the risk of corrupting the original file. Please: only perform this operation if you are sure about what you are doing, and if it's strictly necessary!)
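The internal ids described above are plain md5 hex digests, which can be reproduced with Python's standard library. A sketch (the pipeline's exact handling of whitespace or case in the sequence may differ):

```python
import hashlib

def internal_id(sequence):
    # md5 hash of the raw sequence string, as a hex digest,
    # mirroring the id scheme described above
    return hashlib.md5(sequence.encode("utf-8")).hexdigest()

seq_id = internal_id("MKTAYIAKQR")
print(seq_id, len(seq_id))
```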
```
import h5py
from bio_embeddings.utilities import reindex_h5_file
# Let's check the keys of our h5 file:
with h5py.File("reduced_embeddings_file.h5", "r") as h5_file:
    for key in h5_file.keys():
        print(key,)
# In place re-indexing of h5 file
reindex_h5_file("reduced_embeddings_file.h5", "mapping_file.csv")
# Let's check the new keys of our h5 file:
with h5py.File("reduced_embeddings_file.h5", "r") as h5_file:
    for key in h5_file.keys():
        print(key,)
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'
os.environ['CUDA_DEVICE_ORDER']='PCI_BUS_ID'
import torch
from allennlp.nn import util
import sys
sys.path.insert(0, "../../gector")
sys.path.insert(0, "../../")
from gector.gec_model import GecBERTModel
vocab_path = "../../data/output_vocabulary"
model_paths = "../../models/Exp_005_roberta_base_coldstep_2_fixed_vocab/best.th"
model_name = 'roberta'
model_1 = GecBERTModel(vocab_path=vocab_path,
model_paths=[model_paths],
max_len=50,
min_len=3,
iterations=5,
min_error_probability=0.0,
min_probability=0.0,
lowercase_tokens=0,
model_name= model_name,
special_tokens_fix=1,
log=False,
confidence=0,
is_ensemble=0,
weigths=None,
use_cpu=False)
vocab_path = "../../data/output_vocabulary"
model_paths = "../../models/Exp_049_2_roberta_base_stage_3_gold/model_state_epoch_3.th"
model_name = 'roberta'
model_2 = GecBERTModel(vocab_path=vocab_path,
model_paths=[model_paths],
max_len=50,
min_len=3,
iterations=5,
min_error_probability=0.0,
min_probability=0.0,
lowercase_tokens=0,
model_name= model_name,
special_tokens_fix=1,
log=False,
confidence=0,
is_ensemble=0,
weigths=None,
use_cpu=False)
def get_embedings_for_batch(words_batch, model):
    batch = model.preprocess(words_batch)
    batch = util.move_to_device(batch[0].as_tensor_dict(), 0 if torch.cuda.is_available() else -1)
    embed = model.models[0].text_field_embedder(batch['tokens'])
    tensors = []
    for i in range(len(words_batch)):
        tensors.append(embed[i][batch['tokens']['mask'][i]==1].mean(dim=0).cpu().detach().numpy())
    return tensors
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np
import pandas as pd
import pickle
import os
import glob
def read_lines(fn):
    if not os.path.exists(fn):
        return []
    with open(fn, 'r', encoding='utf-8') as f:
        text = f.read()
    lines = text.split("\n")
    if lines[-1] == '':
        return lines[:-1]
    else:
        return lines
from tqdm.auto import tqdm
def get_embedings_for_text(src_text, model, batch_size=32):
    embedings = []
    batch = []
    for sent in tqdm(src_text):
        batch.append(sent.split())
        if len(batch) == batch_size:
            batch_embed = get_embedings_for_batch(batch, model)
            embedings.extend(batch_embed)
            batch = []
    if batch:  # flush the final partial batch (avoids embedding an empty batch)
        batch_embed = get_embedings_for_batch(batch, model)
        embedings.extend(batch_embed)
    return embedings
```
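The cells below compute `np.diag(cosine_similarity(A, B))`, i.e. the cosine between each sentence's embedding under one model and the same sentence's embedding under the other. A dependency-free sketch of that per-pair cosine:

```python
import math

def cosine(a, b):
    # cos(a, b) = a.b / (|a| |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

print(cosine([1.0, 0.0], [1.0, 0.0]))  # 1.0
print(cosine([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

Taking the diagonal of the pairwise similarity matrix is just a vectorized way of computing this value for each aligned row pair.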
### Fce
```
fce_texts_src = read_lines("../../data_parallel/fce/fce_train_src")
fce_embed_1 = get_embedings_for_text(fce_texts_src, model_1, batch_size=32)
fce_embed_2 = get_embedings_for_text(fce_texts_src, model_2, batch_size=32)
fce_cos = np.diag(cosine_similarity(fce_embed_1,fce_embed_2))
import pickle
with open("fce_embed_1_large.pickle", "wb") as f:
    pickle.dump(fce_embed_1, f)
with open("fce_embed_2_large.pickle", "wb") as f:
    pickle.dump(fce_embed_2, f)
with open("fce_cos_large.pickle", "wb") as f:
    pickle.dump(fce_cos, f)
```
For the base model:
```
with open("fce_embed_1.pickle", "rb") as f:
    fce_embed_1 = pickle.load(f)
with open("fce_embed_2.pickle", "rb") as f:
    fce_embed_2 = pickle.load(f)
fce_cos = np.diag(cosine_similarity(fce_embed_1,fce_embed_2))
with open("fce_cos.pickle", "wb") as f:
    pickle.dump(fce_cos, f)
fce_cos.min()
fce_cos.max()
fce_cos.mean()
```
For the large model:
```
fce_cos.min()
fce_cos.max()
fce_cos.mean()
```
### Nucle
```
nucle_texts_src = read_lines("../../data_parallel/nucle/nucle_src")
nucle_embed_1 = get_embedings_for_text(nucle_texts_src, model_1, batch_size=32)
nucle_embed_2 = get_embedings_for_text(nucle_texts_src, model_2, batch_size=32)
with open("nucle_embed_2_large.pickle", "wb") as f:
    pickle.dump(nucle_embed_2, f)
with open("nucle_embed_1_large.pickle", "wb") as f:
    pickle.dump(nucle_embed_1, f)
nucle_cos = np.diag(cosine_similarity(nucle_embed_1,nucle_embed_2))
with open("nucle_cos_large.pickle", "wb") as f:
    pickle.dump(nucle_cos, f)
nucle_cos.min()
nucle_cos.max()
nucle_cos.mean()
```
### Lang8
```
lang8_texts_src = read_lines("../../data_parallel/lang8/lang8_src")
lang8_embed_1 = get_embedings_for_text(lang8_texts_src, model_1, batch_size=32)
lang8_embed_2 = get_embedings_for_text(lang8_texts_src, model_2, batch_size=32)
with open("lang8_embed_2.pickle", "wb") as f:
    pickle.dump(lang8_embed_2, f)
with open("lang8_embed_1.pickle", "wb") as f:
    pickle.dump(lang8_embed_1, f)
lang8_cos = np.diag(cosine_similarity(lang8_embed_1,lang8_embed_2))
with open("lang8_cos.pickle", "wb") as f:
    pickle.dump(lang8_cos, f)
#lang8_cos.min()
#lang8_cos.max()
#lang8_cos.mean()
lang8_embed_1_large = get_embedings_for_text(lang8_texts_src, model_1, batch_size=32)
lang8_embed_2_large = get_embedings_for_text(lang8_texts_src, model_2, batch_size=32)
with open("lang8_embed_2_large.pickle", "wb") as f:
    pickle.dump(lang8_embed_2_large, f)
with open("lang8_embed_1_large.pickle", "wb") as f:
    pickle.dump(lang8_embed_1_large, f)
lang8_cos_large = np.diag(cosine_similarity(lang8_embed_1_large,lang8_embed_2_large))
with open("lang8_cos_large.pickle", "wb") as f:
    pickle.dump(lang8_cos_large, f)
del lang8_embed_2
del lang8_embed_1
del fce_embed_2
del fce_embed_1
del nucle_embed_2
del nucle_embed_1
del fce_cos
del nucle_cos
#del lang8_cos
```
### Try large models
```
vocab_path = "../../data/output_vocabulary"
model_paths = "../../models/Exp_008_roberta_large/best.th"
model_name = 'roberta-large'
model_1 = GecBERTModel(vocab_path=vocab_path,
model_paths=[model_paths],
max_len=50,
min_len=3,
iterations=5,
min_error_probability=0.0,
min_probability=0.0,
lowercase_tokens=0,
model_name= model_name,
special_tokens_fix=1,
log=False,
confidence=0,
is_ensemble=0,
weigths=None,
use_cpu=False)
vocab_path = "../../data/output_vocabulary"
model_paths = "../../models/Exp_037_roberta_large_st3/model_state_epoch_1.th"
model_name = 'roberta-large'
model_2 = GecBERTModel(vocab_path=vocab_path,
model_paths=[model_paths],
max_len=50,
min_len=3,
iterations=5,
min_error_probability=0.0,
min_probability=0.0,
lowercase_tokens=0,
model_name= model_name,
special_tokens_fix=1,
log=False,
confidence=0,
is_ensemble=0,
weigths=None,
use_cpu=False)
# from transformers import pipeline, RobertaForMaskedLM, RobertaTokenizer
# model = RobertaForMaskedLM.from_pretrained("youscan/ukr-roberta-base")
# tokenizer = RobertaTokenizer.from_pretrained("youscan/ukr-roberta-base")
#os.listdir('../../models/Exp_005_roberta_base_coldstep_2_fixed_vocab/')
from sentence_transformers import SentenceTransformer
from transformers import AutoTokenizer, AutoModel
# from transformers import pipeline, RobertaForMaskedLM, RobertaTokenizer
# model = RobertaForMaskedLM.from_pretrained("ukr-roberta-base")
# tokenizer = RobertaTokenizer.from_pretrained("ukr-roberta-base")
tokenizer = AutoTokenizer.from_pretrained("youscan/ukr-roberta-base")
# input_ids = torch.tensor(tokenizer.encode("Hello, my dog is cute")).unsqueeze(0) # Batch size 1
# outputs = model(input_ids)
# last_hidden_states = outputs[0]
#sorted(os.listdir("../models/Exp_046_roberta_base_stage_2_new/"))
```
```
import types
def all_saptak():
names = ["Sa", "Re_", "Re", "Ga_", "Ga", "Ma", "Ma__", "Pa", "Dha_", "Dha", "Ni_", "Ni"]
mandra = [n.lower() for n in names]
tar = [n.upper() for n in names]
return tuple(mandra + names + tar)
def window(item, items, width=7):
index = items.index(item)
start = index - width
end = index + width + 1
if start < 0:
start = 0
if end > len(items):
end = len(items)
return items[start:end]
window("Sa", all_saptak())
import random
import numpy as np
def take(seq, n):
    return [next(seq) for i in range(n)]
def get_next(probs):
    # Inverse-CDF sampling: draw r in [0, 1) and walk the cumulative sum,
    # then map the chosen index to a swar name.
    r = random.random()
    index = 0
    while r >= 0 and index < len(probs):
        r -= probs[index]
        index += 1
    return all_saptak()[index - 1]
def aalap(initial, beats=4, transition_up=None, transition_down=None):
current = initial
scale = all_saptak()
yield initial
while True:
aroha = random.choice([True, False])
for i in range(beats):
if aroha:
current = get_next([transition_up[current][v] for v in scale])
else:
current = get_next([transition_down[current][v] for v in scale])
yield current
probs = [row.strip().split(",") for row in open("/home/vikrant/Downloads/prob_matrix.txt")]
probs = [[float(f) for f in row] for row in probs]
transpose = list(zip(*probs))
def add(a, b):
return a+b
add(1, 2)
args = [1,2]
add(*args)
add
list(zip([1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]))
transpose[5]
def read_prob_from_file(filename):
"""
file is csv with every column as transition probability for one swar.
for 36 swar there are 36 columns and every column contains 36 rows
"""
with open(filename) as f:
data = [[float(f) for f in row.strip().split(",")] for row in f]
return list(zip(*data)) #transpose
def convert_to_transition(matrix):
"""
matrix rows are transition probabilties for given swar.
it is matrix of size 36x36
"""
swar = all_saptak()
return {swar[i] : {swar[j]: item for j, item in enumerate(row)} for i, row in enumerate(matrix)}
transition_up = convert_to_transition(transpose)
transition_up
a = aalap("Sa", 4, transition_up, transition_up)
take(a, 16)
take(a, 16)
transition_up['Dha']
convert_to_transition(read_prob_from_file("/home/vikrant/Downloads/prob_matrix.txt"))["Dha"]
```
### Functions to work out probabilities from notations ###
```
def old_all_octaves(file):
with open(file) as f:
return f.read().strip().replace("\n",",").split(",")
def old_octaves():
return ['sa','lre','re','lga','ga','ma','mau','pa','lda','da','lni','ni',
'sA','lrE','rE','lgA','gA','mA','mAu','pA','ldA','dA','lnI','nI',
'SA','lRE','RE','lGA','GA','MA','MAu','PA','lDA','DA','lNI','NI']
def old_to_new(file):
"""
sa, lre, re ... -> sa, re_, re, ga_, ga...
"""
with open(file) as f:
s = f.read()
newoctave = all_saptak()
for i, item in enumerate(old_octaves()):
s = s.replace(item, newoctave[i])
return s
def transiotion_hist_up(data):
    """
    data is a sargam string read from a file; count transitions
    from each swar to the next one.
    """
    saptak = all_saptak() + ('',)
    data = data.strip().replace("\n", ",,").split(",")
    hist = {i: {j: 0 for j in saptak} for i in saptak}
    for i, s in enumerate(data[:-1]):
        hist[s][data[i + 1]] += 1
    return hist
def remove_empty(data):
for v in data.values():
del v['']
del data['']
return data
def compute_prob(hist):
def divide(a, b):
return a/b if b > 0 else 0
hist = remove_empty(hist)
probs = {}
for k, v in hist.items():
probs[k] = {k1: divide(v1,sum(v.values())) for k1, v1 in v.items()}
return probs
def transiotion_hist_down(data):
    """
    data is a sargam string read from a file; count transitions
    from each swar to the one before it.
    """
    saptak = all_saptak() + ('',)
    data = data.strip().replace("\n", ",,").split(",")
    hist = {i: {j: 0 for j in saptak} for i in saptak}
    for i, s in enumerate(data[1:]):
        hist[s][data[i]] += 1  # data[i] is the element preceding s
    return hist
def transiotion_prob_up(file):
return compute_prob(transiotion_hist_up(file))
def transiotion_prob_down(file):
return compute_prob(transiotion_hist_down(file))
def test_probs():
p1 = transiotion_prob_up(old_to_new("/home/vikrant/Downloads/Bhoop1.txt"))
p2 = convert_to_transition(read_prob_from_file("/home/vikrant/Downloads/prob_matrix.txt"))
for k in p1:
v1 = p1[k]
v2 = p2[k]
for j in v1:
assert abs(v1[j] - v2[j])<= 0.001
old_all_octaves("/home/vikrant/Downloads/AllOctaves.txt" )
help([].extend)
old_to_new("/home/vikrant/Downloads/Bhoop1.txt").strip().replace("\n",",").replace(",,",",").split(",")
p1 = transiotion_prob_down(old_to_new("/home/vikrant/Downloads/Bhoop1.txt"))
p2 = convert_to_transition(read_prob_from_file("/home/vikrant/Downloads/prob_matrix.txt"))
len(p1) == len(p2)
p1['Dha']
p2['Dha']
test_probs()
%%file bhoop.csv
SA,SA,Dha,Pa,Ga,Re,Sa,Re,Ga,Ga,Pa,Ga,Dha,Pa,Ga,Ga
Ga,Pa,Dha,SA,RE,SA,Dha,Pa,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa
Ga,Ga,Pa,Dha,Pa,SA,SA,SA,Dha,Dha,SA,RE,GA,RE,SA,Dha
GA,GA,RE,SA,RE,RE,SA,Dha,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa
Ga,Re,Ga,Ga,Sa,Re,Sa,Sa,Sa,Sa,Sa,dha,Sa,Re,Ga,Ga
Pa,Ga,Pa,Pa,Dha,Dha,Pa,Pa,Ga,Pa,Dha,SA,Dha,Pa,Ga,Sa
Pa,Ga,Ga,Re,Ga,Pa,SA,Dha,SA,SA,SA,SA,Dha,Re,SA,SA
Dha,Dha,Dha,Dha,SA,RE,GA,RE,SA,SA,Dha,Pa,Dha,SA,Dha,Pa
Ga,Re,Ga,Ga,Ga,Re,Pa,Ga,Dha,Pa,Dha,SA,Dha,Pa,Ga,Sa
Sa,Re,Ga,Pa,Ga,Re,Sa,Sa,Re,Pa,Pa,Pa,Re,Ga,Ga,Re
Ga,Ga,Pa,Ga,Re,Ga,Pa,Dha,SA,SA,SA,SA,Dha,Dha,Pa,Ga,Pa
Dha,RE,SA,SA,Dha,Dha,Pa,Ga,Re,Ga,Pa,Dha,SA,Pa,Dha,SA,Dha,SA,Dha,Pa,Ga,Re,Sa
Pa,Ga,Ga,Ga,Pa,Pa,SA,Dha,SA,SA,SA,SA,SA,RE,GA,RE,SA,SA
SA,Dha,Dha,SA,SA,SA,RE,RE,Dha,SA,Pa,Dha,SA,SA,Dha,Dha,Pa
Ga,Ga,Pa,Ga,Re,Ga,Pa,Dha,SA,SA,RE,GA,RE,SA,Dha,Pa,Dha,SA,Dha,Pa,Ga,Re,Ga,Pa,Ga,Re,Sa
Sa,dha,dha,Sa
dha,Sa,Re
Sa,Re
dha,Sa
Sa,Re,Ga,Re,Ga,Sa,Re,dha,Sa
Sa,Re,Ga,Re,Ga,Pa,Ga,Re,Pa,Ga,dha,dha,Sa
Ga,Pa,Dha,Ga,Ga,Ga,Pa
Ga,Pa,Dha,Pa,Ga,Re,Sa
Ga,Pa,Dha,SA,SA,Dha,Pa,Ga,Re,Ga,Re,Pa,Ga,Re,Sa
Ga,Re,Sa,Re,Ga,Pa,Dha,SA,Pa,Dha,SA,RE,GA,RE,SA
Dha,SA,RE,SA,Dha,SA,Dha,Pa,Ga,Pa,Dha,Pa,Ga,Pa,Ga,Re,Sa,dha,dha,Sa
with open("bhoop.csv") as f:
data = f.read()
a = aalap("Sa", 8, transiotion_prob_up(data), transiotion_prob_down(data))
transiotion_prob_down(data)['Ga']
take(a, 16)
```
### Aalap with nyaas ###
```
def aalap_nyaas(initial, beats=8, nyaas = None, transition_up=None, transition_down=None):
current = initial
scale = all_saptak()
yield initial
while True:
if current in nyaas:
aroha = random.choice([True, False])
for i in range(beats):
if aroha:
current = get_next([transition_up[current][v] for v in scale])
else:
current = get_next([transition_down[current][v] for v in scale])
yield current
a = aalap_nyaas("Sa", beats=8, nyaas=['sa','Sa','SA','re','Re','RE','ga','Ga','GA'],
transition_up=transiotion_prob_up(data), transition_down=transiotion_prob_down(data))
for i,item in enumerate(take(a, 32)):
print(i+1, item)
from collections import deque
def search(seq, subseq, end=100):
    def compare(source, dest):
        # True if any phrase in dest appears inside the current window.
        window_str = "".join(source).lower()
        return any("".join(item).lower() in window_str for item in dest)
n = len(max(subseq, key=len))
window = deque(take(seq, n), n)
for i in range(n, end):
if compare(window, subseq):
yield i-n
window = deque(take(seq, n), n)
else:
window.append(next(seq))
def count(seq):
return sum(1 for i in seq)
a = aalap_nyaas("Sa", beats=8, nyaas=['sa','Sa','SA','re','Re','RE','ga','Ga','GA'],
transition_up=transiotion_prob_up(data), transition_down=transiotion_prob_down(data))
pakad = [["dha","dha","sa"],["ga","re","pa","ga"],["dha","pa","ga","re"]]
sum([count(search(a,pakad, 64)) for i in range(1000)])/1000
1024/16
def subset_prob(probs, start, end):
subset = probs[start:end]
newvalues = [v/sum(subset) for v in subset]
return [0 for i in range(start)] + newvalues + [0 for i in range(end, len(probs))]
subset_prob([0.1,0.2,0.3,0.1,0.2,0.2],0,3)
def aalap_bounded(beats=8, top_bound = 5, transition_up=None, transition_down=None):
initial = 'Sa'
scale = all_saptak()
yield initial
current = initial
index = scale.index(initial)
if top_bound > 0:
aroha = True
else:
aroha = False
for i in range(beats):
if aroha:
current = get_next(subset_prob([transition_up[current][v] for v in scale], 0, index + top_bound))
if scale.index(current) == index + top_bound:
print(current, scale.index(current), top_bound+index)
aroha = False
else:
current = get_next(subset_prob([transition_down[current][v] for v in scale], 0, index + top_bound))
yield current
a = aalap_bounded(beats=64, top_bound=12,
transition_up=transiotion_prob_up(data), transition_down=transiotion_prob_down(data))
for i,j in enumerate(a):
print(i, j)
a = aalap("Sa",8,transiotion_prob_up(data), transiotion_prob_up(data))
take(a, 16)
take(a, 32)
def transition_probability(data):
data = data.strip().replace("\n",",,").split(",")
hist = {}
for i, item in enumerate(data[:-1]):
if item and data[i+1]:
itemd = hist.get(item, {})
itemd[data[i+1]] = itemd.get(data[i+1], 0) + 1
hist[item] = itemd
prob = {}
for k in hist:
total = sum(hist[k].values())
prob[k] = {j: v/total for j,v in hist[k].items()}
return prob
p = transition_probability(data)
p.keys()
p
def sample(items, probs):
    # Inverse-CDF sampling over a discrete distribution.
    r = random.random()
    index = 0
    while r >= 0 and index < len(probs):
        r -= probs[index]
        index += 1
    return items[index - 1]
def aalap_(initial, probs):
current = initial
while True:
yield current
targets = [item for item in probs[current]]
probability = [probs[current][item] for item in targets]
current = sample(targets, probability)
sample(list(p['Sa'].keys()), [p['Sa'][k] for k in p['Sa'].keys()])
a = aalap_("Sa", p)
sum([count(search(a,pakad,32)) for i in range(1000)])/1000
%%file bhoop1.csv
SA,SA,Dha,Pa,Ga,Re,Sa,Re,Ga,Ga,Pa,Ga,Dha,Pa,Ga,Ga
Ga,Pa,Dha,SA,RE,SA,Dha,Pa,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa
Ga,Ga,Pa,Dha,Pa,SA,SA,SA,Dha,Dha,SA,RE,GA,RE,SA,Dha
GA,GA,RE,SA,RE,RE,SA,Dha,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa
Ga,Re,Ga,Ga,Sa,Re,Sa,Sa,Sa,Sa,Sa,dha,Sa,Re,Ga,Ga
Pa,Ga,Pa,Pa,Dha,Dha,Pa,Pa,Ga,Pa,Dha,SA,Dha,Pa,Ga,Sa
Pa,Ga,Ga,Re,Ga,Pa,SA,Dha,SA,SA,SA,SA,Dha,Re,SA,SA
Dha,Dha,Dha,Dha,SA,RE,GA,RE,SA,SA,Dha,Pa,Dha,SA,Dha,Pa
Ga,Re,Ga,Ga,Ga,Re,Pa,Ga,Dha,Pa,Dha,SA,Dha,Pa,Ga,Sa
Sa,Re,Ga,Pa,Ga,Re,Sa,Sa,Re,Pa,Pa,Pa,Re,Ga,Ga,Re
Ga,GaPa,Ga,Re,Ga,Pa,Dha,SA,SA,SA,SA,Dha,Dha,Pa,Ga,Pa
DhaRE,SA,SA,Dha,Dha,Pa,Ga,Re,GaPa,DhaSA,PaDha,SA,DhaSA,DhaPa,GaRe,Sa
Pa,Ga,Ga,Ga,Pa,Pa,SA,Dha,SA,SA,SA,SA,SARE,GARE,SA,SA
SA,Dha,Dha,SA,SA,SA,RE,RE,DhaSA,PaDha,SA,SA,Dha,Dha,Pa
Ga,GaPa,Ga,Re,Ga,Pa,Dha,SA,SARE,GARE,SA,DhaPa,DhaSA,DhaPa,GaRe,GaPa,GaRe,Sa
Sa,dha,dha,Sa
dha,Sa,Re
Sa,Re
dha,Sa
Sa,Re,Ga,Re,Ga,Sa,Re,dha,Sa
Sa,Re,Ga,Re,Ga,Pa,Ga,Re,Pa,Ga,dha,dha,Sa
Ga,Pa,Dha,Ga,Ga,Ga,Pa
Ga,Pa,Dha,Pa,Ga,Re,Sa
Ga,Pa,Dha,SA,SA,Dha,Pa,Ga,Re,Ga,Re,Pa,Ga,Re,Sa
Ga,Re,Sa,Re,Ga,Pa,Dha,SA,Pa,Dha,SA,RE,GA,RE,SA
Dha,SA,RE,SA,Dha,SA,Dha,Pa,Ga,Pa,Dha,Pa,Ga,Pa,Ga,Re,Sa,dha,dha,Sa
bhoop1 = transition_probability(open("bhoop1.csv").read())
a = aalap_("Sa", bhoop1)
sum([count(search(a,pakad,32)) for i in range(1000)])/1000
a = aalap_("Sa", bhoop1)
take(a, 32)
bhoop1
tune = """
SA,SA,Dha,Pa,Ga,Re,Sa,Re,Ga,Ga,Pa,Ga,Dha,Pa,Ga,Ga
Ga,Pa,Dha,SA,RE,SA,Dha,Pa,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa
Ga,Ga,Pa,Dha,Pa,SA,SA,SA,Dha,Dha,SA,RE,GA,RE,SA,Dha
GA,GA,RE,SA,RE,RE,SA,Dha,SA,Pa,Dha,Pa,Ga,Re,Sa,Sa
Ga,Re,Ga,Ga,Sa,Re,Sa,Sa,Sa,Sa,Sa,dha,Sa,Re,Ga,Ga
Pa,Ga,Pa,Pa,Dha,Dha,Pa,Pa,Ga,Pa,Dha,SA,Dha,Pa,Ga,Sa
Pa,Ga,Ga,Re,Ga,Pa,SA,Dha,SA,SA,SA,SA,Dha,Re,SA,SA
Dha,Dha,Dha,Dha,SA,RE,GA,RE,SA,SA,Dha,Pa,Dha,SA,Dha,Pa
Ga,Re,Ga,Ga,Ga,Re,Pa,Ga,Dha,Pa,Dha,SA,Dha,Pa,Ga,Sa
Sa,dha,dha,Sa
dha,Sa,Re
Sa,Re
dha,Sa
Sa,Re,Ga,Re,Ga,Sa,Re,dha,Sa
Sa,Re,Ga,Re,Ga,Pa,Ga,Re,Pa,Ga,dha,dha,Sa
Ga,Pa,Dha,Ga,Ga,Ga,Pa
Ga,Pa,Dha,Pa,Ga,Re,Sa
Ga,Pa,Dha,SA,SA,Dha,Pa,Ga,Re,Ga,Re,Pa,Ga,Re,Sa
Ga,Re,Sa,Re,Ga,Pa,Dha,SA,Pa,Dha,SA,RE,GA,RE,SA
Dha,SA,RE,SA,Dha,SA,Dha,Pa,Ga,Pa,Dha,Pa,Ga,Pa,Ga,Re,Sa,dha,dha,Sa
"""
tune = tune.strip().replace("\n",",").replace(",,",",").split(",")
from matplotlib import pyplot
scale = all_saptak()
pyplot.plot([scale.index(s) for s in tune])
tune
```
```
%matplotlib inline
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from scipy.stats import multivariate_normal
from pyhmc_minimal.hmcparameter import HMCParameter
from pyhmc_minimal.hmc import HMC
```
### Examples for the implementation of different known distributions for the hmcparameter class
```
class StateMultivarNormal(HMCParameter):
def __init__(self, init_val, mu=0, sigma_inv=1):
super().__init__(np.array(init_val))
self.mu = mu
self.sigma_inv = sigma_inv
def get_energy_grad(self):
return np.dot(self.sigma_inv, (self.value - self.mu))
def energy(self, value):
return np.dot((value - self.mu).transpose(), np.dot(self.sigma_inv, (value - self.mu))) / 2
def get_energy(self):
return self.energy(self.value)
def get_energy_for_value(self, value):
return self.energy(value)
class StateExpDist(HMCParameter):
def __init__(self, init_val, gamma):
super().__init__(np.array(init_val))
self.gamma = gamma
def get_energy_grad(self, *args):
return self.gamma
def energy(self, value):
if value <= 0:
return np.inf
else:
return self.gamma * value
def get_energy(self):
return self.energy(self.value)
def get_energy_for_value(self, value):
return self.energy(value)
class StateInvGamma(HMCParameter):
def __init__(self, init_val, alpha, betta):
super().__init__(np.array(init_val))
self.alpha = alpha
self.betta = betta
def get_energy_grad(self):
return (self.alpha + 1) / self.value - self.betta / (self.value ** 2)
def energy(self, value):
if value <= 0:
return np.inf
else:
return (self.alpha + 1) * np.log(value) + self.betta / value
def get_energy(self):
return self.energy(self.value)
def get_energy_for_value(self, value):
return self.energy(value)
class StateLapDist(HMCParameter):
def __init__(self, init_val):
super().__init__(np.array(init_val))
def get_energy_grad(self):
return 1 if self.value > 0 else -1
def energy(self, value):
return abs(value)
def get_energy(self):
return self.energy(self.value)
def get_energy_for_value(self, value):
return self.energy(value)
class StatebettaDist(HMCParameter):
def __init__(self, init_val, alpha, betta):
super().__init__(np.array(init_val))
self.alpha = alpha
self.betta = betta
def get_energy_grad(self):
return (1 - self.alpha) / self.value + (self.betta - 1) / (1 - self.value)
def energy(self, value):
if value < 0 or value > 1:
return np.inf
else:
return (1 - self.alpha) * np.log(value) + (1 - self.betta) * np.log(1 - value)
def get_energy(self):
return self.energy(self.value)
def get_energy_for_value(self, value):
return self.energy(value)
```
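Each `energy` above is the negative log-density of the corresponding distribution, up to an additive constant that cancels in the Metropolis acceptance step. For `StateMultivarNormal`, for instance,

$$U(x) \;=\; -\log p(x) + \text{const} \;=\; \tfrac{1}{2}\,(x-\mu)^\top \Sigma^{-1} (x-\mu), \qquad \nabla U(x) \;=\; \Sigma^{-1}(x-\mu),$$

which is exactly what `get_energy` and `get_energy_grad` compute.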
### Implementation for the default velocity parameter with a Gaussian distribution
```
class VelParam(HMCParameter):
def __init__(self, init_val):
super().__init__(np.array(init_val))
dim = np.array(init_val).shape
self.mu = np.zeros(dim)
self.sigma = np.identity(dim[0])
def gen_init_value(self):
self.value = multivariate_normal.rvs(self.mu, self.sigma)
def get_energy_grad(self):
return self.value
def energy(self, value):
return np.dot(value, value) / 2
def get_energy(self):
return self.energy(self.value)
def get_energy_for_value(self, value):
return self.energy(value)
```
### Example for creating instances for the state and velocity and running the hmc algorithm for multivariate Gaussian distribution
```
state = StateMultivarNormal([1, 2, 3, 4, 5, 6], [2, 3, 4, 5, 6, 7], np.identity(6))
vel = VelParam(np.array([1, 1, 1, 1, 1, 1]))
delta = 1
n = 10
m = 10000
hmc = HMC(state, vel, delta, n, m) # create an instance of the HMC class
hmc.HMC() # Run the HMC algorithm
res = np.array(hmc.get_samples()) # Getting the chain of samples for the state parameter
# Plotting the chains for each dimension of the multivariate Gaussian
plt.plot(res)
plt.xlabel('iteration number')
plt.ylabel('value')
plt.show()
# Looking at the samples for one variate as a histogram
sns.distplot(res[:,3])
plt.show()
# looking at the acceptance rate
print('Acceptance rate: %f' %hmc.calc_acceptence_rate())
```
# AIMSim Demo
This notebook demonstrates the key uses of _AIMSim_ as a graphical user interface, command line tool, and scripting utility. For detailed explanations and to view the source code for _AIMSim_, visit our [documentation page](https://vlachosgroup.github.io/AIMSim/).
## Installing _AIMSim_
For users with Python already in use on their devices, it is _highly_ recommended to first create a virtual environment before installing _AIMSim_. This package has a large number of dependencies with only a handful of versions supported, so conflicts are likely unless a virtual environment is used.
For new Python users, the authors recommend installing Anaconda Navigator to manage dependencies for _AIMSim_ and make installation easier overall. Once Anaconda Navigator is ready, create a new environment with Python 3.7, open a terminal or command prompt in this environment, and follow the instructions below.
We recommend installing _AIMSim_ using the commands shown below (omit the exclamation point and the `%%capture` magic unless you are running in a Jupyter notebook):
```
%%capture
!pip install aimsim
```
Now, start the _AIMSim_ GUI by typing `python -m aimsim` or simply `aimsim` into the command line.
## Graphical User Interface Walkthrough
For most users, the Graphical User Interface (GUI) will provide access to all the key functionalities in _AIMSim_. The GUI works by serving the user with drop-downs and text fields representing settings that would otherwise need to be configured in a file by hand. This file is written to disk by the GUI as part of execution so that it can be used as a 'starting point' for more advanced use cases.
**Important Note**: Jupyter Notebook _cannot_ run _AIMSim_ from Binder. In order to actually run the _AIMSim_ GUI alongside this tutorial, you will need to download this notebook and run it from a local installation of Jupyter, or follow the installation instructions above and start _AIMSim_ from there. You can install Jupyter [here](https://jupyter.org/install).
<div>
<img src="attachment:image-6.png" width="250"/>
</div>
### A. Database File
This field accepts a file or directory path pointing to an input set of molecules in one of the accepted formats: SMILES strings, Protein Data Bank (PDB) files, or Excel files containing these data types.
Example:
`/Users/chemist/Desktop/SMILES_database.smi`
#### A1. Similarity Plots
Checking this box will generate a similarity distribution with _AIMSim's_ default color scheme and labels. To customize this plot further, edit the configuration file produced by _AIMSim_ by clicking `Open Config`, then re-submit the file through the command line interface.
Example:
<div>
<img src="attachment:image-4.png" width="200"/>
</div>
In addition to the similarity distribution, this will create a heatmap showing pairwise comparisons between all species in the database. As above, edit the configuration file to control the appearance of this plot.
Example:
<div>
<img src="attachment:image-5.png" width="200"/>
</div>
#### A2. Property Similarity Checkboxes
As in the previous two examples, checking this box will create a plot showing how a provided molecular property varies according to the chosen molecular fingerprint. For this to work, data must be provided in comma-separated value format (which can be generated from Excel with Save As... -> CSV) where the rightmost column is a numerical value (the property of interest).
Example:
| SMILES | Boiling Point |
|--------|---------------|
| C | -161.6 |
| CC | -89 |
| CCC | -42 |
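As a sketch of how such a file can be parsed (illustrative only, not _AIMSim's_ own reader; `read_property_csv` is a hypothetical helper), the rightmost column is read as the numeric property:

```python
import csv
import io

def read_property_csv(text):
    """Return (molecule, property) pairs; the rightmost column is numeric."""
    rows = list(csv.reader(io.StringIO(text)))
    return [(row[0], float(row[-1])) for row in rows[1:] if row]  # skip header

sample = "SMILES,Boiling Point\nC,-161.6\nCC,-89\nCCC,-42\n"
pairs = read_property_csv(sample)
# pairs -> [('C', -161.6), ('CC', -89.0), ('CCC', -42.0)]
```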
### B. Target Molecule
Provide a SMILES string representing a single molecule for comparison to the provided database of molecules. In the screenshot above, the provided molecule is "CO", methanol. Any valid SMILES string is accepted, and errors in the SMILES string will not affect the execution of other tasks.
#### B1. Similarity Heatmap
Like the similarity heatmap shown above, this checkbox will generate a similarity distribution for the single target molecule specified above against the entire molecular database. This is particularly useful when considering a new addition to a dataset, where _AIMSim_ can help determine whether the provided molecule's structural motifs are already well represented in the data.
### C. Similarity Measure
This dropdown includes all of the similarity metrics currently implemented in _AIMSim_. The default selected metric is likely a great starting point for most users, and the additional metrics are provided for advanced users or more specific use cases.
Available Similarity Measures are automatically updated according to the fingerprint currently selected. Not all metrics are compatible with all fingerprints, and _AIMSim_ will only allow the user to select valid combinations.
Below is a complete list of all similarity measures currently implemented in _AIMSim_.
| # | Name | Input Aliases |
| -- | ---------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------- |
| 1 | l0\_similarity | \- |
| 2 | l1\_similarity | manhattan\_similarity, taxicab\_similarity, city\_block\_similarity, snake\_similarity |
| 3 | l2\_similarity | euclidean\_similarity |
| 4 | cosine | driver-kroeber, ochiai |
| 5 | dice | sorenson, gleason |
| 6 | dice\_2 | \- |
| 7 | dice\_3 | \- |
| 8 | tanimoto | jaccard-tanimoto |
| 9 | simple\_matching | sokal-michener, rand |
| 10 | rogers-tanimoto | \- |
| 11 | russel-rao | \- |
| 12 | forbes | \- |
| 13 | simpson | \- |
| 14 | braun-blanquet | \- |
| 15 | baroni-urbani-buser | \- |
| 16 | kulczynski | \- |
| 17 | sokal-sneath | sokal-sneath\_1 |
| 18 | sokal-sneath\_2        | sokal-sneath-2, symmetric\_sokal\_sneath, symmetric-sokal-sneath |
| 19 | sokal-sneath\_3 | sokal-sneath-3 |
| 20 | sokal-sneath\_4 | sokal-sneath-4 |
| 21 | jaccard | \- |
| 22 | faith | \- |
| 23 | michael | \- |
| 24 | mountford | \- |
| 25 | rogot-goldberg | \- |
| 26 | hawkins-dotson | \- |
| 27 | maxwell-pilliner | \- |
| 28 | harris-lahey | \- |
| 29 | consonni−todeschini\_1 | consonni−todeschini-1 |
| 30 | consonni−todeschini\_2 | consonni−todeschini-2 |
| 31 | consonni−todeschini\_3 | consonni−todeschini-3 |
| 32 | consonni−todeschini\_4 | consonni−todeschini-4 |
| 33 | consonni−todeschini\_5 | consonni−todeschini-5 |
| 34 | austin-colwell | \- |
| 35 | yule\_1 | yule-1 |
| 36 | yule\_2 | yule-2 |
| 37 | holiday-fossum | fossum, holiday\_fossum |
| 38 | holiday-dennis | dennis, holiday\_dennis |
| 39 | cole\_1 | cole-1 |
| 40 | cole\_2 | cole-2 |
| 41 | dispersion | choi |
| 42 | goodman-kruskal | goodman\_kruskal |
| 43 | pearson-heron | pearson\_heron |
| 44 | sorgenfrei | \- |
| 45 | cohen | \- |
| 46 | peirce\_1 | peirce-1 |
| 47 | peirce\_2 | peirce-2 |
### D. Molecular Descriptor
This dropdown includes all of the molecular descriptors, mainly fingerprints, currently implemented in _AIMSim_:
|#|Fingerprint|
|---|---|
|1|morgan|
|2|topological|
|3|daylight|
Each of these fingerprints should be generally applicable for chemical problems, though they are all provided to serve as an easy way to compare the results according to fingerprinting approach.
Additional descriptors are included with _AIMSim_ which are not mathematically compatible with some of the similarity measures. When such a descriptor is selected, the corresponding similarity measure will be removed from the dropdown.
#### D1. Show Experimental Descriptors
This checkbox adds additional molecular descriptors into the `Molecular Descriptor` dropdown. These are marked as _experimental_ because they are generated using third-party libraries over which we have very little or no control. The descriptors generated by these libraries should be used only when the user has a very specific need for a descriptor as implemented in one of the packages below:
- [ccbmlib](https://doi.org/10.12688/f1000research.22292.2): All molecular fingerprints included in the `ccbmlib` library have been reproduced in _AIMSim_. Read about these fingerprints [in the `ccbmlib` repository](https://github.com/vogt-m/ccbmlib).
- [mordred](https://doi.org/10.1186/s13321-018-0258-y): All 1000+ descriptors included in `mordred` are available in _AIMSim_, though as of January 2022 it seems that `mordred` is no longer being maintained and has a significant number of bugs. Use at your own risk.
- [PaDELPy](https://doi.org/10.1002/jcc.21707): [This package](https://github.com/ecrl/padelpy) provides access to all of the molecular descriptors included as part of the PaDEL-Descriptor standalone Java program.
### E. Run
Pressing this button will call a number of input checkers to verify that the information entered into the fields above is valid, and then the tasks will be passed into _AIMSim_ for execution. Additional input to _AIMSim_ needed for some tasks may be requested from the command line.
For large collections of molecules with substantial run times, your operating system may report that _AIMSim_ has stopped responding and should be closed. This is likely not the case, and _AIMSim_ is simply executing your requested tasks. If unsure, try checking the `Verbose` checkbox discussed below, which will provide near-constant output while _AIMSim_ is running.
### F. Open Config
Using your system's default text editor, this button will open the configuration file generated by _AIMSim_ after pressing the run button. This is useful for fine-tuning your plots or re-running the exact same tasks in the future. This configuration file can also access additional functionality present in _AIMSim_ which is not included in the GUI, such as the sampling ratio for the data (covered in greater depth in the __Command Line and Configuration Files__ section below). To use this configuration file, include the name of the file after your call to _AIMSim_ on the command line, i.e.:
`aimsim aimsim-ui-config.yaml` or `python -m aimsim aimsim-ui-config.yaml`
Because of the way Python installs libraries like _AIMSim_, this file will likely be saved somewhere difficult to find among many other internal Python files. It is highly recommended to make a copy of this file in a more readily accessible location, or to copy its contents into another file. The name of the file can also be changed to something more meaningful (e.g., JWB-Solvent-Screen-123.yaml) as long as the file extension (.yaml) is kept.
### G. Verbose
Selecting this checkbox will cause _AIMSim_ to emit near-constant updates to the command line on its status during execution. This is useful to confirm that _AIMSim_ is executing and has not crashed, and also to provide additional information about errors in the input data.
For large datasets, this may generate a _significant_ amount of command line output. A pairwise comparison of 10,000 molecules would require 100,000,000 (10,000 \* 10,000) operations, generating at least that many lines of text in the console.
Example __Verbose__ output:
```
Reading SMILES strings from C:\path\to\file\small.smi
Processing O=S(C1=CC=CC=C1)(N2CCOCC2)=O (1/5)
Processing O=S(C1=CC=C(C(C)(C)C)C=C1)(N2CCOCC2)=O (2/5)
Processing O=S(C1=CC=C(C2=CC=CC=C2)C=C1)(N3CCOCC3)=O (3/5)
Processing O=S(C1=CC=C(OC)C=C1)(N2CCOCC2)=O (4/5)
Processing O=S(C1=CC=C(SC)C=C1)(N2CCOCC2)=O (5/5)
Computing similarity of molecule num 1 against 1
Computing similarity of molecule num 2 against 1
Computing similarity of molecule num 3 against 1
Computing similarity of molecule num 4 against 1
Computing similarity of molecule num 5 against 1
Computing similarity of molecule num 1 against 2
```
### H. Outlier Check
Checking this will have _AIMSim_ create an Isolation Forest (read more about this in [Sklearn's documentation](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html)) to identify possible outliers in the input database of molecules. The results from this approach are _non-deterministic_ because of the underlying algorithm driving the Isolation Forest, so this feature is intended to be a "sanity check" rather than a quantitative measure of 'outlier-ness'. To truly determine how different a single example molecule is to a set of molecules, use the `Compare Target Molecule` functionality discussed above.
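A minimal standalone sketch of the idea, using scikit-learn's `IsolationForest` directly rather than _AIMSim's_ wrapper (the toy feature vectors below stand in for molecular fingerprints):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# 50 'typical' points clustered in feature space, plus one far-away point.
X = np.vstack([rng.normal(0.0, 0.1, size=(50, 2)), [[8.0, 8.0]]])

# Fixing random_state makes the run repeatable; by default the
# algorithm is non-deterministic, as noted above.
clf = IsolationForest(contamination=1 / 51, random_state=0)
labels = clf.fit_predict(X)  # -1 marks a suspected outlier
```

With this setup, the lone far-away point should be the one flagged with `-1`, while the clustered points are labeled `1`.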
### I. Enable Multiple Workers
This checkbox will enable multiprocessing, speeding up execution time on the data. By default, _AIMSim_ will use __all__ physical cores available on your machine, which may impact performance of other programs.
The user should only enable this option with datasets of a few hundred or more molecules. This is because there is additional processing time associated with creating and destroying multiple processes, so for small datasets it is faster to simply execute the comparisons serially.
## Command Line and Configuration Files
For users who prefer to use _AIMSim_ without a user interface, a command line interface is provided. This requires the user to manually write configuration files, but allows access to more granular control and some additional features which are not included in the GUI. This can be invoked by typing `aimsim config.yaml` into your terminal or command window, where `config.yaml` is a configuration file you have provided or copied from the _AIMSim_ repository.
Below is a 'maximum specification' file to be used with _AIMSim_, showing all possible settings and tasks which _AIMSim_ can ingest. Any overall settings which are left out will be inferred by _AIMSim_, and any tasks which are not included will simply not be executed. Each field used in the file is explained afterward.
### Maximum Specification File
```
is_verbose (bool):
molecule_database (str): # path to excel / csv/ text file
molecule_database_source_type (str): # Type of source file. 'excel', 'csv', 'text'
similarity_measure (str): # Set to 'determine' for automatic identification
fingerprint_type (str): # Set to 'determine' for automatic identification
measure_id_subsample (float): # [0, 1] Subsample used for measure search
sampling_ratio (float): # [0, 1] Subsample used for all tasks
n_workers (int / str): # [int, 'auto'] number of processes, or let AIMSim decide
global_random_seed (int / str): # int or 'random'
tasks:
compare_target_molecule:
target_molecule_smiles (str):
    draw_molecule (bool): # If true, structures of the target and the most and least similar molecules are displayed
similarity_plot_settings:
plot_color (str): # Set a color recognized by matplotlib
shade (bool): # If true, the similarity density is shaded
plot_title (str):
log_file_path (str):
visualize_dataset:
heatmap_plot_settings:
cmap (str): # matplotlib recognized cmap (color map) used for heatmap.
plot_title (str):
annotate (bool): # If true, heatmap is annotated
similarity_plot_settings:
plot_color (str):
shade (bool): # If true, the similarity density is shaded
embedding_plot_settings:
plot_title (str):
embedding:
method (str): # algorithm used for embedding molecule set in 2 dimensions.
params: # method specific parameters
random_state (int): #used for seeding stochastic algorithms
see_property_variation_w_similarity:
log_file_path (str):
property_plot_settings:
plot_color (str): # Set a color recognized by matplotlib
identify_outliers:
random_state (int):
output (str): # filepath or "terminal" to control where results are shown
plot_outliers (bool):
pair_similarity_plot_settings: # Only meaningful if plot_outliers is True
plot_color (str): # Set a color recognized by matplotlib
cluster:
n_clusters (int):
clustering_method (str):
log_file_path (str):
cluster_file_path (str):
cluster_plot_settings:
cluster_colors (list): # Ensure len(list) >= n_cluster
embedding_plot_settings:
plot_title (str):
embedding:
method (str): # algorithm used for embedding molecule set in 2 dimensions.
params: # method specific parameters
random_state (int): #used for seeding stochastic algorithms
```
#### Overall _AIMSim_ Settings
These settings impact how all tasks run by _AIMSim_ will be executed.
- `is_verbose`: Must be either `True` or `False`. When `True`, _AIMSim_ will emit text updates to the command line during execution, useful for debugging.
- `molecule_database`: A file path to an Excel workbook, text file containing SMILES strings, or PDB file surrounded by single quotes, i.e. `'/User/my_user/smiles_database.smi'`. Can also point to a directory containing a group of PDB files, but the file path must end with a '/' (or '\' for Windows).
- `molecule_database_source_type`: The type of data to be input to _AIMSim_, being either `text`, `excel`, or `pdb`.
- `similarity_measure`: The similarity measure to be used during all tasks, chosen from the list of supported similarity measures. Automatic similarity measure determination is also supported, and can be performed by specifying `determine`.
- `fingerprint_type`: The fingerprint type or molecular descriptor to be used during all tasks, chosen from the list of supported descriptors. Automatic determination is also supported, and can be performed by specifying `determine`.
- `measure_id_subsample`: A decimal number between 0 and 1 specifying what fraction of the dataset to use for automatic determination of similarity measure and fingerprint. For a dataset of 10,000 molecules, setting this to `0.1` would run only 1000 randomly selected molecules, dramatically reducing runtime. This field is only needed if `determine` is used in either of the prior fields.
- `sampling_ratio`: A decimal number between 0 and 1 specifying what fraction of the dataset to use for tasks. For a dataset of 10,000 molecules, setting this to `0.1` would run only 1000 randomly selected molecules, dramatically reducing runtime.
- `n_workers`: Either an integer or the string 'auto'. With an integer, _AIMSim_ will create that many processes for its operation. This number should be less than or equal to the number of _physical_ CPU cores in your computer. Set this option to 'auto' to let _AIMSim_ configure multiprocessing for you.
- `global_random_seed`: Integer to be passed to all non-deterministic functions in _AIMSim_. By default, this value is 42 to ensure consistent results between subsequent executions of _AIMSim_. This seed will override the random seeds provided to any other _AIMSim_ tasks. Alternatively, specify 'random' to allow _AIMSim_ to randomly generate a seed.
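For illustration, a minimal example of this overall settings block might look like the following; the database path is a placeholder, and the chosen measure and fingerprint are just examples drawn from the supported options:

```
is_verbose: True
molecule_database: '/User/my_user/smiles_database.smi'  # placeholder path
molecule_database_source_type: 'text'
similarity_measure: 'tanimoto'
fingerprint_type: 'morgan_fingerprint'
sampling_ratio: 0.5
n_workers: 'auto'
global_random_seed: 42
```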
#### Task-Specific Settings
The settings fields below dictate the behavior of _AIMSim_ when performing its various tasks.
##### Compare Target Molecule
Generates a similarity distribution for the dataset compared to an individual molecule.
- `target_molecule_smiles`: SMILES string for the molecule used in comparison to the dataset.
- `draw_molecule`: If this is set to True, then _AIMSim_ draws the structure of the target molecule, and of the molecule most and least similar to it.
- `similarity_plot_settings`: Controls the appearance of the distribution.
- `plot_color`: Can be any color recognized by the _matplotlib_ library.
- `shade`: `True` or `False`, whether or not to shade in the area under the curve.
- `plot_title`: String containing text to be written above the plot.
- `log_file_path`: String specifying a file to write output to for the execution of this task. Useful for debugging.
##### Visualize Dataset
Generates a pairwise comparison matrix for all molecules in the dataset.
- `heatmap_plot_settings`: Control the appearance of the plot.
- `cmap`: _matplotlib_ recognized cmap (color map) used for heatmap.
- `plot_title`: String containing text to be written above the plot.
- `annotate`: `True` or `False`, controls whether or not _AIMSim_ will write annotations over the heatmap.
- `similarity_plot_settings`: Controls the appearance of the distribution.
- `plot_color`: Can be any color recognized by the _matplotlib_ library.
- `shade`: `True` or `False`, whether or not to shade in the area under the curve.
- `embedding_plot_settings`: Controls the lower dimensional embedding of the dataset.
- `plot_title`: String containing text to be written above the plot.
- `embedding`: Set the algorithmic aspects of the embedding
- `method`: Label specifying the algorithm embedding the molecule set in 2 dimensions.
- `params`: Specific hyperparameters which are passed through to the underlying implementation
- `random_state`: Number used for seeding stochastic algorithms
##### Property Variation Visualization
Creates a plot of how a given property in the input molecule set varies according to the structural fingerprint chosen.
- `log_file_path`: String specifying a file to write output to for the execution of this task. Useful for debugging or retrospection.
- `property_plot_settings`: Control the appearance of the plot.
- `plot_color`: Any color recognized by the _matplotlib_ library.
##### Identify Outliers
Trains an [IsolationForest](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.IsolationForest.html) on the input data to check for potential outliers.
- `random_state`: An integer to pass through to random_state in sklearn. _AIMSim_ sets this to 42 by default.
- `output`: A string which specifies where the output of the outlier search should go. This can be either a filepath or "terminal" to write the output directly to the terminal.
- `plot_outliers`: Set this to `True` to generate a 2D plot of which molecules are potential outliers.
- `pair_similarity_plot_settings`: Only meaningful if plot_outliers is True, allows access to plot settings.
- `plot_color`: Any color recognized by the _matplotlib_ library.
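For intuition, this outlier check amounts to fitting scikit-learn's `IsolationForest` on the descriptor matrix. The following is a standalone sketch on synthetic data; the random arrays are stand-ins for molecular fingerprints, not _AIMSim_'s actual pipeline:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
descriptors = rng.normal(size=(100, 16))   # stand-in for molecular fingerprints
descriptors[0] += 10.0                     # plant one obvious outlier

forest = IsolationForest(random_state=42).fit(descriptors)
labels = forest.predict(descriptors)       # -1 flags potential outliers, 1 inliers
outlier_rows = np.where(labels == -1)[0]
```

Passing a fixed `random_state`, as _AIMSim_ does by default, makes the flagged rows reproducible between runs.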
##### Cluster
Use a clustering algorithm to make groups from the database of molecules.
- `n_clusters`: The number of clusters to group the molecules into.
- `clustering_method`: Optional string specifying a clustering method implemented in `sklearn`, one of `kmedoids`, `ward`, or `complete_linkage`. `complete_linkage` will be chosen by default if no alternative is provided.
- `log_file_path`: String specifying a file to write output to for the execution of this task. Useful for debugging.
- `cluster_file_path`: String specifying a file path where _AIMSim_ will output the result of clustering. Useful for comparing multiple clustering approaches or saving the results of large data sets.
- `cluster_plot_settings`: Control the appearance of the clustering plot.
- `cluster_colors`: A list of strings, each of which is a color recognized by _matplotlib_ to use for the clusters. Must specify at least as many colors as there are clusters. Additional colors will be ignored.
- `embedding_plot_settings`: Controls the lower dimensional embedding of the dataset.
- `plot_title`: String containing text to be written above the plot.
- `embedding`: Set the algorithmic aspects of the embedding
- `method`: Label specifying the algorithm embedding the clustered molecule set in 2 dimensions.
- `params`: Specific hyperparameters which are passed through to the underlying implementation
- `random_state`: Number used for seeding stochastic algorithms
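For intuition, the `complete_linkage` option corresponds to agglomerative clustering with complete linkage, as available in scikit-learn. The following is a standalone sketch on synthetic descriptor vectors, not _AIMSim_'s actual implementation:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(42)
# two well-separated synthetic "descriptor" blobs of 20 molecules each
data = np.vstack([rng.normal(0.0, 0.5, size=(20, 4)),
                  rng.normal(5.0, 0.5, size=(20, 4))])

clusterer = AgglomerativeClustering(n_clusters=2, linkage="complete")
labels = clusterer.fit_predict(data)   # cluster assignment per molecule
```

Complete linkage merges the two groups whose *farthest* members are closest, which tends to produce compact clusters of similar diameter.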
## Writing Scripts with _AIMSim_
Advanced users may wish to use _AIMSim_ to create their own descriptors, use the descriptors provided in _AIMSim_ for something else entirely, or utilize the various similarity scores. Brief explanations of how to access the core functionality of _AIMSim_ from a Python script are shown below.
### Making Custom Descriptors
Any arbitrary numpy array can be provided as a molecular descriptor, though correct function with the similarity metrics provided with _AIMSim_ is not guaranteed.
```
from aimsim.ops.descriptor import Descriptor
desc = Descriptor()
```
With the `Descriptor` class instantiated, one can then call the methods to set the value(s) of the descriptor.
```
import numpy as np
custom_desc = np.array([1, 2, 3])
desc.set_manually(custom_desc)
desc.numpy_
```
The same result can be achieved by passing a numpy array as the `value` argument in the constructor for `Descriptor`, as shown below:
```
desc = Descriptor(custom_desc)
desc.numpy_
```
The above code is useful for individually changing a descriptor for one molecule in a `MoleculeSet` but is obviously not practical for bulk custom descriptors. To assign descriptors for an entire set of molecules at once, instantiate the `MoleculeSet` class and call the `_set_descriptor` method passing in a 2-dimensional numpy array of descriptors.
```
from aimsim.chemical_datastructures.molecule_set import MoleculeSet
molset = MoleculeSet(
    '/path/to/database/smiles.txt',
    'text',
    False,
    'tanimoto'
)
molset._set_descriptor(np.array([[1, 2, 3], [4, 5, 6]]))
```
### Generating Descriptors with _AIMSim_
Because _AIMSim_ is able to generate such a wide variety of molecular fingerprints and descriptors from only the SMILES strings, you may want to avoid reinventing the wheel and use the descriptors that it generates. There are two general approaches to doing this, and the approach used depends on what other code you already have in place:
1. If you have only SMILES strings to turn into fingerprints/descriptors, the `Molecule` class should be used to handle generating the molecule object and generating the descriptors.
2. If you have already created a molecule using `RDKit`, provide the existing molecule object in your call to the `Molecule` constructor.
These approaches are covered in this order below.
```
# with a SMILES string
smiles = "CO"
from aimsim.chemical_datastructures.molecule import Molecule
mol = Molecule(mol_smiles=smiles)
mol.set_descriptor(fingerprint_type="atom-pair_fingerprint")
mol.get_descriptor_val()
# with an RDKit molecule
from rdkit import Chem
mol_graph = Chem.MolFromSmiles(smiles)
mol = Molecule(mol_graph=mol_graph)
mol.set_descriptor(fingerprint_type="mordred:nAtom")
mol.get_descriptor_val()
```
### Accessing _AIMSim_ Similarity Metrics
As of January 2022, _AIMSim_ implements 47 unique similarity metrics for use in comparing two numbers and/or two sets of numbers. These metrics were pulled from a variety of sources, including some original implementations, so it may be of interest to use this code in your own work.
All of the similarity metrics can be accessed through the `SimilarityMeasure` class, as shown below.
```
from aimsim.ops.similarity_measures import SimilarityMeasure
from aimsim.ops.descriptor import Descriptor
from rdkit.Chem import MolFromSmiles
sim_mes = SimilarityMeasure("driver-kroeber")
desc_1 = Descriptor()
desc_1.make_fingerprint(
MolFromSmiles("COC"),
"morgan_fingerprint",
)
desc_2 = Descriptor()
desc_2.make_fingerprint(
MolFromSmiles("CCCC"),
"morgan_fingerprint",
)
out = sim_mes(
desc_1,
desc_2,
)
out
```
A complete list of supported similarity measures and the names by which _AIMSim_ recognizes them is listed in the GUI walkthrough section.
## Using AIMSim Tasks inside custom Python pipelines
In this section we will take a look at using some of the Tasks provided by AIMSim inside custom Python scripts.
### Visualizing a Dataset
First we create a dataset consisting of 100 samples, each containing 3 features. We will write it to an Excel file and load that file via _AIMSim_ to visualize it. <b>Note that</b> columns corresponding to sample names or features in the Excel file have to be prefixed by <i>'feature_'</i>
```
%%capture
!pip install openpyxl # for using the excel writer
import pandas as pd
from numpy.random import random
n_samples = 100
dataset = {'feature_f1': random(size=n_samples),
'feature_f2': random(size=n_samples),
'feature_f3': random(size=n_samples)}
df = pd.DataFrame(dataset)
dataset_file = 'dataset.xlsx'
df.to_excel(dataset_file)
```
Next we load the data into a MoleculeSet object. We use the arbitrary features defined above and L2 similarity to define similarity in this feature space.
```
from aimsim.chemical_datastructures import MoleculeSet
# load a MoleculeSet from the file
molecule_set = MoleculeSet(molecule_database_src=dataset_file,
molecule_database_src_type='excel',
similarity_measure='l2_similarity',
is_verbose=False)
```
Now we visualize it using the VisualizeDataset Task.
Note that the arguments to the VisualizeDataset constructor are used to edit the plot settings (such as colors and axis labels) as well as the type and parameters of the 2D embedding (here we use PCA to embed the dataset in 2 dimensions). A complete list of the keywords accepted and their default values can be found in the docstring of the constructor in our [documentation page](https://vlachosgroup.github.io/AIMSim/).
```
from aimsim.tasks import VisualizeDataset
# instantiate the task
viz = VisualizeDataset(embedding_plot_settings={"embedding": {"method": "pca"}})
viz(molecule_set)
```
### Clustering
The dataset can also be clustered using the ClusterData Task in _AIMSim_. The following code snippet clusters the dataset using the K-Medoids algorithm. Note that we reuse the MoleculeSet object, so we are still using L2 similarity for clustering. The data is grouped into 5 clusters and the 2D embedding is again generated using PCA. A complete list of the keywords accepted by the ClusterData constructor and their default values can be found in the docstring of the constructor on our [documentation page](https://vlachosgroup.github.io/AIMSim/).
```
from aimsim.tasks import ClusterData
clustering = ClusterData(n_clusters=5, # data is clustered into 5 clusters
clustering_method='kmedoids',
embedding_plot_settings={"embedding": {"method": "pca"}}
)
clustering(molecule_set)
```
# Section 3.3 Single Model Numerical Diagnostics
```
import os
import arviz as az
# Change working directory
if os.path.split(os.getcwd())[-1] != "notebooks":
os.chdir(os.path.join(".."))
NETCDF_DIR = "inference_data"
az.style.use('arviz-white')
```
## What happened to hard numbers?
One criticism of visual plots is that their interpretation is subjective. When running one model it's relatively simple to visually inspect the results, but when testing many models, looking over numerous trace plots and autocorrelation diagrams becomes time intensive for the statistician. (We'll talk more about multiple models in Section 5.) As far as it is possible to automate model checking, we would like to.
## $\hat{R}$ and Effective Sample Size
Recall the two pertinent questions MCMC practitioners should ask when making posterior estimates:
* Did the chains mix well?
* Did we get enough samples?
These questions are paraphrased from the paper published in March 2019, **Rank-normalization, folding, and localization: An improved $\hat{R}$ for assessing convergence of MCMC** by [Vehtari et al.](https://arxiv.org/abs/1903.08008), and thankfully the paper provides two numbers -- $\hat{R}$ and effective sample size (ESS) -- as tools to help answer these questions.
### Warning: Active Research Zone
Wow! A paper from 2019! Bayesian statistics is an academically active field, and numerous versions of the $\hat{R}$ and effective sample size calculations have been proposed over the years, the first of which was published in 1992. In this tutorial we will be covering the calculation from the 2019 paper (linked again [here](https://arxiv.org/abs/1903.08008)).
Just be mindful that when looking at older papers or results the diagnostics will answer the same question, but the exact calculation may differ.
Some prior papers are linked here for reference.
[Gelman and Rubin (1992)](https://projecteuclid.org/euclid.ss/1177011136)
[Brooks and Gelman (1998)](http://www2.stat.duke.edu/~scs/Courses/Stat376/Papers/ConvergeDiagnostics/BrooksGelman.pdf)
[Gelman et al. Bayesian Data Analysis (3 ed, 2014)](http://www.stat.columbia.edu/~gelman/book/)
### $\hat{R}$ (say "R hat")
The first question we'll try and answer is if the chains have mixed well. The summarized formula is
$$ \Large \hat{R} = \sqrt{\frac{\hat{\text{var}}^{+}(\theta \mid y)}{W}}$$
While the details of the calculation can be found in the paper, it compares the *between-chain variance* with the *within-chain variance* to calculate $\hat{R}$. The idea is that if all the chains have converged, the variance should be similar across all chains as well as in the pooled sample of all chains.
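To make the idea concrete, here is a minimal sketch of a split-$\hat{R}$ computed from the between-chain and within-chain variances. This is the classic formula without the 2019 paper's rank normalization and folding; in practice you would use `az.rhat`, which implements the full version:

```python
import numpy as np

def split_rhat(chains):
    """chains: array of shape (n_chains, n_draws)."""
    half = chains.shape[1] // 2
    # split each chain in two so poor within-chain mixing also inflates R-hat
    halves = np.concatenate([chains[:, :half], chains[:, half:2 * half]], axis=0)
    m, n = halves.shape
    W = halves.var(axis=1, ddof=1).mean()       # within-chain variance
    B = n * halves.mean(axis=1).var(ddof=1)     # between-chain variance
    var_plus = (n - 1) / n * W + B / n          # pooled posterior variance estimate
    return np.sqrt(var_plus / W)
```

For well-mixed chains, $W$ and $\hat{\text{var}}^{+}$ agree and the ratio is near 1; chains stuck in different regions inflate $B$ and push $\hat{R}$ above 1.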
### Effective sample size (also known as ESS, also known as $S_{eff}$)
As the name suggests effective sample size helps answer the question "Did we get enough samples?" The summarized formula is
$$ \large S_{\text{eff}} = \frac{NM}{\hat{\tau}} $$
where $N$ is the number of draws per chain, $M$ is the number of chains, and $\hat{\tau}$ is a quantity derived from the chain autocorrelations. The idea here is that in highly autocorrelated chains, while the computer *is* drawing samples, they are not fully effective because they do not do much to help estimate the posterior.
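A minimal single-chain sketch of this idea truncates the autocorrelation sum at the first negative term; the actual `az.ess` uses a more robust multi-chain estimator:

```python
import numpy as np

def simple_ess(chain):
    """Crude ESS: N / (1 + 2 * sum of leading positive autocorrelations)."""
    n = len(chain)
    x = chain - chain.mean()
    # normalized autocorrelation function at lags 0..n-1
    acf = np.correlate(x, x, mode="full")[n - 1:] / (n * x.var())
    tau = 1.0
    for lag in range(1, n):
        if acf[lag] < 0:          # truncate at the first negative autocorrelation
            break
        tau += 2.0 * acf[lag]
    return n / tau
```

Independent draws give an ESS close to $N$, while a strongly autocorrelated chain (e.g., an AR(1) process with coefficient 0.9) is worth far fewer effective draws.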
Let's walk through an example
## Reliving the Horror: Naive Metropolis-Hastings with Bad Initialization
In Section 3.1 we performed an inference run with nightmarish results. Let's load the data again here and plot the visual diagnostics once more.
```
data_bad_init = az.from_netcdf(os.path.join(NETCDF_DIR, "data_bad_init.nc"))
az.plot_trace(data_bad_init)
```
Looking again at the trace plots we can "see" that the results look bad, but like true statisticians let's use our numerical tools to quantify the results.
```
az.rhat(data_bad_init)
az.effective_sample_size(data_bad_init)
```
According to the math, $\hat{R} = 6.95$ and $S_{\text{eff}} = 2.33$, which raises the question: is this good or bad? Subjectively speaking, these are bad. Generally speaking
* We want $\hat{R}$ to be to close to 1 as possible
* We want ESS to be as close to the number of simulation draws as possible
$\hat{R}$ is telling us that the variances are not very consistent and $S_{\text{eff}}$ is telling us that the 400 draws (200 draws over 2 chains) we took were as useful as ~2 independent draws from the true distribution (for ESS this small, such an interpretation is necessarily silly).
Vehtari et al.'s paper gives specific advice for these diagnostics:
* Run at least 4 chains
* $\hat{R}$ should be less than 1.01
* ESS should be 400 "before we expect $\hat{R}$ to be useful"
In Notebook 3.4 we'll use $\hat{R}$ and $S_{\text{eff}}$ to compare each inference run.
# Part 2 - Refine Data
The second step for analyzing the data is to perform some additional preparations and enrichments. While the first step of storing the data into the structured zone should be mainly a technical conversion without losing any information, this next step will integrate some data and also preaggregate weather data to simplify working with it.
# 0 Prepare Python Environment
## 0.1 Spark Session
```
from pyspark.sql import SparkSession
import pyspark.sql.functions as f
if not 'spark' in locals():
spark = SparkSession.builder \
.master("local[*]") \
.config("spark.driver.memory","64G") \
.getOrCreate()
spark.version
```
## 0.2 Matplotlib
```
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.patches as mpatches
from pandas.plotting import register_matplotlib_converters
register_matplotlib_converters()
```
# 1 Read Taxi Data
Now we can read in the taxi data from the structured zone.
## 1.1 Trip Data
Let us load the NYC Taxi trip data from the Hive table `taxi.trip` and let us display the first 10 records.
```
trip_data = spark.read.table("taxi.trip")
trip_data.limit(10).toPandas()
```
Just to be sure, let us inspect the schema. It should match exactly the specified one.
```
trip_data.printSchema()
```
## 1.2 Fare information
Now we read in the second table `taxi.fare` containing the trips fare information.
```
fare_data = spark.read.table("taxi.fare")
fare_data.limit(10).toPandas()
fare_data.printSchema()
```
## 1.3 Join datasets
We can now join both the trip information and the fare information together in order to get a complete picture. Since the trip records do not contain a technical unique key, we use the following columns as the composite primary key of each trip:
* medallion
* hack_license
* vendor_id
* pickup_datetime
Finally the result is stored into the refined zone into the Hive table `refined.taxi_trip`.
```
# Create Hive database 'refined'
spark.sql("CREATE DATABASE IF NOT EXISTS refined")
# Join trip_data with fare_data using the columns "medallion", "hack_license", "vendor_id", "pickup_datetime"
taxi_trips = trip_data.join(fare_data,["medallion", "hack_license", "vendor_id", "pickup_datetime"], how="left_outer")
# Save taxi_trips into the Hive table "refined.taxi_trip"
taxi_trips.write.format("parquet").saveAsTable("refined.taxi_trip")
```
### Read from Refined Zone
```
taxi_trips = spark.read.table("refined.taxi_trip")
taxi_trips.limit(10).toPandas()
```
Let us have a look at the schema of the refined table
```
taxi_trips.printSchema()
```
Let us count the number of records in the table
```
taxi_trips.count()
```
# 2. Weather Data
The weather data also requires some additional preprocessing, especially if we want to join against it. The primary problem is that measurements may happen at different time intervals, and not all measurements contain all metrics. Therefore we preaggregate the weather data into hourly and daily measurements, which can be used directly for joining.
## 2.1 Weather Data
We already have weather data, but only as individual measurements. We do not know how many measurements there are per hour or per day, so the raw table is not very usable for joining. Instead we'd like hourly and daily weather tables containing average temperature, wind speed and precipitation. Since we are only interested in the year 2013, we load only that specific year.
```
weather = spark.read.table("isd.weather").where(f.col("year") == 2013)
weather.limit(10).toPandas()
```
## 2.2 Calculate derived metrics and preaggregate data
In order to simplify joining against weather data, we now preaggregate weather measurements to a single record per weather station and hour or per day.
### Hourly Preaggregation
For the hourly aggregation, we want to get the following columns
* `date` - day of the measurements. The day can be extracted from the timestamp column `ts` by using the Spark function `to_date` (available in the imported module `f`)
* `hour` - hour of the measurements. The hour can be extracted using the Spark function `hour`
* Grouping should be performed on the weather station IDs `usaf` and `wban` together with both extracted time columns `date` and `hour`
* For the following metrics, we are interested in the grouped averages: `wind_speed`, `air_temperature` and `precipitation_depth`
When performing the aggregation, you should ignore invalid measurements. This can be done by using the PySpark function `f.when` to conditionally aggregate only values where the corresponding quality flag (`wind_speed_qual` and `air_temperature_qual`) is not `9`. Note that it is enough to pick up only the valid values and let the `when` function return `NULL` for invalid values, since `NULL` is ignored in aggregations.
For averaging the precipitation, you should also only pick values where `precipitation_hours` equals `1`.
The final DataFrame should have the following columns (you might need to specify explicit aliases):
* `usaf`
* `wban`
* `date`
* `hour` (0-23)
* `wind_speed`
* `temperature`
* `precipitation`
```
hourly_weather = weather \
.withColumn("date", f.to_date(weather["ts"])) \
.withColumn("hour", f.hour(weather["ts"])) \
.groupBy("usaf", "wban", "date", "hour").agg(
f.avg(f.when(weather["wind_speed_qual"] != 9, weather["wind_speed"])).alias("wind_speed"),
f.avg(f.when(weather["air_temperature_qual"] != 9, weather["air_temperature"])).alias("temperature"),
f.avg(f.when(weather["precipitation_hours"] == 1, weather["precipitation_depth"])).alias("precipitation")
)
hourly_weather.limit(10).toPandas()
```
### Daily Preaggregation
In addition to the hourly metrics, we also preaggregate the data to daily records. This can easily be performed based on the hourly aggregations with a grouping on `usaf`, `wban` and `date`. Again we want to have the metrics `temperature`, `wind_speed` and `precipitation`. For the first two metrics, we are interested in the average (as this seems to make sense), while for precipitation we are interested in the sum (total amount of rainfall per day).
```
daily_weather = hourly_weather.groupBy("usaf", "wban", "date")\
.agg(
f.avg("temperature").alias("temperature"),
f.avg("wind_speed").alias("wind_speed"),
f.sum("precipitation").alias("precipitation"),
)
daily_weather.limit(10).toPandas()
```
### Save Preaggregated Weather
Finally we save both tables (hourly and daily weather), so we can directly reuse the data in the next steps.
```
hourly_weather.write.format("parquet").mode("overwrite").saveAsTable("refined.weather_hourly")
daily_weather.write.format("parquet").mode("overwrite").saveAsTable("refined.weather_daily")
```
## 2.3 Reload Data and draw Pictures
Now let us reload the data (just to make sure everything worked out nicely) and draw some pictures. We use a single station (which, by pure coincidence, is a weather station in NYC).
```
daily_weather = spark.read.table("refined.weather_daily")
nyc_station_usaf = "725053"
nyc_station_wban = "94728"
pdf = daily_weather \
.filter((daily_weather["usaf"] == nyc_station_usaf) & (daily_weather["wban"] == nyc_station_wban)) \
.orderBy("date") \
.toPandas()
```
### Wind Speed
The first picture will simply contain the wind speed for every day in 2013.
```
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["wind_speed"])
```
### Air Temperature
The next picture contains the average air temperature for every day in 2013.
```
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["temperature"])
```
### Precipitation
The last picture contains the precipitation for every day in 2013.
```
# Make a Plot
plt.figure(figsize=(16, 6), dpi=80, facecolor='w', edgecolor='k')
plt.plot(pdf["date"],pdf["precipitation"])
```
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from __future__ import division
import pandas as pd
V = np.linspace(0,1000,1000)
plt.plot(V, 6.43 - 5e-14*(np.exp(V/2.6) - 1)) #in V and A
plt.ylim(0,10)
plt.xlim(0,100)
V = np.linspace(0,100,1000)
I_o = 5e-14 #A
I_L = 6.43 #A
R_s = 0 #ohm
R_sh = 1e6 #ohm
plt.plot(V, I_L - I_o*(np.exp(V/2.6) - 1)) #in V and A
plt.ylim(0,10)
plt.xlim(0,100)
I = 100
plt.plot(I - I_L - I_o*(np.exp((V+I*R_s)/2.6) - 1)) #this
from sympy import solve, Symbol, exp
x = Symbol('x')
y = Symbol('y')
solve([x + 5*y - 2, -3*x + 6*y - 15], [x, y])
V = Symbol('V')
I = Symbol('I')
I_o = 5e-14 #A
I_L = 6.43 #A
R_s = 10 #mohm
R_sh = 1e5 #mohm
n = 1
#solve(I - I_L - I_o*(exp( (V + I * R_s) /(26*n)) - 1) , I)
from scipy.optimize import fsolve
import math
def equations(p):
x, y = p
return (x+y**2-4, math.exp(x) + x*y - 3)
x, y = fsolve(equations, (1, 1))
print(equations((x, y)))
import scipy.optimize as optimize
from math import sqrt
def f(c):
return sqrt(c[0]**2 + c[1]**2 + (c[2]-2)**2)
result = optimize.minimize(f, [1,1,1])
print(result.x)  # the minimizing point
import scipy.optimize as optimize
from math import sqrt
# I, c[0]
I_L = 6.43 #A
# I_o = 5e-14 #A, c[2]
# n = 1, c[2]
V = 1 #mV
# R_s = 1 #mohm, c[3]
# R_sh = 1e5 #mohm c[4]
def f(c):
    I = c[0]
    # magnitude of the diode-equation residual; minimizing it solves for I
    return abs(I - I_L - I_o*(math.exp((V + I*R_s)/(26*n)) - 1))
result = optimize.minimize(f, [1,1,1])
print(result.x)  # the minimizing point
irrad_df = pd.read_csv('data/ASTMG173.csv')
irrad_df.head()
irrad_df['globaltilt'].plot()
eqe_df = pd.read_csv('data/eqe_sunpower_25.csv')
eqe_df.head()
eqe_df['percent'].values
from scipy import interpolate
x = eqe_df['wavelength'].values
y = eqe_df['percent'].values
f = interpolate.interp1d(x, y)
wav_new = np.arange(300,1180, 0.5)
eqe_new = f(wav_new) # use interpolation function returned by `interp1d`
plt.plot(x, y, 'o', wav_new, eqe_new, '-')
plt.show()
irrad_df[irrad_df['wavelength']==300]
irrad_df[irrad_df['wavelength']==1180]
from scipy import interpolate
x = irrad_df['wavelength'][40:1021].values
irrad_global = irrad_df['globaltilt'][40:1021].values #AM1.5 spectrum
f = interpolate.interp1d(x, irrad_global)
wav_new = np.arange(300,1180, 0.5) #300 nm to 1180 nm with 0.5 nm spacing
irrad_new = f(wav_new) #recreate AM1.5 with 0.5 nm spacing
plt.plot(x, irrad_global, 'o', wav_new, irrad_new, '-')
plt.show()
plt.plot(wav_new,eqe_new*irrad_new*wav_new)
(1/1240)*sum(eqe_new*irrad_new*wav_new)*.5/1e3 #mA/cm^2
iv_df = pd.read_csv('data/i_v_sunpower_25.csv')
plt.plot(iv_df.voltage,iv_df.current, 'r--')
I_o = 3.6e-10 #mA/cm^2
I_L = 41.74 #mA/cm^2
plt.plot(iv_df.voltage, I_L - I_o*(np.exp(iv_df.voltage/.0283) - 1)) #in V and A
plt.ylim(0,50)
```
# Build a Machine Learning Workflow with Amazon SageMaker Processing and the AWS Step Functions Data Science SDK
Amazon SageMaker Processing lets you easily run data pre-/post-processing and model evaluation workloads on the Amazon SageMaker platform. A Processing job downloads input data from Amazon Simple Storage Service (Amazon S3) and uploads its results back to Amazon S3.
The Step Functions SDK enables data scientists to easily create and run machine learning workflows using AWS Step Functions and Amazon SageMaker. For more information, see the following documentation:
* [AWS Step Functions](https://aws.amazon.com/step-functions/)
* [AWS Step Functions Developer Guide](https://docs.aws.amazon.com/step-functions/latest/dg/welcome.html)
* [AWS Step Functions Data Science SDK](https://aws-step-functions-data-science-sdk.readthedocs.io)
The SageMaker Processing step of the AWS Step Functions Data Science SDK, [ProcessingStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/stable/sagemaker.html#stepfunctions.steps.sagemaker.ProcessingStep), lets machine learning engineers integrate SageMaker Processing directly into their systems as part of an AWS Step Functions workflow.
This notebook shows how to use the AWS Step Functions Data Science SDK to build a machine learning workflow that uses SageMaker Processing jobs for data preprocessing, model training, and model evaluation. The high-level steps are:
1. Use the `ProcessingStep` of the AWS Step Functions Data Science SDK to run a SageMaker Processing job that executes a scikit-learn script for data preprocessing, feature engineering, and splitting into training and test sets
1. Use the `TrainingStep` to train a model on the preprocessed training data
1. Use the `ProcessingStep` to evaluate the trained model on the preprocessed test data
This notebook uses the [Census-Income KDD Dataset](https://archive.ics.uci.edu/ml/datasets/Census-Income+%28KDD%29). We select features from this dataset, clean the data, transform it into a form usable by a binary classification model, and finally split it into training and test sets. Using a logistic regression model, we predict whether a census respondent's income is above or below $50,000. The dataset is heavily class-imbalanced, with most records labeled as earning less than $50,000.
## Setup
Install the libraries required to run this notebook.
```
# Import the latest sagemaker, stepfunctions and boto3 SDKs
import sys
!{sys.executable} -m pip install --upgrade pip
!{sys.executable} -m pip install -qU awscli boto3 "sagemaker>=2.0.0"
!{sys.executable} -m pip install -qU "stepfunctions>=2.0.0"
!{sys.executable} -m pip show sagemaker stepfunctions
```
### Import the required modules
```
import io
import logging
import os
import random
import time
import uuid
import boto3
import stepfunctions
from stepfunctions import steps
from stepfunctions.inputs import ExecutionInput
from stepfunctions.steps import (
Chain,
ChoiceRule,
ModelStep,
ProcessingStep,
TrainingStep,
TransformStep,
)
from stepfunctions.template import TrainingPipeline
from stepfunctions.template.utils import replace_parameters_with_jsonpath
from stepfunctions.workflow import Workflow
import sagemaker
from sagemaker import get_execution_role
from sagemaker.amazon.amazon_estimator import image_uris
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.s3 import S3Uploader
from sagemaker.sklearn.processing import SKLearnProcessor
# SageMaker Session
sagemaker_session = sagemaker.Session()
region = sagemaker_session.boto_region_name
# SageMaker Execution Role
# You can use sagemaker.get_execution_role() if running inside sagemaker's notebook instance
role = get_execution_role()
```
次に、ノートブックから Step Functions を実行するための IAM ロール設定を行います。
## ノートブックインスタンスの IAM ロールに権限を追加
以下の手順を実行して、ノートブックインスタンスに紐づけられた IAM ロールに、AWS Step Functions のワークフローを作成して実行するための権限を追加してください。
1. [Amazon SageMaker console](https://console.aws.amazon.com/sagemaker/) を開く
2. **ノートブックインスタンス** を開いて現在使用しているノートブックインスタンスを選択する
3. **アクセス許可と暗号化** の部分に表示されている IAM ロールへのリンクをクリックする
4. IAM ロールの ARN は後で使用するのでメモ帳などにコピーしておく
5. **ポリシーをアタッチします** をクリックして `AWSStepFunctionsFullAccess` を検索する
6. `AWSStepFunctionsFullAccess` の横のチェックボックスをオンにして **ポリシーのアタッチ** をクリックする
もしこのノートブックを SageMaker のノートブックインスタンス以外で実行している場合、その環境で AWS CLI 設定を行ってください。詳細は [Configuring the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html) をご参照ください。
Next, create the execution role that Step Functions will use.
## Create an Execution Role for Step Functions
The Step Functions workflow you create needs an IAM role in order to interact with other AWS services.
1. Go to the [IAM console](https://console.aws.amazon.com/iam/)
2. Select **Roles** in the left menu and click **Create role**
3. Under **Choose a use case**, select **Step Functions**
4. Click **Next: Permissions**, **Next: Tags**, and **Next: Review**
5. Enter `AmazonSageMaker-StepFunctionsWorkflowExecutionRole` as the **Role name** and click **Create role**
Next, attach an AWS managed IAM policy to the role you just created, as follows.
1. Go to the [IAM console](https://console.aws.amazon.com/iam/)
2. Select **Roles** in the left menu
3. Search for the `AmazonSageMaker-StepFunctionsWorkflowExecutionRole` you just created
4. Click **Attach policies** and search for `CloudWatchEventsFullAccess`
5. Check the box next to `CloudWatchEventsFullAccess` and click **Attach policy**
Next, attach another, new policy to the role. As a best practice, the following steps grant access only to the specific resources and actions needed to run this example.
1. Select **Roles** in the left menu
1. Search for the `AmazonSageMaker-StepFunctionsWorkflowExecutionRole` you just created
1. Click **Attach policies**, then **Create policy**
1. Click the **JSON** tab and paste the content below<br>
Replace `NOTEBOOK_ROLE_ARN` with the ARN of the IAM role used by your notebook instance.
```json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"events:PutTargets",
"events:DescribeRule",
"events:PutRule"
],
"Resource": [
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTrainingJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTransformJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForSageMakerTuningJobsRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForECSTaskRule",
"arn:aws:events:*:*:rule/StepFunctionsGetEventsForBatchJobsRule"
]
},
{
"Sid": "VisualEditor1",
"Effect": "Allow",
"Action": "iam:PassRole",
"Resource": "NOTEBOOK_ROLE_ARN",
"Condition": {
"StringEquals": {
"iam:PassedToService": "sagemaker.amazonaws.com"
}
}
},
{
"Sid": "VisualEditor2",
"Effect": "Allow",
"Action": [
"batch:DescribeJobs",
"batch:SubmitJob",
"batch:TerminateJob",
"dynamodb:DeleteItem",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"ecs:DescribeTasks",
"ecs:RunTask",
"ecs:StopTask",
"glue:BatchStopJobRun",
"glue:GetJobRun",
"glue:GetJobRuns",
"glue:StartJobRun",
"lambda:InvokeFunction",
"sagemaker:CreateEndpoint",
"sagemaker:CreateEndpointConfig",
"sagemaker:CreateHyperParameterTuningJob",
"sagemaker:CreateModel",
"sagemaker:CreateProcessingJob",
"sagemaker:CreateTrainingJob",
"sagemaker:CreateTransformJob",
"sagemaker:DeleteEndpoint",
"sagemaker:DeleteEndpointConfig",
"sagemaker:DescribeHyperParameterTuningJob",
"sagemaker:DescribeProcessingJob",
"sagemaker:DescribeTrainingJob",
"sagemaker:DescribeTransformJob",
"sagemaker:ListProcessingJobs",
"sagemaker:ListTags",
"sagemaker:StopHyperParameterTuningJob",
"sagemaker:StopProcessingJob",
"sagemaker:StopTrainingJob",
"sagemaker:StopTransformJob",
"sagemaker:UpdateEndpoint",
"sns:Publish",
"sqs:SendMessage"
],
"Resource": "*"
}
]
}
```
5. Click **Next: Tags** and **Next: Review**
6. Enter `AmazonSageMaker-StepFunctionsWorkflowExecutionPolicy` as the **Name** and click **Create policy**
7. Select **Roles** in the left menu and search for `AmazonSageMaker-StepFunctionsWorkflowExecutionRole`
8. Click **Attach policies**
9. Search for the `AmazonSageMaker-StepFunctionsWorkflowExecutionPolicy` policy created in the previous step, check its box, and click **Attach policy**
10. Copy the **Role ARN** of `AmazonSageMaker-StepFunctionsWorkflowExecutionRole` and paste it into the cell below
```
# paste the AmazonSageMaker-StepFunctionsWorkflowExecutionRole ARN from above
workflow_execution_role = "arn:aws:iam::<your-account-id>:role/AmazonSageMaker-StepFunctionsWorkflowExecutionRole"
```
### Create the Input Schema for Step Functions Workflow Executions
When you execute a Step Functions workflow, you can pass in parameters and other values as arguments. Here we define the schema for those arguments.
```
# Generate unique names for Pre-Processing Job, Training Job, and Model Evaluation Job for the Step Functions Workflow
training_job_name = "scikit-learn-training-{}".format(
uuid.uuid1().hex
) # Each Training Job requires a unique name
preprocessing_job_name = "scikit-learn-sm-preprocessing-{}".format(
uuid.uuid1().hex
) # Each Preprocessing job requires a unique name,
evaluation_job_name = "scikit-learn-sm-evaluation-{}".format(
uuid.uuid1().hex
) # Each Evaluation Job requires a unique name
# SageMaker expects unique names for each job, model and endpoint.
# If these names are not unique the execution will fail. Pass these
# dynamically for each execution using placeholders.
execution_input = ExecutionInput(
schema={
"PreprocessingJobName": str,
"TrainingJobName": str,
"EvaluationProcessingJobName": str,
}
)
```
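Since SageMaker rejects duplicate job names, it is convenient to regenerate fresh names for every workflow execution. A minimal helper that mirrors the naming scheme in the cell above (the name `make_job_name` is our own, not part of either SDK):

```python
import uuid

def make_job_name(prefix):
    # SageMaker job names must be unique per job; appending a hex
    # UUID1 suffix mirrors the scheme used in the cell above
    return "{}-{}".format(prefix, uuid.uuid1().hex)

name = make_job_name("scikit-learn-training")
print(name)
```

Calling the helper before each `branching_workflow.execute(...)` yields a fresh set of names to pass through the `ExecutionInput` placeholders.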
## Data Preprocessing and Feature Engineering
Before writing the data cleansing, preprocessing, and feature engineering script, let's peek at the first 10 rows of the dataset. The target variable is the `income` column. The features we will select are `age`, `education`, `major industry code`, `class of worker`, `num persons worked for employer`, `capital gains`, `capital losses`, and `dividends from stocks`.
```
import pandas as pd
input_data = "s3://sagemaker-sample-data-{}/processing/census/census-income.csv".format(region)
df = pd.read_csv(input_data, nrows=10)
df.head(n=10)
```
Create an `SKLearnProcessor` to run the scikit-learn preprocessing script. It runs a Processing job using the scikit-learn container image provided by SageMaker.
```
sklearn_processor = SKLearnProcessor(
framework_version="0.20.0",
role=role,
instance_type="ml.m5.xlarge",
instance_count=1,
max_runtime_in_seconds=1200,
)
```
Running the cell below creates `preprocessing.py`, the preprocessing script. If you edit the cell and run it again, `preprocessing.py` is overwritten. The script performs the following:
* Remove duplicate and conflicting rows
* Convert the target column `income` from a categorical variable into a column with two labels
* Bin `age` and `num persons worked for employer`, converting them from numeric to categorical features
* Scale the continuous features `capital gains`, `capital losses`, and `dividends from stocks` to make them easier to learn from
* Encode `education`, `major industry code`, and `class of worker` to make them easier to learn from
* Split the data into training and test sets, and save the features and labels of each
The training script will train a model using the preprocessed training data and labels. The model evaluation script will then evaluate the trained model using the preprocessed test data and labels.
```
%%writefile preprocessing.py
import argparse
import os
import warnings
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder, LabelBinarizer, KBinsDiscretizer
from sklearn.preprocessing import PolynomialFeatures
from sklearn.compose import make_column_transformer
from sklearn.exceptions import DataConversionWarning
warnings.filterwarnings(action="ignore", category=DataConversionWarning)
columns = [
"age",
"education",
"major industry code",
"class of worker",
"num persons worked for employer",
"capital gains",
"capital losses",
"dividends from stocks",
"income",
]
class_labels = [" - 50000.", " 50000+."]
def print_shape(df):
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data shape: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--train-test-split-ratio", type=float, default=0.3)
args, _ = parser.parse_known_args()
print("Received arguments {}".format(args))
input_data_path = os.path.join("/opt/ml/processing/input", "census-income.csv")
print("Reading input data from {}".format(input_data_path))
df = pd.read_csv(input_data_path)
df = pd.DataFrame(data=df, columns=columns)
df.dropna(inplace=True)
df.drop_duplicates(inplace=True)
df.replace(class_labels, [0, 1], inplace=True)
negative_examples, positive_examples = np.bincount(df["income"])
print(
"Data after cleaning: {}, {} positive examples, {} negative examples".format(
df.shape, positive_examples, negative_examples
)
)
split_ratio = args.train_test_split_ratio
print("Splitting data into train and test sets with ratio {}".format(split_ratio))
X_train, X_test, y_train, y_test = train_test_split(
df.drop("income", axis=1), df["income"], test_size=split_ratio, random_state=0
)
preprocess = make_column_transformer(
(
["age", "num persons worked for employer"],
KBinsDiscretizer(encode="onehot-dense", n_bins=10),
),
(
["capital gains", "capital losses", "dividends from stocks"],
StandardScaler(),
),
(
["education", "major industry code", "class of worker"],
OneHotEncoder(sparse=False),
),
)
print("Running preprocessing and feature engineering transformations")
train_features = preprocess.fit_transform(X_train)
test_features = preprocess.transform(X_test)
print("Train data shape after preprocessing: {}".format(train_features.shape))
print("Test data shape after preprocessing: {}".format(test_features.shape))
train_features_output_path = os.path.join("/opt/ml/processing/train", "train_features.csv")
train_labels_output_path = os.path.join("/opt/ml/processing/train", "train_labels.csv")
test_features_output_path = os.path.join("/opt/ml/processing/test", "test_features.csv")
test_labels_output_path = os.path.join("/opt/ml/processing/test", "test_labels.csv")
print("Saving training features to {}".format(train_features_output_path))
pd.DataFrame(train_features).to_csv(train_features_output_path, header=False, index=False)
print("Saving test features to {}".format(test_features_output_path))
pd.DataFrame(test_features).to_csv(test_features_output_path, header=False, index=False)
print("Saving training labels to {}".format(train_labels_output_path))
y_train.to_csv(train_labels_output_path, header=False, index=False)
print("Saving test labels to {}".format(test_labels_output_path))
y_test.to_csv(test_labels_output_path, header=False, index=False)
```
Upload the preprocessing script to S3.
```
PREPROCESSING_SCRIPT_LOCATION = "preprocessing.py"
input_code = sagemaker_session.upload_data(
PREPROCESSING_SCRIPT_LOCATION,
bucket=sagemaker_session.default_bucket(),
key_prefix="data/sklearn_processing/code",
)
```
Create the S3 paths where the outputs of the Processing job will be stored.
```
s3_bucket_base_uri = "{}{}".format("s3://", sagemaker_session.default_bucket())
output_data = "{}/{}".format(s3_bucket_base_uri, "data/sklearn_processing/output")
preprocessed_training_data = "{}/{}".format(output_data, "train_data")
```
### Create the `ProcessingStep`
Now let's create a [ProcessingStep](https://aws-step-functions-data-science-sdk.readthedocs.io/en/stable/sagemaker.html#stepfunctions.steps.sagemaker.ProcessingStep) that launches the SageMaker Processing job.
This step uses the SKLearnProcessor defined in the previous step, with input and output information added.
#### Create [ProcessingInputs](https://sagemaker.readthedocs.io/en/stable/api/training/processing.html#sagemaker.processing.ProcessingInput) and [ProcessingOutputs](https://sagemaker.readthedocs.io/en/stable/api/training/processing.html#sagemaker.processing.ProcessingOutput) objects to add input and output information to the SageMaker Processing job
```
inputs = [
ProcessingInput(
source=input_data, destination="/opt/ml/processing/input", input_name="input-1"
),
ProcessingInput(
source=input_code,
destination="/opt/ml/processing/input/code",
input_name="code",
),
]
outputs = [
ProcessingOutput(
source="/opt/ml/processing/train",
destination="{}/{}".format(output_data, "train_data"),
output_name="train_data",
),
ProcessingOutput(
source="/opt/ml/processing/test",
destination="{}/{}".format(output_data, "test_data"),
output_name="test_data",
),
]
```
#### Create the `ProcessingStep`
```
# preprocessing_job_name = generate_job_name()
processing_step = ProcessingStep(
"SageMaker pre-processing step",
processor=sklearn_processor,
job_name=execution_input["PreprocessingJobName"],
inputs=inputs,
outputs=outputs,
container_arguments=["--train-test-split-ratio", "0.2"],
container_entrypoint=["python3", "/opt/ml/processing/input/code/preprocessing.py"],
)
```
## Train a Model on the Preprocessed Data
Create an `SKLearn` estimator that runs a training job with the training script `train.py`. It will be used later when creating the `TrainingStep`.
```
from sagemaker.sklearn.estimator import SKLearn
sklearn = SKLearn(
entry_point="train.py",
train_instance_type="ml.m5.xlarge",
role=role,
framework_version="0.20.0",
py_version="py3",
)
```
The training script `train.py` trains a logistic regression model and saves it to `/opt/ml/model`. At the end of the training job, Amazon SageMaker compresses the model saved there into `model.tar.gz` and uploads it to S3.
```
%%writefile train.py
import os
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.externals import joblib
if __name__ == "__main__":
training_data_directory = "/opt/ml/input/data/train"
train_features_data = os.path.join(training_data_directory, "train_features.csv")
train_labels_data = os.path.join(training_data_directory, "train_labels.csv")
print("Reading input data")
X_train = pd.read_csv(train_features_data, header=None)
y_train = pd.read_csv(train_labels_data, header=None)
model = LogisticRegression(class_weight="balanced", solver="lbfgs")
print("Training LR model")
model.fit(X_train, y_train)
model_output_directory = os.path.join("/opt/ml/model", "model.joblib")
print("Saving model to {}".format(model_output_directory))
joblib.dump(model, model_output_directory)
```
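To make the `model.tar.gz` mechanics concrete, here is a local sketch of the round trip: everything saved under `/opt/ml/model` is packed into `model.tar.gz` at the end of training, and can be unpacked and reloaded later, as the evaluation script will do. All paths here are illustrative, and the sketch uses `pickle` to stay dependency-free; `joblib`, which `train.py` uses, behaves analogously.

```python
import os
import pickle
import tarfile
import tempfile

workdir = tempfile.mkdtemp()
model_dir = os.path.join(workdir, "model")  # stands in for /opt/ml/model
os.makedirs(model_dir)
with open(os.path.join(model_dir, "model.joblib"), "wb") as f:
    pickle.dump({"coef": [0.5, -1.2]}, f)  # stand-in for the trained model

# Pack the saved model, as SageMaker does at the end of the training job
archive_path = os.path.join(workdir, "model.tar.gz")
with tarfile.open(archive_path, mode="w:gz") as archive:
    archive.add(os.path.join(model_dir, "model.joblib"), arcname="model.joblib")

# Unpack and reload, as evaluation.py will do with the real artifact
extract_dir = os.path.join(workdir, "restored")
with tarfile.open(archive_path) as tar:
    tar.extractall(path=extract_dir)
with open(os.path.join(extract_dir, "model.joblib"), "rb") as f:
    restored = pickle.load(f)
print(restored)
```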
### Create the `TrainingStep`
```
training_step = steps.TrainingStep(
"SageMaker Training Step",
estimator=sklearn,
data={"train": sagemaker.TrainingInput(preprocessed_training_data, content_type="text/csv")},
job_name=execution_input["TrainingJobName"],
wait_for_completion=True,
)
```
## Model Evaluation
`evaluation.py` is the model evaluation script. Because it uses scikit-learn, we reuse the `SKLearnProcessor` from the earlier step. The script takes the trained model and the test dataset as input, and outputs a JSON file containing the classification metrics for each class (precision, recall, and F1 score) along with accuracy and ROC AUC.
```
%%writefile evaluation.py
import json
import os
import tarfile
import pandas as pd
from sklearn.externals import joblib
from sklearn.metrics import classification_report, roc_auc_score, accuracy_score
if __name__ == "__main__":
model_path = os.path.join("/opt/ml/processing/model", "model.tar.gz")
print("Extracting model from path: {}".format(model_path))
with tarfile.open(model_path) as tar:
tar.extractall(path=".")
print("Loading model")
model = joblib.load("model.joblib")
print("Loading test input data")
test_features_data = os.path.join("/opt/ml/processing/test", "test_features.csv")
test_labels_data = os.path.join("/opt/ml/processing/test", "test_labels.csv")
X_test = pd.read_csv(test_features_data, header=None)
y_test = pd.read_csv(test_labels_data, header=None)
predictions = model.predict(X_test)
print("Creating classification evaluation report")
report_dict = classification_report(y_test, predictions, output_dict=True)
report_dict["accuracy"] = accuracy_score(y_test, predictions)
report_dict["roc_auc"] = roc_auc_score(y_test, predictions)
print("Classification report:\n{}".format(report_dict))
evaluation_output_path = os.path.join("/opt/ml/processing/evaluation", "evaluation.json")
print("Saving classification report to {}".format(evaluation_output_path))
with open(evaluation_output_path, "w") as f:
f.write(json.dumps(report_dict))
MODELEVALUATION_SCRIPT_LOCATION = "evaluation.py"
input_evaluation_code = sagemaker_session.upload_data(
MODELEVALUATION_SCRIPT_LOCATION,
bucket=sagemaker_session.default_bucket(),
key_prefix="data/sklearn_processing/code",
)
```
Create the input and output objects for the model evaluation ProcessingStep.
```
preprocessed_testing_data = "{}/{}".format(output_data, "test_data")
model_data_s3_uri = "{}/{}/{}".format(s3_bucket_base_uri, training_job_name, "output/model.tar.gz")
output_model_evaluation_s3_uri = "{}/{}/{}".format(
s3_bucket_base_uri, training_job_name, "evaluation"
)
inputs_evaluation = [
ProcessingInput(
source=preprocessed_testing_data,
destination="/opt/ml/processing/test",
input_name="input-1",
),
ProcessingInput(
source=model_data_s3_uri,
destination="/opt/ml/processing/model",
input_name="input-2",
),
ProcessingInput(
source=input_evaluation_code,
destination="/opt/ml/processing/input/code",
input_name="code",
),
]
outputs_evaluation = [
ProcessingOutput(
source="/opt/ml/processing/evaluation",
destination=output_model_evaluation_s3_uri,
output_name="evaluation",
),
]
model_evaluation_processor = SKLearnProcessor(
framework_version="0.20.0",
role=role,
instance_type="ml.m5.xlarge",
instance_count=1,
max_runtime_in_seconds=1200,
)
processing_evaluation_step = ProcessingStep(
"SageMaker Processing Model Evaluation step",
processor=model_evaluation_processor,
job_name=execution_input["EvaluationProcessingJobName"],
inputs=inputs_evaluation,
outputs=outputs_evaluation,
container_entrypoint=["python3", "/opt/ml/processing/input/code/evaluation.py"],
)
```
Create a `Fail` state so that the workflow is reported as failed when any of its steps fail.
```
failed_state_sagemaker_processing_failure = stepfunctions.steps.states.Fail(
"ML Workflow failed", cause="SageMakerProcessingJobFailed"
)
```
#### Add Error Handling to the Workflow
We use a [Catch Block](https://aws-step-functions-data-science-sdk.readthedocs.io/en/stable/states.html#stepfunctions.steps.states.Catch) for error handling. If a Processing job step or the training step fails, the workflow transitions to the `Fail` state.
```
catch_state_processing = stepfunctions.steps.states.Catch(
error_equals=["States.TaskFailed"],
next_step=failed_state_sagemaker_processing_failure,
)
processing_step.add_catch(catch_state_processing)
processing_evaluation_step.add_catch(catch_state_processing)
training_step.add_catch(catch_state_processing)
```
## Create and Execute the `Workflow`
```
workflow_graph = Chain([processing_step, training_step, processing_evaluation_step])
branching_workflow = Workflow(
name="SageMakerProcessingWorkflow",
definition=workflow_graph,
role=workflow_execution_role,
)
branching_workflow.create()
# branching_workflow.update(workflow_graph)
# Execute workflow
execution = branching_workflow.execute(
inputs={
"PreprocessingJobName": preprocessing_job_name, # Each pre processing job (SageMaker processing job) requires a unique name,
"TrainingJobName": training_job_name, # Each Sagemaker Training job requires a unique name,
"EvaluationProcessingJobName": evaluation_job_name, # Each SageMaker processing job requires a unique name,
}
)
execution_output = execution.get_output(wait=True)
execution.render_progress()
```
### Inspect the Workflow Output
Retrieve `evaluation.json` from Amazon S3 and inspect it. It contains the model's evaluation report. Note: run the cell below only after the Step Functions workflow execution has completed (i.e., after `evaluation.json` has been written).
```
workflow_execution_output_json = execution.get_output(wait=True)
from sagemaker.s3 import S3Downloader
import json
evaluation_output_config = workflow_execution_output_json["ProcessingOutputConfig"]
for output in evaluation_output_config["Outputs"]:
if output["OutputName"] == "evaluation":
evaluation_s3_uri = "{}/{}".format(output["S3Output"]["S3Uri"], "evaluation.json")
break
evaluation_output = S3Downloader.read_file(evaluation_s3_uri)
evaluation_output_dict = json.loads(evaluation_output)
print(json.dumps(evaluation_output_dict, sort_keys=True, indent=4))
```
## Clean Up Resources
When you are done with this notebook, remember to delete the resources you no longer need. Uncomment and run the code below to delete the Step Functions workflow created in this notebook. Also delete the notebook instance and the S3 buckets holding the data if you no longer need them.
```
# branching_workflow.delete()
```
This notebook is designed to run in an IBM Watson Studio default runtime (NOT the Watson Studio Apache Spark runtime, because the default runtime with 1 vCPU is free of charge). Therefore, we install Apache Spark in local mode for test purposes only. Please don't use it in production.
In case you are facing issues, please read the following two documents first:
https://github.com/IBM/skillsnetwork/wiki/Environment-Setup
https://github.com/IBM/skillsnetwork/wiki/FAQ
Then, please feel free to ask:
https://coursera.org/learn/machine-learning-big-data-apache-spark/discussions/all
Please make sure to follow the guidelines before asking a question:
https://github.com/IBM/skillsnetwork/wiki/FAQ#im-feeling-lost-and-confused-please-help-me
If running outside Watson Studio, this should work as well. In case you are running in an Apache Spark context outside Watson Studio, please remove the Apache Spark setup in the first notebook cells.
```
from IPython.display import Markdown, display
def printmd(string):
display(Markdown('# <span style="color:red">'+string+'</span>'))
if ('sc' in locals() or 'sc' in globals()):
printmd('<<<<<!!!!! It seems that you are running in a IBM Watson Studio Apache Spark Notebook. Please run it in an IBM Watson Studio Default Runtime (without Apache Spark) !!!!!>>>>>')
!pip install pyspark==2.4.5
try:
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession
except ImportError as e:
printmd('<<<<<!!!!! Please restart your kernel after installing Apache Spark !!!!!>>>>>')
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession \
.builder \
.getOrCreate()
```
In case you want to learn how ETL is done, please run the following notebook first and update the file name below accordingly
https://github.com/IBM/coursera/blob/master/coursera_ml/a2_w1_s3_ETL.ipynb
```
# delete files from previous runs
!rm -f hmp.parquet*
# download the file containing the data in PARQUET format
!wget https://github.com/IBM/coursera/raw/master/hmp.parquet
# create a dataframe out of it
df = spark.read.parquet('hmp.parquet')
# register a corresponding query table
df.createOrReplaceTempView('df')
df_energy = spark.sql("""
select sqrt(sum(x*x)+sum(y*y)+sum(z*z)) as label, class from df group by class
""")
df_energy.createOrReplaceTempView('df_energy')
df_join = spark.sql('select * from df inner join df_energy on df.class=df_energy.class')
splits = df_join.randomSplit([0.8, 0.2])
df_train = splits[0]
df_test = splits[1]
df_train.count()
df_test.count()
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import MinMaxScaler
vectorAssembler = VectorAssembler(inputCols=["x","y","z"],
outputCol="features")
normalizer = MinMaxScaler(inputCol="features", outputCol="features_norm")
from pyspark.ml.regression import LinearRegression
lr = LinearRegression(maxIter=10, regParam=0.3, elasticNetParam=0.8)
from pyspark.ml import Pipeline
pipeline = Pipeline(stages=[vectorAssembler, normalizer,lr])
model = pipeline.fit(df_train)
model.stages[2].summary.r2
# evaluate on the held-out test set instead of refitting on it
from pyspark.ml.evaluation import RegressionEvaluator
prediction = model.transform(df_test)
RegressionEvaluator(labelCol="label", predictionCol="prediction", metricName="r2").evaluate(prediction)
```
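The `df_energy` query above aggregates one overall-energy value per activity class. As a plain-Python sketch of the same formula, independent of Spark:

```python
import math

def total_energy(rows):
    # mirrors the Spark SQL above:
    # sqrt(sum(x*x) + sum(y*y) + sum(z*z)) over all (x, y, z) rows
    return math.sqrt(sum(x * x for x, _, _ in rows) +
                     sum(y * y for _, y, _ in rows) +
                     sum(z * z for _, _, z in rows))

print(total_energy([(1.0, 2.0, 2.0)]))  # sqrt(1 + 4 + 4) = 3.0
```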
# Visual Image Search
_**Retrieve visually similar images using a convolutional neural network and an Elasticsearch k-nearest neighbors (KNN) index**_
## Contents
1. [Background](#Background)
1. [Download the Zalando Research Data](#Setup)
1. [Prepare the TensorFlow Model](#TensorFlow-Model-Preparation)
1. [Host the Model with SageMaker](#Hosting-Model)
1. [Create a KNN Index in Elasticsearch](#ES-KNN)
1. [Evaluate the Search Results](#Searching-with-ES-k-NN)
1. [Deploy a Full-Stack Visual Search Application](#)
1. [Extensions](#Extensions)
## Background
In this notebook, we will build the core components of a visual image search application. With visual image search, you can find images similar to a photo you provide, without having to describe what you are looking for by voice or text.
One of the core components of visual image search is a convolutional neural network (CNN) model that generates "feature vectors" representing both the query image and the reference item images it is compared against. The reference item feature vectors are typically generated offline and must be stored in some kind of database so they can be searched efficiently. For small reference datasets, a brute-force search comparing the query against every reference item is possible. Brute-force search over large datasets, however, is extremely slow and impractical.
To search for visually similar images efficiently, we will use Amazon SageMaker to generate feature vectors from images, together with the KNN algorithm in Amazon Elasticsearch Service. KNN for Amazon Elasticsearch Service lets you search for points in a vector space and find the "nearest neighbors" of those points by Euclidean distance or cosine similarity (the default is Euclidean distance). Use cases include recommendations (for example, an "other songs you might like" feature in a music application), image recognition, and fraud detection.
We will build visual image search in the following steps: after some initial setup, we will prepare a model with TensorFlow to generate feature vectors, then generate feature vectors for the fashion images in *__feidegger__*, a *__zalandoresearch__* dataset. Those feature vectors will be imported into an Amazon Elasticsearch KNN index. Finally, we will test the image query functionality with a few images and visualize the results.
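To make the brute-force baseline mentioned above concrete, here is a minimal sketch of an exhaustive nearest-neighbor scan (the helper name `brute_force_knn` is ours; this is exactly the O(N) work per query that a KNN index avoids at scale):

```python
import math

def brute_force_knn(query, references, k=3):
    # Euclidean distance from the query vector to every reference
    # vector, then keep the indices of the k closest references
    dists = [
        (math.dist(query, ref), idx)  # math.dist requires Python 3.8+
        for idx, ref in enumerate(references)
    ]
    return [idx for _, idx in sorted(dists)[:k]]

refs = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
print(brute_force_knn([0.9, 1.1], refs, k=2))  # prints [1, 0]
```

With 2048-dimensional feature vectors and thousands (or millions) of reference images, this linear scan per query is what makes an index-backed KNN search necessary.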
```
#Install tqdm to have progress bar
!pip install tqdm
#install necessary pkg to make connection with elasticsearch domain
!pip install elasticsearch
!pip install requests
!pip install requests-aws4auth
# Use SageMaker version 1.72.1
!pip install sagemaker==1.72.1
import boto3
import re
import sagemaker
from sagemaker import get_execution_role
role = get_execution_role()
s3_resource = boto3.resource("s3")
s3 = boto3.client('s3')
cfn = boto3.client('cloudformation')
def get_cfn_outputs(stackname):
outputs = {}
for output in cfn.describe_stacks(StackName=stackname)['Stacks'][0]['Outputs']:
outputs[output['OutputKey']] = output['OutputValue']
return outputs
## Setup variables to use for the rest of the demo
cloudformation_stack_name = "vis-search"
outputs = get_cfn_outputs(cloudformation_stack_name)
bucket = outputs['s3BucketTraining']
es_host = outputs['esHostName']
outputs
```
### Download the Zalando Research Data
The dataset contains 8,732 high-resolution images, each showing a garment from the Zalando store on a white background.
**Download the Zalando Research data**: the original data comes from: https://github.com/zalandoresearch/feidegger
**Citation:** <br>
*@inproceedings{lefakis2018feidegger,* <br>
*title={FEIDEGGER: A Multi-modal Corpus of Fashion Images and Descriptions in German},* <br>
*author={Lefakis, Leonidas and Akbik, Alan and Vollgraf, Roland},* <br>
*booktitle = {{LREC} 2018, 11th Language Resources and Evaluation Conference},* <br>
*year = {2018}* <br>
*}*
```
## Data Preparation
use_small_data=True
images_path = 'data/feidegger/fashion'
if(use_small_data):
!wget https://us-east-1-binc.s3.amazonaws.com/ai-day-2021/visual-search/image_data_2k.tgz
!tar -xf image_data_2k.tgz
else:
!wget https://us-east-1-binc.s3.amazonaws.com/ai-day-2021/visual-search/image_data.tgz
!tar -xf image_data.tgz
# Uploading dataset to S3
!aws s3 sync data s3://$bucket/data/ --quiet && echo upload to $bucket/data finished
```
## Prepare the TensorFlow Model
We will prepare a model with the TensorFlow backend to "featurize" images into feature vectors. TensorFlow has a low-level Module API and the high-level Keras API.
We will start from a pre-trained model, avoiding the time and cost of training a model from scratch. As the first step in preparing the model, we will therefore import a pre-trained model from Keras Applications. Researchers have experimented with various pre-trained CNN architectures with different numbers of layers and found several good choices.
In this notebook we will choose a model based on the ResNet architecture, a commonly used option. Among the available depths, from 18 to 152 layers, we will use 50. This is also a common choice that balances the expressiveness of the resulting feature vectors (embeddings) against computational efficiency (fewer layers means more efficiency, but less expressiveness).
```
import os
import json
import time
import tensorflow as tf
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
import sagemaker
from PIL import Image
from sagemaker.tensorflow import TensorFlow
# Set the channel first for better performance
from tensorflow.keras import backend
backend.set_image_data_format('channels_first')
print(backend.image_data_format())
```
Now we will import a ResNet50 model, trained on the ImageNet dataset, without the actual classifier, so that it extracts features. More specifically, we will use it to generate a row vector of floats that serves as an "embedding", or representation, of the image's features. We will also save the model in the *SavedModel* format under **export/Servo/1** so it can be served through the SageMaker TensorFlow Serving API.
```
#Import Resnet50 model
model = tf.keras.applications.ResNet50(weights='imagenet', include_top=False,input_shape=(3, 224, 224),pooling='avg')
model.summary()
#Creating the directory strcture
dirName = 'export/Servo/1'
if not os.path.exists(dirName):
os.makedirs(dirName)
print("Directory " , dirName , " Created ")
else:
print("Directory " , dirName , " already exists")
#Save the model in SavedModel format
model.save('./export/Servo/1/', save_format='tf')
#Check the model Signature
!saved_model_cli show --dir ./export/Servo/1/ --tag_set serve --signature_def serving_default
```
## Host the Model with SageMaker
After saving the feature extractor model, we will deploy it with the SageMaker TensorFlow Serving API. TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It makes it easy to deploy new algorithms and experiments while keeping the same server architecture and APIs. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can easily be extended to serve other types of models and data. We will define **inference.py** to customize the input data for the TensorFlow Serving API, and add **requirements.txt** to the container to pull in extra libraries.
```
#check the actual content of inference.py
!pygmentize src/inference.py
import tarfile
#zip the model .gz format
model_version = '1'
export_dir = 'export/Servo/' + model_version
with tarfile.open('model.tar.gz', mode='w:gz') as archive:
archive.add('export', recursive=True)
#Upload the model to S3
sagemaker_session = sagemaker.Session()
inputs = sagemaker_session.upload_data(path='model.tar.gz', key_prefix='model')
inputs
```
After uploading the model to S3, we will host it with the TensorFlow Serving container. The cell below deploys three `ml.c5.xlarge` instances; you may need to open a support case to increase your service quota for SageMaker hosting instance types. We will use this endpoint to generate features and import them into Elasticsearch. You can also choose a smaller instance, such as `ml.m4.xlarge`, to save cost.
```
#Deploy the model in Sagemaker Endpoint. This process will take ~10 min.
from sagemaker.tensorflow.serving import Model
sagemaker_model = Model(entry_point='inference.py', model_data = 's3://' + sagemaker_session.default_bucket() + '/model/model.tar.gz',
role = role, framework_version='2.1.0', source_dir='./src' )
predictor = sagemaker_model.deploy(initial_instance_count=3, instance_type='ml.c5.xlarge')
# get the features for a sample image
payload = s3.get_object(Bucket=bucket,Key='data/feidegger/fashion/0VB21C000-A11@12.1.jpg')['Body'].read()
predictor.content_type = 'application/x-image'
predictor.serializer = None
features = predictor.predict(payload)['predictions'][0]
features
```
## Create a KNN Index in Elasticsearch
KNN for Amazon Elasticsearch Service lets you search for points in a vector space and find the "nearest neighbors" of those points by Euclidean distance or cosine similarity (the default is Euclidean distance). Use cases include recommendations (for example, an "other songs you might like" feature in a music application), image recognition, and fraud detection.
KNN requires Elasticsearch 7.1 or later. Full documentation of the Elasticsearch feature, including descriptions of settings and statistics, as well as background information on the k-nearest neighbors algorithm, is available in the Open Distro for Elasticsearch documentation.
In this step, we will take all of the features of the Zalando images and import them into an Elasticsearch 7.4 domain.
```
#Define some utility function
#return all s3 keys
def get_all_s3_keys(bucket):
"""Get a list of all keys in an S3 bucket."""
keys = []
kwargs = {'Bucket': bucket}
while True:
resp = s3.list_objects_v2(**kwargs)
for obj in resp['Contents']:
keys.append('s3://' + bucket + '/' + obj['Key'])
try:
kwargs['ContinuationToken'] = resp['NextContinuationToken']
except KeyError:
break
return keys
# get all the zalando images keys from the bucket make a list
s3_uris = get_all_s3_keys(bucket)
len(s3_uris)
# define a function to extract image features
from time import sleep
sm_client = boto3.client('sagemaker-runtime')
ENDPOINT_NAME = predictor.endpoint
def get_predictions(payload):
return sm_client.invoke_endpoint(EndpointName=ENDPOINT_NAME,
ContentType='application/x-image',
Body=payload)
def extract_features(s3_uri):
key = s3_uri.replace(f's3://{bucket}/', '')
payload = s3.get_object(Bucket=bucket,Key=key)['Body'].read()
try:
response = get_predictions(payload)
except:
sleep(0.1)
response = get_predictions(payload)
del payload
response_body = json.loads((response['Body'].read()))
feature_lst = response_body['predictions'][0]
return s3_uri, feature_lst
# This process cell will take approximately 24-25 minutes on a t3.medium notebook instance
# with 3 ml.c5.xlarge SageMaker Hosted Endpoint instances
from multiprocessing import cpu_count
from tqdm.contrib.concurrent import process_map
workers = 4 * cpu_count()
result = process_map(extract_features, s3_uris, max_workers=workers)
# setting up the Elasticsearch connection
from elasticsearch import Elasticsearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
region = 'us-east-1' # e.g. us-east-1
service = 'es'
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, region, service, session_token=credentials.token)
es = Elasticsearch(
hosts = [{'host': es_host, 'port': 443}],
http_auth = awsauth,
use_ssl = True,
verify_certs = True,
connection_class = RequestsHttpConnection
)
#Define KNN Elasticsearch index maping
knn_index = {
"settings": {
"index.knn": True
},
"mappings": {
"properties": {
"zalando_img_vector": {
"type": "knn_vector",
"dimension": 2048
}
}
}
}
#Creating the Elasticsearch index
es.indices.create(index="idx_zalando",body=knn_index,ignore=400)
es.indices.get(index="idx_zalando")
# defining a function to import the feature vectors corrosponds to each S3 URI into Elasticsearch KNN index
# This process will take around ~3 min.
def es_import(i):
es.index(index='idx_zalando',
body={"zalando_img_vector": i[1],
"image": i[0]}
)
process_map(es_import, result, max_workers=workers)
```
## Evaluate the Search Results
In this step, we will query Elasticsearch using the SageMaker SDK and the Boto3 SDK to retrieve the nearest neighbors. It is worth mentioning that the **zalando** dataset is quite similar to the ImageNet dataset. If you have a domain-specific problem, you will need to fine-tune a pre-trained feature extractor model (such as VGG, ResNet, Xception, MobileNet, etc.) on that dataset and create a new feature extractor model.
```
#define display_image function
def display_image(bucket, key):
response = s3.get_object(Bucket=bucket,Key=key)['Body']
img = Image.open(response)
return display(img)
import requests
import random
from PIL import Image
import io
urls = []
# yellow pattern dess
urls.append('https://fastly.hautelookcdn.com/products/D7242MNR/large/13494318.jpg')
# T shirt kind dress
urls.append('https://fastly.hautelookcdn.com/products/M2241/large/15658772.jpg')
#Dotted pattern dress
urls.append('https://fastly.hautelookcdn.com/products/19463M/large/14537545.jpg')
img_bytes = requests.get(random.choice(urls)).content
query_img = Image.open(io.BytesIO(img_bytes))
query_img
```
##### SageMaker SDK Approach
```
#SageMaker SDK approach
predictor.content_type = 'application/x-image'
predictor.serializer = None
features = predictor.predict(img_bytes)['predictions'][0]
import json
k = 5
idx_name = 'idx_zalando'
res = es.search(request_timeout=30, index=idx_name,
body={'size': k,
'query': {'knn': {'zalando_img_vector': {'vector': features, 'k': k}}}})
for i in range(k):
key = res['hits']['hits'][i]['_source']['image']
key = key.replace(f's3://{bucket}/','')
img = display_image(bucket,key)
```
##### Boto3 Approach
```
client = boto3.client('sagemaker-runtime')
ENDPOINT_NAME = predictor.endpoint # our endpoint name
response = client.invoke_endpoint(EndpointName=ENDPOINT_NAME,
ContentType='application/x-image',
Body=img_bytes)
response_body = json.loads((response['Body'].read()))
features = response_body['predictions'][0]
import json
k = 5
idx_name = 'idx_zalando'
res = es.search(request_timeout=30, index=idx_name,
body={'size': k,
'query': {'knn': {'zalando_img_vector': {'vector': features, 'k': k}}}})
for i in range(k):
key = res['hits']['hits'][i]['_source']['image']
key = key.replace(f's3://{bucket}/','')
img = display_image(bucket, key)
```
# Deploying a full-stack visual search application
```
s3_resource.Object(bucket, 'backend/template.yaml').upload_file('./backend/template.yaml', ExtraArgs={'ACL':'public-read'})
sam_template_url = f'https://{bucket}.s3.amazonaws.com/backend/template.yaml'
# Generate the CloudFormation Quick Create Link
print("Click the URL below to create the backend API for visual search:\n")
print((
'https://console.aws.amazon.com/cloudformation/home?region=us-east-1#/stacks/create/review'
f'?templateURL={sam_template_url}'
'&stackName=vis-search-api'
f'&param_BucketName={outputs["s3BucketTraining"]}'
f'&param_DomainName={outputs["esDomainName"]}'
f'&param_ElasticSearchURL={outputs["esHostName"]}'
f'&param_SagemakerEndpoint={predictor.endpoint}'
))
```
Now that you have a working Amazon SageMaker endpoint for extracting image features and a KNN index on Elasticsearch, you are ready to build a real-world, full-stack, ML-powered web application. The SAM template you just created will deploy an Amazon API Gateway and an AWS Lambda function. The Lambda function runs your code in response to HTTP requests sent to API Gateway.
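The actual handler lives in `backend/lambda/app.py` (printed in the next cell). As an illustrative sketch only, not the real app.py code: a Lambda proxy integration behind API Gateway must return a dict with a status code, headers, and a JSON string body, and the CORS header is what lets the S3-hosted frontend call the API from the browser:

```python
import json

def make_api_response(payload, status=200):
    """Shape a result dict as an API Gateway Lambda proxy response."""
    return {
        "statusCode": status,
        "headers": {
            # Allow the S3-hosted frontend (a different origin) to read the response
            "Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json",
        },
        "body": json.dumps(payload),
    }

resp = make_api_response({"images": ["s3://mybucket/img1.jpg"]})
```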
```
# Review the content of the Lambda function code.
!pygmentize backend/lambda/app.py
```
### Once the CloudFormation stack shows CREATE_COMPLETE, continue with the cells below:
```
# Save the REST endpoint for the search API to a config file, to be used by the frontend build
import json
api_endpoint = get_cfn_outputs('vis-search-api')['ImageSimilarityApi']
with open('./frontend/src/config/config.json', 'w') as outfile:
json.dump({'apiEndpoint': api_endpoint}, outfile)
```
### Step 2: Deploy the frontend service
```
# add NPM to the path so we can assemble the web frontend from our notebook code
from os import environ
npm_path = ':/home/ec2-user/anaconda3/envs/JupyterSystemEnv/bin'
if npm_path not in environ['PATH']:
ADD_NPM_PATH = environ['PATH']
ADD_NPM_PATH = ADD_NPM_PATH + npm_path
else:
ADD_NPM_PATH = environ['PATH']
%set_env PATH=$ADD_NPM_PATH
%cd ./frontend/
!npm install
!npm run-script build
hosting_bucket = f"s3://{outputs['s3BucketHostingBucketName']}"
!aws s3 sync ./build/ $hosting_bucket --acl public-read
```
### Step 3: Browse your frontend service and upload an image
```
print('Click the URL below:\n')
print(outputs['S3BucketSecureURL'] + '/index.html')
```
You should see the following page:

On the site, try pasting the following URL into the URL text field.
`https://i4.ztat.net/large/VE/12/1C/14/8K/12/VE121C148-K12@10.jpg`
## Extensions
We used a pretrained ResNet50 model trained on the ImageNet dataset. Depending on your use case, you can fine-tune any pretrained model, such as VGG, Inception, or MobileNet, with your own dataset and host the model in Amazon SageMaker.
You can also use an Amazon SageMaker batch transform job to extract features from large numbers of images stored in S3, and then use AWS Glue to import that data into an Elasticsearch domain.
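A batch transform job could be launched through the low-level SageMaker API; the sketch below assembles the request dict for `boto3`'s `create_transform_job` call. The model name, bucket paths, and instance type here are placeholders, not values from this notebook:

```python
def batch_transform_request(model_name, input_s3, output_s3,
                            instance_type="ml.c5.xlarge"):
    """Assemble the request for boto3's sagemaker create_transform_job,
    reading images from an S3 prefix and writing features back to S3."""
    return {
        "TransformJobName": f"{model_name}-feature-extraction",
        "ModelName": model_name,
        "TransformInput": {
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix", "S3Uri": input_s3}},
            "ContentType": "application/x-image",
        },
        "TransformOutput": {"S3OutputPath": output_s3},
        "TransformResources": {"InstanceType": instance_type,
                               "InstanceCount": 1},
    }

request = batch_transform_request("feature-extractor",
                                  "s3://my-bucket/images/",
                                  "s3://my-bucket/features/")
# boto3.client("sagemaker").create_transform_job(**request)
```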
### Cleaning up resources
Make sure you stop the notebook instance, delete the Amazon SageMaker endpoint, and delete the Elasticsearch domain to avoid incurring additional charges.
```
# Delete the endpoint
predictor.delete_endpoint()
# Empty S3 Contents
training_bucket_resource = s3_resource.Bucket(bucket)
training_bucket_resource.objects.all().delete()
hosting_bucket_resource = s3_resource.Bucket(outputs['s3BucketHostingBucketName'])
hosting_bucket_resource.objects.all().delete()
```
```
import os
import neptune
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from tqdm import tqdm as tqdm
from scipy.stats import ttest_ind as ttest
from scipy.stats import kendalltau,spearmanr
import scipy
import xarray as xr
from scipy.spatial.distance import pdist,squareform,cdist
from sklearn.preprocessing import StandardScaler
# from tensorflow.python import keras as keras
from keras.models import Model
from src.results.experiments import _DateExperimentLoader
from src.results.utils import raw_to_xr, dprime
from src.results.neptune import get_model_files, load_models, load_assemblies, load_params, load_properties,prep_assemblies,NeptuneExperimentRun,generate_convnet_encoders
from src.results.dicarlo import get_dicarlo_su
from src.data_loader import Shifted_Data_Loader
from src.data_generator import ShiftedDataBatcher
from src.rcca import CCA
import brainscore
from brainscore.assemblies import walk_coords,split_assembly
from brainscore.assemblies import split_assembly
# from brainscore.metrics import Score
from brainio_base.assemblies import DataAssembly
def set_style():
# This sets reasonable defaults for font size for
# a figure that will go in a paper
sns.set_context("talk")
# Set the font to be serif, rather than sans
sns.set(font='serif')
# Make the background white, and specify the
# specific font family
sns.set_style("white", {
"font.family": "serif",
"font.serif": ["Georgia","Times New Roman", "Palatino", "serif"]
})
os.environ['NEPTUNE_API_TOKEN']="eyJhcGlfYWRkcmVzcyI6Imh0dHBzOi8vdWkubmVwdHVuZS5tbCIsImFwaV9rZXkiOiI3ZWExMTlmYS02ZTE2LTQ4ZTktOGMxMi0wMDJiZTljOWYyNDUifQ=="
neptune.init('elijahc/DuplexAE')
neptune.set_project('elijahc/DuplexAE')
proj_root = '/home/elijahc/projects/vae'
def load_config(exps):
props = load_properties(exps)
params = load_params(exps)
ids = [e.id for e in exps]
for eid,pr,pa in zip(ids,props,params):
out = {'id':eid}
out.update(pr)
out.update(pa)
out['exp_dir']=os.path.join(proj_root,pr['dir'])
yield out
conv_eids = [
'DPX-29',
'DPX-30',
]
dense_eids = [
'DPX-10',
'DPX-16',
# 'DPX-27',
]
# eids = conv_eids+dense_eids
conv_exps = neptune.project.get_experiments(id=conv_eids)
dense_exps = neptune.project.get_experiments(id=dense_eids)
exps = np.array(conv_exps+dense_exps)
s_df = pd.DataFrame(list(load_config(exps)))
s_df.head()
run = NeptuneExperimentRun(proj_root,conv_exps[0])
def load_rdm(file_paths):
for fp in file_paths:
x = xr.open_dataarray(fp)
# print(x)
p_idxs = ['image_id','rxy','category_name','object_name']
yield x.set_index(image_1=[p+'_1' for p in p_idxs],image_2=[p+'_2' for p in p_idxs])
def process_dicarlo(assembly,avg_repetition=True, variation=[0, 3, 6], tasks=['ty','tz','rxy']):
stimulus_set = assembly.attrs['stimulus_set']
stimulus_set['dy_deg'] = stimulus_set.tz*stimulus_set.degrees
stimulus_set['dx_deg'] = stimulus_set.ty*stimulus_set.degrees
stimulus_set['dy_px'] = stimulus_set.dy_deg*32
stimulus_set['dx_px'] = stimulus_set.dx_deg*32
assembly.attrs['stimulus_set'] = stimulus_set
groups = ['category_name', 'object_name', 'image_id']+tasks
if not avg_repetition:
groups.append('repetition')
data = assembly.multi_groupby(groups) # (2)
data = data.mean(dim='presentation')
data = data.squeeze('time_bin') # (3)
# data.attrs['stimulus_set'] = stimulus_set.query('variation == {}'.format(variation))
data = data.T
data = data[stimulus_set.variation.isin(variation),:]
return data
from tqdm import trange
def gen_conv_assemblies(encodings,depths,stim_set,n=5):
enc = {k:encodings[k] for k in ['pixel','y_enc','z_enc']}
for i in np.arange(n):
enc.update({k:encodings[k][:,:,i] for k in ['conv_4','conv_3','conv_2','conv_1']})
yield raw_to_xr(enc,depths,stim_set)
# stimulus_set = neural_data.attrs['stimulus_set']
# stimulus_set = pd.read_csv('../data/dicarlo_images/stimulus_set.csv')
# stimulus_set['dy_deg'] = stimulus_set.tz*stimulus_set.degrees
# stimulus_set['dx_deg'] = stimulus_set.ty*stimulus_set.degrees
# stimulus_set['dy'] = stimulus_set.dy_deg*32
# stimulus_set['dx'] = stimulus_set.dx_deg*32
# stimulus_set.to_csv('../data/dicarlo_images/stimulus_set.csv',index=False)
# stimulus_set
# stimulus_set.to_csv('../data/dicarlo_images/stimulus_set.csv',index=False)
# sm_imgs = np.load('../data/dicarlo_images/sm_imgs_56x56.npy')
# sm_imgs = np.expand_dims(sm_imgs,-1)
# slug = [(dx,dy,lab,float(rxy)) for dx,dy,rxy,lab in zip(stimulus_set.dx_px.values,stimulus_set.dy_px.values,stimulus_set.rxy.values,stimulus_set.category_name.values)]
def is_iterable(obj):
try:
iter(obj)
return True
except TypeError:
print(obj, 'is not iterable')
return False
from sklearn.metrics.pairwise import cosine_similarity
from scipy.stats import pearsonr,pearson3
def dicarlo_rdm(data, stimulus_set, region=['V4','IT'],sortby='category_name', variation=[0,3,6], metric='correlation',n_sample=150):
# if region is not None:
# data = data.sel(region=region)
var_lookup = stimulus_set[stimulus_set.variation.isin(variation)].image_id.values
data = data.where(data.image_id.isin(var_lookup),drop=True)
print(data.shape)
out_dict = {'region':[],'variation':[],'rdm':[]}
xrs = []
for reg in region:
# for v in variation:
sub_dat = data.sel(region=reg)
if sortby is not None:
sub_dat = sub_dat.sortby(sortby)
if sub_dat.shape[1] > n_sample:
s_idxs = np.random.choice(np.arange(sub_dat.shape[1]),size=n_sample,replace=False)
else:
n_sample = int(0.8 * sub_dat.shape[1])
s_idxs = np.random.choice(np.arange(sub_dat.shape[1]),size=n_sample,replace=False)
if metric == 'cosine_similarity':
rdm = 1-cosine_similarity(sub_dat[:,s_idxs])
else:
# print(sub_dat.values.shape)
num_images = sub_dat.values.shape[0]
# print(sub_dat[:,s_idxs].values.shape)
rdm = squareform(pdist(sub_dat[:,s_idxs].values,metric=metric))
# rdm = np.empty(shape=(num_images,num_images))
# for i in trange(num_images):
# for j in np.arange(num_images):
# r,p = pearsonr(sub_dat.values[i],sub_dat.values[j])
out_dict['region'].append(reg)
out_dict['rdm'].append(rdm)
# out_dict['image_id'].append(sub_dat.image_id.values)
p = sub_dat.presentation.to_index()
xrs.append(xr.DataArray(rdm,
coords={
'image_1':p.set_names(tuple([n+'_1' for n in p.names])),
'image_2':p.set_names(tuple([n+'_2' for n in p.names])),
# 'variation':v,
'region':reg,
},
dims=('image_1','image_2'),
))
return xr.concat(xrs,'all')
def plot_rdm(data,sortby=None,figsize=(4,4), ax=None):
# Expects da object of shape(N,N)
if sortby is not None:
data = data.sortby([sortby+'_1',sortby+'_2'])
if ax is None:
fig,ax = plt.subplots(1,1,figsize=figsize)
labels = data[sortby+'_1'].values
sns.heatmap(data,ax=ax)
yticks = [int(l._text) for l in list(ax.get_yticklabels())]
xticks = [int(l._text) for l in list(ax.get_xticklabels())]
ax.set_yticklabels(labels[yticks])
ax.set_xticklabels(labels[xticks])
ax.set_title(np.unique(data.region.values)[0])
if ax is None:
return fig,ax
else:
return ax
neural_data = brainscore.get_assembly(name="dicarlo.Majaj2015")
neural_data.load()
stimulus_set = neural_data.attrs['stimulus_set']
# # stimulus_set.to_csv('../data/dicarlo_images/stimulus_set.csv',index=False)
neural_data = process_dicarlo(neural_data)
sm_imgs = np.load('../data/dicarlo_images/sm_imgs_56x56.npy')
ids0 = stimulus_set[stimulus_set.variation.values==0].image_id.values
ids3 = stimulus_set[stimulus_set.variation.values==3].image_id.values
sm_ims = list(zip(ids3,sm_imgs[stimulus_set.variation.values==3]))
# it_resp = neural_data.sel(region='IT')
# it_resp = it_resp[it_resp.image_id.isin(ids3)]
# # itp_df = it_resp.presentation.to_dataframe().reset_index()
# # idxs3 = itp_df.image_id.isin(ids3)
# # sm3 = sm_imgs[]
# scaler = StandardScaler()
# scaled_sm_imgs = scaler.fit_transform(sm_imgs.reshape(5760,56*56)).reshape(5760,56,56)
Xm,Xs = (sm_imgs.mean(),sm_imgs.std())
scaled_sm_imgs = np.clip((sm_imgs-Xm)/Xs,-1,1)
plt.imshow(scaled_sm_imgs[2],cmap='gray')
plt.colorbar()
plt.hist(sm_imgs.flatten())
DL = ShiftedDataBatcher('fashion_mnist',rotation=None,flatten=False, bg='natural')
batch = next(DL.gen_test_batches(num_batches=10,batch_size=512,bg='natural'))
batch[0].shape
plt.imshow(batch[0][25].reshape(56,56),cmap='gray')
plt.hist(scaled_sm_imgs.flatten())
plt.yscale('log')
plt.hist(batch[0].flatten())
plt.yscale('log')
sns.set_context('talk')
g = sns.FacetGrid(col='region',row='model',data=conv_cca,height=5)
g.map(sns.stripplot,'layer','pearsonr')
g.fig.autofmt_xdate(rotation=45)
def mod_rdm(imgs,stim):
iid = stim.image_id.values
on = stim.object_name.values
conv_rdm=[]
imgs = np.expand_dims(imgs,-1)
slug = [(dx,dy,lab,float(rxy)) for dx,dy,rxy,lab in zip(stim.dx_px.values,stim.dy_px.values,stim.rxy.values,stim.category_name.values)]
for encodings, depths, stim_set in prep_assemblies(proj_root,conv_exps,test_data=imgs,slug=slug,image_id=iid,object_name=on,n_units=300):
stim_set['variation']=stim.variation
xrs = gen_conv_assemblies(encodings,depths,stim_set,n=3)
drdm = dicarlo_rdm(next(xrs),stim_set,region=['conv_1','conv_2','conv_3','conv_4','y_enc','z_enc',],variation=[3],metric='correlation')
conv_rdm.append(drdm)
return conv_rdm
conv_rdm = mod_rdm(scaled_sm_imgs,stimulus_set)
cdf = pd.DataFrame(list(load_config(conv_exps)))
for e,crdm in zip(cdf.exp_dir.values,conv_rdm):
da = crdm.reset_index(['image_1','image_2'])
with open(os.path.join(e,'dicarlo_rdm_pearson.nc'), 'wb') as fp:
da.to_netcdf(fp)
fig,axs = plt.subplots(1,2, figsize=(10,5))
for i,ax in enumerate(axs):
plot_rdm(conv_rdm[i][4], figsize=(5,5), sortby='category_name', ax=ax);
# monkey_rdm = dicarlo_rdm(neural_data, stimulus_set,variation=[3],metric='correlation',)
# da = monkey_rdm.reset_index(['image_1','image_2'],)
# with open(os.path.join('../data/dicarlo_images','monkey_rdm_pearson.nc'), 'wb') as fp:
# da.to_netcdf(fp)
# # conv_assemblies = load_assemblies(proj_root,conv_exps)
# models = load_models(proj_root,conv_exps[0:1],load_weights=False)
# mod = next(models)
# mod.layers[1].summary()
# fig,axs = plot_rdm(monkey_rdm[1], figsize=(5,5), sortby='category_name');
# fig.tight_layout()
# # plt.tight_layout()
neural_data.sortby('category_name').image_id.values
fig.savefig('../figures/pub/IT_rdm.png', dpi=150)
monkey_fp = '../data/dicarlo_images/monkey_rdm_pearson.nc'
monkey_rdm = next(load_rdm([monkey_fp]))
# subset = monkey_rdm.image_id_1.isin(ids3)
it_rdm = monkey_rdm[1]
v4_rdm = monkey_rdm[0]
# cdf = pd.DataFrame(list(load_config(conv_exps)))
# mod_rdm_fps = pd.DataFrame(list(load_config(conv_exps))).exp_dir.values
# mod_rdm_fps = [fp+'/dicarlo_rdm_pearson.nc' for fp in mod_rdm_fps]
# xrs = list(load_rdm(mod_rdm_fps))
# sortby='image_id'
# sorter = [sortby+'_1',sortby+'_2']
# xrs[0][0].sortby(sorter).image_id_1.values==monkey_rdm[0].sortby(sorter).image_id_1.values
def calc_model_kt(m_rdm, exps, sortby='image_id'):
cdf = pd.DataFrame(list(load_config(exps)))
mod_rdm_fps = pd.DataFrame(list(load_config(exps))).exp_dir.values
mod_rdm_fps = [fp+'/dicarlo_rdm_pearson.nc' for fp in mod_rdm_fps]
# print(mod_rdm_fps)
xrs = list(load_rdm(mod_rdm_fps))
# print(xrs)
kt = []
sorter = [sortby+'_1',sortby+'_2']
for i in np.arange(len(mod_rdm_fps)):
for j in trange(5):
mod_rdm = xrs[i][j+1]
for k,reg in enumerate(['V4','IT']):
# print(k,reg)
ru = m_rdm[k].sortby(sorter)
rv = mod_rdm.sortby(sorter)
# print(ru)
# print()
# print(rv)
ktp = kendalltau(ru,rv)
kt.append({'kendalltau':ktp[0],'p-value':ktp[1],'layer':np.unique(rv.region.values)[0],'encoder_arch':cdf.encoder_arch.values[i],'recon_weight':cdf.recon_weight.values[i],'region':reg})
kt_df = pd.DataFrame.from_records(kt)
return kt_df
kt_df = calc_model_kt(monkey_rdm,conv_exps,sortby='category_name')
kt_df.head()
sns.set_context('paper')
fig, axs = plt.subplots(2,2,figsize=(5,5), sharey=True,sharex=True)
for i,reg in enumerate(['V4','IT']):
for j,recon in enumerate([0.0,1.0]):
ax=axs[i,j]
sns.barplot(x='layer',y='kendalltau',
data=kt_df.query('region == "{}" and recon_weight == {}'.format(reg,recon)),
ax=ax, palette='magma')
ax.set_ylabel(r'Kendall $\tau$')
ax.set_title('{} | recon={}'.format(reg,recon))
# ax.set_title(reg)
for ax,recon in zip(axs[:,1].ravel(),[0,1]):
# ax.set_ylabel('recon = {}'.format(recon))
pass
for ax in axs[1]:
xlab = ax.get_xticklabels()
ax.set_xticklabels(xlab,rotation=90)
plt.tight_layout()
fig.savefig('../figures/pub/kendalltau.pdf', dpi=300)
```
```
import openpyxl as opxl
import pandas as pd
import re
import numpy as np
import unicodedata
from IPython.display import display, HTML
def convert_to_lower_case(data):
if type(data) is dict:
for k, v in data.items():
if type(v) is str:
data[k] = v.lower()
elif type(v) is list:
data[k] = [x.lower() for x in v]
elif type(v) is dict:
data[k] = convert_to_lower_case(v)
return data
files_to_read = [
{
'output_file' : '2014-15_gems_jade', \
'year': '2014-2015', \
'xls_file' : 'Annex 18 - Reconciliation sheets Gems & Jade 14-15.xlsx', \
'name_cols': 'B:D', \
'data_cols' : 'A:M', \
'data_skip_rows' : 4, \
'stop_at' : '1000'
},
{
'output_file' : '2014-15_oil_gas', \
'year': '2014-2015', \
'xls_file' : 'Annex 18 - Reconciliation sheets Oil & Gas 14-15.xlsx', \
'name_cols': 'B:E', \
'data_cols' : 'A:M', \
'data_skip_rows' : 5, \
'stop_at' : '1000'
},
{
'output_file' : '2014-15_other_minerals', \
'year': '2014-2015', \
'xls_file' : 'Annex 18 - Reconciliation sheets Other minerals 14-15.xlsx', \
'name_cols': 'B:E', \
'data_cols' : 'A:M', \
'data_skip_rows' : 5, \
'stop_at' : '1000'
},
{
'output_file' : '2015-16_gems_jade', \
'year': '2015-2016', \
'xls_file' : 'Annex 18 - Reconciliation sheets Gems & Jade 15-16.xlsx', \
'name_cols': 'C:E', \
'data_cols' : 'A:N', \
'data_skip_rows' : 4, \
'stop_at' : '51'
},
{
'output_file' : '2015-16_oil_gas', \
'year': '2015-2016', \
'xls_file' : 'Annex 18 - Reconciliation sheets Oil & Gas 15-16.xlsx', \
'name_cols': 'B:E', \
'data_cols' : 'A:M', \
'data_skip_rows' : 5, \
'stop_at' : '35'
},
{
'output_file' : '2015-16_oil_gas_transport', \
'year': '2015-2016', \
'xls_file' : 'Annex 18 - Reconciliation sheets Oil & Gas transp 15-16.xlsx', \
'name_cols': 'B:E', \
'data_cols' : 'A:M', \
'data_skip_rows' : 5, \
'stop_at' : '5'
},
{
'output_file' : '2015-16_other_minerals', \
'year': '2015-2016', \
'xls_file' : 'Annex 18 - Reconciliation sheets Other minerals 15-16.xlsx', \
'name_cols': 'B:E', \
'data_cols' : 'A:M', \
'data_skip_rows' : 5, \
'stop_at' : '28'
}
]
'''
files_to_read = [
{
'output_file' : '2015-16_other_minerals', \
'year': '2015-2016', \
'xls_file' : 'Annex 18 - Reconciliation sheets Other minerals 15-16.xlsx', \
'name_cols': 'B:E', \
'data_cols' : 'A:M', \
'data_skip_rows' : 5, \
'stop_at' : '28'
}
]
{
'output_file' : '2014-15_oil_gas_transport', \
'year': '2014-2015', \
'xls_file' : 'Annex 18 - Reconciliation sheets Oil & Gas transp 15-16.xlsx', \
'name_cols': 'B:E', \
'data_cols' : 'A:M', \
'data_skip_rows' : 5, \
'stop_at' : '1000'
},
'''
#name_cols = 'B:D'
#data_cols = 'A:M'
#data_skip_rows = 4
df_all = pd.DataFrame({'company_name' : [],'name_of_revenue_stream' : [], \
'paid_to' : [], 'payment_category' : [], 'units' : [], \
'per_company_original' : [], \
'per_company_adjust' : [], \
'per_company_final' : [], \
'per_government_original' : [], \
'per_government_adjust' : [], \
'per_government_final' : [], \
'final_difference' : [], \
'comment' : []
})
key_terms = { 'payment_category' : ['Payments in kind', 'Payments in cash', 'B- Unilateral company disclosures'], \
'units' : ['In Barils', 'In Mscf', 'Gold in T.oz', 'Tin in MT', 'In (Please mention the commodity)', \
'Antimony Ore', 'NA', 'Copper', 'Copper in MT', 'Ferro Nickel']
}
key_terms = convert_to_lower_case(key_terms)
def join_column_titles(text):
#print(text)
to_return = ""
for t in text:
#print(t)
if not isinstance(t, float):
to_return = '.'.join([to_return,(''.join(i for i in t if ord(i)<128))])
#print("FINAL: " + to_return)
# remove first '.' from the title
return to_return[1:]
def rename_duplicate_column_titles(columns):
unique_titles = []
title_counts = {}
for c in columns:
if c in unique_titles:
title_counts[c] += 1
unique_titles.append(c + "." + str(title_counts[c]))
else:
title_counts[c] = 0
unique_titles.append(c)
return unique_titles
def add_sheet_to_main_df(main_df,current_df, company_name,key_terms):
current_payment_category = ""
current_units = "MMK"
current_paid_to = ""
for index, row in current_df.iterrows():
#display(row)
index_col = 'n'
description_col = 'description of payment'
# skip row if '°' is included in it because that means
# there's an extra row of French titles in the table
unicode_row = row.to_string().encode("utf-8")
if u'\xb0' in unicode_row.decode('windows-1252'):
continue
# if the index column in empty, that means it's not a data-row
if str(row[index_col]) == 'nan':
index_col = description_col
#print(str(row[description_col]).lower() + " | " + str(row[index_col]).lower())
if str(row[index_col]).lower() in key_terms['payment_category']:
current_payment_category = str(row[index_col]).lower()
elif str(row[index_col]).lower() in key_terms['units']:
current_units = str(row[index_col]).lower()
current_paid_to = ""
elif not str(row[index_col]).replace('.','',1).isdigit():
current_paid_to = str(row[index_col])
current_units = "MMK"
if str(row['n']).replace('.','',1).isdigit():
to_append = pd.DataFrame({'company_name' : [company_name], \
'name_of_revenue_stream' : [row['description of payment']] , \
'paid_to' : [current_paid_to], \
'payment_category' : [current_payment_category], \
'units' : [current_units], \
'per_company_original' : [row['per company.original']], \
'per_company_adjust' : [row['per company.adjust']], \
'per_company_final' : [row['per company.final']], \
'per_government_original' : [row['per government.original']], \
'per_government_adjust' : [row['per government.adjust']], \
'per_government_final' : [row['per government.final']], \
'final_difference' : [row['final difference']], \
'comment' : [row['comment']] \
})
to_append['comment'].fillna('', inplace=True)
to_append.fillna(0, inplace=True)
main_df = pd.concat([main_df, to_append])
return main_df
def read_files(files_to_read, df_all):
for f in files_to_read:
output_file = f['output_file']
year = f['year']
xls_file = f['xls_file']
name_cols = f['name_cols']
data_cols = f['data_cols']
data_skip_rows = f['data_skip_rows']
stop_at = f['stop_at']
xl = pd.ExcelFile(year+'/'+xls_file)
# '2014-2015/Annex 18 - Reconciliation sheets Gems & Jade 14-15.xlsx')
print(year+'/'+xls_file)
sheets_list = xl.sheet_names
print(sheets_list)
df_all = pd.DataFrame()
for s in sheets_list:
if not re.search("[A-Z]+\s\(\d+\)",s):
continue
else:
match= re.search(r'(\d+)', s)
print('current sheet number: ' + match.group(1))
if int(match.group(1)) > int(stop_at):
continue
#print(s)
sheet_name = s
#print(name_cols)
name_df = xl.parse(sheet_name, usecols = name_cols,header=1)
#display(name_df.head())
# xl.parse turns the first parsed row into column headers - so we retrieve the company name
# from the name of the column
company_name = name_df.columns[2]
#print(company_name)
# xl.parse turns the first parsed row into column headers - so we retrieve the registry number
# from the row where "Company name:" column is equal to "Registry number"
if 'Company name:' in name_df.columns:
company_number = name_df[name_df['Company name:'] == 'Registry number'][company_name]
else:
company_number = name_df[name_df['Company name'] == 'Registry number'][company_name]
#print(company_name, company_number)
df = xl.parse(sheet_name, skiprows=data_skip_rows, usecols=data_cols, header=None)  # parse_cols was renamed to usecols in newer pandas
for i in df.iloc[0:1]:
if df.iloc[0:1][i][0] == 'Company':
df.iloc[0:1][i][0] = 'per company'
if df.iloc[0:1][i][0] == 'Government Agency':
df.iloc[0:1][i][0] = 'per government'
# fill out cells with merged column headers
df.iloc[0:2] = df.iloc[0:2].fillna(method='ffill', axis=1)
df.columns = df.iloc[0:2].apply(join_column_titles, axis=0)
df = df.iloc[2:]
df = df.reset_index(drop=True)
df.columns = rename_duplicate_column_titles(df.columns)
df.columns = [x.lower() for x in df.columns]
df.rename(columns={'final difference.final': 'final difference', \
'comment.final': 'comment'}, inplace=True)
if 'n.n' in df.columns:
df.rename(columns={'n.n': 'n'}, inplace=True)
if 'description of payment.description' in df.columns:
df.rename(columns={'description of payment.description': \
'description of payment'}, inplace=True)
for col in df.columns:
new_col_name = col.replace("governement", "government")
new_col_name = new_col_name.replace("ajust", "adjust")
df.rename(columns={col: new_col_name}, inplace=True)
if 'per company.initial' in df.columns:
df.rename(columns={'per company.initial': \
'per company.original'}, inplace=True)
if 'per government.initial' in df.columns:
df.rename(columns={'per government.initial': \
'per government.original'}, inplace=True)
if 'company.initial' in df.columns:
df.rename(columns={'company.initial': \
'per company.original'}, inplace=True)
if 'government agency.initial' in df.columns:
df.rename(columns={'government agency.initial': \
'per government.original'}, inplace=True)
#display(df.head())
# clean name for the company to save individual CSV file if needed
name = map(lambda x: ''.join(e for e in x if e.isalnum()) , company_name.split(' '))
name = ' '.join(w for w in name)
name = re.sub( '\s+', ' ', name.strip())
filename = re.sub( '\s+', '_', name.strip())+'.csv'
# df.to_csv(filename)
#print(filename)
df_all = add_sheet_to_main_df(df_all,df, company_name,key_terms)
df_all = df_all.reset_index(drop=True)
df_all.to_csv(year+'/'+output_file+'.csv', encoding='utf-8')
display(df_all.head())
read_files(files_to_read,df_all)
```
```
import tensorflow
from math import sqrt
from numpy import concatenate
from matplotlib import pyplot
import pandas as pd
from pandas import read_csv
from pandas import DataFrame
from pandas import concat
from sklearn.preprocessing import MinMaxScaler
from sklearn.preprocessing import LabelEncoder
from sklearn.metrics import mean_squared_error
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
rides_weather = pd.read_pickle("rides_weather.pkl")
rides_weather
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import pandas as pd
import os
from sklearn.preprocessing import MinMaxScaler
# from tf.keras.models import Sequential # This does not work!
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Input, Dense, GRU, Embedding
from tensorflow.python.keras.optimizers import RMSprop
from tensorflow.python.keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard, ReduceLROnPlateau
print("tensorflow version", tf.__version__)
#print("keras version", tf.keras.__version__)
print("pandas version", pd.__version__)
# potentially add day and hour later
# df['Various', 'Day'] = df.index.dayofyear
# df['Various', 'Hour'] = df.index.hour
rides_weather
target_names = rides_weather.columns[:3]
target_names
shift_days = 1
shift_steps = shift_days * 24 # Number of hours.
df_targets = rides_weather[target_names].shift(-shift_steps)
rides_weather[target_names].head(shift_steps + 5)
df_targets.head()
x_data = rides_weather[target_names].values[0:-shift_steps]
print(type(x_data))
print("Shape:", x_data.shape)
y_data = df_targets.values[:-shift_steps]
print(type(y_data))
print("Shape:", y_data.shape)
num_data = len(x_data)
train_split = 0.9
num_train = int(train_split * num_data)
num_test = num_data - num_train
# Define x
x_train = x_data[0:num_train]
x_test = x_data[num_train:]
len(x_train) + len(x_test)
# Define y
y_train = y_data[0:num_train]
y_test = y_data[num_train:]
len(y_train) + len(y_test)
num_x_signals = x_data.shape[1]
num_x_signals
num_y_signals = y_data.shape[1]
num_y_signals
print("Min:", np.min(x_train))
print("Max:", np.max(x_train))
print("Min:", np.min(x_test))
print("Max:", np.max(x_test))
# Scale from 0 to 1
x_scaler = MinMaxScaler()
x_train_scaled = x_scaler.fit_transform(x_train)
x_test_scaled = x_scaler.transform(x_test)
y_scaler = MinMaxScaler()
y_train_scaled = y_scaler.fit_transform(y_train)
y_test_scaled = y_scaler.transform(y_test)
print(x_train_scaled.shape)
print(y_train_scaled.shape)
def batch_generator(batch_size, sequence_length):
"""
Generator function for creating random batches of training-data.
"""
# Infinite loop.
while True:
# Allocate a new array for the batch of input-signals.
x_shape = (batch_size, sequence_length, num_x_signals)
x_batch = np.zeros(shape=x_shape, dtype=np.float16)
# Allocate a new array for the batch of output-signals.
y_shape = (batch_size, sequence_length, num_y_signals)
y_batch = np.zeros(shape=y_shape, dtype=np.float16)
# Fill the batch with random sequences of data.
for i in range(batch_size):
# Get a random start-index.
# This points somewhere into the training-data.
idx = np.random.randint(num_train - sequence_length)
# Copy the sequences of data starting at this index.
x_batch[i] = x_train_scaled[idx:idx+sequence_length]
y_batch[i] = y_train_scaled[idx:idx+sequence_length]
yield (x_batch, y_batch)
batch_size = 214
sequence_length = 24 * 7
generator = batch_generator(batch_size=batch_size,
sequence_length=sequence_length)
x_batch, y_batch = next(generator)
print(x_batch.shape)
print(y_batch.shape)
batch = 0 # First sequence in the batch.
signal = 0 # First signal from the 20 input-signals.
seq = x_batch[batch, :, signal]
plt.plot(seq)
seq = y_batch[batch, :, signal]
plt.plot(seq)
validation_data = (np.expand_dims(x_test_scaled, axis=0),
np.expand_dims(y_test_scaled, axis=0))
model = Sequential()
model.add(GRU(units=512,return_sequences=True,input_shape=(None,num_x_signals)))
model.add(Dense(num_y_signals, activation='sigmoid'))
warmup_steps=24
def loss_mse_warmup(y_true, y_pred):
"""
Calculate the Mean Squared Error between y_true and y_pred,
but ignore the beginning "warmup" part of the sequences.
y_true is the desired output.
y_pred is the model's output.
"""
# The shape of both input tensors are:
# [batch_size, sequence_length, num_y_signals].
# Ignore the "warmup" parts of the sequences
# by taking slices of the tensors.
y_true_slice = y_true[:, warmup_steps:, :]
y_pred_slice = y_pred[:, warmup_steps:, :]
# These sliced tensors both have this shape:
# [batch_size, sequence_length - warmup_steps, num_y_signals]
# Calculate the MSE loss for each value in these tensors.
# This outputs a 3-rank tensor of the same shape.
loss = tf.losses.mean_squared_error(labels=y_true_slice,
predictions=y_pred_slice)
# Keras may reduce this across the first axis (the batch)
# but the semantics are unclear, so to be sure we use
# the loss across the entire tensor, we reduce it to a
# single scalar with the mean function.
loss_mean = tf.reduce_mean(loss)
return loss_mean
optimizer = RMSprop(lr=1e-3)
model.compile(loss=loss_mse_warmup, optimizer=optimizer)
model.summary()
path_checkpoint = '23_checkpoint.keras'
callback_checkpoint = ModelCheckpoint(filepath=path_checkpoint,
monitor='val_loss',
verbose=1,
save_weights_only=True,
save_best_only=True)
callback_early_stopping = EarlyStopping(monitor='val_loss',
patience=5, verbose=1)
callback_tensorboard = TensorBoard(log_dir='./23_logs/',
histogram_freq=0,
write_graph=False)
callback_reduce_lr = ReduceLROnPlateau(monitor='val_loss',
factor=0.1,
min_lr=1e-4,
patience=0,
verbose=1)
callbacks = [callback_early_stopping,
callback_checkpoint,
callback_tensorboard,
callback_reduce_lr]
%%time
model.fit_generator(generator=generator,
epochs=20,
steps_per_epoch=100,
validation_data=validation_data,
callbacks=callbacks)
try:
model.load_weights(path_checkpoint)
except Exception as error:
print("Error trying to load checkpoint.")
print(error)
result = model.evaluate(x=np.expand_dims(x_test_scaled, axis=0),
y=np.expand_dims(y_test_scaled, axis=0))
print("loss (test-set):", result)
def plot_comparison(start_idx, length=100, train=True):
"""
Plot the predicted and true output-signals.
:param start_idx: Start-index for the time-series.
:param length: Sequence-length to process and plot.
:param train: Boolean whether to use training- or test-set.
"""
if train:
# Use training-data.
x = x_train_scaled
y_true = y_train
else:
# Use test-data.
x = x_test_scaled
y_true = y_test
# End-index for the sequences.
end_idx = start_idx + length
# Select the sequences from the given start-index and
# of the given length.
x = x[start_idx:end_idx]
y_true = y_true[start_idx:end_idx]
# Input-signals for the model.
x = np.expand_dims(x, axis=0)
# Use the model to predict the output-signals.
y_pred = model.predict(x)
# The output of the model is between 0 and 1.
# Do an inverse map to get it back to the scale
# of the original data-set.
y_pred_rescaled = y_scaler.inverse_transform(y_pred[0])
# For each output-signal.
for signal in range(len(target_names)):
# Get the output-signal predicted by the model.
signal_pred = y_pred_rescaled[:, signal]
# Get the true output-signal from the data-set.
signal_true = y_true[:, signal]
# Make the plotting-canvas bigger.
plt.figure(figsize=(15,5))
# Plot and compare the two signals.
plt.plot(signal_true, label='true')
plt.plot(signal_pred, label='pred')
# Plot grey box for warmup-period.
p = plt.axvspan(0, warmup_steps, facecolor='black', alpha=0.15)
# Plot labels etc.
plt.ylabel(target_names[signal])
plt.legend()
plt.show()
plot_comparison(start_idx=100, length=200, train=True)
plot_comparison(start_idx=200, length=1000, train=False)
model.save("3loc_model.model")
```
# Convolutional Layer
In this notebook, we visualize four filtered outputs (a.k.a. activation maps) of a convolutional layer.
In this example, *we* are defining four filters that are applied to an input image by initializing the **weights** of a convolutional layer, but a trained CNN will learn the values of these weights.
<img src='notebook_ims/conv_layer.gif' height=60% width=60% />
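Before looking at the filters, it can help to see exactly what a single convolution computes. Below is a minimal NumPy sketch (strictly speaking a cross-correlation, with no padding and stride 1); the image and filter values here are illustrative, not taken from this notebook:

```python
import numpy as np

def conv2d(image, kernel):
    # "valid" cross-correlation: slide the kernel over the image (stride 1, no padding)
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # multiply the kernel with the image patch elementwise, then sum
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# a tiny image with a vertical edge down the middle
img = np.array([[0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.],
                [0., 0., 1., 1.]])
# a simple vertical-edge filter
edge = np.array([[-1., 1.],
                 [-1., 1.]])
print(conv2d(img, edge))  # strongest response in the column containing the edge
```

The convolutional layer defined later performs the same computation, but for four filters at once and with weights stored as learnable parameters.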
### Import the image
```
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
```
### Define and visualize the filters
```
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
# visualize all four filters
fig = plt.figure(figsize=(10, 5))
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
width, height = filters[i].shape
for x in range(width):
for y in range(height):
ax.annotate(str(filters[i][x][y]), xy=(y,x),
horizontalalignment='center',
verticalalignment='center',
color='white' if filters[i][x][y]<0 else 'black')
```
## Define a convolutional layer
The various layers that make up any neural network are documented [here](http://pytorch.org/docs/stable/nn.html). For a convolutional neural network, we'll start by defining a:
* Convolutional layer
Initialize a single convolutional layer so that it contains all your created filters. Note that you are not training this network; you are initializing the weights in a convolutional layer so that you can visualize what happens after a forward pass through this network!
#### `__init__` and `forward`
To define a neural network in PyTorch, you define the layers of a model in the function `__init__` and define the forward behavior of the network, which applies those initialized layers to an input (`x`), in the function `forward`. In PyTorch we convert all inputs into the Tensor datatype, which is similar to a NumPy array.
Below, I define the structure of a class called `Net` that has a convolutional layer that can contain four 4x4 grayscale filters.
```
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a single convolutional layer with four filters
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# returns both layers
return conv_x, activated_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
```
### Visualize the output of each filter
First, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
```
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1, xticks=[], yticks=[])
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
```
Let's look at the output of a convolutional layer, before and after a ReLU activation function is applied.
```
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get the convolutional layer (pre and post activation)
conv_layer, activated_layer = model(gray_img_tensor)
# visualize the output of a conv layer
viz_layer(conv_layer)
```
#### ReLU activation
In this model, we've used an activation function that scales the output of the convolutional layer. We've chosen a ReLU function to do this, and this function simply turns all negative pixel values into 0s (black). See the equation pictured below for input pixel values, `x`.
<img src='notebook_ims/relu_ex.png' height=50% width=50% />
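The rule in the image is just `f(x) = max(0, x)`. A quick NumPy sketch (the input values are illustrative):

```python
import numpy as np

def relu(x):
    # negative entries become 0; non-negative entries pass through unchanged
    return np.maximum(0, x)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))  # zeros where x was negative, x itself elsewhere
```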
```
# after a ReLu is applied
# visualize the output of an activated conv layer
viz_layer(activated_layer)
```
# Clustering Satellite Images
Our objective is to cluster the pixels of a satellite image in order to extract some insights.
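The recipe used throughout this notebook is the same for every algorithm: treat each pixel as a feature vector, cluster the vectors, then reshape the labels back into image form. A minimal sketch on a synthetic image (the 10x10 size and two clusters are illustrative; the notebook itself reshapes to 500-pixel-wide images):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
img = rng.random((10, 10, 3))       # a tiny 3-channel "satellite image"
img[:5] += 2.0                      # make the top half clearly different

pixels = img.reshape(-1, 3)         # one row per pixel
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)
label_img = labels.reshape(10, 10)  # back to image shape for plotting
print(label_img.shape)
```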
```
# Modules
# File managing
import os
import re as regex
# Set working dir
os.chdir('/home/adriel_martins/Documents/projects/stats-img-processing')
# Visualization and Image Processing
import cv2
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from src.utils import biplot
# Data Manipulation
import pandas as pd
# Clustering
from sklearn.cluster import KMeans
from sklearn_extra.cluster import KMedoids
from sklearn.cluster import AgglomerativeClustering
```
# Loading Images
We have three different types of data: the original satellite image with many channels, the PCA scores (both the full set and a partial version with only PC1 and PC2), and the factor-analysis scores.
```
original_img = pd.read_csv('data/output/array/cropped_img.csv').drop(columns=['index', 'Unnamed: 0'])
pca = pd.read_csv('data/output/array/pca_score.csv').drop(columns=['Unnamed: 0'])
pca_partial = pca.loc[:, ['PC1', 'PC2']]
fa = pd.read_csv('data/output/array/fa_score.csv').drop(columns=['Unnamed: 0'])
data_collection = {}
data_names = ['original_img', 'pca_full', 'pca_partial', 'fa']
for i, data in enumerate([original_img, pca, pca_partial, fa]):
data_collection.update({data_names[i] : data})
display(data.head())
```
# K-Means
```
def get_model_results(data, n_clusters=4):
data = data.copy()
kmeans = KMeans(n_clusters=n_clusters,
n_init = 50)
model_results = data
model_results['cluster'] = kmeans.fit_predict(data)
return model_results
def biplot_decomposed_cluster(model_results, cols):
fig, ax = plt.subplots() # a figure with a single Axes
ax.scatter(model_results[cols[0]], model_results[cols[1]],
s=1, alpha=0.5,
c=model_results['cluster'])
ax.set_xlabel(cols[0]) # Add an x-label to the axes.
ax.set_ylabel(cols[1]) # Add a y-label to the axes.
return ax
def plot_clusters(cluster_series, colour='gray'):
cluster_series = cluster_series.to_numpy().copy()
cluster_series.shape = (500, -1)
plt.imshow(cluster_series, cmap=colour)
for index, (data_name, data) in enumerate(data_collection.items()):
    plt.subplot(2, 2, index + 1)
    plot_clusters(get_model_results(data)['cluster'])
    plt.tick_params(left=False, right=False, labelleft=False,
                    labelbottom=False, bottom=False)
    plt.title(data_name)
plt.savefig('data/output/img/kmeans_cluster_img.png')
```
# PAM (Partition Around Medoids)
```
def get_model_results_kmedoids(data, n_clusters=4):
    data = data.copy()
    kmedoids = KMedoids(n_clusters=n_clusters,
                        method='pam',
                        init='k-medoids++')
    model_results = data
    # fit once and return the dataframe with the labels attached
    model_results['cluster'] = kmedoids.fit_predict(data)
    return model_results
for index, (data_name, data) in enumerate(data_collection.items()):
    plt.subplot(2, 2, index + 1)
    plot_clusters(get_model_results_kmedoids(data)['cluster'])
    plt.tick_params(left=False, right=False, labelleft=False,
                    labelbottom=False, bottom=False)
    plt.title(data_name)
plt.savefig('data/output/img/kmedoid_cluster_img.png')
```
# Agglomerative Hierarchical
```
def get_model_results_agghier(data, n_clusters=4):
    data = data.copy()
    # use hierarchical clustering here, not the copy-pasted k-medoids code
    agghier = AgglomerativeClustering(n_clusters=n_clusters)
    model_results = data
    model_results['cluster'] = agghier.fit_predict(data)
    return model_results
for index, (data_name, data) in enumerate(data_collection.items()):
    plt.subplot(2, 2, index + 1)
    model_results = get_model_results_agghier(data)
    plot_clusters(model_results['cluster'])
    plt.tick_params(left=False, right=False, labelleft=False,
                    labelbottom=False, bottom=False)
    plt.title(data_name)
plt.savefig('data/output/img/agghier_cluster_img.png')
```

# NYC Taxi Data Regression Model
This is an [Azure Machine Learning Pipelines](https://aka.ms/aml-pipelines) version of the two-part tutorial ([Part 1](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-data-prep), [Part 2](https://docs.microsoft.com/en-us/azure/machine-learning/service/tutorial-auto-train-models)) available for Azure Machine Learning.
You can combine the two-part tutorial into one using AzureML Pipelines, as pipelines provide a way to stitch together the various steps involved (like data preparation and training in this case) in a machine learning workflow.
In this notebook, you learn how to prepare data for regression modeling by using open source library [pandas](https://pandas.pydata.org/). You run various transformations to filter and combine two different NYC taxi datasets. Once you prepare the NYC taxi data for regression modeling, then you will use [AutoMLStep](https://docs.microsoft.com/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automl_step.automlstep?view=azure-ml-py) available with [Azure Machine Learning Pipelines](https://aka.ms/aml-pipelines) to define your machine learning goals and constraints as well as to launch the automated machine learning process. The automated machine learning technique iterates over many combinations of algorithms and hyperparameters until it finds the best model based on your criterion.
After you complete building the model, you can predict the cost of a taxi trip by training a model on data features. These features include the pickup day and time, the number of passengers, and the pickup location.
## Prerequisite
If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, make sure you go through the configuration Notebook located at https://github.com/Azure/MachineLearningNotebooks first if you haven't. This sets you up with a working config file that has information on your workspace, subscription id, etc.
## Prepare data for regression modeling
First, we will prepare data for regression modeling. We will leverage the convenience of Azure Open Datasets along with the power of Azure Machine Learning service to create a regression model to predict NYC taxi fare prices. Perform `pip install azureml-opendatasets` to get the open dataset package. The Open Datasets package contains a class representing each data source (NycTlcGreen and NycTlcYellow) to easily filter date parameters before downloading.
### Load data
Begin by creating a dataframe to hold the taxi data. When working in a non-Spark environment, Open Datasets only allows downloading one month of data at a time with certain classes, to avoid a MemoryError with large datasets. To download a year of taxi data, iteratively fetch one month at a time, and before appending it to `green_df_raw`, randomly sample a fixed number of records from each month to avoid bloating the dataframe. Then preview the data. To keep this process short, we sample data from only one month.
Note: Open Datasets has mirroring classes for working in Spark environments where data size and memory aren't a concern.
```
import azureml.core
# Check core SDK version number
print("SDK version:", azureml.core.VERSION)
from azureml.opendatasets import NycTlcGreen, NycTlcYellow
import pandas as pd
from datetime import datetime
from dateutil.relativedelta import relativedelta
green_df_raw = pd.DataFrame([])
start = datetime.strptime("1/1/2016","%m/%d/%Y")
end = datetime.strptime("1/31/2016","%m/%d/%Y")
number_of_months = 1
sample_size = 5000
for sample_month in range(number_of_months):
    temp_df_green = NycTlcGreen(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
        .to_pandas_dataframe()
    # DataFrame.append is deprecated in recent pandas; use pd.concat instead
    green_df_raw = pd.concat([green_df_raw, temp_df_green.sample(sample_size)])
yellow_df_raw = pd.DataFrame([])
start = datetime.strptime("1/1/2016","%m/%d/%Y")
end = datetime.strptime("1/31/2016","%m/%d/%Y")
sample_size = 500
for sample_month in range(number_of_months):
    temp_df_yellow = NycTlcYellow(start + relativedelta(months=sample_month), end + relativedelta(months=sample_month)) \
        .to_pandas_dataframe()
    # DataFrame.append is deprecated in recent pandas; use pd.concat instead
    yellow_df_raw = pd.concat([yellow_df_raw, temp_df_yellow.sample(sample_size)])
```
### See the data
```
from IPython.display import display
display(green_df_raw.head(5))
display(yellow_df_raw.head(5))
```
### Download data locally and then upload to Azure Blob
This is a one-time process to save the data in the default datastore.
```
import os
dataDir = "data"
if not os.path.exists(dataDir):
os.mkdir(dataDir)
greenDir = dataDir + "/green"
yellowDir = dataDir + "/yellow"
if not os.path.exists(greenDir):
    os.mkdir(greenDir)
if not os.path.exists(yellowDir):
    os.mkdir(yellowDir)
greenTaxiData = greenDir + "/unprepared.parquet"
yellowTaxiData = yellowDir + "/unprepared.parquet"
# Note: despite the .parquet extension, these files are written as CSV
# (and read back later with Dataset.Tabular.from_delimited_files)
green_df_raw.to_csv(greenTaxiData, index=False)
yellow_df_raw.to_csv(yellowTaxiData, index=False)
print("Data written to local folder.")
from azureml.core import Workspace
ws = Workspace.from_config()
print("Workspace: " + ws.name, "Region: " + ws.location, sep = '\n')
# Default datastore
default_store = ws.get_default_datastore()
default_store.upload_files([greenTaxiData],
target_path = 'green',
overwrite = True,
show_progress = True)
default_store.upload_files([yellowTaxiData],
target_path = 'yellow',
overwrite = True,
show_progress = True)
print("Upload calls completed.")
```
### Create and register datasets
By creating a dataset, you create a reference to the data source location. If you applied any subsetting transformations to the dataset, they will be stored in the dataset as well. You can learn more about what subsetting capabilities are supported by referring to [our documentation](https://docs.microsoft.com/en-us/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset?view=azure-ml-py#remarks). The data remains in its existing location, so no extra storage cost is incurred.
```
from azureml.core import Dataset
green_taxi_data = Dataset.Tabular.from_delimited_files(default_store.path('green/unprepared.parquet'))
yellow_taxi_data = Dataset.Tabular.from_delimited_files(default_store.path('yellow/unprepared.parquet'))
```
Register the taxi datasets with the workspace so that you can reuse them in other experiments or share with your colleagues who have access to your workspace.
```
green_taxi_data = green_taxi_data.register(ws, 'green_taxi_data')
yellow_taxi_data = yellow_taxi_data.register(ws, 'yellow_taxi_data')
```
### Setup Compute
#### Create new or use an existing compute
```
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
# Choose a name for your CPU cluster
amlcompute_cluster_name = "cpu-cluster"
# Verify that cluster does not exist already
try:
aml_compute = ComputeTarget(workspace=ws, name=amlcompute_cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
aml_compute = ComputeTarget.create(ws, amlcompute_cluster_name, compute_config)
aml_compute.wait_for_completion(show_output=True)
```
#### Define RunConfig for the compute
We will also use `pandas`, `scikit-learn`, `automl`, and `pyarrow` in the pipeline steps, so we define a `runconfig` that includes them.
```
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# Create a new runconfig object
aml_run_config = RunConfiguration()
# Use the aml_compute you created above.
aml_run_config.target = aml_compute
# Enable Docker
aml_run_config.environment.docker.enabled = True
# Set Docker base image to the default CPU-based image
aml_run_config.environment.docker.base_image = "mcr.microsoft.com/azureml/base:0.2.1"
# Use conda_dependencies.yml to create a conda environment in the Docker image for execution
aml_run_config.environment.python.user_managed_dependencies = False
# Specify CondaDependencies obj, add necessary packages
aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(
conda_packages=['pandas','scikit-learn'],
pip_packages=['azureml-sdk[automl,explain]', 'pyarrow'])
print ("Run configuration created.")
```
### Prepare data
Now we will prepare for regression modeling by using `pandas`. We run various transformations to filter and combine two different NYC taxi datasets.
We achieve this by creating a separate step for each transformation as this allows us to reuse the steps and saves us from running all over again in case of any change. We will keep data preparation scripts in one subfolder and training scripts in another.
> The best practice is to use separate folders for the scripts and dependent files of each step, and to specify that folder as the step's `source_directory`. This reduces the size of the snapshot created for the step (only the specific folder is snapshotted). Since a change to any file in the `source_directory` triggers a re-upload of the snapshot, keeping folders separate preserves step reuse whenever nothing in a step's `source_directory` has changed.
#### Define Useful Columns
Here we are defining a set of "useful" columns for both Green and Yellow taxi data.
```
display(green_df_raw.columns)
display(yellow_df_raw.columns)
# useful columns needed for the Azure Machine Learning NYC Taxi tutorial
useful_columns = str(["cost", "distance", "dropoff_datetime", "dropoff_latitude",
"dropoff_longitude", "passengers", "pickup_datetime",
"pickup_latitude", "pickup_longitude", "store_forward", "vendor"]).replace(",", ";")
print("Useful columns defined.")
```
#### Cleanse Green taxi data
```
from azureml.pipeline.core import PipelineData
from azureml.pipeline.steps import PythonScriptStep
# python scripts folder
prepare_data_folder = './scripts/prepdata'
# rename columns as per Azure Machine Learning NYC Taxi tutorial
green_columns = str({
"vendorID": "vendor",
"lpepPickupDatetime": "pickup_datetime",
"lpepDropoffDatetime": "dropoff_datetime",
"storeAndFwdFlag": "store_forward",
"pickupLongitude": "pickup_longitude",
"pickupLatitude": "pickup_latitude",
"dropoffLongitude": "dropoff_longitude",
"dropoffLatitude": "dropoff_latitude",
"passengerCount": "passengers",
"fareAmount": "cost",
"tripDistance": "distance"
}).replace(",", ";")
# Define output after cleansing step
cleansed_green_data = PipelineData("cleansed_green_data", datastore=default_store).as_dataset()
print('Cleanse script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# cleansing step creation
# See the cleanse.py for details about input and output
cleansingStepGreen = PythonScriptStep(
name="Cleanse Green Taxi Data",
script_name="cleanse.py",
arguments=["--useful_columns", useful_columns,
"--columns", green_columns,
"--output_cleanse", cleansed_green_data],
inputs=[green_taxi_data.as_named_input('raw_data')],
outputs=[cleansed_green_data],
compute_target=aml_compute,
runconfig=aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("cleansingStepGreen created.")
```
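`cleanse.py` itself is not included in this notebook. Since the dicts and lists above are serialized with `str(...)` and their commas replaced by semicolons (so they survive being passed as a single command-line argument), the script presumably reverses that substitution before evaluating. A hypothetical sketch of that round-trip (the helper name `parse_arg` is ours, not part of the tutorial):

```python
import ast

def parse_arg(serialized):
    # undo the ","->";" substitution done when building the argument,
    # then safely evaluate the Python literal (a dict or list)
    return ast.literal_eval(serialized.replace(";", ","))

green_columns_arg = str({"vendorID": "vendor", "fareAmount": "cost"}).replace(",", ";")
print(parse_arg(green_columns_arg))  # back to the original dict
```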
#### Cleanse Yellow taxi data
```
yellow_columns = str({
"vendorID": "vendor",
"tpepPickupDateTime": "pickup_datetime",
"tpepDropoffDateTime": "dropoff_datetime",
"storeAndFwdFlag": "store_forward",
"startLon": "pickup_longitude",
"startLat": "pickup_latitude",
"endLon": "dropoff_longitude",
"endLat": "dropoff_latitude",
"passengerCount": "passengers",
"fareAmount": "cost",
"tripDistance": "distance"
}).replace(",", ";")
# Define output after cleansing step
cleansed_yellow_data = PipelineData("cleansed_yellow_data", datastore=default_store).as_dataset()
print('Cleanse script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# cleansing step creation
# See the cleanse.py for details about input and output
cleansingStepYellow = PythonScriptStep(
name="Cleanse Yellow Taxi Data",
script_name="cleanse.py",
arguments=["--useful_columns", useful_columns,
"--columns", yellow_columns,
"--output_cleanse", cleansed_yellow_data],
inputs=[yellow_taxi_data.as_named_input('raw_data')],
outputs=[cleansed_yellow_data],
compute_target=aml_compute,
runconfig=aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("cleansingStepYellow created.")
```
#### Merge cleansed Green and Yellow datasets
We are creating a single data source by merging the cleansed versions of Green and Yellow taxi data.
```
# Define output after merging step
merged_data = PipelineData("merged_data", datastore=default_store).as_dataset()
print('Merge script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# merging step creation
# See the merge.py for details about input and output
mergingStep = PythonScriptStep(
name="Merge Taxi Data",
script_name="merge.py",
arguments=["--output_merge", merged_data],
inputs=[cleansed_green_data.parse_parquet_files(file_extension=None),
cleansed_yellow_data.parse_parquet_files(file_extension=None)],
outputs=[merged_data],
compute_target=aml_compute,
runconfig=aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("mergingStep created.")
```
#### Filter data
This step filters out coordinates for locations that are outside the city border. We use a TypeConverter object to change the latitude and longitude fields to decimal type.
```
# Define output after merging step
filtered_data = PipelineData("filtered_data", datastore=default_store).as_dataset()
print('Filter script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# filter step creation
# See the filter.py for details about input and output
filterStep = PythonScriptStep(
name="Filter Taxi Data",
script_name="filter.py",
arguments=["--output_filter", filtered_data],
inputs=[merged_data.parse_parquet_files(file_extension=None)],
outputs=[filtered_data],
compute_target=aml_compute,
runconfig = aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("FilterStep created.")
```
#### Normalize data
In this step, we split the pickup and dropoff datetime values into the respective date and time columns and then we rename the columns to use meaningful names.
```
# Define output after normalize step
normalized_data = PipelineData("normalized_data", datastore=default_store).as_dataset()
print('Normalize script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# normalize step creation
# See the normalize.py for details about input and output
normalizeStep = PythonScriptStep(
name="Normalize Taxi Data",
script_name="normalize.py",
arguments=["--output_normalize", normalized_data],
inputs=[filtered_data.parse_parquet_files(file_extension=None)],
outputs=[normalized_data],
compute_target=aml_compute,
runconfig = aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("normalizeStep created.")
```
#### Transform data
Transform the normalized taxi data into the final required format. This step does the following:
- Splits the pickup and dropoff dates further into day-of-the-week, day-of-the-month, and month values.
- To get the day-of-the-week value, uses the `derive_column_by_example()` function. The function takes an array parameter of example objects that define the input data and the preferred output, and it automatically determines the preferred transformation. For the pickup and dropoff time columns, splits the time into the hour, minute, and second by using the `split_column_by_example()` function with no example parameter.
- After the new features are generated, uses the `drop_columns()` function to delete the original fields, as the newly generated features are preferred.
- Renames the rest of the fields to use meaningful descriptions.
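`transform.py` is not shown here, but in plain pandas the date/time feature extraction described above might look roughly like this (the column names mirror the tutorial; the sample timestamps are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"pickup_datetime": pd.to_datetime(
    ["2016-01-04 08:15:30", "2016-01-09 23:05:10"])})

dt = df["pickup_datetime"].dt
df["pickup_weekday"] = dt.dayofweek   # Monday == 0
df["pickup_monthday"] = dt.day
df["pickup_month"] = dt.month
df["pickup_hour"] = dt.hour
df["pickup_minute"] = dt.minute
df["pickup_second"] = dt.second
# drop the original field once the derived features exist
df = df.drop(columns=["pickup_datetime"])
print(df)
```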
```
# Define output after transform step
transformed_data = PipelineData("transformed_data", datastore=default_store).as_dataset()
print('Transform script is in {}.'.format(os.path.realpath(prepare_data_folder)))
# transform step creation
# See the transform.py for details about input and output
transformStep = PythonScriptStep(
name="Transform Taxi Data",
script_name="transform.py",
arguments=["--output_transform", transformed_data],
inputs=[normalized_data.parse_parquet_files(file_extension=None)],
outputs=[transformed_data],
compute_target=aml_compute,
runconfig = aml_run_config,
source_directory=prepare_data_folder,
allow_reuse=True
)
print("transformStep created.")
```
### Split the data into train and test sets
This function segregates the data into dataset for model training and dataset for testing.
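`train_test_split.py` is likewise not shown; a minimal sketch of what such a step typically does, here with scikit-learn (the 80/20 split and toy columns are assumptions, not taken from the tutorial):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# a toy frame standing in for the transformed taxi data
df = pd.DataFrame({"distance": range(10), "cost": range(10)})

# hold out 20% of the rows for testing, with a fixed seed for reproducibility
train_df, test_df = train_test_split(df, test_size=0.2, random_state=123)
print(len(train_df), len(test_df))  # 8 2
```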
```
train_model_folder = './scripts/trainmodel'
# train and test splits output
output_split_train = PipelineData("output_split_train", datastore=default_store).as_dataset()
output_split_test = PipelineData("output_split_test", datastore=default_store).as_dataset()
print('Data split script is in {}.'.format(os.path.realpath(train_model_folder)))
# test train split step creation
# See the train_test_split.py for details about input and output
testTrainSplitStep = PythonScriptStep(
name="Train Test Data Split",
script_name="train_test_split.py",
arguments=["--output_split_train", output_split_train,
"--output_split_test", output_split_test],
inputs=[transformed_data.parse_parquet_files(file_extension=None)],
outputs=[output_split_train, output_split_test],
compute_target=aml_compute,
runconfig = aml_run_config,
source_directory=train_model_folder,
allow_reuse=True
)
print("testTrainSplitStep created.")
```
## Use automated machine learning to build regression model
Now we will use **automated machine learning** to build the regression model. We will use [AutoMLStep](https://docs.microsoft.com/python/api/azureml-train-automl-runtime/azureml.train.automl.runtime.automl_step.automlstep?view=azure-ml-py) in AML Pipelines for this part. Perform `pip install azureml-sdk[automl]` to get the automated machine learning package. Automated ML uses various features from the dataset to build relationships between those features and the price of a taxi trip.
### Automatically train a model
#### Create experiment
```
from azureml.core import Experiment
experiment = Experiment(ws, 'NYCTaxi_Tutorial_Pipelines')
print("Experiment created")
```
#### Define settings for autogeneration and tuning
Here we define the experiment parameters and the model settings for autogeneration and tuning. We can also pass `automl_settings` as `**kwargs`.
Use your defined training settings as a parameter to an `AutoMLConfig` object. Additionally, specify your training data and the type of model, which is `regression` in this case.
Note: When using AmlCompute, we can't pass NumPy arrays directly to the fit method.
```
import logging
from azureml.train.automl import AutoMLConfig
# Change iterations to a reasonable number (50) to get better accuracy
automl_settings = {
"iteration_timeout_minutes" : 10,
"iterations" : 2,
"primary_metric" : 'spearman_correlation',
"n_cross_validations": 5
}
training_dataset = output_split_train.parse_parquet_files(file_extension=None).keep_columns(['pickup_weekday','pickup_hour', 'distance','passengers', 'vendor', 'cost'])
automl_config = AutoMLConfig(task = 'regression',
debug_log = 'automated_ml_errors.log',
path = train_model_folder,
compute_target = aml_compute,
featurization = 'auto',
training_data = training_dataset,
label_column_name = 'cost',
**automl_settings)
print("AutoML config created.")
```
#### Define AutoMLStep
```
from azureml.pipeline.steps import AutoMLStep
trainWithAutomlStep = AutoMLStep(name='AutoML_Regression',
automl_config=automl_config,
allow_reuse=True)
print("trainWithAutomlStep created.")
```
#### Build and run the pipeline
```
from azureml.pipeline.core import Pipeline
from azureml.widgets import RunDetails
pipeline_steps = [trainWithAutomlStep]
pipeline = Pipeline(workspace = ws, steps=pipeline_steps)
print("Pipeline is built.")
pipeline_run = experiment.submit(pipeline, regenerate_outputs=False)
print("Pipeline submitted for execution.")
RunDetails(pipeline_run).show()
```
### Explore the results
```
# Before we proceed we need to wait for the run to complete.
pipeline_run.wait_for_completion()
# functions to download output to local and fetch as dataframe
def get_download_path(download_path, output_name):
output_folder = os.listdir(download_path + '/azureml')[0]
path = download_path + '/azureml/' + output_folder + '/' + output_name
return path
def fetch_df(step, output_name):
output_data = step.get_output_data(output_name)
download_path = './outputs/' + output_name
output_data.download(download_path, overwrite=True)
df_path = get_download_path(download_path, output_name) + '/processed.parquet'
return pd.read_parquet(df_path)
```
#### View cleansed taxi data
```
green_cleanse_step = pipeline_run.find_step_run(cleansingStepGreen.name)[0]
yellow_cleanse_step = pipeline_run.find_step_run(cleansingStepYellow.name)[0]
cleansed_green_df = fetch_df(green_cleanse_step, cleansed_green_data.name)
cleansed_yellow_df = fetch_df(yellow_cleanse_step, cleansed_yellow_data.name)
display(cleansed_green_df.head(5))
display(cleansed_yellow_df.head(5))
```
#### View the combined taxi data profile
```
merge_step = pipeline_run.find_step_run(mergingStep.name)[0]
combined_df = fetch_df(merge_step, merged_data.name)
display(combined_df.describe())
```
#### View the filtered taxi data profile
```
filter_step = pipeline_run.find_step_run(filterStep.name)[0]
filtered_df = fetch_df(filter_step, filtered_data.name)
display(filtered_df.describe())
```
#### View normalized taxi data
```
normalize_step = pipeline_run.find_step_run(normalizeStep.name)[0]
normalized_df = fetch_df(normalize_step, normalized_data.name)
display(normalized_df.head(5))
```
#### View transformed taxi data
```
transform_step = pipeline_run.find_step_run(transformStep.name)[0]
transformed_df = fetch_df(transform_step, transformed_data.name)
display(transformed_df.describe())
display(transformed_df.head(5))
```
#### View training data used by AutoML
```
split_step = pipeline_run.find_step_run(testTrainSplitStep.name)[0]
train_split = fetch_df(split_step, output_split_train.name)
display(train_split.describe())
display(train_split.head(5))
```
#### View the details of the AutoML run
```
from azureml.train.automl.run import AutoMLRun
#from azureml.widgets import RunDetails
# workaround to get the AutoML run, as it's the last step in the pipeline
# and get_steps() returns the steps from latest to first
for step in pipeline_run.get_steps():
automl_step_run_id = step.id
print(step.name)
print(automl_step_run_id)
break
automl_run = AutoMLRun(experiment = experiment, run_id=automl_step_run_id)
#RunDetails(automl_run).show()
```
#### Retrieve all Child runs
We use SDK methods to fetch all the child runs and see the individual metrics that we logged.
```
children = list(automl_run.get_children())
metricslist = {}
for run in children:
properties = run.get_properties()
metrics = {k: v for k, v in run.get_metrics().items() if isinstance(v, float)}
metricslist[int(properties['iteration'])] = metrics
rundata = pd.DataFrame(metricslist).sort_index(axis=1)
rundata
```
### Retrieve the best model
Uncomment the cell below to retrieve the best model.
```
# best_run, fitted_model = automl_run.get_output()
# print(best_run)
# print(fitted_model)
```
### Test the model
#### Get test data
Uncomment the cell below to get the test data.
```
# split_step = pipeline_run.find_step_run(testTrainSplitStep.name)[0]
# x_test = fetch_df(split_step, output_split_test.name)[['distance','passengers', 'vendor','pickup_weekday','pickup_hour']]
# y_test = fetch_df(split_step, output_split_test.name)[['cost']]
# display(x_test.head(5))
# display(y_test.head(5))
```
#### Test the best fitted model
Uncomment the cell below to test the best fitted model.
```
# y_predict = fitted_model.predict(x_test)
# y_actual = y_test.values.tolist()
# display(pd.DataFrame({'Actual':y_actual, 'Predicted':y_predict}).head(5))
# import matplotlib.pyplot as plt
# fig = plt.figure(figsize=(14, 10))
# ax1 = fig.add_subplot(111)
# distance_vals = [x[0] for x in x_test.values]
# ax1.scatter(distance_vals[:100], y_predict[:100], s=18, c='b', marker="s", label='Predicted')
# ax1.scatter(distance_vals[:100], y_actual[:100], s=18, c='r', marker="o", label='Actual')
# ax1.set_xlabel('distance (mi)')
# ax1.set_title('Predicted and Actual Cost/Distance')
# ax1.set_ylabel('Cost ($)')
# plt.legend(loc='upper left', prop={'size': 12})
# plt.rcParams.update({'font.size': 14})
# plt.show()
```
<a href="https://colab.research.google.com/github/AlbertoRosado1/desihigh/blob/main/SnowWhiteDwarf.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
from google.colab import drive
drive.mount('/content/drive',force_remount=True)
import sys
sys.path.append('/content/drive/MyDrive/desihigh')
import os
import numpy as np
import astropy.io.fits as fits
import pylab as pl
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import YouTubeVideo
from scipy import interpolate
from scipy import optimize
from tools.wave2rgb import wavelength_to_rgb
from tools.resample_flux import trapz_rebin
from pkg_resources import resource_filename
```
# A snow white dwarf
When you look to the sky, who knows what you will find? We're all familiar with our own [sun](https://solarsystem.nasa.gov/solar-system/sun/overview/),
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/sun.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
a seemingly ever-present companion that we see day after day. Would it surprise you to know that in 5.5 billion years the sun will change beyond recognition, as the Hydrogen fuelling nuclear fusion within it runs out?
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/RedGiant.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
During this apparent mid-life crisis, the sun will begin to fuse Helium to create the carbon fundamental to life on earth, and the oxygen necessary to sustain it. Expanding to tens or hundreds of times its size today, it will engulf Mercury & Venus, perhaps [even Earth itself](https://phys.org/news/2016-05-earth-survive-sun-red-giant.html#:~:text=Red%20Giant%20Phase%3A,collapses%20under%20its%20own%20weight.), and eventually explode as a spectacular [planetary nebula](https://www.space.com/17715-planetary-nebula.html):
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/PlanetaryNebulae.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
The ashen carbon-oxygen core at the center will survive as a fossilised relic, dissipating energy just slowly enough that it will persist for another 13.8 billion years, the current age of our Universe, and see in many more millennia.
We can learn about this eventual fate of the sun, and its impact on Earth, by studying neighbouring White Dwarfs in the Milky Way. We'll look at one such candidate that DESI observed only recently!
```
# Load the DESI spectrum
andes = resource_filename('desihigh', 'student_andes')
zbest = fits.open(andes + '/zbest-mws-66003-20200315-wd.fits')[1]
coadd = fits.open(andes + '/coadd-mws-66003-20200315-wd.fits')
# Get its position on the sky:
ra, dec = float(zbest.data['TARGET_RA']), float(zbest.data['TARGET_DEC'])
```
Its position on the night sky lies just above [Ursa Major](https://en.wikipedia.org/wiki/Ursa_Major), or the Great Bear,
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/UrsaMajor.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
familiar in the night sky:
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/UrsaMajor2.png?raw=1" alt="Drawing" style="width: 800px;"/>
If you were to stare long enough, you'd see an almost imperceptible change in its apparent position as our viewpoint shifts with the Earth's orbit around the Sun. Remember, the dinosaurs roamed planet Earth when the Solar System was on the other side of the galaxy!
The motion of the Earth around the sun is just enough, given a precise enough instrument, to calculate the distance to our White Dwarf given simple trigonometry you've likely already seen:
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/PDistance.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
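The parallax trigonometry boils down to a one-line conversion: by construction of the parsec, the distance in parsecs is simply the reciprocal of the parallax angle in arcseconds. A minimal sketch (the function name and the milliarcsecond parallax value here are illustrative, not the catalogue entry):

```python
def parallax_to_distance_pc(parallax_arcsec):
    # d [pc] = 1 / p [arcsec]: a star whose apparent position shifts
    # by 1 arcsecond as the Earth moves 1 AU across its orbit lies
    # exactly 1 parsec away, by definition of the parsec.
    return 1.0 / parallax_arcsec

# Gaia quotes parallaxes in milliarcseconds (mas); a parallax of
# ~1.27 mas corresponds to roughly the ~785 pc distance used below.
print('{:.0f} pc'.format(parallax_to_distance_pc(1.27 / 1000.0)))
```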
The [GAIA](https://www.esa.int/Science_Exploration/Space_Science/Gaia_overview) space satellite was designed precisely to do this, and will eventually map one billion stars in the Milky Way, roughly one in every hundred, in this way.
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/Gaia.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
With this parallax, GAIA tells us the distance to our white dwarf:
```
# Distance calculated from Gaia parallax (Bailer-Jones et al. 2018).
# Photometric data and the [computed distance](https://ui.adsabs.harvard.edu/abs/2018AJ....156...58B/) can be found at the [Gaia Archive](https://gea.esac.esa.int/archive/)
dist_para = 784.665266 # parsecs, 1 parsec = 3.0857 x 10^16 m.
parsec = 3.085677581e16 # m
# AU: Astronomical Unit - distance between the Sun and the Earth.
au = 1.495978707e11 # m
print('GAIA parallax tells us that the distance to our White Dwarf is {:.0f} million x the distance from the Earth to the Sun.'.format(dist_para * parsec / au / 1.e6))
```
The GAIA camera is designed to measure the brightness of the white dwarf in three different parts of the visible spectrum, corresponding to the colors shown below. You'll recognise this as the same style plot we explored for Hydrogen Rydberg lines in the Intro.
```
# (Pivot) Wavelengths for the Gaia DR2 filters.
GAIA = {'G_WAVE': 6230.6, 'BP_WAVE': 5051.5, 'RP_WAVE': 7726.2}
for wave in GAIA.values():
# color = [r, g, b]
color = wavelength_to_rgb(wave / 10.)
pl.axvline(x=wave / 10., c=color)
pl.title('Wavelengths (and colors) at which GAIA measures the brightness of each star', pad=10.5, fontsize=10)
pl.xlabel('Vacuum wavelength [nanometers]')
pl.xlim(380., 780.)
for band in ['G', 'BP', 'RP']:
GAIA[band + '_MAG'] = zbest.data['GAIA_PHOT_{}_MEAN_MAG'.format(band)][0]
GAIA[band + '_FLUX'] = 10.**(-(GAIA[band + '_MAG'] + (25.7934 - 25.6884)) / 2.5) * 3631. / 3.34e4 / GAIA[band + '_WAVE']**2.
# Add in the mag. errors that DESI catalogues don't propagate.
GAIA['G_MAGERR'] = 0.0044
GAIA['BP_MAGERR'] = 0.0281
GAIA['RP_MAGERR'] = 0.0780
for key, value in GAIA.items():
print('{:10s} \t {:05.4f}'.format(key, value))
```
This combination, a measurement of distance (from parallax) and of apparent brightness (in a number of colors), is incredibly powerful, as together they tell us the intrinsic luminosity or brightness of the dwarf rather than how it appears to us, from which we can determine what physics could be determining how bright the white dwarf is.
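The two measurements combine through the distance modulus: subtracting $5\log_{10}(d/10\,\mathrm{pc})$ from an apparent magnitude gives the absolute (intrinsic) magnitude the star would have at the standard distance of 10 parsecs. A small sketch, with an illustrative apparent magnitude rather than the catalogue value:

```python
import numpy as np

def absolute_magnitude(apparent_mag, distance_pc):
    # Distance modulus: M = m - 5 * log10(d / 10 pc), so a star
    # already at 10 pc keeps M = m.
    return apparent_mag - 5.0 * np.log10(distance_pc / 10.0)

# Illustrative: an apparent G-band magnitude of 17.4 at the
# ~785 pc parallax distance implies the intrinsic brightness below.
print('M_G = {:.2f}'.format(absolute_magnitude(17.4, 785.0)))
```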
# DESI
By resolving the subtle variations in the amount of light with wavelength, DESI gives us a much better idea of the White Dwarf's composition and history from its entire spectrum, rather than from a few measurements at different colors:
```
# Get the wavelength and flux
wave = coadd[1].data['WAVELENGTH']
count = coadd[1].data['TARGET35191335094848528']
# Plotting the DESI spectrum
pl.figure(figsize=(15, 10))
pl.plot(wave, count)
pl.grid()
pl.xlabel('Wavelength $[\AA]$')
pl.ylim(ymin=0.)
pl.title('TARGET35191335094848528')
```
Astronomers have spent a long time studying stars, classifying them according to different types - not least [Annie Jump Cannon](https://www.womenshistory.org/education-resources/biographies/annie-jump-cannon),
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/anniecannon.jpg?raw=1" alt="Drawing" style="width: 800px;"/>
whose work has left us with the ability to predict the spectrum of a star given its temperature, its mass, and little $g$, the acceleration due to gravity at its surface. Given 'standard' stars, those with external distance constraints, we can also determine how intrinsically bright a star with a given spectrum is. Let's grab these models:
```
# White Dwarf model spectra [Levenhagen 2017](https://ui.adsabs.harvard.edu/abs/2017ApJS..231....1L)
wdspec = resource_filename('desihigh', 'dat/WDspec')
spec_da_list = os.listdir(wdspec)
model_flux_spec_da = []
model_wave_spec_da = []
T_spec_da = []
logg_spec_da = []
# Loop over files in the directory and collect into a list.
for filename in spec_da_list:
if filename[-4:] != '.npz':
continue
model = np.load(wdspec + '/' + filename)['arr_0']
model_flux_spec_da.append(model[:,1])
model_wave_spec_da.append(model[:,0])
T, logg = filename.split('.')[0].split('t0')[-1].split('g')
T_spec_da.append(float(T) * 1000.)
logg_spec_da.append(float(logg[:-1]) / 10.)
print('Collected {:d} model spectra.'.format(len(spec_da_list)))
# We'll select every 10th model white dwarf spectra to plot.
nth = 10
for model_wave, model_flux, model_temp in zip(model_wave_spec_da[::nth], model_flux_spec_da[::nth], T_spec_da[::nth]):
pl.plot(model_wave, model_flux / model_flux[-1], label=r'$T = {:.1e}$'.format(model_temp))
# Other commands to set the plot
pl.xlim(3000., 10000.)
# pl.ylim(ymin=1., ymax=3.6)
pl.legend(frameon=False, ncol=2)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Normalised flux')
```
Firstly, these white dwarfs are hot! At 240,000 Kelvin, you shouldn't touch one. We can also see that the hottest white dwarf is brightest at short wavelengths and will therefore appear blue, in exactly the same way that the bluest part of a flame is the hottest:
<img src="https://github.com/AlbertoRosado1/desihigh/blob/main/desihigh/images/bunsen.jpg?raw=1" alt="Drawing" style="width: 280px;"/>
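The color-temperature link can be made quantitative with Wien's displacement law, which locates the peak wavelength of a blackbody spectrum at $\lambda_{\rm peak} = b/T$. A quick sketch (the function name is our own):

```python
def wien_peak_nm(T_kelvin):
    # Wien's displacement law: lambda_peak = b / T, with
    # b = 2.898e-3 m*K; the result is converted to nanometers.
    b = 2.897771955e-3
    return b / T_kelvin * 1e9

# ~240,000 K peaks deep in the ultraviolet; ~26,000 K is still far
# blueward of the visible band; ~5,800 K (the Sun) peaks near green.
for T in (240000., 26000., 5800.):
    print('T = {:>7.0f} K -> peak at {:.0f} nm'.format(T, wien_peak_nm(T)))
```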
So now we have everything we need to find the temperature of the White Dwarf that DESI observed. As in the Intro, we simply find the model that looks most like the data.
```
# wavelength range to be fitted
wave_min = 3750.
wave_max = 5200.
sq_diff = []
# Masking the range to be fitted
fitted_range = (wave > wave_min) & (wave < wave_max)
fitted_wave = wave[fitted_range]
for model_wave, model_flux in zip(model_wave_spec_da, model_flux_spec_da):
# Resample the model resolution to match the observed spectrum
model_flux_resampled = trapz_rebin(model_wave, model_flux, fitted_wave)
# Compute the sum of the squared difference of the individually normalised model and observed spectra
sq_diff.append(np.sum((model_flux_resampled / np.median(model_flux_resampled) - count[fitted_range] / np.median(count[fitted_range]))**2.))
# Unit-weighted least-squares best-fit surface gravity and temperature from the DESI spectrum
arg_min = np.argmin(sq_diff)
T_desi = T_spec_da[arg_min]
logg_desi = logg_spec_da[arg_min]
# Plot the best fit only
fitted_range = (model_wave_spec_da[arg_min] > wave_min) & (model_wave_spec_da[arg_min] < wave_max)
fitted_range_data = (wave > wave_min) & (wave < wave_max)
pl.figure(figsize=(15, 10))
pl.plot(wave[fitted_range_data], count[fitted_range_data] / np.median(count[fitted_range_data]), label='DESI spectrum')
pl.plot(model_wave_spec_da[arg_min][fitted_range], model_flux_spec_da[arg_min][fitted_range] / np.median(model_flux_spec_da[arg_min][fitted_range]), label='Best-fit model')
pl.grid()
pl.xlim(wave_min, wave_max)
pl.xlabel('Wavelength [Angstroms]')
pl.ylabel('Normalised Flux')
pl.legend(frameon=False)
pl.title('DESI White Dwarf: Temperature = ' + str(T_desi) + ' K; $\log_{10}$(g) = ' + str(logg_desi))
```
So our white dwarf is a (relatively) cool 26,000 Kelvin, while its surface gravity would be unbearable. If you remember, the gravitational acceleration at the surface of a body of mass $M$ and radius $r$ is $g = \frac{G \cdot M}{r^2}$, roughly a measure of how dense an object is. Let's see what this looks like for a few well-known bodies:
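As a quick sanity check of $g = G M / r^2$, here are rough numbers for a few bodies (the white-dwarf mass and radius are illustrative round numbers, not this star's fitted values):

```python
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg, radius_m):
    # g = G * M / r^2
    return G * mass_kg / radius_m**2

g_earth = surface_gravity(5.972e24, 6.371e6)  # ~9.8 m/s^2
g_sun = surface_gravity(1.989e30, 6.957e8)    # ~274 m/s^2
# Illustrative white dwarf: ~0.6 solar masses in an Earth-sized radius.
g_wd = surface_gravity(0.6 * 1.989e30, 6.371e6)
print('Earth: {:.1f}, Sun: {:.0f}, white dwarf: {:.1e} (all m/s^2)'.format(g_earth, g_sun, g_wd))
```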
```
logg = pd.read_csv(resource_filename('desihigh', 'dat/logg.txt'), sep='\s+', comment='#', names=['Body', 'Surface Gravity [g]'])
logg = logg.sort_values('Surface Gravity [g]')
logg
fig, ax = plt.subplots()
pl.plot(np.arange(0, len(logg), 1), logg['Surface Gravity [g]'], marker='.', c='k')
plt.xticks(np.arange(len(logg)))
ax.set_xticklabels(logg['Body'], rotation='vertical')
ax.set_ylabel('Surface gravity [g]')
```
So the acceleration on Jupiter is a few times higher than that on Earth, while on the Sun it'd be about 30 times higher. For comparison, the force you feel during takeoff of a flight is roughly 30% larger than the acceleration due to gravity on Earth. For our DESI white dwarf, the acceleration due to gravity at the surface is:
```
logg = 7.6
g = 10.**logg # cm / s^2.
g /= 100. # m / s^2
g /= 9.81 # Relative to that on Earth, i.e. [g].
g
```
times higher than that on Earth! In fact, if it weren't for strange restrictions on what electrons can and cannot do (as determined by Quantum Mechanics), the White Dwarf would be so dense it would collapse entirely. Go figure!
Now it's your turn. Can you find a class of object even more dense than a White Dwarf? What is the acceleration due to gravity on its surface?
Harder(!) You may be one of the first to see this White Dwarf 'up close'! What else can you find out about it? Here's something to get you started ...
```
model_colors = pd.read_csv(resource_filename('desihigh', 'dat/WDphot/Table_DA.txt'), sep='\s+', comment='#')
model_colors = model_colors[['Teff', 'logg', 'Age', 'G', 'G_BP', 'G_RP']]
model_colors
```
The above table shows the model prediction for colors of the white dwarf observed by GAIA, if it had the temperature, age and surface gravity (logg) shown.
The GAIA colors observed for the DESI white dwarf are:
```
GAIA['G_MAG'], GAIA['BP_MAG'], GAIA['RP_MAG']
GAIA['G_MAGERR'], GAIA['BP_MAGERR'], GAIA['RP_MAGERR']
```
Can you figure out how old our White Dwarf is? What does that say about the age of our Universe? Does it match the estimates of other [experiments](https://www.space.com/24054-how-old-is-the-universe.html#:~:text=In%202013%2C%20Planck%20measured%20the,universe%20at%2013.82%20billion%20years.)?
If you get stuck, or need another hint, leave us a [message](https://www.github.com/michaelJwilson/DESI-HighSchool/issues/new)!
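One hedged way to attack the age question: convert the observed GAIA magnitudes to absolute magnitudes using the parallax distance, then pick the row of the model table whose colors match best, the same least-squares idea used for the spectrum fit. The tiny grid and "observed" values below are made up purely to illustrate the matching step, not real model or catalogue numbers:

```python
import numpy as np
import pandas as pd

# Hypothetical model grid with the same columns as Table_DA.txt;
# every number here is invented for illustration only.
model_colors = pd.DataFrame({
    'Teff': [20000., 26000., 30000.],
    'Age':  [8.0e7, 4.0e7, 2.0e7],
    'G':    [10.9, 10.5, 10.2],
    'G_BP': [10.8, 10.4, 10.0],
    'G_RP': [11.1, 10.8, 10.6],
})

# Invented "observed" absolute magnitudes to match against.
obs = {'G': 10.45, 'G_BP': 10.35, 'G_RP': 10.75}

# Pick the grid row with the smallest summed squared magnitude difference.
resid = sum((model_colors[band] - mag)**2 for band, mag in obs.items())
best = model_colors.iloc[np.argmin(resid.values)]
print('Best match: Teff = {:.0f} K, Age = {:.1e} yr'.format(best['Teff'], best['Age']))
```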
[View in Colaboratory](https://colab.research.google.com/github/JacksonIsaac/kadenze-deeplearning-creative-applications/blob/master/Kadenze_Session_2.ipynb)
```
#%pylab
%matplotlib inline
import os
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import matplotlib.cm as cmx
plt.style.use('ggplot')
fig = plt.figure(figsize=(10,6))
ax = fig.gca()
x = np.linspace(-1, 1, 200)
hz = 10
cost = np.sin(hz*x) * np.exp(-x)
ax.plot(x, cost)
ax.set_ylabel('Cost')
ax.set_xlabel('Parameter')
gradient = np.diff(cost)
gradient
fig = plt.figure(figsize=(10, 6))
ax = fig.gca()
x = np.linspace(-1, 1, 200)
hz = 10
cost = np.sin(hz*x)*np.exp(-x)
ax.plot(x, cost)
ax.set_ylabel('Cost')
ax.set_xlabel('Some Parameter')
n_iterations = 500
cmap = plt.get_cmap('coolwarm')
c_norm = colors.Normalize(vmin=0, vmax=n_iterations)
scalar_map = cmx.ScalarMappable(norm=c_norm, cmap=cmap)
init_p = 120 #np.random.randint(len(x)*0.2, len(x)*0.8)
learning_rate = 1.0
for iter_i in range(n_iterations):
#print(init_p)
init_p -= learning_rate * gradient[int(init_p)]
#print(init_p)
ax.plot(x[int(init_p)], cost[int(init_p)], 'ro', alpha=(iter_i + 1)/n_iterations, color=scalar_map.to_rgba(iter_i))
from mpl_toolkits.mplot3d import axes3d
from matplotlib import cm
fig = plt.figure(figsize=(10,6))
ax = fig.gca(projection='3d')
x, y = np.mgrid[-1:1:0.02, -1:1:0.02]
X, Y, Z = x, y, np.sin(hz*x) * np.exp(-x) * np.cos(hz*y) * np.exp(-y)
ax.plot_surface(X, Y, Z, rstride=2, cstride=2, alpha=0.75, cmap='jet', shade=False)
n_obs = 1000
x = np.linspace(-3, 3, n_obs)
y = np.sin(x) + np.random.uniform(-0.5, 0.5, n_obs) # noisy sine observations to fit
plt.scatter(x, y, alpha=0.15, marker='+')
X = tf.placeholder(tf.float32, name='X')
Y = tf.placeholder(tf.float32, name='Y')
sess = tf.InteractiveSession()
n = tf.random_normal([1000]).eval()
plt.hist(n)
n = tf.random_normal([1000], stddev=0.1).eval()
plt.hist(n)
W = tf.Variable(tf.random_normal([1], dtype=tf.float32, stddev=0.1), name='weight')
B = tf.Variable(tf.constant([1], dtype=tf.float32), name='bias')
Y_pred = X * W + B
def distance(p1, p2):
return tf.abs(p1 - p2)
cost = distance(Y_pred, tf.sin(X))
cost = tf.reduce_mean(distance(Y_pred, Y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
n_iterations = 500
fig, ax = plt.subplots(1, 1)
ax.scatter(x, y, alpha=0.15, marker='+')
idxs = np.arange(100)
batch_size = 10
n_batches = len(idxs) // batch_size
for batch_i in range(n_batches):
print(idxs[batch_i * batch_size : (batch_i + 1) * batch_size])
rand_idxs = np.random.permutation(idxs)
for batch_i in range(n_batches):
print(rand_idxs[batch_i * batch_size : (batch_i + 1) * batch_size])
from skimage.data import astronaut
from scipy.misc import imresize
img = imresize(astronaut(), (64, 64))
plt.imshow(img)
xs = []
ys = []
for row_i in range(img.shape[0]):
for col_i in range(img.shape[1]):
xs.append([row_i, col_i])
ys.append(img[row_i, col_i])
xs = np.array(xs)
ys = np.array(ys)
xs = (xs - np.mean(xs)) / np.std(xs)
print(xs.shape)
print(ys.shape)
plt.imshow(ys.reshape(img.shape))
plt.imshow(xs.reshape(img.shape))
def linear(X, n_input, n_output, activation=None, scope=None):
with tf.variable_scope(scope or "linear"):
W = tf.get_variable(
name='W',
shape=[n_input, n_output],
initializer=tf.random_normal_initializer(mean=0.0, stddev=0.1))
b = tf.get_variable(
name='b',
shape=[n_output],
initializer=tf.constant_initializer())
h = tf.matmul(X, W) + b
if activation is not None:
h = activation(h)
return h
X = tf.placeholder(tf.float32, shape=[None, 2], name='X')
Y = tf.placeholder(tf.float32, shape=[None, 3], name='Y')
n_neurons = [2, 64, 64, 64, 64, 64, 3]
current_input = X
for layer_i in range(1, len(n_neurons)):
current_input = linear(
X=current_input,
n_input = n_neurons[layer_i - 1],
n_output = n_neurons[layer_i],
activation = tf.nn.relu if (layer_i + 1) < len(n_neurons) else None,
scope = 'layer_' + str(layer_i)
)
Y_pred = current_input
cost = tf.reduce_mean(
tf.reduce_sum(distance(Y_pred, Y), 1))
optimizer = tf.train.AdamOptimizer(0.001).minimize(cost)
n_iterations = 500
batch_size = 50
with tf.Session() as sess:
sess.run(tf.initialize_all_variables())
prev_cost = 0.0
for it_i in range(n_iterations):
idxs = np.random.permutation(range(len(xs)))
n_batches = len(idxs) // batch_size
for batch_i in range(n_batches):
idxs_i = idxs[batch_i * batch_size : (batch_i + 1) * batch_size]
sess.run(optimizer, feed_dict={X: xs, Y: ys})
training_cost = sess.run(cost, feed_dict={X: xs, Y: ys})
print(it_i, training_cost)
if (it_i + 1) % 20 == 0:
ys_pred = Y_pred.eval(feed_dict={X: xs}, session=sess)
fig, ax = plt.subplots(1, 1)
imp = np.clip(ys_pred.reshape(img.shape), 0, 255).astype(np.uint8)
plt.imshow(imp) # show the network's current reconstruction
fig.canvas.draw()
```
**Chapter 3 – Classification**
_This notebook contains all the sample code and solutions to the exercises in chapter 3._
# Setup
First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure matplotlib plots figures inline, and prepare a function to save the figures:
```
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import numpy.random as rnd
import os
# to make this notebook's output stable across runs
rnd.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 12
plt.rcParams['ytick.labelsize'] = 12
# Where to save the figures
PROJECT_ROOT_DIR = "."
CHAPTER_ID = "classification"
def save_fig(fig_id, tight_layout=True):
path = os.path.join(PROJECT_ROOT_DIR, "images", CHAPTER_ID, fig_id + ".png")
print("Saving figure", fig_id)
if tight_layout:
plt.tight_layout()
plt.savefig(path, format='png', dpi=300)
```
# MNIST
```
from shutil import copyfileobj
from six.moves import urllib
from sklearn.datasets.base import get_data_home
import os
def fetch_mnist(data_home=None):
mnist_alternative_url = "https://github.com/amplab/datascience-sp14/raw/master/lab7/mldata/mnist-original.mat"
data_home = get_data_home(data_home=data_home)
data_home = os.path.join(data_home, 'mldata')
if not os.path.exists(data_home):
os.makedirs(data_home)
mnist_save_path = os.path.join(data_home, "mnist-original.mat")
if not os.path.exists(mnist_save_path):
mnist_url = urllib.request.urlopen(mnist_alternative_url)
with open(mnist_save_path, "wb") as matlab_file:
copyfileobj(mnist_url, matlab_file)
fetch_mnist()
from sklearn.datasets import fetch_mldata
mnist = fetch_mldata("MNIST original")
# from six.moves import urllib
# from sklearn.datasets import fetch_mldata
# try:
# mnist = fetch_mldata('MNIST original')
# except urllib.error.HTTPError as ex:
# print("Could not download MNIST data from mldata.org, trying alternative...")
# # Alternative method to load MNIST, if mldata.org is down
# from scipy.io import loadmat
# mnist_alternative_url = "https://github.com/amplab/datascience-sp14/raw/master/lab7/mldata/mnist-original.mat"
# mnist_path = "./mnist-original.mat"
# response = urllib.request.urlopen(mnist_alternative_url)
# with open(mnist_path, "wb") as f:
# content = response.read()
# f.write(content)
# mnist_raw = loadmat(mnist_path)
# mnist = {
# "data": mnist_raw["data"].T,
# "target": mnist_raw["label"][0],
# "COL_NAMES": ["label", "data"],
# "DESCR": "mldata.org dataset: mnist-original",
# }
# print("Success!")
mnist
X, y = mnist["data"], mnist["target"]
X.shape
y.shape
28*28
def plot_digit(data):
image = data.reshape(28, 28)
plt.imshow(image, cmap = matplotlib.cm.binary,
interpolation="nearest")
plt.axis("off")
some_digit_index = 36000
some_digit = X[some_digit_index]
plot_digit(some_digit)
save_fig("some_digit_plot")
plt.show()
# EXTRA
def plot_digits(instances, images_per_row=10, **options):
size = 28
images_per_row = min(len(instances), images_per_row)
images = [instance.reshape(size,size) for instance in instances]
n_rows = (len(instances) - 1) // images_per_row + 1
row_images = []
n_empty = n_rows * images_per_row - len(instances)
images.append(np.zeros((size, size * n_empty)))
for row in range(n_rows):
rimages = images[row * images_per_row : (row + 1) * images_per_row]
row_images.append(np.concatenate(rimages, axis=1))
image = np.concatenate(row_images, axis=0)
plt.imshow(image, cmap = matplotlib.cm.binary, **options)
plt.axis("off")
plt.figure(figsize=(9,9))
example_images = np.r_[X[:12000:600], X[13000:30600:600], X[30600:60000:590]]
plot_digits(example_images, images_per_row=10)
save_fig("more_digits_plot")
plt.show()
y[some_digit_index]
X_train, X_test, y_train, y_test = X[:60000], X[60000:], y[:60000], y[60000:]
shuffle_index = rnd.permutation(60000)
X_train, y_train = X_train[shuffle_index], y_train[shuffle_index]
```
# Binary classifier
```
y_train_5 = (y_train == 5)
y_test_5 = (y_test == 5)
from sklearn.linear_model import SGDClassifier
sgd_clf = SGDClassifier(random_state=42)
sgd_clf.fit(X_train, y_train_5)
sgd_clf.predict([some_digit])
from sklearn.model_selection import cross_val_score
cross_val_score(sgd_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import StratifiedKFold
from sklearn.base import clone
skfolds = StratifiedKFold(n_splits=3, random_state=42)
for train_index, test_index in skfolds.split(X_train, y_train_5):
clone_clf = clone(sgd_clf)
X_train_folds = X_train[train_index]
y_train_folds = (y_train_5[train_index])
X_test_fold = X_train[test_index]
y_test_fold = (y_train_5[test_index])
clone_clf.fit(X_train_folds, y_train_folds)
y_pred = clone_clf.predict(X_test_fold)
n_correct = sum(y_pred == y_test_fold)
print(n_correct / len(y_pred))
from sklearn.base import BaseEstimator
class Never5Classifier(BaseEstimator):
def fit(self, X, y=None):
pass
def predict(self, X):
return np.zeros((len(X), 1), dtype=bool)
never_5_clf = Never5Classifier()
cross_val_score(never_5_clf, X_train, y_train_5, cv=3, scoring="accuracy")
from sklearn.model_selection import cross_val_predict
y_train_pred = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_train_5, y_train_pred)
from sklearn.metrics import precision_score, recall_score
precision_score(y_train_5, y_train_pred)
4344 / (4344 + 1307)
recall_score(y_train_5, y_train_pred)
4344 / (4344 + 1077)
from sklearn.metrics import f1_score
f1_score(y_train_5, y_train_pred)
4344 / (4344 + (1077 + 1307)/2)
y_scores = sgd_clf.decision_function([some_digit])
y_scores
threshold = 0
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
threshold = 200000
y_some_digit_pred = (y_scores > threshold)
y_some_digit_pred
y_scores = cross_val_predict(sgd_clf, X_train, y_train_5, cv=3, method="decision_function")
from sklearn.metrics import precision_recall_curve
precisions, recalls, thresholds = precision_recall_curve(y_train_5, y_scores)
def plot_precision_recall_vs_threshold(precisions, recalls, thresholds):
plt.plot(thresholds, precisions[:-1], "b--", label="Precision", linewidth=2)
plt.plot(thresholds, recalls[:-1], "g-", label="Recall", linewidth=2)
plt.xlabel("Threshold", fontsize=16)
plt.legend(loc="center left", fontsize=16)
plt.ylim([0, 1])
plt.figure(figsize=(8, 4))
plot_precision_recall_vs_threshold(precisions, recalls, thresholds)
plt.xlim([-700000, 700000])
save_fig("precision_recall_vs_threshold_plot")
plt.show()
(y_train_pred == (y_scores > 0)).all()
y_train_pred_90 = (y_scores > 70000)
precision_score(y_train_5, y_train_pred_90)
recall_score(y_train_5, y_train_pred_90)
def plot_precision_vs_recall(precisions, recalls):
plt.plot(recalls, precisions, "b-", linewidth=2)
plt.xlabel("Recall", fontsize=16)
plt.ylabel("Precision", fontsize=16)
plt.axis([0, 1, 0, 1])
plt.figure(figsize=(8, 6))
plot_precision_vs_recall(precisions, recalls)
save_fig("precision_vs_recall_plot")
plt.show()
```
# ROC curves
```
from sklearn.metrics import roc_curve
fpr, tpr, thresholds = roc_curve(y_train_5, y_scores)
def plot_roc_curve(fpr, tpr, **options):
plt.plot(fpr, tpr, linewidth=2, **options)
plt.plot([0, 1], [0, 1], 'k--')
plt.axis([0, 1, 0, 1])
plt.xlabel('False Positive Rate', fontsize=16)
plt.ylabel('True Positive Rate', fontsize=16)
plt.figure(figsize=(8, 6))
plot_roc_curve(fpr, tpr)
save_fig("roc_curve_plot")
plt.show()
from sklearn.metrics import roc_auc_score
roc_auc_score(y_train_5, y_scores)
from sklearn.ensemble import RandomForestClassifier
forest_clf = RandomForestClassifier(random_state=42)
y_probas_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_forest = y_probas_forest[:, 1] # score = proba of positive class
fpr_forest, tpr_forest, thresholds_forest = roc_curve(y_train_5, y_scores_forest)
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, "b:", linewidth=2, label="SGD")
plot_roc_curve(fpr_forest, tpr_forest, label="Random Forest")
plt.legend(loc="lower right", fontsize=16)
save_fig("roc_curve_comparison_plot")
plt.show()
roc_auc_score(y_train_5, y_scores_forest)
y_train_pred_forest = cross_val_predict(forest_clf, X_train, y_train_5, cv=3)
precision_score(y_train_5, y_train_pred_forest)
recall_score(y_train_5, y_train_pred_forest)
```
# Multiclass classification
```
sgd_clf.fit(X_train, y_train)
sgd_clf.predict([some_digit])
some_digit_scores = sgd_clf.decision_function([some_digit])
some_digit_scores
np.argmax(some_digit_scores)
sgd_clf.classes_
from sklearn.multiclass import OneVsOneClassifier
ovo_clf = OneVsOneClassifier(SGDClassifier(random_state=42))
ovo_clf.fit(X_train, y_train)
ovo_clf.predict([some_digit])
len(ovo_clf.estimators_)
forest_clf.fit(X_train, y_train)
forest_clf.predict([some_digit])
forest_clf.predict_proba([some_digit])
cross_val_score(sgd_clf, X_train, y_train, cv=3, scoring="accuracy")
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train.astype(np.float64))
cross_val_score(sgd_clf, X_train_scaled, y_train, cv=3, scoring="accuracy")
y_train_pred = cross_val_predict(sgd_clf, X_train_scaled, y_train, cv=3)
conf_mx = confusion_matrix(y_train, y_train_pred)
conf_mx
def plot_confusion_matrix(matrix):
"""If you prefer color and a colorbar"""
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111)
cax = ax.matshow(conf_mx)
fig.colorbar(cax)
plt.matshow(conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_plot", tight_layout=False)
plt.show()
row_sums = conf_mx.sum(axis=1, keepdims=True)
norm_conf_mx = conf_mx / row_sums
np.fill_diagonal(norm_conf_mx, 0)
plt.matshow(norm_conf_mx, cmap=plt.cm.gray)
save_fig("confusion_matrix_errors_plot", tight_layout=False)
plt.show()
cl_a, cl_b = 3, 5
X_aa = X_train[(y_train == cl_a) & (y_train_pred == cl_a)]
X_ab = X_train[(y_train == cl_a) & (y_train_pred == cl_b)]
X_ba = X_train[(y_train == cl_b) & (y_train_pred == cl_a)]
X_bb = X_train[(y_train == cl_b) & (y_train_pred == cl_b)]
plt.figure(figsize=(8,8))
plt.subplot(221)
plot_digits(X_aa[:25], images_per_row=5)
plt.subplot(222)
plot_digits(X_ab[:25], images_per_row=5)
plt.subplot(223)
plot_digits(X_ba[:25], images_per_row=5)
plt.subplot(224)
plot_digits(X_bb[:25], images_per_row=5)
save_fig("error_analysis_digits_plot")
plt.show()
```
# Multilabel classification
```
from sklearn.neighbors import KNeighborsClassifier
y_train_large = (y_train >= 7)
y_train_odd = (y_train % 2 == 1)
y_multilabel = np.c_[y_train_large, y_train_odd]
knn_clf = KNeighborsClassifier()
knn_clf.fit(X_train, y_multilabel)
knn_clf.predict([some_digit])
y_train_knn_pred = cross_val_predict(knn_clf, X_train, y_train, cv=3)
f1_score(y_train, y_train_knn_pred, average="macro")
```
# Multioutput classification
```
noise = rnd.randint(0, 100, (len(X_train), 784))
X_train_mod = X_train + noise
noise = rnd.randint(0, 100, (len(X_test), 784))
X_test_mod = X_test + noise
y_train_mod = X_train
y_test_mod = X_test
some_index = 5500
plt.subplot(121); plot_digit(X_test_mod[some_index])
plt.subplot(122); plot_digit(y_test_mod[some_index])
save_fig("noisy_digit_example_plot")
plt.show()
knn_clf.fit(X_train_mod, y_train_mod)
clean_digit = knn_clf.predict([X_test_mod[some_index]])
plot_digit(clean_digit)
save_fig("cleaned_digit_example_plot")
plt.show()
```
# Extra material
## Dummy (i.e. random) classifier
```
from sklearn.dummy import DummyClassifier
dmy_clf = DummyClassifier()
y_probas_dmy = cross_val_predict(dmy_clf, X_train, y_train_5, cv=3, method="predict_proba")
y_scores_dmy = y_probas_dmy[:, 1]
fprr, tprr, thresholdsr = roc_curve(y_train_5, y_scores_dmy)
plot_roc_curve(fprr, tprr)
```
## KNN classifier
```
from sklearn.neighbors import KNeighborsClassifier
knn_clf = KNeighborsClassifier(n_jobs=-1, weights='distance', n_neighbors=4)
knn_clf.fit(X_train, y_train)
y_knn_pred = knn_clf.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test, y_knn_pred)
from scipy.ndimage import shift  # scipy.ndimage.interpolation is deprecated
def shift_digit(digit_array, dx, dy, new=0):
    return shift(digit_array.reshape(28, 28), [dy, dx], cval=new).reshape(784)
plot_digit(shift_digit(some_digit, 5, 1, new=100))
X_train_expanded = [X_train]
y_train_expanded = [y_train]
for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
    shifted_images = np.apply_along_axis(shift_digit, axis=1, arr=X_train, dx=dx, dy=dy)
    X_train_expanded.append(shifted_images)
    y_train_expanded.append(y_train)
X_train_expanded = np.concatenate(X_train_expanded)
y_train_expanded = np.concatenate(y_train_expanded)
X_train_expanded.shape, y_train_expanded.shape
knn_clf.fit(X_train_expanded, y_train_expanded)
y_knn_expanded_pred = knn_clf.predict(X_test)
accuracy_score(y_test, y_knn_expanded_pred)
ambiguous_digit = X_test[2589]
knn_clf.predict_proba([ambiguous_digit])
plot_digit(ambiguous_digit)
```
# Exercise solutions
**Coming soon**
# Tutorial 7: Graph Neural Networks

**Filled notebook:**
[](https://github.com/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/GNN_overview.ipynb)
[](https://colab.research.google.com/github/phlippe/uvadlc_notebooks/blob/master/docs/tutorial_notebooks/tutorial7/GNN_overview.ipynb)
**Pre-trained models:**
[](https://github.com/phlippe/saved_models/tree/main/tutorial7)
[](https://drive.google.com/drive/folders/1DOTV_oYt5boa-MElbc2izat4VMSc1gob?usp=sharing)
**Recordings:**
[](https://youtu.be/fK7d56Ly9q8)
[](https://youtu.be/ZCNSUWe4a_Q)
In this tutorial, we will discuss the application of neural networks to graphs. Graph Neural Networks (GNNs) have recently gained increasing popularity in both applications and research, including domains such as social networks, knowledge graphs, recommender systems, and bioinformatics. While the theory and math behind GNNs might seem complicated at first, the implementation of these models is quite simple and helps in understanding the methodology. Therefore, we will discuss the implementation of basic network layers of a GNN, namely graph convolutions and attention layers. Finally, we will apply a GNN to node-level, edge-level, and graph-level tasks.
Below, we will start by importing our standard libraries. We will use PyTorch Lightning as in Tutorials 5 and 6.
```
## Standard libraries
import os
import json
import math
import numpy as np
import time
## Imports for plotting
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('svg', 'pdf') # For export
from matplotlib.colors import to_rgb
import matplotlib
matplotlib.rcParams['lines.linewidth'] = 2.0
import seaborn as sns
sns.reset_orig()
sns.set()
## Progress bar
from tqdm.notebook import tqdm
## PyTorch
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.utils.data as data
import torch.optim as optim
# Torchvision
import torchvision
from torchvision.datasets import CIFAR10
from torchvision import transforms
# PyTorch Lightning
try:
    import pytorch_lightning as pl
except ModuleNotFoundError: # Google Colab does not have PyTorch Lightning installed by default. Hence, we do it here if necessary
    !pip install --quiet "pytorch-lightning>=1.4"
    import pytorch_lightning as pl
from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
# Path to the folder where the datasets are/should be downloaded (e.g. CIFAR10)
DATASET_PATH = "../data"
# Path to the folder where the pretrained models are saved
CHECKPOINT_PATH = "../saved_models/tutorial7"
# Setting the seed
pl.seed_everything(42)
# Ensure that all operations are deterministic on GPU (if used) for reproducibility
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
device = torch.device("cuda:0") if torch.cuda.is_available() else torch.device("cpu")
print(device)
```
We also have a few pre-trained models we download below.
```
import urllib.request
from urllib.error import HTTPError
# Github URL where saved models are stored for this tutorial
base_url = "https://raw.githubusercontent.com/phlippe/saved_models/main/tutorial7/"
# Files to download
pretrained_files = ["NodeLevelMLP.ckpt", "NodeLevelGNN.ckpt", "GraphLevelGraphConv.ckpt"]
# Create checkpoint path if it doesn't exist yet
os.makedirs(CHECKPOINT_PATH, exist_ok=True)
# For each file, check whether it already exists. If not, try downloading it.
for file_name in pretrained_files:
file_path = os.path.join(CHECKPOINT_PATH, file_name)
if "/" in file_name:
os.makedirs(file_path.rsplit("/",1)[0], exist_ok=True)
if not os.path.isfile(file_path):
file_url = base_url + file_name
print(f"Downloading {file_url}...")
try:
urllib.request.urlretrieve(file_url, file_path)
except HTTPError as e:
print("Something went wrong. Please try to download the file from the GDrive folder, or contact the author with the full output including the following error:\n", e)
```
## Graph Neural Networks
### Graph representation
Before starting the discussion of specific neural network operations on graphs, we should consider how to represent a graph. Mathematically, a graph $\mathcal{G}$ is defined as a tuple of a set of nodes/vertices $V$, and a set of edges/links $E$: $\mathcal{G}=(V,E)$. Each edge is a pair of two vertices, and represents a connection between them. For instance, let's look at the following graph:
<center width="100%" style="padding:10px"><img src="example_graph.svg" width="250px"></center>
The vertices are $V=\{1,2,3,4\}$, and edges $E=\{(1,2), (2,3), (2,4), (3,4)\}$. Note that for simplicity, we assume the graph to be undirected and hence don't add mirrored pairs like $(2,1)$. In applications, vertices and edges can often have specific attributes, and edges can even be directed. The question is how we can represent this diversity in an efficient way for matrix operations. Usually, for the edges, we decide between two variants: an adjacency matrix, or a list of paired vertex indices.
The **adjacency matrix** $A$ is a square matrix whose elements indicate whether pairs of vertices are adjacent, i.e. connected, or not. In the simplest case, $A_{ij}$ is 1 if there is a connection from node $i$ to $j$, and otherwise 0. If we have edge attributes or different categories of edges in a graph, this information can be added to the matrix as well. For an undirected graph, keep in mind that $A$ is a symmetric matrix ($A_{ij}=A_{ji}$). For the example graph above, we have the following adjacency matrix:
$$
A = \begin{bmatrix}
0 & 1 & 0 & 0\\
1 & 0 & 1 & 1\\
0 & 1 & 0 & 1\\
0 & 1 & 1 & 0
\end{bmatrix}
$$
While expressing a graph as a list of edges is more efficient in terms of memory and (possibly) computation, using an adjacency matrix is more intuitive and simpler to implement. In our implementations below, we will rely on the adjacency matrix to keep the code simple. However, common libraries use edge lists, which we will discuss in more detail later.
Alternatively, we could also use the list of edges to define a sparse adjacency matrix, with which we can work as if it were a dense matrix, but which allows more memory-efficient operations. PyTorch supports this with the sub-package `torch.sparse` ([documentation](https://pytorch.org/docs/stable/sparse.html)), which is, however, still in a beta stage (the API might change in the future).
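As a concrete sketch of the two conventions (not part of the original notebook), the example graph above can be converted from an edge list to a dense adjacency matrix and back with a few lines of NumPy:

```
import numpy as np

# Edge list of the example graph (0-indexed; the text uses 1-indexed node labels)
edges = [(0, 1), (1, 2), (1, 3), (2, 3)]
num_nodes = 4

# Dense adjacency matrix; mirror each pair since the graph is undirected
A = np.zeros((num_nodes, num_nodes), dtype=np.float32)
for i, j in edges:
    A[i, j] = 1.0
    A[j, i] = 1.0

# Recover the edge list from the upper triangle (avoids duplicate mirrored pairs)
recovered = [(int(i), int(j)) for i, j in zip(*np.triu(A).nonzero())]
print(A)
print(recovered)  # [(0, 1), (1, 2), (1, 3), (2, 3)]
```

The resulting matrix is exactly the symmetric $A$ shown above; the edge list needs only 4 pairs instead of 16 matrix entries.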
### Graph Convolutions
Graph Convolutional Networks were introduced by [Kipf et al.](https://openreview.net/pdf?id=SJU4ayYgl) in 2016 at the University of Amsterdam. Thomas Kipf also wrote a great [blog post](https://tkipf.github.io/graph-convolutional-networks/) about this topic, which is recommended if you want to read about GCNs from a different perspective. GCNs are similar to convolutions on images in the sense that the "filter" parameters are typically shared over all locations in the graph. At the same time, GCNs rely on message passing methods, which means that vertices exchange information with their neighbors, and send "messages" to each other. Before looking at the math, we can try to visually understand how GCNs work. The first step is that each node creates a feature vector that represents the message it wants to send to all its neighbors. In the second step, the messages are sent to the neighbors, so that a node receives one message per adjacent node. Below we have visualized the two steps for our example graph.
<center width="100%" style="padding:10px"><img src="graph_message_passing.svg" width="700px"></center>
If we want to formulate this in more mathematical terms, we first need to decide how to combine all the messages a node receives. As the number of messages varies across nodes, we need an operation that works for any number of them. Hence, the usual way to go is to sum or to take the mean. Given the previous features of nodes $H^{(l)}$, the GCN layer is defined as follows:
$$H^{(l+1)} = \sigma\left(\hat{D}^{-1/2}\hat{A}\hat{D}^{-1/2}H^{(l)}W^{(l)}\right)$$
$W^{(l)}$ is the weight matrix with which we transform the input features into messages ($H^{(l)}W^{(l)}$). To the adjacency matrix $A$ we add the identity matrix so that each node also sends its own message to itself: $\hat{A}=A+I$. Finally, to take the average instead of the sum, we calculate the matrix $\hat{D}$, a diagonal matrix with $\hat{D}_{ii}$ denoting the number of neighbors node $i$ has. $\sigma$ represents an arbitrary activation function, not necessarily the sigmoid (usually a ReLU-based activation function is used in GNNs).
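As a small numerical sketch (not part of the original notebook), we can evaluate this propagation rule for the example graph directly with NumPy, using the identity as weight matrix so the normalization is easy to inspect:

```
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=np.float32)
A_hat = A + np.eye(4, dtype=np.float32)  # add self-connections
deg = A_hat.sum(axis=1)                  # node degrees including the self-loop
D_inv_sqrt = np.diag(deg ** -0.5)        # D_hat^{-1/2}

H = np.arange(8, dtype=np.float32).reshape(4, 2)  # node features H^{(l)}
W = np.eye(2, dtype=np.float32)                   # identity weights for inspection

# Pre-activation output of one GCN layer: D^{-1/2} A_hat D^{-1/2} H W
H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
print(H_next)
```

Note that the symmetric normalization weighs each message by $1/\sqrt{d_i d_j}$ rather than a plain mean over neighbors.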
When implementing the GCN layer in PyTorch, we can take advantage of the flexible operations on tensors. Instead of defining a matrix $\hat{D}$, we can simply divide the summed messages by the number of neighbors afterward. Additionally, we replace the weight matrix with a linear layer, which additionally allows us to add a bias. Written as a PyTorch module, the GCN layer is defined as follows:
```
class GCNLayer(nn.Module):

    def __init__(self, c_in, c_out):
        super().__init__()
        self.projection = nn.Linear(c_in, c_out)

    def forward(self, node_feats, adj_matrix):
        """
        Inputs:
            node_feats - Tensor with node features of shape [batch_size, num_nodes, c_in]
            adj_matrix - Batch of adjacency matrices of the graph. If there is an edge from i to j, adj_matrix[b,i,j]=1 else 0.
                         Supports directed edges by non-symmetric matrices. Assumes to already have added the identity connections.
                         Shape: [batch_size, num_nodes, num_nodes]
        """
        # Num neighbours = number of incoming edges
        num_neighbours = adj_matrix.sum(dim=-1, keepdims=True)
        node_feats = self.projection(node_feats)
        node_feats = torch.bmm(adj_matrix, node_feats)
        node_feats = node_feats / num_neighbours
        return node_feats
```
To further understand the GCN layer, we can apply it to our example graph above. First, let's specify some node features and the adjacency matrix with added self-connections:
```
node_feats = torch.arange(8, dtype=torch.float32).view(1, 4, 2)
adj_matrix = torch.Tensor([[[1, 1, 0, 0],
                            [1, 1, 1, 1],
                            [0, 1, 1, 1],
                            [0, 1, 1, 1]]])
print("Node features:\n", node_feats)
print("\nAdjacency matrix:\n", adj_matrix)
```
Next, let's apply a GCN layer to it. For simplicity, we initialize the linear weight matrix as an identity matrix so that the input features are equal to the messages. This makes it easier for us to verify the message passing operation.
```
layer = GCNLayer(c_in=2, c_out=2)
layer.projection.weight.data = torch.Tensor([[1., 0.], [0., 1.]])
layer.projection.bias.data = torch.Tensor([0., 0.])
with torch.no_grad():
    out_feats = layer(node_feats, adj_matrix)
print("Adjacency matrix", adj_matrix)
print("Input features", node_feats)
print("Output features", out_feats)
```
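To double-check the layer output by hand, the mean aggregation can be reproduced with plain NumPy (a sanity check under the same identity-weight setup, not part of the model):

```
import numpy as np

feats = np.arange(8, dtype=np.float32).reshape(4, 2)
adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 1],
                [0, 1, 1, 1],
                [0, 1, 1, 1]], dtype=np.float32)

# Each output row is the mean over the node's neighborhood (including itself)
expected = adj @ feats / adj.sum(axis=1, keepdims=True)
print(expected)  # [[1. 2.] [3. 4.] [4. 5.] [4. 5.]]
```

The first row averages nodes 1 and 2, and the last two rows are identical because nodes 3 and 4 share the same neighborhood.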
As we can see, the first node's output values are the average of itself and the second node. Similarly, we can verify all other nodes. However, in a GNN, we would also want to allow feature exchange between nodes beyond their direct neighbors. This can be achieved by applying multiple GCN layers, which gives us the final layout of a GNN. The GNN can be built up by a sequence of GCN layers and non-linearities such as ReLU. For a visualization, see below (figure credit - [Thomas Kipf, 2016](https://tkipf.github.io/graph-convolutional-networks/)).
<center width="100%" style="padding: 10px"><img src="gcn_network.png" width="600px"></center>
However, one issue we can see from looking at the example above is that the output features for nodes 3 and 4 are the same because they have the same adjacent nodes (including itself). Therefore, GCN layers can make the network forget node-specific information if we just take a mean over all messages. Multiple possible improvements have been proposed. While the simplest option might be using residual connections, the more common approach is to either weigh the self-connections higher or define a separate weight matrix for the self-connections. Alternatively, we can re-visit a concept from the last tutorial: attention.
### Graph Attention
If you remember from the last tutorial, attention describes a weighted average of multiple elements, with the weights dynamically computed based on an input query and the elements' keys (if you haven't read Tutorial 6 yet, it is recommended to at least go through the very first section called [What is Attention?](https://uvadlc-notebooks.readthedocs.io/en/latest/tutorial_notebooks/tutorial6/Transformers_and_MHAttention.html#What-is-Attention?)). This concept can be similarly applied to graphs; one such architecture is the Graph Attention Network (GAT, proposed by [Velickovic et al., 2017](https://arxiv.org/abs/1710.10903)). Similarly to the GCN, the graph attention layer creates a message for each node using a linear layer/weight matrix. For the attention part, it uses the message from the node itself as a query, and the messages to average as both keys and values (note that this also includes the message to itself). The score function $f_{attn}$ is implemented as a one-layer MLP which maps the query and key to a single value. The MLP looks as follows (figure credit - [Velickovic et al.](https://arxiv.org/abs/1710.10903)):
<center width="100%" style="padding:10px"><img src="graph_attention_MLP.svg" width="250px"></center>
$h_i$ and $h_j$ are the original features of nodes $i$ and $j$ respectively, and $\mathbf{W}h_i$ and $\mathbf{W}h_j$ the corresponding messages of the layer, with $\mathbf{W}$ as the weight matrix. $\mathbf{a}$ is the weight matrix of the MLP, which has the shape $[1,2\times d_{\text{message}}]$, and $\alpha_{ij}$ the final attention weight from node $i$ to $j$. The calculation can be described as follows:
$$\alpha_{ij} = \frac{\exp\left(\text{LeakyReLU}\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_j\right]\right)\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\text{LeakyReLU}\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_k\right]\right)\right)}$$
The operator $||$ represents the concatenation, and $\mathcal{N}_i$ the indices of the neighbors of node $i$. Note that in contrast to usual practice, we apply a non-linearity (here LeakyReLU) before the softmax over elements. Although it seems like a minor change at first, it is crucial for the attention to depend on the original input. Specifically, let's remove the non-linearity for a second, and try to simplify the expression:
$$
\begin{split}
\alpha_{ij} & = \frac{\exp\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_j\right]\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}\left[\mathbf{W}h_i||\mathbf{W}h_k\right]\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i+\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i+\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i\right)\cdot\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,:d/2}\mathbf{W}h_i\right)\cdot\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\[5pt]
& = \frac{\exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_j\right)}{\sum_{k\in\mathcal{N}_i} \exp\left(\mathbf{a}_{:,d/2:}\mathbf{W}h_k\right)}\\
\end{split}
$$
We can see that without the non-linearity, the attention term with $h_i$ actually cancels itself out, resulting in the attention being independent of the node itself. Hence, we would have the same issue as the GCN of creating the same output features for nodes with the same neighbors. This is why the LeakyReLU is crucial and adds some dependency on $h_i$ to the attention.
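The cancellation can also be checked numerically: without the LeakyReLU, the softmax weights over the neighbors do not change when $h_i$ changes. The following NumPy sketch (with arbitrary illustrative values for the two halves of $\mathbf{a}$ and for the messages) is not part of the original notebook:

```
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

a_left, a_right = np.array([0.3, -0.2]), np.array([0.1, 0.4])  # a split into halves
h_neighbors = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])   # messages W*h_k

def attn(h_i, nonlinearity=lambda x: x):
    logits = np.array([nonlinearity(a_left @ h_i + a_right @ h_k)
                       for h_k in h_neighbors])
    return softmax(logits)

# Without a non-linearity, the h_i term only shifts all logits by a constant,
# which the softmax ignores: the weights are identical for different h_i
print(attn(np.array([1.0, 0.0])))
print(attn(np.array([0.0, 5.0])))  # same weights as above

# With LeakyReLU, the weights depend on h_i again
leaky = lambda x: np.where(x > 0, x, 0.2 * x)
print(attn(np.array([1.0, 0.0]), leaky))
print(attn(np.array([0.0, 5.0]), leaky))  # now different
```

The first two distributions coincide exactly, while the LeakyReLU versions differ, which is precisely the dependency on $h_i$ the non-linearity restores.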
Once we obtain all attention factors, we can calculate the output features for each node by performing the weighted average:
$$h_i'=\sigma\left(\sum_{j\in\mathcal{N}_i}\alpha_{ij}\mathbf{W}h_j\right)$$
$\sigma$ is yet another non-linearity, as in the GCN layer. Visually, we can represent the full message passing in an attention layer as follows (figure credit - [Velickovic et al.](https://arxiv.org/abs/1710.10903)):
<center width="100%"><img src="graph_attention.jpeg" width="400px"></center>
To increase the expressiveness of the graph attention network, [Velickovic et al.](https://arxiv.org/abs/1710.10903) proposed to extend it to multiple heads similar to the Multi-Head Attention block in Transformers. This results in $N$ attention layers being applied in parallel. In the image above, it is visualized as three different colors of arrows (green, blue, and purple) that are afterward concatenated. The average is only applied for the very final prediction layer in a network.
After having discussed the graph attention layer in detail, we can implement it below:
```
class GATLayer(nn.Module):

    def __init__(self, c_in, c_out, num_heads=1, concat_heads=True, alpha=0.2):
        """
        Inputs:
            c_in - Dimensionality of input features
            c_out - Dimensionality of output features
            num_heads - Number of heads, i.e. attention mechanisms to apply in parallel. The
                        output features are equally split up over the heads if concat_heads=True.
            concat_heads - If True, the output of the different heads is concatenated instead of averaged.
            alpha - Negative slope of the LeakyReLU activation.
        """
        super().__init__()
        self.num_heads = num_heads
        self.concat_heads = concat_heads
        if self.concat_heads:
            assert c_out % num_heads == 0, "Number of output features must be a multiple of the count of heads."
            c_out = c_out // num_heads

        # Sub-modules and parameters needed in the layer
        self.projection = nn.Linear(c_in, c_out * num_heads)
        self.a = nn.Parameter(torch.Tensor(num_heads, 2 * c_out)) # One per head
        self.leakyrelu = nn.LeakyReLU(alpha)

        # Initialization from the original implementation
        nn.init.xavier_uniform_(self.projection.weight.data, gain=1.414)
        nn.init.xavier_uniform_(self.a.data, gain=1.414)

    def forward(self, node_feats, adj_matrix, print_attn_probs=False):
        """
        Inputs:
            node_feats - Input features of the node. Shape: [batch_size, c_in]
            adj_matrix - Adjacency matrix including self-connections. Shape: [batch_size, num_nodes, num_nodes]
            print_attn_probs - If True, the attention weights are printed during the forward pass (for debugging purposes)
        """
        batch_size, num_nodes = node_feats.size(0), node_feats.size(1)

        # Apply linear layer and sort nodes by head
        node_feats = self.projection(node_feats)
        node_feats = node_feats.view(batch_size, num_nodes, self.num_heads, -1)

        # We need to calculate the attention logits for every edge in the adjacency matrix
        # Doing this on all possible combinations of nodes is very expensive
        # => Create a tensor of [W*h_i||W*h_j] with i and j being the indices of all edges
        edges = adj_matrix.nonzero(as_tuple=False) # Returns indices where the adjacency matrix is not 0 => edges
        node_feats_flat = node_feats.view(batch_size * num_nodes, self.num_heads, -1)
        edge_indices_row = edges[:,0] * num_nodes + edges[:,1]
        edge_indices_col = edges[:,0] * num_nodes + edges[:,2]
        a_input = torch.cat([
            torch.index_select(input=node_feats_flat, index=edge_indices_row, dim=0),
            torch.index_select(input=node_feats_flat, index=edge_indices_col, dim=0)
        ], dim=-1) # Index select returns a tensor with node_feats_flat being indexed at the desired positions along dim=0

        # Calculate attention MLP output (independent for each head)
        attn_logits = torch.einsum('bhc,hc->bh', a_input, self.a)
        attn_logits = self.leakyrelu(attn_logits)

        # Map list of attention values back into a matrix
        attn_matrix = attn_logits.new_zeros(adj_matrix.shape+(self.num_heads,)).fill_(-9e15)
        attn_matrix[adj_matrix[...,None].repeat(1,1,1,self.num_heads) == 1] = attn_logits.reshape(-1)

        # Weighted average of attention
        attn_probs = F.softmax(attn_matrix, dim=2)
        if print_attn_probs:
            print("Attention probs\n", attn_probs.permute(0, 3, 1, 2))
        node_feats = torch.einsum('bijh,bjhc->bihc', attn_probs, node_feats)

        # If heads should be concatenated, we can do this by reshaping. Otherwise, take mean
        if self.concat_heads:
            node_feats = node_feats.reshape(batch_size, num_nodes, -1)
        else:
            node_feats = node_feats.mean(dim=2)

        return node_feats
```
Again, we can apply the graph attention layer on our example graph above to understand the dynamics better. As before, the input layer is initialized as an identity matrix, but we set $\mathbf{a}$ to be a vector of arbitrary numbers to obtain different attention values. We use two heads to show the parallel, independent attention mechanisms working in the layer.
```
layer = GATLayer(2, 2, num_heads=2)
layer.projection.weight.data = torch.Tensor([[1., 0.], [0., 1.]])
layer.projection.bias.data = torch.Tensor([0., 0.])
layer.a.data = torch.Tensor([[-0.2, 0.3], [0.1, -0.1]])
with torch.no_grad():
    out_feats = layer(node_feats, adj_matrix, print_attn_probs=True)
print("Adjacency matrix", adj_matrix)
print("Input features", node_feats)
print("Output features", out_feats)
```
We recommend that you try to calculate the attention matrix at least for one head and one node yourself. The entries are 0 where no edge exists between $i$ and $j$. For the others, we see a diverse set of attention probabilities. Moreover, the output features of nodes 3 and 4 are now different although they have the same neighbors.
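As an example of such a manual check (assuming the identity projection and the $\mathbf{a}$ values set in the cell above), the attention weights of the first node for the first head can be recomputed with NumPy. Head 0 receives the first component of each node's 2-dimensional message, i.e. the values $[0, 2, 4, 6]$:

```
import numpy as np

x_head0 = np.array([0.0, 2.0, 4.0, 6.0])  # head-0 component of each node's message
a_head0 = np.array([-0.2, 0.3])           # first row of layer.a
neighbors = [0, 1]                        # first node's neighborhood incl. self-loop

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

# Logit for edge (0, j): LeakyReLU(a . [x_0 || x_j]), then softmax over neighbors
logits = np.array([leaky_relu(a_head0 @ np.array([x_head0[0], x_head0[j]]))
                   for j in neighbors])
probs = np.exp(logits) / np.exp(logits).sum()
print(probs)  # ≈ [0.3543, 0.6457]
```

The first node thus attends with weight ≈0.35 to itself and ≈0.65 to the second node for this head.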
## PyTorch Geometric
We mentioned before that implementing graph networks with an adjacency matrix is simple and straightforward but can be computationally expensive for large graphs. Many real-world graphs can reach over 200k nodes, for which adjacency-matrix-based implementations fail. There are a lot of optimizations possible when implementing GNNs, and luckily there exist packages that provide such layers. The most popular packages for PyTorch are [PyTorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/) and the [Deep Graph Library](https://www.dgl.ai/) (the latter being actually framework agnostic). Which one to use depends on the project you are planning to do and personal taste. In this tutorial, we will look at PyTorch Geometric as part of the PyTorch family. Similar to PyTorch Lightning, PyTorch Geometric is not installed by default on Google Colab (and actually also not in our `dl2020` environment due to many dependencies that would be unnecessary for the practicals). Hence, let's import and/or install it below:
```
# torch geometric
try:
    import torch_geometric
except ModuleNotFoundError:
    # Installing torch geometric packages with specific CUDA+PyTorch version.
    # See https://pytorch-geometric.readthedocs.io/en/latest/notes/installation.html for details
    TORCH = torch.__version__.split('+')[0]
    CUDA = 'cu' + torch.version.cuda.replace('.','')
    !pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
    !pip install torch-sparse -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
    !pip install torch-cluster -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
    !pip install torch-spline-conv -f https://pytorch-geometric.com/whl/torch-{TORCH}+{CUDA}.html
    !pip install torch-geometric
import torch_geometric
import torch_geometric.nn as geom_nn
import torch_geometric.data as geom_data
```
PyTorch Geometric provides us with a set of common graph layers, including the GCN and GAT layers we implemented above. Additionally, similar to torchvision for images, it provides common graph datasets and transformations on those to simplify training. Compared to our implementation above, PyTorch Geometric uses a list of index pairs to represent the edges. We will explore the details of this library further in our experiments.
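To relate the two conventions, a dense adjacency matrix can be flattened into an edge-index array of shape `[2, num_edges]`, which is the layout PyTorch Geometric expects. A small NumPy sketch (with PyTorch tensors, `adj.nonzero(as_tuple=False).t()` achieves the same):

```
import numpy as np

adj = np.array([[1, 1, 0, 0],
                [1, 1, 1, 1],
                [0, 1, 1, 1],
                [0, 1, 1, 1]])

# Stack the row/column indices of the non-zero entries: shape [2, num_edges]
edge_index = np.stack(adj.nonzero())
print(edge_index.shape)  # (2, 12)
print(edge_index)
```

Note that the self-loops and both directions of each undirected edge appear explicitly in this format, which is why the 4-node example yields 12 index pairs.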
In our tasks below, we want to be able to pick from a multitude of graph layers. Thus, we define below a dictionary to access those using a string:
```
gnn_layer_by_name = {
"GCN": geom_nn.GCNConv,
"GAT": geom_nn.GATConv,
"GraphConv": geom_nn.GraphConv
}
```
In addition to GCN and GAT, we added the layer `geom_nn.GraphConv` ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/nn.html#torch_geometric.nn.conv.GraphConv)). GraphConv is a GCN with a separate weight matrix for the self-connections. Mathematically, this would be:
$$
\mathbf{x}_i^{(l+1)} = \mathbf{W}^{(l + 1)}_1 \mathbf{x}_i^{(l)} + \mathbf{W}^{(l + 1)}_2 \sum_{j \in \mathcal{N}_i} \mathbf{x}_j^{(l)}
$$
In this formula, the neighbors' messages are added instead of averaged. However, PyTorch Geometric provides the argument `aggr` to switch between summing, averaging, and max pooling.
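The GraphConv update can be sketched directly from the formula (a NumPy illustration with identity weight matrices for readability, not the library implementation, which learns $\mathbf{W}_1$ and $\mathbf{W}_2$):

```
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=np.float32)  # no explicit self-loops needed
X = np.arange(8, dtype=np.float32).reshape(4, 2)
W1 = np.eye(2, dtype=np.float32)  # weights applied to the node itself
W2 = np.eye(2, dtype=np.float32)  # weights applied to the summed neighbor messages

# x_i' = W1 x_i + W2 * sum_{j in N(i)} x_j
X_next = X @ W1.T + (A @ X) @ W2.T
print(X_next)
```

Because the self-term has its own weight matrix, the layer can preserve node-specific information even when neighbors coincide, once $\mathbf{W}_1 \neq \mathbf{W}_2$ is learned.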
## Experiments on graph structures
Tasks on graph-structured data can be grouped into three groups: node-level, edge-level and graph-level. The different levels describe on which level we want to perform classification/regression. We will discuss all three types in more detail below.
### Node-level tasks: Semi-supervised node classification
Node-level tasks have the goal of classifying nodes in a graph. Usually, we are given a single, large graph with more than 1000 nodes, of which a certain fraction is labeled. We learn to classify those labeled examples during training and try to generalize to the unlabeled nodes.
A popular example that we will use in this tutorial is the Cora dataset, a citation network among papers. The Cora dataset consists of 2708 scientific publications with links between each other representing the citation of one paper by another. The task is to classify each publication into one of seven classes. Each publication is represented by a bag-of-words vector. This means that we have a vector of 1433 elements for each publication, where a 1 at feature $i$ indicates that the $i$-th word of a pre-defined dictionary occurs in the article. Binary bag-of-words representations are commonly used when we need very simple encodings and already have an intuition of which words to expect in a text. There exist much better approaches, but we will leave this to the NLP courses to discuss.
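For intuition, a binary bag-of-words vector can be built as follows (an illustrative toy vocabulary; the actual Cora dictionary has 1433 entries):

```
import numpy as np

vocabulary = ["graph", "neural", "network", "image", "text"]  # toy dictionary
word_to_idx = {w: i for i, w in enumerate(vocabulary)}

def bag_of_words(words):
    vec = np.zeros(len(vocabulary), dtype=np.float32)
    for w in words:
        if w in word_to_idx:           # words outside the dictionary are ignored
            vec[word_to_idx[w]] = 1.0  # binary: presence only, not counts
    return vec

print(bag_of_words(["graph", "neural", "graph"]))  # [1. 1. 0. 0. 0.]
```

Note that repeating "graph" does not change the vector; the encoding only records which dictionary words occur.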
We will load the dataset below:
```
cora_dataset = torch_geometric.datasets.Planetoid(root=DATASET_PATH, name="Cora")
```
Let's look at how PyTorch Geometric represents the graph data. Note that although we have a single graph, PyTorch Geometric returns a dataset for compatibility with other datasets.
```
cora_dataset[0]
```
The graph is represented by a `Data` object ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html#torch_geometric.data.Data)) which we can access as a standard Python namespace. The edge index tensor is the list of edges in the graph and contains the mirrored version of each edge for undirected graphs. The `train_mask`, `val_mask`, and `test_mask` are boolean masks that indicate which nodes we should use for training, validation, and testing. The `x` tensor is the feature tensor of our 2708 publications, and `y` the labels for all nodes.
After having seen the data, we can implement a simple graph neural network. The GNN applies a sequence of graph layers (GCN, GAT, or GraphConv), ReLU as activation function, and dropout for regularization. See below for the specific implementation.
```
class GNNModel(nn.Module):

    def __init__(self, c_in, c_hidden, c_out, num_layers=2, layer_name="GCN", dp_rate=0.1, **kwargs):
        """
        Inputs:
            c_in - Dimension of input features
            c_hidden - Dimension of hidden features
            c_out - Dimension of the output features. Usually number of classes in classification
            num_layers - Number of "hidden" graph layers
            layer_name - String of the graph layer to use
            dp_rate - Dropout rate to apply throughout the network
            kwargs - Additional arguments for the graph layer (e.g. number of heads for GAT)
        """
        super().__init__()
        gnn_layer = gnn_layer_by_name[layer_name]

        layers = []
        in_channels, out_channels = c_in, c_hidden
        for l_idx in range(num_layers-1):
            layers += [
                gnn_layer(in_channels=in_channels,
                          out_channels=out_channels,
                          **kwargs),
                nn.ReLU(inplace=True),
                nn.Dropout(dp_rate)
            ]
            in_channels = c_hidden
        layers += [gnn_layer(in_channels=in_channels,
                             out_channels=c_out,
                             **kwargs)]
        self.layers = nn.ModuleList(layers)

    def forward(self, x, edge_index):
        """
        Inputs:
            x - Input features per node
            edge_index - List of vertex index pairs representing the edges in the graph (PyTorch geometric notation)
        """
        for l in self.layers:
            # For graph layers, we need to add the "edge_index" tensor as additional input
            # All PyTorch Geometric graph layers inherit the class "MessagePassing", hence
            # we can simply check the class type.
            if isinstance(l, geom_nn.MessagePassing):
                x = l(x, edge_index)
            else:
                x = l(x)
        return x
```
A good practice in node-level tasks is to create an MLP baseline that is applied to each node independently. This way we can verify whether adding the graph information to the model indeed improves the prediction or not. It might also be that the features per node are already expressive enough to clearly point towards a specific class. To check this, we implement a simple MLP below.
```
class MLPModel(nn.Module):

    def __init__(self, c_in, c_hidden, c_out, num_layers=2, dp_rate=0.1):
        """
        Inputs:
            c_in - Dimension of input features
            c_hidden - Dimension of hidden features
            c_out - Dimension of the output features. Usually number of classes in classification
            num_layers - Number of hidden layers
            dp_rate - Dropout rate to apply throughout the network
        """
        super().__init__()
        layers = []
        in_channels, out_channels = c_in, c_hidden
        for l_idx in range(num_layers-1):
            layers += [
                nn.Linear(in_channels, out_channels),
                nn.ReLU(inplace=True),
                nn.Dropout(dp_rate)
            ]
            in_channels = c_hidden
        layers += [nn.Linear(in_channels, c_out)]
        self.layers = nn.Sequential(*layers)

    def forward(self, x, *args, **kwargs):
        """
        Inputs:
            x - Input features per node
        """
        return self.layers(x)
```
Finally, we can merge the models into a PyTorch Lightning module which handles the training, validation, and testing for us.
```
class NodeLevelGNN(pl.LightningModule):

    def __init__(self, model_name, **model_kwargs):
        super().__init__()
        # Saving hyperparameters
        self.save_hyperparameters()

        if model_name == "MLP":
            self.model = MLPModel(**model_kwargs)
        else:
            self.model = GNNModel(**model_kwargs)
        self.loss_module = nn.CrossEntropyLoss()

    def forward(self, data, mode="train"):
        x, edge_index = data.x, data.edge_index
        x = self.model(x, edge_index)

        # Only calculate the loss on the nodes corresponding to the mask
        if mode == "train":
            mask = data.train_mask
        elif mode == "val":
            mask = data.val_mask
        elif mode == "test":
            mask = data.test_mask
        else:
            assert False, f"Unknown forward mode: {mode}"

        loss = self.loss_module(x[mask], data.y[mask])
        acc = (x[mask].argmax(dim=-1) == data.y[mask]).sum().float() / mask.sum()
        return loss, acc

    def configure_optimizers(self):
        # We use SGD here, but Adam works as well
        optimizer = optim.SGD(self.parameters(), lr=0.1, momentum=0.9, weight_decay=2e-3)
        return optimizer

    def training_step(self, batch, batch_idx):
        loss, acc = self.forward(batch, mode="train")
        self.log('train_loss', loss)
        self.log('train_acc', acc)
        return loss

    def validation_step(self, batch, batch_idx):
        _, acc = self.forward(batch, mode="val")
        self.log('val_acc', acc)

    def test_step(self, batch, batch_idx):
        _, acc = self.forward(batch, mode="test")
        self.log('test_acc', acc)
```
In addition to the Lightning module, we define a training function below. As we have a single graph, we use a batch size of 1 for the data loader and share the same data loader for the train, validation, and test set (the mask is picked inside the Lightning module). We also set the argument `progress_bar_refresh_rate` to zero, as the bar usually shows the progress per epoch, but an epoch here consists of only a single step. The rest of the code is very similar to what we have seen in Tutorials 5 and 6 already.
```
def train_node_classifier(model_name, dataset, **model_kwargs):
pl.seed_everything(42)
node_data_loader = geom_data.DataLoader(dataset, batch_size=1)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join(CHECKPOINT_PATH, "NodeLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=200,
progress_bar_refresh_rate=0) # 0 because epoch size is 1
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, f"NodeLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = NodeLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything(42)
model = NodeLevelGNN(model_name=model_name, c_in=dataset.num_node_features, c_out=dataset.num_classes, **model_kwargs)
trainer.fit(model, node_data_loader, node_data_loader)
model = NodeLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on the test set
test_result = trainer.test(model, test_dataloaders=node_data_loader, verbose=False)
batch = next(iter(node_data_loader))
batch = batch.to(model.device)
_, train_acc = model.forward(batch, mode="train")
_, val_acc = model.forward(batch, mode="val")
result = {"train": train_acc,
"val": val_acc,
"test": test_result[0]['test_acc']}
return model, result
```
Finally, we can train our models. First, let's train the simple MLP:
```
# Small function for printing the test scores
def print_results(result_dict):
if "train" in result_dict:
print(f"Train accuracy: {(100.0*result_dict['train']):4.2f}%")
if "val" in result_dict:
print(f"Val accuracy: {(100.0*result_dict['val']):4.2f}%")
print(f"Test accuracy: {(100.0*result_dict['test']):4.2f}%")
node_mlp_model, node_mlp_result = train_node_classifier(model_name="MLP",
dataset=cora_dataset,
c_hidden=16,
num_layers=2,
dp_rate=0.1)
print_results(node_mlp_result)
```
Although the MLP can overfit on the training dataset because of the high-dimensional input features, it does not perform well on the test set. Let's see if we can beat this score with our graph networks:
```
node_gnn_model, node_gnn_result = train_node_classifier(model_name="GNN",
layer_name="GCN",
dataset=cora_dataset,
c_hidden=16,
num_layers=2,
dp_rate=0.1)
print_results(node_gnn_result)
```
As we would have hoped, the GNN model outperforms the MLP by quite a margin. This shows that using the graph information indeed improves our predictions and helps the model generalize better.
The hyperparameters in the model have been chosen to create a relatively small network. This is because the first layer with an input dimension of 1433 can be relatively expensive to perform for large graphs. In general, GNNs can become relatively expensive for very big graphs. This is why such GNNs either have a small hidden size or use a special batching strategy where we sample a connected subgraph of the big, original graph.
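To make the subgraph-sampling idea concrete, here is a minimal, framework-free sketch of extracting the k-hop neighborhood around a seed node. The function and the toy graph are purely illustrative (they are not part of PyTorch Geometric, which provides its own, far more efficient sampling utilities):

```python
# Minimal sketch: collect all nodes within k hops of a seed node,
# given a plain adjacency list. Illustrative only.
def k_hop_subgraph(adj, seed, k):
    """Return the set of nodes reachable from `seed` in at most k hops.

    adj  - dict mapping node -> list of neighbor nodes
    seed - starting node
    k    - number of hops
    """
    visited = {seed}
    frontier = {seed}
    for _ in range(k):
        nxt = set()
        for node in frontier:
            nxt.update(adj[node])
        frontier = nxt - visited  # only expand newly discovered nodes
        visited |= frontier
    return visited

# Toy graph: 0-1-2-3 chain plus an isolated node 4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2], 4: []}
print(k_hop_subgraph(adj, 0, 2))  # {0, 1, 2}
```

Running a GNN only on such a sampled subgraph keeps memory bounded even when the original graph is very large.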
### Edge-level tasks: Link prediction
In some applications, we might have to predict on an edge level instead of node level. The most common edge-level task for GNNs is link prediction: given a graph, we want to predict whether there will be (or should be) an edge between two nodes. For example, in a social network, this is used by platforms like Facebook to suggest new friends. Again, graph-level information can be crucial for this task. The output prediction is usually made by applying a similarity metric to a pair of node features, which should be close to 1 if there should be a link and close to 0 otherwise. To keep the tutorial short, we will not implement this task ourselves. Nevertheless, there are many good resources out there if you are interested in looking closer at this task.
Tutorials and papers for this topic include:
* [PyTorch Geometric example](https://github.com/rusty1s/pytorch_geometric/blob/master/examples/link_pred.py)
* [Graph Neural Networks: A Review of Methods and Applications](https://arxiv.org/pdf/1812.08434.pdf), Zhou et al. 2019
* [Link Prediction Based on Graph Neural Networks](https://papers.nips.cc/paper/2018/file/53f0d7c537d99b3824f0f99d62ea2428-Paper.pdf), Zhang and Chen, 2018.
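The similarity-based scoring described above can be sketched in a few lines: score a candidate edge by the dot product of its endpoint embeddings and squash it through a sigmoid. This is a generic sketch under the assumption of dot-product scoring, not the exact setup of the linked examples:

```python
import numpy as np

def link_scores(node_emb, edge_pairs):
    """Score candidate edges by the dot product of their endpoint embeddings.

    node_emb   - (num_nodes, dim) array of node embeddings (e.g. GNN output)
    edge_pairs - (2, num_pairs) array of candidate (source, target) indices
    Returns values in (0, 1); values near 1 suggest a likely link.
    """
    src = node_emb[edge_pairs[0]]
    dst = node_emb[edge_pairs[1]]
    logits = np.sum(src * dst, axis=-1)
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> pseudo-probability

emb = np.array([[1.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
pairs = np.array([[0, 0], [1, 2]])  # candidate edges (0,1) and (0,2)
print(link_scores(emb, pairs))      # (0,1) scores high, (0,2) low
```

In practice, such scores are trained with a binary cross-entropy loss against positive (existing) and sampled negative (non-existing) edges.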
### Graph-level tasks: Graph classification
Finally, in this part of the tutorial, we will have a closer look at how to apply GNNs to the task of graph classification. The goal is to classify an entire graph instead of single nodes or edges. Therefore, we are also given a dataset of multiple graphs that we need to classify based on some structural graph properties. The most common task for graph classification is molecular property prediction, in which molecules are represented as graphs. Each atom is linked to a node, and edges in the graph are the bonds between atoms. For example, look at the figure below.
<center width="100%"><img src="molecule_graph.svg" width="600px"></center>
On the left, we have an arbitrary, small molecule with different atoms, whereas the right part of the image shows the graph representation. The atom types are abstracted as node features (e.g. a one-hot vector), and the different bond types are used as edge features. For simplicity, we will neglect the edge attributes in this tutorial, but you can include them by using methods like the [Relational Graph Convolution](https://arxiv.org/abs/1703.06103), which uses a different weight matrix for each edge type.
The dataset we will use below is called the MUTAG dataset. It is a common small benchmark for graph classification algorithms, and contains 188 graphs with 18 nodes and 20 edges on average per graph. The graph nodes have 7 different labels/atom types, and the binary graph labels represent "their mutagenic effect on a specific gram negative bacterium" (the specific meaning of the labels is not too important here). The dataset is part of a large collection of different graph classification datasets, known as the [TUDatasets](https://chrsmrrs.github.io/datasets/), which is directly accessible via `torch_geometric.datasets.TUDataset` ([documentation](https://pytorch-geometric.readthedocs.io/en/latest/modules/datasets.html#torch_geometric.datasets.TUDataset)) in PyTorch Geometric. We can load the dataset below.
```
tu_dataset = torch_geometric.datasets.TUDataset(root=DATASET_PATH, name="MUTAG")
```
Let's look at some statistics for the dataset:
```
print("Data object:", tu_dataset.data)
print("Length:", len(tu_dataset))
print(f"Average label: {tu_dataset.data.y.float().mean().item():4.2f}")
```
The first line shows how the dataset stores different graphs. The nodes, edges, and labels of each graph are concatenated into one tensor, and the dataset stores the indices where to split the tensors correspondingly. The length of the dataset is the number of graphs we have, and the "average label" denotes the fraction of graphs with label 1. As the fraction is close to 0.5, we have a relatively balanced dataset. It happens quite often that graph datasets are very imbalanced, hence checking the class balance is always a good thing to do.
Next, we will split our dataset into a training and test part. Note that we do not use a validation set this time because of the small size of the dataset. Therefore, our model might overfit slightly to the test set (which we also use for validation) due to the noise of the evaluation, but we still get an estimate of the performance on previously unseen data.
```
torch.manual_seed(42)
tu_dataset.shuffle()
train_dataset = tu_dataset[:150]
test_dataset = tu_dataset[150:]
```
When using a data loader, we encounter a problem with batching $N$ graphs. Each graph in the batch can have a different number of nodes and edges, and hence we would require a lot of padding to obtain a single tensor. PyTorch Geometric uses a different, more efficient approach: we can view the $N$ graphs in a batch as a single large graph with concatenated node and edge lists. As there are no edges between the $N$ graphs, running GNN layers on the large graph gives us the same output as running the GNN on each graph separately. This batching strategy is visualized below (figure credit - PyTorch Geometric team, [tutorial here](https://colab.research.google.com/drive/1I8a0DfQ3fI7Njc62__mVXUlcAleUclnb?usp=sharing#scrollTo=2owRWKcuoALo)).
<center width="100%"><img src="torch_geometric_stacking_graphs.png" width="600px"></center>
The adjacency matrix is zero for any pair of nodes that come from two different graphs, and otherwise follows the adjacency matrix of the corresponding individual graph. Luckily, this strategy is already implemented in PyTorch Geometric, and hence we can use the corresponding data loader:
```
graph_train_loader = geom_data.DataLoader(train_dataset, batch_size=64, shuffle=True)
graph_val_loader = geom_data.DataLoader(test_dataset, batch_size=64) # Additional loader if you want to change to a larger dataset
graph_test_loader = geom_data.DataLoader(test_dataset, batch_size=64)
```
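The block-diagonal stacking described above can be sketched directly in NumPy. This is illustrative only; the real data loader works on sparse edge lists rather than dense matrices:

```python
import numpy as np

def batch_graphs(adj_list):
    """Stack graphs into one block-diagonal adjacency matrix plus a batch vector.

    adj_list - list of (n_i, n_i) adjacency matrices, one per graph
    Returns (big_adj, batch) where batch[v] is the graph index of node v.
    """
    sizes = [a.shape[0] for a in adj_list]
    total = sum(sizes)
    big_adj = np.zeros((total, total))
    batch = np.zeros(total, dtype=int)
    offset = 0
    for g, (a, n) in enumerate(zip(adj_list, sizes)):
        big_adj[offset:offset + n, offset:offset + n] = a  # block on the diagonal
        batch[offset:offset + n] = g
        offset += n
    return big_adj, batch

# Two toy graphs: a 2-node edge and a 3-node triangle
g1 = np.array([[0, 1], [1, 0]])
g2 = np.ones((3, 3)) - np.eye(3)
big_adj, batch = batch_graphs([g1, g2])
print(batch)  # [0 0 1 1 1]
```

Because all off-diagonal blocks are zero, message passing never mixes information across graphs in the same batch.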
Let's load a batch below to see the batching in action:
```
batch = next(iter(graph_test_loader))
print("Batch:", batch)
print("Labels:", batch.y[:10])
print("Batch indices:", batch.batch[:40])
```
We have 38 graphs stacked together for the test dataset. The batch indices, stored in `batch`, show that the first 12 nodes belong to the first graph, the next 22 to the second graph, and so on. These indices are important for performing the final prediction. To predict over a whole graph, we usually apply a pooling operation over all nodes after running the GNN model; here, we will use average pooling. Hence, we need to know which nodes belong to which graph's pool. With this pooling, we can already create our graph network below. Specifically, we re-use our class `GNNModel` from before, and simply add an average pooling layer and a single linear layer for the graph prediction task.
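Before looking at the model, here is a small NumPy sketch of what average pooling does with the batch indices (the model below uses `geom_nn.global_mean_pool`, which implements the same idea for PyTorch tensors):

```python
import numpy as np

def global_mean_pool(x, batch, num_graphs):
    """Average node features per graph, mimicking geom_nn.global_mean_pool.

    x      - (num_nodes, dim) node feature matrix
    batch  - (num_nodes,) graph index of each node
    Returns a (num_graphs, dim) matrix of per-graph mean features.
    """
    sums = np.zeros((num_graphs, x.shape[1]))
    np.add.at(sums, batch, x)  # scatter-add node features into their graph's row
    counts = np.bincount(batch, minlength=num_graphs)
    return sums / counts[:, None]

x = np.array([[1.0], [3.0], [10.0], [20.0], [30.0]])
batch = np.array([0, 0, 1, 1, 1])
print(global_mean_pool(x, batch, 2))  # graph 0 mean = 2, graph 1 mean = 20
```

The resulting per-graph vectors are what the final linear layer classifies.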
```
class GraphGNNModel(nn.Module):
def __init__(self, c_in, c_hidden, c_out, dp_rate_linear=0.5, **kwargs):
"""
Inputs:
c_in - Dimension of input features
c_hidden - Dimension of hidden features
c_out - Dimension of output features (usually number of classes)
dp_rate_linear - Dropout rate before the linear layer (usually much higher than inside the GNN)
kwargs - Additional arguments for the GNNModel object
"""
super().__init__()
self.GNN = GNNModel(c_in=c_in,
c_hidden=c_hidden,
c_out=c_hidden, # Not our prediction output yet!
**kwargs)
self.head = nn.Sequential(
nn.Dropout(dp_rate_linear),
nn.Linear(c_hidden, c_out)
)
def forward(self, x, edge_index, batch_idx):
"""
Inputs:
x - Input features per node
edge_index - List of vertex index pairs representing the edges in the graph (PyTorch geometric notation)
batch_idx - Index of batch element for each node
"""
x = self.GNN(x, edge_index)
x = geom_nn.global_mean_pool(x, batch_idx) # Average pooling
x = self.head(x)
return x
```
Finally, we can create a PyTorch Lightning module to handle the training. It is similar to the modules we have seen before and does nothing surprising in terms of training. As we have a binary classification task, we use the Binary Cross Entropy loss.
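As a side note on the loss choice: `nn.BCEWithLogitsLoss` operates on raw logits and uses a numerically stable rewriting of binary cross-entropy. The sketch below follows the standard stable formulation for illustration; it is not the exact PyTorch source:

```python
import numpy as np

def bce_with_logits(logits, targets):
    """Numerically stable binary cross-entropy on raw logits.

    Equivalent to -[y*log(sigmoid(x)) + (1-y)*log(1-sigmoid(x))],
    rewritten as max(x, 0) - x*y + log(1 + exp(-|x|)) to avoid overflow.
    """
    return np.mean(np.maximum(logits, 0) - logits * targets
                   + np.log1p(np.exp(-np.abs(logits))))

logits = np.array([2.0, -1.0])
targets = np.array([1.0, 0.0])
print(bce_with_logits(logits, targets))  # ~0.2201
```

Fusing the sigmoid into the loss this way avoids `log(0)` for large-magnitude logits, which is why the logits variant is preferred over applying a sigmoid followed by `nn.BCELoss`.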
```
class GraphLevelGNN(pl.LightningModule):
def __init__(self, **model_kwargs):
super().__init__()
# Saving hyperparameters
self.save_hyperparameters()
self.model = GraphGNNModel(**model_kwargs)
self.loss_module = nn.BCEWithLogitsLoss() if self.hparams.c_out == 1 else nn.CrossEntropyLoss()
def forward(self, data, mode="train"):
x, edge_index, batch_idx = data.x, data.edge_index, data.batch
x = self.model(x, edge_index, batch_idx)
x = x.squeeze(dim=-1)
if self.hparams.c_out == 1:
preds = (x > 0).float()
data.y = data.y.float()
else:
preds = x.argmax(dim=-1)
loss = self.loss_module(x, data.y)
acc = (preds == data.y).sum().float() / preds.shape[0]
return loss, acc
def configure_optimizers(self):
optimizer = optim.AdamW(self.parameters(), lr=1e-2, weight_decay=0.0) # High lr because of small dataset and small model
return optimizer
def training_step(self, batch, batch_idx):
loss, acc = self.forward(batch, mode="train")
self.log('train_loss', loss)
self.log('train_acc', acc)
return loss
def validation_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="val")
self.log('val_acc', acc)
def test_step(self, batch, batch_idx):
_, acc = self.forward(batch, mode="test")
self.log('test_acc', acc)
```
Below we train the model on our dataset. It resembles the typical training functions we have seen so far.
```
def train_graph_classifier(model_name, **model_kwargs):
pl.seed_everything(42)
# Create a PyTorch Lightning trainer with the generation callback
root_dir = os.path.join(CHECKPOINT_PATH, "GraphLevel" + model_name)
os.makedirs(root_dir, exist_ok=True)
trainer = pl.Trainer(default_root_dir=root_dir,
callbacks=[ModelCheckpoint(save_weights_only=True, mode="max", monitor="val_acc")],
gpus=1 if str(device).startswith("cuda") else 0,
max_epochs=500,
progress_bar_refresh_rate=0)
trainer.logger._default_hp_metric = None # Optional logging argument that we don't need
# Check whether pretrained model exists. If yes, load it and skip training
pretrained_filename = os.path.join(CHECKPOINT_PATH, f"GraphLevel{model_name}.ckpt")
if os.path.isfile(pretrained_filename):
print("Found pretrained model, loading...")
model = GraphLevelGNN.load_from_checkpoint(pretrained_filename)
else:
pl.seed_everything(42)
model = GraphLevelGNN(c_in=tu_dataset.num_node_features,
c_out=1 if tu_dataset.num_classes==2 else tu_dataset.num_classes,
**model_kwargs)
trainer.fit(model, graph_train_loader, graph_val_loader)
model = GraphLevelGNN.load_from_checkpoint(trainer.checkpoint_callback.best_model_path)
# Test best model on validation and test set
train_result = trainer.test(model, test_dataloaders=graph_train_loader, verbose=False)
test_result = trainer.test(model, test_dataloaders=graph_test_loader, verbose=False)
result = {"test": test_result[0]['test_acc'], "train": train_result[0]['test_acc']}
return model, result
```
Finally, let's perform the training and testing. Feel free to experiment with different GNN layers, hyperparameters, etc.
```
model, result = train_graph_classifier(model_name="GraphConv",
c_hidden=256,
layer_name="GraphConv",
num_layers=3,
dp_rate_linear=0.5,
dp_rate=0.0)
print(f"Train performance: {100.0*result['train']:4.2f}%")
print(f"Test performance: {100.0*result['test']:4.2f}%")
```
The test performance shows that we obtain quite good scores on an unseen part of the dataset. It should be noted that since we have been using the test set for validation as well, we might have overfitted slightly to this set. Nevertheless, the experiment shows us that GNNs can indeed be powerful for predicting the properties of graphs and/or molecules.
## Conclusion
In this tutorial, we have seen the application of neural networks to graph structures. We looked at how a graph can be represented (adjacency matrix or edge list), and discussed the implementation of common graph layers: GCN and GAT. The implementations showed the practical side of the layers, which is often easier than the theory. Finally, we experimented with different tasks, on node-, edge- and graph-level. Overall, we have seen that including graph information in the predictions can be crucial for achieving high performance. There are a lot of applications that benefit from GNNs, and the importance of these networks will likely increase over the next years.
# Numerical Differentiation
Teng-Jui Lin
Content adapted from UW AMATH 301, Beginning Scientific Computing, in Spring 2020.
- Numerical differentiation
- First order methods
- Forward difference
- Backward difference
- Second order methods
- Central difference
- Other second order methods
- Errors
- `numpy` implementation
- Data differentiation by [`numpy.gradient()`](https://numpy.org/doc/stable/reference/generated/numpy.gradient.html)
## Numerical differentiation of known function
From the definition of derivative, the forward difference approximation is given by
$$
f'(x) = \dfrac{f(x+\Delta x) - f(x)}{\Delta x}
$$
The backward difference approximation is given by
$$
f'(x) = \dfrac{f(x) - f(x-\Delta x)}{\Delta x}
$$
The central difference approximation is given by
$$
f'(x) = \dfrac{f(x + \Delta x) - f(x-\Delta x)}{2\Delta x}
$$
which is the average of the forward and backward differences.
Forward and backward difference are $\mathcal{O}(\Delta x)$, i.e., first-order methods. Central difference is $\mathcal{O}(\Delta x^2)$, a second-order method. Note that there are also second-order methods for the left and right end points:
$$
f'(x) = \dfrac{-3f(x) + 4f(x+\Delta x) - f(x+2\Delta x)}{2\Delta x}
$$
$$
f'(x) = \dfrac{3f(x) - 4f(x-\Delta x) + f(x-2\Delta x)}{2\Delta x}
$$
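To see why the central difference gains an order of accuracy, expand $f(x \pm \Delta x)$ in Taylor series:
$$
f(x \pm \Delta x) = f(x) \pm \Delta x\, f'(x) + \dfrac{\Delta x^2}{2} f''(x) \pm \dfrac{\Delta x^3}{6} f'''(x) + \mathcal{O}(\Delta x^4)
$$
Subtracting the two expansions cancels the even-order terms, and dividing by $2\Delta x$ gives
$$
\dfrac{f(x + \Delta x) - f(x - \Delta x)}{2\Delta x} = f'(x) + \dfrac{\Delta x^2}{6} f'''(x) + \mathcal{O}(\Delta x^4)
$$
so the leading error term scales with $\Delta x^2$. The same argument applied to the forward difference leaves a $\frac{\Delta x}{2} f''(x)$ term, hence it is only first order.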
### Implementation
**Problem Statement.** Find the derivative of the function
$$
f(x) = \sin x
$$
using the analytic expression, forward difference, backward difference, and central difference. Compare their accuracy using a plot.
```
import numpy as np
import matplotlib.pyplot as plt
# target function
f = lambda x : np.sin(x)
df = lambda x : np.cos(x) # analytic for comparison
x = np.arange(0, 2*np.pi, 0.1)
def forward_diff(f, x, dx):
return (f(x + dx) - f(x))/dx
def backward_diff(f, x, dx):
return (f(x) - f(x - dx))/dx
def central_diff(f, x, dx):
return (f(x + dx) - f(x - dx))/(2*dx)
dx = 0.1
forward_df = forward_diff(f, x, dx)
backward_df = backward_diff(f, x, dx)
central_df = central_diff(f, x, dx)
# plot settings
%config InlineBackend.figure_format = 'retina'
%matplotlib inline
plt.rcParams.update({
'font.family': 'Arial', # Times New Roman, Calibri
'font.weight': 'normal',
'mathtext.fontset': 'cm',
'font.size': 18,
'lines.linewidth': 2,
'axes.linewidth': 2,
'axes.spines.top': False,
'axes.spines.right': False,
'axes.titleweight': 'bold',
'axes.titlesize': 18,
'axes.labelweight': 'bold',
'xtick.major.size': 8,
'xtick.major.width': 2,
'ytick.major.size': 8,
'ytick.major.width': 2,
'figure.dpi': 80,
'legend.framealpha': 1,
'legend.edgecolor': 'black',
'legend.fancybox': False,
'legend.fontsize': 14
})
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(x, df(x), label='Analytic', color='black')
ax.plot(x, forward_df, '--', label='Forward')
ax.plot(x, backward_df, '--', label='Backward')
ax.plot(x, central_df, '--', label='Central')
ax.set_xlabel('$x$')
ax.set_ylabel('$f\'(x)$')
ax.set_title('Numerical differentiation methods')
ax.set_xlim(0, 2*np.pi)
ax.set_ylim(-1, 1)
ax.legend()
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(x, df(x), label='Analytic', color='black')
ax.plot(x, forward_df, '--', label='Forward')
ax.plot(x, backward_df, '--', label='Backward')
ax.plot(x, central_df, '--', label='Central')
ax.set_xlabel('$x$')
ax.set_ylabel('$f\'(x)$')
ax.set_title('Numerical differentiation methods')
ax.set_xlim(1.5, 2.5)
ax.set_ylim(-0.9, 0.5)
ax.legend()
```
### Error and method order
**Problem Statement.** Compare the error of forward difference, backward difference, and central difference with analytic derivative of the function
$$
f(x) = \sin x
$$
Compare the error of the methods using a plot.
```
# target function
f = lambda x : np.sin(x)
df = lambda x : np.cos(x) # analytic for comparison
x = np.arange(0, 2*np.pi, 0.1)
dx = np.array([0.1 / 2**i for i in range(5)])
forward_errors = np.zeros(len(dx))
backward_errors = np.zeros(len(dx))
central_errors = np.zeros(len(dx))
for i in range(len(dx)):
forward_df = forward_diff(f, x, dx[i])
backward_df = backward_diff(f, x, dx[i])
central_df = central_diff(f, x, dx[i])
forward_errors[i] = np.linalg.norm(df(x) - forward_df)
backward_errors[i] = np.linalg.norm(df(x) - backward_df)
central_errors[i] = np.linalg.norm(df(x) - central_df)
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(dx, forward_errors, '.-', label='Forward')
ax.plot(dx, backward_errors, 'o-.', label='Backward', alpha=0.5)
ax.plot(dx, central_errors, 'o--', label='Central', alpha=0.8)
ax.set_xlabel('$dx$')
ax.set_ylabel('Error')
ax.set_title('Error of numerical methods')
# ax.set_xlim(1.5, 2.5)
# ax.set_ylim(-0.9, 0.5)
ax.legend()
```
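The plot above shows the error shrinking with $dx$, but the method order is easier to read off as the slope of the error curve on a log-log scale: a slope near 1 indicates a first-order method, near 2 a second-order one. A self-contained sketch (re-deriving the errors so it runs on its own):

```python
import numpy as np

f = lambda x: np.sin(x)
df = lambda x: np.cos(x)
x = np.arange(0, 2 * np.pi, 0.1)
dx = np.array([0.1 / 2**i for i in range(5)])

# Error norms of forward and central difference for each step size
forward_errors = np.array([np.linalg.norm(df(x) - (f(x + h) - f(x)) / h) for h in dx])
central_errors = np.array([np.linalg.norm(df(x) - (f(x + h) - f(x - h)) / (2 * h)) for h in dx])

# Slope of log(error) vs log(dx) estimates the order of each method
forward_order = np.polyfit(np.log(dx), np.log(forward_errors), 1)[0]
central_order = np.polyfit(np.log(dx), np.log(central_errors), 1)[0]
print(f"forward ~ O(dx^{forward_order:.2f}), central ~ O(dx^{central_order:.2f})")
```

The fitted slopes should come out close to 1 for the forward difference and close to 2 for the central difference, matching the theoretical orders.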
## Numerical differentiation of data
### Implementation
**Problem Statement.** The Gaussian function has the form
$$
f(x) = \dfrac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\dfrac{(x-\mu)^2}{2\sigma^2}\right)
$$
(a) Generate an equidistant Gaussian dataset of such form in the domain $[0, 5]$ with $\sigma = 1, \mu = 2.5$.
(b) Find the numerical derivative of the data points using second order methods and [`numpy.gradient()`](https://numpy.org/doc/stable/reference/generated/numpy.gradient.html). Plot the data and the derivative.
```
# generate data
gaussian = lambda x, sigma, mu : 1/np.sqrt(2*np.pi*sigma**2) * np.exp(-(x - mu)**2 / (2*sigma**2))
gaussian_data_x = np.linspace(0, 5, 50)
gaussian_data_y = np.array([gaussian(i, 1, 2.5) for i in gaussian_data_x])
def numerical_diff(data_x, data_y):
'''
Numerically differentiate given equidistant data points.
Central difference is used in middle.
Second order forward and backward difference used at end points.
:param data_x: x-coordinates of data points
:param data_y: y-coordinates of data points
:returns: numerical derivative of data points
'''
df = np.zeros_like(data_x)
dx = data_x[1] - data_x[0] # assume equidistant points
df[0] = (-3*data_y[0] + 4*data_y[1] - data_y[2])/(2*dx)
df[-1] = (3*data_y[-1] - 4*data_y[-2] + data_y[-3])/(2*dx)
df[1:-1] = (data_y[2:] - data_y[0:-2])/(2*dx)
return df
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(gaussian_data_x, gaussian_data_y, 'o', label='Data points')
ax.plot(gaussian_data_x, numerical_diff(gaussian_data_x, gaussian_data_y), '.', label='Derivative')
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x), f\'(x)$')
ax.set_title('Numerical differentiation of data')
# ax.set_xlim(1.5, 2.5)
# ax.set_ylim(-0.9, 0.5)
ax.legend()
```
### Numerical differentiation of data with `numpy`
[`numpy.gradient()`](https://numpy.org/doc/stable/reference/generated/numpy.gradient.html) has a similar implementation to the one above: it uses central differences in the interior. At the end points, however, it defaults to first-order forward and backward differences; pass `edge_order=2` to use the second-order end-point formulas shown earlier.
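As a quick sanity check, we can verify that `np.gradient` agrees with the manual central difference on all interior points of an equidistant grid:

```python
import numpy as np

x = np.linspace(0, 5, 50)
y = np.sin(x)
dx = x[1] - x[0]

grad = np.gradient(y, dx)
central = (y[2:] - y[:-2]) / (2 * dx)  # manual central difference (interior only)

print(np.allclose(grad[1:-1], central))  # True
```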
```
dx = gaussian_data_x[1] - gaussian_data_x[0]
gaussian_df = np.gradient(gaussian_data_y, dx)
fig, ax = plt.subplots(figsize=(6, 3))
ax.plot(gaussian_data_x, gaussian_data_y, 'o', label='Data points')
ax.plot(gaussian_data_x, gaussian_df, '.', label='Derivative')
ax.set_xlabel('$x$')
ax.set_ylabel('$f(x), f\'(x)$')
ax.set_title('Numerical differentiation of data')
# ax.set_xlim(1.5, 2.5)
# ax.set_ylim(-0.9, 0.5)
ax.legend()
```
# Unity ML-Agents Toolkit
## Environment Basics
This notebook contains a walkthrough of the basic functions of the Python API for the Unity ML-Agents toolkit. For instructions on building a Unity environment, see [here](https://github.com/Unity-Technologies/ml-agents/blob/master/docs/Getting-Started-with-Balance-Ball.md).
### 1. Set environment parameters
Be sure to set `env_name` to the name of the Unity environment file you want to launch. Ensure that the environment build is in the `python/` directory.
```
env_name = "3DBall" # Name of the Unity environment binary to launch
train_mode = True # Whether to run the environment in training or inference mode
```
### 2. Load dependencies
The following loads the necessary dependencies and checks the Python version (at runtime). ML-Agents Toolkit (v0.3 onwards) requires Python 3.
```
import matplotlib.pyplot as plt
import numpy as np
import sys
from unityagents import UnityEnvironment
%matplotlib inline
print("Python version:")
print(sys.version)
# check Python version
if (sys.version_info[0] < 3):
raise Exception("ERROR: ML-Agents Toolkit (v0.3 onwards) requires Python 3")
```
### 3. Start the environment
`UnityEnvironment` launches and begins communication with the environment when instantiated.
Environments contain _brains_ which are responsible for deciding the actions of their associated _agents_. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
```
env = UnityEnvironment(file_name=env_name)
# Examine environment parameters
print(str(env))
# Set the default brain to work with
default_brain = env.brain_names[0]
brain = env.brains[default_brain]
```
### 4. Examine the observation and state spaces
We can reset the environment to be provided with an initial set of observations and states for all the agents within the environment. In ML-Agents, _states_ refer to a vector of variables corresponding to relevant aspects of the environment for an agent. Likewise, _observations_ refer to a set of relevant pixel-wise visuals for an agent.
```
# Reset the environment
env_info = env.reset(train_mode=train_mode)[default_brain]
# Examine the state space for the default brain
print("Agent state looks like: \n{}".format(env_info.vector_observations[0]))
# Examine the observation space for the default brain
for observation in env_info.visual_observations:
print("Agent observations look like:")
if observation.shape[3] == 3:
plt.imshow(observation[0,:,:,:])
else:
plt.imshow(observation[0,:,:,0])
```
### 5. Take random actions in the environment
Once we restart an environment, we can step the environment forward and provide actions to all of the agents within the environment. Here we simply choose random actions based on the `action_space_type` of the default brain.
Once this cell is executed, 10 messages will be printed that detail how much reward was accumulated in each of the 10 episodes. The Unity environment will then pause, waiting for further signals telling it what to do next. Thus, not seeing any animation is expected when running this cell.
```
for episode in range(10):
env_info = env.reset(train_mode=train_mode)[default_brain]
done = False
episode_rewards = 0
while not done:
if brain.vector_action_space_type == 'continuous':
env_info = env.step(np.random.randn(len(env_info.agents),
brain.vector_action_space_size))[default_brain]
else:
env_info = env.step(np.random.randint(0, brain.vector_action_space_size,
size=(len(env_info.agents))))[default_brain]
episode_rewards += env_info.rewards[0]
done = env_info.local_done[0]
print("Total reward this episode: {}".format(episode_rewards))
```
### 6. Close the environment when finished
When we are finished using an environment, we can close it with the function below.
```
env.close()
```
```
from typing import List
from collections import defaultdict
from functools import lru_cache
class Solution:
def catMouseGame(self, graph: List[List[int]]) -> int:
@lru_cache(None)
def dfs(mouse, cat, step):
if step > len(graph) * 2:
return 0
if cat == mouse:
return 2
if mouse == 0:
return 1
if step % 2 == 0:  # mouse's turn to move
draw = False
for nm in graph[mouse]:
ans = dfs(nm, cat, step + 1)
if ans == 0:
draw = True
elif ans == 1:
return 1
return 0 if draw else 2
else:
draw = False
for nc in graph[cat]:
if nc == 0:
continue
ans = dfs(mouse, nc, step + 1)
if ans == 0:
draw = True
elif ans == 2:
return 2
return 0 if draw else 1
return dfs(1, 2, 0)
from typing import List
from collections import defaultdict, deque
class Solution:
def catMouseGame(self, graph: List[List[int]]) -> int:
# work backward: derive unknown states from known terminal states
def getPreState(m, c, t):
pos = []
if t == 1:  # if the mouse moves now, the cat moved in the previous step
for nc in graph[c]:
if nc == 0:
continue
pos.append((m, nc, 2))
else:
for nm in graph[m]:
pos.append((nm, c, 1))
return pos
def mustLoss(m, c, t):
if t == 1:
for nm in graph[m]:
if res[(nm, c, 2)] != 2:
return False
else:
for nc in graph[c]:
if nc == 0:
continue
if res[(m, nc, 1)] != 1:
return False
return True
res = defaultdict(int)  # outcome of each state (0: unknown/draw)
queue = deque()
n = len(graph)
for t in range(1, 3): # 1: mouse move, 2: cat move
for i in range(1, n):
res[(0, i, t)] = 1  # value 1 means the mouse wins
queue.append((0, i, t))
res[(i, i, t)] = 2  # value 2 means the cat wins
queue.append((i, i, t))
while queue:
m, c, t = queue.popleft()
ans = res[(m, c, t)]  # outcome of the current state, already decided
for pre in getPreState(m, c, t):  # derive predecessor states from the current one
m2, c2, t2 = pre  # one predecessor state
if res[pre] != 0:
continue
if ans == t2:
res[pre] = ans
queue.append(pre)
elif mustLoss(m2, c2, t2):
res[pre] = 3 - t2
queue.append(pre)
return res[(1, 2, 1)]  # initial state: mouse at 1, cat at 2, mouse to move
solution = Solution()
solution.catMouseGame([[1,3],[0],[3],[0,2]])
```
```
import os
os.chdir("../../scVI/")
os.getcwd()
import torch
import pickle
import seaborn as sns
import numpy as np
import pandas as pd
from umap import UMAP
from sklearn.cluster import SpectralClustering
from scvi.inference import UnsupervisedTrainer
from scvi.models import VAE
save_path = '../CSF/Notebooks/'
import matplotlib.pyplot as plt
import matplotlib
matplotlib.rcParams['pdf.fonttype'] = 42
matplotlib.rcParams['ps.fonttype'] = 42
%matplotlib inline
from random import sample
from numpy.random import permutation
%matplotlib inline
celllabels = np.load(save_path + 'meta/celllabels.npy')
isCD4 = celllabels=='CD4'
clusters = np.load(save_path + 'meta/CD4.clusters.npy')
isMS = np.load(save_path+'meta/isMS.npy')[isCD4]
isCSF = np.load(save_path+'meta/isCSF.npy')[isCD4]
def ES_fast(score, s , p, interval):
N = len(s)
N_H = np.sum(s==1)
m = 1/(N-N_H)
power = np.abs(score)**p
N_R = np.sum(power[s==1])
h = power / N_R
ES = [0]
hit = 0
miss = 0
for i in np.arange(0, (len(power)-interval),interval):
x = np.arange(i,i+interval,1)
si = s[x]
hit = hit + np.sum(h[x][si==1])
miss = miss + m*np.sum(si==0)
ES.append(hit-miss)
return(ES)
VisionScore = pd.read_csv('../CSF/signatures/sigScore.csv')
s = isMS[isCSF==True]
score = np.asarray(VisionScore['TFH'])[isCSF==True]
ranked = pd.DataFrame(np.asarray([s, score]).T,columns=['s','score'])
ranked = ranked.sample(frac=1)
ranked = ranked.sort_values(by='score',ascending=False)
s = np.asarray(ranked['s'])
score = np.asarray(ranked['score'])
sns.distplot(score[s==False], kde=True, rug=False,color='green',label='control')
sns.distplot(score[s==True], kde=True, rug=False,color='orange',label='MS')
# plt.legend()
plt.savefig(save_path+'figures/SupFigure8/scoredist.MSinCSF.TFH.pdf')
from random import sample
scorerank = np.argsort(np.argsort(score))
sns.rugplot(sample(list(scorerank[s==True]),1000), label = 'MS', color='orange', linewidth = 0.1)
sns.rugplot(sample(list(scorerank[s==False]),1000), label = 'Control', color = 'green', linewidth = 0.1)
plt.axis('off')
# plt.legend()
plt.savefig(save_path+'figures/SupFigure8/rugplot.MSinCSF.TFH.pdf')
ES = ES_fast(score,s,1,1)
from random import sample
control = [ES_fast(score,np.asarray(sample(list(s),len(s))),1,1) for i in range(100)]
control_score = pd.read_csv('../CSF/signatures/sigScore.TFH.matched.csv')
control_score = control_score.loc[isCSF==True]
control_score = control_score[control_score.columns[1:]]
control_score.loc[:5]
control2 = []
for x in control_score.columns:
score = control_score[x]
ranked = pd.DataFrame(np.asarray([s, score]).T,columns=['s','score'])
ranked = ranked.sample(frac=1)
ranked = ranked.sort_values(by='score',ascending=False)
s = np.asarray(ranked['s'])
score = np.asarray(ranked['score'])
control2.append(ES_fast(score,s,1,1))
plt.plot(np.arange(len(ES)),ES,'r-')
for i in range(100):
plt.plot(np.arange(len(ES)),control2[i],'b-',alpha=0.1)
plt.axvline(x=np.argmax(ES),color='black')
# plt.title("Diseased Cell Set")
plt.xlabel("Rank in Ordered Dataset")
plt.ylabel("Enrichment Score (ES)")
plt.savefig(save_path+'figures/SupFigure8/ES.MSinCSF.TFH.pdf')
with open(save_path + 'CSEA/TFH.MSinCSF.pkl', 'wb') as f:
pickle.dump((ES,control2), f)
```
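The running-sum logic of `ES_fast` can be sanity-checked on synthetic labels (a hedged sketch, not part of the original analysis): when all hits are ranked at the top, the enrichment score should climb to its peak exactly at the hit/miss boundary and return to roughly zero at the end.

```python
import numpy as np

def running_es(s):
    # Simplified version of ES_fast with p = 1, interval = 1 and unit
    # scores: each hit adds 1/N_hit, each miss subtracts 1/(N - N_hit).
    s = np.asarray(s)
    n_hit = int(np.sum(s == 1))
    step_hit = 1.0 / n_hit
    step_miss = 1.0 / (len(s) - n_hit)
    return np.cumsum(np.where(s == 1, step_hit, -step_miss))

es = running_es([1, 1, 1, 0, 0, 0])
print(round(float(es.max()), 6), int(np.argmax(es)))  # → 1.0 2
```

The peak of 1.0 occurs at index 2, right after the last hit, and the sum returns to ~0 — the behaviour the permutation test below relies on.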
### P-value
```
np.mean(np.asarray([np.max(x) for x in control2[1:]]) > np.max(ES))
```
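The expression above is an empirical permutation p-value: the fraction of matched-control enrichment curves whose peak exceeds the observed peak. A minimal self-contained sketch with synthetic numbers (not the notebook's data):

```python
import numpy as np

# Synthetic stand-ins: `null_peaks` plays the role of
# [np.max(x) for x in control2], `observed_peak` the role of np.max(ES).
rng = np.random.default_rng(0)
null_peaks = rng.normal(loc=0.0, scale=1.0, size=1000)
observed_peak = 5.0

# Fraction of null peaks that beat the observed peak.
p_value = np.mean(null_peaks > observed_peak)
print(p_value)
```

With 1000 permutations the smallest resolvable p-value is 1/1000, which is why a reported value like 0.002 corresponds to only a couple of exceedances.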
### Leading Edge
```
np.argmax(ES)
```
# In blood
```
s = isMS[isCSF==False]
score = np.asarray(VisionScore['TFH'])[isCSF==False]
ranked = pd.DataFrame(np.asarray([s, score]).T,columns=['s','score'])
ranked = ranked.sample(frac=1)
ranked = ranked.sort_values(by='score',ascending=False)
s = np.asarray(ranked['s'])
score = np.asarray(ranked['score'])
sns.distplot(score[s==False], kde=True, rug=False,color='green',label='control')
sns.distplot(score[s==True], kde=True, rug=False,color='orange',label='MS')
# plt.legend()
plt.savefig(save_path+'figures/SupFigure8/scoredist.MSinPBMC.TFH.pdf')
scorerank = np.argsort(np.argsort(score))
sns.rugplot(sample(list(scorerank[s==True]),1000), label = 'MS', color='orange', linewidth = 0.1)
sns.rugplot(sample(list(scorerank[s==False]),1000), label = 'Control', color = 'green', linewidth = 0.1)
plt.axis('off')
# plt.legend()
plt.savefig(save_path+'figures/SupFigure8/rugplot.MSinPBMC.TFH.pdf')
ES = ES_fast(score,s,1,1)
from random import sample
control = [ES_fast(score,np.asarray(sample(list(s),len(s))),1,1) for i in range(100)]
control_score = pd.read_csv('../CSF/signatures/sigScore.TFH.matched.csv')
control_score = control_score.loc[isCSF==False]
control_score = control_score[control_score.columns[1:]]
control2 = []
for x in control_score.columns:
score = control_score[x]
ranked = pd.DataFrame(np.asarray([s, score]).T,columns=['s','score'])
ranked = ranked.sample(frac=1)
ranked = ranked.sort_values(by='score',ascending=False)
s = np.asarray(ranked['s'])
score = np.asarray(ranked['score'])
control2.append(ES_fast(score,s,1,1))
plt.plot(np.arange(len(ES)),ES,'r-')
for i in range(100):
plt.plot(np.arange(len(ES)),control2[i],'b-',alpha=0.1)
plt.axvline(x=np.argmax(ES),color='black')
# plt.title("Diseased Cell Set")
plt.xlabel("Rank in Ordered Dataset")
plt.ylabel("Enrichment Score (ES)")
plt.savefig(save_path+'figures/SupFigure8/ES.MSinPBMC.TFH.pdf')
```
# Significance value
0.889 in blood, 0.002 in CSF
```
np.mean(np.asarray([np.max(x) for x in control2]) > np.max(ES))
```
### Leading Edge
```
np.argmax(ES)
```
# Show TFH cell origins
```
latent_u = np.load(save_path + 'UMAP/all_dataset.umap.npy')
celllabels = np.load(save_path + 'meta/celllabels.npy')
celltype, labels = np.unique(celllabels, return_inverse=True)
isMS = np.load(save_path+'meta/isMS.npy')
isCSF = np.load(save_path+'meta/isCSF.npy')
validclusters = (celllabels!='Mono Doublet') & \
(celllabels!='contamination1') & \
(celllabels!='doublet') & \
(celllabels!='B cell doublets') & \
(celllabels!='RBC')
isCD4 = (celllabels=='CD4')
latent_u = latent_u[celllabels=='CD4',:]
isMS = isMS[celllabels=='CD4']
isCSF = isCSF[celllabels=='CD4']
edgethres = np.quantile(VisionScore['TFH'],(1-(587+135)/25105))
TFH = np.asarray(VisionScore['TFH']>edgethres)
len(isCSF)
fig, ax = plt.subplots(figsize=(5, 5),facecolor='white')
plt.scatter(latent_u[:, 0], latent_u[:, 1],c='lightgray',s=5)
plt.scatter(latent_u[:, 0][TFH & isCSF], latent_u[:, 1][TFH & isCSF],c='orange',s=3,label='CSF')
plt.scatter(latent_u[:, 0][TFH & (isCSF==False)], latent_u[:, 1][TFH & (isCSF==False)],c='green',s=3,label='PBMC')
plt.title('TFH',fontsize=30)
plt.axis("off")
# plt.legend()
plt.tight_layout()
plt.savefig(save_path+'figures/SupFigure8/TFH.CSF_PBMC.pdf')
fig, ax = plt.subplots(figsize=(5, 5),facecolor='white')
plt.scatter(latent_u[:, 0], latent_u[:, 1],c='lightgray',s=5)
plt.scatter(latent_u[:, 0][TFH & isCSF & isMS], latent_u[:, 1][TFH & isCSF & isMS],c='orange',s=3,label='MS')
plt.scatter(latent_u[:, 0][TFH & isCSF & (isMS==False)], latent_u[:, 1][TFH & isCSF & (isMS==False)],c='green',s=3,label='control')
plt.title('TFH',fontsize=30)
plt.axis("off")
# plt.legend()
plt.tight_layout()
plt.savefig(save_path+'figures/SupFigure8/TFH.MSinCSF.pdf')
fig, ax = plt.subplots(figsize=(5, 5),facecolor='white')
plt.scatter(latent_u[:, 0], latent_u[:, 1],c='lightgray',s=5)
plt.scatter(latent_u[:, 0][TFH & (isCSF==False) & isMS], latent_u[:, 1][TFH & (isCSF==False) & isMS],c='orange',s=3,label='MS')
plt.scatter(latent_u[:, 0][TFH & (isCSF==False) & (isMS==False)], latent_u[:, 1][TFH & (isCSF==False) & (isMS==False)],c='green',s=3,label='control')
plt.title('TFH',fontsize=30)
plt.axis("off")
# plt.legend()
plt.tight_layout()
plt.savefig(save_path+'figures/SupFigure8/TFH.MSinPBMC.pdf')
```
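The `edgethres` line above thresholds the TFH score at a quantile chosen so that roughly 587 + 135 = 722 of the 25105 cells (presumably the leading-edge CSF and PBMC counts — an assumption from context) fall above it. A self-contained sketch of this top-k-via-quantile selection on synthetic scores:

```python
import numpy as np

# Synthetic scores standing in for VisionScore['TFH'] (hypothetical data).
rng = np.random.default_rng(1)
scores = rng.normal(size=25105)

k = 587 + 135                                  # cells to keep
thres = np.quantile(scores, 1 - k / len(scores))
n_kept = int(np.sum(scores > thres))
print(n_kept)
```

For continuous scores the count lands on (or within one of) `k`; ties in real score data can shift it slightly.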
| github_jupyter |
```
import sys
sys.path.append('/home/bibek/projects/DEEPL')
from helpers.deep import get_deep_data, get_classifier
#data = get_deep_data(debug=False, filepath='/home/bibek/projects/DEEPL/_playground/sample_data/nlp_out.csv')
import pandas as pd
df = pd.read_csv('/home/bibek/projects/DEEPL/_playground/sample_data/processed_sectors_subsectors.csv')
#################################################
### COMPARE KEYWORDS EXTRACTION VS SIMPLE method
#################################################
# Extract the keyword set of a document: collect every unique word that
# appears in the document's top 1-, 2- and 3-grams (returns a list).
def keywords_extracter(doc):
    ngrams = get_key_ngrams(doc, 3)
    allwords = {}  # dict keys double as an insertion-ordered set
    for x in ngrams['1grams']:
        allwords[x[0]] = True
    for y in ngrams['2grams']:
        a, b = y[0].split()
        allwords[a] = True
        allwords[b] = True
    for y in ngrams['3grams']:
        a, b, c = y[0].split()
        allwords[a] = True
        allwords[b] = True
        allwords[c] = True
    return list(allwords.keys())
# convert doc to str and then split
str_split = compose(str.split, str)
# stemmer
stemmer = PorterStemmer()
# split and stem
#split_and_stem = lambda x: list(map(stemmer.stem,x))
rm_stop_list = curried_map(rm_stop_words_txt)
rm_punc_list = curried_map(rm_punc_not_nums)
rm_punc_nums_list = curried_map(remove_punc_and_nums)
lower_list = curried_map(str.lower)
composed_list_processor= compose(remove_punc_and_nums, rm_stop_words_txt, str.lower)
# rm_punc_nums_processor
punc_nums_preprocessor = compose(list, curried_filter(lambda x: x.strip()!=''), curried_map(composed_list_processor), str.split, str)
processed = df['excerpt'].apply(punc_nums_preprocessor)
print(processed)
assert False  # deliberate stop: inspect the preprocessed output before running the comparison below
# keywords pre processor
kw_preprocessor = compose(list, rm_punc_list , rm_stop_list,lower_list,keywords_extracter,str)
# simple pre processor
simple_preprocessor = compose(list, rm_punc_list , rm_stop_list,lower_list,str.split,str)
kw_processed = [(kw_preprocessor(ex), l) for (ex, l) in data] # if langid.classify(str(ex))[0] == 'en']
simple_processed = [(simple_preprocessor(ex), l) for (ex, l) in data] # if langid.classify(str(ex))[0] == 'en']
punc_nums_processed = [(punc_nums_preprocessor(ex), l) for (ex, l) in data] # if langid.classify(str(ex))[0] == 'en']
from classifier.feature_selectors import UnigramFeatureSelector, BigramFeatureSelector
from classifier.NaiveBayes_classifier import NaiveBayesClassifier
#processed_data = kw_processed
#processed_data = simple_processed
import random

def get_avg_accuracy(iters, size, processed_data, feature_selector=UnigramFeatureSelector):
    sum_accuracy = 0
    accuracies = []
    for x in range(iters):
        random.shuffle(processed_data)
        # Take a fresh random subset each iteration; reassigning
        # processed_data here would shrink the dataset permanently, so
        # later iterations would only reshuffle the first subset.
        subset = processed_data[:size]
        data_len = len(subset)
        test_len = int(data_len * 0.25)
        train_data = subset[test_len:]
        test_data = subset[:test_len]
        selector = feature_selector.new(corpus=subset, top=2000)  # use top 2000 words
        classifier = NaiveBayesClassifier.new(selector, train_data)
        accuracy = classifier.get_accuracy(test_data)
        accuracies.append(accuracy)
        sum_accuracy += accuracy
    return sum_accuracy / iters, accuracies
SIZE = 200
ITERS = 2
kw_accuracy = get_avg_accuracy(ITERS, SIZE, kw_processed)
punc_nums_accuracy = get_avg_accuracy(ITERS, SIZE, punc_nums_processed)
simple_accuracy = get_avg_accuracy(ITERS, SIZE, simple_processed)
simple_bigram_accuracy = get_avg_accuracy(ITERS, SIZE, simple_processed, BigramFeatureSelector)
print('KEYWORDS:', kw_accuracy)
print('PUNC NUMS:', punc_nums_accuracy)
print('SIMPLE: ', simple_accuracy)
print('SIMPLE BIGRAM: ', simple_bigram_accuracy)
```
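The n-gram flattening inside `keywords_extracter` deserves a concrete illustration. The dict below is a hypothetical stand-in for `get_key_ngrams`' output (the real helper's format is assumed, not reproduced); dict keys act as an insertion-ordered set, so duplicates across gram sizes collapse while first-seen order is kept:

```python
# Hypothetical (phrase, count) n-gram data for one document.
ngrams = {
    '1grams': [('food', 10), ('water', 8)],
    '2grams': [('food security', 5), ('clean water', 4)],
    '3grams': [('access clean water', 2)],
}

allwords = {}
for size in ('1grams', '2grams', '3grams'):
    for phrase, _count in ngrams[size]:
        for word in phrase.split():
            allwords[word] = True  # repeated words overwrite, preserving order

print(list(allwords.keys()))  # → ['food', 'water', 'security', 'clean', 'access']
```

This relies on dicts preserving insertion order (guaranteed from Python 3.7); on older interpreters a `collections.OrderedDict` would be needed.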
```
from rmgpy.molecule.molecule import Molecule
from rmgpy.molecule.resonance import *
from IPython.display import display
import time
struct1 = Molecule().fromAdjacencyList("""
multiplicity 2
1 C u0 p0 c0 {2,S} {3,D} {7,S}
2 C u0 p0 c0 {1,S} {4,S} {8,D}
3 C u0 p0 c0 {1,D} {5,S} {11,S}
4 C u0 p0 c0 {2,S} {9,D} {10,S}
5 C u0 p0 c0 {3,S} {6,D} {15,S}
6 C u0 p0 c0 {5,D} {12,S} {16,S}
7 C u0 p0 c0 {1,S} {12,D} {18,S}
8 C u0 p0 c0 {2,D} {13,S} {19,S}
9 C u0 p0 c0 {4,D} {14,S} {22,S}
10 C u0 p0 c0 {4,S} {11,D} {23,S}
11 C u0 p0 c0 {3,S} {10,D} {24,S}
12 C u0 p0 c0 {6,S} {7,D} {17,S}
13 C u0 p0 c0 {8,S} {14,D} {20,S}
14 C u0 p0 c0 {9,S} {13,D} {21,S}
15 C u1 p0 c0 {5,S} {25,S} {26,S}
16 H u0 p0 c0 {6,S}
17 H u0 p0 c0 {12,S}
18 H u0 p0 c0 {7,S}
19 H u0 p0 c0 {8,S}
20 H u0 p0 c0 {13,S}
21 H u0 p0 c0 {14,S}
22 H u0 p0 c0 {9,S}
23 H u0 p0 c0 {10,S}
24 H u0 p0 c0 {11,S}
25 H u0 p0 c0 {15,S}
26 H u0 p0 c0 {15,S}
""")
# Kekulized form, radical on ring
struct2 = Molecule().fromAdjacencyList("""
multiplicity 2
1 C u0 p0 c0 {2,S} {3,S} {7,D}
2 C u0 p0 c0 {1,S} {4,S} {8,D}
3 C u0 p0 c0 {1,S} {5,S} {11,D}
4 C u0 p0 c0 {2,S} {9,S} {10,D}
5 C u0 p0 c0 {3,S} {6,S} {15,D}
6 C u0 p0 c0 {5,S} {12,D} {16,S}
7 C u0 p0 c0 {1,D} {12,S} {17,S}
8 C u0 p0 c0 {2,D} {13,S} {18,S}
9 C u0 p0 c0 {4,S} {14,D} {19,S}
10 C u0 p0 c0 {4,D} {11,S} {20,S}
11 C u0 p0 c0 {3,D} {10,S} {21,S}
12 C u0 p0 c0 {6,D} {7,S} {22,S}
13 C u1 p0 c0 {8,S} {14,S} {23,S}
14 C u0 p0 c0 {9,D} {13,S} {24,S}
15 C u0 p0 c0 {5,D} {25,S} {26,S}
16 H u0 p0 c0 {6,S}
17 H u0 p0 c0 {7,S}
18 H u0 p0 c0 {8,S}
19 H u0 p0 c0 {9,S}
20 H u0 p0 c0 {10,S}
21 H u0 p0 c0 {11,S}
22 H u0 p0 c0 {12,S}
23 H u0 p0 c0 {13,S}
24 H u0 p0 c0 {14,S}
25 H u0 p0 c0 {15,S}
26 H u0 p0 c0 {15,S}
""")
# Aromatic form
struct3 = Molecule().fromAdjacencyList("""
multiplicity 2
1 C u0 p0 c0 {2,B} {3,B} {7,B}
2 C u0 p0 c0 {1,B} {4,B} {8,B}
3 C u0 p0 c0 {1,B} {5,B} {11,B}
4 C u0 p0 c0 {2,B} {9,B} {10,B}
5 C u0 p0 c0 {3,B} {6,B} {15,S}
6 C u0 p0 c0 {5,B} {12,B} {16,S}
7 C u0 p0 c0 {1,B} {12,B} {18,S}
8 C u0 p0 c0 {2,B} {13,B} {19,S}
9 C u0 p0 c0 {4,B} {14,B} {22,S}
10 C u0 p0 c0 {4,B} {11,B} {23,S}
11 C u0 p0 c0 {3,B} {10,B} {24,S}
12 C u0 p0 c0 {6,B} {7,B} {17,S}
13 C u0 p0 c0 {8,B} {14,B} {20,S}
14 C u0 p0 c0 {9,B} {13,B} {21,S}
15 C u1 p0 c0 {5,S} {25,S} {26,S}
16 H u0 p0 c0 {6,S}
17 H u0 p0 c0 {12,S}
18 H u0 p0 c0 {7,S}
19 H u0 p0 c0 {8,S}
20 H u0 p0 c0 {13,S}
21 H u0 p0 c0 {14,S}
22 H u0 p0 c0 {9,S}
23 H u0 p0 c0 {10,S}
24 H u0 p0 c0 {11,S}
25 H u0 p0 c0 {15,S}
26 H u0 p0 c0 {15,S}
""")
t0 = time.time()
out1 = generateAromaticResonanceStructures(struct1)
t1 = time.time()
print t1 - t0
t0 = time.time()
out2 = generateAromaticResonanceStructures(struct2)
t1 = time.time()
print t1 - t0
t0 = time.time()
out3 = generateAromaticResonanceStructures(struct3)
t1 = time.time()
print t1 - t0
for o in out1:
display(o)
for o in out2:
display(o)
for o in out3:
display(o)
mol = Molecule(SMILES='c12ccccc1c(C=[CH])ccc2')
display(mol)
print '===================='
out = generateResonanceStructures(mol)
for o in out:
display(o)
print out[1].toAdjacencyList()
for mol in out:
print mol.toAdjacencyList()
print '\n'
mol = Molecule(SMILES="C1=CC2=CC=C3C=CC4=C5C6=C(C2=C35)C1=CC=C6C=C4")
display(mol)
print '===================='
out = generateClarStructures(mol)
print len(out)
for o in out:
display(o)
mol.getAromaticRings()[0]
print out[1].toAdjacencyList()
mol = Molecule(SMILES="C1=CC2=CC=CC3CC=CC(=C1)C=32")
display(mol)
print '===================='
out = generateClarStructures(mol)
for o in out:
display(o)
toRDKitMol(mol, removeHs=False, returnMapping=True)
mol = Molecule(SMILES="c1c2cccc1C(=C)C=[C]2")
display(mol)
print '===================='
out = generateResonanceStructures(mol)
for o in out:
display(o)
[atom.props['inRing'] for atom in out[3].atoms]
def getAllSimpleCyclesOfSize(self, size):
"""
Return a list of all non-duplicate monocyclic rings with length 'size'.
Naive approach by eliminating polycyclic rings that are returned by
``getAllCyclesOfSize``.
"""
cycleList = self.getAllCyclesOfSize(size)
i = 0
#import pdb; pdb.set_trace()
while i < len(cycleList):
for vertex in cycleList[i]:
internalConnectivity = sum([1 if vertex2 in cycleList[i] else 0 for vertex2 in vertex.edges.iterkeys()])
if internalConnectivity > 2:
del cycleList[i]
break
else:
i += 1
return cycleList
from unittest import *
import sys
import rmgpy.molecule.resonanceTest
tests = TestLoader().loadTestsFromModule(rmgpy.molecule.resonanceTest)
TextTestRunner(verbosity=2, stream=sys.stdout).run(tests)
from rdkit import Chem
def toRDKitMol(mol, removeHs=True, returnMapping=False, sanitize=True):
"""
Convert a molecular structure to a RDKit rdmol object. Uses
`RDKit <http://rdkit.org/>`_ to perform the conversion.
Perceives aromaticity and, unless removeHs==False, removes Hydrogen atoms.
If returnMapping==True then it also returns a dictionary mapping the
atoms to RDKit's atom indices.
"""
# Sort the atoms before converting to ensure output is consistent
# between different runs
mol.sortAtoms()
atoms = mol.vertices
rdAtomIndices = {} # dictionary of RDKit atom indices
rdkitmol = Chem.rdchem.EditableMol(Chem.rdchem.Mol())
for index, atom in enumerate(mol.vertices):
rdAtom = Chem.rdchem.Atom(atom.element.symbol)
rdAtom.SetNumRadicalElectrons(atom.radicalElectrons)
if atom.element.symbol == 'C' and atom.lonePairs == 1 and mol.multiplicity == 1: rdAtom.SetNumRadicalElectrons(2)
rdkitmol.AddAtom(rdAtom)
if removeHs and atom.symbol == 'H':
pass
else:
rdAtomIndices[atom] = index
rdBonds = Chem.rdchem.BondType
orders = {1: rdBonds.SINGLE, 2: rdBonds.DOUBLE, 3: rdBonds.TRIPLE, 1.5: rdBonds.AROMATIC}
# Add the bonds
for atom1 in mol.vertices:
for atom2, bond in atom1.edges.iteritems():
index1 = atoms.index(atom1)
index2 = atoms.index(atom2)
if index1 < index2:
order = orders[bond.order]
rdkitmol.AddBond(index1, index2, order)
# Make editable mol into a mol and rectify the molecule
rdkitmol = rdkitmol.GetMol()
if sanitize:
Chem.SanitizeMol(rdkitmol)
if removeHs:
rdkitmol = Chem.RemoveHs(rdkitmol, sanitize=sanitize)
if returnMapping:
return rdkitmol, rdAtomIndices
return rdkitmol
```
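The filter in `getAllSimpleCyclesOfSize` can be isolated as a stand-alone sketch using a plain adjacency dict (a hypothetical graph, not RMG vertex objects): a candidate cycle is monocyclic only if every vertex has at most two neighbours inside it, so any chord fusing the ring into smaller rings disqualifies it.

```python
def is_simple_cycle(cycle, adjacency):
    # A vertex with more than two neighbours inside the cycle means the
    # cycle is really a fused (polycyclic) ring envelope.
    cycle_set = set(cycle)
    return all(
        sum(1 for u in adjacency[v] if u in cycle_set) <= 2
        for v in cycle
    )

# Square 0-1-2-3; adding the chord 0-2 fuses it into two triangles.
square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
fused = {0: [1, 3, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 0]}
print(is_simple_cycle([0, 1, 2, 3], square))  # → True
print(is_simple_cycle([0, 1, 2, 3], fused))   # → False
```

This mirrors the `internalConnectivity > 2` test in the notebook's method, minus the in-place deletion from `cycleList`.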