# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] tags=["remove_cell"]
# # Shor's Algorithm
# -
# Shor’s algorithm is famous for factoring integers in polynomial time. Since the best-known classical algorithm requires superpolynomial time to factor the product of two primes, the widely used cryptosystem RSA relies on factoring being infeasible for large enough integers.
#
# In this chapter we will focus on the quantum part of Shor’s algorithm, which actually solves the problem of _period finding_. Since a factoring problem can be turned into a period finding problem in polynomial time, an efficient period finding algorithm can be used to factor integers efficiently too. For now it’s enough to know that if we can compute the period of $a^x\bmod N$ efficiently, then we can also factor efficiently. Since period finding is a worthy problem in its own right, we will first solve this, then discuss how it can be used to factor in section 5.
# + tags=["thebelab-init"]
import matplotlib.pyplot as plt
import numpy as np
from qiskit import QuantumCircuit, Aer, execute
from qiskit.visualization import plot_histogram
from math import gcd
from numpy.random import randint
import pandas as pd
from fractions import Fraction
print("Imports Successful")
# -
# ## 1. The Problem: Period Finding
#
# Let’s look at the periodic function:
#
# $$ f(x) = a^x \bmod{N}$$
#
# <details>
# <summary>Reminder: Modulo & Modular Arithmetic (Click here to expand)</summary>
#
# The modulo operation (abbreviated to 'mod') simply means to find the remainder when dividing one number by another. For example:
#
# $$ 17 \bmod 5 = 2 $$
#
# Since $17 \div 5 = 3$ with remainder $2$. (i.e. $17 = (3\times 5) + 2$). In Python, the modulo operation is denoted through the <code>%</code> symbol.
#
# This behaviour is used in <a href="https://en.wikipedia.org/wiki/Modular_arithmetic">modular arithmetic</a>, where numbers 'wrap round' after reaching a certain value (the modulus). Using modular arithmetic, we could write:
#
# $$ 17 \equiv 2 \pmod 5$$
#
# Note that here the $\pmod 5$ applies to the entire equation (since it is in parentheses), unlike the equation above, where it applied only to the left-hand side.
# </details>
#
# where $a$ and $N$ are positive integers, $a$ is less than $N$, and they have no common factors. The period, or order ($r$), is the smallest positive integer such that:
#
# $$a^r \bmod N = 1 $$
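# Classically, the only known way to find $r$ in general is by brute force. The helper below (our own illustrative sketch, not part of Shor's algorithm) keeps multiplying by $a$ modulo $N$ until the product returns to 1; its running time grows exponentially with the number of digits of $N$:

```python
from math import gcd

def find_period(a, N):
    """Brute-force period of f(x) = a^x mod N (assumes gcd(a, N) == 1)."""
    assert gcd(a, N) == 1, "a and N must share no common factors"
    r, product = 1, a % N
    while product != 1:
        product = (product * a) % N
        r += 1
    return r

print(find_period(3, 35))  # prints 12, matching the plot below
```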
#
# We can see an example of this function plotted on the graph below. Note that the lines between points are to help see the periodicity and do not represent the intermediate values between the x-markers.
# + tags=["hide-input"]
N = 35
a = 3
# Calculate the plotting data
xvals = np.arange(35)
yvals = [np.mod(a**x, N) for x in xvals]
# Use matplotlib to display it nicely
fig, ax = plt.subplots()
ax.plot(xvals, yvals, linewidth=1, linestyle='dotted', marker='x')
ax.set(xlabel='$x$', ylabel='$%i^x$ mod $%i$' % (a, N),
       title="Example of Periodic Function in Shor's Algorithm")
try:  # plot r on the graph
    r = yvals[1:].index(1) + 1
    plt.annotate(text='', xy=(0,1), xytext=(r,1), arrowprops=dict(arrowstyle='<->'))
    plt.annotate(text='$r=%i$' % r, xy=(r/3,1.5))
except ValueError:
    print('Could not find period, check a < N and have no common factors.')
# -
# ## 2. The Solution
#
# Shor’s solution was to use [quantum phase estimation](./quantum-phase-estimation.html) on the unitary operator:
#
# $$ U|y\rangle \equiv |ay \bmod N \rangle $$
#
# To see how this is helpful, let’s work out what an eigenstate of U might look like. If we started in the state $|1\rangle$, we can see that each successive application of U will multiply the state of our register by $a \pmod N$, and after $r$ applications we will arrive at the state $|1\rangle$ again. For example with $a = 3$ and $N = 35$:
#
# $$\begin{aligned}
# U|1\rangle &= |3\rangle & \\
# U^2|1\rangle &= |9\rangle \\
# U^3|1\rangle &= |27\rangle \\
# & \vdots \\
# U^{(r-1)}|1\rangle &= |12\rangle \\
# U^r|1\rangle &= |1\rangle
# \end{aligned}$$
# + tags=["hide-input"]
ax.set(xlabel='Number of applications of U', ylabel='End state of register',
       title="Effect of Successive Applications of U")
fig
# -
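# We can check this cycle of states with a few lines of classical Python (our own quick verification, separate from the quantum circuit):

```python
# Apply U repeatedly by multiplying the register value by a (mod N)
a, N = 3, 35
state = 1
for k in range(1, 13):
    state = (a * state) % N
    print("U^%i|1> = |%i>" % (k, state))
# The register returns to |1> after r = 12 applications
```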
# So a superposition of the states in this cycle ($|u_0\rangle$) would be an eigenstate of $U$:
#
# $$|u_0\rangle = \tfrac{1}{\sqrt{r}}\sum_{k=0}^{r-1}{|a^k \bmod N\rangle} $$
#
#
# <details>
# <summary>Click to Expand: Example with $a = 3$ and $N=35$</summary>
#
# $$\begin{aligned}
# |u_0\rangle &= \tfrac{1}{\sqrt{12}}(|1\rangle + |3\rangle + |9\rangle \dots + |4\rangle + |12\rangle) \\[10pt]
# U|u_0\rangle &= \tfrac{1}{\sqrt{12}}(U|1\rangle + U|3\rangle + U|9\rangle \dots + U|4\rangle + U|12\rangle) \\[10pt]
# &= \tfrac{1}{\sqrt{12}}(|3\rangle + |9\rangle + |27\rangle \dots + |12\rangle + |1\rangle) \\[10pt]
# &= |u_0\rangle
# \end{aligned}$$
# </details>
#
#
# This eigenstate has an eigenvalue of 1, which isn’t very interesting. A more interesting eigenstate could be one in which the phase is different for each of these computational basis states. Specifically, let’s look at the case in which the phase of the $k$th state is proportional to $k$:
#
# $$\begin{aligned}
# |u_1\rangle &= \tfrac{1}{\sqrt{r}}\sum_{k=0}^{r-1}{e^{-\tfrac{2\pi i k}{r}}|a^k \bmod N\rangle}\\[10pt]
# U|u_1\rangle &= e^{\tfrac{2\pi i}{r}}|u_1\rangle
# \end{aligned}
# $$
#
# <details>
# <summary>Click to Expand: Example with $a = 3$ and $N=35$</summary>
#
# $$\begin{aligned}
# |u_1\rangle &= \tfrac{1}{\sqrt{12}}(|1\rangle + e^{-\tfrac{2\pi i}{12}}|3\rangle + e^{-\tfrac{4\pi i}{12}}|9\rangle \dots + e^{-\tfrac{20\pi i}{12}}|4\rangle + e^{-\tfrac{22\pi i}{12}}|12\rangle) \\[10pt]
# U|u_1\rangle &= \tfrac{1}{\sqrt{12}}(|3\rangle + e^{-\tfrac{2\pi i}{12}}|9\rangle + e^{-\tfrac{4\pi i}{12}}|27\rangle \dots + e^{-\tfrac{20\pi i}{12}}|12\rangle + e^{-\tfrac{22\pi i}{12}}|1\rangle) \\[10pt]
# U|u_1\rangle &= e^{\tfrac{2\pi i}{12}}\cdot\tfrac{1}{\sqrt{12}}(e^{\tfrac{-2\pi i}{12}}|3\rangle + e^{-\tfrac{4\pi i}{12}}|9\rangle + e^{-\tfrac{6\pi i}{12}}|27\rangle \dots + e^{-\tfrac{22\pi i}{12}}|12\rangle + e^{-\tfrac{24\pi i}{12}}|1\rangle) \\[10pt]
# U|u_1\rangle &= e^{\tfrac{2\pi i}{12}}|u_1\rangle
# \end{aligned}$$
#
# (We can see $r = 12$ appears in the denominator of the phase.)
# </details>
#
# This is a particularly interesting eigenvalue as it contains $r$. In fact, $r$ has to be included to make sure the phase differences between the $r$ computational basis states are equal. This is not the only eigenstate with this behaviour; to generalise further, we can multiply this phase difference by an integer, $s$, which will show up in our eigenvalue:
#
# $$\begin{aligned}
# |u_s\rangle &= \tfrac{1}{\sqrt{r}}\sum_{k=0}^{r-1}{e^{-\tfrac{2\pi i s k}{r}}|a^k \bmod N\rangle}\\[10pt]
# U|u_s\rangle &= e^{\tfrac{2\pi i s}{r}}|u_s\rangle
# \end{aligned}
# $$
#
# <details>
# <summary>Click to Expand: Example with $a = 3$ and $N=35$</summary>
#
# $$\begin{aligned}
# |u_s\rangle &= \tfrac{1}{\sqrt{12}}(|1\rangle + e^{-\tfrac{2\pi i s}{12}}|3\rangle + e^{-\tfrac{4\pi i s}{12}}|9\rangle \dots + e^{-\tfrac{20\pi i s}{12}}|4\rangle + e^{-\tfrac{22\pi i s}{12}}|12\rangle) \\[10pt]
# U|u_s\rangle &= \tfrac{1}{\sqrt{12}}(|3\rangle + e^{-\tfrac{2\pi i s}{12}}|9\rangle + e^{-\tfrac{4\pi i s}{12}}|27\rangle \dots + e^{-\tfrac{20\pi i s}{12}}|12\rangle + e^{-\tfrac{22\pi i s}{12}}|1\rangle) \\[10pt]
# U|u_s\rangle &= e^{\tfrac{2\pi i s}{12}}\cdot\tfrac{1}{\sqrt{12}}(e^{-\tfrac{2\pi i s}{12}}|3\rangle + e^{-\tfrac{4\pi i s}{12}}|9\rangle + e^{-\tfrac{6\pi i s}{12}}|27\rangle \dots + e^{-\tfrac{22\pi i s}{12}}|12\rangle + e^{-\tfrac{24\pi i s}{12}}|1\rangle) \\[10pt]
# U|u_s\rangle &= e^{\tfrac{2\pi i s}{12}}|u_s\rangle
# \end{aligned}$$
#
# </details>
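# We can confirm this eigenvalue relation numerically. The sketch below (our own classical check, storing the amplitudes of $|u_s\rangle$ in a plain dictionary rather than using a quantum simulator) applies the map $|y\rangle \mapsto |ay \bmod N\rangle$ and verifies that every amplitude picks up the same phase $e^{2\pi i s/r}$:

```python
import numpy as np

a, N, r = 3, 35, 12
cycle = [pow(a, k, N) for k in range(r)]  # the basis states |a^k mod N>
for s in range(r):
    # Amplitudes of |u_s>: the k-th state in the cycle has phase e^{-2*pi*i*s*k/r}
    amps = {y: np.exp(-2j*np.pi*s*k/r)/np.sqrt(r) for k, y in enumerate(cycle)}
    # Applying U moves the amplitude on |y> to |a*y mod N>
    new_amps = {(a*y) % N: amp for y, amp in amps.items()}
    # Every amplitude should have gained the same global phase e^{2*pi*i*s/r}
    phases = [new_amps[y] / amps[y] for y in cycle]
    assert np.allclose(phases, np.exp(2j*np.pi*s/r))
print("Each |u_s> is an eigenstate of U with eigenvalue exp(2*pi*i*s/r)")
```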
#
# We now have a unique eigenstate for each integer value of $s$ where $0 \leq s \leq r-1$. Very conveniently, if we sum up all these eigenstates, the different phases cancel out all computational basis states except $|1\rangle$:
#
# $$ \tfrac{1}{\sqrt{r}}\sum_{s=0}^{r-1} |u_s\rangle = |1\rangle$$
#
# <details>
# <summary>Click to Expand: Example with $a = 7$ and $N=15$</summary>
#
# For this, we will look at a smaller example where $a = 7$ and $N=15$. In this case $r=4$:
#
# $$\begin{aligned}
# \tfrac{1}{2}(\quad|u_0\rangle &= \tfrac{1}{2}(|1\rangle \hphantom{e^{-\tfrac{2\pi i}{12}}}+ |7\rangle \hphantom{e^{-\tfrac{12\pi i}{12}}} + |4\rangle \hphantom{e^{-\tfrac{12\pi i}{12}}} + |13\rangle)\dots \\[10pt]
# + |u_1\rangle &= \tfrac{1}{2}(|1\rangle + e^{-\tfrac{2\pi i}{4}}|7\rangle + e^{-\tfrac{\hphantom{1}4\pi i}{4}}|4\rangle + e^{-\tfrac{\hphantom{1}6\pi i}{4}}|13\rangle)\dots \\[10pt]
# + |u_2\rangle &= \tfrac{1}{2}(|1\rangle + e^{-\tfrac{4\pi i}{4}}|7\rangle + e^{-\tfrac{\hphantom{1}8\pi i}{4}}|4\rangle + e^{-\tfrac{12\pi i}{4}}|13\rangle)\dots \\[10pt]
# + |u_3\rangle &= \tfrac{1}{2}(|1\rangle + e^{-\tfrac{6\pi i}{4}}|7\rangle + e^{-\tfrac{12\pi i}{4}}|4\rangle + e^{-\tfrac{18\pi i}{4}}|13\rangle)\quad) = |1\rangle \\[10pt]
# \end{aligned}$$
#
# </details>
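# The cancellation is also easy to verify numerically (again a classical sketch of ours): summing the amplitude vectors of $|u_0\rangle$ through $|u_3\rangle$ and dividing by $\sqrt{r}$ leaves amplitude 1 on $|1\rangle$ and 0 everywhere else:

```python
import numpy as np

a, N, r = 7, 15, 4
cycle = [pow(a, k, N) for k in range(r)]  # [1, 7, 4, 13]
total = np.zeros(N, dtype=complex)
for s in range(r):
    for k, y in enumerate(cycle):
        total[y] += np.exp(-2j*np.pi*s*k/r) / np.sqrt(r)
total /= np.sqrt(r)  # the overall 1/sqrt(r) in front of the sum
print(np.round(total.real, 10))  # amplitude 1 on |1>, 0 everywhere else
```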
#
# The computational basis state $|1\rangle$ is a superposition of these eigenstates, which means that if we do QPE on $U$ using the state $|1\rangle$, we will measure a phase:
#
# $$\phi = \frac{s}{r}$$
#
# where $s$ is a random integer between $0$ and $r-1$. We finally use the [continued fractions](https://en.wikipedia.org/wiki/Continued_fraction) algorithm on $\phi$ to find $r$. The circuit diagram looks like this (note that this diagram uses Qiskit's qubit ordering convention):
#
# <img src="images/shor_circuit_1.svg">
#
# We will next demonstrate Shor’s algorithm using Qiskit’s simulators. For this demonstration we will provide the circuits for $U$ without explanation, but in section 4 we will discuss how circuits for $U^{2^j}$ can be constructed efficiently.
# ## 3. Qiskit Implementation
#
# In this example we will solve the period finding problem for $a=7$ and $N=15$. We provide the circuits for $U$ where:
#
# $$U|y\rangle = |ay\bmod 15\rangle $$
#
# without explanation. To create $U^x$, we will simply repeat the circuit $x$ times. In the next section we will discuss a general method for creating these circuits efficiently. The function `c_amod15` returns the controlled-U gate for `a`, repeated `power` times.
# + tags=["thebelab-init"]
def c_amod15(a, power):
    """Controlled multiplication by a mod 15"""
    if a not in [2,7,8,11,13]:
        raise ValueError("'a' must be 2,7,8,11 or 13")
    U = QuantumCircuit(4)
    for iteration in range(power):
        if a in [2,13]:
            U.swap(0,1)
            U.swap(1,2)
            U.swap(2,3)
        if a in [7,8]:
            U.swap(2,3)
            U.swap(1,2)
            U.swap(0,1)
        if a == 11:
            U.swap(1,3)
            U.swap(0,2)
        if a in [7,11,13]:
            for q in range(4):
                U.x(q)
    U = U.to_gate()
    U.name = "%i^%i mod 15" % (a, power)
    c_U = U.control()
    return c_U
# -
# We will use 8 counting qubits:
# + tags=["thebelab-init"]
# Specify variables
n_count = 8 # number of counting qubits
a = 7
# -
# We also provide the circuit for the inverse QFT (you can read more about the QFT in the [quantum Fourier transform chapter](./quantum-fourier-transform.html#generalqft)):
# + tags=["thebelab-init"]
def qft_dagger(n):
    """n-qubit inverse QFT on the first n qubits of the circuit"""
    qc = QuantumCircuit(n)
    # Don't forget the Swaps!
    for qubit in range(n//2):
        qc.swap(qubit, n-qubit-1)
    for j in range(n):
        for m in range(j):
            qc.cp(-np.pi/float(2**(j-m)), m, j)
        qc.h(j)
    qc.name = "QFT†"
    return qc
# -
# With these building blocks we can easily construct the circuit for Shor's algorithm:
# +
# Create QuantumCircuit with n_count counting qubits
# plus 4 qubits for U to act on
qc = QuantumCircuit(n_count + 4, n_count)
# Initialise counting qubits
# in state |+>
for q in range(n_count):
qc.h(q)
# And ancilla register in state |1>
qc.x(3+n_count)
# Do controlled-U operations
for q in range(n_count):
    qc.append(c_amod15(a, 2**q),
              [q] + [i+n_count for i in range(4)])
# Do inverse-QFT
qc.append(qft_dagger(n_count), range(n_count))
# Measure circuit
qc.measure(range(n_count), range(n_count))
qc.draw('text')
# -
# Let's see what results we measure:
backend = Aer.get_backend('qasm_simulator')
results = execute(qc, backend, shots=2048).result()
counts = results.get_counts()
plot_histogram(counts)
# Since we have 8 counting qubits, these results correspond to measured phases of:
rows, measured_phases = [], []
for output in counts:
    decimal = int(output, 2)  # Convert (base 2) string to decimal
    phase = decimal/(2**n_count)  # Find corresponding eigenvalue
    measured_phases.append(phase)
    # Add these values to the rows in our table:
    rows.append(["%s(bin) = %i(dec)" % (output, decimal),
                 "%i/%i = %.2f" % (decimal, 2**n_count, phase)])
# Print the rows in a table
headers=["Register Output", "Phase"]
df = pd.DataFrame(rows, columns=headers)
print(df)
# We can now use the continued fractions algorithm to attempt to find $s$ and $r$. Python has this functionality built in: We can use the `fractions` module to turn a float into a `Fraction` object, for example:
Fraction(0.666)  # -> Fraction(5998794703657501, 9007199254740992)
# Because this gives the fraction for the stored float exactly (in this case, `0.6660000...`), it can give gnarly results like the one above. We can use the `.limit_denominator()` method to get the fraction that most closely resembles our float, with denominator below a certain value:
# Get fraction that most closely resembles 0.666
# with denominator < 15
Fraction(0.666).limit_denominator(15)
# Much nicer! The order (r) must be less than N, so we will set the maximum denominator to be `15`:
rows = []
for phase in measured_phases:
    frac = Fraction(phase).limit_denominator(15)
    rows.append([phase, "%i/%i" % (frac.numerator, frac.denominator), frac.denominator])
# Print as a table
headers=["Phase", "Fraction", "Guess for r"]
df = pd.DataFrame(rows, columns=headers)
print(df)
# We can see that two of the measured eigenvalues provided us with the correct result, $r=4$, and also that Shor’s algorithm has a chance of failing. These bad results occur because $s = 0$, or because $s$ and $r$ are not coprime, in which case we are given a factor of $r$ rather than $r$ itself. The easiest solution is simply to repeat the experiment until we get a satisfactory result for $r$.
#
# ### Quick Exercise
#
# - Modify the circuit above for values of $a = 2, 8, 11$ and $13$. What results do you get and why?
# ## 4. Modular Exponentiation
#
# You may have noticed that the method of creating the $U^{2^j}$ gates by repeating $U$ grows exponentially with $j$ and will not result in a polynomial time algorithm. We want a way to create the operator:
#
# $$ U^{2^j}|y\rangle = |a^{2^j}y \bmod N \rangle $$
#
# that grows polynomially with $j$. Fortunately, calculating:
#
# $$ a^{2^j} \bmod N$$
#
# efficiently is possible. Classical computers can use an algorithm known as _repeated squaring_ to calculate an exponential. In our case, since we are only dealing with exponentials of the form $2^j$, the repeated squaring algorithm becomes very simple:
# + tags=["thebelab-init"]
def a2jmodN(a, j, N):
    """Compute a^{2^j} (mod N) by repeated squaring"""
    for i in range(j):
        a = np.mod(a**2, N)
    return a
# -
a2jmodN(7, 2049, 53)
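# As a quick sanity check (our own addition, with the function restated in pure Python so the cell runs on its own), repeated squaring should agree with Python's built-in three-argument `pow`, which also performs fast modular exponentiation:

```python
def a2jmodN(a, j, N):
    """Compute a^{2^j} (mod N) by repeated squaring."""
    for _ in range(j):
        a = a * a % N
    return a

# Cross-check against pow(a, 2**j, N) for a few cases
for a, j, N in [(7, 2049, 53), (3, 10, 35), (2, 100, 97)]:
    assert a2jmodN(a, j, N) == pow(a, 2**j, N)
print("Repeated squaring agrees with pow()")
```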
# If an efficient algorithm is possible in Python, then we can use the same algorithm on a quantum computer. Unfortunately, despite scaling polynomially with $j$, modular exponentiation circuits are not straightforward and are the bottleneck in Shor’s algorithm. A beginner-friendly implementation can be found in reference [1].
#
# ## 5. Factoring from Period Finding
#
# Not all factoring problems are difficult; we can spot an even number instantly and know that one of its factors is 2. In fact, there are [specific criteria](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.186-4.pdf#%5B%7B%22num%22%3A127%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22XYZ%22%7D%2C70%2C223%2C0%5D) for choosing numbers that are difficult to factor, but the basic idea is to choose the product of two large prime numbers.
#
# A general factoring algorithm will first check to see if there is a shortcut to factoring the integer (is the number even? Is the number of the form $N = a^b$?), before using Shor’s period finding for the worst-case scenario. Since we aim to focus on the quantum part of the algorithm, we will jump straight to the case in which N is the product of two primes.
#
# ### Example: Factoring 15
#
# To see an example of factoring on a small number of qubits, we will factor 15, which we all know is the product of the not-so-large prime numbers 3 and 5.
# + tags=["thebelab-init"]
N = 15
# -
# The first step is to choose a random number, $a$, between $1$ and $N-1$:
# + tags=["thebelab-init"]
np.random.seed(1) # This is to make sure we get reproducible results
a = randint(2, 15)
print(a)
# -
# Next we quickly check it isn't already a non-trivial factor of $N$:
from math import gcd # greatest common divisor
gcd(a, 15)
# Great. Next, we do Shor's order finding algorithm for `a = 7` and `N = 15`. Remember that the phase we measure will be $s/r$ where:
#
# $$ a^r \bmod N = 1 $$
#
# and $s$ is a random integer between 0 and $r-1$.
# + tags=["thebelab-init"]
def qpe_amod15(a):
    n_count = 3
    qc = QuantumCircuit(4+n_count, n_count)
    for q in range(n_count):
        qc.h(q)     # Initialise counting qubits in state |+>
    qc.x(3+n_count) # And ancilla register in state |1>
    for q in range(n_count): # Do controlled-U operations
        qc.append(c_amod15(a, 2**q),
                  [q] + [i+n_count for i in range(4)])
    qc.append(qft_dagger(n_count), range(n_count)) # Do inverse-QFT
    qc.measure(range(n_count), range(n_count))
    # Simulate Results
    backend = Aer.get_backend('qasm_simulator')
    # Setting memory=True below allows us to see a list of each sequential reading
    result = execute(qc, backend, shots=1, memory=True).result()
    readings = result.get_memory()
    print("Register Reading: " + readings[0])
    phase = int(readings[0], 2)/(2**n_count)
    print("Corresponding Phase: %f" % phase)
    return phase
# -
# From this phase, we can easily find a guess for $r$:
np.random.seed(3) # This is to make sure we get reproducible results
phase = qpe_amod15(a) # Phase = s/r
frac = Fraction(phase).limit_denominator(15) # Denominator should (hopefully!) tell us r
s, r = frac.numerator, frac.denominator
print(r)
# Now that we have $r$, we might be able to use it to find a factor of $N$. Since:
#
# $$a^r \bmod N = 1 $$
#
# then:
#
# $$(a^r - 1) \bmod N = 0 $$
#
# which means $N$ must divide $a^r-1$. And if $r$ is also even, then we can write:
#
# $$a^r -1 = (a^{r/2}-1)(a^{r/2}+1)$$
#
# (if $r$ is not even, we cannot go further and must try again with a different value for $a$). There is then a high probability that the greatest common divisor of $N$ and either $a^{r/2}-1$ or $a^{r/2}+1$ is a non-trivial factor of $N$ [2]:
guesses = [gcd(a**(r//2)-1, N), gcd(a**(r//2)+1, N)]
print(guesses)
# The cell below repeats the algorithm until at least one factor of 15 is found. You should try re-running the cell a few times to see how it behaves.
a = 7
factor_found = False
attempt = 0
while not factor_found:
    attempt += 1
    print("\nAttempt %i:" % attempt)
    phase = qpe_amod15(a)  # Phase = s/r
    frac = Fraction(phase).limit_denominator(15)  # Denominator should (hopefully!) tell us r
    r = frac.denominator
    print("Result: r = %i" % r)
    if phase != 0:
        # Guesses for factors are gcd(a^{r/2} ±1, 15)
        guesses = [gcd(a**(r//2)-1, 15), gcd(a**(r//2)+1, 15)]
        print("Guessed Factors: %i and %i" % (guesses[0], guesses[1]))
        for guess in guesses:
            if guess != 1 and (15 % guess) == 0:  # Check to see if guess is a factor
                print("*** Non-trivial factor found: %i ***" % guess)
                factor_found = True
# ## 6. References
#
# 1. <NAME>, _Circuit for Shor's algorithm using 2n+3 qubits,_ [arXiv:quant-ph/0205095](https://arxiv.org/abs/quant-ph/0205095)
#
# 2. <NAME> and <NAME>, _Quantum Computation and Quantum Information,_ Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000). (Page 633)
import qiskit
qiskit.__qiskit_version__
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import sys
sys.path.append('../')
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
import glob
from keras.optimizers import Adam, SGD
from keras.callbacks import ModelCheckpoint, LearningRateScheduler, TerminateOnNaN, CSVLogger, TensorBoard
from keras import backend as K
from keras.models import load_model
from math import ceil
import numpy as np
from matplotlib import pyplot as plt
from models.keras_ssd512_Siamese import ssd_512
from keras_loss_function.keras_ssd_loss import SSDLoss
from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes
from keras_layers.keras_layer_DecodeDetections import DecodeDetections
from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast
from keras_layers.keras_layer_L2Normalization import L2Normalization
from ssd_encoder_decoder.ssd_input_encoder import SSDInputEncoder
from ssd_encoder_decoder.ssd_output_decoder import decode_detections, decode_detections_fast
from data_generator.object_detection_2d_data_generator import DataGenerator
from data_generator.object_detection_2d_geometric_ops import Resize
from data_generator.object_detection_2d_photometric_ops import ConvertTo3Channels
from data_generator.data_augmentation_chain_original_ssd import SSDDataAugmentation, SSDDataAugmentation_Siamese
from data_generator.object_detection_2d_misc_utils import apply_inverse_transforms
img_height = 512 # Height of the model input images
img_width = 512 # Width of the model input images
img_channels = 3 # Number of color channels of the model input images
mean_color = [123, 117, 104] # Per-channel mean of the images. Do not change if you use any of the pre-trained weights.
# The color channel order in the original SSD is BGR,
# so we'll have the model reverse the color channel order of the input images.
swap_channels = [2, 1, 0]
# The anchor box scaling factors used in the original SSD512 for the Pascal VOC datasets
# scales_pascal =
# The anchor box scaling factors used in the original SSD512 for the MS COCO datasets
scales_coco = [0.07, 0.15, 0.3, 0.45, 0.6, 0.75, 0.9, 1.05]
scales = scales_coco
aspect_ratios = [[1.0, 2.0, 0.5],
                 [1.0, 2.0, 0.5, 3.0, 1.0 / 3.0],
                 [1.0, 2.0, 0.5, 3.0, 1.0 / 3.0],
                 [1.0, 2.0, 0.5, 3.0, 1.0 / 3.0],
                 [1.0, 2.0, 0.5, 3.0, 1.0 / 3.0],
                 [1.0, 2.0, 0.5],
                 [1.0, 2.0, 0.5]]  # The anchor box aspect ratios used in the original SSD512; the order matters
two_boxes_for_ar1 = True
steps = [8, 16, 32, 64, 128, 256, 512] # Space between two adjacent anchor box center points for each predictor layer.
# The offsets of the first anchor box center points from the top and left borders of the image
# as a fraction of the step size for each predictor layer.
offsets = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
clip_boxes = False # Whether or not to clip the anchor boxes to lie entirely within the image boundaries
# The variances by which the encoded target coordinates are divided as in the original implementation
variances = [0.1, 0.1, 0.2, 0.2]
normalize_coords = True
Model_Build = 'New_Model' # 'Load_Model'
Optimizer_Type = 'SGD' # 'Adam' #
# Different batch_size will have different prediction loss.
batch_size = 6 # Change the batch size if you like, or if you run into GPU memory issues.
# alpha_distance = 0.0001 # Coefficient for the distance between the source and target feature maps.
G_loss_weights = [0.001, 1.0]
D_loss_weights = [0.001, 0.001]
# 'City_to_foggy0_01_resize_600_1200' # 'City_to_foggy0_02_resize_600_1200' # 'SIM10K_to_VOC07'
# 'SIM10K' # 'Cityscapes_foggy_beta_0_01' # 'City_to_foggy0_02_resize_400_800'
# 'SIM10K_to_VOC12_resize_400_800'
DatasetName = 'SIM10K_to_City_resize_400_800' # 'SIM10K_to_VOC07_resize_400_800' # 'City_to_foggy0_01_resize_400_800' #
processed_dataset_path = './processed_dataset_h5/' + DatasetName
if not os.path.exists(processed_dataset_path):
    os.makedirs(processed_dataset_path)
checkpoint_path = '../trained_weights/SIM10K_to_City/current/G100_D10_GD_weights0_001'
if not os.path.exists(checkpoint_path):
    os.makedirs(checkpoint_path)
csv_file_name = 'training_log.csv'
if len(glob.glob(os.path.join(processed_dataset_path, '*.h5'))):
    Dataset_Build = 'Load_Dataset'
else:
    Dataset_Build = 'New_Dataset'
if DatasetName == 'SIM10K_to_VOC12_resize_400_800':
    resize_image_to = (400, 800)
    # The directories that contain the images.
    train_source_images_dir = '../../datasets/SIM10K/JPEGImages'
    train_target_images_dir = '../../datasets/VOCdevkit/VOC2012/JPEGImages'
    test_target_images_dir = '../../datasets/VOCdevkit/VOC2012/JPEGImages'
    # The directories that contain the annotations.
    train_annotation_dir = '../../datasets/SIM10K/Annotations'
    test_annotation_dir = '../../datasets/VOCdevkit/VOC2012/Annotations'
    # The paths to the image sets.
    train_source_image_set_filename = '../../datasets/SIM10K/ImageSets/Main/trainval10k.txt'
    # The trainset of VOC which has 'car' object is used as train_target.
    train_target_image_set_filename = '../../datasets/VOCdevkit/VOC2012_CAR/ImageSets/Main/train_target.txt'
    # The valset of VOC which has 'car' object is used as test.
    test_target_image_set_filename = '../../datasets/VOCdevkit/VOC2012_CAR/ImageSets/Main/test.txt'
    classes = ['background', 'car']  # Our model will produce predictions for these classes.
    train_classes = ['background', 'car', 'motorbike', 'person']  # The train_source dataset contains these classes.
    train_include_classes = [train_classes.index(one_class) for one_class in classes[1:]]
    # The test_target dataset contains these classes.
    val_classes = ['background', 'car',
                   'aeroplane', 'bicycle', 'bird', 'boat',
                   'bottle', 'bus', 'cat',
                   'chair', 'cow', 'diningtable', 'dog',
                   'horse', 'motorbike', 'person', 'pottedplant',
                   'sheep', 'sofa', 'train', 'tvmonitor']
    val_include_classes = [val_classes.index(one_class) for one_class in classes[1:]]
    # Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
    n_classes = len(classes) - 1
elif DatasetName == 'SIM10K_to_VOC07_resize_400_800':
    resize_image_to = (400, 800)
    # The directories that contain the images.
    train_source_images_dir = '../../datasets/SIM10K/JPEGImages'
    train_target_images_dir = '../../datasets/VOCdevkit/VOC2007/JPEGImages'
    test_target_images_dir = '../../datasets/VOCdevkit/VOC2007/JPEGImages'
    # The directories that contain the annotations.
    train_annotation_dir = '../../datasets/SIM10K/Annotations'
    test_annotation_dir = '../../datasets/VOCdevkit/VOC2007/Annotations'
    # The paths to the image sets.
    train_source_image_set_filename = '../../datasets/SIM10K/ImageSets/Main/trainval10k.txt'
    # The trainset of VOC which has 'car' object is used as train_target.
    train_target_image_set_filename = '../../datasets/VOCdevkit/VOC2007_CAR/ImageSets/Main/train_target.txt'
    # The valset of VOC which has 'car' object is used as test.
    test_target_image_set_filename = '../../datasets/VOCdevkit/VOC2007_CAR/ImageSets/Main/test.txt'
    classes = ['background', 'car']  # Our model will produce predictions for these classes.
    train_classes = ['background', 'car', 'motorbike', 'person']  # The train_source dataset contains these classes.
    train_include_classes = [train_classes.index(one_class) for one_class in classes[1:]]
    # The test_target dataset contains these classes.
    val_classes = ['background', 'car',
                   'aeroplane', 'bicycle', 'bird', 'boat',
                   'bottle', 'bus', 'cat',
                   'chair', 'cow', 'diningtable', 'dog',
                   'horse', 'motorbike', 'person', 'pottedplant',
                   'sheep', 'sofa', 'train', 'tvmonitor']
    val_include_classes = [val_classes.index(one_class) for one_class in classes[1:]]
    # Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
    n_classes = len(classes) - 1
elif DatasetName == 'SIM10K_to_City_resize_400_800':
    resize_image_to = (400, 800)
    # The directories that contain the images.
    train_source_images_dir = '../../datasets/SIM10K/JPEGImages'
    train_target_images_dir = '../../datasets/Cityscapes/JPEGImages'
    test_target_images_dir = '../../datasets/val_data_for_SIM10K_to_cityscapes/JPEGImages'
    # The directories that contain the annotations.
    train_annotation_dir = '../../datasets/SIM10K/Annotations'
    test_annotation_dir = '../../datasets/val_data_for_SIM10K_to_cityscapes/Annotations'
    # The paths to the image sets.
    train_source_image_set_filename = '../../datasets/SIM10K/ImageSets/Main/trainval10k.txt'
    train_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_source.txt'
    test_target_image_set_filename = '../../datasets/val_data_for_SIM10K_to_cityscapes/ImageSets/Main/test.txt'
    classes = ['background', 'car']  # Our model will produce predictions for these classes.
    train_classes = ['background', 'car', 'motorbike', 'person']  # The train_source dataset contains these classes.
    train_include_classes = [train_classes.index(one_class) for one_class in classes[1:]]
    # The test_target dataset contains these classes.
    val_classes = ['background', 'car']
    val_include_classes = 'all'
    # Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
    n_classes = len(classes) - 1
elif DatasetName == 'City_to_foggy0_02_resize_400_800':
    resize_image_to = (400, 800)
    # Introduction of PascalVOC: https://arleyzhang.github.io/articles/1dc20586/
    # The directories that contain the images.
    train_source_images_dir = '../../datasets/Cityscapes/JPEGImages'
    train_target_images_dir = '../../datasets/Cityscapes/JPEGImages'
    test_target_images_dir = '../../datasets/Cityscapes/JPEGImages'
    # The directories that contain the annotations.
    train_annotation_dir = '../../datasets/Cityscapes/Annotations'
    test_annotation_dir = '../../datasets/Cityscapes/Annotations'
    # The paths to the image sets.
    train_source_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_source.txt'
    train_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_target.txt'
    test_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/test.txt'
    # Our model will produce predictions for these classes.
    classes = ['background',
               'person', 'rider', 'car', 'truck',
               'bus', 'train', 'motorcycle', 'bicycle']
    train_classes = classes
    train_include_classes = 'all'
    val_classes = classes
    val_include_classes = 'all'
    # Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
    n_classes = len(classes) - 1
elif DatasetName == 'City_to_foggy0_01_resize_400_800':
    resize_image_to = (400, 800)
    # Introduction of PascalVOC: https://arleyzhang.github.io/articles/1dc20586/
    # The directories that contain the images.
    train_source_images_dir = '../../datasets/Cityscapes/JPEGImages'
    train_target_images_dir = '../../datasets/CITYSCAPES_beta_0_01/JPEGImages'
    test_target_images_dir = '../../datasets/CITYSCAPES_beta_0_01/JPEGImages'
    # The directories that contain the annotations.
    train_annotation_dir = '../../datasets/Cityscapes/Annotations'
    test_annotation_dir = '../../datasets/Cityscapes/Annotations'
    # The paths to the image sets.
    train_source_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_source.txt'
    train_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/train_target.txt'
    test_target_image_set_filename = '../../datasets/Cityscapes/ImageSets/Main/test.txt'
    # Our model will produce predictions for these classes.
    classes = ['background',
               'person', 'rider', 'car', 'truck',
               'bus', 'train', 'motorcycle', 'bicycle']
    train_classes = classes
    train_include_classes = 'all'
    val_classes = classes
    val_include_classes = 'all'
    # Number of positive classes, 8 for domain Cityscapes, 20 for Pascal VOC, 80 for MS COCO, 1 for SIM10K
    n_classes = len(classes) - 1
else:
    raise ValueError('Undefined dataset name.')
# +
if Model_Build == 'New_Model':
# 1: Build the Keras model.
K.clear_session() # Clear previous models from memory.
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.allow_growth = True # dynamically grow the memory used on the GPU
config.log_device_placement = True # to log device placement (on which device the operation ran)
# (nothing gets printed in Jupyter, only if you run it standalone)
sess = tf.Session(config=config)
set_session(sess) # set this TensorFlow session as the default session for Keras
D_model, G_model = ssd_512(image_size=(img_height, img_width, img_channels),
n_classes=n_classes,
G_loss_weights=G_loss_weights,
D_loss_weights=D_loss_weights,
mode='training',
l2_regularization=0.0005,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
normalize_coords=normalize_coords,
subtract_mean=mean_color,
swap_channels=swap_channels)
else:
raise ValueError('Undefined Model_Build. Model_Build should be New_Model or Load_Model')
# +
if Dataset_Build == 'New_Dataset':
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Optional: If you have enough memory, consider loading the images into memory for the reasons explained above.
train_dataset = DataGenerator(dataset='train', load_images_into_memory=False, hdf5_dataset_path=None)
val_dataset = DataGenerator(dataset='val', load_images_into_memory=False, hdf5_dataset_path=None)
# 2: Parse the image and label lists for the training and validation datasets. This can take a while.
# images_dirs, image_set_filenames, and annotations_dirs should have the same length
train_dataset.parse_xml(images_dirs=[train_source_images_dir],
target_images_dirs=[train_target_images_dir],
image_set_filenames=[train_source_image_set_filename],
target_image_set_filenames=[train_target_image_set_filename],
annotations_dirs=[train_annotation_dir],
classes=train_classes,
include_classes=train_include_classes,
exclude_truncated=False,
exclude_difficult=False,
ret=False)
val_dataset.parse_xml(images_dirs=[test_target_images_dir],
image_set_filenames=[test_target_image_set_filename],
annotations_dirs=[test_annotation_dir],
classes=val_classes,
include_classes=val_include_classes,
exclude_truncated=False,
exclude_difficult=True,
ret=False)
# Optional: Convert the dataset into an HDF5 dataset. This will require more disk space, but will
# speed up the training. Doing this is not relevant if you activated the `load_images_into_memory`
# option in the constructor, because in that case the images are in memory already anyway. If you don't
# want to create HDF5 datasets, comment out the two subsequent function calls.
# After creating these h5 files, if you later change the input image resizing, you need to recreate the files;
# otherwise the stored images and labels will not change.
train_dataset.create_hdf5_dataset(file_path=os.path.join(processed_dataset_path, 'dataset_train.h5'),
resize=resize_image_to,
variable_image_size=True,
verbose=True)
val_dataset.create_hdf5_dataset(file_path=os.path.join(processed_dataset_path, 'dataset_test.h5'),
resize=False,
variable_image_size=True,
verbose=True)
train_dataset = DataGenerator(dataset='train',
load_images_into_memory=False,
hdf5_dataset_path=os.path.join(processed_dataset_path, 'dataset_train.h5'),
filenames=train_source_image_set_filename,
target_filenames=train_target_image_set_filename,
filenames_type='text',
images_dir=train_source_images_dir,
target_images_dir=train_target_images_dir)
val_dataset = DataGenerator(dataset='val',
load_images_into_memory=False,
hdf5_dataset_path=os.path.join(processed_dataset_path, 'dataset_test.h5'),
filenames=test_target_image_set_filename,
filenames_type='text',
images_dir=test_target_images_dir)
elif Dataset_Build == 'Load_Dataset':
# 1: Instantiate two `DataGenerator` objects: One for training, one for validation.
# Load dataset from the created h5 file.
train_dataset = DataGenerator(dataset='train',
load_images_into_memory=False,
hdf5_dataset_path=os.path.join(processed_dataset_path, 'dataset_train.h5'),
filenames=train_source_image_set_filename,
target_filenames=train_target_image_set_filename,
filenames_type='text',
images_dir=train_source_images_dir,
target_images_dir=train_target_images_dir)
val_dataset = DataGenerator(dataset='val',
load_images_into_memory=False,
hdf5_dataset_path=os.path.join(processed_dataset_path, 'dataset_test.h5'),
filenames=test_target_image_set_filename,
filenames_type='text',
images_dir=test_target_images_dir)
else:
raise ValueError('Undefined Dataset_Build. Dataset_Build should be New_Dataset or Load_Dataset.')
# +
# 4: Set the image transformations for pre-processing and data augmentation options.
# For the training generator:
ssd_data_augmentation = SSDDataAugmentation_Siamese(img_height=img_height,
img_width=img_width)
# For the validation generator:
convert_to_3_channels = ConvertTo3Channels()
resize = Resize(height=img_height, width=img_width)
# 5: Instantiate an encoder that can encode ground truth labels into the format needed by the SSD loss function.
# The encoder constructor needs the spatial dimensions of the model's predictor layers to create the anchor boxes.
predictor_sizes = [G_model.get_layer('conv4_3_norm_mbox_conf').output_shape[1:3],
G_model.get_layer('fc7_mbox_conf').output_shape[1:3],
G_model.get_layer('conv6_2_mbox_conf').output_shape[1:3],
G_model.get_layer('conv7_2_mbox_conf').output_shape[1:3],
G_model.get_layer('conv8_2_mbox_conf').output_shape[1:3],
G_model.get_layer('conv9_2_mbox_conf').output_shape[1:3],
G_model.get_layer('conv10_2_mbox_conf').output_shape[1:3]]
ssd_input_encoder = SSDInputEncoder(img_height=img_height,
img_width=img_width,
n_classes=n_classes,
predictor_sizes=predictor_sizes,
scales=scales,
aspect_ratios_per_layer=aspect_ratios,
two_boxes_for_ar1=two_boxes_for_ar1,
steps=steps,
offsets=offsets,
clip_boxes=clip_boxes,
variances=variances,
matching_type='multi',
pos_iou_threshold=0.5,
neg_iou_limit=0.5,
normalize_coords=normalize_coords)
# 6: Create the generator handles that will be passed to Keras' `fit_generator()` function.
# The input image and label are first processed by transformations. Then, the label will be further encoded by
# ssd_input_encoder. The encoded labels are classId and offset to each anchor box.
# G_train_generator = train_dataset.generate(batch_size=batch_size,
# generator_type='G',
# shuffle=True,
# transformations=[ssd_data_augmentation],
# label_encoder=ssd_input_encoder,
# returns={'processed_images',
# 'encoded_labels'},
# keep_images_without_gt=False)
# D_train_generator = train_dataset.generate(batch_size=batch_size,
# generator_type='D',
# shuffle=True,
# transformations=[ssd_data_augmentation],
# label_encoder=ssd_input_encoder,
# returns={'processed_images',
# 'encoded_labels'},
# keep_images_without_gt=False)
# val_generator = val_dataset.generate(batch_size=batch_size,
# generator_type='G',
# shuffle=False,
# transformations=[convert_to_3_channels,
# resize],
# label_encoder=ssd_input_encoder,
# returns={'processed_images',
# 'encoded_labels'},
# keep_images_without_gt=False)
# Get the number of samples in the training and validations datasets.
train_dataset_size = train_dataset.get_dataset_size()
val_dataset_size = val_dataset.get_dataset_size()
print("Number of images in the training dataset:\t{:>6}".format(train_dataset_size))
print("Number of images in the validation dataset:\t{:>6}".format(val_dataset_size))
# +
num_epochs_real = 120
steps_per_G_epoch = 100
steps_per_D_epoch = 10
initial_epoch = 0
final_epoch = int(num_epochs_real * 370.0 / steps_per_G_epoch)
val_freq = int(final_epoch / num_epochs_real)
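As a quick sanity check of the arithmetic above (a sketch only, assuming the constant 370.0 is the number of training batches in one real pass over the dataset), the derived schedule values work out as follows:

```python
# Sketch of the epoch arithmetic used above.
# Assumption: 370.0 is the number of batches per real pass over the training set.
num_epochs_real = 120           # intended passes over the data
steps_per_G_epoch = 100         # generator updates per short Keras "epoch"
batches_per_real_epoch = 370.0  # assumed dataset-dependent constant

final_epoch = int(num_epochs_real * batches_per_real_epoch / steps_per_G_epoch)
val_freq = int(final_epoch / num_epochs_real)

print(final_epoch)  # 444 short Keras epochs in total
print(val_freq)     # validation runs every 3 short epochs
```

So each short Keras "epoch" is only 100 generator steps, and the training loop below runs 444 of them to cover roughly 120 real passes over the data.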
def lr_schedule(epoch):
if epoch < 20:
return 0.0005
elif epoch < int(0.7 * final_epoch):
return 0.001
elif epoch < int(0.9 * final_epoch):
return 0.0001
else:
return 0.00001
# def lr_schedule(epoch):
# if epoch < 20:
# return 0.0005
# elif epoch < 800:
# return 0.001
# elif epoch < 1000:
# return 0.0001
# else:
# return 0.00001
# Define model callbacks.
# TODO: Set the filepath under which you want to save the model.
model_checkpoint = ModelCheckpoint(filepath=os.path.join(checkpoint_path, 'epoch-{epoch:02d}_loss-{loss:.4f}_val_loss-{val_loss:.4f}.h5'),
monitor='val_loss',
verbose=1,
save_best_only=False,
save_weights_only=True,
mode='auto',
period=1)
# model_checkpoint.best to the best validation loss from the previous training
# model_checkpoint.best = 4.83704
csv_logger = CSVLogger(filename=os.path.join(checkpoint_path, csv_file_name),
separator=',',
append=True)
learning_rate_scheduler = LearningRateScheduler(schedule=lr_schedule,
verbose=1)
terminate_on_nan = TerminateOnNaN()
TensorBoard_monitor = TensorBoard(log_dir=checkpoint_path)
callbacks = [model_checkpoint,
csv_logger,
learning_rate_scheduler,
terminate_on_nan,
TensorBoard_monitor]
callbacks_no_val = [learning_rate_scheduler,
terminate_on_nan]
# def lr_schedule_D(epoch):
# return 0.0005
# learning_rate_scheduler_D = LearningRateScheduler(schedule=lr_schedule_D,
# verbose=1)
# callbacks_no_val_D = [learning_rate_scheduler_D,
# terminate_on_nan]
# -
for initial_epoch in range(final_epoch):
try:
print('\n')
print('Train generator.')
G_train_generator = train_dataset.generate(batch_size=batch_size,
generator_type='G',
shuffle=True,
transformations=[ssd_data_augmentation],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
if initial_epoch != 0 and initial_epoch % val_freq == 0:
val_generator = val_dataset.generate(batch_size=batch_size,
generator_type='G',
shuffle=False,
transformations=[convert_to_3_channels,
resize],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
history_G = G_model.fit_generator(generator=G_train_generator,
steps_per_epoch=steps_per_G_epoch,
epochs=initial_epoch+1,
callbacks=callbacks,
validation_data=val_generator,
validation_steps=ceil(val_dataset_size/batch_size),
initial_epoch=initial_epoch)
else:
history_G = G_model.fit_generator(generator=G_train_generator,
steps_per_epoch=steps_per_G_epoch,
epochs=initial_epoch+1,
callbacks=callbacks_no_val,
initial_epoch=initial_epoch)
print('\n')
print('Train discriminator.')
D_train_generator = train_dataset.generate(batch_size=batch_size,
generator_type='D',
shuffle=True,
transformations=[ssd_data_augmentation],
label_encoder=ssd_input_encoder,
returns={'processed_images',
'encoded_labels'},
keep_images_without_gt=False)
history_D = D_model.fit_generator(generator=D_train_generator,
steps_per_epoch=steps_per_D_epoch,
epochs=initial_epoch+1,
callbacks=callbacks_no_val,
initial_epoch=initial_epoch)
except ValueError:
pass
# +
# 1: Set the generator for the val_dataset or train_dataset predictions.
predict_generator = train_dataset.generate(batch_size=batch_size,
shuffle=True,
transformations=[ssd_data_augmentation],
label_encoder=None,
returns={'processed_images',
'filenames',
'inverse_transform',
'original_images',
'original_labels'},
keep_images_without_gt=False)
# 2: Generate samples.
batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator)
# -
batch_images, batch_filenames, batch_inverse_transforms, batch_original_images, batch_original_labels = next(predict_generator)
# +
i = 1
print("Image:", batch_filenames[i])
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
plt.figure(figsize=(20, 12))
plt.imshow(batch_images[0][i])
plt.show()
plt.figure(figsize=(20, 12))
plt.imshow(batch_images[1][i])
plt.show()
# +
i = 0 # Which batch item to look at
print("Image:", batch_filenames[i])
print()
print("Ground truth boxes:\n")
print(np.array(batch_original_labels[i]))
# 3: Make predictions.
y_pred = G_model.predict(batch_images)[-1]
# Now let's decode the raw predictions in `y_pred`.
# Had we created the model in 'inference' or 'inference_fast' mode,
# then the model's final layer would be a `DecodeDetections` layer and
# `y_pred` would already contain the decoded predictions,
# but since we created the model in 'training' mode,
# the model outputs raw predictions that still need to be decoded and filtered.
# This is what the `decode_detections()` function is for.
# It does exactly what the `DecodeDetections` layer would do,
# but using Numpy instead of TensorFlow (i.e. on the CPU instead of the GPU).
# `decode_detections()` with default argument values follows the procedure of the original SSD implementation:
# First, a very low confidence threshold of 0.01 is applied to filter out the majority of the predicted boxes,
# then greedy non-maximum suppression is performed per class with an intersection-over-union threshold of 0.45,
# and out of what is left after that, the top 200 highest confidence boxes are returned.
# Those settings are for precision-recall scoring purposes though.
# In order to get some usable final predictions, we'll set the confidence threshold much higher, e.g. to 0.5,
# since we're only interested in the very confident predictions.
# 4: Decode the raw predictions in `y_pred`.
y_pred_decoded = decode_detections(y_pred,
confidence_thresh=0.35,
iou_threshold=0.4,
top_k=200,
normalize_coords=normalize_coords,
img_height=img_height,
img_width=img_width)
# We made the predictions on the resized images,
# but we'd like to visualize the outcome on the original input images,
# so we'll convert the coordinates accordingly.
# Don't worry about that opaque `apply_inverse_transforms()` function below,
# in this simple case it just applies `(* original_image_size / resized_image_size)` to the box coordinates.
# 5: Convert the predictions for the original image.
y_pred_decoded_inv = apply_inverse_transforms(y_pred_decoded, batch_inverse_transforms)
np.set_printoptions(precision=2, suppress=True, linewidth=90)
print("Predicted boxes:\n")
print(' class conf xmin ymin xmax ymax')
print(y_pred_decoded_inv[i])
# Finally, let's draw the predicted boxes onto the image.
# Each predicted box says its confidence next to the category name.
# The ground truth boxes are also drawn onto the image in green for comparison.
# 5: Draw the predicted boxes onto the image
# Set the colors for the bounding boxes
colors = plt.cm.hsv(np.linspace(0, 1, n_classes+1)).tolist()
plt.figure(figsize=(20, 12))
plt.imshow(batch_original_images[i])
current_axis = plt.gca()
for box in batch_original_labels[i]:
xmin = box[1]
ymin = box[2]
xmax = box[3]
ymax = box[4]
label = '{}'.format(classes[int(box[0])])
current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color='green', fill=False, linewidth=2))
current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor': 'green', 'alpha': 1.0})
# for box in y_pred_decoded_inv[i]:
# xmin = box[2]
# ymin = box[3]
# xmax = box[4]
# ymax = box[5]
# color = colors[int(box[0])]
# label = '{}: {:.2f}'.format(classes[int(box[0])], box[1])
# current_axis.add_patch(plt.Rectangle((xmin, ymin), xmax-xmin, ymax-ymin, color=color, fill=False, linewidth=2))
# current_axis.text(xmin, ymin, label, size='x-large', color='white', bbox={'facecolor': color, 'alpha': 1.0})
# -
# Source notebook: src/ssd512_siamese_train.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''base'': conda)'
# metadata:
# interpreter:
# hash: c71059875956ddddd8fc05948e3759e391222d7e4133faeaedb7f7789eb644e6
# name: 'Python 3.7.4 64-bit (''base'': conda)'
# ---
import jinfo as j
# #### Create simple DNA sequence objects and retrieve sequence, label, length, molecular weight and melting temp:
#
# + tags=[]
seq_1 = j.DNASeq("ATGAGGATAGATCCCTATTAA", label="simple_dna_sequence")
print(seq_1)
print(seq_1.len)
print(seq_1.MW())
print(seq_1.tm())
# -
# #### Can get the mRNA transcription of a DNA sequence object, and probe features:
# + tags=[]
seq_1_mRNA = j.RNASeq(seq_1.transcribe(), label="simple_rna_sequence") #Should transcribe/translate return an RNASeq/AASeq object - print would still work?
print(seq_1_mRNA)
print(seq_1_mRNA.reverse_transcribe())
print(seq_1_mRNA.MW())
# -
# #### Translate the DNA or RNA sequences to get a protein:
# + tags=[]
seq_1_prot = j.AASeq(seq_1.translate(), label="simple_protein_sequence")
print(seq_1_prot)
print(seq_1_prot.MW())
# -
# #### Can perform DNA or protein alignments:
# (requires MUSCLE backend)
# + tags=[]
seq_2 = j.DNASeq("ATGAGGAACTTGATAGATCCCTA", label="simple_dna_homolog_1")
seq_3 = j.DNASeq("ATGAGGATAGATCCTTACCTCTA", label="simple_dna_homolog_2")
seq_4 = j.DNASeq("ATGAGGATAGAGGCCTCCCTA", label="simple_dna_homolog_3")
simple_alignment = seq_1.align(seq_2)
print(simple_alignment)
# Type of underlying seq object is preserved:
type(simple_alignment.seqs[0])
# + tags=[]
multiple_alignment = j.multialign([seq_1, seq_2, seq_3, seq_4])
print(multiple_alignment)
# -
# #### From alignment objects phylogenetic trees can be calculated:
# (requires FastTree backend)
# + tags=[]
simple_tree = multiple_alignment.calc_tree()
print(simple_tree.tree) # Newick format tree...
# -
# #### For ML applications One-hot encoding DNA is helpful:
# + tags=[]
print(seq_1.one_hot())
# -
# #### You can read sequence objects and alignments from fasta files:
# + tags=[]
# Example real workflow using 10 feline coronavirus spike protein variants:
# Import sequences into a list of seq objects:
spike_homologs = j.seq_list_from_fasta("docs/sequence.fasta", seq_obj=j.AASeq)
# Check out the first protein:
print(spike_homologs[0])
# + tags=[]
# Align the homologues:
feline_spike_alignment = j.multialign(spike_homologs)
# Show the percentage identity array from the alignment:
low_id_alignment = feline_spike_alignment.identity_filter(95, show_id_array=True)
# + tags=[]
# Calculate phylogenetic trees from the alignments:
tree = feline_spike_alignment.calc_tree()
print(tree.tree)
tree2 = low_id_alignment.calc_tree()
print(tree2.tree)
# Source notebook: docs/demo.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# In this lesson, I'll be talking about **imports** in Python, giving some tips for working with unfamiliar libraries (and the objects they return), and digging into the guts of Python just a bit to talk about **operator overloading**.
# ## Imports
#
# So far we've talked about types and functions which are built-in to the language.
#
# But one of the best things about Python (especially if you're a data scientist) is the vast number of high-quality custom libraries that have been written for it.
#
# Some of these libraries are in the "standard library", meaning you can find them anywhere you run Python. Other libraries can be easily added, even if they aren't always shipped with Python.
#
# Either way, we'll access this code with **imports**.
#
# We'll start our example by importing `math` from the standard library.
# +
import math
print("It's math! It has type {}".format(type(math)))
# -
# `math` is a module. A module is just a collection of variables (a *namespace*, if you like) defined by someone else. We can see all the names in `math` using the built-in function `dir()`.
print(dir(math))
# We can access these variables using dot syntax. Some of them refer to simple values, like `math.pi`:
print("pi to 4 significant digits = {:.4}".format(math.pi))
# But most of what we'll find in the module are functions, like `math.log`:
math.log(32, 2)
# Of course, if we don't know what `math.log` does, we can call `help()` on it:
help(math.log)
# We can also call `help()` on the module itself. This will give us the combined documentation for *all* the functions and values in the module (as well as a high-level description of the module). Click the "output" button to see the whole `math` help page.
# + _kg_hide-output=true
help(math)
# -
# ### Other import syntax
#
# If we know we'll be using functions in `math` frequently we can import it under a shorter alias to save some typing (though in this case "math" is already pretty short).
import math as mt
mt.pi
# > You may have seen code that does this with certain popular libraries like Pandas, Numpy, Tensorflow, or Matplotlib. For example, it's a common convention to `import numpy as np` and `import pandas as pd`.
# The `as` simply renames the imported module. It's equivalent to doing something like:
import math
mt = math
# Wouldn't it be great if we could refer to all the variables in the `math` module by themselves? i.e. if we could just refer to `pi` instead of `math.pi` or `mt.pi`? Good news: we can do that.
from math import *
print(pi, log(32, 2))
# `import *` makes all the module's variables directly accessible to you (without any dotted prefix).
#
# Bad news: some purists might grumble at you for doing this.
#
# Worse: they kind of have a point.
from math import *
from numpy import *
print(pi, log(32, 2))
# What the what? But it worked before!
#
# These kinds of "star imports" can occasionally lead to weird, difficult-to-debug situations.
#
# The problem in this case is that the `math` and `numpy` modules both have functions called `log`, but they have different semantics. Because we import from `numpy` second, its `log` overwrites (or "shadows") the `log` variable we imported from `math`.
#
# A good compromise is to import only the specific things we'll need from each module:
from math import log, pi
from numpy import asarray
# ### Submodules
#
# We've seen that modules contain variables which can refer to functions or values. Something to be aware of is that they can also have variables referring to *other modules*.
import numpy
print("numpy.random is a", type(numpy.random))
print("it contains names such as...",
dir(numpy.random)[-15:]
)
# So if we import `numpy` as above, then calling a function in the `random` "submodule" will require *two* dots.
# Roll 10 dice
rolls = numpy.random.randint(low=1, high=6, size=10)
rolls
# # Oh the places you'll go, oh the objects you'll see
#
# So after 6 lessons, you're a pro with ints, floats, bools, lists, strings, and dicts (right?).
#
# Even if that were true, it doesn't end there. As you work with various libraries for specialized tasks, you'll find that they define their own types which you'll have to learn to work with. For example, if you work with the graphing library `matplotlib`, you'll be coming into contact with objects it defines which represent Subplots, Figures, TickMarks, and Annotations. `pandas` functions will give you DataFrames and Series.
#
# In this section, I want to share with you a quick survival guide for working with strange types.
#
# ## Three tools for understanding strange objects
#
# In the cell above, we saw that calling a `numpy` function gave us an "array". We've never seen anything like this before (not in this course anyways). But don't panic: we have three familiar builtin functions to help us here.
#
# **1: `type()`** (what is this thing?)
type(rolls)
# **2: `dir()`** (what can I do with it?)
print(dir(rolls))
# What am I trying to do with this dice roll data? Maybe I want the average roll, in which case the "mean"
# method looks promising...
rolls.mean()
# Or maybe I just want to get back on familiar ground, in which case I might want to check out "tolist"
rolls.tolist()
# **3: `help()`** (tell me more)
# That "ravel" attribute sounds interesting. I'm a big classical music fan.
help(rolls.ravel)
# + _kg_hide-output=true
# Okay, just tell me everything there is to know about numpy.ndarray
# (Click the "output" button to see the novel-length output)
help(rolls)
# -
# (Of course, you might also prefer to check out [the online docs](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.ndarray.html))
# ### Operator overloading
#
# What's the value of the below expression?
[3, 4, 1, 2, 2, 1] + 10
# What a silly question. Of course it's an error.
#
# But what about...
rolls + 10
# We might think that Python strictly polices how pieces of its core syntax behave such as `+`, `<`, `in`, `==`, or square brackets for indexing and slicing. But in fact, it takes a very hands-off approach. When you define a new type, you can choose how addition works for it, or what it means for an object of that type to be equal to something else.
#
# The designers of lists decided that adding them to numbers wasn't allowed. The designers of `numpy` arrays went a different way (adding the number to each element of the array).
#
# Here are a few more examples of how `numpy` arrays interact unexpectedly with Python operators (or at least differently from lists).
# At which indices are the dice less than or equal to 3?
rolls <= 3
xlist = [[1,2,3],[2,4,6],]
# Create a 2-dimensional array
x = numpy.asarray(xlist)
print("xlist = {}\nx =\n{}".format(xlist, x))
# Get the last element of the second row of our numpy array
x[1,-1]
# Get the last element of the second sublist of our nested list?
xlist[1,-1]
# numpy's `ndarray` type is specialized for working with multi-dimensional data, so it defines its own logic for indexing, allowing us to index by a tuple to specify the index at each dimension.
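To make tuple indexing concrete, here is a small sketch (using the same 2-dimensional array as above) showing that each position in the tuple can also be a slice:

```python
import numpy

x = numpy.asarray([[1, 2, 3], [2, 4, 6]])

# A (row, column) tuple picks out a single element: row 1, last column
print(x[1, -1])   # 6

# A slice in the row position selects that column from every row
print(x[:, -1])   # [3 6]

# Slices work in the column position too: first two columns of row 0
print(x[0, :2])   # [1 2]
```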
#
# ### When does 1 + 1 not equal 2?
#
# Things can get weirder than this. You may have heard of (or even used) tensorflow, a Python library popularly used for deep learning. It makes extensive use of operator overloading.
import tensorflow as tf
# Create two constants, each with value 1
a = tf.constant(1)
b = tf.constant(1)
# Add them together to get...
a + b
# `a + b` isn't 2, it is (to quote tensorflow's documentation)...
#
# > a symbolic handle to one of the outputs of an `Operation`. It does not hold the values of that operation's output, but instead provides a means of computing those values in a TensorFlow `tf.Session`.
#
#
# It's important just to be aware of the fact that this sort of thing is possible and that libraries will often use operator overloading in non-obvious or magical-seeming ways.
#
# Understanding how Python's operators work when applied to ints, strings, and lists is no guarantee that you'll be able to immediately understand what they do when applied to a tensorflow `Tensor`, or a numpy `ndarray`, or a pandas `DataFrame`.
#
# Once you've had a little taste of DataFrames, for example, an expression like the one below starts to look appealingly intuitive:
#
# ```python
# # Get the rows with population over 1m in South America
# df[(df['population'] > 10**6) & (df['continent'] == 'South America')]
# ```
#
# But why does it work? The example above features something like **5** different overloaded operators. What's each of those operations doing? It can help to know the answer when things start going wrong.
# #### Curious how it all works?
#
# Have you ever called `help()` or `dir()` on an object and wondered what the heck all those names with the double-underscores were?
print(dir(list))
# This turns out to be directly related to operator overloading.
#
# When Python programmers want to define how operators behave on their types, they do so by implementing methods with special names beginning and ending with 2 underscores such as `__lt__`, `__setattr__`, or `__contains__`. Generally, names that follow this double-underscore format have a special meaning to Python.
#
# So, for example, the expression `x in [1, 2, 3]` is actually calling the list method `__contains__` behind-the-scenes. It's equivalent to (the much uglier) `[1, 2, 3].__contains__(x)`.
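As a minimal sketch (a hypothetical type, not something defined by any library used here), this is all it takes to hook into `+` and `in`:

```python
class Bag:
    """A tiny container type that overloads + and `in`."""
    def __init__(self, items):
        self.items = list(items)

    def __add__(self, other):
        # `bag1 + bag2` builds a new Bag holding both sets of items
        return Bag(self.items + other.items)

    def __contains__(self, x):
        # `x in bag` delegates to the underlying list
        return x in self.items

b = Bag([1, 2]) + Bag([3])
print(b.items)  # [1, 2, 3]
print(2 in b)   # True
```

Python sees `Bag([1, 2]) + Bag([3])` and calls `Bag.__add__` behind the scenes, exactly as `x in [1, 2, 3]` calls `list.__contains__`.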
#
# If you're curious to learn more, you can check out [Python's official documentation](https://docs.python.org/3.4/reference/datamodel.html#special-method-names), which describes many, many more of these special "underscores" methods.
#
# We won't be defining our own types in these lessons (if only there was time!), but I hope you'll get to experience the joys of defining your own wonderful, weird types later down the road.
# # Your turn!
#
# Head over to [the very last Exercises notebook](https://www.kaggle.com/kernels/fork/1275190) for one more round of coding questions involving imports, working with unfamiliar objects, and, of course, more gambling.
# Source notebook: notebooks/rendered/python/tut_7.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="_S3cEMHiCg3o"
#
# **Data source :** [Mobile price Dataset]( https://www.kaggle.com/iabhishekofficial/mobile-price-classification)
#
#
# + [markdown] id="v2LsUr4MDxkQ"
# # Loading Libraries
# + id="LNNwKNUHhOKy"
# %%capture
# !pip install category-encoders
# Standard Imports
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import category_encoders as ce
# Classifiers to Use
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
# Other needed imports
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import StackingClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings("ignore")
# + [markdown] id="g0PuAbPMD6Id"
# # Loading Data
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="-yp1dTUChu3x" outputId="3c53bb71-0a2d-4287-9941-927e9a69fccf"
df=pd.read_csv('https://raw.githubusercontent.com/helah20/Dataset/main/train.csv')
dft=pd.read_csv('https://raw.githubusercontent.com/helah20/Dataset/main/test.csv')
df.head()
# + [markdown] id="x1f0HJwEEDOO"
# # Scaling Data
# + id="_wABprA4kd1E"
# Scale continuous features and replace in the original df
cols=['blue','dual_sim','four_g','three_g','touch_screen','wifi','price_range','n_cores']
scaler = StandardScaler()  # scale only the continuous features; exclude the boolean columns and n_cores (one-hot encoded)
scaled_df = pd.DataFrame(scaler.fit_transform(df.drop(cols, axis=1)))
# + [markdown] id="pQgcX-6EEaHn"
# # Modeling :
# + [markdown] id="5Uv7YIfRDI0H"
# #### define the Models
# + id="N6C89g7njZi4"
# Create a dictionary holding all Classifiers and preprocessing techniques
models = {
"rf": make_pipeline(ce.OneHotEncoder(), SimpleImputer(strategy="median"),RandomForestClassifier()),
"knn": make_pipeline(ce.OneHotEncoder(), SimpleImputer(strategy="median"),KNeighborsClassifier()),
"dt": make_pipeline(ce.OneHotEncoder(),SimpleImputer(strategy="median") ,DecisionTreeClassifier())
}
# + [markdown] id="J8jLZuNLDK4x"
# #### define a cross-validation function to evaluate our models
# + id="aijibTXJC1Tz"
# evaluate a given model using cross-validation
def evaluate_model(model, X, y):
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=2, random_state=9000)
scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1)
return scores
# + [markdown] id="NGG0jUPPDTho"
# #### Split the data
# + id="k7CP0awZC1fI"
# Split data
X = df.drop("price_range", axis=1)
y = df["price_range"]
# + [markdown] id="2ZXDAacLDVKw"
# #### define the baseline model
# + colab={"base_uri": "https://localhost:8080/"} id="6CG0KDyAC14t" outputId="e78edce5-833c-43f9-b81a-5bf39611787f"
# Baseline Score
print("Model Baseline")
df["price_range"].value_counts(normalize=True)[0]
# + [markdown] id="rmdXysNEDdT9"
# #### applying the models
# + colab={"base_uri": "https://localhost:8080/"} id="F6JtwSiJkSch" outputId="141305e4-e901-451a-92e7-91e1da55d290"
# Evaluate models on their own
result_ls = []
model_ls = []
# Iterate over the models dict and evaluate each separately
for key, value in models.items():
# gather K-Fold cross-validation scores
score = evaluate_model(value, X, y)
result_ls.append(score) # save results
model_ls.append(key) # save model name
print(f"Model: {key}, Score: {np.mean(score)}")
# + [markdown] id="DyVyC8s3zYbi"
# #### comparing Models
#
# Looking at the cross-validation score boxplots, we can see that the **KNN** model performs noticeably **better** than the others.
# + colab={"base_uri": "https://localhost:8080/", "height": 523} id="PRG8oBx0soTO" outputId="7e4d7bdc-8d8e-459f-fb11-88b98e1d4964"
plt.style.use("seaborn-whitegrid")
plt.figure(figsize=(10,8))
plt.boxplot(result_ls, labels=model_ls, showmeans=True)
plt.title("Cross Val Scores of Each Base Model",fontsize=18)
plt.xlabel("Model",fontsize=18)
plt.xticks(fontsize=15)
plt.axhline(y=.25, linestyle="--", c="r", label="Baseline")
plt.legend()
plt.ylabel("Accuracy",fontsize=18);
# + [markdown] id="-mzWwC7b3YtG"
# #### define the stacked model
#
# Let's see if we can do better with [model stacking](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.StackingClassifier.html).
# + id="xjEw5TSJuDZ6"
# Stacking Classifier expects a list of (name, estimator) tuples
stack_ls = [
    ("rf", make_pipeline(ce.OneHotEncoder(), SimpleImputer(strategy="median"), RandomForestClassifier())),
    ("knn", make_pipeline(ce.OneHotEncoder(), SimpleImputer(strategy="median"), KNeighborsClassifier())),
    ("dt", make_pipeline(ce.OneHotEncoder(), SimpleImputer(strategy="median"), DecisionTreeClassifier()))
]
# Create Stacked Classifier
stack_model = StackingClassifier(stack_ls, cv=5)
# Add stack_model to models dictionary
models["stacked"] = stack_model
# + colab={"base_uri": "https://localhost:8080/"} id="OoSy_vyU5EGE" outputId="6d285ef4-bc5f-458e-bd83-f2ab139fcbb5"
# Evaluate All Models
result_ls = []
model_ls = []
# Iterate over the models dict and evaluate each separately
for key, value in models.items():
    # gather KFold cross-validation scores
score = evaluate_model(value, X, y)
result_ls.append(score) # save results
model_ls.append(key) # save model name
print(f"Model: {key}, Score: {np.mean(score)}")
# + [markdown] id="QRltayzh6HE3"
# #### comparing all Models
#
# As we can see, our stacked estimator performed better than all of the base estimators, so we will use the stacked estimator.
# + colab={"base_uri": "https://localhost:8080/", "height": 523} id="dWlmWGk85eN9" outputId="c6ea37d4-b262-444b-ad7e-d4392956f8b8"
plt.style.use("seaborn-whitegrid")
plt.figure(figsize=(10,8))
plt.boxplot(result_ls, labels=model_ls, showmeans=True)
plt.xticks(fontsize=15)
plt.title("Cross Val Scores of Each Base Model and Stacked Model",fontsize=18)
plt.xlabel("Model",fontsize=18)
plt.axhline(0.25, linestyle="--", c="r", label="Baseline")
plt.legend()
plt.ylabel("Accuracy",fontsize=18);
# + [markdown] id="UblQBvvkHsou"
# # Optimizing the stacked model
#
# Now let's find the best parameters using [grid search](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) and fit our model.
# + [markdown] id="gQxMZOJa7AWO"
# Grid Search
# + id="c3oJ_ROT5w9B"
param_grid = {
"rf__randomforestclassifier__min_samples_leaf": [2,3],
"rf__randomforestclassifier__min_samples_split": [2, 3],
"rf__randomforestclassifier__n_estimators":[300,500],
"dt__onehotencoder__drop_invariant": [True, False],
"dt__simpleimputer__strategy": ["mean", "median"],
"dt__decisiontreeclassifier__criterion": ["gini", "entropy"],
"dt__decisiontreeclassifier__ccp_alpha": [0.0,0.030],
"knn__kneighborsclassifier__n_neighbors": [5,7],
"knn__kneighborsclassifier__metric": ["minkowski", "euclidean"],
}
grid_model = GridSearchCV(stack_model, param_grid=param_grid, cv=5, scoring="accuracy", n_jobs=-1, verbose=2)
grid_model.fit(X,y)
# + id="8jgSNEcn7ude" outputId="7c8e73dc-871e-4660-875e-bd80b9526285"
grid_model.best_score_
# + id="sqzYG2jo8bix" outputId="018fc6ab-8478-4286-ba60-330571b6931d"
grid_model.best_params_
# + id="Nee5T9LZfRVq"
# Pull out best model CV results to plot alongside all models tested
grid_df = pd.DataFrame(grid_model.cv_results_)
grid_cv_score_best = grid_df[grid_df["rank_test_score"] == 1].loc[:, "split0_test_score":"split4_test_score"].T[237].values
# + id="hRehq9ZmD81K" outputId="177d1831-b928-4900-a00f-1ba8482ea676"
grid_cv_score_best
# + id="b6EVK-PnD81L" outputId="82f1f697-8a38-4268-cbb3-9073cd257196"
grid_df.values
# + id="gcUz3maJfRVr"
# Add results to running list of other model scores
result_ls.append(grid_cv_score_best)
model_ls.append("grid")
# + id="ieU1bs6SD81M" outputId="e1280fe9-b2e8-434b-e634-ef60a7fdb13e"
model_ls # make sure that we have only one grid
# + id="EJCG_rUDD81N" outputId="45bc4210-785e-4126-bd12-cf80e1af2f05"
result_ls # make sure we had one array for each model
# + id="hAgZ0yKFfRVr" outputId="2e90c80c-0947-4730-c1b0-333cfcf126b1"
# plot results
plt.style.use("seaborn-whitegrid")
plt.figure(figsize=(20,20))
plt.boxplot(result_ls, labels=model_ls, showmeans=True)
plt.title("Cross Val Scores of Each Base Model, Stacked Model, and Best Tuned Stacked Model", fontsize="xx-large")
plt.xlabel("Model", fontsize="x-large")
plt.xticks(fontsize="large")
plt.yticks(fontsize="large")
plt.axhline(0.25, linestyle="--", c="r", label="Baseline Model")
plt.legend()
plt.ylabel("Accuracy", fontsize="x-large")
plt.grid(False, axis="x")  # hide the vertical gridlines
plt.tight_layout()  # call after plotting so it acts on this figure
plt.savefig("compare_model.png", dpi=150);
# + [markdown] id="9DfmszEZEiDL"
# #### split the data and apply the best model
# + id="li-bdCWID81O" outputId="21307884-2eb4-4726-d188-1165488dd1e5"
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)
grid_model.fit(X_train, y_train).score(X_test, y_test)
|
notebook/Mobile_Price__Classification_ML.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: mydsp
# language: python
# name: mydsp
# ---
# [<NAME>](https://orcid.org/0000-0001-7225-9992),
# Professorship Signal Theory and Digital Signal Processing,
# [Institute of Communications Engineering (INT)](https://www.int.uni-rostock.de/),
# Faculty of Computer Science and Electrical Engineering (IEF),
# [University of Rostock, Germany](https://www.uni-rostock.de/en/)
#
# # Tutorial Signals and Systems (Signal- und Systemtheorie)
#
# Summer Semester 2021 (Bachelor Course #24015)
#
# - lecture: https://github.com/spatialaudio/signals-and-systems-lecture
# - tutorial: https://github.com/spatialaudio/signals-and-systems-exercises
#
# WIP...
# The project is currently under heavy development while adding new material for the summer semester 2021
#
# Feel free to contact lecturer [<EMAIL>](https://orcid.org/0000-0002-3010-0294)
#
# ## Übung / Exercise 4
# # Sine Integral
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import sici
# +
N = 7
x = np.linspace(0,N*2*np.pi,2**10)
si, _ = sici(x)
plt.figure(figsize=(6, 4))
plt.plot(x,si,lw=2)
plt.xticks(np.arange(0,N+1)*2*np.pi, ['0',r'$2\pi$',r'$4\pi$',r'$6\pi$',r'$8\pi$',r'$10\pi$',r'$12\pi$',r'$14\pi$'])
plt.yticks(np.arange(0,4)*np.pi/4, ['0',r'$\pi/4$',r'$\pi/2$',r'$3\pi/4$'])
plt.xlim(0,14*np.pi)
plt.ylim(0,3/4*np.pi)
plt.xlabel(r'$\omega$')
plt.ylabel(r'$\mathrm{Si}(\omega) = \int_0^\omega\,\,\,\frac{\sin \nu}{\nu}\,\,\,\mathrm{d}\nu$')
#plt.title('Sine Integral Si(x)')
plt.grid(True)
plt.savefig('sine_intergral_0A13DD5E57.pdf')
# -
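# As a quick cross-check of `scipy.special.sici`, the sine integral can also be approximated by direct numerical quadrature. A minimal sketch using only numpy (`np.sinc(x/np.pi)` equals $\sin(x)/x$ and handles the removable singularity at zero):

```python
import numpy as np

def si_numeric(w, n=100_000):
    """Approximate Si(w) = int_0^w sin(nu)/nu dnu with the trapezoidal rule."""
    nu = np.linspace(0.0, w, n)
    # np.sinc(t) = sin(pi*t)/(pi*t), so np.sinc(nu/np.pi) = sin(nu)/nu (and 1 at nu = 0)
    integrand = np.sinc(nu / np.pi)
    dx = nu[1] - nu[0]
    return dx * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

# Si has its first (global) maximum at w = pi, and tends to pi/2 as w grows
print(si_numeric(np.pi))       # approximately 1.8519
print(si_numeric(50 * np.pi))  # approaching pi/2
```

This reproduces both features visible in the plot above: the global maximum at $\omega = \pi$ and the asymptote at $\pi/2$.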
# ## Copyright
#
# This tutorial is provided as Open Educational Resource (OER), to be found at
# https://github.com/spatialaudio/signals-and-systems-exercises
# accompanying the OER lecture
# https://github.com/spatialaudio/signals-and-systems-lecture.
# Both are licensed under a) the Creative Commons Attribution 4.0 International
# License for text and graphics and b) the MIT License for source code.
# Please attribute material from the tutorial as *<NAME>,
# Continuous- and Discrete-Time Signals and Systems - A Tutorial Featuring
# Computational Examples, University of Rostock* with
# ``main file, github URL, commit number and/or version tag, year``.
|
ft/sine_intergral_0A13DD5E57.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # KOBE Bryant Shot Selection
# <NAME> marked his retirement from the NBA by scoring 60 points in his final game as a Los Angeles Laker on Wednesday, April 12, 2016. Drafted into the NBA at the age of 17, Kobe earned the sport’s highest accolades throughout his long career.
#
# Using 20 years of data on Kobe's swishes and misses, can you predict which shots will find the bottom of the net? This competition is well suited for practicing classification basics, feature engineering, and time series analysis. Practice got Kobe an eight-figure contract and 5 championship rings. What will it get you?
# ---
# # Analysis and Conclusion
# ---
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# # 0. Read In Data
m1_df = pd.read_csv('../data/m1_summary.csv', index_col='Unnamed: 0')
m1_df
m2_df = pd.read_csv('../data/m2_summary.csv', index_col='Unnamed: 0')
m2_df
m3_df = pd.read_csv('../data/m3_summary.csv', index_col='Unnamed: 0')
m3_df
m4_df = pd.read_csv('../data/m4_summary.csv', index_col='Unnamed: 0')
m4_df
m5_df = pd.read_csv('../data/m5_summary.csv', index_col='Unnamed: 0')
m5_df
# ## Combine into a Single DataFrame
# **We want to ensure our analysis was correct and combine all of our results into a single dataframe.**
df = pd.concat([m1_df, m2_df, m3_df, m4_df, m5_df])
df
# **We then sort that dataframe by accuracy.**
df.sort_values(by='Final Acc', ascending=False, inplace=True)
df
df.to_csv('../data/all_m_summary.csv')
# # Which dataset performed the best?
# ## Dataset 1
d1 = []
for j in df.index:
if 'Dataset 1' in j:
d1.append(j)
d1_df = df.loc[d1]
d1_df
d1_df.mean()
# ## Dataset 2
d2 = []
for j in df.index:
if 'Dataset 2' in j:
d2.append(j)
d2_df = df.loc[d2]
d2_df
d2_df.mean()
# ## Dataset 3
d3 = []
for j in df.index:
if 'Dataset 3' in j:
d3.append(j)
d3_df = df.loc[d3]
d3_df
d3_df.mean()
# ## Dataset 4
d4 = []
for j in df.index:
if 'Dataset 4' in j:
d4.append(j)
d4_df = df.loc[d4]
d4_df
d4_df.mean()
# ## Dataset 5
d5 = []
for j in df.index:
if 'Dataset 5' in j:
d5.append(j)
d5_df = df.loc[d5]
d5_df
d5_df.mean()
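# As a side note, the five per-dataset blocks above can be collapsed into a single `groupby` once we extract the dataset label from the index strings. A sketch on a toy stand-in dataframe (the real index strings, e.g. "XGBoost Dataset 1", are assumed to contain "Dataset k"):

```python
import pandas as pd

# toy stand-in for the combined summary dataframe
summary = pd.DataFrame(
    {"Final Acc": [0.6176, 0.6050, 0.5920]},
    index=["XGBoost Dataset 1", "RandomForest Dataset 1", "XGBoost Dataset 2"],
)

# pull the "Dataset k" label out of each index string and average per dataset
dataset = summary.index.str.extract(r"(Dataset \d)", expand=False)
print(summary.groupby(dataset).mean())
```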
# # Overall Analysis:
# 1. **Given that the chance to make a shot under any condition is probabilistic, it logically follows that our accuracy in predicting a made or missed shot is not close to 100%. Kobe is considered one of the greatest shooters and scorers of all time, and even he had no shot type from which he was 100% accurate.**
#
#
# 2. **We appear to be stuck in a range from 59-62%, with our best performing model breaking away from the others slightly with 61.76% accuracy and comparable misclassification scores to other top performing models.**
#
#
# 3. This is a significant increase from our Null Model of 55%, though honestly I was hoping we could get closer to 80%.
#
#
# 4. On average, Dataset 3 was our best performing dataset. Dataset 3 used ```combined_shot_type```, ```shot_distance```, and ```shot_zone_area```. This tells us that our most important predictors were where the shot was from and what kind of shot it was. Other factors, such as time left, opponent, period, and other features were less important, as we hypothesized.
#
#
# 5. Despite Dataset 3 performing best on average, **XGBoost on Dataset 1** was our best performing model. This intuitively makes sense when we consider that, though Dataset 3 contains our MOST IMPORTANT variables, the other variables still add predictive value.
|
code/.ipynb_checkpoints/07_Analysis-and-Conclusion-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Regression Regularization Techniques - Ridge and Lasso Regression
#
# ## What We'll Accomplish in This Notebook
#
# In this notebook we will:
#
# <ul>
# <li>See what happens to our coefficients when we overfit with a polynomial regression example,</li>
# <li>Introduce the main framework behind regularization,</li>
# <li>Discuss how Ridge and Lasso regression work and apply it to our polynomial regression problem,</li>
# <li>See hyperparameters for the first time,</li>
# <li>Learn about scaling data,</li>
# <li>Learn how to use Lasso for feature selection.</li>
# </ul>
#
# ### Math Warning
#
# This notebook is one of our more math-heavy notebooks. When presenting the idea behind regression regularization I delve into the mathematical theory behind it. If you're not a math person, don't be daunted. It's not the most important thing to learn all the math details by heart; it's more important to learn the overarching idea behind the algorithm and how to implement it in practice.
# +
# import the packages we'll use
## For data handling
import pandas as pd
import numpy as np
from numpy import meshgrid
## For plotting
import matplotlib.pyplot as plt
import seaborn as sns
## This sets the plot style
## to have a grid on a white background
sns.set_style("whitegrid")
# -
# ## Coefficient Explosions
#
# Let's return to our example from the `Bias-Variance Tradeoff` notebook.
# +
## Generate data
x = np.linspace(-3,3,100)
y = x*(x-1) + 1.2*np.random.randn(100)
## plot the data alongside the true relationship
plt.figure(figsize = (10,10))
plt.scatter(x,y, label="Observed Data")
plt.plot(x,x*(x-1),'k', label="True Relationship")
plt.xlabel("x",fontsize=16)
plt.ylabel("y",fontsize=16)
plt.legend(fontsize=14)
plt.show()
# -
## Import the packages we'll need
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
## Make an array of zeros that will hold some data for me
n = 26
coef_holder = np.zeros((n,n))
## Now we'll fit the data with polynomials degree 1 through n
for i in range(1,n+1):
## Make a pipe
pipe = Pipeline([('poly',PolynomialFeatures(i,include_bias = False)),
('reg',LinearRegression())])
## fit the data
pipe.fit(x.reshape(-1,1),y)
## store the coefficient estimates
coef_holder[i-1,:i] = np.round(pipe['reg'].coef_,3)
## Display the coefficient estimates as a dataframe
pd.DataFrame(coef_holder,
columns = ["x^" + str(i) for i in range(1,n+1)],
index = [str(i) + "_deg_poly" for i in range(1,n+1)])
# +
## A plot to remind ourselves of the overfitting
plt.figure(figsize=(12,12))
plt.scatter(x, y, alpha = .8, label="Observed Data")
plt.plot(x,
pipe.predict(x.reshape(-1,1)),
'k', label= str(n) + "th deg fit")
plt.plot(x,x*(x-1),'k--',label = "True Relationship")
plt.xlabel("x",fontsize=16)
plt.ylabel("y",fontsize=16)
plt.legend(fontsize=16)
plt.show()
# -
# Looking at the dataframe we've just produced we can notice that a number of our coefficients get larger in magnitude as the model gets more complex.
#
# This observation leads to the main idea behind regularization.
#
# ## The Idea Behind Regularization
#
# Suppose the non-intercept coefficients from the regression are denoted by $\beta$, i.e. $\beta=\left(\beta_1,\beta_2,\dots,\beta_m\right)^T$. Recall that in Ordinary Least Squares regression our goal is to estimate $\beta$ so that
# $$
# MSE = \frac{1}{n}(y - X\beta - \beta_0)^T(y - X\beta - \beta_0)
# $$
# is minimized on the training data.
#
# The main idea behind regularization is to still minimize the MSE, BUT while also ensuring that $\beta$ doesn't get too large.
#
# #### How to Measure Large?
#
# It's reasonable to wonder how we can measure the largeness of $\beta$. Let $||\bullet||$ denote some norm in $\mathbb{R}^{m}$. If you're unfamiliar with what a norm is, don't worry; we'll make this more concrete when we talk about lasso and ridge regression. For now, think of it as a measure of how "long" the $\beta$ vector is. We measure how large $\beta$ is by looking at $||\beta||$.
#
# #### Constrained Optimization
#
# In regularization we still minimize the MSE, but we constrain ourselves so that we only consider $\beta$ with $||\beta||\leq c$ for some constant $c$.
#
# #### An Equivalent Problem
#
# It turns out this is equivalent to minimizing the following:
# $$
# ||y-X\beta - \beta_0||^2_2 + \alpha||\beta||,
# $$
# for some constant $\alpha$ and where $||a||_2^2 = a_1^2 + a_2^2 + \dots + a_n^2, a\in\mathbb{R}^n$. Note that minimizing $||y-X\beta - \beta_0||^2_2$ is equivalent to minimizing the MSE.
#
# To see a mathematical derivation of this equivalence look at reference 3 below.
#
# Here we can think of $\alpha||\beta||$ as a penalty term, which will not allow $\beta$ to grow too large as we minimize $||y-X\beta-\beta_0||^2_2$. The amount we "penalize" for a large $\beta$ depends on the value of $\alpha$.
#
# #### Our First Hyperparameter
#
# $\alpha$ is the first instance in our course of a <i>hyperparameter</i>, but it will not be the last. A hyperparameter is a parameter we set before fitting the model. While normal parameters, like $\beta$, are estimated during the training step.
#
# For $\alpha=0$ we recover the OLS estimate for $\beta$, for $\alpha=\infty$ we get $\beta=0$, values of $\alpha$ between those two extremes will give different coefficient estimates. The value of $\alpha$ that gives the best model for your data depends on the problem and can be found through cross-validation model comparisons.
#
#
# ## Ridge and Lasso
#
# <i>Ridge regression</i> and <i>lasso</i> are two forms of regularization where we make specific choices for the norm:
# <ul>
# <li>In ridge regression we take $||\bullet||$ to be the square of the Euclidean norm, $||\bullet||_2^2$,</li>
# <li>In lasso we take $||\bullet||$ to be the $l_1$-norm, $||a||_1 = |a_1| + |a_2| + \dots + |a_n|, \ a\in \mathbb{R}^n$.
# </ul>
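# Before seeing ridge regression in `sklearn`, it is worth noting that for centered data ridge has an explicit closed form, $\hat{\beta} = \left(X^TX + \alpha I\right)^{-1}X^Ty$. A small numpy sketch (illustrative data, not part of the lecture code) showing the coefficient vector shrink as $\alpha$ grows:

```python
import numpy as np

rng = np.random.default_rng(440)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.3 * rng.normal(size=100)

def ridge_beta(X, y, alpha):
    # closed-form ridge solution; assumes centered data with no intercept
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(p), X.T @ y)

for alpha in [0.0, 1.0, 100.0]:
    beta = ridge_beta(X, y, alpha)
    print(alpha, np.round(beta, 3), "norm:", round(float(np.linalg.norm(beta)), 3))
```

At $\alpha = 0$ we recover the OLS estimate; as $\alpha$ increases the $2$-norm of $\beta$ strictly decreases.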
#
# ### Containing the Explosion of $\beta$
#
# These two algorithms make it difficult for $\beta$ to explode. Let's use ridge regression on our polynomial example above and track the norm of $\beta$ as we increase $\alpha$.
## the ridge regression object is called Ridge in
## sklearn.linear_model
from sklearn.linear_model import Ridge
# +
## make an array of alpha values
alphas = np.arange(0,.001,.000001)
norms = []
np.random.seed(440)
x = np.linspace(-3,3,100)
y = x*(x-1) + 1.2*np.random.randn(100)
## for each alpha value
for a in alphas:
## We'll talk about normalizing in a second
pipe = Pipeline([('poly',PolynomialFeatures(30, include_bias=False)),
('ridge',Ridge(alpha=a, normalize=True))])
pipe.fit(x.reshape(-1,1),y)
## get the beta vector
coefs = pipe['ridge'].coef_
    ## append the squared 2-norm of beta for this alpha
    norms.append(np.sum(np.power(coefs, 2)))
# +
## now plot
plt.figure(figsize=(10,10))
plt.plot(alphas,norms)
plt.xlabel("alpha",fontsize=16)
plt.ylabel("2-Norm of beta",fontsize=16)
plt.xlim((-.00001,max(alphas)))
plt.ylim((0,10))
plt.show()
# -
# ### You Code
#
# ### A Quick Aside on "Normalizing"
#
# These regularization techniques are very sensitive to the scale of our predictors. This is a result of the constrained optimization setup of the problem. To illustrate this let's look at an example.
#
# Suppose someone's tiredness after a walk is given by:
# $$
# \text{tiredness} = \text{age} + 3\text{(distance traveled in m)}
# $$
#
# We'll use this breakout session to explore what happens in ridge regression and lasso when we go between measuring distance on two different scales, meters and kilometers.
#
# Take the given data and assume age is in years and distance is in meters.
# +
age = np.random.randint(20,40,100)
distance = 3*np.random.randn(100)+10
tiredness = age + 3*distance + 2*np.random.randn(100)
df = pd.DataFrame({'age':age,'distance':distance,'tiredness':tiredness})
# -
# Now build a linear regression model of `tiredness` on `age` and `distance`. Print the coefficients.
#
# Then create a new column in the dataframe changing `distance` to being measured in kilometers, call it `distance_km`. Fit a second model regressing `tiredness` on `age` and `distance_km`. Again print the coefficients.
# +
## Fit the first model here
## Sample Answer
reg = LinearRegression(copy_X = True)
reg.fit(df[['age','distance']],df['tiredness'])
print(reg.coef_)
# +
## Fit the km model here
## Sample Answer
df['distance_km'] = df['distance']/1000
reg = LinearRegression(copy_X = True)
reg.fit(df[['age','distance_km']],df['tiredness'])
print(reg.coef_)
# -
# Now build a ridge regression model again regressing `tiredness` on `age` and `distance`. Let $\alpha=10$, but do NOT include `normalize=True`. Print the coefficients. How do these compare to the normal linear regression model?
#
#
# Then build a ridge regression model using `distance_km` instead of `distance`. Again do NOT include `normalize=True`. Let $\alpha=10$, and print the coefficients. What happened this time?
# +
## Fit your meters model here
## Sample Answer
ridge = Ridge(copy_X = True,alpha=10,normalize=False)
ridge.fit(df[['age','distance']],df['tiredness'])
print("meters",ridge.coef_)
## Fit your km model here
## Sample Answer
ridge = Ridge(copy_X = True,alpha=10,normalize=False)
ridge.fit(df[['age','distance_km']],df['tiredness'])
print("km",ridge.coef_)
# -
# As $\alpha$ increases in ridge and lasso our "budget" for the constrained optimization gets smaller, meaning that we can't appropriately estimate the coefficient on distance when it is measured in kilometers. This is due to the large difference in scales between meters and kilometers.
#
# #### How to Solve the Scaling Issue?
#
# One way to address the scale issue is to just adjust all of our features to have the same scale prior to fitting the model. We can do this by setting `normalize=True` in the `Ridge` and `Lasso` model objects. This centers and scales the features by subtracting off their mean and dividing by their $l_2$ norms.
#
# Refit the two ridge models you just created, this time set `normalize = True`. What happened?
# +
## Meters model here
## Sample Answer
ridge = Ridge(copy_X = True,alpha=10,normalize=True)
ridge.fit(df[['age','distance']],df['tiredness'])
print("meters",ridge.coef_)
## KM model here
## Sample Answer
ridge = Ridge(copy_X = True,alpha=10,normalize=True)
ridge.fit(df[['age','distance_km']],df['tiredness'])
print("km",ridge.coef_)
# -
# #### Standardizing by Hand
#
# Another way to scale down the data is to use `sklearn`'s `StandardScaler`. This is a method that scales the data to have mean $0$ and variance $1$ using the standard normal transformation:
# $$
# \frac{X - \overline{X}}{s_X}.
# $$
# While slightly different than `normalize=True` it still puts all of the data on the same scale and can be used instead of `normalize=True`.
#
# Build a pipe that first takes the data and scales it using `StandardScaler` then fits the ridge regression with $\alpha=10$. This time remember to set `normalize=False` or just leave the `normalize` option out of the model object.
# +
## import standard scaler
from sklearn.preprocessing import StandardScaler
# you make a scaler object like StandardScaler()
# +
## Here's a pipe for ya
pipe = Pipeline([('scale',StandardScaler()),
('ridge',Ridge(copy_X = True,alpha = 10,normalize=False))])
## Fit the meters model here
## Sample Answer
pipe.fit(df[['age','distance']],df['tiredness'])
print("meters",pipe['ridge'].coef_)
## Fit the km model here
## Sample Answer
pipe = Pipeline([('scale',StandardScaler()),
('ridge',Ridge(copy_X = True,alpha = 10,normalize=False))])
pipe.fit(df[['age','distance_km']],df['tiredness'])
print("km",pipe['ridge'].coef_)
# -
# Remember from our `Basic Pipeline` notebook.
#
# `StandardScaler` has the following methods:
# - `fit` which uses the input data to fit the method, i.e. find the mean and standard deviation of the input data
# - `transform` which uses the statistics calculated in `fit` to scale the input data
# - `fit_transform` which fits and transforms all in one.
#
# ##### Why is this Important?
#
# This is important because the order in which we do things matters. We first `fit` the scaler using the training data, this sets the scaler for all future data we put into the `transform` method, training and testing. The main take away here is that the scaler fit from the training data is the fit we use to predict on the testing data. This is a subtle distinction, but an important one.
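# A minimal numpy illustration of this point (a sketch, not part of the lecture code): the statistics come from the training data only, so the scaled training set has exactly mean $0$ and standard deviation $1$, while the scaled test set is only approximately so:

```python
import numpy as np

rng = np.random.default_rng(614)
train = rng.normal(loc=5.0, scale=2.0, size=80)
test = rng.normal(loc=5.0, scale=2.0, size=20)

# "fit": compute the statistics on the training data only
mu, sigma = train.mean(), train.std()

# "transform": apply the *training* statistics to both sets
train_scaled = (train - mu) / sigma
test_scaled = (test - mu) / sigma

print(train_scaled.mean(), train_scaled.std())  # exactly 0 and 1, up to float error
print(test_scaled.mean(), test_scaled.std())    # close to, but not exactly, 0 and 1
```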
#
# Okay now time to talk about lasso.
# ## Looking More Closely at Lasso
#
# Up to this point we've mainly focused on ridge regression. Let's look more closely at lasso now.
#
# As we've said the formulation of lasso is identical to that of ridge regression up to the choice of norm. Looking at the two norms, $||\bullet||_2^2$ for ridge and $||\bullet||_1$ for lasso, the main difference is that the square of the $l_2$ norm is differentiable everywhere, whereas the $l_1$ norm is not.
#
# ### Lasso for Feature Selection
#
# This fact gives lasso one of its best uses, feature selection. Let's return to our polynomial again.
x = np.linspace(-3,3,100)
y = x*(x-1) + 1.2*np.random.randn(100)
# We'll fit this with a high degree polynomial, after normalizing of course, with both ridge and lasso models but for different values of $\alpha$.
from sklearn.linear_model import Lasso
# +
alpha = [0.00001,0.0001,0.001,0.01,0.1,1,10,100,1000]
n=10
## These will hold our coefficient estimates
ridge_coefs = np.empty((len(alpha),n))
lasso_coefs = np.empty((len(alpha),n))
## for each alpha value
for i in range(len(alpha)):
## set up the ridge pipeline
ridge_pipe = Pipeline([('poly',PolynomialFeatures(n,include_bias=False)),
('ridge',Ridge(alpha = alpha[i], normalize=True))])
## set up the lasso pipeline
lasso_pipe = Pipeline([('poly',PolynomialFeatures(n,include_bias=False)),
('lasso',Lasso(alpha = alpha[i], normalize=True, max_iter = 1000000))])
## fit the ridge
ridge_pipe.fit(x.reshape(-1,1),y)
## fit the lasso
lasso_pipe.fit(x.reshape(-1,1),y)
## record the coefficients
ridge_coefs[i,:] = ridge_pipe['ridge'].coef_
lasso_coefs[i,:] = lasso_pipe['lasso'].coef_
# +
print("Ridge Coefficients")
pd.DataFrame(np.round(ridge_coefs,8),
columns = ["x^" + str(i) for i in range(1,n+1)],
index = ["alpha=" + str(a) for a in alpha])
# +
print("Lasso Coefficients")
pd.DataFrame(np.round(lasso_coefs,8),
columns = ["x^" + str(i) for i in range(1,n+1)],
index = ["alpha=" + str(a) for a in alpha])
# -
# If we look at our two tables we see that the ridge coefficients slowly go down to $0$, but most of the time don't actually get there. On the other hand, for lasso with $\alpha=0.1$ almost all of our coefficients are $0$, except for the ones that matter: the coefficients for $x$ and $x^2$.
#
# This feature of lasso makes it quite popular, particularly when you have a lot of features, which makes other popular model selection algorithms infeasible (see Regression HW).
#
# You just fit a lasso model with all of your features and then choose the nonzero ones for your final model. Again, this should be done with some sort of test to see which model you think will give the better test error, like cross-validation.
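# `sklearn` wraps this workflow in `SelectFromModel` from `sklearn.feature_selection`. A sketch on synthetic data where only two of six features carry signal (the data and the choice $\alpha=0.1$ are illustrative assumptions, not part of the lecture):

```python
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=200)

# keep any feature whose lasso coefficient is not (essentially) zero
selector = SelectFromModel(Lasso(alpha=0.1), threshold=1e-10)
selector.fit(X, y)
print(selector.get_support())  # boolean mask over the six features
```

The resulting mask can be passed straight to `X[:, mask]` (or `selector.transform(X)`) to get the reduced design matrix.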
#
# ### Why Does it Do That?
#
# To see a geometric explanation for why this happens let's return to our constrained optimization formulation, and assume both $X$ and $y$ have mean $0$ for simplicity.
#
# ##### Ridge Regression
# $$
# \text{Minimize } || y - X\beta||_2^2 \text{ constrained to } ||\beta||_2^2 \leq c.
# $$
#
# If we have two features the constraint is $\beta_1^2 + \beta_2^2 \leq (\sqrt{c})^2$, which you may recall is the formula for a filled in circle centered at the origin with radius $\sqrt{c}$ in $\mathbb{R}^2$.
#
# ##### Lasso
# $$
# \text{Minimize } || y - X\beta||_2^2 \text{ constrained to } ||\beta||_1 \leq c.
# $$
#
# If we have two features the constraint is $|\beta_1| + |\beta_2| \leq c$, which gives a filled in square with edges at $(c,0),(0,c),(-c,0),$ and $(0,-c)$.
#
# Let's look at a picture in the case of two features.
# <img src="lasso_ridge_eosl.png" style="width:60%"></img>
# Photo Credit to <a href="https://web.stanford.edu/~hastie/ElemStatLearn/">Elements of Statistical Learning</a>.
#
# In this photo $\hat{\beta}$ is the OLS estimate for $\beta$ at the minimum value of $|| y - X\beta||_2^2$, the red ellipses are selected level curves of $|| y - X\beta||_2^2$, and the blue square and circle are the constraint regions for lasso and ridge respectively. We can think of lasso and ridge as finding the smallest level curve that still intersects the constraint region. If the OLS estimate is not contained within the constraint region, this will occur somewhere on the boundary. This image demonstrates that the level curve corresponding to the minimal value of $|| y - X\beta||_2^2$ often intersects the lasso constraint on an axis of the $\beta$-space, which is not the case for ridge regression.
#
# As a reminder for practical purposes decreasing the value of $\alpha$ for the `sklearn` `Lasso` and `Ridge` objects increases the size of the constraint region. Increasing the value of $\alpha$ will shrink the constraint region.
#
#
# ### You Code
#
#
# Run the following code to generate data `X` and `y`.
#
# Use lasso to try and determine which features should be included in the model. When you're ready run the code block at the end to see how the data was constructed.
import generate as g
X,y = g.get_data()
# +
## Look at the data here
## Sample Answer
print("X",np.shape(X))
print("y",np.shape(y))
# +
## Write a loop to see what happens to the coefficients as alpha
## goes to inf.
## Store the coefficients in an array
## Sample Answer
alphas = [0.00001,0.0001,0.001,0.01,0.1,1,10,100,1000]
lasso_coefs = np.zeros((len(alphas), np.shape(X)[1]))
for i in range(len(alphas)):
lasso = Lasso(alpha=alphas[i])
lasso.fit(X,y)
lasso_coefs[i,:] = lasso.coef_
# +
## Look at how each coefficient changes here
## Sample Answer
for i in range(np.shape(X)[1]):
print("X_"+str(i))
print(lasso_coefs[:,i])
print()
print("+++++++++++++++++++")
print()
# +
## Write down your guess for the important variables here
## Sample Answer
## X_0, X_1, X_7
# -
g.give_how_generated()
# ## Which One is Better?
#
# Which algorithm is the better choice? Well that depends on the problem. Both are good at addressing overfitting concerns, but each has a couple unique pros and cons.
#
# ##### Lasso
#
# <b>Pros</b>
#
# <ul>
# <li>Works well when you have a large number of features that don't have any effect on the target.</li>
# <li>Feature selection is a plus, this can allow for a sparser model which is good for computational reasons.</li>
# </ul>
#
# <b>Cons</b>
#
# <ul>
# <li>Can have trouble with highly correlated features, it typically chooses one variable among those that are correlated, which may be random.</li>
# </ul>
#
#
# ##### Ridge
#
# <b>Pros</b>
#
# <ul>
# <li>Works well when the target depends on all or most of the features.</li>
# <li>Can handle colinearity better than lasso.</li>
# </ul>
#
# <b>Cons</b>
#
# <ul>
# <li>Because ridge typically keeps most of the predictors in the model, this can be a computationally costly model type for data sets with a large number of predictors.</li>
# </ul>
#
#
# ##### Elastic Net
#
# Sometimes the best model will be something in between ridge and lasso. Check out the Regression HW to learn about how that is possible with an elastic net model.
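# For reference, `sklearn` implements this blend as `ElasticNet`, whose `l1_ratio` hyperparameter interpolates between a pure ridge penalty (`l1_ratio=0`) and a pure lasso penalty (`l1_ratio=1`). A brief sketch on illustrative data:

```python
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 4))
y = 2 * X[:, 0] + 2 * X[:, 1] + 0.2 * rng.normal(size=150)

# an even mix of the l1 and l2 penalties
enet = ElasticNet(alpha=0.05, l1_ratio=0.5)
enet.fit(X, y)
print(np.round(enet.coef_, 3))
```

Both `alpha` and `l1_ratio` are hyperparameters, so in practice they would be chosen by cross-validation just like $\alpha$ was above.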
# ## That's It!
#
# That's it for this notebook!
#
# See below for references specific to this notebook.
# ## Notebook Specific References
#
# To help teach this lesson I consulted some additional source I found through a Google search. Here are links to those references for you to take a deeper dive into ridge and lasso regression.
#
# <ol>
# <li><a href="https://www.statlearning.com/">https://www.statlearning.com/</a></li>
# <li><a href="https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/">https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/</a></li>
# <li><a href="https://suzyahyah.github.io/optimization/2018/07/20/Constrained-unconstrained-form-Ridge.html">https://suzyahyah.github.io/optimization/2018/07/20/Constrained-unconstrained-form-Ridge.html</a></li>
# <li><a href="https://statweb.stanford.edu/~owen/courses/305a/Rudyregularization.pdf">https://statweb.stanford.edu/~owen/courses/305a/Rudyregularization.pdf</a></li>
# <li><a href="http://web.mit.edu/zoya/www/linearRegression.pdf">http://web.mit.edu/zoya/www/linearRegression.pdf</a></li>
# </ol>
# This notebook was written for the Erdős Institute Cőde Data Science Boot Camp by <NAME>, Ph. D., 2021.
#
# Redistribution of the material contained in this repository is conditional on acknowledgement of <NAME>, Ph.D.'s original authorship and sponsorship of the Erdős Institute as subject to the license (see License.md)
Lectures/Regression/6. Regularization with Lasso and Ridge Regression - Complete.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:fintech] *
# language: python
# name: conda-env-fintech-py
# ---
# # Unit 5 - Financial Planning
# +
# Initial imports
import os
import requests
import pandas as pd
from dotenv import load_dotenv
import alpaca_trade_api as tradeapi
from MCForecastTools import MCSimulation
from datetime import date, timedelta
# %matplotlib inline
# -
# Load .env environment variables
load_dotenv()
# ## Part 1 - Personal Finance Planner
# ### Collect Crypto Prices Using the `requests` Library
# Set current amount of crypto assets
# YOUR CODE HERE!
my_btc = 1.2
my_eth = 5.3
# Crypto API URLs
btc_url = "https://api.alternative.me/v2/ticker/Bitcoin/?convert=USD"
eth_url = "https://api.alternative.me/v2/ticker/Ethereum/?convert=USD"
#btc_response_data = requests.get(btc_url, verify = False)
#btc_response_data = requests.get(btc_url).json()
# +
# Fetch current BTC price
# YOUR CODE HERE!
# send the get request for Bitcoin and Ethereum
btc_response_data = requests.get(btc_url).json()
# Initial json parse
#print(btc_response_data) # print the json response returned.
#print(type(btc_response_data)) # dict type
#print(len(btc_response_data)) # check the len
# look at the 1st dict element
#print(btc_response_data['data'])
#print(type(btc_response_data['data'])) # another dict type
#print(len(btc_response_data['data'])) # check the len
# look at the '1' dict element
#print(btc_response_data['data']['1'])
#print(type(btc_response_data['data']['1'])) # another dict type
#print(len(btc_response_data['data']['1'])) # check the len
# look at the 'quotes' key as price is embedded there
# print(btc_response_data['data']['1']['quotes'])
# print(type(btc_response_data['data']['1']['quotes']))
# print(len(btc_response_data['data']['1']['quotes'])) # check the len
# look at the 'USD' key as price is embedded there
# print(btc_response_data['data']['1']['quotes']['USD'])
# print(type(btc_response_data['data']['1']['quotes']['USD']))
# print(len(btc_response_data['data']['1']['quotes']['USD'])) # check the len
btc_price = btc_response_data['data']['1']['quotes']['USD']['price']
print('Current BTC price = ', btc_price)
my_btc_value = my_btc * btc_price
# Fetch current ETH price
# YOUR CODE HERE!
eth_response_data = requests.get(eth_url).json()
#print(eth_response_data['data']['1027'])
eth_price = eth_response_data['data']['1027']['quotes']['USD']['price']
print('Current ETH price = ',eth_price)
my_eth_value = my_eth * eth_price
# Compute current value of my crypto
# YOUR CODE HERE!
# Print current crypto wallet balance
print(f"The current value of your {my_btc} BTC is ${my_btc_value:0.2f}")
print(f"The current value of your {my_eth} ETH is ${my_eth_value:0.2f}")
# -
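# The chained dictionary lookups above raise a `KeyError` if the API response shape ever changes. One defensive alternative is a small helper like the hypothetical `extract_price` below, which walks the same nesting ('data' -> coin id -> 'quotes' -> 'USD') but returns `None` on a missing level:

```python
def extract_price(response_json, coin_id):
    # Walk the nested structure: data -> coin id -> quotes -> USD -> price.
    # Returns None instead of raising if any level is missing.
    try:
        return response_json["data"][coin_id]["quotes"]["USD"]["price"]
    except (KeyError, TypeError):
        return None

sample = {"data": {"1": {"quotes": {"USD": {"price": 40000.0}}}}}
print(extract_price(sample, "1"))    # 40000.0
print(extract_price(sample, "999"))  # None
```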
# ### Collect Investments Data Using Alpaca: `SPY` (stocks) and `AGG` (bonds)
# Set current amount of shares
my_agg = 200
my_spy = 50
# +
# Set Alpaca API key and secret
# YOUR CODE HERE!
alpaca_api_key = os.getenv("ALPACA_API_KEY")
alpaca_secret_key = os.getenv("ALPACA_SECRET_KEY")
# Create the Alpaca API object
# YOUR CODE HERE!
alpaca = tradeapi.REST(
alpaca_api_key,
alpaca_secret_key,
api_version="v2")
# +
# Format current date as ISO format
# YOUR CODE HERE!
today = pd.to_datetime("today")
#print(date.today())
end = pd.Timestamp(today, tz="America/New_York").isoformat()
end1500 = (today - timedelta(days=1500)).isoformat() # include extra calendar days since weekends and holidays are not trading days
start = pd.Timestamp(end1500, tz="America/New_York").isoformat()
print('start = ',start)
print('end = ', end)
# Set the tickers
tickers = ["AGG", "SPY"]
# Set timeframe to '1D' for Alpaca API
timeframe = "1D"
# Get current closing prices for SPY and AGG
# (use a limit=1000 parameter to call the most recent 1000 days of data)
# YOUR CODE HERE!
df_portfolio = alpaca.get_barset(
tickers,
timeframe,
limit = 1000,
start = start,
end = end
).df
# Preview DataFrame
# YOUR CODE HERE!
df_portfolio = df_portfolio.sort_index()
df_portfolio.tail()
# +
# Pick AGG and SPY close prices
# YOUR CODE HERE!
len(df_portfolio)
current_day = df_portfolio.iloc[len(df_portfolio)-1]
agg_close_price = current_day["AGG"]["close"]
spy_close_price = current_day["SPY"]["close"]
#spy_close_price = df_portfolio["SPY"]["close"]
#type(agg_close)
#print(agg_close)
#print(f"Current AGG closing price: ${agg_close}")
# Print AGG and SPY close prices
print(f"Current AGG closing price: ${agg_close_price}")
print(f"Current SPY closing price: ${spy_close_price}")
# -
# Compute the current value of shares
# YOUR CODE HERE!
my_agg_value = my_agg * agg_close_price
my_spy_value = my_spy * spy_close_price
# Print current value of shares
print(f"The current value of your {my_agg} AGG shares is ${my_agg_value:0.2f}")
print(f"The current value of your {my_spy} SPY shares is ${my_spy_value:0.2f}")
# +
#data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
#>>> pd.DataFrame.from_dict(data)
# -
# ### Savings Health Analysis
# +
# Set monthly household income
# YOUR CODE HERE!
monthly_income = 12000
# Consolidate financial assets data
# YOUR CODE HERE!
total_crypto_value = my_btc_value + my_eth_value
total_shares_value = my_spy_value + my_agg_value
# Create savings DataFrame
# YOUR CODE HERE!
savings = {'amount' : [total_crypto_value, total_shares_value]}
#d_savings = { 'crypto' : [total_crypto_value], 'shares' : [total_shares_value]}
#print(d_savings)
df_savings = pd.DataFrame(savings, index= ['crypto', 'shares'])
# Display savings DataFrame
display(df_savings)
type(df_savings)
# -
# Plot savings pie chart
# YOUR CODE HERE!
df_savings.plot.pie(y='amount', figsize=(5, 5))
# +
# Set ideal emergency fund
emergency_fund = monthly_income * 3
#print(emergency_fund)
#print('Emergency Fund = ', emergency_fund)
# Calculate total amount of savings
# YOUR CODE HERE!
total_savings = df_savings.sum()[0]
#print('total savings = ', total_savings)
#print(type(total_savings))
# Validate saving health
# YOUR CODE HERE!
if ( total_savings > emergency_fund ):
print(f'Congratulations you have enough savings = {total_savings} more than emergency funds = {emergency_fund}')
elif total_savings == emergency_fund:
print(f'Congratulations on reaching enough savings = {total_savings} as emergency fund needed = {emergency_fund}')
else:
print(f'Your savings = {total_savings} are less than emergency funds = {emergency_fund}')
# -
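# The branching above can also be packaged as a small reusable function (a hypothetical helper, not part of the assignment starter code):

```python
def savings_health(total_savings, monthly_income, months=3):
    # Compare savings to an emergency fund worth `months` of income.
    emergency_fund = monthly_income * months
    if total_savings > emergency_fund:
        return "above"
    if total_savings == emergency_fund:
        return "met"
    return "below"

print(savings_health(40000, 12000))  # fund is 36000, so "above"
print(savings_health(36000, 12000))  # "met"
```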
# ## Part 2 - Retirement Planning
#
# ### Monte Carlo Simulation
# +
# Set start and end dates of five years back from today.
# Sample results may vary from the solution based on the time frame chosen
# We already have 4 yrs of data from the earlier call, so now we just need 1 more year prior to that. Set the dates here.
start_date = pd.Timestamp('2016-05-24', tz='America/New_York').isoformat()
end_date = pd.Timestamp('2017-05-23', tz='America/New_York').isoformat()
# We already have 4 yrs of data from the earlier call to Alpaca. Confirm by printing head and tail
print(df_portfolio.head()) # we have data from 2017-05-24
print(df_portfolio.tail()) # we have data till 2021-05-14
# -
# get the remaining 1 year.
df_portfolio_rest = alpaca.get_barset(
tickers,
timeframe,
limit = 1000,
start = start_date,
end = end_date
).df
# print to check the new 1 year data extra
print(df_portfolio_rest.head())
print(df_portfolio_rest.tail())
# +
# Get 5 years' worth of historical data for SPY and AGG
# (use a limit=1000 parameter to call the most recent 1000 days of data)
# YOUR CODE HERE!
# here concatenate the 4 yrs we already got earlier and then the extra 1 year
df_stock_data = pd.concat([df_portfolio_rest, df_portfolio], axis="rows", join="inner")
# sort by date index to get all the 5 years in sequence
df_stock_data = df_stock_data.sort_index()
# Display sample data
print(df_stock_data.head())
print(df_stock_data.tail())
print(f'Total days of data = {len(df_stock_data)}') # Confirm we have the 5 yrs data
# +
# Configuring a Monte Carlo simulation to forecast 30 years cumulative returns
# use limit=1000 to call the most recent 1000 days of data
# YOUR CODE HERE!
# Configure the Monte Carlo simulation; this first one forecasts 30 years of cumulative returns, not 5
MC_even_dist = MCSimulation(
portfolio_data = df_stock_data,
weights = [.4,.6 ],
num_simulation = 500,
num_trading_days = 252*30
)
# -
# Printing the simulation input data
# YOUR CODE HERE!
MC_even_dist.portfolio_data.head()
# Running a Monte Carlo simulation to forecast 30 years cumulative returns
# YOUR CODE HERE!
MC_even_dist.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
line_plot = MC_even_dist.plot_simulation()
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
dist_plot = MC_even_dist.plot_distribution()
# ### Retirement Analysis
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
even_tbl = MC_even_dist.summarize_cumulative_return()
# Print summary statistics
# YOUR CODE HERE!
print(even_tbl)
# ### Calculate the expected portfolio return at the `95%` lower and upper confidence intervals based on a `$20,000` initial investment.
# +
# Set initial investment
initial_investment = 20000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $20,000
# YOUR CODE HERE!
ci_lower = round(even_tbl[8]*initial_investment,2)
ci_upper = round(even_tbl[9]*initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
      f" over the next 30 years will end within the range of"
      f" ${ci_lower} and ${ci_upper}")
# -
# ### Calculate the expected portfolio return at the `95%` lower and upper confidence intervals based on a `50%` increase in the initial investment.
# +
# Set initial investment
initial_investment = 20000 * 1.5
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $30,000
# YOUR CODE HERE!
ci_lower = round(even_tbl[8]*initial_investment,2)
ci_upper = round(even_tbl[9]*initial_investment,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment} in the portfolio"
      f" over the next 30 years will end within the range of"
      f" ${ci_lower} and ${ci_upper}")
# -
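# Both cells above scale the same confidence-interval multipliers by the investment amount; a small helper avoids the copy-paste (the function name here is hypothetical):

```python
def projected_range(lower_mult, upper_mult, initial_investment):
    # Scale cumulative-return multipliers (e.g. the 95% CI bounds from the
    # Monte Carlo summary table) by the starting portfolio value.
    return (round(lower_mult * initial_investment, 2),
            round(upper_mult * initial_investment, 2))

lo, hi = projected_range(2.5, 40.0, 20000)
print(lo, hi)  # 50000.0 800000.0
```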
# ## Optional Challenge - Early Retirement
#
#
# ### Five Years Retirement Option
# Configuring a Monte Carlo simulation to forecast 5 years cumulative returns
# YOUR CODE HERE!
# increase the SPY weight to 80%
MC_even_dist_5 = MCSimulation(
portfolio_data = df_stock_data,
weights = [.20,.80 ],
num_simulation = 500,
num_trading_days = 252*5
)
# Running a Monte Carlo simulation to forecast 5 years cumulative returns
# YOUR CODE HERE!
MC_even_dist_5.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
line_plot_5 = MC_even_dist_5.plot_simulation()
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
dist_plot_5 = MC_even_dist_5.plot_distribution()
# +
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
even_tbl_5 = MC_even_dist_5.summarize_cumulative_return()
# Print summary statistics
# YOUR CODE HERE!
print(even_tbl_5)
# +
# Set initial investment
# YOUR CODE HERE!
initial_investment_5 = 30000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $30,000
# YOUR CODE HERE!
ci_lower_five = round(even_tbl_5[8]*initial_investment_5,2)
ci_upper_five = round(even_tbl_5[9]*initial_investment_5,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment_5} in the portfolio"
      f" over the next 5 years will end within the range of"
      f" ${ci_lower_five} and ${ci_upper_five}")
# -
# ### Ten Years Retirement Option
# Configuring a Monte Carlo simulation to forecast 10 years cumulative returns
# YOUR CODE HERE!
MC_even_dist_10 = MCSimulation(
portfolio_data = df_stock_data,
weights = [.20,.80 ],
num_simulation = 500,
num_trading_days = 252*10
)
# Running a Monte Carlo simulation to forecast 10 years cumulative returns
# YOUR CODE HERE!
MC_even_dist_10.calc_cumulative_return()
# Plot simulation outcomes
# YOUR CODE HERE!
line_plot_10 = MC_even_dist_10.plot_simulation()
# Plot probability distribution and confidence intervals
# YOUR CODE HERE!
dist_plot_10 = MC_even_dist_10.plot_distribution()
# +
# Fetch summary statistics from the Monte Carlo simulation results
# YOUR CODE HERE!
even_tbl_10 = MC_even_dist_10.summarize_cumulative_return()
# Print summary statistics
# YOUR CODE HERE!
print(even_tbl_10)
# +
# Set initial investment
# YOUR CODE HERE!
initial_investment_10 = 30000
# Use the lower and upper `95%` confidence intervals to calculate the range of the possible outcomes of our $30,000
# YOUR CODE HERE!
ci_lower_ten = round(even_tbl_10[8]*initial_investment_10,2)
ci_upper_ten = round(even_tbl_10[9]*initial_investment_10,2)
# Print results
print(f"There is a 95% chance that an initial investment of ${initial_investment_10} in the portfolio"
      f" over the next 10 years will end within the range of"
      f" ${ci_lower_ten} and ${ci_upper_ten}")
# -
financial-planner.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <div style="direction:rtl;line-height:300%;">
# <font size=5>
# <div align=center>
# <font color=#FF7500>
# Sharif University of Technology - CE Department
# </font>
# <p></p>
# <font color=blue>
# Artificial Intelligence
# </font>
# <br />
# <br />
# Spring 2021
# </div>
# <hr/>
# <font color=red size=6>
# <br />
# <div align=center>
# Convolutional Neural Networks
# </div>
# </font>
# <br />
# <div align=center>
# <NAME>, <NAME>, <NAME>
# </div>
# # **Convolutional Neural Networks**
# Table of Contents:
# * Impact
# * Learning Visual Features
# * ConvNet Layers:
# * Convolutional Layer
# * Pooling Layer
# * Fully-Connected Layer
# * Conclusion
# * References
# # Impact
# Convolutional Neural Networks, or CNNs, were designed to map image data to an output variable.
#
# They have proven so effective that they are the go-to method for any type of prediction problem involving image data as an input.
# 
# 
# # Learning Visual Features
# An image is just a matrix of numbers in [0, 255], e.g. 1080x1080x3 for an RGB image. <br>
# The output variable gives the probability of belonging to a particular class.
# 
# A basic solution is to use fully connected neural networks:
# 
# Regular Neural Nets don’t scale well to full images. In MNIST, images are only of size 28x28x1 (28 wide, 28 high, grayscale), so a single fully-connected neuron in a first hidden layer of a regular Neural Network would have 28x28x1 = 784 weights. This amount still seems manageable, but clearly this fully-connected structure does not scale to larger images. For example, an image of more respectable size, e.g. 200x200x3, would lead to neurons that have 200*200*3 = 120,000 weights. Moreover, we would almost certainly want to have several such neurons, so the parameters would add up quickly! Clearly, this full connectivity is wasteful and the huge number of parameters would quickly lead to overfitting.
#
# 
# # ConvNet Layers
#
# There are three types of layers in a Convolutional Neural Network:
# 1. Convolutional Layers
# 2. Pooling Layers
# 3. Fully-Connected Layers
#
# 
# # Convolutional Layers
# Convolutional layers are comprised of filters and feature maps.
#
# **Filters** <br>
# The input image is multiplied by a filter to get the convolved layer. These filters differ in shape and values to extract different features like edges, curves, and lines. A filter is also called a kernel or feature detector. <br>
# The filters are the “neurons” of the layer. They have input weights and output a value. The input size is a fixed square called a patch or a receptive field.
# <br>
# If the convolutional layer is an input layer, then the input patch will be pixel values. If it sits deeper in the network architecture, then the convolutional layer will take its input from a feature map produced by the previous layer.
# 
#
# **Feature Maps** <br>
# The feature map is the output of one filter applied to the previous layer.
#
# A given filter is drawn across the entire previous layer, moved one pixel at a time. Each position results in an activation of the neuron and the output is collected in the feature map.
# 
# In CNN terminology, the 3×3 matrix is called a ‘filter‘ or ‘kernel’ or ‘feature detector’ and the matrix formed by sliding the filter over the image and computing the element-wise product is called the ‘Feature Map‘. It is important to note that filters act as feature detectors on the original input image.
# 
#
# **Zero Padding** <br>
# The distance the filter moves across the input from the previous layer between activations is referred to as the stride.
#
# If the size of the previous layer is not cleanly divisible by the size of the filter's receptive field and the size of the stride, then it is possible for the receptive field to attempt to read off the edge of the input feature map. In this case, techniques like zero padding can be used to invent mock inputs for the receptive field to read.
# 
# **Activation Layers**
# After each conv layer, it is convention to apply a nonlinear layer (or activation layer) immediately afterward. The purpose of this layer is to introduce nonlinearity to a system that basically has just been computing linear operations during the conv layers (just element wise multiplications and summations). In the past, nonlinear functions like tanh and sigmoid were used, but researchers found out that ReLU layers work far better because the network is able to train a lot faster (because of the computational efficiency) without making a significant difference to the accuracy. It also helps to alleviate the vanishing gradient problem, which is the issue where the lower layers of the network train very slowly because the gradient decreases exponentially through the layers. The ReLU layer applies the function f(x) = max(0, x) to all of the values in the input volume. In basic terms, this layer just changes all the negative activations to 0. This layer increases the nonlinear properties of the model and the overall network without affecting the receptive fields of the conv layer.
# 
# # Pooling Layers
# The pooling layer (POOL) is a downsampling operation, typically applied after a convolution layer, which does some spatial invariance. Its function is to progressively reduce the spatial size of the representation to reduce the amount of parameters and computation in the network, and hence to also control overfitting. The Pooling Layer operates independently on every depth slice of the input and resizes it spatially, using the MAX operation(called max pooling).
# 
# In addition to max pooling, the pooling units can also perform other functions, such as average pooling or even L2-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to the max pooling operation, which has been shown to work better in practice.
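# The max pooling operation described above can be sketched in numpy for a single-channel feature map (an illustration only, assuming non-overlapping 2x2 windows):

```python
import numpy as np

def max_pool_2x2(x):
    # Non-overlapping 2x2 max pooling: each output value is the max of a
    # 2x2 block, halving both spatial dimensions.
    h, w = x.shape
    x = x[:h // 2 * 2, :w // 2 * 2]  # drop a trailing odd row/column if any
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.arange(16).reshape(4, 4)
print(max_pool_2x2(fmap))  # a 2x2 map of block maxima
```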
# # Fully-Connected Layers
# Fully connected layers are the normal flat feed-forward neural network layer.
#
# These layers may have a non-linear activation function or a softmax activation in order to output probabilities of class predictions.
#
# Fully connected layers are used at the end of the network after feature extraction and consolidation has been performed by the convolutional and pooling layers. They are used to create final non-linear combinations of features and for making predictions by the network.
# 
# # Conclusion
# By stacking multiple convolutional and pooling layers, the image is processed for feature extraction. Fully connected layers then lead to the output layer with a softmax (for a multi-class case) or sigmoid (for a binary case) function. I didn’t spell out the ReLU activation step here, but it works just as it does in an ANN.
# As the layers go deeper and deeper, the features that the model deals with become more complex. For example, at the early stage of a ConvNet, it looks for oriented line patterns and then finds some simple figures. At the deep stage, it can catch the specific forms of objects and is finally able to detect the object in an input image.
# 
# # References
# *1. https://stanford.edu/~shervine/teaching/cs-230/cheatsheet-convolutional-neural-networks* <br>
# *2. https://mit6874.github.io/assets/sp2021/slides/l03.pdf* <br>
# *3. http://introtodeeplearning.com/slides/6S191_MIT_DeepLearning_L3.pdf* <br>
# *4. https://towardsdatascience.com/convolution-neural-network-e9b864ac1e6c*
notebooks/example_6/index.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Z Calibration Curves for 3D-DAOSTORM (and sCMOS).
#
# In this example we are trying to determine the coefficients $w_o,c,d,A,B,C,D$ in this equation:
#
# \begin{equation*}
# W_{x,y} = w_o \sqrt{1 + \left(\frac{z-c}{d}\right)^2 + A\left(\frac{z-c}{d}\right)^3 + B\left(\frac{z-c}{d}\right)^4 + C\left(\frac{z-c}{d}\right)^5 + D\left(\frac{z-c}{d}\right)^6}
# \end{equation*}
#
# This is a modified form of a typical microscope defocusing curve. $W_x$, $W_y$ are the widths of the localization as measured by 3D-DAOSTORM and $z$ is the localization $z$ offset in $um$.
#
# See also [Huang et al, Science, 2008](http://dx.doi.org/10.1126/science.1153529).
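# Before fitting, it can help to see what the defocusing curve looks like. The equation above evaluates directly in numpy (a sketch for intuition, not part of storm_analysis):

```python
import numpy as np

def defocus_width(z, w0, c, d, A=0.0, B=0.0, C=0.0, D=0.0):
    # W(z) = w0 * sqrt(1 + t^2 + A t^3 + B t^4 + C t^5 + D t^6), t = (z - c)/d.
    # z and c are in the same units (um in this example).
    t = (z - c) / d
    return w0 * np.sqrt(1 + t**2 + A * t**3 + B * t**4 + C * t**5 + D * t**6)

print(defocus_width(0.0, w0=3.0, c=0.0, d=0.5))  # 3.0: width is minimal at focus
print(defocus_width(0.3, w0=3.0, c=0.0, d=0.5))  # wider away from focus
```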
#
# ### Configuration
#
# To perform z-calibration you need a movie of (small) fluorescent beads or single blinking dye molecules on a flat surface such as a coverslip. You then scan the coverslip through the focus of the microscope while recording a movie.
#
# In this example we'll simulate blinking dyes on a coverslip. The PSF is created using the pupil function approach and is purely astigmatic.
#
# Create an empty directory and change to that directory.
import os
os.chdir("/home/hbabcock/Data/storm_analysis/jy_testing/")
print(os.getcwd())
# Generate sample data for this example.
import storm_analysis.jupyter_examples.dao3d_zcal as dao3d_zcal
dao3d_zcal.configure()
# ### 3D-DAOSTORM analysis of the calibration movie
# Set parameters for 3D-DAOSTORM analysis. Note the analysis is done using the `3d` PSF model, a Gaussian with independent widths in X/Y.
# +
import storm_analysis.sa_library.parameters as params
# Load the parameters.
daop = params.ParametersDAO().initFromFile("example.xml")
# Set for a single iteration, we don't want multiple iterations of peak finding
# as this could cause stretched peaks to get split in half.
daop.changeAttr("iterations", 1)
# Use a large find max radius. This also reduces peak splitting.
daop.changeAttr("find_max_radius", 10)
# Use a higher threshold so that we don't get the dimmer localizations.
daop.changeAttr("threshold", 18)
# Don't do tracking or drift correction.
daop.changeAttr("radius", 0.0)
daop.changeAttr("drift_correction", 0)
# Save the changed parameters.
daop.toXMLFile("calibration.xml")
# -
# Analyze the calibration movie with 3D-DAOSTORM
# +
import os
import storm_analysis.daostorm_3d.mufit_analysis as mfit
if os.path.exists("calib.hdf5"):
os.remove("calib.hdf5")
mfit.analyze("calib.tif", "calib.hdf5", "calibration.xml")
# -
# Check results with with overlay images.
# +
# Overlay image at z near zero.
import storm_analysis.jupyter_examples.overlay_image as overlay_image
overlay_image.overlayImage("calib.tif", "calib.hdf5", 40)
# -
# ### Z calibration
# First we will need a file containing the z-offsets for each frame. This file contains two columns, the first is whether or not the data in this frame should be used (0 = No, 1 = Yes) and the second contains the z offset in microns.
# +
import numpy
# In this simulation the z range went from -0.6 microns to 0.6 microns in 10nm steps.
z_range = dao3d_zcal.z_range
z_offsets = numpy.arange(-z_range, z_range + 0.001, 0.01)
valid = numpy.ones(z_offsets.size)
# Limit the z range to +- 0.4um.
mask = (numpy.abs(z_offsets) > 0.4)
valid[mask] = 0.0
numpy.savetxt("z_offsets.txt", numpy.transpose(numpy.vstack((valid, z_offsets))))
# -
# Plot Wx / Wy versus Z curves.
# +
import matplotlib
import matplotlib.pyplot as pyplot
# Change default figure size.
matplotlib.rcParams['figure.figsize'] = (8,6)
import storm_analysis.daostorm_3d.z_calibration as z_cal
[wx, wy, z, pixel_size] = z_cal.loadWxWyZData("calib.hdf5", "z_offsets.txt")
pyplot.scatter(z, wx, color = 'r')
pyplot.scatter(z, wy, color = 'b')
pyplot.show()
# -
# Now measure Z calibration curves. We'll do a second order fit, i.e. A,B will be fit, but not C,D.
#
# Note - The fitting is not super robust, so you may have to play with `fit_order` and `p_start` to get it to work. Usually it will work for `fit_order = 0`, but then it might fail for `fit_order = 1` but succeed for `fit_order = 2`.
# +
#
# The function z_cal.calibrate() will perform all of these steps at once.
#
fit_order = 2
outliers = 3.0 # Sigma to be considered an outlier.
# Initial guess, this is optional, but might be necessary if your setup is
# significantly different from what storm-analysis expects.
#
# It can also help to boot-strap to higher fitting orders.
#
p_start = [3.2,0.19,0.3]
# Fit curves
print("Fitting (round 1).")
[wx_params, wy_params] = z_cal.fitDefocusingCurves(wx, wy, z, n_additional = 0, z_params = p_start)
print(wx_params)
p_start = wx_params[:3]
# Fit curves.
print("Fitting (round 2).")
[wx_params, wy_params] = z_cal.fitDefocusingCurves(wx, wy, z, n_additional = fit_order, z_params = p_start)
print(wx_params)
p_start = wx_params[:3]
# Remove outliers.
print("Removing outliers.")
[t_wx, t_wy, t_z] = z_cal.removeOutliers(wx, wy, z, wx_params, wy_params, outliers)
# Redo fit.
print("Fitting (round 3).")
[wx_params, wy_params] = z_cal.fitDefocusingCurves(t_wx, t_wy, t_z, n_additional = fit_order, z_params = p_start)
# Plot fit.
z_cal.plotFit(wx, wy, z, t_wx, t_wy, t_z, wx_params, wy_params, z_range = 0.4)
# This prints the parameter with the scale expected by 3D-DAOSTORM in the analysis XML file.
z_cal.prettyPrint(wx_params, wy_params, pixel_size = pixel_size)
# -
# Create a parameters file with these calibration values.
# +
# Load the parameters.
daop = params.ParametersDAO().initFromFile("example.xml")
# Update calibration parameters.
z_cal.setWxWyParams(daop, wx_params, wy_params, pixel_size)
# Do z fitting.
daop.changeAttr("do_zfit", 1)
# Set maximum allowed distance in wx, wy space that a point can be from the
# calibration curve.
daop.changeAttr("cutoff", 2.0)
# Use a higher threshold as the Gaussian PSF is not a good match for our PSF model, so
# we'll get spurious peak splitting if it is too low.
daop.changeAttr("threshold", 12)
# Don't do tracking or drift correction as this movie is the same as the calibration
# movie, every frame has a different z value.
daop.changeAttr("radius", 0.0)
daop.changeAttr("drift_correction", 0)
# Save the changed parameters.
daop.toXMLFile("measure.xml")
# -
# ### Analyze test movie with the z-calibration parameters.
# +
if os.path.exists("measure.hdf5"):
os.remove("measure.hdf5")
mfit.analyze("measure.tif", "measure.hdf5", "measure.xml")
# -
# Plot Wx / Wy versus Z curves for data from the test movie.
# +
[wx, wy, z, pixel_size] = z_cal.loadWxWyZData("measure.hdf5", "z_offsets.txt")
pyplot.scatter(z, wx, color = 'r')
pyplot.scatter(z, wy, color = 'b')
pyplot.show()
# -
# Check how well we did at fitting Z.
# +
import storm_analysis.sa_library.sa_h5py as saH5Py
# Create numpy arrays with the real and the measured z values.
measured_z = numpy.array([])
real_z = numpy.array([])
with saH5Py.SAH5Py("measure.hdf5") as h5:
for fnum, locs in h5.localizationsIterator(fields = ["category", "z"]):
# The z fit function will place all the localizations that are too
# far from the calibration curve into category 9.
mask = (locs["category"] != 9)
z = locs["z"][mask]
measured_z = numpy.concatenate((measured_z, z))
real_z = numpy.concatenate((real_z, numpy.ones(z.size)*z_offsets[fnum]))
# Plot
fig = pyplot.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.scatter(real_z, measured_z, s = 4)
ax.plot([-1.0,1.0],[-1.0,1.0], color = 'black', linewidth = 2)
ax.axis("equal")
ax.axis([-0.5, 0.5, -0.5, 0.5])
pyplot.xlabel("Actual Z (um)")
pyplot.ylabel("Measured Z (um)")
# -
# Change the tolerance for the distance from the calibration curve and redo the Z fit.
# +
import shutil
import storm_analysis.sa_utilities.fitz_c as fitz_c
import storm_analysis.sa_utilities.std_analysis as std_ana
m_params = params.ParametersDAO().initFromFile("measure.xml")
[wx_params, wy_params] = m_params.getWidthParams()
[min_z, max_z] = m_params.getZRange()
# Make a copy of the .hdf5 file as this operation will change it in place.
shutil.copyfile("measure.hdf5", "measure_copy.hdf5")
m_params.changeAttr("cutoff", 0.2)
print("cutoff is", m_params.getAttr("cutoff"))
# Re-fit z parameters.
fitz_c.fitz("measure_copy.hdf5", m_params.getAttr("cutoff"),
wx_params, wy_params, min_z, max_z, m_params.getAttr("z_step"))
# Mark out of range peaks as category 9. The range is specified by the min_z and max_z parameters.
std_ana.zCheck("measure_copy.hdf5", m_params)
# +
# Create numpy arrays with the real and the measured z values.
measured_z = numpy.array([])
real_z = numpy.array([])
with saH5Py.SAH5Py("measure_copy.hdf5") as h5:
for fnum, locs in h5.localizationsIterator(fields = ["category", "z"]):
# The z fit function will place all the localizations that are too
# far from the calibration curve into category 9.
mask = (locs["category"] != 9)
z = locs["z"][mask]
measured_z = numpy.concatenate((measured_z, z))
real_z = numpy.concatenate((real_z, numpy.ones(z.size)*z_offsets[fnum]))
# Plot
fig = pyplot.figure(figsize = (8,8))
ax = fig.add_subplot(1,1,1)
ax.scatter(real_z, measured_z, s= 4)
ax.plot([-1.0,1.0],[-1.0,1.0], color = 'black', linewidth = 2)
ax.axis("equal")
ax.axis([-0.5, 0.5, -0.5, 0.5])
pyplot.xlabel("Actual Z (um)")
pyplot.ylabel("Measured Z (um)")
jupyter_notebooks/dao3d_zcal.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Initial setup and file reading
# testing is using a local copy of the file.
# +
import pandas as pd
import ebcdic
import codecs
file = r'C:\PublicData\Texas\TXRRC\index\dbf900.ebc' ##Local storage location
##file origin: ftp://ftpe.rrc.texas.gov/shfwba/dbf900.ebc.gz
with open(file, 'rb') as fh:  # use a name other than 'ebcdic' so the imported module isn't shadowed
    data = fh.read()
ascii_txt = codecs.decode(data, 'cp1140')
# -
# ## Splitting records into manageable records to do work on them
# Need to verify this is the most efficient method
# +
split_records = []
n = 247 ##Unknown if this holds true for all versions of this file or for other files on TXRRC
for index in range(0, len(ascii_txt), n):
split_records.append(ascii_txt[index : index + n])
# -
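# The same fixed-width slicing pattern can be written as a small helper, which
# makes it easy to check that every chunk except possibly the last has exactly
# `n` characters. A self-contained sketch (the helper name is hypothetical):

```python
def split_fixed_width(text, n):
    """Slice text into consecutive n-character records; the last may be shorter."""
    return [text[i:i + n] for i in range(0, len(text), n)]

records = split_fixed_width('abcdefgh', 3)
print(records)  # ['abc', 'def', 'gh']
```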
# ## Testing the record order and formatting
API = None
ct = 0
for record in split_records[0:100]:
    if record.startswith('01'):
        ct += 1
API = record[2:10]
print('---------------------------------------------------')
print(API, ' Starting')
print(API, record[0:2])
print(record)
print('---')
# ### Checking the number of well records in the file. These may be inaccurate when multiple well bores share the same API8 used by TXRRC
API = None
APIs =[]
ct = 0
for record in split_records:
if record.startswith('01'):
ct+=1
APIs.append(record[2:10])
print(ct, len(APIs))
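# Comparing `ct` with the number of distinct APIs shows how many API8 values are
# reused across well bores. A sketch with hypothetical API strings (the real list
# comes from the '01' records above):

```python
from collections import Counter

# Hypothetical API8 values for illustration only.
APIs = ['00100001', '00100002', '00100001', '00100003']
counts = Counter(APIs)
duplicates = {api: n for api, n in counts.items() if n > 1}
print(len(APIs), len(counts), duplicates)
```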
# ## Start of record definition layout. Names from the manual.
# #### Definitions from https://www.rrc.texas.gov/media/41906/wba091_well-bore-database.pdf
# Notes for where the definition sheet was modified to better suit new formatting (e.g. date formats)
#
# Each item has (Name, starting character in record, number of characters after start, format)
# Note that the format type is borrowed from skylerbast/TxRRC_data and may not be appropriate for JSON or MySQL storage
# 
# ### First draft of definition sections for testing.
#
# #### Format types
# - pic_any - string<br>
# - pic_numeric - whole numbers (integer)<br>
# - pic_yyyymmdd - date in YYYYMMDD format<br>
# - pic_yyyymm - date in YYYYMM format<br>
# - pic_latlong - numbers in DDDDDDDDD format, need to be separated and converted to DDD.DDDDDD<br>
# - pic_coord - numbers in CCCCCCCCC format, need to be separated and converted to CCCCCCCC.C<br>
#
# more types may need to be defined later
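# A converter for these format types might look like the sketch below. It only
# covers the simpler types; the `pic_latlong` handling follows the PIC S9(3)V9(7)
# note above (implied decimal point, so divide by 10**7), and the helper name is
# hypothetical rather than part of the TXRRC spec.

```python
import datetime

def convert_field(raw, fmt):
    """Convert a raw fixed-width field according to its pic_* format type (sketch)."""
    raw = raw.strip()
    if fmt == 'pic_numeric':
        return int(raw) if raw else None
    if fmt == 'pic_yyyymmdd':
        if raw == '' or raw == '00000000':
            return None
        return datetime.datetime.strptime(raw, '%Y%m%d').date()
    if fmt == 'pic_latlong':
        # Implied decimal point: e.g. '320123456' -> 32.0123456 degrees.
        return int(raw) / 10 ** 7 if raw else None
    return raw  # pic_any and anything unhandled stay as strings

print(convert_field('00123', 'pic_numeric'))      # 123
print(convert_field('20210210', 'pic_yyyymmdd'))  # 2021-02-10
```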
# +
WBROOT_01 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WELL-BORE-API-ROOT',2,8,'pic_numeric'), ## Combines WB-API-CNTY and WB-API-UNIQUE
('WB-NXT-AVAIL-SUFFIX',10,2,'pic_numeric'),
('WB-NXT-AVAIL-HOLE-CHGE-NBR',12,2,'pic_numeric'),
('WB-FIELD-DISTRICT',14,2,'pic_numeric'),
('WB-RES-CNTY-CODE',16,3,'pic_numeric'),
('WB-ORIG-COMPL-DATE',20,8,'pic_yyyymmdd'), ##YYYYMMDD Combines WB-ORIG-COMPL-CENT&YY&MM&DD
('WB-TOTAL-DEPTH',28,5,'pic_numeric'),
('WB-VALID-FLUID-LEVEL',33,5,'pic_numeric'),
('WB-CERTIFICATION-REVOKED-DATE',38,8,'pic_yyyymmdd'), ##YYYYMMDD Combines WB-CERT-REVOKED-CC&YY&MM&DD
('WB-CERTIFICATION-DENIAL-DATE',46,8,'pic_yyyymmdd'), ##YYYYMMDD Combines WB-CERTIFICATION-DENIAL-CC&YY&MM&DD
('WB-DENIAL-REASON-FLAG',56,1,'pic_any'),
('WB-ERROR-API-ASSIGN-CODE',55,1,'pic_any'),
('WB-REFER-CORRECT-API-NBR',56,8,'pic_numeric'),
('WB-DUMMY-API-NUMBER',64,8,'pic_numeric'),
('WB-DATE-DUMMY-REPLACED',72,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-NEWEST-DRL-PMT-NBR',80,6,'pic_numeric'),
('WB-CANCEL-EXPIRE-CODE',86,1,'pic_any'),
('WB-EXCEPT-13-A',88,1,'pic_any'),
('WB-FRESH-WATER-FLAG',89,1,'pic_any'),
('WB-PLUG-FLAG',90,1,'pic_any'),
('WB-PREVIOUS-API-NBR',91,8,'pic_numeric'),
('WB-COMPLETION-DATA-IND',99,1,'pic_any'),
('WB-HIST-DATE-SOURCE-FLAG',100,1,'pic_numeric'),
('WB-EX14B2-COUNT',102,2,'pic_numeric'),
('WB-DESIGNATION-HB-1975-FLAG',104,1,'pic_any'),
('WB-DESIGNATION-EFFECTIVE-DATE',105,6,'pic_yyyymm'), ##YYYYMM Combines WB-DESIGNATION-EFFEC-CC&YY&MM
('WB-DESIGNATION-REVISED-DATE',111,6,'pic_yyyymm'), ##YYYYMM Combines WB-DESIGNATION-REVISED-CC&YY&MM
('WB-DESIGNATION-LETTER-DATE',117,8,'pic_yyyymmdd'), ##YYYYMMDD Combines WB-DESIGNATION-LETTER-CC&YY&MM&DD
('WB-CERTIFICATION-EFFECT-DATE',125,6,'pic_yyyymm'), ##YYYYMM Combines WB-CERTIFICATION-EFFEC-CC&YY&MM
('WB-WATER-LAND-CODE',131,1,'pic_any'),
('WB-TOTAL-BONDED-DEPTH',132,6,'pic_numeric'),
('WB-OVERRIDE-EST-PLUG-COST',138,7,'pic_numeric'),
('WB-SHUT-IN-DATE',145,6,'pic_numeric'), ##YYYYMM
('WB-OVERRIDE-BONDED-DEPTH',151,6,'pic_numeric'),
('WB-SUBJ-TO-14B2-FLAG',157,1,'pic_any'),
('WB-PEND-REMOVAL-14B2-FLAG',158,1,'pic_any'),
('WB-ORPHAN-WELL-HOLD-FLAG',159,1,'pic_any'),
('WB-W3X-WELL-FLAG',160,1,'pic_any')
]
WBCOMPL_02oil =[
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-OIL-CODE',2,1,'pic_any'), ##If gas use WBCOMPL_02gas
('WB-OIL-DIST',3,2,'pic_numeric'), ##If gas use WBCOMPL_02gas
('WB-OIL-LSE-NBR',5,5,'pic_numeric'), ##If gas use WBCOMPL_02gas
('WB-OIL-WELL-NBR',10,6,'pic_any'), ##If gas use WBCOMPL_02gas
('WB-GAS-DIST',16,2,'pic_numeric'),
('WB-GAS-WELL-NO',18,6,'pic_any'),
('WB-MULTI-WELL-REC-NBR',24,1,'pic_any'),
('WB-API-SUFFIX',25,2,'pic_numeric'),
('WB-ACTIVE-INACTIVE-CODE',45,1,'pic_any'),
('WB-DWN-HOLE-COMMINGLE-CODE',86,1,'pic_any'),
('WB-CREATED-FROM-PI-FLAG',121,1,'pic_any'),
('WB-RULE-37-NBR',122,7,'pic_numeric'),
('WB-P-15',155,1,'pic_any'),
('WB-P-12',156,1,'pic_any'),
('WB-PLUG-DATE-POINTER',157,8,'pic_numeric')
]
WBCOMPL_02gas =[
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-GAS-CODE',2,1,'pic_any'), ##If oil use WBCOMPL_02oil
('WB-GAS-RRC-ID',3,6,'pic_numeric'), ##If oil use WBCOMPL_02oil
('WB-GAS-DIST',16,2,'pic_numeric'),
('WB-GAS-WELL-NO',18,6,'pic_any'),
('WB-MULTI-WELL-REC-NBR',24,1,'pic_any'),
('WB-API-SUFFIX',25,2,'pic_numeric'),
('WB-ACTIVE-INACTIVE-CODE',45,1,'pic_any'),
('WB-DWN-HOLE-COMMINGLE-CODE',86,1,'pic_any'),
('WB-CREATED-FROM-PI-FLAG',121,1,'pic_any'),
('WB-RULE-37-NBR',122,7,'pic_numeric'),
('WB-P-15',155,1,'pic_any'),
('WB-P-12',156,1,'pic_any'),
('WB-PLUG-DATE-POINTER',157,8,'pic_numeric')
]
WBDATE_03 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-FILE-KEY',2,8,'pic_numeric'),
('WB-FILE-DATE',10,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-EXCEPT-RULE-11',26,1,'pic_any'),
('WB-CEMENT-AFFIDAVIT',27,1,'pic_any'),
('WB-G-5',28,1,'pic_any'),
('WB-W-12',29,1,'pic_any'),
('WB-DIR-SURVEY',30,1,'pic_any'),
('WB-W2-G1-DATE ',31,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-COMPL-DATE',39,8,'pic_yyyymmdd'), ##YYYYMMDD Combines WB-COMPL-CENTURY&YEAR&MONTH&DAY
('WB-DRL-COMPL-DATE',47,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUGB-DEPTH1',55,5,'pic_numeric'),
('WB-PLUGB-DEPTH2',60,5,'pic_numeric'),
('WB-WATER-INJECTION-NBR',65,6,'pic_any'),
('WB-SALT-WTR-NBR',71,5,'pic_numeric'),
('WB-REMARKS-IND',84,1,'pic_any'),
('WB-ELEVATION',85,4,'pic_numeric'),
('WB-ELEVATION-CODE',89,2,'pic_any'),
('WB-LOG-FILE-RBA',91,8,'pic_numeric'),
('WB-DOCKET-NBR',99,10,'pic_any'),
('WB-PSA-WELL-FLAG',109,1,'pic_any'),
('WB-ALLOCATION-WELL-FLAG',110,1,'pic_any')
]
WBRMKS_04 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-RMK-LNE-CNT',2,3,'pic_numeric'),
('WB-RMK-TYPE-CODE',5,1,'pic_any'),
('WB-REMARKS',6,70,'pic_any')
]
WBTUBE_05 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-SEGMENT-COUNTER ',2,3,'pic_numeric'),
('WB-TUBING-INCHES',5,2,'pic_numeric'),
('WB-FR-NUMERATOR',7,2,'pic_numeric'),
('WB-FR-DENOMINATOR',9,2,'pic_numeric'),
('WB-DEPTH-SET',11,5,'pic_numeric'),
('WB-PACKER-SET',16,5,'pic_numeric')
]
WBCASE_06 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-CASING-COUNT',2,3,'pic_numeric'),
('WB-CAS-INCH',5,2,'pic_numeric'),
('WB-CAS-FRAC-NUM',7,2,'pic_numeric'),
('WB-CAS-FRAC-DENOM',9,2,'pic_numeric'),
('WB-CAS-WT-TABLE',11,8,'pic_numeric'), ##ISSUE 03 WB-CASING-WEIGHT-LBS-FT REDEFINES WB-CAS-WT-TABLEOCCURS 2 TIMES.(05 WB-WGT-WHOLE PIC 9(03))(05 WB-WGT-TENTHS PIC 9(01))
('WB-CASING-DEPTH-SET',19,5,'pic_numeric'),
('WB-MLTI-STG-TOOL-DPTH',24,5,'pic_numeric'),
('WB-AMOUNT-OF-CEMENT',29,5,'pic_numeric'),
('WB-CEMENT-MEASUREMENT',34,1,'pic_any'), ##S = sacks, Y = yards, F = cubic feet
('WB-HOLE-INCH',35,2,'pic_numeric'),
('WB-HOLE-FRAC-NUM',37,2,'pic_numeric'),
('WB-HOLE-FRAC-DENOM',39,2,'pic_numeric'),
('WB-TOP-OF-CEMENT-CASING',42,7,'pic_any'),
('WB-AMOUNT-CASING-LEFT',49,5,'pic_numeric')
]
WBPERF_07 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-PERF-COUNT',2,3,'pic_numeric'),
('WB-FROM-PERF',5,5,'pic_numeric'),
('WB-TO-PERF',10,5,'pic_numeric'),
('WB-OPEN-HOLE-CODE',15,2,'pic_any') ##WB-OPEN-HOLE VALUE 'OH'
]
WBLINE_08 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-LINE-COUNT',2,3,'pic_numeric'),
('WB-LIN-INCH',5,2,'pic_numeric'),
('WB-LIN-FRAC-NUM',7,2,'pic_numeric'),
('WB-LIN-FRAC-DENOM',9,2,'pic_numeric'),
('WB-SACKS-OF-CEMENT',11,5,'pic_numeric'),
('WB-TOP-OF-LINER',16,5,'pic_numeric'),
('WB-BOTTOM-OF-LINER',21,5,'pic_numeric')
]
WBFORM_09 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-FORMATION-CNTR',2,3,'pic_numeric'),
('WB-FORMATION-NAME',5,32,'pic_any'),
('WB-FORMATION-DEPTH',37,5,'pic_numeric')
]
WBSQEZE_10 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-SQUEEZE-CNTR ',2,3,'pic_numeric'),
('WB-SQUEEZE-UPPER-DEPTH ',5,5,'pic_numeric'),
('WB-SQUEEZE-LOWER-DEPTH',10,5,'pic_numeric'),
('WB-SQUEEZE-KIND-AMOUNT ',17,50,'pic_any')
]
WBFRESH_11 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-FRESH-WATER-CNTR',2,3,'pic_numeric'),
('WB-TWDB-DATE',5,8,'pic_numeric'), #YYYYMMDD
('WB-SURFACE-CASING-DETER-CODE',13,1,'pic_any'), #WB-FIELD-RULE-CODE Y or N
('WB-UQWP-FROM',14,4,'pic_numeric'),
('WB-UQWP-TO',18,4,'pic_numeric')
]
WBOLDLOC_12 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-LEASE-NAME',2,32,'pic_any'),
('WB-SEC-BLK-SURVEY-LOC',34,52,'pic_any'),
('WB-WELL-LOC-MILES',86,4,'pic_numeric'),
('WB-WELL-LOC-DIRECTION',90,6,'pic_any'),
('WB-WELL-LOC-NEAREST-TOWN',96,13,'pic_any'),
('WB-DIST-FROM-SURVEY-LINES',137,28,'pic_any'),
('WB-DIST-DIRECT-NEAR-WELL',165,28,'pic_any')
]
WBNEWLOC_13 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-LOC-COUNTY',2,3,'pic_numeric'),
('WB-ABSTRACT',5,6,'pic_any'),
('WB-SURVEY',11,55,'pic_any'),
('WB-BLOCK-NUMBER',66,10,'pic_any'),
('WB-SECTION',76,8,'pic_any'),
('WB-ALT-SECTION',84,4,'pic_any'),
('WB-ALT-ABSTRACT',88,6,'pic_any'),
('WB-FEET-FROM-SUR-SECT-1',94,6,'pic_numeric'),
('WB-DIREC-FROM-SUR-SECT-1',100,13,'pic_any'),
('WB-FEET-FROM-SUR-SECT-2',113,6,'pic_numeric'),
('WB-DIREC-FROM-SUR-SECT-2',119,13,'pic_any'),
('WB-WGS84-LATITUDE',132,9,'pic_latlong'), ##PIC S9(3)V9(7) ##DDD.DDDDDDD ISSUE WITH characters at end of each section
('WB-WGS84-LONGITUDE',142,9,'pic_latlong'), ##PIC S9(3)V9(7) ##DDD.DDDDDDD ISSUE WITH characters at end of each section
('WB-PLANE-ZONE',157,2,'pic_numeric'),
('WB-PLANE-COORDINATE-EAST',159,9,'pic_coord'), ##PIC S9(8)V9(2) TX State plane ft NAD27 DDDDDDDD.DD ISSUE WITH characters at end of each section
('WB-PLANE-COORDINATE-NORTH',169,9,'pic_coord'), ##PIC S9(8)V9(2) TX State plane ft NAD27 DDDDDDDD.DD ISSUE WITH characters at end of each section
('WB-VERIFICATION-FLAG',177,1,'pic_any') #N = not verified, Y = verified, C = verified change
]
WBPLUG_14oil = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-DATE-W3-FILED',2,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-DATE-WELL-BORE-PLUGGED',10,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUG-TOTAL-DEPTH',18,5,'pic_numeric'),
('WB-PLUG-CEMENT-COMP',23,32,'pic_any'),
('WB-PLUG-MUD-FILLED',55,1,'pic_any'),
('WB-PLUG-MUD-APPLIED',56,12,'pic_any'),
('WB-PLUG-MUD-WEIGHT',68,3,'pic_numeric'),
('WB-PLUG-DRIL-PERM-DATE',75,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUG-DRIL-PERM-NO',83,6,'pic_numeric'),
('WB-PLUG-DRIL-COMP-DATE',89,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUG-LOG-ATTACHED',97,1,'pic_any'),
('WB-PLUG-LOG-RELEASED-TO',98,32,'pic_any'),
('WB-PLUG-TYPE-LOG',130,1,'pic_any'), ##D=Drillers E=Electric R=Radioactivity A=Acoustical-sonic F=Dril-and-Elec G=Elec-and-Radio H=radio-and-acous I=Dril-and-acous J=elec-and-acous K=dril-and-acous L=dril-elec-radio M=elec-radio-acous N=Dril-elec-acous O=dril-radio-acous P=dril-elec-radio-acous
('WB-PLUG-FRESH-WATER-DEPTH',131,5,'pic_numeric'),
('WB-PLUG-UWQP',136,40,'pic_any'), ##REDEFINES WB-PLUG-UWQP. WB-PLUG-FROM-UWQP OCCURS 4 TIMES PIC 9(05), WB-PLUG-TO-UWQP OCCURS 4 TIMES PIC 9(05)
('WB-PLUG-MATERIAL-LEFT',176,1,'pic_any'),
('WB-PLUG-OIL-CODE',177,1,'pic_any'), ##if gas use WBPLUG_14gas
('WB-PLUG-OIL-DIST',178,2,'pic_numeric'), ##if gas use WBPLUG_14gas
('WB-PLUG-OIL-LSE-NBR',180,5,'pic_numeric'), ##if gas use WBPLUG_14gas
('WB-PLUG-OIL-WELL-NBR',185,6,'pic_any'), ##if gas use WBPLUG_14gas
('WB-PLUG-GAS-DIST',191,2,'pic_numeric'),
('WB-PLUG-GAS-WELL-NO',193,6,'pic_any'),
('WB-PLUG-TYPE-WELL',199,1,'pic_any'), ##O=Oil, G=Gas, D=Dry, S=Service
('WB-PLUG-MULTI-COMPL-FLAG',200,1,'pic_any'),
('WB-PLUG-CEM-AFF',201,1,'pic_any'), ##Y=WB-PLUG-CA-FILED N=WB-PLUG-CA-NOT-FILED
('WB-PLUG-13A',202,1,'pic_any'), ##Y=WB-PLUG-13A-FILED N=WB-PLUG-13A-NOT-FILED
('WB-PLUG-LOG-RELEASED-DATE',203,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUG-LOG-FILE-RBA',211,8,'pic_numeric'),
('WB-STATE-FUNDED-PLUG-NUMBER',219,7,'pic_numeric')
]
WBPLUG_14gas = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-DATE-W3-FILED',2,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-DATE-WELL-BORE-PLUGGED',10,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUG-TOTAL-DEPTH',18,5,'pic_numeric'),
('WB-PLUG-CEMENT-COMP',23,32,'pic_any'),
('WB-PLUG-MUD-FILLED',55,1,'pic_any'),
('WB-PLUG-MUD-APPLIED',56,12,'pic_any'),
('WB-PLUG-MUD-WEIGHT',68,3,'pic_numeric'),
('WB-PLUG-DRIL-PERM-DATE',75,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUG-DRIL-PERM-NO',83,6,'pic_numeric'),
('WB-PLUG-DRIL-COMP-DATE',89,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUG-LOG-ATTACHED',97,1,'pic_any'),
('WB-PLUG-LOG-RELEASED-TO',98,32,'pic_any'),
('WB-PLUG-TYPE-LOG',130,1,'pic_any'), ##D=Drillers E=Electric R=Radioactivity A=Acoustical-sonic F=Dril-and-Elec G=Elec-and-Radio H=radio-and-acous I=Dril-and-acous J=elec-and-acous K=dril-and-acous L=dril-elec-radio M=elec-radio-acous N=Dril-elec-acous O=dril-radio-acous P=dril-elec-radio-acous
('WB-PLUG-FRESH-WATER-DEPTH',131,5,'pic_numeric'),
('WB-PLUG-UWQP',136,40,'pic_any'), ##REDEFINES WB-PLUG-UWQP. WB-PLUG-FROM-UWQP OCCURS 4 TIMES PIC 9(05), WB-PLUG-TO-UWQP OCCURS 4 TIMES PIC 9(05)
('WB-PLUG-MATERIAL-LEFT',176,1,'pic_any'),
('WB-PLUG-GAS-CODE',177,1,'pic_any'), ##If oil use WBPLUG_14oil
('WB-PLUG-GAS-RRC-ID',178,6,'pic_numeric'), ##If oil use WBPLUG_14oil
('WB-PLUG-GAS-DIST',191,2,'pic_numeric'),
('WB-PLUG-GAS-WELL-NO',193,6,'pic_any'),
('WB-PLUG-TYPE-WELL',199,1,'pic_any'), ##O=Oil, G=Gas, D=Dry, S=Service
('WB-PLUG-MULTI-COMPL-FLAG',200,1,'pic_any'),
('WB-PLUG-CEM-AFF',201,1,'pic_any'), ##Y=WB-PLUG-CA-FILED N=WB-PLUG-CA-NOT-FILED
('WB-PLUG-13A',202,1,'pic_any'), ##Y=WB-PLUG-13A-FILED N=WB-PLUG-13A-NOT-FILED
('WB-PLUG-LOG-RELEASED-DATE',203,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-PLUG-LOG-FILE-RBA',211,8,'pic_numeric'),
('WB-STATE-FUNDED-PLUG-NUMBER',219,7,'pic_numeric')
]
WBPLRMKS_15 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-PLUG-RMK-LNE-CNT',2,3,'pic_numeric'),
('WB-PLUG-RMK-TYPE-CODE',5,1,'pic_any'),
('WB-PLUG-REMARKS',6,70,'pic_any')
]
WBPLREC_16 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-PLUG-NUMBER',2,3,'pic_numeric'),
('WB-NBR-OF-CEMENT-SACKS',5,5,'pic_numeric'),
('WB-MEAS-TOP-OF-PLUG',10,5,'pic_numeric'),
('WB-BOTTOM-TUBE-PIPE-DEPTH',15,5,'pic_numeric'),
('WB-PLUG-CALC-TOP',20,5,'pic_numeric'),
('WB-PLUG-TYPE-CEMENT',25,6,'pic_any')
]
WBPLCASE_17 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-PLG-CAS-COUNTER',2,6,'pic_numeric'),
('WB-PLUG-CAS-INCH',8,2,'pic_numeric'),
('WB-PLUG-CAS-FRAC-NUM',10,2,'pic_numeric'),
('WB-PLUG-CAS-FRAC-DENOM',12,2,'pic_numeric'),
('WB-PLUG-WGT-WHOLE',14,3,'pic_numeric'),
('WB-PLUG-WGT-TENTHS',17,1,'pic_numeric'),
('WB-PLUG-AMT-PUT',18,5,'pic_numeric'),
('WB-PLUG-AMT-LEFT',23,5,'pic_numeric'),
('WB-PLUG-HOLE-INCH',28,2,'pic_numeric'),
('WB-PLUG-HOLE-FRAC-NUM',30,2,'pic_numeric'),
('WB-PLUG-HOLE-FRAC-DENOM',32,2,'pic_numeric')
]
WBPLPERF_18 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-PLUG-PERF-COUNTER',2,3,'pic_numeric'),
('WB-PLUG-FROM-PERF',5,5,'pic_numeric'),
('WB-PLUG-TO-PERF',10,5,'pic_numeric'),
('WB-PLUG-OPEN-HOLE-INDICATOR',15,1,'pic_any')
]
WBPLNAME_19 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-PLUG-FIELD-NO',2,8,'pic_numeric'),
('WB-PLUG-FIELD-NAME',10,32,'pic_any'),
('WB-PLUG-OPER-NO',42,6,'pic_any'),
('WB-PLUG-OPER-NAME',48,32,'pic_any'),
('WB-PLUG-LEASE-NAME',80,32,'pic_any')
]
WBDRILL_20 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-PERMIT-NUMBER',2,6,'pic_numeric')
]
WBWELLID_21oil = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-OIL',2,1,'pic_any'),
('WB-OIL-DISTRICT',3,2,'pic_numeric'),
('WB-OIL-LSE-NUMBER',5,5,'pic_numeric'),
('WB-OIL-WELL-NUMBER',10,6,'pic_any')
]
WBWELLID_21gas = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-GAS',2,1,'pic_any'),
('WB-GAS-RRCID',3,6,'pic_numeric')
]
WB14B2_22oil = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB14B2-OIL-CODE',2,1,'pic_any'), ##if gas use WB14B2_22gas
('WB14B2-OIL-DISTRICT',3,2,'pic_numeric'), ##if gas use WB14B2_22gas
('WB14B2-OIL-LEASE-NUMBER',5,5,'pic_numeric'), ##if gas use WB14B2_22gas
('WB14B2-OIL-WELL-NUMBER',10,6,'pic_any'), ##if gas use WB14B2_22gas
('WB14B2-APPLICATION-NUMBER',16,6,'pic_numeric'),
('WB14B2-GAS-DISTRICT',22,2,'pic_numeric'),
('WB14B2-EXT-STATUS-FLAG',24,1,'pic_any'), ##A=approved C=cancelled D=Denied E=Expired
('WB14B2-EXT-CANCELLED-REASON',25,1,'pic_any'), ##I=injection P=producing G=plugged S=Service O=cancelled-other
('WB14B2-EXT-APPROVED-DATE',26,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-EXT-APPROVED CENT+YEAR+MONTH+DAY
('WB14B2-EXT-EXP-DATE',34,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-EXT-EXP-CENT+YEAR+MONTH+DAY
('WB14B2-EXT-DENIED-DATE',42,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-EXT-DENIED-CENT+YEAR+MONTH+DAY
('WB14B2-EXT-HIST-DATE',50,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-EXT-HIST-CENT+YEAR+MONTH+DAY
('WB14B2-MECH-INTEG-VIOL-FLAG',58,1,'pic_any'),## WB14B2-MECH-INTEG-VIOL VALUE 'H'
('WB14B2-PLUG-ORDER-SF-HOLD-FLAG',59,1,'pic_any'), ## WB14B2-PLUG-ORDER-SF-HOLD VALUE 'E'
('WB14B2-POLLUTION-VIOL-FLAG',60,1,'pic_any'), ## WB14B2-POLLUTION-VIOLVALUE 'P'
('WB14B2-FIELD-OPS-HOLD-FLAG',61,1,'pic_any'), ## WB14B2-FIELD-OPS-HOLD VALUE 'F'
('WB14B2-H15-PROBLEM-FLAG',62,1,'pic_any'), ## WB14B2-H15-PROBLEM VALUE 'V'
('WB14B2-H15-NOT-FILED-FLAG',63,1,'pic_any'), ## WB14B2-H15-NOT-FILED VALUE 'X'
('WB14B2-OPER-DELQ-FLAG',64,1,'pic_any'), ## WB14B2-OPER-DELQ VALUE 'O'
('WB14B2-DISTRICT-HOLD-SFP-FLAG',65,1,'pic_any'), ## WB14B2-DISTRICT-HOLD-SFP VALUE 'T'
('WB14B2-DIST-SF-CLEAN-UP-FLAG',66,1,'pic_any'), ## WB14B2-DIST-SF-CLEAN-UP VALUE 'M'
('WB14B2-DIST-STATE-PLUG-FLAG',67,1,'pic_any'), ## WB14B2-DIST-STATE-PLUG VALUE 'K'
('WB14B2-GOOD-FAITH-VIOL-FLAG',68,1,'pic_any'), ## WB14B2-GOOD-FAITH-VIOL VALUE 'R'
('WB14B2-WELL-OTHER-VIOL-FLAG',69,1,'pic_any'), ## WB14B2-WELL-OTHER-VIOL VALUE 'Q'
('WB14B2-W3C-SURF-EQP-VIOL-FLAG',70,1,'pic_any'), ## WB14B2-W3C-SURF-EQUIP-VIOL VALUE 'S'
('WB14B2-W3X-VIOL-FLAG',71,1,'pic_any'), ## WB14B2-W3X-VIOL VALUE 'W'
('WB14B2-HB2259-W3X-PUB-ENT',79,1,'pic_any'),
('WB14B2-HB2259-W3X-10PCT',80,1,'pic_any'),
('WB14B2-HB2259-W3X-BONDING',81,1,'pic_any'),
('WB14B2-HB2259-W3X-H13-EOR',82,1,'pic_any'), ## WB14B2-HB2259-EOR-REJECTED VALUE 'R'
('WB14B2-HB2259-W3X-AOP',83,1,'pic_any'), ## WB14B2-HB2259-AOP-REJECTEDVALUE 'R'
('WB14B2-HB2259-W3X-MIT',84,1,'pic_any'), ## WB14B2-HB2259-MIT-REJECTED VALUE 'R'
('WB14B2-HB2259-W3X-ESCROW',85,1,'pic_any'), ## WB14B2-HB2259-ESCROW-REJECTED VALUE 'R'
('WB14B2-W3X-FILING-KEY',86,8,'pic_numeric'),
('WB14B2-W3X-AOP-RECEIVED-DATE',94,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB14B2-W3X-AOP-FEE-RCVD-DATE',102,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB14B2-W3X-H15-FEE-RCVD-DATE',110,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB14B2-W3X-ESCROW-FUNDS',118,5,'pic_numeric'), ##ISSUE WB14B2-W3X-ESCROW-FUNDS VALUE ZEROS PIC 9(05)V99.119 WB14B2-W3X-ESCROW-FUND-SPLIT REDEFINESWB14B2-W3X-ESCROW-FUNDS.07 WB14B2-W3X-ESCROW-FUND-WHOLE PIC 9(05).07 WB14B2-W3X-ESCROW-FUND-DECIMAL PIC 9(02).
('WB14B2-60-DAY-LETTER-SENT-FLAG',125,1,'pic_any'),
('WB14B2-W1X-36-NEEDS-BOND-FLAG',126,1,'pic_any'),
('WB14B2-W1X-36-TYPE-COVERAGE',127,1,'pic_any'),## WB14B2-W1X-36-BOND VALUE 'B' WB14B2-W1X-36-LOC VALUE 'L'
('WB14B2-W1X-36-AMT-FILED',128,9,'pic_numeric'),
('WB14B2-W1X-36-SURETY',137,5,'pic_numeric'),
('WB14B2-W1X-36-EXP-DATE',142,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-W1X-36-EXP-CENT+YEAR+MON+DAY
('WB14B2-W1X-36-BOND-NUMBER',150,20,'pic_any')
]
WB14B2_22gas = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB14B2-GAS-CODE',2,1,'pic_any'), ##if oil use WB14B2_22oil
('WB14B2-GAS-RRC-ID',3,6,'pic_numeric'), ##if oil use WB14B2_22oil
('WB14B2-APPLICATION-NUMBER',16,6,'pic_numeric'),
('WB14B2-GAS-DISTRICT',22,2,'pic_numeric'),
('WB14B2-EXT-STATUS-FLAG',24,1,'pic_any'), ##A=approved C=cancelled D=Denied E=Expired
('WB14B2-EXT-CANCELLED-REASON',25,1,'pic_any'), ##I=injection P=producing G=plugged S=Service O=cancelled-other
('WB14B2-EXT-APPROVED-DATE',26,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-EXT-APPROVED CENT+YEAR+MONTH+DAY
('WB14B2-EXT-EXP-DATE',34,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-EXT-EXP-CENT+YEAR+MONTH+DAY
('WB14B2-EXT-DENIED-DATE',42,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-EXT-DENIED-CENT+YEAR+MONTH+DAY
('WB14B2-EXT-HIST-DATE',50,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-EXT-HIST-CENT+YEAR+MONTH+DAY
('WB14B2-MECH-INTEG-VIOL-FLAG',58,1,'pic_any'),## WB14B2-MECH-INTEG-VIOL VALUE 'H'
('WB14B2-PLUG-ORDER-SF-HOLD-FLAG',59,1,'pic_any'), ## WB14B2-PLUG-ORDER-SF-HOLD VALUE 'E'
('WB14B2-POLLUTION-VIOL-FLAG',60,1,'pic_any'), ## WB14B2-POLLUTION-VIOLVALUE 'P'
('WB14B2-FIELD-OPS-HOLD-FLAG',61,1,'pic_any'), ## WB14B2-FIELD-OPS-HOLD VALUE 'F'
('WB14B2-H15-PROBLEM-FLAG',62,1,'pic_any'), ## WB14B2-H15-PROBLEM VALUE 'V'
('WB14B2-H15-NOT-FILED-FLAG',63,1,'pic_any'), ## WB14B2-H15-NOT-FILED VALUE 'X'
('WB14B2-OPER-DELQ-FLAG',64,1,'pic_any'), ## WB14B2-OPER-DELQ VALUE 'O'
('WB14B2-DISTRICT-HOLD-SFP-FLAG',65,1,'pic_any'), ## WB14B2-DISTRICT-HOLD-SFP VALUE 'T'
('WB14B2-DIST-SF-CLEAN-UP-FLAG',66,1,'pic_any'), ## WB14B2-DIST-SF-CLEAN-UP VALUE 'M'
('WB14B2-DIST-STATE-PLUG-FLAG',67,1,'pic_any'), ## WB14B2-DIST-STATE-PLUG VALUE 'K'
('WB14B2-GOOD-FAITH-VIOL-FLAG',68,1,'pic_any'), ## WB14B2-GOOD-FAITH-VIOL VALUE 'R'
('WB14B2-WELL-OTHER-VIOL-FLAG',69,1,'pic_any'), ## WB14B2-WELL-OTHER-VIOL VALUE 'Q'
('WB14B2-W3C-SURF-EQP-VIOL-FLAG',70,1,'pic_any'), ## WB14B2-W3C-SURF-EQUIP-VIOL VALUE 'S'
('WB14B2-W3X-VIOL-FLAG',71,1,'pic_any'), ## WB14B2-W3X-VIOL VALUE 'W'
('WB14B2-HB2259-W3X-PUB-ENT',79,1,'pic_any'),
('WB14B2-HB2259-W3X-10PCT',80,1,'pic_any'),
('WB14B2-HB2259-W3X-BONDING',81,1,'pic_any'),
('WB14B2-HB2259-W3X-H13-EOR',82,1,'pic_any'), ## WB14B2-HB2259-EOR-REJECTED VALUE 'R'
('WB14B2-HB2259-W3X-AOP',83,1,'pic_any'), ## WB14B2-HB2259-AOP-REJECTEDVALUE 'R'
('WB14B2-HB2259-W3X-MIT',84,1,'pic_any'), ## WB14B2-HB2259-MIT-REJECTED VALUE 'R'
('WB14B2-HB2259-W3X-ESCROW',85,1,'pic_any'), ## WB14B2-HB2259-ESCROW-REJECTED VALUE 'R'
('WB14B2-W3X-FILING-KEY',86,8,'pic_numeric'),
('WB14B2-W3X-AOP-RECEIVED-DATE',94,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB14B2-W3X-AOP-FEE-RCVD-DATE',102,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB14B2-W3X-H15-FEE-RCVD-DATE',110,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB14B2-W3X-ESCROW-FUNDS',118,5,'pic_numeric'), ##ISSUE WB14B2-W3X-ESCROW-FUNDS VALUE ZEROS PIC 9(05)V99.119 WB14B2-W3X-ESCROW-FUND-SPLIT REDEFINESWB14B2-W3X-ESCROW-FUNDS.07 WB14B2-W3X-ESCROW-FUND-WHOLE PIC 9(05).07 WB14B2-W3X-ESCROW-FUND-DECIMAL PIC 9(02).
('WB14B2-60-DAY-LETTER-SENT-FLAG',125,1,'pic_any'),
('WB14B2-W1X-36-NEEDS-BOND-FLAG',126,1,'pic_any'),
('WB14B2-W1X-36-TYPE-COVERAGE',127,1,'pic_any'),## WB14B2-W1X-36-BOND VALUE 'B' WB14B2-W1X-36-LOC VALUE 'L'
('WB14B2-W1X-36-AMT-FILED',128,9,'pic_numeric'),
('WB14B2-W1X-36-SURETY',137,5,'pic_numeric'),
('WB14B2-W1X-36-EXP-DATE',142,8,'pic_yyyymmdd'), ##YYYYMMDD WB14B2-W1X-36-EXP-CENT+YEAR+MON+DAY
('WB14B2-W1X-36-BOND-NUMBER',150,20,'pic_any')
]
WBH15_23 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-H15-DATE-KEY',2,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-H15-STATUS',10,1,'pic_any'), ##A=approved C=compliant D=delinquent N=not-approved P=approval-pending W=W3A-extension U=UIC E=no-test-proj-ext X=W1X-denied
('WB-H15-OPERATOR',11,6,'pic_numeric'),
('WB-H15-NEXT-TEST-DUE-DATE',17,6,'pic_yyyymm'), ##YYYYMM WB-NEXT-TEST-CCYY&MM
('WB-H15-DISTRICT',23,2,'pic_numeric'),
('WB-H15-FIELD',25,8,'pic_numeric'),
('WB-H15-HIST-WELLBORE-FLAG',33,1,'pic_any'), ##D=drilling C=early-compl
('WB-H15-HIST-WELL-DATE',34,8,'pic_yyyymmdd'), ##YYYYMMDD WB-H15-HIST-WELL-CCYYMMDD
('WB-H15-W1X-WELL',42,1,'pic_any'),
('WB-H15-OIL-GAS-CODE',43,1,'pic_any'), ##G=gas O=oil
('WB-H15-LEASE-NBR',44,5,'pic_numeric'),
('WB-H15-WELL-NBR',49,6,'pic_any'),
('WB-H15-GASID-NBR',55,6,'pic_numeric'),
('WB-H15-TEST-DATE',61,8,'pic_yyyymmdd'), ##YYYYMMDD WB-H15-TEST-CC&YY&MM&DD
('WB-H15-BASE-USABLE-WATER',69,6,'pic_numeric'),
('WB-H15-TYPE-TEST-FLAG',75,1,'pic_any'), ##F=H15-fluid-test M=H15-mech-integ-test
('WB-H15-TOP-OF-FLUID',76,6,'pic_numeric'),
('WB-H15-FLUID-TEST-FLAG',82,1,'pic_any'), ##W=wire S=Sonic V=visual O=other
('WB-H15-MECH-INTEG-TEST-FLAG',83,1,'pic_any'), ##H=hydraulic O=integ-other
('WB-H15-MECH-TEST-REASON-FLAG',84,1,'pic_any'), ##A=substitute B=14B2-require
('WB-H15-ALTERNATE-TEST-PERIOD',85,2,'pic_numeric'),
('WB-H15-OTHER-MIT-TEST-TYPE',87,20,'pic_any'),
('WB-H15-STATUS-DATE',107,8,'pic_numeric'), ##WB-H15-STATUS-CC&YY&MM&DD
('WB-H15-NO-DATE-WELL-FLAG',115,1,'pic_any'), ##Y=WB-H15-NO-DATE-WELL
('WB-H15-RECORD-FROM-EDI-FLAG',116,1,'pic_any'), ##Y=WB-H15-RECORD-FROM-EDI
('WB-H15-KEYED-DATE',117,8,'pic_yyyymmdd'), ##YYYYMMDD WB-H15-KEYED-CC&YY&MM&DD
('WB-H15-CHANGED-DATE',125,8,'pic_yyyymmdd'), ##YYYYMMDD WB-H15-CHANGED-CC&YY&MM&DD
('WB-H15-PREVIOUS-STATUS',133,1,'pic_any'),
('WB-H15-UIC-TEST-FLAG',134,1,'pic_any'), ##U=WB-H15-UIC-H5-TEST
('WB-H15-2YRS-APPROVED-FLAG',135,1,'pic_any'), ##Y=WB-H15-2YRS-APPROVED
('WB-H15-MAIL-HOLD-FLAG',136,1,'pic_any'), ##Y=WB-H15-MAIL-HOLD
('WB-H15-10YR-INACTIVE-FLAG',137,1,'pic_any'),
('WB-H15-W3X-WELL-FLAG',138,1,'pic_any') ##Y=WB-H15-W3X-WELL
]
WBH15RMK_24 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-H15-REMARK-KEY',2,3,'pic_numeric'),
('WB-H15-REMARK-TEXT',5,70,'pic_any')
]
WBSB126_25 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-SB126-DESIGNATION-FLAG',2,1,'pic_any'), ##A=auto-designated M=manual-designated
('WB-SB126-DESIG-EFFECTIVE-DATE',3,6,'pic_yyyymm'), ##YYYYMM WB-SB126-DESIG-EFFEC-CC&YY&MM
('WB-SB126-DESIG-REVISED-DATE',9,6,'pic_yyyymm'), ##YYYYMM WB-SB126-DESIG-REVISED-CC&YY&MM
('WB-SB126-DESIG-LETTER-DATE',15,8,'pic_yyyymmdd'), ##YYYYMMDD WB-SB126-DESIG-LETTER-CC&YY&MM&DD
('WB-SB126-CERT-EFFECT-DATE',23,6,'pic_yyyymm'), ##YYYMM WB-SB126-CERT-EFFEC-CC&YY&MM
('WB-SB126-CERT-REVOKED-DATE',29,8,'pic_yyyymmdd'), ##YYYYMMDD WB-SB126-CERT-REVOKED-CC&YY&MM&DD
('WB-SB126-CERT-DENIAL-DATE',37,8,'pic_yyyymmdd'), ##YYYYMMDD WB-SB126-CERT-DENIAL-CC$YY&MM&DD
('WB-SB126-DENIAL-REASON-FLAG',45,1,'pic_any') ##A=denied-auto M=denied-manual
]
WBDASTAT_26 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WBDASTAT-STAT-NUM',2,7,'pic_numeric'),
('WBDASTAT-UNIQ-NUM',9,2,'pic_numeric'),
('WBDASTAT-DELETED-FLAG',11,1,'pic_any')
]
WBW3C_27 = [
('RRC-TAPE-RECORD-ID',0,2,'pic_any'),
('WB-W3C-1YR-FLAG',2,1,'pic_any'),
('WB-W3C-1YR-FILED-DATE',3,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-W3C-1YR-FILING-OPER',11,6,'pic_numeric'),
('WB-W3C-5YR-FLAG',17,1,'pic_any'),
('WB-W3C-5YR-FILED-DATE',18,8,'pic_numeric'),
('WB-W3C-5YR-FILING-OPER',26,6,'pic_numeric'),
('WB-W3C-10YR-FLAG',32,1,'pic_any'),
('WB-W3C-10YR-FILED-DATE',33,8,'pic_numeric'),
('WB-W3C-10YR-FILING-OPER',41,6,'pic_numeric'),
('WB-W3C-14B2-REMOVAL-DATE',48,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-W3C-EXTENSION-FLAG',55,1,'pic_any'),
('WB-W3C-EXTENSION-DATE',56,8,'pic_yyyymmdd'), ##YYYYMMDD Combines WB-W3C-EXTENSION-YEAR&MONTH&DAY
('WB-W3C-5YR-FLAG-PREVIOUS',64,1,'pic_any'),
('WB-W3C-10YR-FLAG-PREVIOUS',65,1,'pic_any')
]
WB14B2RM_28 = [
('RRC-TAPE-RECORD_ID',0,2,'pic_any'),
('WB-14B2-RMK-LNE-CNT',2,3,'pic_numeric'),
('WB-14B2-RMK-DATE',5,8,'pic_yyyymmdd'), ##YYYYMMDD
('WB-14B2-RMK-USERID',13,8,'pic_any'),
('WB-14B2-REMARKS',21,66,'pic_any')
]
# -
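# With the layouts above, applying one to a record is a simple slicing loop.
# A sketch (the `parse_record` helper is hypothetical, not part of the TXRRC
# definitions; the sample record string is made up):

```python
def parse_record(record, layout):
    """Slice one fixed-width record according to a (name, start, length, format) layout."""
    return {name: record[start:start + length]
            for name, start, length, fmt in layout}

# Tiny hypothetical layout mirroring the WBDRILL_20 shape above.
layout = [('RRC-TAPE-RECORD-ID', 0, 2, 'pic_any'),
          ('WB-PERMIT-NUMBER', 2, 6, 'pic_numeric')]
parsed = parse_record('20123456', layout)
print(parsed)  # {'RRC-TAPE-RECORD-ID': '20', 'WB-PERMIT-NUMBER': '123456'}
```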
|
Notebooks/TXRRC dbf900.ebc parsing.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Gridplot: Visualize Multiple Graphs
#
# This example shows how to visualize multiple graphs using the gridplot.
# +
import graspologic
import numpy as np
# %matplotlib inline
# -
# ## Overlaying two sparse graphs using gridplot
#
# ### Simulate more graphs using weighted stochastic block models
# The 2-block model is defined as below:
#
# \begin{align*}
# P =
# \begin{bmatrix}0.25 & 0.05 \\
# 0.05 & 0.25
# \end{bmatrix}
# \end{align*}
#
# We generate two weighted SBMs where the weights are distributed from a discrete uniform(1, 10) and discrete uniform(2, 5).
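# As a point of reference, a weighted SBM sample can be sketched with plain
# numpy: draw each edge with its block probability, then assign it a
# discrete-uniform weight. This is an illustrative stand-in, not graspologic's
# implementation, and the function name is made up.

```python
import numpy as np

def sample_weighted_sbm(block_sizes, P, rng, low, high):
    """Sample an undirected weighted SBM; weights are discrete uniform on [low, high)."""
    n = sum(block_sizes)
    labels = np.repeat(np.arange(len(block_sizes)), block_sizes)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < P[labels[i], labels[j]]:
                A[i, j] = A[j, i] = rng.integers(low, high)
    return A

rng = np.random.default_rng(1)
P = np.array([[0.25, 0.05], [0.05, 0.25]])
A = sample_weighted_sbm([5, 5], P, rng, 1, 10)
print(A.shape)
```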
# +
from graspologic.simulations import sbm
n_communities = [50, 50]
p = np.array([[0.25, 0.05], [0.05, 0.25]])
wt = np.random.randint
wtargs = dict(low=1, high=10)
np.random.seed(1)
A_unif1 = sbm(n_communities, p, wt=wt, wtargs=wtargs)
wtargs = dict(low=2, high=5)
A_unif2 = sbm(n_communities, p, wt=wt, wtargs=wtargs)
# -
# ## Visualizing both graphs
# +
from graspologic.plot import gridplot
X = [A_unif1, A_unif2]
labels = ["Uniform(1, 10)", "Uniform(2, 5)"]
f = gridplot(X=X,
labels=labels,
title='Two Weighted Stochastic Block Models',
height=12,
font_scale=1.5)
|
docs/tutorials/plotting/gridplot.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # "Rock" Drop Lab
# ## PH 211 COCC
# ### <NAME> 2/10/2021
#
# This notebook is meant to provide tools and discussion to support data analysis and presentation as you generate your lab reports.
#
# [Rock Drop Lab](http://coccweb.cocc.edu/bemerson/PhysicsGlobal/Courses/PH211/PH211Materials/PH211Labs/PH211Labrockdrop.html) and [Rock Drop Lab Discussion](http://coccweb.cocc.edu/bemerson/PhysicsGlobal/Courses/PH211/PH211Materials/PH211Labs/PH211LabDrockdrop.html)
#
# In this lab we are gathering some data, entering the data into the notebook, plotting the data as a scatterplot, plotting a physics model of the "rock", and finally using the model to predict the height of an unknown object.
#
# Including images in your python notebooks seems valuable, but there are a range of challenges, one of which is the literalness of python. I referenced an image file with a .jpg suffix, and python didn't recognize the image because the file actually had a .JPG suffix. Usually such image files are not case sensitive, but here they are. If I reference a web location for the image, which is on my github, then the image can be found when the file is exported to html for later conversion to pdf.
#
# `<img src="https://raw.githubusercontent.com/smithrockmaker/PH211/master/images/COVIDRock.JPG" />`
#
# <img src="https://raw.githubusercontent.com/smithrockmaker/PH211/master/images/COVIDRock.JPG" />
#
# Here is the same image included from a local image folder on my computer. This works when I am working in Jupyterlab but 'breaks' when I export the file to html. Still working on a student facing solution.
#
# <img src="images/COVIDRock.JPG" />
#
# ## Dependencies
#
# This is where we load in the various libraries of python tools that are needed for the particular work we are undertaking.
#
# The new library from ```numpy``` is needed for creating a polynomial fit to the data later on. There are multiple versions of these modules for different purposes. This one feels best matched to our needs and experience.
#
# [numpy.polynomial.polynomial module](https://docs.scipy.org/doc/numpy/reference/routines.polynomials.polynomial.html)
#
# The following code cell will need to be run first before any other code cells.
import numpy as np
import matplotlib as mplot
import matplotlib.pyplot as plt
from numpy.polynomial import polynomial as ply
# ## Data Entry (Lists/Vectors) (Deliverable I)
#
# At this point you should be getting comfortable doing data entry. You should explain what the data is and how you gathered it in this markdown cell. You should also indicate the variability of your data (x and y). You are asked to figure out a way to present your data here in the markdown cell and **not** just as a list in the code below.
# +
timedata = [0., .45, .72, .95, 1.21, 1.33]
heightdata = [0., 1., 2., 3., 4., 5.]
# 2 ways to print out and check your data
print("flight time:",timedata)
print("height:",heightdata)
timedatalength = len(timedata)
heightdatalength = len(heightdata)
# length counts how many 'data points' in the list
print("number of data points (x):", timedatalength)
print("number of data points (y):", heightdatalength)
# -
# ***
# ## Lab Deliverable:
#
# Describe your data collection method in some detail so it could be reproduced by other researchers. Explain whether <0,0> is a real data point for this experiment and where you focused your attention as you gathered the data. Present your raw data completely and clearly in a markdown cell.
#
# Include a separate discussion of the variability of your data, which is the standard deviation divided by the mean, for each data point. Include the data that supports your statement. You **DO NOT** need to do this calculation for every data point but you must do so for at least one to get a sense of things. Given that you may be working alone I would expect a reasonable amount of variability.
#
# ***
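# As a concrete illustration of the variability calculation (std/mean, sometimes called the coefficient of variation), here is a short sketch using hypothetical repeated stopwatch times — substitute your own trials:

```python
import numpy as np

# Hypothetical repeated stopwatch times (s) for the same drop height
trial_times = np.array([0.74, 0.79, 0.76, 0.81, 0.75])

mean_time = trial_times.mean()
std_time = trial_times.std(ddof=1)    # sample standard deviation
variability = std_time / mean_time    # the 'variability' described above

print("mean (s):", mean_time)
print("std (s):", std_time)
print("variability (std/mean):", variability)
```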
# ### Data Plot
#
# If you are unsure what is happening here refer to earlier labs where it has been described in more detail.
#
#
# +
fig1, ax1 = plt.subplots()
ax1.scatter(timedata, heightdata)
# a more explicit way to set labels
plt.xlabel('drop time (s)', fontsize = 10)
plt.ylabel('drop height (m)', fontsize = 10)
plt.title('Calibration Data for Rock', fontsize = 20)
fig1.set_size_inches(10, 9)
ax1.grid()
#fig1.savefig("myplot.png")
plt.show()
# -
# ***
# ## Lab Deliverable (II):
#
# A plot of your raw data with your analysis of that data. Do any data points seem out of place? Does the data appear linear? If it curves, does the shape match your conceptual understanding of the setting? Imagine you are presenting this data to your engineering group and you need them to understand why it seems reasonable or not.
#
# ***
# ### Curve Fitting
#
# The new feature for this lab is fitting a polynomial curve to the data and trying to make sense of it.
#
# ```degree``` is the order of the polynomial as in degree = 2 => quadratic polynomial with 3 coefficients.
#
# [polynomial.polynomial.polyfit](https://docs.scipy.org/doc/numpy/reference/generated/numpy.polynomial.polynomial.polyfit.html)
#
# Read the documentation and see if you can figure out what is happening in this code block.
degree = 3
coefs = ply.polyfit(timedata, heightdata,degree)
print("Coefficients of polynomial fit:", coefs)
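# As a side note, ```ply.polyval``` can evaluate the fitted polynomial directly instead of typing out each coefficient by hand. A small sketch with made-up coefficients (your ```coefs``` from the fit above will differ):

```python
import numpy as np
from numpy.polynomial import polynomial as ply

# Illustrative coefficients only: height = 4.9*t**2 (an ideal rock, no drag)
demo_coefs = [0., 0., 4.9, 0.]

# polyval computes c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3 for each t
t = np.array([0.5, 1.0, 1.5])
print(ply.polyval(t, demo_coefs))   # [ 1.225  4.9   11.025]
```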
# ### Add the physics model...the curve fit and the ideal rock
#
# The model we will create here is not a linear model but it starts the same way by generating a set of 'x' values from which to generate the 'y' values given the curve fit generated above.
#
# It starts by defining a set of x values.```numpy.linspace()``` is a tool for doing this and because we did ```import numpy as np``` it shows in the code as ```np.linspace()```. Look back to previous labs if you need to refresh.
#
# [numpy.linspace documentation](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html)
#
# Because it's interesting to see and not hard to do when you have access to a notebook I've included the model of an ideal rock. You can worry if you want about whether gravity in Bend is different than sea level (it is a little) but the relationship of your data, the curve fit, and the ideal rock should make sense.
#
# +
# generate x values for model of data
maxtime = 2.2
numpoints = 20
modeltime = np.linspace(0.,maxtime,numpoints)
# create a model height list that matches the model time
modelheight = np.full_like(modeltime,0)
idealrock = np.full_like(modeltime,0)
# calculate the heights predicted from the model
modelheight = coefs[0] + coefs[1]*modeltime + \
coefs[2]* modeltime**2 + coefs[3]*modeltime**3
# calculate an ideal physics rock (no air drag and a = 9.81 m/s/s)
idealrock = 0.5*9.81*modeltime**2
# print("testing the output of the loop;", modelheight)
# -
# ***
# ## Lab Deliverable (II):
#
# The cell below illustrates how to generate the plot of your data, the behavior of an ideal physics rock, the polynomial fit to your data, and the drop time for your unknown height. Describe the important features of this plot and use markdown to show the polynomial fit with the coefficients.
#
# Compare each coefficient with those in the standard kinematic expression for the position of an object experiencing constant acceleration.
#
# $$\large x_f = x_0 + v_{x_0} t + \frac{1}{2} a_x t^2$$
#
# The coefficients of your polynomial fit have the same meaning as the terms above.
#
# ***
# +
fig2, ax2 = plt.subplots()
# This is the plot of the actual data
ax2.scatter(timedata, heightdata,
marker = 'x', color = 'green',
label = "data")
# This is the plot of your polynomial curve fit
ax2.plot(modeltime, modelheight,
color = 'red', linestyle = ':',
linewidth = 3., label = "model")
# This is the plot of an ideal physics rock
ax2.plot(modeltime, idealrock,
color = 'blue', linestyle = '-',
linewidth = 1., label = "ideal rock")
# This is the drop time you measured for your unknown height
# followed by a plot of a vertical line 'vlines' at that point
# You MAY need to change the 12 to a different number. See what
# happens when you do.
unknown_data = 1.87
ax2.vlines(unknown_data, 0, 12,
color = 'magenta', linestyle = '-',
linewidth = 2., label = "unknown drop")
# a more explicit way to set labels
plt.xlabel('drop time (s)', fontsize = 10)
plt.ylabel('drop height (m)', fontsize = 10)
plt.title('Experimental Data with Model', fontsize = 20)
fig2.set_size_inches(10, 9)
ax2.grid()
plt.legend(loc= 2)
plt.show()
# -
# ### Terminal Velocity? (Deliverable III)
#
# Does this data and the model suggest that your 'rock' has reached terminal velocity during this experiment? Why or why not? What would that terminal velocity be?
# ### Lab Deliverable (IV): Challenge Drop
#
# Begin by documenting what you measured for your 'challenge drop'. Where is it located and what were the conditions under which you made the measurement? Hopefully you were able to make multiple measurements to benefit from the effects of averaging.
#
# Present your measured drop time for the unknown height. From this data point and your model above predict the height of the unknown object and a numerical value for your uncertainty based on your data. I drew a line on the previous plot indicating the drop time for the unknown which you can of course edit. The intersection point on the plot should be consistent with the predicted height found below by plugging your drop time for the unknown height into the polynomial fit.
# +
predicted_height = coefs[0] + coefs[1]*unknown_data + \
coefs[2]* unknown_data**2 + coefs[3]*unknown_data**3
print("The predicted height of unknown drop is (m):", predicted_height)
# -
# ### Discussion: Deliverable IV
#
# Is there a way to check this result for reasonableness? Does it make sense from an examination of the environment? I'm interested in how you do a 'gut check' of your answer rather than just take what the code spits out. Do you feel any better looking at the plot rather than the output of the calculation?
# ## Reflection
#
# As usual I learned a bunch of new stuff in the process of creating this notebook as a framework for your lab report.
#
# The issue of embedding images in the notebook in such a way that they show in the pdf continues to be an aggravation. I will keep working on this.
#
# Thanks.
# ### Extensions
#
# Extensions are ideas that I didn't have time to explore or develop fully for this lab. These are offered as opportunities for students with more programming experience than is typical for students in the class.
#
#
# #### Second Plot that is zoomed in to the point of intersection
#
# I could just create another plot with the same functions but different axis limits but I wonder if there is a more clever way to do this.
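# One simple approach, sketched below with a stand-in curve: reuse the same plotting calls and restrict the axis limits with ```set_xlim```/```set_ylim``` so the figure shows only the neighborhood of the intersection.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")   # non-interactive backend so this sketch runs anywhere
import matplotlib.pyplot as plt

t = np.linspace(0., 2.2, 50)
h = 4.9 * t**2                  # stand-in for the polynomial model curve

figz, axz = plt.subplots()
axz.plot(t, h)
axz.vlines(1.87, 0, 25)         # the unknown-drop time from above
# Zoom in around the intersection by restricting the axis limits
axz.set_xlim(1.7, 2.0)
axz.set_ylim(12., 20.)
```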
#
|
Launch or Drop/.ipynb_checkpoints/RockDropLab21-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # TTS Inference Model Selection
#
# This notebook can be used to generate audio samples using either NeMo's pretrained models or after training NeMo TTS models. This notebook supports all TTS models and is intended to showcase different models and how their results differ.
# # License
#
# > Copyright 2020 NVIDIA. All Rights Reserved.
# >
# > Licensed under the Apache License, Version 2.0 (the "License");
# > you may not use this file except in compliance with the License.
# > You may obtain a copy of the License at
# >
# > http://www.apache.org/licenses/LICENSE-2.0
# >
# > Unless required by applicable law or agreed to in writing, software
# > distributed under the License is distributed on an "AS IS" BASIS,
# > WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# > See the License for the specific language governing permissions and
# > limitations under the License.
# + tags=[]
"""
You can either run this notebook locally (if you have all the dependencies and a GPU) or on Google Colab.
Instructions for setting up Colab are as follows:
1. Open a new Python 3 notebook.
2. Import this notebook from GitHub (File -> Upload Notebook -> "GITHUB" tab -> copy/paste GitHub URL)
3. Connect to an instance with a GPU (Runtime -> Change runtime type -> select "GPU" for hardware accelerator)
4. Run this cell to set up dependencies.
"""
BRANCH = 'main'
# # If you're using Google Colab and not running locally, uncomment and run this cell.
# # !apt-get install sox libsndfile1 ffmpeg
# # !pip install wget unidecode
# # !python -m pip install git+https://github.com/NVIDIA/NeMo.git@$BRANCH#egg=nemo_toolkit[all]
# -
# ## Models
#
# First we pick the models that we want to use. Currently supported models are:
#
# End-to-End Models:
# - [FastPitch_HifiGan_E2E](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_e2e_fastpitchhifigan)
# - [FastSpeech2_HifiGan_E2E](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_e2e_fastspeech2hifigan)
#
# Spectrogram Generators:
# - [Tacotron 2](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_tacotron2)
# - [Glow-TTS](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_glowtts)
# - [TalkNet](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_talknet)
# - [FastPitch](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_fastpitch)
# - [FastSpeech2](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_fastspeech_2)
# - [Mixer-TTS](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_lj_mixertts)
# - [Mixer-TTS-X](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_lj_mixerttsx)
#
# Audio Generators
# - [WaveGlow](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_waveglow_88m)
# - [SqueezeWave](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_squeezewave)
# - [UniGlow](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_uniglow)
# - [MelGAN](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_melgan)
# - [HiFiGAN](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_hifigan)
# - [UnivNet](https://ngc.nvidia.com/catalog/models/nvidia:nemo:tts_en_lj_univnet)
# - Griffin-Lim
# + tags=[]
from ipywidgets import Select, HBox, Label
from IPython.display import display
supported_e2e = ["fastpitch_hifigan", "fastspeech2_hifigan", None]
supported_spec_gen = ["tacotron2", "glow_tts", "talknet", "fastpitch", "fastspeech2", "mixertts", "mixerttsx", None]
supported_audio_gen = ["waveglow", "squeezewave", "uniglow", "melgan", "hifigan", "univnet", "griffin-lim", None]
print("Select the model(s) that you want to use. Please choose either 1 end-to-end model or 1 spectrogram generator and 1 vocoder.")
e2e_selector = Select(options=supported_e2e, value=None)
spectrogram_generator_selector = Select(options=supported_spec_gen, value=None)
audio_generator_selector = Select(options=supported_audio_gen, value=None)
display(HBox([e2e_selector, Label("OR"), spectrogram_generator_selector, Label("+"), audio_generator_selector]))
# +
e2e_model = e2e_selector.value
spectrogram_generator = spectrogram_generator_selector.value
audio_generator = audio_generator_selector.value
if e2e_model is None and spectrogram_generator is None and audio_generator is None:
raise ValueError("No models were chosen. Please return to the previous step and choose either 1 end-to-end model or 1 spectrogram generator and 1 vocoder.")
if e2e_model and (spectrogram_generator or audio_generator):
raise ValueError(
"An end-to-end model was chosen and either a spectrogram generator or a vocoder was also selected. For end-to-end models, please select `None` "
"in the second and third column to continue. For the two step pipeline, please select `None` in the first column to continue."
)
if (spectrogram_generator and audio_generator is None) or (audio_generator and spectrogram_generator is None):
raise ValueError("In order to continue with the two step pipeline, both the spectrogram generator and the audio generator must be chosen, but one was `None`")
# -
# ## Load model checkpoints
#
# Next we load the pretrained model provided by NeMo. All NeMo models have two functions to help with this:
#
# - list_available_models(): This function will return a list of all pretrained checkpoints for that model
# - from_pretrained(): This function will download the pretrained checkpoint, load it, and return an instance of the model
#
# Below we will use `from_pretrained` to load the chosen models from above.
# + tags=[]
from omegaconf import OmegaConf, open_dict
import torch
from nemo.collections.tts.models.base import SpectrogramGenerator, Vocoder, TextToWaveform
def load_spectrogram_model():
override_conf = None
from_pretrained_call = SpectrogramGenerator.from_pretrained
if spectrogram_generator == "tacotron2":
from nemo.collections.tts.models import Tacotron2Model
pretrained_model = "tts_en_tacotron2"
elif spectrogram_generator == "glow_tts":
from nemo.collections.tts.models import GlowTTSModel
pretrained_model = "tts_en_glowtts"
import wget
from pathlib import Path
if not Path("cmudict-0.7b").exists():
filename = wget.download("http://svn.code.sf.net/p/cmusphinx/code/trunk/cmudict/cmudict-0.7b")
filename = str(Path(filename).resolve())
else:
filename = str(Path("cmudict-0.7b").resolve())
conf = SpectrogramGenerator.from_pretrained(pretrained_model, return_config=True)
if "params" in conf.parser:
conf.parser.params.cmu_dict_path = filename
else:
conf.parser.cmu_dict_path = filename
override_conf = conf
elif spectrogram_generator == "talknet":
from nemo.collections.tts.models import TalkNetSpectModel
pretrained_model = "tts_en_talknet"
from_pretrained_call = TalkNetSpectModel.from_pretrained
elif spectrogram_generator == "fastpitch":
from nemo.collections.tts.models import FastPitchModel
pretrained_model = "tts_en_fastpitch"
elif spectrogram_generator == "fastspeech2":
from nemo.collections.tts.models import FastSpeech2Model
pretrained_model = "tts_en_fastspeech2"
elif spectrogram_generator == "mixertts":
from nemo.collections.tts.models import MixerTTSModel
pretrained_model = "tts_en_lj_mixertts"
elif spectrogram_generator == "mixerttsx":
from nemo.collections.tts.models import MixerTTSModel
pretrained_model = "tts_en_lj_mixerttsx"
else:
raise NotImplementedError
model = from_pretrained_call(pretrained_model, override_config_path=override_conf)
return model
def load_vocoder_model():
RequestPseudoInverse = False
TwoStagesModel = False
strict=True
if audio_generator == "waveglow":
from nemo.collections.tts.models import WaveGlowModel
pretrained_model = "tts_waveglow"
strict=False
elif audio_generator == "squeezewave":
from nemo.collections.tts.models import SqueezeWaveModel
pretrained_model = "tts_squeezewave"
elif audio_generator == "uniglow":
from nemo.collections.tts.models import UniGlowModel
pretrained_model = "tts_uniglow"
elif audio_generator == "melgan":
from nemo.collections.tts.models import MelGanModel
pretrained_model = "tts_melgan"
elif audio_generator == "hifigan":
from nemo.collections.tts.models import HifiGanModel
spectrogram_generator2ft_hifigan = {
"mixertts": "tts_en_lj_hifigan_ft_mixertts",
"mixerttsx": "tts_en_lj_hifigan_ft_mixerttsx"
}
pretrained_model = spectrogram_generator2ft_hifigan.get(spectrogram_generator, "tts_hifigan")
elif audio_generator == "univnet":
from nemo.collections.tts.models import UnivNetModel
pretrained_model = "tts_en_lj_univnet"
elif audio_generator == "griffin-lim":
from nemo.collections.tts.models import TwoStagesModel
cfg = {'linvocoder': {'_target_': 'nemo.collections.tts.models.two_stages.GriffinLimModel',
'cfg': {'n_iters': 64, 'n_fft': 1024, 'l_hop': 256}},
'mel2spec': {'_target_': 'nemo.collections.tts.models.two_stages.MelPsuedoInverseModel',
'cfg': {'sampling_rate': 22050, 'n_fft': 1024,
'mel_fmin': 0, 'mel_fmax': 8000, 'mel_freq': 80}}}
model = TwoStagesModel(cfg)
TwoStagesModel = True
else:
raise NotImplementedError
if not TwoStagesModel:
model = Vocoder.from_pretrained(pretrained_model, strict=strict)
return model
def load_e2e_model():
if e2e_model == "fastpitch_hifigan":
from nemo.collections.tts.models import FastPitchHifiGanE2EModel
pretrained_model = "tts_en_e2e_fastpitchhifigan"
elif e2e_model == "fastspeech2_hifigan":
from nemo.collections.tts.models import FastSpeech2HifiGanE2EModel
pretrained_model = "tts_en_e2e_fastspeech2hifigan"
else:
raise NotImplementedError
model = TextToWaveform.from_pretrained(pretrained_model)
return model
emodel = None
spec_gen = None
vocoder = None
if e2e_model:
emodel = load_e2e_model().eval().cuda()
else:
spec_gen = load_spectrogram_model().eval().cuda()
vocoder = load_vocoder_model().eval().cuda()
# -
# ## Inference
#
# Now that we have downloaded the model checkpoints and loaded them into memory, let's define a short `infer` helper function that takes a string and our models to produce speech.
#
# Notice that the NeMo TTS model interface is fairly simple and standardized across all models.
#
# End-to-end models have two helper functions:
# - parse(): Accepts raw python strings and returns a torch.tensor that represents tokenized text
# - convert_text_to_waveform(): Accepts a batch of tokenized text and returns a torch.tensor that represents a batch of raw audio
#
# Mel Spectrogram generators have two helper functions:
#
# - parse(): Accepts raw python strings and returns a torch.tensor that represents tokenized text
# - generate_spectrogram(): Accepts a batch of tokenized text and returns a torch.tensor that represents a batch of spectrograms
#
# Vocoders have just one helper function:
#
# - convert_spectrogram_to_audio(): Accepts a batch of spectrograms and returns a torch.tensor that represents a batch of raw audio
def infer(end2end_model, spec_gen_model, vocoder_model, str_input):
parser_model = end2end_model or spec_gen_model
with torch.no_grad():
parsed = parser_model.parse(str_input)
if end2end_model is None:
gen_spec_kwargs = {}
if spectrogram_generator == "mixerttsx":
gen_spec_kwargs["raw_texts"] = [str_input]
spectrogram = spec_gen_model.generate_spectrogram(tokens=parsed, **gen_spec_kwargs)
audio = vocoder_model.convert_spectrogram_to_audio(spec=spectrogram)
if audio_generator == "hifigan":
audio = vocoder_model._bias_denoise(audio, spectrogram).squeeze(1)
else:
spectrogram = None
audio = end2end_model.convert_text_to_waveform(tokens=parsed)[0]
if spectrogram is not None:
if isinstance(spectrogram, torch.Tensor):
spectrogram = spectrogram.to('cpu').numpy()
if len(spectrogram.shape) == 3:
spectrogram = spectrogram[0]
if isinstance(audio, torch.Tensor):
audio = audio.to('cpu').numpy()
return spectrogram, audio
# Now that everything is set up, let's give an input that we want our models to speak
text_to_generate = input("Input what you want the model to say: ")
spec, audio = infer(emodel, spec_gen, vocoder, text_to_generate)
# # Results
#
# After our model generates the audio, let's go ahead and play it. We can also visualize the spectrogram that was produced from the first stage model if a spectrogram generator was used.
# +
import IPython.display as ipd
import numpy as np
from PIL import Image
from matplotlib.pyplot import imshow
from matplotlib import pyplot as plt
ipd.Audio(audio, rate=22050)
# -
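# Beyond playing the audio inline, you may want to save it to disk. Below is a minimal sketch using only Python's standard-library ```wave``` module, assuming mono float samples in [-1, 1] at the 22050 Hz rate used in this notebook; the synthetic tone merely stands in for the ```audio``` array returned by ```infer()```.

```python
import wave
import numpy as np

def save_wav(path, samples, rate=22050):
    """Write mono float samples in [-1, 1] to a 16-bit PCM WAV file."""
    pcm = (np.clip(samples, -1.0, 1.0) * 32767).astype(np.int16)
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)          # 2 bytes = 16-bit samples
        f.setframerate(rate)
        f.writeframes(pcm.tobytes())

# Example with a 1-second 440 Hz tone (replace with the generated `audio`)
tone = np.sin(2 * np.pi * 440 * np.arange(22050) / 22050).astype(np.float32)
save_wav("sample.wav", tone)
```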
# %matplotlib inline
if spec is not None:
imshow(spec, origin="lower")
plt.show()
|
tutorials/tts/Inference_ModelSelect.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/JSJeong-me/CNN-Cats-Dogs/blob/main/4_2_aug_pretrained.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + colab={"base_uri": "https://localhost:8080/"} id="22mg_PXnCKsU" outputId="cb6278ab-b8ae-46ee-d8c8-ed2d83932e4a"
from google.colab import drive
drive.mount('/content/drive')
# + id="6giq8ls1A3jN"
# %matplotlib inline
# + id="B9GBtXmwA47m" colab={"base_uri": "https://localhost:8080/"} outputId="3e5a1825-df22-4ab2-e014-9929122f9004"
# !ls -l
# + id="FWj06i8uA6OO"
# !cp ./drive/MyDrive/training_data.zip .
# + id="AEJfrotXA58E"
# !unzip training_data.zip
# + id="O5M05p_pA5rd"
# + id="KPy5I5SBA3jW"
import glob
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array, array_to_img
# + id="eZImnfGOA3jX" colab={"base_uri": "https://localhost:8080/"} outputId="05513be5-4d7a-45ff-f187-0c11c2aa5e66"
IMG_DIM = (150, 150)
train_files = glob.glob('training_data/*')
train_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in train_files]
train_imgs = np.array(train_imgs)
train_labels = [fn.split('/')[1].split('.')[0].strip() for fn in train_files]
validation_files = glob.glob('validation_data/*')
validation_imgs = [img_to_array(load_img(img, target_size=IMG_DIM)) for img in validation_files]
validation_imgs = np.array(validation_imgs)
validation_labels = [fn.split('/')[1].split('.')[0].strip() for fn in validation_files]
print('Train dataset shape:', train_imgs.shape,
'\tValidation dataset shape:', validation_imgs.shape)
# + id="OiDa5o60A3jX"
train_imgs_scaled = train_imgs.astype('float32')
validation_imgs_scaled = validation_imgs.astype('float32')
train_imgs_scaled /= 255
validation_imgs_scaled /= 255
# + id="COLKZn79A3jY" colab={"base_uri": "https://localhost:8080/"} outputId="c364bdfe-3804-4cab-f8a5-02a115afc82f"
batch_size = 50
num_classes = 2
epochs = 150
input_shape = (150, 150, 3)
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(train_labels)
# encode the cat/dog class labels
train_labels_enc = le.transform(train_labels)
validation_labels_enc = le.transform(validation_labels)
print(train_labels[0:5], train_labels_enc[0:5])
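# As an aside, ```LabelEncoder``` simply maps the sorted unique labels to integers. A minimal numpy-only equivalent, shown with toy labels:

```python
import numpy as np

labels = np.array(['dog', 'cat', 'cat', 'dog', 'cat'])

# np.unique sorts the classes; return_inverse gives each label's class index,
# which matches the mapping produced by sklearn's LabelEncoder
classes, encoded = np.unique(labels, return_inverse=True)
print(classes)    # ['cat' 'dog']
print(encoded)    # [1 0 0 1 0]
```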
# + id="Ieppy25xFE_N"
# + id="j0wF41XlFEu9"
train_datagen = ImageDataGenerator( zoom_range=0.3, rotation_range=50, # rescale=1./255,
width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2,
horizontal_flip=True, fill_mode='nearest')
val_datagen = ImageDataGenerator() # rescale=1./255
train_generator = train_datagen.flow(train_imgs, train_labels_enc, batch_size=30)
val_generator = val_datagen.flow(validation_imgs, validation_labels_enc, batch_size=20)
# + id="wDDCzCWzFEDf"
# + id="9fVVMivMA3jZ"
from tensorflow.keras.applications import vgg16
from tensorflow.keras.models import Model
import tensorflow.keras
vgg = vgg16.VGG16(include_top=False, weights='imagenet',
input_shape=input_shape)
output = vgg.layers[-1].output
output = tensorflow.keras.layers.Flatten()(output)
vgg_model = Model(vgg.input, output)
vgg_model.trainable = False
for layer in vgg_model.layers:
layer.trainable = False
vgg_model.summary()
# + id="T9SDjC3EA3jZ" outputId="c9700f00-c6a8-4693-d572-458b3938bae5" colab={"base_uri": "https://localhost:8080/", "height": 717}
import pandas as pd
pd.set_option('display.max_colwidth', None)  # -1 is deprecated; None means no truncation
layers = [(layer, layer.name, layer.trainable) for layer in vgg_model.layers]
pd.DataFrame(layers, columns=['Layer Type', 'Layer Name', 'Layer Trainable'])
# + id="OvJvrtrgA3ja" colab={"base_uri": "https://localhost:8080/"} outputId="46899c19-2f44-47c9-8c43-7941889840c0"
print("Trainable layers:", vgg_model.trainable_weights)
# + id="J_Zbyst4A3ja" outputId="6ff932bc-52d6-4df4-da16-160e48d987de" colab={"base_uri": "https://localhost:8080/", "height": 304}
bottleneck_feature_example = vgg.predict(train_imgs_scaled[0:1])
print(bottleneck_feature_example.shape)
plt.imshow(bottleneck_feature_example[0][:,:,0])
# + id="_8LY2OOaA3jb"
def get_bottleneck_features(model, input_imgs):
features = model.predict(input_imgs, verbose=0)
return features
# + id="8sekjWocA3jb" outputId="bb54ac03-4660-4932-816f-2389c00facf1" colab={"base_uri": "https://localhost:8080/"}
train_features_vgg = get_bottleneck_features(vgg_model, train_imgs_scaled)
validation_features_vgg = get_bottleneck_features(vgg_model, validation_imgs_scaled)
print('Train Bottleneck Features:', train_features_vgg.shape,
'\tValidation Bottleneck Features:', validation_features_vgg.shape)
# + id="unerLhrzF2HL"
# + id="qyFijnDhGBjn" outputId="5535330b-a91f-42c9-d129-95ec983268c8" colab={"base_uri": "https://localhost:8080/"}
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout, InputLayer
from tensorflow.keras.models import Sequential
from tensorflow.keras import optimizers
model = Sequential()
model.add(vgg_model)
model.add(Dense(512, activation='relu'))  # input shape is inferred from the vgg_model output
model.add(Dropout(0.3))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer=optimizers.RMSprop(learning_rate=2e-5),
metrics=['accuracy'])
model.summary()
# + id="oXhFf4HLA3jc" outputId="e1242989-21ef-4f04-ca0f-e27cf7d0e943" colab={"base_uri": "https://localhost:8080/"}
history = model.fit(train_generator, epochs=30,
validation_data=val_generator, verbose=1)
# + id="UV-VPb-lA3jc"
# + id="P0Fk-e8wA3jd" colab={"base_uri": "https://localhost:8080/", "height": 308} outputId="d86bb56d-225d-4b52-c931-66f4ea237121"
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
t = f.suptitle('Pre-trained CNN (Transfer Learning) Performance', fontsize=12)
f.subplots_adjust(top=0.85, wspace=0.3)
epoch_list = list(range(1,31))
ax1.plot(epoch_list, history.history['accuracy'], label='Train Accuracy')
ax1.plot(epoch_list, history.history['val_accuracy'], label='Validation Accuracy')
ax1.set_xticks(np.arange(0, 31, 5))
ax1.set_ylabel('Accuracy Value')
ax1.set_xlabel('Epoch')
ax1.set_title('Accuracy')
l1 = ax1.legend(loc="best")
ax2.plot(epoch_list, history.history['loss'], label='Train Loss')
ax2.plot(epoch_list, history.history['val_loss'], label='Validation Loss')
ax2.set_xticks(np.arange(0, 31, 5))
ax2.set_ylabel('Loss Value')
ax2.set_xlabel('Epoch')
ax2.set_title('Loss')
l2 = ax2.legend(loc="best")
# + id="wHLV0XfKA3jd"
model.save('4-2-augpretrained_cnn.h5')
# + id="eBjX3kbpA3je"
# + id="3IaOWSbaA3je"
# + id="Q7SzUeawA3jn"
|
4_2_aug_pretrained.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Basket option implementation based on normal model
# %load_ext autoreload
# %autoreload 2
import numpy as np
from option_models import basket
from option_models import bsm
from option_models import normal
# +
# A trivial test case 1:
# one asset has 100% weight (the others zero)
# the case should be equivalent to the BSM or Normal model price
spot = np.ones(4) * 100
vol = np.ones(4) * 0.4
weights = np.array([1, 0, 0, 0])
divr = np.zeros(4)
intr = 0
cor_m = 0.5*np.identity(4) + 0.5
texp = 5
strike = 120
print(weights)
np.random.seed(123456)
price_basket = basket.basket_price_mc(strike, spot, vol*spot, weights, texp, cor_m, bsm=False)
# +
# Compare the price to normal model formula
norm1 = normal.NormalModel(vol=40)
price_norm = norm1.price(strike=120, spot=100, texp=texp, cp=1)
print(price_basket, price_norm)
# +
# A trivial test case 2
# all assets almost perfectly correlated:
# the case should be equivalent to the BSM or Normal model price
spot = np.ones(4) * 100
vol = np.ones(4) * 0.4
weights = np.ones(4) * 0.25
divr = np.zeros(4)
intr = 0
cor_m = 0.0001*np.identity(4) + 0.9999*np.ones((4,4))
texp = 5
strike = 120
print( cor_m )
np.random.seed(123456)
price_basket = basket.basket_price_mc(strike, spot, vol*spot, weights, texp, cor_m, bsm=False)
print(price_basket, price_norm)
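# For reference, the correlated shocks inside such a Monte Carlo routine are typically built with a Cholesky factor of the correlation matrix. A self-contained sketch (the course's ```basket``` module may organize this differently):

```python
import numpy as np

n_assets, n_paths = 4, 200000
cor_m = 0.5 * np.identity(n_assets) + 0.5     # same matrix as above

chol = np.linalg.cholesky(cor_m)              # cor_m = chol @ chol.T
z = np.random.default_rng(123456).standard_normal((n_assets, n_paths))
corr_z = chol @ z                             # correlated standard normals

# the sample correlation should sit close to 0.5 off the diagonal
print(np.corrcoef(corr_z)[0, 1])
```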
# +
# A full test set for basket option with exact price
spot = np.ones(4) * 100
vol = np.ones(4) * 0.4
weights = np.ones(4) * 0.25
divr = np.zeros(4)
intr = 0
cor_m = 0.5*np.identity(4) + 0.5
texp = 5
strike = 100
price_exact = 28.0073695
# -
cor_m
price_basket = basket.basket_price_mc(strike, spot, vol*spot, weights, texp, cor_m, bsm=False)
print(price_basket, price_exact)
# # [To Do] Basket option implementation based on BSM model
# ## Write the similar test for BSM
price_basket = basket.basket_price_mc(strike, spot, vol, weights, texp, cor_m, bsm=True)
print(price_basket)
# +
# A trivial test case 1:
# one asset has 100% weight (the others zero)
# the case should be equivalent to the BSM or Normal model price
spot = np.ones(4) * 100
vol = np.ones(4) * 0.4
weights = np.array([1, 0, 0, 0])
divr = np.zeros(4)
intr = 0
cor_m = 0.5*np.identity(4) + 0.5
texp = 5
strike = 120
print(weights)
np.random.seed(123456)
price_basket = basket.basket_price_mc(strike, spot, vol, weights, texp, cor_m, bsm=True)
# -
bsm1 = bsm.BsmModel(vol=0.4)
price_bsm = bsm1.price(strike=120, spot=100, texp=texp, cp=1)
print(price_basket, price_bsm)
# # Spread option implementation based on normal model
# +
# A full test set for spread option
spot = np.array([100, 96])
vol = np.array([0.2, 0.1])
weights = np.array([1, -1])
divr = np.array([1, 1])*0.05
intr = 0.1
cor_m = np.array([[1, 0.5], [0.5, 1]])
texp = 1
strike = 0
price_exact = 8.5132252
# +
# MC price based on normal model
# make sure that the prices are similar
np.random.seed(123456)
price_spread = basket.basket_price_mc(strike, spot, vol*spot, weights, texp, cor_m, intr=intr, divr=divr, bsm=False)
print(price_spread, price_exact)
# -
# # Spread option implementation based on BSM model
# Once the implementation is finished the BSM model price should also work
price_spread = basket.basket_price_mc(
strike, spot, vol*spot, weights, texp, cor_m, intr=intr, divr=divr, bsm=True)
# You can also test Kirk's approximation
price_kirk = basket.spread_price_kirk(strike, spot, vol, texp, 0.5, intr, divr)
print(price_kirk, price_spread)
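# For reference, Kirk's approximation prices a spread call by treating spot[1] + strike as a single lognormal asset. Below is a self-contained sketch (the course's ```spread_price_kirk``` may organize its arguments differently); for strike = 0 it reduces to Margrabe's exact exchange-option price:

```python
import numpy as np
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def kirk_spread_call(strike, spot, vol, texp, rho, intr, divr):
    """Kirk's approximation for a call on spot[0] - spot[1] - strike."""
    f1 = spot[0] * exp((intr - divr[0]) * texp)   # forward of asset 1
    f2 = spot[1] * exp((intr - divr[1]) * texp)   # forward of asset 2
    # shrink asset 2's vol by f2/(f2 + strike), then combine the two vols
    v2 = vol[1] * f2 / (f2 + strike)
    sig = sqrt(vol[0]**2 - 2 * rho * vol[0] * v2 + v2**2)
    d1 = (log(f1 / (f2 + strike)) + 0.5 * sig**2 * texp) / (sig * sqrt(texp))
    d2 = d1 - sig * sqrt(texp)
    return exp(-intr * texp) * (f1 * norm_cdf(d1) - (f2 + strike) * norm_cdf(d2))

# Same inputs as the spread test set above; should be close to 8.5132252
price = kirk_spread_call(0, [100, 96], [0.2, 0.1], 1, 0.5, 0.1, [0.05, 0.05])
print(price)
```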
# # [To Do] Complete the implementation of basket_price_norm_analytic
# # Compare the MC stdev of BSM basket prices from with and without CV
# The basket option example from above
spot = np.ones(4) * 100
vol = np.ones(4) * 0.4
weights = np.array([1, 0, 0, 0])
divr = np.zeros(4)
intr = 0
cor_m = 0.5*np.identity(4) + 0.5
texp = 5
strike = 120
### Make sure that the analytic normal price is correctly implemented
basket.basket_price_norm_analytic(strike, spot, vol*spot, weights, texp, cor_m, intr=intr, divr=divr)
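# If ```basket_price_norm_analytic``` is not implemented yet, the idea behind it is that under the normal model the basket is itself normally distributed, so a Bachelier formula applies. A hedged sketch (the course's signature and dividend handling may differ; zero dividends assumed here):

```python
import numpy as np
from math import erf, sqrt, exp, pi

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def norm_pdf(x):
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

def bachelier_basket_call(strike, spot, vol, weights, texp, cor_m, intr=0.0):
    """Analytic basket call under the normal (Bachelier) model."""
    fwd = np.asarray(spot) * exp(intr * texp)     # zero dividends assumed
    basket_fwd = np.dot(weights, fwd)
    cov = np.diag(vol) @ cor_m @ np.diag(vol)     # covariance of the shocks
    std = sqrt(weights @ cov @ weights) * sqrt(texp)
    d = (basket_fwd - strike) / std
    return exp(-intr * texp) * ((basket_fwd - strike) * norm_cdf(d) + std * norm_pdf(d))

# Trivial check: 100% weight on one asset is a single-asset Bachelier price
p = bachelier_basket_call(120, np.ones(4) * 100, np.ones(4) * 40,
                          np.array([1., 0., 0., 0.]), 5, 0.5 * np.identity(4) + 0.5)
print(p)
```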
# +
# Run below about 100 times and get the mean and stdev
### Returns 2 prices, without CV and with CV
price_baskets = []
for i in range(100):
price_basket = basket.basket_price_mc_cv(strike, spot, vol, weights, texp, cor_m)
price_baskets.append(price_basket)
price_baskets = np.array(price_baskets)
print(price_baskets.shape)
# -
# prices without CV
print('prices without CV')
print("mean:{mean}, std:{std}".format(mean = price_baskets[:,0].mean(), std = price_baskets[:,0].std()))
# prices with CV
print('prices with CV')
print("mean:{mean}, std:{std}".format(mean = price_baskets[:,1].mean(), std = price_baskets[:,1].std()))
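# To see why the control variate (CV) shrinks the standard deviation, here is a generic toy demo unrelated to the basket code: estimating E[exp(Z)] = e^0.5 with Z itself (known mean 0) as the control:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
z = rng.standard_normal(n)
y = np.exp(z)                             # E[y] = e^0.5 ~ 1.6487

beta = np.cov(y, z)[0, 1] / np.var(z)     # near-optimal CV coefficient
y_cv = y - beta * (z - 0.0)               # E[z] = 0 is known exactly

print("plain   mean / std err:", y.mean(), y.std() / np.sqrt(n))
print("with CV mean / std err:", y_cv.mean(), y_cv.std() / np.sqrt(n))
```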
|
py/HW2_xlt/TestCode_BasketSpread.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Days 9 Class Exercises: Seaborn
# For these class exercises, we will be using a wine quality dataset which was obtained from this URL:
# http://mlr.cs.umass.edu/ml/machine-learning-databases/wine-quality. The data for these exercises can be found in the `data` directory of this repository.
#
# <span style="float:right; margin-left:10px; clear:both;"></span> Additionally, with these class exercises we learn a few new things. When new knowledge is introduced you'll see the icon shown on the right:
#
# ## Get Started
# Import the Numpy, Pandas, Matplotlib (matplotlib magic) and Seaborn.
# +
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
# -
# ## Exercise 1. Explore the data
# First, read about this dataset from the file [../data/winequality.names](../data/winequality.names)
# Next, read in the file named `winequality-red.csv`. This data, despite the `csv` suffix, is separated using a semicolon.
wine = pd.read_csv("..//data/winequality-red.csv", sep=';')
wine.head()
# How many samples (observations) do we have?
wine.shape
# Are the data types for the columns in the dataframe appropriate for the type of data in each column?
wine.dtypes
# Any missing values?
wine.isna().sum()
wine.duplicated().sum()
# ## Exercise 2: Explore the Data
# The quality column contains our expected outcome. Wines scored as 0 are considered very bad and wines scored as 10 are very excellent. Plot a bargraph to see how many samples there are for each quality of wine.
#
# **Hints**:
# - Use the [pd.Series.value_counts()](https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html) function to count the number of values
# - Pandas DataFrames and Series have built-in plotting functions that use Matplotlib. Therefore, we can use the [pd.Series.plot.bar()](https://pandas.pydata.org/docs/reference/api/pandas.Series.plot.bar.html) function to simplify use of matplotlib.
wine['quality'].value_counts(sort = False).plot.bar();
# Now use Matplotlib functionality to recreate the plot (no need to color each bar)
qcounts = wine['quality'].value_counts(sort=False)
fig = plt.figure()
plt.bar(qcounts.index, qcounts.values);
# Recreate the bargraph using Seaborn
sns.countplot(x='quality', data=wine);
# Describe the data for all of the columns in the dataframe. This includes our physicochemical measurements (independent data) as well as the quality data (dependent).
# Visualizing the data can sometimes help to better understand its limits. Create a single figure that contains boxplots for each of the data columns. Use the [seaborn.boxplot()](https://seaborn.pydata.org/generated/seaborn.boxplot.html) function to do this:
#
# <span style="float:right; margin-left:10px; clear:both;"></span>In our plot, the axis labels are squished together and many of the box plots are too hard to see because all of them share the same y-axis coordinate system. Unfortunately, not all Seaborn functions provide arguments to control the height and width of a plot, and the `boxplot` function is one of them. However, remember that Seaborn uses Matplotlib! So, we can use Matplotlib functions to set the figure size using a command such as:
#
# ```python
# plt.figure(figsize=(10, 6))
# ```
# Where the first number is the width and the second number is the height. Repeat the plot from the previous cell but add this line of code just above the figure.
# <span style="float:right; margin-left:10px; clear:both;"></span> Unfortunately, we are still unable to read some of the x-axis labels. But we can use Matplotlib to correct this. When calling a Seaborn plot function, it will return the Matplotlib axis object. We can then call functions on the axis, such as the [set_xticklabels](https://matplotlib.org/stable/api/_as_gen/matplotlib.axes.Axes.set_xticklabels.html) function. That function allows us to set a rotation on the axis tick labels via its `rotation` argument. For example, the following function call on an axis object named `g` will reset the tick labels (using the `get_xticklabels()` function) and set a rotation of 45 degrees.
#
# ```python
# g.set_xticklabels(g.get_xticklabels(), rotation=45);
# ```
#
# Try it on the wine data boxplot:
# The boxplots for some of the measurements are too squished to view their distribution. The [seaborn.FacetGrid()](https://seaborn.pydata.org/generated/seaborn.FacetGrid.html) function can help. It allows us to divide our data into different panels of the same figure. But, it requires that our data be in tidy format.
# <span style="float:right; margin-left:10px; clear:both;"></span>Using `FacetGrid` we can divide up our plots into rows and columns using variables. Here are a few important arguments that can be passed to the `FacetGrid` function.
#
# - **data**: Tidy (“long-form”) dataframe where each column is a variable and each row is an observation.
# - **row**, **col**: Variables that define subsets of the data, which will be drawn on separate facets in the grid.
# - **col_wrap**: “Wrap” the column variable at this width, so that the column facets span multiple rows. Incompatible with a row facet.
# - **sharex**, **sharey**: If true, the facets will share y axes across columns and/or x axes across rows.
#
# We have two variables in our tidy wine data set: "quality" and the "measurement". We want to create a separate boxplot for each measurement regardless of quality in this case we can either have a grid of 1 column or 1 row. It is your choice.
#
# After you've created a `FacetGrid` you must then tell the grid what type of plot you want to draw. This is performed using the [map](https://seaborn.pydata.org/generated/seaborn.FacetGrid.map.html#seaborn.FacetGrid.map) function of the `seaborn.axisgrid.FacetGrid` object. Let's walk through a demonstration to see how it works.
#
# First, import the tips dataset:
#
# ```python
# tips = sns.load_dataset('tips')
# tips.head()
# ```
# Next create a `FacetGrid` that will divide the data by meal time and sex
#
# ```python
# g = sns.FacetGrid(tips, col="time", row="sex")
# ```
# Notice the result is an empty grid. Now we need to indicate the type of plot we want to draw. For this example, we'll draw a `sns.scatterplot` plot. When we call the `map` function, any arguments given get passed to the scatterplot function:
#
# ```python
# g = sns.FacetGrid(tips, col="time", row="sex")
# g.map(sns.scatterplot, "total_bill", "tip");
# ```
# Now, lets use a `FacetGrid` to create boxplots for each measurement in separate facet. Do the following
# 1. Tidy the wine data. Be sure to keep the `quality` column as is, and melt the others into a single column named `measurement`
# 2. Unlike the tip data, we only have one variable we want to calculate boxplots for: measurement. We do not want to create box plots for measurement and quality. So, we only need one row of plots.
# 3. Make the row of plots span 2 rows so we can see them more easily.
# 4. Make sure that each boxplot does not share the x-axis coordinates with all other boxplots.
# Redo the FacetGrid plot but use the [seaborn.violinplot](https://seaborn.pydata.org/generated/seaborn.violinplot.html) instead.
#
# Redo the FacetGrid plot but with the [seaborn.swarmplot](https://seaborn.pydata.org/generated/seaborn.swarmplot.html) instead. Be sure to set the `size` argument for the swarmplot to 1.
#
# **Note**: this may take a while to create.
# Next, let's look for columns that might show correlation with other columns. Colinear data can be problematic for some analyses. Use the Seaborn [seaborn.pairplot](https://seaborn.pydata.org/generated/seaborn.pairplot.html) function to do this.
#
# Be sure to:
#
# - Color each point with the quality value.
# - Use the 'tab10' palette for coloring
#
# **Note**: this may take a while to create.
# Do you see any measurement types that are correlated?
# Perform correlation analysis on the data columns. Exclude the `quality` column from the correlation analysis.
# Use the [seaborn.heatmap](https://seaborn.pydata.org/generated/seaborn.heatmap.html) function to create a heatmap of the correlation values between data columns.
#
# Be sure to:
# - Make sure the color values span from -1 to 1 (i.e., they are not inferred from the data).
# - Show the correlation values in the cells of the heatmap
# - Make the figure large enough to read the correlation values
# - Make the cells of the heatmap square
|
class_exercises/.ipynb_checkpoints/D09-Seaborn-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={} colab_type="code" id="MJG1tV3p-8EM"
import numpy as np
import gym
import time
env = gym.make('LunarLander-v2')
# + colab={} colab_type="code" id="K-dCPXDq_Gj1"
import torch
import torch.nn as nn
import torch.optim as optim
class DQN(nn.Module):
def __init__(self, in_features, n_actions):
super(DQN,self).__init__()
self.neuralnet = nn.Sequential(
nn.Linear(in_features,256),
nn.ReLU(),
nn.Linear(256,128),
nn.ReLU(),
nn.Linear(128,64),
nn.ReLU(),
nn.Linear(64,n_actions)
)
def forward(self,x):
return self.neuralnet(x)
# + colab={} colab_type="code" id="B6zGowe__ac8"
import collections
class ExperienceBuffer():
def __init__(self,capacity):
self.exp_buffer = collections.deque(maxlen=capacity)
def append(self,exp):
self.exp_buffer.append(exp)
def __len__(self):
return len(self.exp_buffer)
def clear(self):
self.exp_buffer.clear()
def sample(self,batch_size):
        indices = np.random.choice(len(self.exp_buffer), batch_size, replace=False)
states,actions,rewards,dones,next_states = zip(*[self.exp_buffer[i] for i in indices])
return np.array(states),np.array(actions),np.array(rewards, dtype=np.float32),np.array(dones,dtype=np.uint8),np.array(next_states)
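# A quick self-contained check of the deque-based buffer logic (the class is re-declared here so the snippet runs on its own; `replace=False` avoids sampling the same transition twice in one batch):

```python
import collections
import numpy as np

class ExperienceBuffer:
    def __init__(self, capacity):
        self.exp_buffer = collections.deque(maxlen=capacity)

    def append(self, exp):
        self.exp_buffer.append(exp)

    def __len__(self):
        return len(self.exp_buffer)

    def sample(self, batch_size):
        indices = np.random.choice(len(self.exp_buffer), batch_size, replace=False)
        states, actions, rewards, dones, next_states = zip(*[self.exp_buffer[i] for i in indices])
        return (np.array(states), np.array(actions),
                np.array(rewards, dtype=np.float32),
                np.array(dones, dtype=np.uint8), np.array(next_states))

buf = ExperienceBuffer(capacity=5)
for t in range(8):  # capacity is 5, so the 3 oldest transitions drop out
    buf.append(([t, t], t % 4, float(t), t == 7, [t + 1, t + 1]))

states, actions, rewards, dones, next_states = buf.sample(3)
```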
# + colab={} colab_type="code" id="G8ZC4Jbc_oo6"
class Agent():
def __init__(self,env,buffer):
self.env = env
self.buffer = buffer
self._reset()
def _reset(self):
self.state = env.reset()
self.total_rewards = 0.0
def step(self, net, eps, device="cpu"):
done_reward= None
if np.random.random() < eps:
action = env.action_space.sample()
else:
state_prev = torch.tensor(self.state).to(device)
action = int(torch.argmax(net(state_prev).to(device)))
        state_prev = self.state
        reward_sum = 0.0
        done = False
        for _ in range(4):  # repeat the chosen action for 4 frames
            self.state, reward, done, info = env.step(action)
            reward_sum += reward
            self.total_rewards += reward
            if done:
                break
        self.buffer.append((state_prev, action, reward_sum, done, self.state))
if done:
done_reward = self.total_rewards
self._reset()
return done_reward
# + colab={} colab_type="code" id="U5sWRHaU_fD7"
GAMMA = 0.99
EPSILON_START = 1
EPSILON_FINAL = 0.01
EPSILON_DECAY_OBS = 10**5
BATCH_SIZE = 32
MEAN_GOAL_REWARD = 250
REPLAY_BUFFER_SIZE = 10000
REPLAY_MIN_SIZE = 10000
LEARNING_RATE= 1e-4
SYNC_TARGET_OBS = 1000
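# The training loop below decays epsilon linearly from `EPSILON_START` to `EPSILON_FINAL` over `EPSILON_DECAY_OBS` observations; the schedule in isolation:

```python
EPSILON_START, EPSILON_FINAL, EPSILON_DECAY_OBS = 1.0, 0.01, 10**5

def epsilon_at(step):
    # Linear decay from EPSILON_START, clamped at EPSILON_FINAL.
    return max(EPSILON_FINAL, EPSILON_START - step / EPSILON_DECAY_OBS)
```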
# + colab={} colab_type="code" id="iEc5pGIqAGB8"
def cal_loss(batch, net, tgt_net, device='cpu'):
states,actions,rewards,dones,next_states = batch
states_v = torch.tensor(states).to(device)
actions_v = torch.tensor(actions).to(device)
rewards_v = torch.tensor(rewards).to(device)
dones_v = torch.ByteTensor(dones).to(device)
next_states_v = torch.tensor(next_states).to(device)
    Q_val = net(states_v).gather(1, actions_v.unsqueeze(-1)).squeeze(-1)  # select the Q value of the action actually taken
    Q_val_next = tgt_net(next_states_v).max(1)[0]  # maximum next-state Q value for each sample
    Q_val_next[dones_v] = 0.0  # zero the value of terminal next states
    Q_val_next = Q_val_next.detach()  # detach from the current graph
    expected_return = rewards_v + GAMMA * Q_val_next  # Bellman target
return nn.MSELoss()(Q_val,expected_return)
# + colab={"base_uri": "https://localhost:8080/", "height": 1482} colab_type="code" id="NdXuc3WBAjbi" outputId="a566c52e-d710-4d8c-df94-a7d76324ebe7"
device = torch.device("cuda" if torch.cuda.is_available() else 'cpu')
net = DQN(env.observation_space.shape[0],env.action_space.n).to(device)
tgt_net = DQN(env.observation_space.shape[0],env.action_space.n).to(device)
buffer= ExperienceBuffer(REPLAY_BUFFER_SIZE)
agent = Agent(env,buffer)
epsilon = EPSILON_START
optimizer = optim.Adam(net.parameters(),lr=LEARNING_RATE)
total_rewards= []
ts = time.time()
best_mean_reward= None
obs_id = 0
while True:
obs_id +=1
epsilon = max(EPSILON_FINAL, EPSILON_START - obs_id/EPSILON_DECAY_OBS)
reward = agent.step(net,epsilon,device=device)
if reward is not None:
total_rewards.append(reward)
game_time = time.time() - ts
ts = time.time()
mean_reward= np.mean(total_rewards[-100:])
        if best_mean_reward is None or best_mean_reward < mean_reward:
            torch.save(net.state_dict(), 'checkpoints/lunar_lander-best.dat')
            if best_mean_reward is None:
                last = mean_reward
                best_mean_reward = mean_reward
            if best_mean_reward is not None and best_mean_reward - last > 10:
                last = best_mean_reward
                print("GAME : {}, TIME ELAPSED : {}, EPSILON : {}, MEAN_REWARD : {}".format(obs_id, game_time, epsilon, mean_reward))
                print("Reward {} -> {} Model Saved".format(best_mean_reward, mean_reward))
            best_mean_reward = mean_reward
if mean_reward > MEAN_GOAL_REWARD:
print("SOLVED in {} obs".format(obs_id))
break
if len(buffer) < REPLAY_MIN_SIZE:
continue
if obs_id % SYNC_TARGET_OBS == 0:
tgt_net.load_state_dict(net.state_dict())
optimizer.zero_grad()
batch = buffer.sample(BATCH_SIZE)
loss_t = cal_loss(batch,net,tgt_net,device= device)
loss_t.backward()
optimizer.step()
# + colab={} colab_type="code" id="jCgBTTCgehVC"
|
Homework-Assignments/Week 5 - Lunar Lander (Deep Q Learning)/Lunar Lander (Deep Q Learning).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# # Data Preparation
# ## Importing a dataset
dataset = pd.read_csv('Social_Network_Ads.csv')
X = dataset.iloc[:, [2, 3]].values
y = dataset.iloc[:, 4].values
# ## Splitting the dataset into a Training set and a Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.25, random_state = 0)
# ## Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# # Modeling
# ## Fitting Logistic Regression to the Training set
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)
# ## Predicting the Test set results
y_pred = classifier.predict(X_test)
# ## Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
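# From the 2x2 confusion matrix, the usual summary metrics follow directly. A sketch with a hypothetical matrix (rows = actual, columns = predicted):

```python
import numpy as np

# Hypothetical 2x2 confusion matrix: rows = actual class, columns = predicted class.
cm = np.array([[65, 3],
               [8, 24]])

tn, fp, fn, tp = cm.ravel()
accuracy = (tp + tn) / cm.sum()
precision = tp / (tp + fp)
recall = tp / (tp + fn)
```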
# ## Visualising results
from matplotlib.colors import ListedColormap
def draw(X_set, y_set, title):
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha=0.5, cmap=ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c=ListedColormap(('red', 'green'))(i), label=j)
plt.title(title)
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
draw(X_train, y_train, 'Logistic Regression (Training set)')
draw(X_test, y_test, 'Logistic Regression (Test set)')
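# The `draw` helper builds a dense grid with `np.meshgrid`, flattens it so the classifier can score every point, then reshapes the predictions back onto the grid. The round-trip can be checked on its own (the thresholding below is just a stand-in for `classifier.predict`):

```python
import numpy as np

x1, x2 = np.meshgrid(np.arange(0.0, 1.0, 0.25), np.arange(0.0, 1.0, 0.5))
grid = np.array([x1.ravel(), x2.ravel()]).T  # one (x1, x2) row per grid point

# Stand-in classifier: label is 1 where x1 + x2 > 0.5.
labels = (grid.sum(axis=1) > 0.5).astype(int).reshape(x1.shape)
```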
|
Classification jupyter notebooks/s12_logistic_regression.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/valentinas0515/my_first_repo/blob/main/text_sumarizier_SOLVED.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="SmVEXRTA1wZn"
# # Text Summarisation
#
# There’s sooo much content to take in these days.
#
# * Blog posts
# * YouTube
# * Podcasts
# * Reports
#
# Wouldn't it be great to be able to summarise it all quickly?
#
# Using Hugging Face Transformers you can leverage a pre-trained summarisation pipeline to start summarising content.
#
# In this notebook you'll go through:
# 1. Installing Hugging Face Transformers
# 2. Building a summarization pipeline
# 3. Running an encoder-decoder model for summarization
#
#
# # Install Dependencies
# + id="3_gjhmg416Jw" colab={"base_uri": "https://localhost:8080/"} outputId="4c0185af-86b9-4bc6-d603-c26d4f6f3759"
# !pip install transformers
# + [markdown] id="KeuHh5231_RR"
# # Import Libraries
# + id="MQhdl-C218tZ"
from transformers import pipeline
# + [markdown] id="TV-sY9LD2Ilj"
# # Load Summarization Pipeline
# + id="oJuFb85_2GiX" colab={"base_uri": "https://localhost:8080/", "height": 194, "referenced_widgets": ["db10df0ec6e546f9b2f53fba30614e52", "0dd670c7094c4bfc8c94c7fa072abe02", "<KEY>", "4aa9e7925ebc40e7913e8a0f61ebb67a", "8952f6d420594a8391e24adb7a4d1343", "5b22e232e89b4242b57491f923816ad5", "fe902cc92afe47e2a500cfd105808d8e", "72eaf26ebacd46f081be3ba0eb9676dc", "<KEY>", "86087ae29ca748a0b46a2682f2e80747", "<KEY>", "<KEY>", "02062286a19145c0a8160dc7bea1459b", "a904bdf5d7b9467e939a94784ca194f4", "<KEY>", "f9bebe0746164d13912644bcdcbe6a03", "<KEY>", "<KEY>", "<KEY>", "1126e2ed9f814ad995310234fd5e8d7c", "<KEY>", "51b0e4f7677e4e788183d20eb539e676", "9980dd04fd604a228587dea8726f9803", "<KEY>", "80440952dcea46dc98cfcc56042a3393", "810d09dcef7d4e95ba7d85dafdc28cc3", "<KEY>", "a80202e9bedb40a2859fd2969c271ef6", "<KEY>", "bac9a89a0fec4120ab90ec35fa3a1294", "<KEY>", "39224713fb9a47b59c2e39d4afeb4a81", "<KEY>", "c66a996e754f41f7a6437a7928ad0bda", "93b493a8de3244edba7834011d30fb73", "8dbabf8b909d41bca9fe8edc99cef9e2", "<KEY>", "<KEY>", "f4d9a350cee747f3aecfa1b5e6902d0f", "<KEY>", "3c505a541a03471c8e514804d1bd2f0a", "fe9e1629c8ab4532a1ba891cb380a253", "bfbce16c5a8846948702f5dbe5e2578a", "7c3dee171a774c71a588b8ee5c6d808e", "29858fe79f1a4798bdaff0fa7f81229c", "<KEY>", "1806e76f36104ce5a3ea1764c652eee0", "<KEY>", "<KEY>", "<KEY>", "e760d9d7333343eba7cea1edd8155c48", "9b7e7286623146c180fb680ee127a053", "<KEY>", "c3b98a5117214c95844e6300bfef9820", "f34cf7e630c541da98bad19ac07fdd54"]} outputId="1cebac38-7909-4017-883e-0058278a458c"
summary_pipeline = pipeline("summarization")
# + [markdown] id="X-M7tBCg2ZxO"
# # Get text
# + id="l2U_8Rzq2ez6"
article = '''
A lack of transparency and reporting standards in the scientific community has led to increasing and widespread
concerns relating to reproduction and integrity of results. As an omics science, which generates vast amounts of data and
relies heavily on data science for deriving biological meaning, metabolomics is highly vulnerable to irreproducibility. The
metabolomics community has made substantial efforts to align with FAIR data standards by promoting open data formats,
data repositories, online spectral libraries, and metabolite databases. Open data analysis platforms also exist; however,
they tend to be inflexible and rely on the user to adequately report their methods and results. To enable FAIR data science
in metabolomics, methods and results need to be transparently disseminated in a manner that is rapid, reusable, and fully
integrated with the published work. To ensure broad use within the community such a framework also needs to be inclusive
and intuitive for both computational novices and experts alike.
'''
# + colab={"base_uri": "https://localhost:8080/"} id="Sk70O2qr2jZh" outputId="f5511376-4e6f-43d3-fed9-357ffa2c485b"
summary_pipeline(article, max_length = 70, min_length = 20)
# + id="2tKychwa2mui" colab={"base_uri": "https://localhost:8080/"} outputId="c23281ae-51f6-4cfa-bfbc-ed653be73ac0"
help(summary_pipeline)
|
text_sumarizier_SOLVED.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import os
import codecs
import json
import random
home_path = os.path.expanduser('~')
root_dir = home_path + "/origin_neural-belief-tracker"
# -
# ## Ontology
# +
ontology_file = root_dir + "/ontologies/ontology_dstc2_en.json"
ontologies = json.load(codecs.open(ontology_file, 'r', 'cp949', 'ignore'))
actions = ''
slots = ''
for action in ontologies:
actions += action + ', '
for slot in ontologies['requestable']:
slots += slot + ', '
print "action set: " + actions
print "slot set: " + slots
# -
# ## Training Data
# +
training_file = root_dir+'/data/woz/woz_train_en.json'
#test_data = 'woz_test_en.json'
#validate_data = 'woz_validate_en.json'
total_chat_idx = 0
user_chat_idx = 0
sys_chat_idx = 0
dialogues = json.load(codecs.open(training_file, 'r', 'cp949', 'ignore'))
for data in dialogues:
    idx = data["dialogue_idx"]
    dial_list = data['dialogue']
    for dial_text in dial_list:
        system_transcript = dial_text['system_transcript']  # system utterance
        transcript = dial_text['transcript']  # user utterance
        belief_states = dial_text['belief_state']
        if system_transcript:
            total_chat_idx = total_chat_idx + 1
            sys_chat_idx = sys_chat_idx + 1
        if transcript:
            total_chat_idx = total_chat_idx + 1
            user_chat_idx = user_chat_idx + 1
# -
# ### Number of utterances in the training data
print("total dialogue set: " + str(len(dialogues)))
print ("total: " + str(total_chat_idx))  # total number of utterances
print ("user: " + str(user_chat_idx))  # number of user utterances
print ("system: " + str(sys_chat_idx))  # number of system utterances
# ### What does the training data consist of?
# +
index = random.randrange(0, len(dialogues))  # random integer
print('==================== ' + str(index) + 'th dialogue' + ' ====================')
dial_list = dialogues[index]['dialogue']
for dial_text in dial_list:
    system_transcript = dial_text['system_transcript']  # system utterance
    transcript = dial_text['transcript']  # user utterance
    belief_states = dial_text['belief_state']
    if system_transcript:
        print('sys: ' + system_transcript)
    if transcript:
        print('user: ' + transcript)
    if belief_states:
        for belief_state in belief_states:
            print(belief_state)
|
.ipynb_checkpoints/data_statistic-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import numpy as np
import simpy
from collections import OrderedDict
from collections import Counter
processList = list()
class AMIDOLRateReward():
def __init__(self):
        self.rewards = dict()
    def getName(self):
        return("GenericRateReward")
def accumulateReward(self, env, params):
return(0.0)
def getDelay(self, params):
return(simpy.core.Infinity)
def isEnabled(self, params):
return(True)
def simpyProcess(self, env, params):
while(True):
try:
if(self.isEnabled(params)):
yield env.timeout(self.getDelay(params))
self.accumulateReward(env, params)
else:
yield env.timeout(simpy.core.Infinity)
except simpy.Interrupt as i:
continue
print(self.getName() + " terminating.")
class AMIDOLEvent():
def getName(self):
return("GenericEvent")
def getRate(self, params):
return(1.0)
def getDelay(self, params):
delay = np.random.exponential(self.getRate(params))
return(delay)
def isEnabled(self, params):
return(True)
def fireEvent(self, params):
return(params)
def reactivation(self, env):
global processList
for process in processList:
if (process != env.active_process):
process.interrupt()
def simpyProcess(self, env, params):
while(True):
try:
if(self.isEnabled(params)):
yield env.timeout(self.getDelay(params))
self.fireEvent(params)
else:
yield env.timeout(simpy.core.Infinity)
self.reactivation(env)
except simpy.Interrupt as i:
continue
print(self.getName() + " terminating.")
# +
class AMIDOLParameters():
def __init__(self):
self.S_Pcinfect_S = 51999999
self.ScinfectcI_Scinfect_IcI_Pccure_I = 1
self.ScinfectcIccure_RcR_P = 0
self.beta = 1.0/3.0*1.24
self.gamma = 1.0/3.0
class infectEvent(AMIDOLEvent):
def getName(self):
return("InfectEvent")
def getRate(self, v):
rate = 1.0 / (v.beta*v.S_Pcinfect_S*v.ScinfectcI_Scinfect_IcI_Pccure_I/(v.S_Pcinfect_S+v.ScinfectcI_Scinfect_IcI_Pccure_I+v.ScinfectcIccure_RcR_P))
return(rate)
def isEnabled(self, v):
return((v.beta*v.S_Pcinfect_S * v.ScinfectcI_Scinfect_IcI_Pccure_I) > 0.0)
def fireEvent(self, v):
v.S_Pcinfect_S -= 1.0
v.ScinfectcI_Scinfect_IcI_Pccure_I += 1.0
class cureEvent(AMIDOLEvent):
def getName(self):
return("CureEvent")
def getRate(self, v):
rate = 1.0/(v.gamma*v.ScinfectcI_Scinfect_IcI_Pccure_I)
return(rate)
def isEnabled(self, v):
return(v.gamma*v.ScinfectcI_Scinfect_IcI_Pccure_I > 0.0)
def fireEvent(self, v):
v.ScinfectcI_Scinfect_IcI_Pccure_I -= 1.0
v.ScinfectcIccure_RcR_P += 1.0
class rvIRateReward(AMIDOLRateReward):
def __init__(self):
self.rewards = OrderedDict()
self.samplePoints = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0, 60.0, 65.0, 70.0, 75.0, 80.0, 85.0, 90.0, 95.0, 100.0]
self.delays = list()
self.delays.append(self.samplePoints[0])
idx = 1
lastX = self.samplePoints[0]
for x in self.samplePoints[1:]:
self.delays.append(x - lastX)
lastX = x
def accumulateReward(self, env, params):
self.rewards[env.now] = params.ScinfectcI_Scinfect_IcI_Pccure_I
def getDelay(self, params):
if (self.delays):
return(self.delays.pop(0))
else:
return(simpy.core.Infinity)
rvICounter = Counter()
maxRuns = 100
for trace in range(0, maxRuns):
params = AMIDOLParameters()
cure = cureEvent()
infect = infectEvent()
rvI = rvIRateReward()
env = simpy.Environment()
processList = []
cureProcess = env.process(cure.simpyProcess(env, params))
processList.append(cureProcess)
infectProcess = env.process(infect.simpyProcess(env, params))
processList.append(infectProcess)
rvIProcess = env.process(rvI.simpyProcess(env, params))
env.run(until=rvI.samplePoints[-1])
    results = {k: v / maxRuns for k, v in rvI.rewards.items()}
rvICounter = Counter(results) + rvICounter
rvICounter
# -
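# For comparison, the same S -> I -> R dynamics can be simulated without simpy using a direct (Gillespie) stochastic simulation sketch; the rate forms mirror `infectEvent` and `cureEvent` above, but the population is scaled down so it runs quickly:

```python
import numpy as np

def gillespie_sir(S, I, R, beta, gamma, t_max, rng):
    """Direct-method SSA sketch of S -> I -> R (not the simpy model above)."""
    N = S + I + R
    t = 0.0
    while t < t_max and I > 0:
        infect_rate = beta * S * I / N     # same form as infectEvent's rate
        cure_rate = gamma * I              # same form as cureEvent's rate
        total = infect_rate + cure_rate
        t += rng.exponential(1.0 / total)  # time to the next event
        if rng.random() < infect_rate / total:
            S, I = S - 1, I + 1            # infection
        else:
            I, R = I - 1, R + 1            # recovery
    return S, I, R

rng = np.random.default_rng(42)
S, I, R = gillespie_sir(S=990, I=10, R=0, beta=1.24 / 3, gamma=1 / 3,
                        t_max=100.0, rng=rng)
```

The total population is conserved by construction, which is a useful sanity check on any such event-driven model.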
counter = Counter()
for trace in range(0, 100):
    env.run(until=rvI.samplePoints[-1])
    results = {k: v / 100 for k, v in rvI.rewards.items()}
    counter = Counter(results) + counter
|
docs-static/SIRS/steel-thread/tests/.ipynb_checkpoints/NewSim SIRS Test-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import tensorflow as tf
import tensorflow.contrib.slim as slim
from tqdm import tqdm
import os
from scipy.misc import imread,imresize
import inception_resnet_v2
checkpoint = 'inception_resnet_v2_2016_08_30.ckpt'
img_size = inception_resnet_v2.inception_resnet_v2.default_image_size
img_size
batch_size = 8
learning_rate = 1e-3
classes = 196
# +
tf.reset_default_graph()
sess = tf.InteractiveSession()
X = tf.placeholder(tf.float32,[None,img_size, img_size, 3])
Y = tf.placeholder(tf.int32, [None])
images = tf.map_fn(lambda image: tf.image.per_image_standardization(image), X)
with slim.arg_scope(inception_resnet_v2.inception_resnet_v2_arg_scope()):
logits, endpoints = inception_resnet_v2.inception_resnet_v2(images)
logits = tf.layers.dense(logits, classes)
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=Y, logits=logits)
cost = tf.reduce_mean(cross_entropy)
accuracy = tf.reduce_mean(tf.cast(tf.nn.in_top_k(logits, Y, 1), tf.float32))
global_step = tf.Variable(0, name="global_step", trainable=False)
tf.summary.scalar("total_loss", cost)
tf.summary.scalar("accuracy", accuracy)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost,global_step=global_step)
sess.run(tf.global_variables_initializer())
var_lists = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope = 'InceptionResnetV2')
saver = tf.train.Saver(var_list = var_lists)
saver.restore(sess, checkpoint)
# -
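# `tf.image.per_image_standardization` (applied via `tf.map_fn` above) rescales each image to zero mean and unit variance, flooring the standard deviation at `1/sqrt(num_elements)` to avoid dividing by zero on uniform images; a numpy sketch of that behaviour:

```python
import numpy as np

def per_image_standardization(img):
    """Numpy sketch of tf.image.per_image_standardization: zero mean,
    unit variance, with the stddev floored at 1/sqrt(num_elements)."""
    img = img.astype(np.float64)
    n = img.size
    mean = img.mean()
    adjusted_std = max(img.std(), 1.0 / np.sqrt(n))
    return (img - mean) / adjusted_std

img = np.arange(2 * 2 * 3, dtype=np.float64).reshape(2, 2, 3)
out = per_image_standardization(img)
```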
|
grab-aiforsea/computer-vision/transfer-learning.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# source: https://feisky.xyz/machine-learning/neural-networks/active.html#%E6%BF%80%E6%B4%BB%E5%87%BD%E6%95%B0
# ## Activation function
# 
# If a neural network uses no activation function, each layer's output is just a linear combination of the previous layer's inputs (i.e. a matrix multiplication),
# so the output remains a linear function of the input and a deep network loses its purpose.
#
# * Mathematical meaning: an activation function maps a linear function into a non-linear range
# * Physical meaning: it accentuates feature values
# ## Sigmoid
# * Sigmoid maps a real number into the interval (0, 1), so it can be used for binary classification
# * Drawbacks of sigmoid:
# the activation is expensive to compute (it involves an exponential), and backpropagating the error gradient involves division;
# in deep networks, sigmoid easily causes vanishing gradients during backpropagation
# (near its saturation regions the function changes very slowly and the derivative approaches 0, which loses information), preventing deep networks from training
# 
# 
# ## Tanh
# * Also called the hyperbolic tangent function; its range is [-1, 1]
# * tanh works well when features differ markedly, and it keeps amplifying feature effects as training iterates
# 
# 
# ## ReLU
# * Rectified Linear Unit (ReLU)
# * Biologically motivated: in 2001, neuroscientists Dayan and Abbott modelled more precisely how brain neurons are activated by incoming signals
# * ReLU is cheap to compute
# * Some neurons output 0, which makes the network sparse, reduces interdependence among parameters, and alleviates overfitting
# 
# 
# 
# ## Softmax
# * The softmax function maps a K-dimensional real vector to another K-dimensional real vector whose elements all lie in (0, 1).
# * Commonly used for multi-class classification
# * Suited to the output layer
# 
|
Activation_function_Introduction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# %matplotlib inline
# +
import warnings
warnings.filterwarnings("ignore")
from datetime import datetime
import math
import matplotlib as mpl
import matplotlib.pyplot as plt
import os
import numpy as np
import pandas as pd
import seaborn as sns
mpl.rcParams['figure.figsize'] = [8, 5]
# -
# ## [HPSC COVID-19 14-day Epidemiology Reports](https://www.hpsc.ie/a-z/respiratory/coronavirus/novelcoronavirus/surveillance/covid-1914-dayepidemiologyreports/)
#
# This notebook uses data copied from the daily [HPSC COVID-19 14-day Epidemiology Reports](https://www.hpsc.ie/a-z/respiratory/coronavirus/novelcoronavirus/surveillance/covid-1914-dayepidemiologyreports/), from 2021-01-01 to 2021-05-13 (see the [generated CSV file](https://github.com/derekocallaghan/covid19data/tree/main/notebooks/data/HSPC_COVID_Epidmiology_14Day_Report.csv)). It is currently used to perform some exploratory analysis of age group hospitalisation.
df = pd.read_csv('./data/HSPC_COVID_Epidmiology_14Day_Report.csv', sep=" ", skiprows=13, parse_dates=["Date"], date_parser=lambda x: datetime.strptime(x, "%Y-%m-%d"))
df
# ### There appears to be an issue with the "Cases hospitalised (%)" for 2021-03-17 (total > 100%), so it's excluded for now. It has been confirmed that the [original report](https://www.hpsc.ie/a-z/respiratory/coronavirus/novelcoronavirus/surveillance/covid-1914-dayepidemiologyreports/COVID-19%2014%20day%20epidemiology%20report_20210317%20website%20final.pdf) also contains this issue.
df[df.Date=='2021-03-17']
df = df[df.Date!='2021-03-17']
# ### Although there has been an increase in % hospitalisation for age groups 0-18 since approximately end of January, this is likely to be related to the corresponding decrease for age groups 75+.
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (df["Age Group (years)"].isin(['0-4', '5-12', '13-18', '19-24', '75-84', '85+']))], x='Date', y='Cases hospitalised (%)', hue='Age Group (years)')
ax.set_title('HPSC 14-day reports: Cases hospitalised (%) 2021, Age 0-24, 75+')
plt.xticks(rotation=30);
# ### Corresponding total cases hospitalised for these age groups
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (df["Age Group (years)"].isin(['0-4', '5-12', '13-18', '19-24', '75-84', '85+']))], x='Date', y='Cases hospitalised (n)', hue='Age Group (years)')
ax.set_title('HPSC 14-day reports: Cases hospitalised (n) 2021, Age 0-24, 75+')
plt.xticks(rotation=30);
# ### Corresponding total cases hospitalised for age 0-18
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (df["Age Group (years)"].isin(['0-4', '5-12', '13-18']))], x='Date', y='Cases hospitalised (n)', hue='Age Group (years)')
ax.set_title('HPSC 14-day reports: Cases hospitalised (n) 2021, Age 0-18')
plt.xticks(rotation=30);
# ### The "*Cases hospitalised (%)*" column in the source data is the percentage for a particular age group of the total cases hospitalised. Here, the percentage of the corresponding age group "*Number of cases (n)*" is calculated.
df['AgeGroupHospPerc'] = df['Cases hospitalised (n)']*100/df['Number of cases (n)']
df.head()
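# The per-age-group percentage and the fortnightly resampling used below can be exercised on a tiny synthetic frame (the numbers are hypothetical; column names follow the HPSC CSV):

```python
import pandas as pd

toy = pd.DataFrame({
    "Date": pd.to_datetime(["2021-01-04", "2021-01-11", "2021-01-04", "2021-01-11"]),
    "Age Group (years)": ["0-4", "0-4", "85+", "85+"],
    "Number of cases (n)": [200, 100, 50, 40],
    "Cases hospitalised (n)": [2, 1, 20, 10],
})

# Percentage of each age group's own cases that were hospitalised.
toy["AgeGroupHospPerc"] = toy["Cases hospitalised (n)"] * 100 / toy["Number of cases (n)"]

# Fortnightly mean per age group, as in the plots below.
fortnightly = (toy.groupby("Age Group (years)")
                  .resample("2W-MON", on="Date")["AgeGroupHospPerc"]
                  .mean())
```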
# ### Daily/fortnightly mean hospitalised/cases percentage for 0-18, 75+
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (df["Age Group (years)"].isin(['0-4', '5-12', '13-18', '75-84', '85+']))], x='Date', y='AgeGroupHospPerc', hue='Age Group (years)')
ax.set_ylabel('% Hospitalisation/Cases')
ax.set_title('HPSC 14-day reports: Daily Hospitalisation/Cases (%) 2021, Age 0-18, 75+')
plt.xticks(rotation=30);
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (df["Age Group (years)"].isin(['0-4', '5-12', '13-18', '75-84', '85+']))].groupby('Age Group (years)').resample('2W-MON', on='Date').mean().reset_index(), x='Date', y='AgeGroupHospPerc', hue='Age Group (years)')
ax.set_ylabel('% Hospitalisation/Cases')
ax.set_title('HPSC 14-day reports: Fortnightly Mean Hospitalisation/Cases (%) 2021, Age 0-18, 75+')
plt.xticks(rotation=30);
# ### Daily/14D/fortnightly mean hospitalised/cases percentage for 0-24
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (df["Age Group (years)"].isin(['0-4', '5-12', '13-18', '19-24']))], x='Date', y='AgeGroupHospPerc', hue='Age Group (years)')
ax.set_ylabel('% Hospitalisation/Cases')
ax.set_title('HPSC 14-day reports: Daily Hospitalisation/Cases (%) 2021, Age 0-24')
plt.xticks(rotation=30);
rdf = df[(df.Date>='2021-01-01')].copy()
rdf.Date = rdf.Date.dt.round('14D')
ax=sns.lineplot(data=rdf[(rdf["Age Group (years)"].isin(['0-4', '5-12', '13-18', '19-24']))], x='Date', y='AgeGroupHospPerc', hue='Age Group (years)')
ax.set_ylim((0,6.3))
ax.set_ylabel('% Hospitalisation/Cases')
ax.set_title('HPSC 14-day reports: 14D (Mean, 95% CI) Hospitalisation/Cases (%) 2021, Age 0-24')
plt.legend(loc='upper left')
plt.xticks(rotation=30);
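# The `dt.round('14D')` call above snaps each date to the nearest 14-day boundary, counted from the Unix epoch, so nearby report dates fall into the same fortnight bin. A minimal self-contained sketch with toy dates (not taken from the report data):

```python
import pandas as pd

# Toy dates, purely illustrative
dates = pd.Series(pd.to_datetime(['2021-01-03', '2021-01-10', '2021-02-01']))
rounded = dates.dt.round('14D')

# Every rounded timestamp is an exact multiple of 14 days from the epoch
ns_per_14d = 14 * 24 * 3600 * 10**9
print(rounded)
```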
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (df["Age Group (years)"].isin(['0-4', '5-12', '13-18', '19-24']))].groupby('Age Group (years)').resample('2W-MON', on='Date').mean().reset_index(), x='Date', y='AgeGroupHospPerc', hue='Age Group (years)', hue_order=['0-4', '5-12', '13-18', '19-24'])
ax.set_ylim((0,6.3))
ax.set_ylabel('% Hospitalisation/Cases')
ax.set_title('HPSC 14-day reports: Fortnightly Mean Hospitalisation/Cases (%) 2021, Age 0-24')
plt.xticks(rotation=30);
# ### Daily/14D/fortnightly mean hospitalised/cases percentage for all ages
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (~df["Age Group (years)"].isin(['Unknown']))], x='Date', y='AgeGroupHospPerc', hue='Age Group (years)', hue_order=['0-4', '5-12', '13-18', '19-24', '25-34', '35-44', '45-54', '55-64', '65-74', '75-84', '85+'])
ax.set_ylabel('% Hospitalisation/Cases')
ax.set_title('HPSC 14-day reports: Daily Hospitalisation/Cases (%) 2021, All Ages (from HPSC 14-day totals)')
plt.xticks(rotation=30);
ax=sns.lineplot(data=rdf[(~rdf["Age Group (years)"].isin(['Unknown']))], x='Date', y='AgeGroupHospPerc', hue='Age Group (years)', hue_order=['0-4', '5-12', '13-18', '19-24', '25-34', '35-44', '45-54', '55-64', '65-74', '75-84', '85+'])
ax.set_ylabel('% Hospitalisation/Cases')
ax.set_title('HPSC 14-day reports: 14D (Mean, 95% CI) Hospitalisation/Cases (%) 2021, All Ages')
plt.legend(loc='upper left')
plt.xticks(rotation=30);
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (~df["Age Group (years)"].isin(['Unknown']))].groupby('Age Group (years)').resample('2W-MON', on='Date').mean().reset_index(), x='Date', y='AgeGroupHospPerc', hue='Age Group (years)', hue_order=['0-4', '5-12', '13-18', '19-24', '25-34', '35-44', '45-54', '55-64', '65-74', '75-84', '85+'])
ax.set_ylabel('% Hospitalisation/Cases')
ax.set_title('HPSC 14-day reports: Fortnightly Mean Hospitalisation/Cases (%) 2021, All Ages (from HPSC 14-day totals)')
plt.xticks(rotation=30);
# ### 14-day total cases hospitalised for 0-24, all ages
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (df["Age Group (years)"].isin(['0-4', '5-12', '13-18', '19-24']))], x='Date', y='Cases hospitalised (n)', hue='Age Group (years)')
ax.set_title('HPSC 14-day totals: Cases hospitalised (n) 2021, Age 0-24')
plt.xticks(rotation=30);
ax=sns.lineplot(data=df[(df.Date>='2021-01-01') & (~df["Age Group (years)"].isin(['Unknown']))], x='Date', y='Cases hospitalised (n)', hue='Age Group (years)', hue_order=['0-4', '5-12', '13-18', '19-24', '25-34', '35-44', '45-54', '55-64', '65-74', '75-84', '85+'])
ax.set_title('HPSC 14-day totals: Cases hospitalised (n) 2021, All Ages')
plt.xticks(rotation=30);
notebooks/HPSC COVID-19 14-day Epidemiology Reports.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# ---
# # Applying Customizations
import pandas as pd
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh', 'matplotlib')
# As introduced in the [Customization](../getting_started/2-Customization.ipynb) section of the 'Getting Started' guide, HoloViews maintains a strict separation between your content (your data and declarations about your data) and its presentation (the details of how this data is represented visually). This separation is achieved by maintaining sets of keyword values ("options") that specify how elements are to appear, stored outside of the element itself. Option keywords can be specified for individual element instances, for all elements of a particular type, or for arbitrary user-defined sets of elements that you give a certain ``group`` and ``label`` (see [Annotating Data](../user_guide/01-Annotating_Data.ipynb)).
#
# The options system controls how individual plots appear, but other important settings are made more globally using the "output" system, which controls HoloViews plotting and rendering code (see the [Plots and Renderers](Plots_and_Renderers.ipynb) user guide). In this guide we will show how to customize the visual styling with the options and output systems, focusing on the mechanisms rather than the specific choices available (which are covered in other guides such as [Style Mapping](04-Style_Mapping.ipynb)).
# ## Core concepts
#
# This section offers an overview of some core concepts for customizing visual representation, focusing on how HoloViews keeps content and presentation separate. To start, we will revisit the simple introductory example in the [Customization](../getting_started/2-Customization.ipynb) getting-started guide (which might be helpful to review first).
spike_train = pd.read_csv('../assets/spike_train.csv.gz')
# And now we display the ``curve`` and ``spikes`` elements together in a layout, as we did in the getting-started guide:
# +
curve = hv.Curve( spike_train, 'milliseconds', 'Hertz')
spikes = hv.Spikes(spike_train, 'milliseconds', [])
layout = curve + spikes
layout.opts(
opts.Curve( height=200, width=900, xaxis=None, line_width=1.50, color='red', tools=['hover']),
opts.Spikes(height=150, width=900, yaxis=None, line_width=0.25, color='grey')).cols(1)
# -
# This example illustrates a number of key concepts, as described below.
#
# ### Content versus presentation
#
# In the getting-started guide [Introduction](../getting_started/1-Introduction.ipynb), we saw that we can print the string representation of HoloViews objects such as `layout`:
print(layout)
# In the [Customization](../getting_started/2-Customization.ipynb) getting-started guide, the `.opts.info()` method was introduced that lets you see the options *associated* with (though not stored on) the objects:
layout.opts.info()
# If you inspect all the state of the `Layout`, `Curve`, or `Spikes` objects you will not find any of these keywords, because they are stored in an entirely separate data structure. HoloViews assigns a unique ID per HoloViews object that lets arbitrarily specific customization be associated with that object if needed, while also making it simple to define options that apply to entire classes of objects by type (or group and label if defined). The HoloViews element is thus *always* a thin wrapper around your data, without any visual styling information or plotting state, even though it *seems* like the object includes the styling information. This separation between content and presentation is by design, so that you can work with your data and with its presentation entirely independently.
#
# If you wish to clear the options that have been associated with an object `obj`, you can call `obj.opts.clear()`.
#
# ## Option builders
#
# The [Customization](../getting_started/2-Customization.ipynb) getting-started guide also introduces the notion of *option builders*. One of the option builders in the visualization shown above is:
opts.Curve( height=200, width=900, xaxis=None, line_width=1.50, color='red', tools=['hover'])
# An *option builder* takes a collection of keywords and returns an `Options` object that stores these keywords together. Why should you use option builders and how are they different from a vanilla dictionary?
#
# 1. The option builder specifies which type of HoloViews object the options are for, which is important because each type accepts different options.
# 2. Knowing the type, the options builder does *validation* against that type for the currently loaded plotting extensions. Try introducing a typo into one of the keywords above; you should get a helpful error message. Separately, try renaming `line_width` to `linewidth`, and you'll get a different message because the latter is a valid matplotlib keyword.
# 3. The option builder allows *tab-completion* in the notebook. This is useful for discovering available keywords for that type of object, which helps prevent mistakes and makes it quicker to specify a set of keywords.
#
# In the cell above, the specified options are applicable to `Curve` elements, and different validation and tab completion will be available for other types.
#
# The returned `Options` object is different from a dictionary in the following ways:
#
# 1. An optional *spec* is recorded, where this specification is normally just the element name. Above, this is simply 'Curve'. Later, in the section [Using `group` and `label`](#Using-group-and-label), we will see how this can also specify the `group` and `label`.
# 2. The keywords are alphanumerically sorted, making it easier to compare `Options` objects.
# ## Inlining options
#
# When customizing a single element, the use of an option builder is not mandatory. If you have a small number of keywords that are common (e.g. `color`, `cmap`, `title`, `width`, `height`), it can be clearer to inline them into the `.opts` method call if tab-completion and validation aren't required:
np.random.seed(42)
array = np.random.random((10,10))
im1 = hv.Image(array).opts(opts.Image(cmap='Reds')) # Using an option builder
im2 = hv.Image(array).opts(cmap='Blues') # Without an option builder
im1 + im2
# You cannot inline keywords for composite objects such as `Layout` or `Overlay` objects. For instance, the `layout` object is:
print(layout)
# To customize this layout, you need to use an option builder to associate your keywords with either the `Curve` or the `Spikes` object, or else you would have had to apply the options to the individual elements before you built the composite object. To illustrate setting by type, note that in the first example, both the `Curve` and the `Spikes` have different `height` values provided.
#
# You can also target options by the `group` and `label` as described in section on [using `group` and `label`](#Using-group-and-label).
#
# ## Session-specific options
#
# One other common need is to set some options for a Python session, whether using Jupyter notebook or not. For this you can set the default options that will apply to all objects created subsequently:
opts.defaults(
opts.HeatMap(cmap='Summer', colorbar=True, toolbar='above'))
# The `opts.defaults` method has now set the style used for all `HeatMap` elements in this session:
data = [(chr(65+i), chr(97+j), i*j) for i in range(5) for j in range(5) if i!=j]
heatmap = hv.HeatMap(data).sort()
heatmap
# ## Discovering options
#
# Using tab completion in the option builders is one convenient and easy way of discovering the available options for an element. Another approach is to use `hv.help`.
#
# For instance, if you run `hv.help(hv.Curve)` you will see a list of the 'style' and 'plot' options applicable to `Curve`. The distinction between these two types of options can often be ignored for most purposes, but the interested reader can read about them in more detail [below](#Split-into-style,-plot-and-norm-options).
#
# For the purposes of discovering the available options, the keywords listed under the 'Style Options' section of the help output are worth noting. These keywords are specific to the active plotting extension and are part of the API for that plotting library. For instance, with the Bokeh backend active, running `hv.help(hv.Curve)` gives you the keywords from the Bokeh documentation that you can reference for customizing the appearance of `Curve` objects.
#
#
# ## Maximizing readability
#
# There are many ways to specify options in your code using the above tools, but for creating readable, maintainable code, we recommend making the separation of content and presentation explicit. Someone reading your code can then understand your visualizations in two steps: 1) what your data *is*, in terms of the applicable elements and containers, and 2) how this data is to be presented visually.
#
# The following guide details the approach we have used throughout the examples and guides on holoviews.org. We have found that following these rules makes code involving HoloViews easier to read and more consistent.
#
# The core principle is as follows: ***avoid mixing declarations of data, elements and containers with details of their visual appearance***.
#
# ### Two contrasting examples
#
# One of the best ways to do this is to declare all your elements, compose them and then apply all the necessary styling with the `.opts` method before the visualization is rendered to disk or to the screen. For instance, the example from the getting-started guide could have been written sub-optimally as follows:
#
# ***Sub-optimal***
# ```python
# curve = hv.Curve( spike_train, 'milliseconds', 'Hertz').opts(
# height=200, width=900, xaxis=None, line_width=1.50, color='red', tools=['hover'])
# spikes = hv.Spikes(spike_train, 'milliseconds', vdims=[]).opts(
# height=150, width=900, yaxis=None, line_width=0.25, color='grey')
# (curve + spikes).cols(1)
# ```
#
# Code like that is very difficult to read because it mixes declarations of the data and its dimensions with details about how to present it. The recommended version declares the `Layout`, then separately applies all the options together where it's clear that they are just hints for the visualization:
#
# ***Recommended***
# ```python
# curve = hv.Curve( spike_train, 'milliseconds', 'Hertz')
# spikes = hv.Spikes(spike_train, 'milliseconds', [])
# layout = curve + spikes
#
# layout.opts(
# opts.Curve( height=200, width=900, xaxis=None, line_width=1.50, color='red', tools=['hover']),
# opts.Spikes(height=150, width=900, yaxis=None, line_width=0.25, color='grey')).cols(1)
# ```
#
#
# By grouping the options in this way and applying them at the end, you can see the definition of `layout` without being distracted by visual concerns declared later. Conversely, you can modify the visual appearance of `layout` easily without needing to know exactly how it was defined. The [coding style guide](#Coding-style-guide) section below offers additional advice for keeping things readable and consistent.
#
# ### When to use multiple `.opts` calls
#
# The above coding style applies in many cases, but sometimes you have multiple elements of the same type that you need to distinguish visually. For instance, you may have a set of curves where using the `dim` or `Cycle` objects (described in the [Style Mapping](04-Style_Mapping.ipynb) user guide) is not appropriate and you want to customize the appearance of each curve individually. Alternatively, you may be generating elements in a list comprehension for use in `NdOverlay` and have a specific style to apply to each one.
#
# In these situations, it is often appropriate to use the inline style of `.opts` locally. In these instances, it is often best to give the individually styled objects a suitable named handle as illustrated by the [legend example](../gallery/demos/bokeh/legend_example.ipynb) of the gallery.
#
# ### General advice
#
# As HoloViews is highly compositional by design, you can always build long expressions mixing the data and element declarations, the composition of these elements, and their customization. Even though such expressions can be terse, they can also be difficult to read.
#
# The simplest way to avoid long expressions is to keep some level of separation between these stages:
#
# 1. declaration of the data
# 2. declaration of the elements, including `.opts` to distinguish between elements of the same type if necessary
# 3. composition with `+` and `*` into layouts and overlays, and
# 4. customization of the composite object, either with a final call to the `.opts` method, or by declaring such settings as the default for your entire session as described [above](#Session-specific-options).
#
# When stages are simple enough, it can be appropriate to combine them. For instance, if the declaration of the data is simple enough, you can fold in the declaration of the element. In general, any expression involving three or more of these stages will benefit from being broken up into several steps.
#
# These general principles will help you write more readable code. Maximizing readability will always require some level of judgement, but you can maximize consistency by consulting the [coding style guide](#Coding-style-guide) section for more tips.
# ## Customizing display output
#
#
# The options system controls most of the customizations you might want to do, but there are a few settings that are controlled at a more general level that cuts across all HoloViews object types: the active plotting extension (e.g. Bokeh or Matplotlib), the output display format (PNG, SVG, etc.), the output figure size, and other similar options. The `hv.output` utility allows you to modify these more global settings, either for all subsequent objects or for one particular object:
#
# * `hv.output(**kwargs)`: Customize how the output appears for the rest of the notebook session.
# * `hv.output(obj, **kwargs)`: Temporarily affect the display of an object `obj` using the keyword `**kwargs`.
#
# The `hv.output` utility only has an effect in contexts where HoloViews objects can be automatically displayed, which currently is limited to the Jupyter Notebook (in either its classic or JupyterLab variants). In any other Python context, using `hv.output` has no effect, as there is no automatically displayed output; see the [hv.save() and hv.render()](Plots_and_Renderers.ipynb#Saving-and-rendering) utilities for explicitly creating output in those other contexts.
#
# To start with `hv.output`, let us define a `Path` object:
# +
lin = np.linspace(0, np.pi*2, 200)
def lissajous(t, a, b, delta):
return (np.sin(a * t + delta), np.sin(b * t), t)
path = hv.Path([lissajous(lin, 3, 5, np.pi/2)])
path.opts(opts.Path(color='purple', line_width=3, line_dash='dotted'))
# -
# Now, to illustrate, let's use `hv.output` to switch our plotting extension to matplotlib:
hv.output(backend='matplotlib', fig='svg')
# We can now display our `path` object with some option customization:
path.opts(opts.Path(linewidth=2, color='red', linestyle='dotted'))
# Our plot is now rendered with Matplotlib, in SVG format (try right-clicking the image in the web browser and saving it to disk to confirm). Note that the `opts.Path` option builder now tab completes *Matplotlib* keywords because we activated the Matplotlib plotting extension beforehand. Specifically, `linewidth` and `linestyle` don't exist in Bokeh, where the corresponding options are called `line_width` and `line_dash` instead.
#
# You can see the custom output options that are currently active using `hv.output.info()`:
hv.output.info()
# The info method will always show which backend is active as well as any other custom settings you have specified. These settings apply to the subsequent display of all objects unless you customize the output display settings for a single object.
#
#
# To illustrate how settings are kept separate, let us switch back to Bokeh in this notebook session:
hv.output(backend='bokeh')
hv.output.info()
# With Bokeh active, we can now declare options on `path` that we want to apply only to matplotlib:
path = path.opts(
opts.Path(linewidth=3, color='blue', backend='matplotlib'))
path
# Now we can supply `path` to `hv.output` to customize how it is displayed, while activating matplotlib to generate that display. In the next cell, we render our path at 50% size as an SVG using matplotlib.
hv.output(path, backend='matplotlib', fig='svg', size=50)
# Passing `hv.output` an object will apply the specified settings only for the subsequent display. If you were to view `path` now in the usual way, you would see that it is still being displayed with Bokeh with purple dotted lines.
#
# One thing to note is that when we set the options with `backend='matplotlib'`, the active plotting extension was Bokeh. This means that `opts.Path` will tab complete *bokeh* keywords, and not the matplotlib ones that were specified. In practice you will want to set the backend appropriately before building your options settings, to ensure that you get the most appropriate tab completion.
# ### Available `hv.output` settings
#
# You can see the available settings using `help(hv.output)`. For reference, here are the most commonly used ones:
#
# * **backend**: *The backend used by HoloViews*. If the necessary libraries are installed this can be `'bokeh'`, `'matplotlib'` or `'plotly'`.
# * **fig** : *The static figure format*. The most common options are `'svg'` and `'png'`.
# * **holomap**: *The display type for holomaps*. With matplotlib and the necessary support libraries, this may be `'gif'` or `'mp4'`. The JavaScript `'scrubber'` widgets as well as the regular `'widgets'` are always supported.
# * **fps**: *The frames per second used for animations*. This setting is used for GIF output and by the scrubber widget.
# * **size**: *The percentage size of displayed output*. Useful for making all display larger or smaller.
# * **dpi**: *The rendered dpi of the figure*. This setting affects raster output such as PNG images.
#
# In `help(hv.output)` you will see a few other, less common settings. The `filename` setting in particular is not recommended and will be deprecated in favor of `hv.save` in the future.
# ## Coding style guide
#
# Using `hv.output` plus option builders with the `.opts` method and `opts.defaults` covers the functionality required for most HoloViews code written by users. In addition to these recommended tools, HoloViews supports [Notebook Magics](Notebook_Magics.ipynb) (not recommended because they are Jupyter-specific) and literal (nested dictionary) formats useful for developers, as detailed in the [Extending HoloViews](#Extending-HoloViews) section.
#
# This section offers further recommendations for how users can structure their code. These are generally tips based on the important principles described in the [maximizing readability](#Maximizing-readability) section that are often helpful but optional.
#
# * Use as few `.opts` calls as necessary to style the object the way you want.
# * You can inline keywords without an option builder if you only have a few common keywords. For instance, `hv.Image(...).opts(cmap='Reds')` is clearer to read than `hv.Image(...).opts(opts.Image(cmap='Reds'))`.
# * Conversely, you *should* use an option builder if you have more than four keywords.
# * When you have multiple option builders, it is often clearest to list them on separate lines with a single indentation, in both `.opts` and `opts.defaults`:
#
# **Not recommended**
#
# ```
# layout.opts(opts.VLine(color='white'), opts.Image(cmap='Reds'), opts.Layout(width=500), opts.Curve(color='blue'))
# ```
#
# **Recommended**
#
# ```
# layout.opts(
# opts.Curve(color='blue'),
# opts.Image(cmap='Reds'),
# opts.Layout(width=500),
# opts.VLine(color='white'))
# ```
#
# * The latter is recommended for another reason: if possible, list your element option builders in alphabetical order, before your container option builders in alphabetical order.
#
# * Keep the expression before the `.opts` method simple so that the overall expression is readable.
# * Don't mix `hv.output` and use of the `.opts` method in the same expression.
# ## What is `.options`?
#
#
# If you tab complete a HoloViews object, you'll notice there is an `.options` method as well as a `.opts` method. So what is the difference?
#
# The `.options` method was introduced in HoloViews 1.10 and was the first time HoloViews allowed users to ignore the distinction between 'style', 'plot' and 'norm' options described in the next section. It is largely equivalent to the `.opts` method except that it applies the options on a returned clone of the object.
#
# In other words, you have `clone = obj.options(**kwargs)` where `obj` is unaffected by the keywords supplied while `clone` will be customized. Both `.opts` and `.options` support an explicit `clone` keyword, so:
#
# * `obj.opts(**kwargs, clone=True)` is equivalent to `obj.options(**kwargs)`, and conversely
# * `obj.options(**kwargs, clone=False)` is equivalent to `obj.opts(**kwargs)`
#
# For this reason, users only ever need to use `.opts`, occasionally supplying `clone=True` if required. The only other difference between these methods is that `.opts` supports the full literal specification that allows splitting into [style, plot and norm options](#Split-into-style,-plot-and-norm-options) (for developers), whereas `.options` does not.
#
# ## When should I use `clone=True`?
#
# The 'Persistent styles' section of the [Customization](../getting_started/2-Customization.ipynb) getting-started guide shows how HoloViews remembers options set for an object (per plotting extension). For instance, we never customized the `spikes` object defined at the start of the notebook, but we did customize it when it was part of a `Layout` called `layout`. Examining this `spikes` object, we see the options were applied to the underlying object, not just to a copy of it in the layout:
#
spikes
# This is because `clone=False` by default in the `.opts` method. To illustrate `clone=True`, let's view some purple spikes *without* affecting the original `spikes` object:
purple_spikes = spikes.opts(color='purple', clone=True)
purple_spikes
# Now if you were to look at `spikes` again, you would see that it still looks like the grey version above, and only `purple_spikes` is purple. This means that `clone=True` is useful when you want to keep different styles for some HoloViews object (by making styled clones of it) instead of overwriting the options each time you call `.opts`.
# ## Extending HoloViews
#
# In addition to the formats described above for use by users, additional option formats are supported that are less user friendly for data exploration but may be more convenient for library authors building on HoloViews.
#
# The first of these is the *`Option` list syntax* which is typically most useful outside of notebooks, a *literal syntax* that avoids the need to import `opts`, and then finally a literal syntax that keeps *style* and *plot* options separate.
# ### `Option` list syntax
#
# If you find yourself using `obj.opts(*options)` where `options` is a list of `Option` objects, use `obj.opts(options)` instead as list input is also supported:
# +
options = [
opts.Curve( height=200, width=900, xaxis=None, line_width=1.50, color='grey', tools=['hover']),
opts.Spikes(height=150, width=900, yaxis=None, line_width=0.25, color='orange')]
layout.opts(options).cols(1)
# -
# This approach is often best in regular Python code where you are dynamically building up a list of options to apply. Using the option builders early also allows for early validation before use in the `.opts` method.
# ### Literal syntax
#
# This syntax has the advantage of being a pure Python literal but it is harder to work with directly (due to nested dictionaries), is less readable, lacks tab completion support and lacks validation at the point where the keywords are defined:
#
layout.opts(
{'Curve': dict(height=200, width=900, xaxis=None, line_width=2, color='blue', tools=['hover']),
'Spikes': dict(height=150, width=900, yaxis=None, line_width=0.25, color='green')}).cols(1)
# The utility of this format is that you don't need to import `opts`, and it is easier to dynamically add or remove keywords using Python, or when you are storing options in a text file like YAML or JSON and only later applying them in Python code. This format should be avoided when trying to maximize readability or make the available keyword options easy to explore.
#
# ### Using `group` and `label`
#
# The notion of an element `group` and `label` was introduced in [Annotating Data](./01-Annotating_Data.ipynb). This type of metadata is helpful for organizing large collections of elements with shared styling, such as automatically generated objects from some external software (e.g. a simulator). If you have a large set of elements with semantically meaningful `group` and `label` parameters set, you can use this information to appropriately customize large numbers of visualizations at once.
#
# To illustrate, here are four overlaid curves where three have the `group` of 'Sinusoid' and one of these also has the label 'Squared':
xs = np.linspace(-np.pi,np.pi,100)
curve = hv.Curve((xs, xs/3))
group_curve1 = hv.Curve((xs, np.sin(xs)), group='Sinusoid')
group_curve2 = hv.Curve((xs, np.sin(xs+np.pi/4)), group='Sinusoid')
label_curve = hv.Curve((xs, np.sin(xs)**2), group='Sinusoid', label='Squared')
curves = curve * group_curve1 * group_curve2 * label_curve
curves
# We can now use the `.opts` method to make all curves blue unless they are in the 'Sinusoid' group in which case they are red. Additionally, if a curve in the 'Sinusoid' group also has the label 'Squared', we can make sure that curve is green with a custom interpolation option:
curves.opts(
opts.Curve(color='blue'),
opts.Curve('Sinusoid', color='red'),
opts.Curve('Sinusoid.Squared', interpolation='steps-mid', color='green'))
# By using `opts.defaults` instead of the `.opts` method, we can use this type of customization to apply options to many elements, including elements that haven't even been created yet. For instance, if we run:
opts.defaults(opts.Area('Error', alpha=0.5, color='grey'))
# Then any `Area` element with a `group` of 'Error' will then be displayed as a semi-transparent grey:
X = np.linspace(0,2,10)
hv.Area((X, np.random.rand(10), -np.random.rand(10)), vdims=['y', 'y2'], group='Error')
# ## Split into `style`, `plot` and `norm` options
#
# In HoloViews, an element such as `Curve` actually has three semantically distinct categories of options: `style`, `plot`, and `norm` options. Normally, a user doesn't need to worry about the distinction if they spend most of their time working with a single plotting extension.
#
# When trying to build a system that consistently needs to generate visualizations across different plotting libraries, it can be useful to make this distinction explicit:
#
# ##### ``style`` options:
#
# ``style`` options are passed directly to the underlying rendering backend that actually draws the plots, allowing you to control the details of how it behaves. Each backend has its own options (e.g. the [``bokeh``](Bokeh_Backend) or plotly backends).
#
# For whichever backend has been selected, HoloViews can tell you which options are supported, but you will need to read the corresponding documentation (e.g. [matplotlib](http://matplotlib.org/contents.html), [bokeh](http://bokeh.pydata.org)) for the details of their use. For listing available options, see the ``hv.help`` as described in the [Discovering options](#Discovering-options) section.
#
# HoloViews has been designed to be easily extensible to additional backends in the future and each backend would have its own set of style options.
#
# ##### ``plot`` options:
#
# Each of the various HoloViews plotting classes declares various [Parameters](http://param.pyviz.org) that control how HoloViews builds the visualization for that type of object, such as plot sizes and labels. HoloViews uses these options internally; they are not simply passed to the underlying backend. HoloViews documents these options fully in its online help and in the [Reference Manual](http://holoviews.org/Reference_Manual). These options may vary for different backends in some cases, depending on the support available both in that library and in the HoloViews interface to it, but we try to keep any options that are meaningful for a variety of backends the same for all of them. For listing available options, see the output of ``hv.help``.
#
# ##### ``norm`` options:
#
# ``norm`` options are a special type of plot option that are applied orthogonally to the above two types, to control normalization. Normalization refers to adjusting the properties of one plot relative to those of another. For instance, two images normalized together would appear with relative brightness levels, with the brightest image using the full range black to white, while the other image is scaled proportionally. Two images normalized independently would both cover the full range from black to white. Similarly, two axis ranges normalized together are effectively linked and will expand to fit the largest range of either axis, while those normalized separately would cover different ranges. For listing available options, see the output of ``hv.help``.
#
# You can preserve the semantic distinction between these types of option in an augmented form of the [Literal syntax](#Literal-syntax) as follows:
full_literal_spec = {
'Curve': {'style':dict(color='orange')},
'Curve.Sinusoid': {'style':dict(color='grey')},
'Curve.Sinusoid.Squared': {'style':dict(color='black'),
'plot':dict(interpolation='steps-mid')}}
curves.opts(full_literal_spec)
# This specification is what HoloViews uses internally, but it is awkward to write by hand and is not recommended for typical users. That said, it offers the maximum flexibility and power for integration with other software.
#
# For instance, a simulator that can output visualization using either Bokeh or Matplotlib via HoloViews could use this format. By keeping the 'plot' and 'style' options separate, the 'plot' options could be set regardless of the plotting library while the 'style' options would be conditional on the backend.
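# As a pure-Python illustration (a hypothetical helper, not part of the HoloViews API), such a full literal spec can be filtered down to its backend-specific 'style' entries, leaving the backend-independent 'plot' entries to be handled separately:

```python
# Hypothetical helper: split a full literal spec by option category.
full_literal_spec = {
    'Curve': {'style': dict(color='orange')},
    'Curve.Sinusoid': {'style': dict(color='grey')},
    'Curve.Sinusoid.Squared': {'style': dict(color='black'),
                               'plot': dict(interpolation='steps-mid')}}

def select_category(spec, category):
    # Keep only the requested category ('style', 'plot' or 'norm') per element
    return {element: options[category]
            for element, options in spec.items() if category in options}

style_spec = select_category(full_literal_spec, 'style')
plot_spec = select_category(full_literal_spec, 'plot')
```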
# ## Onwards
#
# This section of the user guide has described how you can discover and set customization options in HoloViews. Using `hv.help` and the option builders, you should be able to find the options available for any given object you want to display.
#
# What *hasn't* been explored are some of the facilities HoloViews offers to map the dimensions of your data to style options. This important topic is explored in the next user guide [Style Mapping](04-Style_Mapping.ipynb), where you will learn of the `dim` object as well as about the `Cycle` and `Palette` objects.
|
examples/user_guide/03-Applying_Customizations.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <b>Imports of required libraries</b>
import numpy as np
from scipy.stats import ttest_1samp, norm, chi2, bartlett
# <b>1. An example of a t-test on a sample from a Gaussian distribution</b>
# +
random_sample = np.random.normal(loc=31.5, scale=5, size=100)
#while np.round(np.mean(random_sample), decimals=1) != 31.5 or np.round(np.std(random_sample), decimals=0) != 5.0:
# random_sample = np.random.normal(loc=31.5, scale=5, size=100)
sample_mean = np.mean(random_sample)
sample_standard_deviation = np.std(random_sample)
print('sample mean: \t\t {0}\nsample std deviation: \t {1}\n'
.format(np.round(sample_mean, decimals=1),
np.round(sample_standard_deviation, decimals=0)))
hypothetic_mean = 28
stat, p = ttest_1samp(random_sample, hypothetic_mean)
alpha = 0.05
print('t-statistic value: \t {0} \np-value: \t\t {1} \nalpha: \t\t\t {2}\n'
.format(stat, p, alpha))
if p <= alpha:
print('Result: \t\t p-value is smaller than or equal to alpha \n \t\t\t We reject null hypothesis')
else:
print('Result: \t\t p-value is greater than alpha \n \t\t\t We can\'t reject null hypothesis')
# -
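# The t-statistic reported by `ttest_1samp` above can be reproduced by hand — a minimal sketch, assuming the usual one-sample formula with Bessel-corrected variance:

```python
import math

def t_statistic(sample, mu0):
    # t = (x_bar - mu0) / (s / sqrt(n)), with s the sample std (ddof=1)
    n = len(sample)
    mean = sum(sample) / n
    var = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return (mean - mu0) / math.sqrt(var / n)

t = t_statistic([1, 2, 3, 4, 5], 2)  # mean 3, sample variance 2.5
```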
# <b>2. An example of a t-test on a given sample</b>
# +
waiting_time_sample = np.array([1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 6, 6, 6, 7, 7])
sample_mean = np.round(np.mean(waiting_time_sample), decimals=1)
sample_standard_deviation = np.round(np.std(waiting_time_sample), decimals=1)
print('sample mean: \t\t {0}\nsample std deviation: \t {1}\n'
.format(sample_mean, sample_standard_deviation))
hypothetic_mean = 3
stat, p = ttest_1samp(waiting_time_sample, hypothetic_mean)
alpha = 0.05
print('t-statistic value: \t {0} \np-value: \t\t {1} \nalpha: \t\t\t {2}\n'
.format(stat, p, alpha))
if p <= alpha:
print('Result: \t\t p-value is smaller than or equal to alpha \n \t\t\t We reject null hypothesis')
else:
print('Result: \t\t p-value is greater than alpha \n \t\t\t We can\'t reject null hypothesis')
# -
# <b>3. An example of a t-test on samples from a Gaussian distribution and from a Student's t-distribution</b>
# +
# sample generated from gaussian distribution + ttest
normal_distribution_sample = np.random.normal(loc=38, scale=14, size=18)
while np.round(
np.mean(normal_distribution_sample), decimals=1) != 38.0 or np.round(
np.std(normal_distribution_sample), decimals=1) != 14.0:
normal_distribution_sample = np.random.normal(loc=38, scale=14, size=18)
sample_mean = np.mean(normal_distribution_sample)
sample_standard_deviation = np.std(normal_distribution_sample)
print('=== Normal distribution sample stats ===\nsample mean: \t\t {0}\nsample std deviation: \t {1}\n'
.format(np.round(sample_mean, decimals=1),
np.round(sample_standard_deviation, decimals=0)))
hypothetic_mean = 49
stat, p = ttest_1samp(normal_distribution_sample, hypothetic_mean)
alpha = 0.01
print('t-statistic value: \t {0} \np-value: \t\t {1} \nalpha: \t\t\t {2}\n'
.format(stat, p, alpha))
if p <= alpha:
print('Result: \t\t p-value is smaller than or equal to alpha \n \t\t\t We reject null hypothesis')
else:
print('Result: \t\t p-value is greater than alpha \n \t\t\t We can\'t reject null hypothesis')
# +
# using sample parameters + z-statistic formula
sample_mean = 38
sample_standard_deviation = 14
number_of_observations = 18
print('=== Sample stats ===\nsample mean: \t\t {0}\nsample std deviation: \t {1}\n'
.format(np.round(sample_mean, decimals=1),
np.round(sample_standard_deviation, decimals=0)))
hypothetic_mean = 49
z_statistic = ((sample_mean - hypothetic_mean)/sample_standard_deviation)*np.sqrt(number_of_observations)
alpha = 0.01
z_alpha1 = norm.ppf(alpha/2)
z_alpha2 = norm.ppf(1-(alpha/2))
print('z-statistic value: \t {0} \nalpha: \t\t\t {1}\nz_alpha1: \t\t {2}\nz_alpha2: \t\t {3}\n'
.format(z_statistic, alpha, z_alpha1, z_alpha2))
if z_statistic < z_alpha1 or z_statistic > z_alpha2:
print('Result: \t\t z_statistic is out of critical values partition \n \t\t\t We reject null hypothesis')
else:
print('Result: \t\t z_statistic is inside of critical values partition \n \t\t\t We can\'t reject null hypothesis')
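# For reference, the same z-statistic as a standalone function (a minimal sketch of the formula used in the cell above):

```python
import math

def z_statistic(sample_mean, hypothetic_mean, sample_std, n):
    # z = (x_bar - mu0) / (sigma / sqrt(n))
    return (sample_mean - hypothetic_mean) / sample_std * math.sqrt(n)

z = z_statistic(38, 49, 14, 18)  # same inputs as the cell above
```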
# +
degrees_of_freedom = 17
t_student_sample = np.random.standard_t(df=degrees_of_freedom, size=18)
print('=== t-Student distribution sample stats ===\ndegrees of freedom: \t {0}\n'.format(degrees_of_freedom))
hypothetic_mean = 49
stat, p = ttest_1samp(t_student_sample, hypothetic_mean)
alpha = 0.01
print('t-statistic value: \t {0} \np-value: \t\t {1} \nalpha: \t\t\t {2}\n'.format(stat, p, alpha))
if p <= alpha:
print('Result: \t\t p-value is smaller than or equal to alpha \n \t\t\t We reject null hypothesis')
else:
print('Result: \t\t p-value is greater than alpha \n \t\t\t We can\'t reject null hypothesis')
# -
# <b>4. An example of a chi-square variance test</b>
# +
# sample generated from gaussian distribution
normal_distribution_sample = np.random.normal(loc=38.0, scale=1.5, size=25)
sample_mean = np.mean(normal_distribution_sample)
sample_standard_deviation = np.std(normal_distribution_sample)
sample_variance = np.var(normal_distribution_sample)
print('=== Normal distribution sample stats ===\nsample mean: \t\t {0}\nsample std deviation: \t {1}\nsample variance: \t {2}\n'
.format(np.round(sample_mean, decimals=1),
np.round(sample_standard_deviation, decimals=0),
np.round(sample_variance, decimals=1)))
hypothetic_variance = 1.6
chi_square_stat = (
((len(normal_distribution_sample) - 1) * np.power(sample_standard_deviation, 2))
/hypothetic_variance)
new_p = 1 - chi2.cdf(chi_square_stat, df=len(normal_distribution_sample)-1)
new_p2 = chi2.sf(chi_square_stat, df=len(normal_distribution_sample)-1)
p = new_p2
alpha = 0.05
print('chi-squared statistic: \t {0} \np-value: \t\t {1} \np-value2: \t\t {2} \n\nalpha: \t\t\t {3}\n'
.format(chi_square_stat, new_p, new_p2, alpha))
if p <= alpha:
print('Result: \t\t p-value is smaller than or equal to alpha \n \t\t\t We reject null hypothesis')
else:
print('Result: \t\t p-value is greater than alpha \n \t\t\t We can\'t reject null hypothesis')
alpha = 0.1
print('\nalpha: \t\t\t {0}\n'.format(alpha))
if p <= alpha:
print('Result: \t\t p-value is smaller than or equal to alpha \n \t\t\t We reject null hypothesis')
else:
print('Result: \t\t p-value is greater than alpha \n \t\t\t We can\'t reject null hypothesis')
# -
# <b>5. An example of a Bartlett variance test on two samples</b>
# +
new_product_buyers = np.random.normal(loc=27.7, scale=5.5, size=20)
old_product_buyers = np.random.normal(loc=32.1, scale=6.3, size=22)
new_product_sample_mean = np.mean(new_product_buyers)
new_product_sample_standard_deviation = np.std(new_product_buyers)
new_product_sample_variance = np.var(new_product_buyers)
print('=== New product buyers sample stats ===\nsample mean: \t\t {0}\nsample std deviation: \t {1}\nsample variance: \t {2}\n'
.format(np.round(new_product_sample_mean, decimals=1),
np.round(new_product_sample_standard_deviation, decimals=1),
np.round(new_product_sample_variance, decimals=1)))
old_product_sample_mean = np.mean(old_product_buyers)
old_product_sample_standard_deviation = np.std(old_product_buyers)
old_product_sample_variance = np.var(old_product_buyers)
print('=== Old product buyers sample stats ===\nsample mean: \t\t {0}\nsample std deviation: \t {1}\nsample variance: \t {2}\n'
.format(np.round(old_product_sample_mean, decimals=1),
np.round(old_product_sample_standard_deviation, decimals=1),
np.round(old_product_sample_variance, decimals=1)))
stat, p = bartlett(new_product_buyers, old_product_buyers)
alpha = 0.05
print('Bartlett test statistic: {0} \np-value: \t\t {1} \nalpha: \t\t\t {2}\n'.format(stat, p, alpha))
if p <= alpha:
print('Result: \t\t p-value is smaller than or equal to alpha \n \t\t\t We reject null hypothesis')
else:
print('Result: \t\t p-value is greater than alpha \n \t\t\t We can\'t reject null hypothesis')
|
02_statistical_tests_introduction.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# You are given an unordered array consisting of the consecutive integers [1, 2, 3, ..., n] without any duplicates. You are allowed to swap any two elements. Find the minimum number of swaps required to sort the array in ascending order.
#
# **Example**
#
# arr = [7,1,3,2,4,5,6]
#
# Perform the following steps:
#
# i arr swap (indices)
# 0 [7, 1, 3, 2, 4, 5, 6] swap (0,3)
# 1 [2, 1, 3, 7, 4, 5, 6] swap (0,1)
# 2 [1, 2, 3, 7, 4, 5, 6] swap (3,4)
# 3 [1, 2, 3, 4, 7, 5, 6] swap (4,5)
# 4 [1, 2, 3, 4, 5, 7, 6] swap (5,6)
# 5 [1, 2, 3, 4, 5, 6, 7]
#
# It took 5 swaps to sort the array.
# +
import math
import os
import random
import re
import sys
# Complete the minimumSwaps function below.
def minimumSwaps(arr):
    # temp[v] holds the current index of value v in arr
    temp = [0] * (len(arr) + 1)
    for pos, val in enumerate(arr):
        temp[val] = pos
    swaps = 0
    for i in range(len(arr)):
        if arr[i] != i+1:
            swaps += 1
            # Put i+1 in place and update the position of the displaced value
            t = arr[i]
            arr[i] = i+1
            arr[temp[i+1]] = t
            temp[t] = temp[i+1]
    return swaps
if __name__ == '__main__':
fptr = open(os.environ['OUTPUT_PATH'], 'w')
n = int(input())
arr = list(map(int, input().rstrip().split()))
res = minimumSwaps(arr)
fptr.write(str(res) + '\n')
fptr.close()
# +
def main():
n = int(input())
a = list(map(int, input().split()))
answer = 0
used = [False] * n
for i in range(n):
if not used[i]:
used[i] = True
j = a[i] - 1
while j != i:
used[j] = True
answer += 1
j = a[j] - 1
print(answer)
if __name__ == '__main__':
main()
# -
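# Both solutions above rest on the same fact: every cycle of length k in the permutation needs exactly k - 1 swaps, so the minimum is n minus the number of cycles. A minimal sketch of that count:

```python
def min_swaps_by_cycles(a):
    # Minimum swaps to sort a permutation of 1..n = n - (number of cycles)
    n = len(a)
    seen = [False] * n
    cycles = 0
    for i in range(n):
        if not seen[i]:
            cycles += 1
            j = i
            while not seen[j]:
                seen[j] = True
                j = a[j] - 1  # follow each value to the index where it belongs

    return n - cycles

min_swaps_by_cycles([7, 1, 3, 2, 4, 5, 6])  # 5, as in the worked example
```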
# **Sample Input 0**
#
# 4
# 4 3 1 2
#
# **Sample Output 0**
#
# 3
#
# **Explanation 0**
#
# Given array arr: [4,3,1,2]
# After swapping (0,2) we get arr: [1,3,4,2]
# After swapping (1,2) we get arr: [1,4,3,2]
# After swapping (1,3) we get arr: [1,2,3,4]
# So, we need a minimum of 3 swaps to sort the array in ascending order.
#
# **Sample Input 1**
#
# 5
# 2 3 4 1 5
#
# **Sample Output 1**
#
# 3
#
# **Explanation 1**
#
# Given array arr: [2,3,4,1,5]
# After swapping (2,3) we get arr: [2,3,1,4,5]
# After swapping (0,1) we get arr: [3,2,1,4,5]
# After swapping (0,2) we get arr: [1,2,3,4,5]
# So, we need a minimum of 3 swaps to sort the array in ascending order.
#
# **Sample Input 2**
#
# 7
# 1 3 5 2 4 6 7
#
# **Sample Output 2**
#
# 3
#
# **Explanation 2**
#
# Given array arr: [1,3,5,2,4,6,7]
# After swapping (1,3) we get arr: [1,2,5,3,4,6,7]
# After swapping (2,3) we get arr: [1,2,3,5,4,6,7]
# After swapping (3,4) we get arr: [1,2,3,4,5,6,7]
# So, we need a minimum of 3 swaps to sort the array in ascending order.
|
Interview Preparation Kit/1. arrays/3. minimum swaps 2.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/wisrovi/03MAIR-Algoritmos-de-Optimizacion/blob/main/TorresHanoi.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + [markdown] id="iVNMT59Is8k7"
# # Recursive solution to the Towers of Hanoi
# + id="HvO4yUjns7TY"
def torres_hanoi(N, desde=1, hasta=3):
    if N == 1:
        print("Move from " + str(desde) + " to " + str(hasta))
    else:
        # The spare peg is 6 - desde - hasta, since the pegs sum to 1 + 2 + 3 = 6
        torres_hanoi(N-1, desde, 6 - desde - hasta)
        print("Move from " + str(desde) + " to " + str(hasta))
        torres_hanoi(N-1, 6 - desde - hasta, hasta)
# + id="M11VEcyA1iQy"
def torres_hanoi(N, desde=1, hasta=3):
    if N == 1:
        print("Move from " + str(desde) + " to " + str(hasta))
    else:
        # Equivalent spare-peg computation via set difference
        torres_hanoi(N-1, desde, list({1, 2, 3} - {desde, hasta})[0])
        print("Move from " + str(desde) + " to " + str(hasta))
        torres_hanoi(N-1, list({1, 2, 3} - {desde, hasta})[0], hasta)
# + colab={"base_uri": "https://localhost:8080/"} id="X-jNksn6s1E6" outputId="ab72df0a-ab56-4234-ee6b-e89a86dab19a"
torres_hanoi(3)
# + [markdown] id="uHN3-w-r3JLh"
# # Complexity: O(2^n)
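# The recursion makes one move plus two subproblems of size N-1, so the move count satisfies T(N) = 2·T(N-1) + 1 = 2^N - 1 — hence the exponential complexity. A quick check:

```python
def hanoi_moves(n):
    # T(1) = 1; T(n) = 2 * T(n-1) + 1, which closes to 2**n - 1
    return 1 if n == 1 else 2 * hanoi_moves(n - 1) + 1

hanoi_moves(3)  # 7 moves, matching the printed solution above
```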
|
TorresHanoi.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.7.6 64-bit
# metadata:
# interpreter:
# hash: 221d51554432784ffe68cd304581426b501cf3ad89feeaaa710db93299c6f320
# name: python3
# ---
# # Doublet Detection on 10k PBMCs from 10x Genomics v3
# +
import numpy as np
import doubletdetection
import scanpy as sc
import matplotlib.pyplot as plt
sc.settings.n_jobs=8
sc.set_figure_params()
# -
# ## Download Data from 10x
# ### Load Count Matrix
adata = sc.read_10x_h5(
"pbmc_10k_v3_filtered_feature_bc_matrix.h5",
backup_url="https://cf.10xgenomics.com/samples/cell-exp/3.0.0/pbmc_10k_v3/pbmc_10k_v3_filtered_feature_bc_matrix.h5"
)
adata.var_names_make_unique()
# remove "empty" genes
sc.pp.filter_genes(adata, min_cells=1)
# ## Run Doublet Detection
#
# Here we show off the new backend implementation, which uses `scanpy`. This new implementation is over 2x faster than version 2.4.0. To use the previous version of DoubletDetection, add the parameters (`use_phenograph=True`, `verbose=True`, `standard_scaling=False`) to the classifier and use the thresholds `p_thresh=1e-7`, `voter_thresh=0.8`. We recommend first using these parameters until we further validate the new implementation.
clf = doubletdetection.BoostClassifier(
n_iters=25,
use_phenograph=False,
standard_scaling=True
)
doublets = clf.fit(adata.X).predict(p_thresh=1e-16, voter_thresh=0.5)
doublet_score = clf.doublet_score()
adata.obs["doublet"] = doublets
adata.obs["doublet_score"] = doublet_score
# ## Visualize Results
# ### Convergence of doublet calls
f = doubletdetection.plot.convergence(clf, save='convergence_test.pdf', show=True, p_thresh=1e-16, voter_thresh=0.5)
# ### Doublets on umap
sc.pp.normalize_total(adata)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata)
sc.tl.pca(adata)
sc.pp.neighbors(adata)
sc.tl.umap(adata)
sc.pl.umap(adata, color=["doublet", "doublet_score"])
sc.pl.violin(adata, "doublet_score")
# ### Number of predicted doublets at different threshold combinations
f3 = doubletdetection.plot.threshold(clf, save='threshold_test.pdf', show=True, p_step=6)
|
tests/notebooks/PBMC_10k_vignette.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Parse RumEval Twitter data
# +
import os
import numpy as np
import pandas as pd
import pickle as pc
import dateutil.parser
from glob import glob
import json
import codecs
from nltk.tokenize.api import StringTokenizer
from nltk.tokenize import TweetTokenizer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.manifold import TSNE
# +
import matplotlib.pyplot as plt
# Set font size
fS = 20
# -
# Change to twitter data dir
os.chdir('/home/wmkouw/Dropbox/Projects/ucopenhagen/seq-rumour/data/RumEval2019')
# Get folder paths
twitter_path = 'twitter-english/charliehebdo/'
threads = os.listdir(twitter_path)
# +
# Get labels
with open('train-key.json') as f:
train_key = json.load(f)
with open('dev-key.json') as f:
dev_key = json.load(f)
label_keys = {**train_key['subtaskaenglish'], **dev_key['subtaskaenglish']}
train_key['subtaskaenglish']
# +
# Text array
tweet_id = []
thread_ix = []
response_ix = []
reply_ix = []
texts = []
created_date = []
created_datetime = []
labels = []
# Loop over threads
for t, thread in enumerate(threads):
with open(twitter_path + thread + '/source-tweet/' + thread + '.json') as f:
tweet = json.load(f)
tweet_id.append(thread)
thread_ix.append(t)
reply_ix.append(0)
texts.append(tweet['text'])
created_date.append(dateutil.parser.parse(tweet['created_at']).date())
created_datetime.append(dateutil.parser.parse(tweet['created_at']))
labels.append(label_keys[thread])
replies = os.listdir(twitter_path + thread + '/replies/')
for r, reply in enumerate(replies):
with open(twitter_path + thread + '/replies/' + reply) as f:
tweet = json.load(f)
tweet_id.append(reply[:-5])
thread_ix.append(t)
reply_ix.append(r + 1)
texts.append(tweet['text'])
created_date.append(dateutil.parser.parse(tweet['created_at']).date())
created_datetime.append(dateutil.parser.parse(tweet['created_at']))
labels.append(label_keys[reply[:-5]])
# -
tweet
# Convert to dataframe
data = pd.DataFrame({'id': tweet_id,
'thread_ix': thread_ix,
'reply_ix': reply_ix,
'text': texts,
'created_date': created_date,
'created_datetime': created_datetime,
'label': labels})
# write frame to csv
data.to_csv('./RumEval19.csv', sep='\t', encoding='utf-8', index=False)
data
# # Twitter word embeddings
os.chdir('/home/wmkouw/Dropbox/Projects/ucopenhagen/seq-rumour/data/word2vec-twitter')
# change 'xrange' in word2vecReader to 'range'
exec(open("repl.py").read())
# +
tt = TweetTokenizer()
num_tweets = len(data)
wemb = np.zeros((num_tweets, 400))
for n in range(num_tweets):
    aa = tt.tokenize(data['text'][n])
    ct = 0
    for a in aa:
        try:
            wemb[n, :] += model.__getitem__(a)
            ct += 1
        except KeyError:
            # Token is out of the word2vec vocabulary
            print('.', end='')

    # Average embeddings over in-vocabulary tokens
    if ct > 0:
        wemb[n, :] /= ct
# Add word embeddings to dataframe
data = data.assign(embedding=wemb.tolist())
# Write the embedding array separately
np.save('rumeval19.npy', wemb)
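# The averaging above can be sketched in plain Python with a hypothetical toy vocabulary (the real word2vec model maps tokens to 400-dimensional vectors):

```python
# Hypothetical 2-d vectors standing in for the 400-d word2vec embeddings
vocab = {"hello": [1.0, 0.0], "world": [0.0, 1.0]}

def mean_embedding(tokens, vocab, dim=2):
    # Sum the vectors of in-vocabulary tokens, then divide by their count;
    # out-of-vocabulary tokens are skipped, an all-token-OOV tweet stays zero
    vec = [0.0] * dim
    count = 0
    for tok in tokens:
        if tok in vocab:
            vec = [v + w for v, w in zip(vec, vocab[tok])]
            count += 1
    return [v / count for v in vec] if count else vec

mean_embedding(["hello", "world", "oov"], vocab)
```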
|
data/RumEval19/read_RumEval2019.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3.6 - AzureML
# language: python
# name: python3-azureml
# ---
# # Create a Pipeline
#
# You can perform the various steps required to ingest data, train a model, and register the model individually by using the Azure ML SDK to run script-based experiments. However, in an enterprise environment it is common to encapsulate the sequence of discrete steps required to build a machine learning solution into a *pipeline* that can be run on one or more compute targets; either on-demand by a user, from an automated build process, or on a schedule.
#
# In this notebook, you'll bring together all of these elements to create a simple pipeline that pre-processes data and then trains and registers a model.
# ## Connect to your workspace
#
# To get started, connect to your workspace.
#
# > **Note**: If you haven't already established an authenticated session with your Azure subscription, you'll be prompted to authenticate by clicking a link, entering an authentication code, and signing into Azure.
# +
import azureml.core
from azureml.core import Workspace
# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))
# -
# ## Prepare data
#
# In your pipeline, you'll use a dataset containing details of diabetes patients. Run the cell below to create this dataset (if you created it previously, the code will find the existing version).
# +
from azureml.core import Dataset
default_ds = ws.get_default_datastore()
if 'diabetes dataset' not in ws.datasets:
default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
target_path='diabetes-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a tabular dataset from the path on the datastore (this may take a short while)
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'diabetes-data/*.csv'))
# Register the tabular dataset
try:
tab_data_set = tab_data_set.register(workspace=ws,
name='diabetes dataset',
description='diabetes data',
tags = {'format':'CSV'},
create_new_version=True)
print('Dataset registered.')
except Exception as ex:
print(ex)
else:
print('Dataset already registered.')
# -
# ## Create scripts for pipeline steps
#
# Pipelines consist of one or more *steps*, which can be Python scripts, or specialized steps like a data transfer step that copies data from one location to another. Each step can run in its own compute context. In this exercise, you'll build a simple pipeline that contains two Python script steps: one to pre-process some training data, and another to use the pre-processed data to train and register a model.
#
# First, let's create a folder for the script files we'll use in the pipeline steps.
# +
import os
# Create a folder for the pipeline step files
experiment_folder = 'diabetes_pipeline'
os.makedirs(experiment_folder, exist_ok=True)
print(experiment_folder)
# -
# Now let's create the first script, which will read data from the diabetes dataset and apply some simple pre-processing to remove any rows with missing data and normalize the numeric features so they're on a similar scale.
#
# The script includes an argument named **--prepped-data**, which references the folder where the resulting data should be saved.
# +
# %%writefile $experiment_folder/prep_diabetes.py
# Import libraries
import os
import argparse
import pandas as pd
from azureml.core import Run
from sklearn.preprocessing import MinMaxScaler
# Get parameters
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str, dest='raw_dataset_id', help='raw dataset')
parser.add_argument('--prepped-data', type=str, dest='prepped_data', default='prepped_data', help='Folder for results')
args = parser.parse_args()
save_folder = args.prepped_data
# Get the experiment run context
run = Run.get_context()
# load the data (passed as an input dataset)
print("Loading Data...")
diabetes = run.input_datasets['raw_data'].to_pandas_dataframe()
# Log raw row count
row_count = (len(diabetes))
run.log('raw_rows', row_count)
# remove nulls
diabetes = diabetes.dropna()
# Normalize the numeric columns
scaler = MinMaxScaler()
num_cols = ['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree']
diabetes[num_cols] = scaler.fit_transform(diabetes[num_cols])
# Log processed rows
row_count = (len(diabetes))
run.log('processed_rows', row_count)
# Save the prepped data
print("Saving Data...")
os.makedirs(save_folder, exist_ok=True)
save_path = os.path.join(save_folder,'data.csv')
diabetes.to_csv(save_path, index=False, header=True)
# End the run
run.complete()
# -
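# The `MinMaxScaler` used in the prep script maps each numeric column into [0, 1]; a minimal sketch of the per-column transform:

```python
def min_max_scale(values):
    # (v - min) / (max - min) for each value; a constant column maps to 0.0
    lo, hi = min(values), max(values)
    span = hi - lo
    return [0.0 if span == 0 else (v - lo) / span for v in values]

min_max_scale([2, 4, 6])
```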
# Now you can create the script for the second step, which will train a model. The script includes an argument named **--training-data**, which references the location where the prepared data was saved by the previous step.
# +
# %%writefile $experiment_folder/train_diabetes.py
# Import libraries
from azureml.core import Run, Model
import argparse
import pandas as pd
import numpy as np
import joblib
import os
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
import matplotlib.pyplot as plt
# Get parameters
parser = argparse.ArgumentParser()
parser.add_argument("--training-data", type=str, dest='training_data', help='training data')
args = parser.parse_args()
training_data = args.training_data
# Get the experiment run context
run = Run.get_context()
# load the prepared data file in the training folder
print("Loading Data...")
file_path = os.path.join(training_data,'data.csv')
diabetes = pd.read_csv(file_path)
# Separate features and labels
X, y = diabetes[['Pregnancies','PlasmaGlucose','DiastolicBloodPressure','TricepsThickness','SerumInsulin','BMI','DiabetesPedigree','Age']].values, diabetes['Diabetic'].values
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)
# Train a decision tree model
print('Training a decision tree model...')
model = DecisionTreeClassifier().fit(X_train, y_train)
# calculate accuracy
y_hat = model.predict(X_test)
acc = np.average(y_hat == y_test)
print('Accuracy:', acc)
run.log('Accuracy', float(acc))
# calculate AUC
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test,y_scores[:,1])
print('AUC: ' + str(auc))
run.log('AUC', float(auc))
# plot ROC curve
fpr, tpr, thresholds = roc_curve(y_test, y_scores[:,1])
fig = plt.figure(figsize=(6, 4))
# Plot the diagonal 50% line
plt.plot([0, 1], [0, 1], 'k--')
# Plot the FPR and TPR achieved by our model
plt.plot(fpr, tpr)
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
run.log_image(name = "ROC", plot = fig)
plt.show()
# Save the trained model in the outputs folder
print("Saving model...")
os.makedirs('outputs', exist_ok=True)
model_file = os.path.join('outputs', 'diabetes_model.pkl')
joblib.dump(value=model, filename=model_file)
# Register the model
print('Registering model...')
Model.register(workspace=run.experiment.workspace,
model_path = model_file,
model_name = 'diabetes_model',
tags={'Training context':'Pipeline'},
               properties={'AUC': float(auc), 'Accuracy': float(acc)})
run.complete()
# -
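# The AUC logged by the training script equals the probability that a randomly chosen positive example outscores a randomly chosen negative one (ties count half). A minimal pure-Python sketch of that rank-based (Mann-Whitney) formulation:

```python
def roc_auc(y_true, scores):
    # Fraction of (positive, negative) pairs where the positive scores higher,
    # counting ties as 0.5 — equivalent to the area under the ROC curve
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```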
# ## Prepare a compute environment for the pipeline steps
#
# In this exercise, you'll use the same compute for both steps, but it's important to realize that each step is run independently; so you could specify different compute contexts for each step if appropriate.
#
# First, get the compute target you created in a previous lab (if it doesn't exist, it will be created).
#
# > **Important**: Change *your-compute-cluster* to the name of your compute cluster in the code below before running it! Cluster names must be globally unique and between 2 and 16 characters in length. Valid characters are letters, digits, and the - character.
# +
from azureml.core.compute import ComputeTarget, AmlCompute
from azureml.core.compute_target import ComputeTargetException
cluster_name = "anhldt-compute1"
try:
# Check for existing compute target
pipeline_cluster = ComputeTarget(workspace=ws, name=cluster_name)
print('Found existing cluster, use it.')
except ComputeTargetException:
# If it doesn't already exist, create it
try:
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
pipeline_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
pipeline_cluster.wait_for_completion(show_output=True)
except Exception as ex:
print(ex)
# -
# > **Note**: Compute instances and clusters are based on standard Azure virtual machine images. For this exercise, the *Standard_DS11_v2* image is recommended to achieve the optimal balance of cost and performance. If your subscription has a quota that does not include this image, choose an alternative image; but bear in mind that a larger image may incur higher cost and a smaller image may not be sufficient to complete the tasks. Alternatively, ask your Azure administrator to extend your quota.
#
# The compute will require a Python environment with the necessary package dependencies installed.
# %%writefile $experiment_folder/experiment_env.yml
name: experiment_env
dependencies:
- python=3.6.2
- scikit-learn
- ipykernel
- matplotlib
- pandas
- pip
- pip:
- azureml-defaults
- pyarrow
# Now that you have a Conda configuration file, you can create an environment and use it in the run configuration for the pipeline.
# +
from azureml.core import Environment
from azureml.core.runconfig import RunConfiguration
# Create a Python environment for the experiment (from a .yml file)
experiment_env = Environment.from_conda_specification("experiment_env", experiment_folder + "/experiment_env.yml")
# Register the environment
experiment_env.register(workspace=ws)
registered_env = Environment.get(ws, 'experiment_env')
# Create a new runconfig object for the pipeline
pipeline_run_config = RunConfiguration()
# Use the compute you created above.
pipeline_run_config.target = pipeline_cluster
# Assign the environment to the run configuration
pipeline_run_config.environment = registered_env
print ("Run configuration created.")
# -
# ## Create and run a pipeline
#
# Now you're ready to create and run a pipeline.
#
# First you need to define the steps for the pipeline, and any data references that need to be passed between them. In this case, the first step must write the prepared data to a folder that can be read by the second step. Since the steps will be run on remote compute (and in fact, could each be run on different compute), the folder path must be passed as a data reference to a location in a datastore within the workspace. The **OutputFileDatasetConfig** object is a special kind of data reference that is used for interim storage locations that can be passed between pipeline steps, so you'll create one and use it as the output for the first step and the input for the second step. Note that you need to pass it as a script argument so your code can access the datastore location referenced by the data reference.
# +
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.steps import PythonScriptStep
# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")
# Create an OutputFileDatasetConfig (temporary Data Reference) for data passed from step 1 to step 2
prepped_data = OutputFileDatasetConfig("prepped_data")
# Step 1, Run the data prep script
prep_step = PythonScriptStep(name = "Prepare Data",
source_directory = experiment_folder,
script_name = "prep_diabetes.py",
arguments = ['--input-data', diabetes_ds.as_named_input('raw_data'),
'--prepped-data', prepped_data],
compute_target = pipeline_cluster,
runconfig = pipeline_run_config,
allow_reuse = True)
# Step 2, run the training script
train_step = PythonScriptStep(name = "Train and Register Model",
source_directory = experiment_folder,
script_name = "train_diabetes.py",
arguments = ['--training-data', prepped_data.as_input()],
compute_target = pipeline_cluster,
runconfig = pipeline_run_config,
allow_reuse = True)
print("Pipeline steps defined")
# -
# OK, you're ready to build the pipeline from the steps you've defined and run it as an experiment.
# +
from azureml.core import Experiment
from azureml.pipeline.core import Pipeline
from azureml.widgets import RunDetails
# Construct the pipeline
pipeline_steps = [prep_step, train_step]
pipeline = Pipeline(workspace=ws, steps=pipeline_steps)
print("Pipeline is built.")
# Create an experiment and run the pipeline
experiment = Experiment(workspace=ws, name = 'mslearn-diabetes-pipeline')
pipeline_run = experiment.submit(pipeline, regenerate_outputs=True)
print("Pipeline submitted for execution.")
RunDetails(pipeline_run).show()
pipeline_run.wait_for_completion(show_output=True)
# -
# A graphical representation of the pipeline experiment will be displayed in the widget as it runs. Keep an eye on the kernel indicator at the top right of the page, when it turns from **⚫** to **◯**, the code has finished running. You can also monitor pipeline runs in the **Experiments** page in [Azure Machine Learning studio](https://ml.azure.com).
#
# When the pipeline has finished, you can examine the metrics recorded by its child runs.
for run in pipeline_run.get_children():
print(run.name, ':')
metrics = run.get_metrics()
for metric_name in metrics:
print('\t',metric_name, ":", metrics[metric_name])
# Assuming the pipeline was successful, a new model should be registered with a *Training context* tag indicating it was trained in a pipeline. Run the following code to verify this.
# +
from azureml.core import Model
for model in Model.list(ws):
print(model.name, 'version:', model.version)
for tag_name in model.tags:
tag = model.tags[tag_name]
print ('\t',tag_name, ':', tag)
for prop_name in model.properties:
prop = model.properties[prop_name]
print ('\t',prop_name, ':', prop)
print('\n')
# -
# ## Publish the pipeline
#
# After you've created and tested a pipeline, you can publish it as a REST service.
# +
# Publish the pipeline from the run
published_pipeline = pipeline_run.publish_pipeline(
name="diabetes-training-pipeline", description="Trains diabetes model", version="1.0")
published_pipeline
# -
# Note that the published pipeline has an endpoint, which you can see in the **Endpoints** page (on the **Pipeline Endpoints** tab) in [Azure Machine Learning studio](https://ml.azure.com). You can also find its URI as a property of the published pipeline object:
rest_endpoint = published_pipeline.endpoint
print(rest_endpoint)
# ## Call the pipeline endpoint
#
# To use the endpoint, client applications need to make a REST call over HTTP. This request must be authenticated, so an authorization header is required. A real application would require a service principal with which to be authenticated, but to test this out, we'll use the authorization header from your current connection to your Azure workspace, which you can get using the following code:
# +
from azureml.core.authentication import InteractiveLoginAuthentication
interactive_auth = InteractiveLoginAuthentication()
auth_header = interactive_auth.get_authentication_header()
print("Authentication header ready.")
# -
# Now we're ready to call the REST interface. The pipeline runs asynchronously, so we'll get an identifier back, which we can use to track the pipeline experiment as it runs:
# +
import requests
experiment_name = 'mslearn-diabetes-pipeline'
rest_endpoint = published_pipeline.endpoint
response = requests.post(rest_endpoint,
headers=auth_header,
json={"ExperimentName": experiment_name})
run_id = response.json()["Id"]
run_id
# -
# Since you have the run ID, you can use it to wait for the run to complete.
#
# > **Note**: The pipeline should complete quickly, because each step was configured to allow output reuse. This was done primarily for convenience and to save time in this course. In reality, you'd likely want the first step to run every time in case the data has changed, and trigger the subsequent steps only if the output from step one changes.
# +
from azureml.pipeline.core.run import PipelineRun
published_pipeline_run = PipelineRun(ws.experiments[experiment_name], run_id)
published_pipeline_run.wait_for_completion(show_output=True)
# -
# ## Schedule the Pipeline
#
# Suppose the clinic for the diabetes patients collects new data each week, and adds it to the dataset. You could run the pipeline every week to retrain the model with the new data.
# +
from azureml.pipeline.core import ScheduleRecurrence, Schedule
# Submit the Pipeline every Monday at 00:00 UTC
recurrence = ScheduleRecurrence(frequency="Week", interval=1, week_days=["Monday"], time_of_day="00:00")
weekly_schedule = Schedule.create(ws, name="weekly-diabetes-training",
description="Based on time",
pipeline_id=published_pipeline.id,
experiment_name='mslearn-diabetes-pipeline',
recurrence=recurrence)
print('Pipeline scheduled.')
# -
# You can retrieve the schedules that are defined in the workspace like this:
schedules = Schedule.list(ws)
schedules
# You can check the latest run like this:
# +
pipeline_experiment = ws.experiments.get('mslearn-diabetes-pipeline')
latest_run = list(pipeline_experiment.get_runs())[0]
latest_run.get_details()
# -
# This is a simple example, designed to demonstrate the principle. In reality, you could build more sophisticated logic into the pipeline steps - for example, evaluating the model against some test data to calculate a performance metric like AUC or accuracy, comparing the metric to that of any previously registered versions of the model, and only registering the new model if it performs better.
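# The comparison logic such an evaluation step might implement can be sketched in plain Python (a hypothetical helper, not part of the Azure ML SDK; the metric names are illustrative):

```python
def should_register(new_auc, previous_aucs):
    """Register the new model only if it beats every previously registered version."""
    if not previous_aucs:       # no earlier versions: always register the first model
        return True
    return new_auc > max(previous_aucs)

print(should_register(0.87, [0.82, 0.85]))  # True
print(should_register(0.80, [0.82, 0.85]))  # False
```

# In a real pipeline, `previous_aucs` would come from the metrics of earlier registered model versions, and the registration call would be made only when this check passes.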
#
# You can use the [Azure Machine Learning extension for Azure DevOps](https://marketplace.visualstudio.com/items?itemName=ms-air-aiagility.vss-services-azureml) to combine Azure ML pipelines with Azure DevOps pipelines (yes, it *is* confusing that they have the same name!) and integrate model retraining into a *continuous integration/continuous deployment (CI/CD)* process. For example, you could use an Azure DevOps *build* pipeline to trigger an Azure ML pipeline that trains and registers a model, and when the model is registered it could trigger an Azure DevOps *release* pipeline that deploys the model as a web service, along with the application or service that consumes the model.
|
.ipynb_checkpoints/08 - Create a Pipeline-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Congratulations! You have successfully defined and called your own function! That's pretty cool.
#
# In the previous exercise, you defined and called the function shout(), which printed out a string concatenated with '!!!'. You will now update shout() by adding a parameter so that it can accept and process any string argument passed to it. Also note that shout(word), the part of the header that specifies the function name and parameter(s), is known as the signature of the function. You may encounter this term in the wild!
# Complete the function header by adding the parameter name, word.
# Assign the result of concatenating word with '!!!' to shout_word.
# Print the value of shout_word.
# Call the shout() function, passing to it the string, 'congratulations'.
# +
# Define shout with the parameter, word
def shout(word):
"""Print a string with three exclamation marks"""
# Concatenate the strings: shout_word
shout_word = word + '!!!'
# Print shout_word
print(shout_word)
# Call shout with the string 'congratulations'
shout('congratulations')
|
Python Data Science Toolbox -Part 1/Writing your own functions/02 .Single-parameter functions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Streaming Submodular Optimization
#
# Sometimes, your data can't all be in memory at once to compute on. This can be because your data is so large that it cannot fit in memory, but it can also be because your data comes in the form of a stream without a known ending. In these settings, simple application of existing algorithms is not practical. Fortunately, streaming submodular optimization strategies have been developed that allow for efficient computation in either setting.
#
# In the streaming optimization setting, the optimizer is exposed to only a batch of data at a time and must decide which examples to keep before seeing the next batch, without knowing beforehand how many total batches there will be. This is much more challenging than the original problem because the optimizer cannot go back to earlier batches and recalculate what the gain of each element would be given some good element in the current batch. The primary difficulty in developing streaming submodular optimization algorithms is that if the algorithm chooses elements too quickly, it may miss out on good elements later on, but if it is too reluctant to make choices, it may fail to fill the subset at all.
#
# There exist several algorithms for performing submodular optimization in the streaming setting. Currently, apricot has the sieve greedy algorithm (http://www.cs.cornell.edu/~ashwin85/docs/frp0328-badanidiyuru.pdf) implemented. At a high level, this algorithm works by defining thresholds on an exponential scale (with a user-specified fineness) that could be the objective value of the optimal subset and, in parallel, collecting subsets that would be optimal if that objective value were true. As certain thresholds are found to be too small they are discarded, and as higher-scoring examples are found, larger thresholds are instantiated. After any batch is observed, the user can retrieve the best-performing subset out of those still being considered.
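# The thresholding idea can be sketched in a few lines of plain Python. This is a toy re-implementation for intuition only, not apricot's optimizer; `f` is assumed to be a monotone submodular set function that accepts a list of items:

```python
import math

def sieve_streaming(stream, k, f, epsilon=0.2):
    """Toy sieve-streaming sketch: keep one candidate subset per threshold."""
    m = 0.0       # largest singleton value seen so far
    sieves = {}   # threshold -> subset collected for that threshold
    for x in stream:
        m = max(m, f([x]))
        if m == 0:
            continue
        # Keep thresholds (1+eps)^i spanning [m, 2*k*m]; discard the rest
        lo = math.floor(math.log(m, 1 + epsilon))
        hi = math.ceil(math.log(2 * k * m, 1 + epsilon))
        live = {(1 + epsilon) ** i for i in range(lo, hi + 1)}
        sieves = {v: S for v, S in sieves.items() if v in live}
        for v in live:
            S = sieves.setdefault(v, [])
            gain = f(S + [x]) - f(S)
            # Accept x if its marginal gain is large enough for this threshold
            if len(S) < k and gain >= (v / 2 - f(S)) / (k - len(S)):
                S.append(x)
    # Return the best subset among all surviving thresholds
    return max(sieves.values(), key=f, default=[])

print(sieve_streaming(range(1, 11), k=3, f=sum))  # favors the high-value items
```

# Each threshold's subset greedily accepts any element whose marginal gain is large enough for that threshold to be plausible, which is what lets the algorithm make irrevocable decisions in a single pass.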
#
# Let's see this in action.
# +
# %pylab inline
numpy.random.seed(0)
import seaborn
seaborn.set_style('whitegrid')
# -
# ### Feature-based Functions
# Let's start off by creating a data set made up of random values. By definition, we won't be able to construct a data set that cannot fit in memory here, but we can pretend that we can only see batches of data and compare the results there to the performance when performing normal selection on the entire data set.
X = numpy.exp(numpy.random.randn(1000, 50))
# We can perform streaming submodular optimization using the `partial_fit` method, which is inspired by sklearn's API.
# +
from apricot import FeatureBasedSelection
model = FeatureBasedSelection(10, 'sqrt')
model.partial_fit(X)
model.ranking, sum(model.gains)
# -
# The `partial_fit` method is a batched version of the pure streaming algorithm, in which only a single example is seen at a time. Processing a minibatch of data is much faster than processing a single example at a time, due to parallelization and vectorization, but the algorithm will produce the exact same results regardless of the size of the data that it sees.
model = FeatureBasedSelection(10, 'sqrt')
for i in range(0, 1000, 100):
model.partial_fit(X[i:i+100])
model.ranking, sum(model.gains)
# As a baseline to compare the performance of the streaming algorithm to, let's calculate the performance that we would get if we were able to apply the greedy algorithm. This should serve as an upper bound for the performance that we would expect to get.
# +
from apricot import FeatureBasedSelection
model = FeatureBasedSelection(10, 'sqrt', optimizer='naive')
model.fit(X)
greedy_baseline = sum(model.gains)
greedy_baseline
# -
# Next, let's calculate the performance that we would get from a naively chosen set of points, as a lower bound for performance. Because the streaming algorithm sees points one at a time, we'll intentionally use the first points in the data set. An improperly tuned streaming algorithm may simply choose the first points that it sees to ensure that it can collect a subset of size $k$. Performing better than this means that the algorithm was able to hold out for better points that it observes later on.
first_baseline = numpy.sqrt(X[:10].sum(axis=0)).sum()
first_baseline
# It looks like the 237.87 that we get from the streaming algorithm falls squarely between the upper bound of 253.96 and the lower bound of 197.06.
#
# However, the streaming algorithm has a hyperparameter, $\epsilon$, which controls the fineness of the thresholds chosen. Setting this value to be small increases the number of thresholds that are considered, and setting it to be large decreases that number. $\epsilon$ must be a positive value. Thus, small values of $\epsilon$ are more likely to closely estimate the true threshold value and, hence, return a subset that is the highest quality possible under the streaming setting.
#
# Let's take a look at the performance of the streaming algorithm as we scan over different values for $\epsilon$.
# +
epsilons = 10 ** numpy.arange(-2.5, 1.1, 0.1)
gains = []
for epsilon in epsilons:
model = FeatureBasedSelection(10, 'sqrt', optimizer_kwds={'epsilon': epsilon})
model.partial_fit(X)
gains.append(sum(model.gains))
# -
plt.plot([epsilons[0], epsilons[-1]], [greedy_baseline, greedy_baseline], color='k', label="Upper Bound")
plt.plot([epsilons[0], epsilons[-1]], [first_baseline, first_baseline], color='0.5', label="Lower Bound")
plt.scatter(epsilons, gains, color='m', label="Streaming Algorithm")
plt.legend(fontsize=12, loc=(1.05, 0.3))
plt.xscale('log')
plt.xlabel("$\\epsilon$", fontsize=14)
plt.ylabel("Objective Score", fontsize=12)
plt.show()
# The results from this plot make sense. When $\epsilon$ is small, the streaming algorithm returns a better subset than when $\epsilon$ is large. At a certain point there is no additional benefit to decreasing $\epsilon$ because the true threshold is already included. Likewise, when $\epsilon$ gets too large, the nearest threshold to the true threshold is too far away and the best estimate is the trivial solution of returning the first $k$ items.
#
# At this point, you may be wondering why the streaming algorithm doesn't return a value near to the upper bound if the threshold that has been selected is close to the true one. Well, the reason is because the algorithm can do only a single pass through the data set. This means that if the best first example is near the end of the stream, the streaming algorithm has to select several other examples first before getting to it. Choosing some other element before the optimal best first element means, by definition, that the streaming algorithm must make sub-optimal choices. Thus, the algorithm will infrequently return the best set.
# ### Facility Location
#
# The streaming algorithm becomes more complicated when used to optimize graph-based functions such as the facility location function. This is because optimization is performed on similarities between examples and, by construction, the streaming algorithm does not see all examples and so cannot calculate these similarities. An initial solution to this problem might be to calculate similarities only between examples within the batch that is being observed. However, this strategy has several flaws. The first is that it would not work when only a single example is observed at a time. Another flaw is that the scale of the gains would change from batch to batch: if one batch contained many similar elements and the next many dissimilar ones, the gains would generally be larger for items in the first batch than in the second.
#
# There are a few potential solutions to this problem, which revolve around storing a reservoir of data and calculating similarities between newly observed items and this reservoir. The simplest way to populate this reservoir is to store the first $r$ elements, where $r$ is the size of the reservoir, that are observed. Although this strategy would work when the data is completely randomly distributed, and thus the first $r$ elements are a good representation of the entire data set, it does not work when the data is ordered or exhibits autocorrelation. For instance, if the data comes in the form of sensor measurements over time, taking the first $r$ elements means taking only a small time slice. A more complicated, and theoretically justified, approach is to use reservoir sampling in order to maintain a uniform sample of the data at any point in time. This is the approach that is implemented in apricot.
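# Reservoir sampling itself is compact. Here is a minimal sketch of Algorithm R for intuition (apricot manages its reservoir internally, so you never need to write this yourself):

```python
import random

def reservoir_sample(stream, r, seed=0):
    """Maintain a uniform random sample of size r over a stream (Algorithm R)."""
    rng = random.Random(seed)
    reservoir = []
    for t, x in enumerate(stream):
        if t < r:
            reservoir.append(x)      # fill the reservoir with the first r items
        else:
            j = rng.randint(0, t)    # item t+1 is kept with probability r/(t+1)
            if j < r:
                reservoir[j] = x     # evict a uniformly chosen earlier item
    return reservoir

print(reservoir_sample(range(1000), 10))
```

# The key property is that after any prefix of the stream, every item seen so far has equal probability of being in the reservoir, which avoids the time-slice bias of simply keeping the first $r$ elements.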
#
# The practical difference, for the user, between using the streaming algorithm on a feature-based function and on a graph-based function is that the reservoir size must be set in the selection object.
# +
from apricot import FacilityLocationSelection
X_corr = numpy.corrcoef(X) ** 2
model = FacilityLocationSelection(10, 'corr')
model.partial_fit(X)
model.ranking, X_corr[model.ranking].max(axis=0).sum()
# -
# The gains reported here are no longer calculated with respect to the entire data set, so to compare against the baseline methods we must calculate the objective function by hand.
#
# Let's take a look at the upper bound, calculated using the naive greedy algorithm:
# +
model = FacilityLocationSelection(10, 'corr')
model.fit(X)
naive_greedy = sum(model.gains)
model.ranking, naive_greedy
# -
# We can also calculate the lower bound in the same manner as before, by choosing the first $k$ elements.
first_k = (numpy.corrcoef(X) ** 2)[:10].max(axis=0).sum()
first_k
# We're seeing similar trends here as when we evaluated the feature-based function.
#
# Let's vary the size of the reservoir and see how that affects performance.
# +
sizes = numpy.arange(1, 1002, 10)
gains = []
for size in sizes:
model = FacilityLocationSelection(10, 'corr', max_reservoir_size=size)
model.partial_fit(X)
gains.append(X_corr[model.ranking].max(axis=0).sum())
# -
plt.plot([sizes[0], sizes[-1]], [naive_greedy, naive_greedy], color='k', label="Naive Greedy")
plt.plot([sizes[0], sizes[-1]], [first_k, first_k], color='0.4', label="First K")
plt.scatter(sizes, gains, label='Streaming Algorithm')
plt.legend(fontsize=12, loc=(1.05, 0.3))
plt.xlabel("Max Reservoir Size", fontsize=12)
plt.ylabel("Objective Score", fontsize=12)
plt.show()
# We observe the expected trend: as the size of the reservoir increases, so too does the objective score, up to a point. However, the scores of even similarly sized reservoirs have a wide variance. This can be attributed to reservoir sampling being a stochastic approach that produces very different reservoirs even when set to the same size.
# Sometimes, you already have a reservoir that you would like to evaluate against. In this case, you can pass the reservoir (a $n$ by $d$ matrix of examples, not of similarities) into the selector using the `reservoir` keyword. This reservoir will be held constant throughout the selection process and not updated using reservoir sampling. Although the variance will decrease substantially when using a pre-defined reservoir because there is no sampling involved, there is the chance that a biased reservoir that is not truly representative of the entire stream will lead to suboptimal selected examples.
# +
gains3 = []
for size in sizes:
model = FacilityLocationSelection(10, 'corr', reservoir=X[:size])
model.partial_fit(X)
gains3.append(X_corr[model.ranking].max(axis=0).sum())
# -
plt.plot([sizes[0], sizes[-1]], [naive_greedy, naive_greedy], color='k', label="Naive Greedy")
plt.plot([sizes[0], sizes[-1]], [first_k, first_k], color='0.4', label="First K")
plt.scatter(sizes, gains, label='Reservoir Sampling')
plt.scatter(sizes, gains3, label="Fixed Reservoir")
plt.legend(fontsize=12, loc=(1.05, 0.3))
plt.xlabel("Max Reservoir Size", fontsize=12)
plt.ylabel("Objective Score", fontsize=12)
plt.show()
# We can see here that using a small number of examples from the beginning of the stream was not a great idea, because the optimization algorithm saw that the first elements of the stream were very representative of the reservoir (by definition) and didn't select a truly representative set. However, once the reservoir became big enough, it became a reasonable approximation of the diversity within the data set.
|
tutorials/6. Streaming Submodular Optimization.ipynb
|
// ---
// jupyter:
// jupytext:
// text_representation:
// extension: .cpp
// format_name: light
// format_version: '1.5'
// jupytext_version: 1.14.4
// kernelspec:
// display_name: C++14
// language: C++14
// name: xcpp14
// ---
// # Linked List
// ## Basic structure of a node
#include <iostream>
using namespace std;
struct Node
{
int val;
Node *next;
//Constructor
Node(int val, Node *next) : val(val), next(next){};
//Insertion
void Inst(Node *&, int);
void InstRec(Node *&, int);
//Traversal
void Trav();
void TravRec(Node *&);
};
// ## Insertion using iteration
// +
void Node::Inst(Node*&temp,int val)
{
//Iterator
Node *iptr = temp;
//Create a node
Node* nd = new Node(val,NULL);
//If head is NULL
if(temp == NULL)
temp = nd;
else
{
while(iptr->next != NULL)
{
iptr = iptr->next;
}
iptr->next = nd;
}
}
//Creating the head (Specially for Xeus-Cling)
Node *head = new Node(1,NULL);
//Inserting 3 nodes
head->Inst(head,4);
head->Inst(head,5);
head->Inst(head,6);
// -
// `iptr` is acting as an iterator for the LinkedList.
// Two conditions are checked:
// 1. If the head is NULL (the list is empty, so the new node becomes the head)
// 2. Otherwise, iterate to the last node and append the new node
// ## Insertion using recursion
// +
void Node::InstRec(Node *&temp,int val)
{
//Base case if the head is NULL
if(temp == NULL)
{
Node *nd = new Node(val,NULL);
temp = nd;
return;
}
//Base case if the last node is reached
else if(temp->next == NULL)
{
Node *nd = new Node(val,NULL);
temp->next = nd;
return;
}
//Recursive case to keep on going forward
InstRec(temp->next,val);
}
head->InstRec(head,8);
// -
// There are two base cases:
// 1. If the head is NULL
// 2. If the last node is reached
//
// The recursive case keeps on moving to the next node in the linked list.
// ## Traversal using iterator
// +
void Node::Trav()
{
Node* iptr = this;
while(iptr != NULL)
{
cout << iptr->val << " ";
iptr = iptr->next;
}
cout << endl;
}
head->Trav();
// -
// `iptr` is acting as an iterator for the LinkedList.
// _Notice: we didn't need to pass any argument to this function, since we don't need to modify any node._
//
// ## Traversal using recursion
// +
void Node::TravRec(Node*&temp)
{
if(temp==NULL)
{
cout << endl;
return;
}
cout << temp->val << " ";
TravRec(temp->next);
}
head->TravRec(head);
// -
// * Base Case: When the temp reaches `NULL`
// * Recursive Case: Print the value of node and do recursive call by passing `temp->next`
|
notebooks/LinkedList.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import os
import tensorflow as tf
import numpy as np
import pickle
import logging
import tqdm
import gc
import math
import unicodedata
import itertools
import sys
import random
from six.moves import zip_longest
from tensorflow.python.layers.core import Dense
# +
flags = tf.app.flags
tf.app.flags.DEFINE_string('f', '', 'kernel')
flags.DEFINE_string("mode","train","mode")
flags.DEFINE_string("inference_mode",'greedy',"inference_mode")
flags.DEFINE_string("rnn_cell", "lstm", "rnn cell")
flags.DEFINE_string("data_file", "Data/CQA_codes.pkl", "data_file")
flags.DEFINE_integer("batch_size", 3, "batch_size")
flags.DEFINE_integer("epochs", 30, "epochs")
flags.DEFINE_integer("max_summary_length",100,"max_summary_length")
flags.DEFINE_integer("dim_str", 50, "dim_str")
flags.DEFINE_integer("dim_sem", 75, "dim_sem")
flags.DEFINE_integer("dim_output", 150, "dim_output")
flags.DEFINE_float("keep_prob", 0.7, "keep_prob")
flags.DEFINE_float("lr", 0.02, "lr")
flags.DEFINE_float("norm", 1e-4, "norm")
flags.DEFINE_integer("gpu", 0, "gpu")
flags.DEFINE_string("sent_attention", "max", "sent_attention")
flags.DEFINE_string("ans_attention", "max", "ans_attention")
flags.DEFINE_string("doc_attention", "max", "doc_attention")
flags.DEFINE_bool("large_data", True, "large_data")
flags.DEFINE_integer("log_period", 100, "log_period")
flags.DEFINE_integer("beam_width",4,"beam_width")
# +
def grouper(iterable, n, fillvalue=None, shorten=False, num_groups=None):
args = [iter(iterable)] * n
out = zip_longest(*args, fillvalue=fillvalue)
out = list(out)
if num_groups is not None:
default = (fillvalue,) * n
assert isinstance(num_groups, int)
out = list(each for each, _ in zip_longest(out, range(num_groups), fillvalue=default))
if shorten:
assert fillvalue is None
out = (tuple(e for e in each if e is not None) for each in out)
return out
def LReLu(x, leak=0.01):
f1 = 0.5 * (1 + leak)
f2 = 0.5 * (1 - leak)
return f1 * x + f2 * tf.abs(x)
def dynamicBiRNN(input, seqlen, n_hidden, cell_type, cell_name=''):
batch_size = tf.shape(input)[0]
with tf.variable_scope(cell_name + 'fw', initializer=tf.contrib.layers.xavier_initializer(), dtype = tf.float32):
if(cell_type == 'gru'):
fw_cell = tf.contrib.rnn.GRUCell(n_hidden)
elif(cell_type == 'lstm'):
fw_cell = tf.contrib.rnn.LSTMCell(n_hidden)
fw_initial_state = fw_cell.zero_state(batch_size, tf.float32)
with tf.variable_scope(cell_name + 'bw', initializer=tf.contrib.layers.xavier_initializer(), dtype = tf.float32):
if(cell_type == 'gru'):
bw_cell = tf.contrib.rnn.GRUCell(n_hidden)
elif(cell_type == 'lstm'):
bw_cell = tf.contrib.rnn.LSTMCell(n_hidden)
bw_initial_state = bw_cell.zero_state(batch_size, tf.float32)
with tf.variable_scope(cell_name):
outputs, output_states = tf.nn.bidirectional_dynamic_rnn(fw_cell, bw_cell, input,
initial_state_fw=fw_initial_state,
initial_state_bw=bw_initial_state,
sequence_length=seqlen)
return outputs, output_states
def decode(helper, scope, reuse=None):
with tf.variable_scope(scope, reuse=reuse):
attention_mechanism = tf.contrib.seq2seq.BahdanauAttention(num_units=num_units, memory=encoder_outputs,memory_sequence_length=input_lengths)
cell = tf.contrib.rnn.GRUCell(num_units=num_units)
attn_cell = tf.contrib.seq2seq.AttentionWrapper(cell, attention_mechanism, attention_layer_size=num_units / 2)
out_cell = tf.contrib.rnn.OutputProjectionWrapper(attn_cell, vocab_size, reuse=reuse)
decoder = tf.contrib.seq2seq.BasicDecoder(cell=out_cell, helper=helper,initial_state=out_cell.zero_state(dtype=tf.float32, batch_size=batch_size))#initial_state=encoder_final_state)
outputs = tf.contrib.seq2seq.dynamic_decode(decoder=decoder, output_time_major=False, impute_finished=True, maximum_iterations=flags.FLAGS.max_summary_length)
return outputs[0]
def get_structure(name, input, max_l, mask_parser_1, mask_parser_2):
def _getDep(input, mask1, mask2):
#input: batch_l, sent_l, rnn_size
with tf.variable_scope("Structure/"+name, reuse=True, dtype=tf.float32):
w_parser_p = tf.get_variable("w_parser_p")
w_parser_c = tf.get_variable("w_parser_c")
b_parser_p = tf.get_variable("bias_parser_p")
b_parser_c = tf.get_variable("bias_parser_c")
w_parser_s = tf.get_variable("w_parser_s")
w_parser_root = tf.get_variable("w_parser_root")
parent = tf.tanh(tf.tensordot(input, w_parser_p, [[2], [0]]) + b_parser_p)
child = tf.tanh(tf.tensordot(input, w_parser_c, [[2], [0]])+b_parser_c)
# rep = LReLu(parent+child)
temp = tf.tensordot(parent,w_parser_s,[[-1],[0]])
raw_scores_words_ = tf.matmul(temp,tf.matrix_transpose(child))
# raw_scores_words_ = tf.squeeze(tf.tensordot(rep, w_parser_s, [[3], [0]]) , [3])
raw_scores_root_ = tf.squeeze(tf.tensordot(input, w_parser_root, [[2], [0]]) , [2])
raw_scores_words = tf.exp(raw_scores_words_)
raw_scores_root = tf.exp(raw_scores_root_)
tmp = tf.zeros_like(raw_scores_words[:,:,0])
raw_scores_words = tf.matrix_set_diag(raw_scores_words,tmp)
str_scores, LL = _getMatrixTree(raw_scores_root, raw_scores_words, mask1, mask2)
return str_scores
def _getMatrixTree(r, A, mask1, mask2):
L = tf.reduce_sum(A, 1)
L = tf.matrix_diag(L)
L = L - A
LL = L[:, 1:, :]
LL = tf.concat([tf.expand_dims(r, [1]), LL], 1)
LL_inv = tf.matrix_inverse(LL) #batch_l, doc_l, doc_l
d0 = tf.multiply(r, LL_inv[:, :, 0])
LL_inv_diag = tf.expand_dims(tf.matrix_diag_part(LL_inv), 2)
tmp1 = tf.matrix_transpose(tf.multiply(tf.matrix_transpose(A), LL_inv_diag))
tmp2 = tf.multiply(A, tf.matrix_transpose(LL_inv))
d = mask1 * tmp1 - mask2 * tmp2
d = tf.concat([tf.expand_dims(d0,[1]), d], 1)
return d, LL
str_scores = _getDep(input, mask_parser_1, mask_parser_2)
return str_scores
def initialize_uninitialized_vars(sess):
from itertools import compress
global_vars = tf.global_variables()
is_not_initialized = sess.run([~(tf.is_variable_initialized(var)) \
for var in global_vars])
not_initialized_vars = list(compress(global_vars, is_not_initialized))
if len(not_initialized_vars):
sess.run(tf.variables_initializer(not_initialized_vars))
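# To make the `_getMatrixTree` step above concrete, here is a small NumPy illustration with hypothetical scores: build the Laplacian from the exponentiated edge scores, replace its first row with the root scores, and read the root-attachment marginals off the inverse. The marginals form a proper distribution:

```python
import numpy as np

# Toy exponentiated scores for a 3-word "sentence" (hypothetical values)
A = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [0.5, 1.0, 0.0]])      # A[i, j]: weight for i as parent of j (diag zeroed)
r = np.array([1.0, 0.5, 0.2])        # root attachment weights

L = np.diag(A.sum(axis=0)) - A       # Laplacian variant: column sums on the diagonal
LL = np.vstack([r, L[1:, :]])        # replace the first row with the root scores
LL_inv = np.linalg.inv(LL)

# Marginal probability that each word attaches directly to the root
d0 = r * LL_inv[:, 0]
print(d0)                            # sums to 1: root marginals form a distribution
```

# This mirrors the batched TensorFlow code: `tf.matrix_diag(tf.reduce_sum(A, 1)) - A`, the concat of `r` as the first row, and `d0 = r * LL_inv[:, :, 0]`, just without the batch dimension.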
# +
class Instance:
def __init__(self):
self.token_idxs = None
self.abstract_idxs = None
self.idx = -1
def _doc_len(self):
k = len(self.token_idxs)
return (k)
def _abstract_len(self):
k = len(self.abstract_idxs)
return k
def _max_ans_len(self):
k = max([len(ans) for ans in self.token_idxs])
return int(k)
def _max_sent_len(self):
k = max([len(sent) for ans in self.token_idxs for sent in ans ])
return int(k)
class DataSet:
def __init__(self, data):
self.data = data
self.num_examples = len(self.data)
def sort(self):
random.shuffle(self.data)
self.data = sorted(self.data, key=lambda x: x._max_sent_len())
self.data = sorted(self.data, key=lambda x: x._max_ans_len())
self.data = sorted(self.data, key=lambda x: x._doc_len())
def get_by_idxs(self, idxs):
return [self.data[idx] for idx in idxs]
def get_batches(self, batch_size, num_epochs=None, rand = True):
num_batches_per_epoch = int(math.ceil(self.num_examples / batch_size))
idxs = list(range(self.num_examples))
_grouped = lambda: list(grouper(idxs, batch_size))
if(rand):
grouped = lambda: random.sample(_grouped(), num_batches_per_epoch)
else:
grouped = _grouped
num_steps = num_epochs*num_batches_per_epoch
batch_idx_tuples = itertools.chain.from_iterable(grouped() for _ in range(num_epochs))
for i in range(num_steps):
batch_idxs = tuple(i for i in next(batch_idx_tuples) if i is not None)
batch_data = self.get_by_idxs(batch_idxs)
yield i,batch_data
class Params:
def __init__(self,n_embed,d_embed,vocab,inv_vocab,vsize,dim_hidden,embeddings):
self.n_embed = n_embed
self.d_embed = d_embed
self.vocab = vocab
self.inv_vocab = inv_vocab
self.vsize = vsize
self.dim_hidden = dim_hidden
self.embeddings = embeddings.astype(np.float64)
# +
from tensorflow.python.ops import array_ops, math_ops, nn_ops, variable_scope
def attention_decoder(decoder_inputs, initial_state, encoder_states, cell):
with variable_scope.variable_scope("attention_decoder") as scope:
batch_size = encoder_states.get_shape()[0].value
attn_size = encoder_states.get_shape()[2].value
encoder_states = tf.expand_dims(encoder_states, axis=2)
attention_vec_size = attn_size
W_h = variable_scope.get_variable("W_h", [1, 1, attn_size, attention_vec_size])
encoder_features = nn_ops.conv2d(encoder_states, W_h, [1, 1, 1, 1], "SAME")
v = variable_scope.get_variable("v", [attention_vec_size])
def attention(decoder_state):
with variable_scope.variable_scope("Attention"):
decoder_features = linear(decoder_state, attention_vec_size, True)
decoder_features = tf.expand_dims(tf.expand_dims(decoder_features, 1), 1)
e = math_ops.reduce_sum(v * math_ops.tanh(encoder_features + decoder_features), [2, 3])
attn_dist = nn_ops.softmax(e)
masked_sums = tf.reduce_sum(attn_dist, axis=1)
attn_dist = attn_dist / tf.reshape(masked_sums, [-1, 1])
context_vector = math_ops.reduce_sum(array_ops.reshape(attn_dist, [batch_size, -1, 1, 1]) * encoder_states, [1, 2]) # shape (batch_size, attn_size).
context_vector = array_ops.reshape(context_vector, [-1, attn_size])
return context_vector, attn_dist
def linear(args, output_size, bias, bias_start=0.0, scope=None):
total_arg_size = 0
shapes = [a.get_shape().as_list() for a in args]
for shape in shapes:
total_arg_size += shape[1]
with tf.variable_scope(scope or "Linear"):
matrix = tf.get_variable("Matrix", [total_arg_size, output_size])
if len(args) == 1:
res = tf.matmul(args[0], matrix)
else:
res = tf.matmul(tf.concat(axis=1, values=args), matrix)
if not bias:
return res
bias_term = tf.get_variable(
"Bias", [output_size], initializer=tf.constant_initializer(bias_start))
return res + bias_term
outputs = []
attn_dists = []
p_gens = []
state = initial_state
context_vector = array_ops.zeros([batch_size, attn_size])
context_vector.set_shape([None, attn_size])
for i, inp in enumerate(decoder_inputs):
tf.logging.info("Adding attention_decoder timestep %i of %i", i, len(decoder_inputs))
if i > 0:
variable_scope.get_variable_scope().reuse_variables()
input_size = inp.get_shape().with_rank(2)[1]
x = linear([inp] + [context_vector], input_size, True)
cell_output, state = cell(x, state)
context_vector, attn_dist = attention(state)
attn_dists.append(attn_dist)
with tf.variable_scope('calculate_pgen'):
p_gen = linear([context_vector, state.c, state.h, x], 1, True)
p_gen = tf.sigmoid(p_gen)
p_gens.append(p_gen)
with variable_scope.variable_scope("AttnOutputProjection"):
output = linear([cell_output] + [context_vector], cell.output_size, True)
outputs.append(output)
return outputs, state, attn_dists, p_gens
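# The `attention` closure above implements additive (Bahdanau-style) attention:
# scores e_i = v . tanh(W_h h_i + W_s s) are softmax-normalized into a distribution
# used to mix the encoder states into a context vector. A minimal NumPy sketch,
# illustrative only — the shapes and names below are made up, and the decoder
# projection (`linear`) is folded into `s`:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
h = rng.standard_normal((4, 3))    # 4 encoder states of size 3
s = rng.standard_normal(3)         # decoder state, already projected to size 3
W_h = rng.standard_normal((3, 3))
v = rng.standard_normal(3)

e = np.tanh(h @ W_h + s) @ v       # unnormalized scores, shape (4,)
attn = softmax(e)                  # attention distribution over encoder states
context = attn @ h                 # context vector, shape (3,)
```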
def _calc_final_dist(self, vocab_dists, attn_dists):
    # Adapted from the pointer-generator network (See et al.); intended to be bound
    # as a method of the model, and expects self.p_gens, self._vocab, self._max_art_oovs,
    # self._hps and self._enc_batch_extend_vocab to be available on it.
    with tf.variable_scope('final_distribution'):
vocab_dists = [p_gen * dist for (p_gen,dist) in zip(self.p_gens, vocab_dists)]
attn_dists = [(1-p_gen) * dist for (p_gen,dist) in zip(self.p_gens, attn_dists)]
# Concatenate some zeros to each vocabulary dist, to hold the probabilities for in-article OOV words
extended_vsize = self._vocab.size() + self._max_art_oovs # the maximum (over the batch) size of the extended vocabulary
extra_zeros = tf.zeros((self._hps.batch_size, self._max_art_oovs))
vocab_dists_extended = [tf.concat(axis=1, values=[dist, extra_zeros]) for dist in vocab_dists] # list length max_dec_steps of shape (batch_size, extended_vsize)
# Project the values in the attention distributions onto the appropriate entries in the final distributions
# This means that if a_i = 0.1 and the ith encoder word is w, and w has index 500 in the vocabulary, then we add 0.1 onto the 500th entry of the final distribution
# This is done for each decoder timestep.
# This is fiddly; we use tf.scatter_nd to do the projection
batch_nums = tf.range(0, limit=self._hps.batch_size) # shape (batch_size)
batch_nums = tf.expand_dims(batch_nums, 1) # shape (batch_size, 1)
attn_len = tf.shape(self._enc_batch_extend_vocab)[1] # number of states we attend over
batch_nums = tf.tile(batch_nums, [1, attn_len]) # shape (batch_size, attn_len)
indices = tf.stack( (batch_nums, self._enc_batch_extend_vocab), axis=2) # shape (batch_size, enc_t, 2)
shape = [self._hps.batch_size, extended_vsize]
attn_dists_projected = [tf.scatter_nd(indices, copy_dist, shape) for copy_dist in attn_dists] # list length max_dec_steps (batch_size, extended_vsize)
# Add the vocab distributions and the copy distributions together to get the final distributions
# final_dists is a list length max_dec_steps; each entry is a tensor shape (batch_size, extended_vsize) giving the final distribution for that decoder timestep
# Note that for decoder timesteps and examples corresponding to a [PAD] token, this is junk - ignore.
final_dists = [vocab_dist + copy_dist for (vocab_dist,copy_dist) in zip(vocab_dists_extended, attn_dists_projected)]
return final_dists
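# The comments above describe projecting attention mass onto extended-vocabulary
# indices with `tf.scatter_nd`, with repeated source ids accumulating. A toy NumPy
# sketch of that projection for a single example (the ids and weights are made up):

```python
import numpy as np

# One batch element attending over 4 source positions whose extended-vocab
# ids are [5, 2, 5, 0]; position weights sum to 1 like an attention distribution.
extended_vsize = 6
src_ids = np.array([5, 2, 5, 0])
attn_dist = np.array([0.1, 0.2, 0.3, 0.4])

copy_dist = np.zeros(extended_vsize)
np.add.at(copy_dist, src_ids, attn_dist)  # scatter-add the attention mass
print(copy_dist)                          # [0.4 0.  0.2 0.  0.  0.4]
```

# `np.add.at` performs the same unbuffered scatter-add that `tf.scatter_nd`
# does here: the two weights on id 5 accumulate to 0.4.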
# +
class StructureModel():
def __init__(self, config,params):
self.config = config
self.params = params
t_variables = {}
t_variables['keep_prob'] = tf.placeholder(tf.float32)
t_variables['batch_l'] = tf.placeholder(tf.int32)
#Placeholder for answers and abstracts
t_variables['token_idxs'] = tf.placeholder(tf.int32, [None, None, None, None])
t_variables['abstract_idxs'] = tf.placeholder(tf.int32, [None,None])
        #Storing the length of each hierarchy element
t_variables['sent_l'] = tf.placeholder(tf.int32, [None, None,None])
t_variables['ans_l'] = tf.placeholder(tf.int32, [None, None])
t_variables['doc_l'] = tf.placeholder(tf.int32, [None])
t_variables['abstract_l'] = tf.placeholder(tf.int32,[None])
#Storing upper limit of each element length
t_variables['max_sent_l'] = tf.placeholder(tf.int32)
t_variables['max_doc_l'] = tf.placeholder(tf.int32)
t_variables['max_ans_l'] = tf.placeholder(tf.int32)
t_variables['max_abstract_l'] = tf.placeholder(tf.int32)
#Masks to limit element sizes
t_variables['mask_tokens'] = tf.placeholder(tf.float32, [None, None, None,None])
t_variables['mask_sents'] = tf.placeholder(tf.float32, [None, None,None])
t_variables['mask_answers']= tf.placeholder(tf.float32,[None,None])
t_variables['mask_abstracts'] = tf.placeholder(tf.float32,[None,None])
#Parser Masks
t_variables['mask_parser_1'] = tf.placeholder(tf.float32, [None, None, None])
t_variables['mask_parser_2'] = tf.placeholder(tf.float32, [None, None, None])
t_variables['start_tokens'] = tf.placeholder(tf.int32,[None])
self.t_variables = t_variables
def get_feed_dict(self, batch):
batch_size = len(batch)
abstracts_l_matrix = np.zeros([batch_size],np.int32)
doc_l_matrix = np.zeros([batch_size], np.int32)
for i, instance in enumerate(batch):
n_ans = len(instance.token_idxs)
n_words = len(instance.abstract_idxs)
doc_l_matrix[i] = n_ans
abstracts_l_matrix[i] = n_words
max_doc_l = np.max(doc_l_matrix)
        max_ans_l = max(len(ans) for doc in batch for ans in doc.token_idxs)
        max_sent_l = max(len(sent) for doc in batch for ans in doc.token_idxs for sent in ans)
max_abstract_l = np.max(abstracts_l_matrix)
ans_l_matrix = np.zeros([batch_size, max_doc_l], np.int32)
sent_l_matrix = np.zeros([batch_size, max_doc_l, max_ans_l], np.int32)
token_idxs_matrix = np.zeros([batch_size, max_doc_l, max_ans_l, max_sent_l], np.int32)
abstract_idx_matrix = np.zeros([batch_size,max_abstract_l], np.int32)
mask_tokens_matrix = np.ones([batch_size, max_doc_l, max_ans_l, max_sent_l], np.float32)
mask_sents_matrix = np.ones([batch_size, max_doc_l, max_ans_l], np.float32)
mask_answers_matrix = np.ones([batch_size, max_doc_l],np.float32)
mask_abstact_matrix = np.ones([batch_size,max_abstract_l],np.float32)
for i, instance in enumerate(batch):
n_answers = len(instance.token_idxs)
abstract_ = instance.abstract_idxs
abstract_idx_matrix[i,:len(abstract_)] = np.asarray(abstract_)
mask_abstact_matrix[i,len(abstract_):] = 0
abstracts_l_matrix[i] = len(abstract_)
for j, ans in enumerate(instance.token_idxs):
for k, sent in enumerate(instance.token_idxs[j]):
token_idxs_matrix[i, j, k,:len(sent)] = np.asarray(sent)
mask_tokens_matrix[i, j, k,len(sent):] = 0
sent_l_matrix[i, j,k] = len(sent)
mask_sents_matrix[i,j,len(ans):]=0
ans_l_matrix[i,j] = len(ans)
mask_answers_matrix[i, n_answers:] = 0
mask_parser_1 = np.ones([batch_size, max_doc_l, max_doc_l], np.float32)
mask_parser_2 = np.ones([batch_size, max_doc_l, max_doc_l], np.float32)
mask_parser_1[:, :, 0] = 0
mask_parser_2[:, 0, :] = 0
feed_dict = {self.t_variables['token_idxs']: token_idxs_matrix,self.t_variables['abstract_idxs']: abstract_idx_matrix,
self.t_variables['sent_l']: sent_l_matrix,self.t_variables['ans_l']:ans_l_matrix,self.t_variables['doc_l']: doc_l_matrix,
self.t_variables['abstract_l']:abstracts_l_matrix,
self.t_variables['mask_tokens']: mask_tokens_matrix, self.t_variables['mask_sents']: mask_sents_matrix, self.t_variables['mask_answers']:mask_answers_matrix,
self.t_variables['mask_abstracts']: mask_abstact_matrix,
self.t_variables['max_sent_l']: max_sent_l,self.t_variables['max_ans_l']:max_ans_l, self.t_variables['max_doc_l']: max_doc_l,
self.t_variables['max_abstract_l']: max_abstract_l,
self.t_variables['mask_parser_1']: mask_parser_1, self.t_variables['mask_parser_2']: mask_parser_2,
self.t_variables['batch_l']: batch_size, self.t_variables['keep_prob']:self.config.keep_prob}
return feed_dict
def build(self):
with tf.variable_scope("Embeddings"):
#Initial embedding placeholders
self.embeddings = tf.get_variable("emb", [self.params.n_embed, self.params.d_embed], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
embeddings_root = tf.get_variable("emb_root", [1, 1, 2 * self.config.dim_sem], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
embeddings_root_a = tf.get_variable("emb_root_ans", [1, 1,2* self.config.dim_sem], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
embeddings_root_s = tf.get_variable("emb_root_s", [1, 1,2* self.config.dim_sem], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
with tf.variable_scope("Model"):
            #Weights and biases at pooling layers and final softmax for output (final layer might not be required) (semantic combination part)
w_comb = tf.get_variable("w_comb", [4 * self.config.dim_sem, 2 * self.config.dim_sem], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
b_comb = tf.get_variable("bias_comb", [2 * self.config.dim_sem], dtype=tf.float32, initializer=tf.constant_initializer())
w_comb_a = tf.get_variable("w_comb_a", [4 * self.config.dim_sem, 2 * self.config.dim_sem], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
b_comb_a = tf.get_variable("bias_comb_a", [2 * self.config.dim_sem], dtype=tf.float32, initializer=tf.constant_initializer())
w_comb_s = tf.get_variable("w_comb_s", [4 * self.config.dim_sem, 2 * self.config.dim_sem], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
b_comb_s = tf.get_variable("bias_comb_s", [2 * self.config.dim_sem], dtype=tf.float32, initializer=tf.constant_initializer())
w_softmax = tf.get_variable("w_softmax", [2 * self.config.dim_sem, self.config.dim_output], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
b_softmax = tf.get_variable("bias_softmax", [self.config.dim_output], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
with tf.variable_scope("Structure/doc"):
#Placeholders for hierarchical model at document level(structural part)
tf.get_variable("w_parser_p", [2 * self.config.dim_str, 2 * self.config.dim_str],
dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_c", [2 * self.config.dim_str, 2 * self.config.dim_str],
dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_s", [2 * self.config.dim_str, 2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("bias_parser_p", [2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("bias_parser_c", [2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_root", [2 * self.config.dim_str, 1], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
with tf.variable_scope("Structure/ans"):
            #Placeholders for hierarchical model at answer level (structural part)
tf.get_variable("w_parser_p", [2 * self.config.dim_str, 2 * self.config.dim_str],
dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_c", [2 * self.config.dim_str, 2 * self.config.dim_str],
dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("bias_parser_p", [2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("bias_parser_c", [2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_s", [2 * self.config.dim_str, 2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_root", [2 * self.config.dim_str, 1], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
with tf.variable_scope("Structure/sent"):
            #Placeholders for hierarchical model at sentence level (structural part)
tf.get_variable("w_parser_p", [2 * self.config.dim_str, 2 * self.config.dim_str],
dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_c", [2 * self.config.dim_str, 2 * self.config.dim_str],
dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("bias_parser_p", [2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("bias_parser_c", [2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_s", [2 * self.config.dim_str, 2 * self.config.dim_str], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
tf.get_variable("w_parser_root", [2 * self.config.dim_str, 1], dtype=tf.float32,
initializer=tf.contrib.layers.xavier_initializer())
#Variables of dimension batchsize passing length of each vector to architectures
sent_l = self.t_variables['sent_l']
ans_l = self.t_variables['ans_l']
doc_l = self.t_variables['doc_l']
abstract_l = self.t_variables['abstract_l']
#Maximum lengths of sentences, answers and documents to be processed
max_sent_l = self.t_variables['max_sent_l']
max_ans_l = self.t_variables['max_ans_l']
max_doc_l = self.t_variables['max_doc_l']
max_abstract_l = self.t_variables['max_abstract_l']
#batch size
batch_l = self.t_variables['batch_l']
#Creating embedding matrices for answers and abstracts corresponding to indexes
tokens_input = tf.nn.embedding_lookup(self.embeddings, self.t_variables['token_idxs'][:,:max_doc_l, :max_ans_l, :max_sent_l])
reference_input = tf.nn.embedding_lookup(self.embeddings,self.t_variables['abstract_idxs'][:,:max_abstract_l])
#Dropout on input
tokens_input = tf.nn.dropout(tokens_input, self.t_variables['keep_prob'])
#Masking inputs
mask_tokens = self.t_variables['mask_tokens'][:,:max_doc_l, :max_ans_l, :max_sent_l]
mask_sents = self.t_variables['mask_sents'][:, :max_doc_l,:max_ans_l]
mask_answers = self.t_variables['mask_answers'][:,:max_doc_l]
mask_abstract = self.t_variables['mask_abstracts'][:,:max_abstract_l]
[_, _, _, _, rnn_size] = tokens_input.get_shape().as_list()
tokens_input_do = tf.reshape(tokens_input, [batch_l * max_doc_l*max_ans_l, max_sent_l, rnn_size])
sent_l = tf.reshape(sent_l, [batch_l * max_doc_l* max_ans_l])
mask_tokens = tf.reshape(mask_tokens, [batch_l * max_doc_l*max_ans_l, -1])
#Word level input
tokens_output, token_encoder_states = dynamicBiRNN(tokens_input_do, sent_l, n_hidden=self.params.dim_hidden,
cell_type=self.config.rnn_cell, cell_name='Model/sent')
tokens_sem = tf.concat([tokens_output[0][:,:,:self.config.dim_sem], tokens_output[1][:,:,:self.config.dim_sem]], 2)
tokens_str = tf.concat([tokens_output[0][:,:,self.config.dim_sem:], tokens_output[1][:,:,self.config.dim_sem:]], 2)
temp1 = tf.zeros([batch_l * max_doc_l*max_ans_l, max_sent_l,1], tf.float32)
temp2 = tf.zeros([batch_l * max_doc_l*max_ans_l ,1,max_sent_l], tf.float32)
mask1 = tf.ones([batch_l * max_doc_l * max_ans_l, max_sent_l, max_sent_l-1], tf.float32)
mask2 = tf.ones([batch_l * max_doc_l * max_ans_l, max_sent_l-1, max_sent_l], tf.float32)
mask1 = tf.concat([temp1,mask1],2)
mask2 = tf.concat([temp2,mask2],1)
str_scores_s_ = get_structure('sent', tokens_str, max_sent_l, mask1, mask2) # batch_l, sent_l+1, sent_l
str_scores_s = tf.matrix_transpose(str_scores_s_) # soft parent
tokens_sem_root = tf.concat([tf.tile(embeddings_root_s, [batch_l * max_doc_l *max_ans_l, 1, 1]), tokens_sem], 1)
tokens_output_ = tf.matmul(str_scores_s, tokens_sem_root)
tokens_output = LReLu(tf.tensordot(tf.concat([tokens_sem, tokens_output_], 2), w_comb_s, [[2], [0]]) + b_comb_s)
if (self.config.sent_attention == 'sum'):
tokens_output = tokens_output * tf.expand_dims(mask_tokens,2)
tokens_output = tf.reduce_sum(tokens_output, 1)
elif (self.config.sent_attention == 'mean'):
tokens_output = tokens_output * tf.expand_dims(mask_tokens,2)
tokens_output = tf.reduce_sum(tokens_output, 1)/tf.expand_dims(tf.cast(sent_l,tf.float32),1)
elif (self.config.sent_attention == 'max'):
tokens_output = tokens_output + tf.expand_dims((mask_tokens-1)*999,2)
tokens_output = tf.reduce_max(tokens_output, 1)
#Sentence level RNN
sents_input = tf.reshape(tokens_output, [batch_l*max_doc_l, max_ans_l,2*self.config.dim_sem])
ans_l = tf.reshape(ans_l,[batch_l*max_doc_l])
mask_sents = tf.reshape(mask_sents,[batch_l*max_doc_l,-1])
sents_output, _ = dynamicBiRNN(sents_input, ans_l, n_hidden=self.params.dim_hidden, cell_type=self.config.rnn_cell, cell_name='Model/ans')
sents_sem = tf.concat([sents_output[0][:,:,:self.config.dim_sem], sents_output[1][:,:,:self.config.dim_sem]], 2)
sents_str = tf.concat([sents_output[0][:,:,self.config.dim_sem:], sents_output[1][:,:,self.config.dim_sem:]], 2)
temp1 = tf.zeros([batch_l * max_doc_l, max_ans_l, 1], tf.float32)
temp2 = tf.zeros([batch_l * max_doc_l, 1, max_ans_l], tf.float32)
mask1 = tf.ones([batch_l * max_doc_l , max_ans_l, max_ans_l-1], tf.float32)
mask2 = tf.ones([batch_l * max_doc_l , max_ans_l-1, max_ans_l], tf.float32)
mask1 = tf.concat([temp1,mask1],2)
mask2 = tf.concat([temp2,mask2],1)
str_scores_ = get_structure('ans', sents_str, max_ans_l, mask1,mask2) #batch_l, sent_l+1, sent_l
str_scores = tf.matrix_transpose(str_scores_) # soft parent
sents_sem_root = tf.concat([tf.tile(embeddings_root_a, [batch_l*max_doc_l, 1, 1]), sents_sem], 1)
sents_output_ = tf.matmul(str_scores, sents_sem_root)
sents_output = LReLu(tf.tensordot(tf.concat([sents_sem, sents_output_], 2), w_comb, [[2], [0]]) + b_comb)
if (self.config.doc_attention == 'sum'):
sents_output = sents_output * tf.expand_dims(mask_sents,2)
sents_output = tf.reduce_sum(sents_output, 1)
elif (self.config.doc_attention == 'mean'):
sents_output = sents_output * tf.expand_dims(mask_sents,2)
sents_output = tf.reduce_sum(sents_output, 1)/tf.expand_dims(tf.cast(ans_l,tf.float32),1)
elif (self.config.doc_attention == 'max'):
sents_output = sents_output + tf.expand_dims((mask_sents-1)*999,2)
sents_output = tf.reduce_max(sents_output, 1)
#Answer level RNN
ans_input = tf.reshape(sents_output, [batch_l, max_doc_l,2*self.config.dim_sem])
ans_output, _ = dynamicBiRNN(ans_input, doc_l, n_hidden=self.params.dim_hidden, cell_type=self.config.rnn_cell, cell_name='Model/doc')
ans_sem = tf.concat([ans_output[0][:,:,:self.config.dim_sem], ans_output[1][:,:,:self.config.dim_sem]], 2)
ans_str = tf.concat([ans_output[0][:,:,self.config.dim_sem:], ans_output[1][:,:,self.config.dim_sem:]], 2)
str_scores_ = get_structure('doc', ans_str, max_doc_l, self.t_variables['mask_parser_1'], self.t_variables['mask_parser_2']) #batch_l, sent_l+1, sent_l
str_scores = tf.matrix_transpose(str_scores_) # soft parent
ans_sem_root = tf.concat([tf.tile(embeddings_root, [batch_l, 1, 1]), ans_sem], 1)
ans_output_ = tf.matmul(str_scores, ans_sem_root)
ans_output = LReLu(tf.tensordot(tf.concat([ans_sem, ans_output_], 2), w_comb, [[2], [0]]) + b_comb)
if (self.config.ans_attention == 'sum'):
ans_output = ans_output * tf.expand_dims(mask_answers,2)
            ans_output = tf.reduce_sum(ans_output, 1)
elif (self.config.ans_attention == 'mean'):
ans_output = ans_output * tf.expand_dims(mask_answers,2)
ans_output = tf.reduce_sum(ans_output, 1)/tf.expand_dims(tf.cast(doc_l,tf.float32),1)
elif (self.config.ans_attention == 'max'):
ans_output = ans_output + tf.expand_dims((mask_answers-1)*999,2)
ans_output = tf.reduce_max(ans_output, 1)
encoder_output = ans_output
tgt_vocab_size = self.params.vsize
learning_rate = self.config.lr
decoder_cell = tf.nn.rnn_cell.BasicLSTMCell(self.config.dim_output)
lstm_init = tf.contrib.rnn.LSTMStateTuple(encoder_output,encoder_output)
projection_layer = tf.layers.Dense(tgt_vocab_size, use_bias=False)
# Attention Decoder Call start
        decoder_outputs, _dec_out_state, attn_dists, p_gens = attention_decoder(reference_input, lstm_init, token_encoder_states, decoder_cell)
        self.attn_dists, self.p_gens = attn_dists, p_gens  # stored for _calc_final_dist below
with tf.variable_scope('output_projection'):
            w = tf.get_variable('w', [self.config.dim_output, tgt_vocab_size], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer())
            w_t = tf.transpose(w)
            v = tf.get_variable('v', [tgt_vocab_size], dtype=tf.float32, initializer=tf.contrib.layers.xavier_initializer())
vocab_scores = [] # vocab_scores is the vocabulary distribution before applying softmax. Each entry on the list corresponds to one decoder step
for i,output in enumerate(decoder_outputs):
if i > 0:
tf.get_variable_scope().reuse_variables()
vocab_scores.append(tf.nn.xw_plus_b(output, w, v)) # apply the linear layer
vocab_dists = [tf.nn.softmax(s) for s in vocab_scores] # The vocabulary distributions. List length max_dec_steps of (batch_size, vsize) arrays. The words are in the order they appear in the vocabulary file.
# For pointer-generator model, calc final distribution from copy distribution and vocabulary distribution
final_dists = self._calc_final_dist(vocab_dists, self.attn_dists)
# Attention Decoder Call end
#training
if(config.mode == 'train'):
training_helper = tf.contrib.seq2seq.TrainingHelper(reference_input, abstract_l, time_major=False)
decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, training_helper,initial_state=lstm_init,output_layer=projection_layer)
outputs, states, seq_l = tf.contrib.seq2seq.dynamic_decode(decoder)
training_logits = outputs.rnn_output
#inference
elif (config.mode == 'infer'):
embeddings = np.float32(self.params.embeddings)
start_tokens = tf.tile(tf.constant([self.params.inv_vocab['<GO>']], dtype=tf.int32), [batch_l], name='start_tokens')
if(config.inference_mode == 'greedy'):
inference_helper = tf.contrib.seq2seq.GreedyEmbeddingHelper(embeddings,start_tokens,self.params.inv_vocab['<EOS>'])
inference_decoder = tf.contrib.seq2seq.BasicDecoder(decoder_cell, inference_helper, lstm_init,output_layer=projection_layer)
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder, maximum_iterations=self.config.max_summary_length)
inference_logits = outputs.sample_id
elif(config.inference_mode == 'beam'):
beam_decoder_initial_state = tf.contrib.seq2seq.tile_batch(lstm_init, multiplier=self.config.beam_width)
inference_decoder = tf.contrib.seq2seq.BeamSearchDecoder(cell=decoder_cell,embedding=embeddings,start_tokens=start_tokens,end_token=self.params.inv_vocab['<EOS>'],
initial_state=beam_decoder_initial_state,beam_width=self.config.beam_width,output_layer=projection_layer,
length_penalty_weight=0.0)
outputs, _, _ = tf.contrib.seq2seq.dynamic_decode(inference_decoder,impute_finished = False, maximum_iterations=self.config.max_summary_length)
inference_logits = outputs.predicted_ids
        # NB: training_logits (and hence this loss) is only defined when config.mode == 'train'
        loss = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=self.t_variables['abstract_idxs'], logits=training_logits)
target_weights = tf.sequence_mask(abstract_l, max_abstract_l, dtype=tf.float32)
reduced_loss = tf.reduce_sum(loss*target_weights)/tf.to_float(batch_l)
global_step = tf.Variable(0, name='global_step', trainable=False)
params = tf.trainable_variables()
        gradients = tf.gradients(reduced_loss, params)
clipped_gradients, _ = tf.clip_by_global_norm(gradients, 5.0)
optimizer = tf.train.AdamOptimizer(learning_rate,epsilon=0.1)
update_step = optimizer.apply_gradients(zip(clipped_gradients, params),global_step=global_step)
self.final_output = training_logits
self.inference_logits = inference_logits
self.loss = reduced_loss
        self.opt = update_step  # apply the gradient-clipped update computed above
# +
#Main function begins here
config = flags.FLAGS
remaining_args = flags.FLAGS([sys.argv[0]] + [flag for flag in sys.argv if flag.startswith("--")])
assert(remaining_args == [sys.argv[0]])
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = str(config.gpu)
hash = random.getrandbits(32)
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
ah = logging.FileHandler(str(hash)+'.log')
ah.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(message)s')
ah.setFormatter(formatter)
logger.addHandler(ah)
gc.disable()
train, dev, test, embeddings, vocab = pickle.load(open(config.data_file,'rb'))
gc.enable()
print('Data loaded successfully')
trainset, devset, testset = DataSet(train), DataSet(dev), DataSet(test)
vocab = dict([(v.index,k) for k,v in vocab.items()])
trainset.sort()
devset.sort()
testset.sort()
train_batches = trainset.get_batches(config.batch_size, config.epochs, rand=True)
dev_batches = devset.get_batches(config.batch_size, 1, rand=False)
test_batches = testset.get_batches(config.batch_size, 1, rand=False)
dev_batches = [i for i in dev_batches]
test_batches = [i for i in test_batches]
num_examples = len(train)
embedding_matrix = embeddings
n_embed,d_embed = embedding_matrix.shape
vsize = len(vocab)
inv_vocab = {v: k for k, v in vocab.items()}
dim_hidden = config.dim_sem+config.dim_str
params = Params(n_embed,d_embed,vocab,inv_vocab,vsize,dim_hidden,embeddings)
# -
model = StructureModel(config,params)
model.build()
# +
tfconfig = tf.ConfigProto()
tfconfig.gpu_options.allow_growth = True
saver = tf.train.Saver()
#Checkpoint step to restore from / resume at
num = 36000
#training mode
if(config.mode == 'train'):
num_batches_per_epoch = int(num_examples / config.batch_size)
num_steps = config.epochs * num_batches_per_epoch
with tf.Session(config=tfconfig) as sess:
gvi = tf.global_variables_initializer()
sess.run(gvi)
sess.run(model.embeddings.assign(embedding_matrix.astype(np.float32)))
model_name = 'Checkpoints/5/model'+str(num)+'.ckpt'
saver.restore(sess, model_name)
loss = 0
for ct, batch in tqdm.tqdm(train_batches, total=num_steps):
feed_dict = model.get_feed_dict(batch)
outputs,_,_loss = sess.run([model.final_output, model.opt, model.loss], feed_dict=feed_dict)
loss+=_loss
if(ct%config.log_period==0):
                print('Loss at step', ct, 'is:', loss / config.log_period)
model_name = 'Checkpoints/5/model'+str(num+ct)+'.ckpt'
save_path = saver.save(sess, model_name)
loss = 0
elif(config.mode=='infer'):
#infer mode
with tf.Session(config=tfconfig) as sess:
gvi = tf.global_variables_initializer()
sess.run(gvi)
sess.run(model.embeddings.assign(embedding_matrix.astype(np.float32)))
model_name = 'Checkpoints/5/model'+str(num)+'.ckpt'
saver.restore(sess, model_name)
loss_ = 0
for ct, batch in tqdm.tqdm(test_batches,total=100):
feed_dict = model.get_feed_dict(batch)
            answer_logits, loss = sess.run([model.inference_logits, model.loss], feed_dict=feed_dict)
# print(answer_logits)
loss_ +=loss
if(ct%config.log_period==0):
print(loss_/config.log_period)
loss_ = 0
# print(' Summary: {}'.format(" ".join([vocab[j] for i in answer_logits for j in i])))
# End of file: CQA_Summarization.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [conda env:pytorch]
# language: python
# name: conda-env-pytorch-py
# ---
# ## Libraries
# +
# %matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from collections import deque
import skimage.measure
import numpy as np
import gym
from gym import wrappers
from torch.autograd import Variable
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# +
#torch.set_num_threads(2)
# -
torch.get_num_threads()
# ## Get Data For Testing
nb_frames = 1
env = gym.make('Breakout-v0')
env.reset()
for t in range(nb_frames):
env.render()
action = env.action_space.sample() # take a random action
observation, reward, done, info = env.step(action)
frame = observation
env.close()
frame.shape
# # Classes
# ## Preprocessing
class Preprocess:
def __init__(self, data):
self.data_ = data
def crop(self, h_start=30, h_end=194):
self.data_ = self.data_[h_start:h_end, ::]
def rgb2gray(self):
self.data_ = np.dot(self.data_, [0.2989, 0.5870, 0.1140])
def downsample(self, kernel=2):
self.data_ = skimage.measure.block_reduce(self.data_, (kernel, kernel), np.max)
# ### Preprocessing Testing
frame.shape
# show raw data
plt.imshow(frame, cmap = plt.get_cmap('gray'))
plt.show()
x = Preprocess(frame)
# crop top and bottom
x.crop()
plt.imshow(x.data_, cmap = plt.get_cmap('gray'))
plt.show()
x.data_.shape
# convert to grayscale
x.rgb2gray()
x.data_.shape
plt.imshow(x.data_, cmap = plt.get_cmap('gray'))
plt.show()
# downsample w/max pooling
x.downsample()
plt.imshow(x.data_, cmap = plt.get_cmap('gray'))
plt.show()
x.data_.shape
# ## Experience Replay
class ExperienceReplay:
    # NB: dq_ is a class attribute, so it is shared by every instance (the DRL
    # class below relies on this); its maxlen is fixed at 32 regardless of capacity_.
    dq_ = deque(maxlen=32)
def __init__(self, C, experience_tuple):
self.capacity_ = C
self.exp_tuple_ = experience_tuple
self.dq_.append(experience_tuple)
#self.seq_init = self.exp_tuple[0]
#self.action = self.exp_tuple[1]
#self.reward = self.exp_tuple[2]
#self.seq_update = self.exp_tuple[3]
#self.gamestatus = self.exp_tuple[4]
def add_exp(self, experience_tuple):
'''add new experience'''
self.dq_.append(experience_tuple)
    def sample(self, n):
        '''sample up to n experiences uniformly without replacement'''
        nb_items = len(self.dq_)
        idx = np.random.choice(nb_items, size=min(n, nb_items), replace=False)
        return [self.dq_[i] for i in idx]
# ### Experience Replay Testing
tmp = (('init', 'action', 'reward', 'out', 'status'))
tmp
er = ExperienceReplay(10, tmp)
er.dq_
er.add_exp( ('init2', 'action2', 'reward2', 'out2', 'status2') )
er.add_exp( ('init4', 'action2', 'reward2', 'init3', 'status4') )
er.dq_
# ## Epsilon Generator
class EpsilonGenerator():
def __init__(self, start, stop, steps):
self.epsilon_ = start
self.stop_ = stop
self.steps_ = steps
self.step_size_ = (self.epsilon_ - stop) / (self.steps_)
self.count_ = 0
    def epsilon_update(self):
        '''linearly anneal epsilon from start towards stop; returns the current value'''
        if self.count_ == 0:
            pass  # keep the starting epsilon on the first call
        elif self.epsilon_ >= self.stop_ and self.count_ < self.steps_:
            self.epsilon_ -= self.step_size_
        else:
            self.epsilon_ = self.stop_
        self.count_ += 1
        return self.epsilon_
# ### EpsilonGenerator Testing
eg = EpsilonGenerator(1, 0.1, 10)
eg.epsilon_
eg.steps_
for i in range(20):
eg.epsilon_update()
print(eg.count_, eg.epsilon_)
# ## CNN Architecture
class CNN(nn.Module):
def __init__(self,):
super(CNN, self).__init__()
self.conv1 = nn.Conv2d(1, 16, 8, 4) ## Conv2d(nChannels, filters, kernel, stride)
self.conv2 = nn.Conv2d(16, 32, 4, 4)
self.fc1 = nn.Linear(32 * 4 * 4, 256)
self.fc2 = nn.Linear(256, 4)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.relu(self.conv2(x))
x = x.view(-1, 32 * 4 * 4) ## reshape
x = F.relu(self.fc1(x))
x = self.fc2(x)
return x
# ## CNN Setup
cnn = CNN()
learning_rate = 0.01
criterion = nn.MSELoss()
optimizer = optim.RMSprop(cnn.parameters(),
lr=learning_rate,
alpha=0.99,
eps=1e-08,
weight_decay=0,
momentum=0,
centered=False)
print(cnn.conv1)
print(cnn.conv2)
print(cnn.fc1)
print(cnn.fc2)
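# Quick sanity check of the layer sizes above (assuming the 82x80 frames produced
# by the `Preprocess` pipeline: crop to 164x160, then 2x2 max-pool). For a valid
# (no-padding) convolution the output size is floor((n - k) / s) + 1:

```python
def conv_out(n, k, s):
    # output size of a valid (no-padding) convolution with size n, kernel k, stride s
    return (n - k) // s + 1

h1, w1 = conv_out(82, 8, 4), conv_out(80, 8, 4)  # conv1 -> 19 x 19
h2, w2 = conv_out(h1, 4, 4), conv_out(w1, 4, 4)  # conv2 -> 4 x 4
print(h1, w1, h2, w2)  # 19 19 4 4, matching the 32 * 4 * 4 flatten fed into fc1
```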
# ## Deep Reinforcement Learning
# +
class DRL(Preprocess, ExperienceReplay):
def __init__(self, data):
self.data_ = data
# -
drl = DRL(frame)
drl.dq_
drl.dq_.clear()
drl.data_.shape
drl.crop()
drl.data_.shape
drl.rgb2gray()
drl.data_.shape
drl.downsample()
drl.data_.shape
drl.dq_
drl.add_exp(('init2', 'action2', 'reward2', 'out2', 'status2'))
drl.dq_
drl.sample(3)
drl.add_exp((x.data_, 0, 1, x.data_, 'nonterminal'))
# ## DQN
# Algorithm: Deep Q-learning with experience replay (Mnih et al., 2013)
#
#     Initialize replay memory D to capacity N
#     Initialize action-value function Q with random weights
#     for episode = 1, M do
#         Initialize sequence s_1 = {x_1} and preprocessed sequence phi_1 = phi(s_1)
#         for t = 1, T do
#             With probability epsilon select a random action a_t,
#             otherwise select a_t = argmax_a Q(phi(s_t), a; theta)
#             Execute action a_t in the emulator and observe reward r_t and image x_{t+1}
#             Set s_{t+1} = s_t, a_t, x_{t+1} and preprocess phi_{t+1} = phi(s_{t+1})
#             Store transition (phi_t, a_t, r_t, phi_{t+1}) in D
#             Sample a random minibatch of transitions (phi_j, a_j, r_j, phi_{j+1}) from D
#             Set y_j = r_j                                           for terminal phi_{j+1}
#                 y_j = r_j + gamma * max_a' Q(phi_{j+1}, a'; theta)  for non-terminal phi_{j+1}
#             Perform a gradient descent step on (y_j - Q(phi_j, a_j; theta))^2
#         end for
#     end for
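The target `y_j` above is the standard Q-learning (Bellman) target; a minimal sketch of that branch with made-up reward and Q-value numbers:

```python
def q_target(reward, terminal, max_next_q, discount=0.9):
    """Bellman target: plain reward at terminal states, bootstrapped otherwise."""
    return reward if terminal else reward + discount * max_next_q

print(q_target(1.0, True, 5.0))   # terminal: just the reward
print(q_target(1.0, False, 5.0))  # non-terminal: reward + discounted max next Q
```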
# #### Atari Emulator
env = gym.make('Breakout-v0')
env = wrappers.Monitor(env,
directory='/Users/davidziganto/Data_Science/PyTorch/OpenAI_vids/breakout-experiment-1',
video_callable=None, ## takes video when episode number is perfect cube
force=True,
resume=False,
write_upon_reset=False,
uid=None)
# #### Game Variables
nb_games = 34 ## number of games to play
time_steps = 2000 ## max number of time steps per game
# #### Reinforcement Learning Variables & Setup
from collections import deque
anneal_tracker = 0 ## tally of how many total iterations have passed
anneal_stop = int(1e4) ## nb of steps until annealing stops
gen_epsilon = epsilon_generator(start=1, stop=0.1, num=anneal_stop) ## Prob(choosing random action) w/linear annealing
N = int(1e6) ## experience replay capacity
D = deque() ## experience replay memory
discount = 0.9 ## on future rewards
# #### Training
# +
for episode in range(nb_games):
## reset environment
env.reset()
## check nb times NN chooses action
checker = 0
## setup to check loss & score per episode
running_loss = 0.0
score = 0
## empty list to capture mini-batch of frames
frames = []
## default status to differentiate rewards (aka targets)
gamestatus = 'nonterminal'
## raw frame of game start
raw_frame = env.reset()
## preprocessed initial frame
seq_init = preprocess(raw_frame)
#print epsisode
#print('episode:', episode, 'checker:', checker)
for t in range(time_steps):
# show game in real-time
#env.render()
# linearly anneal epsilon (prob of selecting random action)
if anneal_tracker <= anneal_stop:
epsilon = next(gen_epsilon)
##print('epsilon:', epsilon)
anneal_tracker += 1
# take agent-based action every 4 time steps; otherwise push action forward w/out agent computing
if t%4 == 0:
# feedforward for agent-based action
sample_frame = Variable(torch.Tensor(seq_init).unsqueeze(0).unsqueeze(0)) ## setup for CNN (unsqueeze to fake 4D tensor since single observation)
action_decision = cnn(sample_frame) ## return optimal action
##print(action_decision)
# take epsilon-greedy action (prob(epsilon) = random; else argmax(action))
#action = env.action_space.sample() if np.random.binomial(n=1, p=epsilon, size=1) else action_decision.data.max(1)[1][0]
if np.random.binomial(n=1, p=epsilon, size=1):
action = env.action_space.sample()
else:
checker += 1
action = action_decision.data.max(1)[1][0]
#print('action =', action)
# gather feedback from emulator
observation, reward, done, info = env.step(action)
score += reward
# preprocess new observation post action
seq_update = preprocess(observation)
# mini-batch setup
if t%4 == 3 or done:
##print(t)
frames.append(seq_update)
## makes arrays callable to feed into CNN
frameTensor = np.stack(frames)
## convert Numpy Array --> PyTorch Tensor --> PyTorch Variable
frameTensor = Variable(torch.Tensor(frameTensor))
##print('t:', t, '\n', frameTensor) ## should be 4x82x80 unless 'done'
## clear mini-batch
frames = []
else:
frames.append(seq_update)
# stop if out of lives
if done:
gamestatus = 'terminal'
# update experience replay
experience_replay(C=N, DQ = D, seq_init=seq_init,
action=action, reward=reward,
seq_update=seq_update, gamestatus=gamestatus)
##print('*step: ', t, '| gamestatus: ', gamestatus, '| len(D):', len(D),
## '| init != update:', (D[len(D)-1][0] != D[len(D)-1][3]).sum())
print('steps:', t, '| episode:', episode, '| score:', score, '| checker:', checker)
break
else:
# update experience replay
experience_replay(C=N, DQ = D, seq_init=seq_init,
action=action, reward=reward,
seq_update=seq_update, gamestatus=gamestatus)
##print('step:', t, '| gamestatus:', gamestatus, '| action:', action, '| len(D):', len(D),
## '| init != update:',(D[len(D)-1][0] != D[len(D)-1][3]).sum())
# mini-batch sample of experience replay for ConvNet
D_size = len(D)
idx = np.random.choice(range(D_size), size=min(D_size, 32), replace=False)
## empty list to capture mini-batch of D
minibatch_D = []
# calculate target
for i in idx:
minibatch_D.append(D[i])
#print('step: ', i, 'gamestatus: ', D[4], 'reward: ', D[2])
# create dataset from the sampled minibatch indices (not the whole replay memory)
data_list = [D[i][0] for i in idx]
data = Variable(torch.Tensor(data_list).unsqueeze(1))
##print(data)
# create target variable: y = r for terminal states, y = r + discount * max_a Q(s', a) otherwise
target_list = []
for i in idx:
if D[i][4] == 'terminal':
target_list.append(D[i][2])
else:
target_list.append(D[i][2] + discount *
cnn(Variable(torch.Tensor(D[i][3]).unsqueeze(0).unsqueeze(0))).data.max(1)[0][0]) ## max(1)[0] = max Q-value; [1] would be the argmax index
targets = Variable(torch.Tensor(target_list))
##print(targets)
# zero the parameter gradients
optimizer.zero_grad()
# feedforward pass (prototype shortcut: regresses the max Q-value; strictly, DQN regresses Q(s, a_taken) gathered by the stored action)
outputs = cnn(data).max(1)[0]
##print(outputs)
# calculate loss
loss = criterion(outputs, targets)
#print('loss:', loss)
# backprop
loss.backward()
# update network weights
optimizer.step()
# set new observation as initial sequence
seq_init = seq_update
# print statistics
#running_loss += loss.data[0]
#if t % 200 == 199: # print every 200 mini-batches
# print('[%d, %5d] loss: %.3f' % (episode + 1, t + 1, running_loss / 200))
# running_loss = 0.0
env.close()
# -
# check how many times DQN chose action as opposed to random action
checker
# ensure minibatch of experience replay doesn't exceed 32
len(minibatch_D)
# #### Save Model
torch.save(cnn.state_dict(), '/Users/davidziganto/Data_Science/PyTorch/DL_models/DL_RL_Atari_breakout_500e_10000t')
# #### Load Model
# +
#cnn = CNN()
#cnn.load_state_dict(torch.load('/Users/davidziganto/Data_Science/PyTorch/DL_models/DL_RL_Atari_breakout'))
# -
# # EXAMPLE
#
# ### Get Frames
frames = []
rewards = []
nb_frames = 500
env = gym.make('Breakout-v0')
env.reset()
for t in range(nb_frames):
env.render()
action = env.action_space.sample() # take a random action
observation, reward, done, info = env.step(action)
frames.append(preprocess(observation))
if t%4 == 3 or done:
frameTensor = np.stack(frames)
minibatch = Variable(torch.Tensor(frameTensor)) ## convert to torch Variable data type
print('t:', t, '\n', minibatch)
frames = []
if done:
break
# ### Show Preprocessed Data Frames
for frame in frames:
plt.imshow(frame, cmap = plt.get_cmap('gray'))
plt.show()
# ### Frame Dimensions
frame.shape
# # EXPERIMENTAL
import torch
import torchvision
import torchvision.transforms as transforms
# +
transform = transforms.Compose(
[transforms.ToTensor(),
transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
shuffle=True, num_workers=2)
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
# +
from torch.autograd import Variable
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(3, 6, 5)
self.pool = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(6, 16, 5)
self.fc1 = nn.Linear(16 * 5 * 5, 120)
self.fc2 = nn.Linear(120, 84)
self.fc3 = nn.Linear(84, 10)
def forward(self, x):
x = self.pool(F.relu(self.conv1(x)))
x = self.pool(F.relu(self.conv2(x)))
x = x.view(-1, 16 * 5 * 5)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return x
net = Net()
# -
net.conv1
net.conv2
net.fc1
net.fc2
net.fc3
# +
import torch.optim as optim
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
# +
for epoch in range(2): # loop over the dataset multiple times
running_loss = 0.0
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
# wrap them in Variable
inputs, labels = Variable(inputs), Variable(labels)
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
# print statistics
running_loss += loss.data[0]
if i % 2000 == 1999: # print every 2000 mini-batches
print('[%d, %5d] loss: %.3f' %
(epoch + 1, i + 1, running_loss / 2000))
running_loss = 0.0
print('Finished Training')
# -
outputs
labels
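A natural follow-up after training is test-set accuracy. The usual PyTorch tutorial does this with `torch.max` over a test loader; the core argmax-vs-label comparison can be sketched framework-free (the logits and labels here are hypothetical):

```python
def accuracy(logits, labels):
    """Fraction of rows whose argmax index matches the label."""
    preds = [row.index(max(row)) for row in logits]
    return sum(p == t for p, t in zip(preds, labels)) / len(labels)

print(accuracy([[0.1, 2.0, -1.0], [1.5, 0.3, 0.2]], [1, 0]))  # both predictions correct
```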
# # Legacy
# +
#1
# Atari emulator
env = gym.make('Breakout-v0')
# game variables
nb_games = 5 ## number of games to play
time_steps = 500 ## max number of time steps per game
# experience replay variables
N = int(1e6) ## capacity
D = deque() ## deque object
# RL vars
anneal_tracker = 0 ## tally of how many total iterations have passed
anneal_stop = 1000 ## nb of steps until annealing stops
gen_epsilon = epsilon_generator(start=1, stop=0.1, num=anneal_stop) ## Prob(choosing random action) w/linear annealing
discount = 0.9 ## on future rewards
# CNN setup
cnn = CNN()
learning_rate = 0.01
criterion = nn.MSELoss()
optimizer = optim.RMSprop(cnn.parameters(),
lr=learning_rate,
alpha=0.99,
eps=1e-08,
weight_decay=0,
momentum=0,
centered=False)
# algorithm
for episode in range(nb_games):
gamestatus = 'nonterminal'
raw_frame = env.reset() ## raw initial frame
seq_init = preprocess(raw_frame) ## preprocessed initial sequence
for t in range(time_steps):
# show game in real-time
env.render()
# linearly anneal epsilon (prob of selecting random action)
if anneal_tracker <= anneal_stop:
epsilon = next(gen_epsilon)
print('epsilon:', epsilon)
anneal_tracker += 1
# take agent-based action every 4 time steps; otherwise push action forward w/out agent computing
if t%4 == 0:
action = env.action_space.sample() # take a random action
#action = env.action_space.sample() if np.random.binomial(n=1, p=epsilon, size=1) else action w/max Q-value
#print('action =', action)
# feedback from emulator
observation, reward, done, info = env.step(action)
# preprocess new observation after action
seq_update = preprocess(observation)
# stop if out of lives
if done:
gamestatus = 'terminal'
# update experience replay
experience_replay(C=N, DQ = D, seq_init=seq_init,
action=action, reward=reward,
seq_update=seq_update, gamestatus=gamestatus)
print('*step: ', t, '| gamestatus: ', gamestatus, '| len(D):', len(D),
'| init != update:', (D[len(D)-1][0] != D[len(D)-1][3]).sum())
break
else:
# update experience replay
experience_replay(C=N, DQ = D, seq_init=seq_init,
action=action, reward=reward,
seq_update=seq_update, gamestatus=gamestatus)
print('step:', t, '| gamestatus:', gamestatus, '| len(D):', len(D),
'| init != update:',(D[len(D)-1][0] != D[len(D)-1][3]).sum())
# mini-batch sample of experience replay for ConvNet
D_size = len(D)
idx = np.random.choice(range(D_size), size=min(D_size, 32), replace=False)
# calculate target
for i in idx:
if D[i][4] == 'terminal':
target = D[i][2] + 100
else:
#target = sample[i][2] + discount*(to be completed)
target = D[i][2]
#print('step: ', i, 'gamestatus: ', D[4], 'reward: ', D[2])
# SGD update
#update weights
# set new observation as initial sequence
seq_init = seq_update
#print('final target =', target)
#print( (D[len(D)-1][0] != D[len(D)-1][3]).sum())
#print(D)
# +
# 2
for episode in range(nb_games):
## setup to check loss per episode
running_loss = 0.0
## empty list to capture mini-batch of frames
frames = []
## default status to differentiate rewards (aka targets)
gamestatus = 'nonterminal'
## raw frame of game start
raw_frame = env.reset()
## preprocessed initial frame
seq_init = preprocess(raw_frame)
for t in range(time_steps):
# show game in real-time
env.render()
# linearly anneal epsilon (prob of selecting random action)
if anneal_tracker <= anneal_stop:
epsilon = next(gen_epsilon)
print('epsilon:', epsilon)
anneal_tracker += 1
# take agent-based action every 4 time steps; otherwise push action forward w/out agent computing
if t%4 == 0:
# feedforward for agent-based action
action_decision = Variable(torch.Tensor(seq_init)) ## setup for CNN
action_decision = cnn(action_decision.unsqueeze(0)) ## return optimal action
# take epsilon-greedy action (prob(epsilon) = random; else argmax(action))
action = env.action_space.sample() if np.random.binomial(n=1, p=epsilon, size=1) else action_decision.data.max(1)[1][0] ## argmax action index, not the max Q-value
#print('action =', action)
# gather feedback from emulator
observation, reward, done, info = env.step(action)
# preprocess new observation post action
seq_update = preprocess(observation)
# mini-batch setup
if t%4 == 3 or done:
## makes arrays callable to feed into CNN
frameTensor = np.stack(frames)
## convert Numpy Array --> PyTorch Tensor --> PyTorch Variable
frameTensor = Variable(torch.Tensor(frameTensor))
print('t:', t, '\n', frameTensor.shape) ## should be 4x82x80 unless 'done'
## clear mini-batch
frames = []
else:
frames.append(seq_update)
# stop if out of lives
if done:
gamestatus = 'terminal'
# update experience replay
experience_replay(C=N, DQ = D, seq_init=seq_init,
action=action, reward=reward,
seq_update=seq_update, gamestatus=gamestatus)
print('*step: ', t, '| gamestatus: ', gamestatus, '| len(D):', len(D),
'| init != update:', (D[len(D)-1][0] != D[len(D)-1][3]).sum())
break
else:
# update experience replay
experience_replay(C=N, DQ = D, seq_init=seq_init,
action=action, reward=reward,
seq_update=seq_update, gamestatus=gamestatus)
print('step:', t, '| gamestatus:', gamestatus, '| len(D):', len(D),
'| init != update:',(D[len(D)-1][0] != D[len(D)-1][3]).sum())
# mini-batch sample of experience replay for ConvNet
D_size = len(D)
idx = np.random.choice(range(D_size), size=min(D_size, 32), replace=False)
# calculate target
for i in idx:
if D[i][4] == 'terminal':
#target = D[i][2] + (discount * )
target = D[i][2]
else:
target = D[i][2]
#print('step: ', i, 'gamestatus: ', D[4], 'reward: ', D[2])
# zero the parameter gradients
optimizer.zero_grad()
# feedforward
outputs = cnn(frameTensor)
# calculate loss
loss = criterion(outputs, targets)
print('loss:', loss)
# backprop
loss.backward()
# update network weights
optimizer.step()
# set new observation as initial sequence
seq_init = seq_update
# print statistics
running_loss += loss.data[0]
if t % 100 == 99: # print every 100 mini-batches
print('[%d, %5d] loss: %.3f' %
(episode + 1, t + 1, running_loss / 100))
running_loss = 0.0
#print('final target =', target)
#print( (D[len(D)-1][0] != D[len(D)-1][3]).sum())
#print(D)
# --- end of DQN_prototype_v2.ipynb ---
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Part 1
# #! pip install nltk
import nltk
nltk.download('stopwords')
stopwords = nltk.corpus.stopwords.words('portuguese')
print(stopwords)
# +
from nltk.tokenize import word_tokenize
nltk.download('punkt')
frase = 'Eu dirijo devagar porque nós queremos ver os animais.'
tokens = word_tokenize(frase)
print(tokens)
# -
for t in tokens:
if t not in stopwords:
print(t)
# +
# Part 2
from sklearn.feature_extraction.text import TfidfVectorizer
import pandas as pd
texto1 = 'A matemática é muito importante para compreendermos como a natureza funciona'
tf_idf = TfidfVectorizer()
vetor = tf_idf.fit_transform([texto1])
print(vetor)
# -
vetor = vetor.todense()
print(vetor)
# +
nomes = tf_idf.get_feature_names_out()  ## use get_feature_names() on scikit-learn < 1.0
df = pd.DataFrame(vetor, columns=nomes)
print(df)
# +
texto2 = 'A matemática é incrível, quanto mais estudo matemática, mais eu consigo aprender matemática'
tf_idf = TfidfVectorizer()
vetor2 = tf_idf.fit_transform([texto2])
vetor2 = vetor2.todense()
nomes = tf_idf.get_feature_names_out()  ## use get_feature_names() on scikit-learn < 1.0
df = pd.DataFrame(vetor2, columns=nomes)
print(df)
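With a single document every term gets the same idf, so the scores above only reflect term frequency. A pure-Python sketch of the formula on a two-document toy corpus (unsmoothed variant; scikit-learn's `TfidfVectorizer` additionally smooths the idf and L2-normalizes each row):

```python
import math

# Toy corpus: idf only separates terms once there is more than one document.
docs = [
    ["matematica", "e", "muito", "importante"],
    ["matematica", "e", "incrivel"],
]

def tf_idf(term, doc, corpus):
    tf = doc.count(term) / len(doc)       # term frequency within this document
    df = sum(term in d for d in corpus)   # number of documents containing the term
    idf = math.log(len(corpus) / df) + 1  # unsmoothed inverse document frequency
    return tf * idf

common = tf_idf("matematica", docs[0], docs)  # appears in every document -> idf = 1
rare = tf_idf("importante", docs[0], docs)    # appears in one document -> higher idf
print(common < rare)
```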
# --- end of Material, Exercicios/Codigos/iadell10_lingnat.ipynb ---
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/deathstar1/Exploration/blob/main/CNNMaxPoolK1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="LVnjMh_Rx_VB"
import cv2
import numpy as np
from scipy import misc
i = misc.ascent()
# + id="nGz2Zkp30ghu" outputId="93ca55b1-5ba4-4daa-823c-725a734c6638" colab={"base_uri": "https://localhost:8080/", "height": 248}
import matplotlib.pyplot as plt
plt.grid(False)
plt.gray()
plt.axis('off')
plt.imshow(i)
plt.show()
# + id="9XVzIfYI02Ix"
i_transformed = np.array(i)
size_x = i_transformed.shape[0]
size_y = i_transformed.shape[1]
weight = 1
# + id="3dOgxILn1qAT"
filter = [[1, 0, 1], [0, 0, 1], [1, 0, 0]]
# filter = [[0, 0, 1], [0, 1, 1], [1, 1, 1]]
# filter = [[1, 0, 1], [1, 1, 1], [1, 0, 0]]
# filter = [[0, -1, 1], [-1, 0, 1], [1, -1, 0]]
# + id="RwQxjzvIAEiT"
for x in range(1,size_x - 1):
for y in range(1 , size_y - 1):
convolution = 0.0
convolution = convolution + (i[x-1][y-1] * filter[0][0])
convolution = convolution + (i[x][y-1] * filter[1][0])
convolution = convolution + (i[x+1][y-1] * filter[2][0])
convolution = convolution + (i[x-1][y] * filter[0][1])
convolution = convolution + (i[x][y] * filter[1][1])
convolution = convolution + (i[x+1][y] * filter[2][1])
convolution = convolution + (i[x-1][y+1] * filter[0][2])
convolution = convolution + (i[x][y+1] * filter[1][2])
convolution = convolution + (i[x+1][y+1] * filter[2][2])
convolution = convolution * weight
if(convolution <0):
convolution = 0
if(convolution >255):
convolution = 255
i_transformed[x,y] = convolution
# + id="UXXGnLXRHdFi" outputId="a70a19d4-d4f5-4677-a75c-4030096e6058" colab={"base_uri": "https://localhost:8080/", "height": 269}
plt.gray()
plt.grid(False)
plt.imshow(i_transformed)
plt.show()
# + id="iJ4_FXxHKpVN" outputId="c71bd20f-4db0-49ad-c97e-cc095399754d" colab={"base_uri": "https://localhost:8080/", "height": 269}
new_x = int(size_x /2)
new_y = int(size_y /2)
new_image = np.zeros((new_x,new_y))
for xi in range(0,size_x ,2):
for yi in range(0,size_y, 2):
pixels = []
pixels.append(i_transformed[xi , yi])
pixels.append(i_transformed[xi , yi+1])
pixels.append(i_transformed[xi+1 ,yi])
pixels.append(i_transformed[xi+1 ,yi+1])
new_image[int(xi/2),int(yi/2)] = max(pixels)
plt.gray()
plt.grid(False)
plt.imshow(new_image)
plt.show()
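The nested pooling loop above can also be written as a single NumPy reshape, assuming both image dimensions are even (true for the 512x512 `ascent` image):

```python
import numpy as np

def max_pool_2x2(img):
    """2x2 max pooling: group pixels into 2x2 blocks, then take each block's max."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

a = np.arange(16).reshape(4, 4)
print(max_pool_2x2(a))  # each 2x2 block collapses to its maximum
```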
# --- end of CNNMaxPoolK1.ipynb ---
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Self-Driving Car Engineer Nanodegree
#
#
# ## Project 1: **Finding Lane Lines on the Road with Changes based on Review**
# ***
# In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
#
# Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
#
# In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a pdf document. There is a [write up template](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) that can be used to guide the writing process. Completing both the code in the Ipython notebook and the writeup template will cover all of the [rubric points](https://review.udacity.com/#!/rubrics/322/view) for this project.
#
# ---
# Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
#
# **Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".**
#
# ---
# **The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.**
#
# ---
#
# <figure>
# <img src="examples/line-segments-example.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your output should look something like this (above) after detecting line segments using the helper functions below </p>
# </figcaption>
# </figure>
# <p></p>
# <figure>
# <img src="examples/laneLines_thirdPass.jpg" width="380" alt="Combined Image" />
# <figcaption>
# <p></p>
# <p style="text-align: center;"> Your goal is to connect/average/extrapolate line segments to get output like this</p>
# </figcaption>
# </figure>
# **Run the cell below to import some packages. If you get an `import error` for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
# ## Import Packages
# Importing relevant packages for data manipulation
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import math
# %matplotlib inline
# Importing relevant packages for computer vision
import cv2
from moviepy.editor import VideoFileClip
from IPython.display import HTML
# ## Read in an Image
# +
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image) # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
#image.shape
# -
# ## Ideas for Lane Detection Pipeline
# **Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:**
#
# `cv2.inRange()` for color selection
# `cv2.fillPoly()` for regions selection
# `cv2.line()` to draw lines on an image given endpoints
# `cv2.addWeighted()` to coadd / overlay two images
# `cv2.cvtColor()` to grayscale or change color
# `cv2.imwrite()` to output images to file
# `cv2.bitwise_and()` to apply a mask to an image
#
# **Check out the OpenCV documentation to learn about these and discover even more awesome functionality!**
# ## Helper Functions
# Below are some helper functions to help get you started. They should look familiar from the lesson!
# +
# Creating editing functions
def grayscale(img):
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
def gaussian_blur(img, kernel_size):
return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)
def canny(img, low_threshold, high_threshold):
return cv2.Canny(img, low_threshold, high_threshold)
def region_of_interest(img, vertices):
# initializing a blank mask
mask = np.zeros_like(img)
if len(img.shape) > 2:
channel_count = img.shape[2]
ignore_mask_color = (255,) * channel_count
else:
ignore_mask_color = 255
cv2.fillPoly(mask, vertices, ignore_mask_color)
masked_image = cv2.bitwise_and(img, mask)
return masked_image
def weighted_img(init_img, line_img, alpha=0.8, beta=1.0, gamma=0.0):
return cv2.addWeighted(init_img, alpha, line_img, beta, gamma)
def draw_lines(img, lines, color=[255, 0, 0], thickness=3, y_topL=430, y_topR=430):
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(img, (x1, y1), (x2, y2), color, thickness)
def draw_lines2(img, lines, color=[255, 0, 0], thickness=5, y_topL=324, y_topR=324):
# initialize list containers for slopes and coordinates
Lslope, Lx1, Lx2, Ly1, Ly2 = [], [], [], [], []
Rslope, Rx1, Rx2, Ry1, Ry2 = [], [], [], [], []
for line in lines:
for x1,y1,x2,y2 in line:
# calculate slope
if (y1 != y2):
slope = float(x2-x1)/(y2-y1)
if ((slope < -0.01) and (slope > -11.43)): # negative slope => left hand lane
Lslope.append(slope), Lx1.append(x1), Lx2.append(x2), Ly1.append(y1), Ly2.append(y2)
elif ((slope > 0.01) and (slope < 11.43)): # positive slope => right hand lane
Rslope.append(slope), Rx1.append(x1), Rx2.append(x2), Ry1.append(y1), Ry2.append(y2)
slopeL = np.mean(Lslope)
x1L = np.mean(Lx1)
x2L = np.mean(Lx2)
y1L = np.mean(Ly1)
y2L = np.mean(Ly2)
xLmin = x1L - slopeL*(y1L - img.shape[0])
xLmax = x2L + slopeL*(y_topL-y2L)
cv2.line(img, (int(xLmin),img.shape[0]),(int(xLmax),y_topL), color, thickness)
slopeR = np.mean(Rslope)
x1R = np.mean(Rx1)
x2R = np.mean(Rx2)
y1R = np.mean(Ry1)
y2R = np.mean(Ry2)
xRmin = x1R - slopeR*(y1R - img.shape[0])
xRmax = x2R + slopeR*(y_topR-y2R)
cv2.line(img, (int(xRmin),img.shape[0]),(int(xRmax),y_topR), color, thickness)
def draw_lines3(img, lines, color=[255, 0, 0], thickness=5, y_topL=324, y_topR=324):
# initialize list containers for slopes and coordinates
Lslope, Lx1, Lx2, Ly1, Ly2 = [], [], [], [], []
Rslope, Rx1, Rx2, Ry1, Ry2 = [], [], [], [], []
for line in lines:
for x1,y1,x2,y2 in line:
# calculate slope
if (y1 != y2):
slope = float(x2-x1)/(y2-y1)
if ((slope < -0.01) and (slope > -2.0)): # negative slope => left hand lane
Lslope.append(slope), Lx1.append(x1), Lx2.append(x2), Ly1.append(y1), Ly2.append(y2)
elif ((slope > 0.01) and (slope < 2.0)): # positive slope => right hand lane
Rslope.append(slope), Rx1.append(x1), Rx2.append(x2), Ry1.append(y1), Ry2.append(y2)
slopeL = np.mean(Lslope)
x1L = np.mean(Lx1)
x2L = np.mean(Lx2)
y1L = np.mean(Ly1)
y2L = np.mean(Ly2)
xLmin = x1L - slopeL*(y1L - img.shape[0])
xLmax = x2L + slopeL*(y_topL-y2L)
cv2.line(img, (int(xLmin),img.shape[0]),(int(xLmax),y_topL), color, thickness)
slopeR = np.mean(Rslope)
x1R = np.mean(Rx1)
x2R = np.mean(Rx2)
y1R = np.mean(Ry1)
y2R = np.mean(Ry2)
xRmin = x1R - slopeR*(y1R - img.shape[0])
xRmax = x2R + slopeR*(y_topR-y2R)
cv2.line(img, (int(xRmin),img.shape[0]),(int(xRmax),y_topR), color, thickness)
def hough_linesI(img, rho, theta, threshold, min_line_len, max_line_gap):
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((*img.shape, 3), dtype=np.uint8)
draw_lines2(line_img, lines, color=[0, 0, 255])
return line_img
def hough_linesV(img, rho, theta, threshold, min_line_len, max_line_gap):
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((*img.shape, 3), dtype=np.uint8)
draw_lines2(line_img, lines, color=[255, 0, 0])
return line_img
def hough_linesC(img, rho, theta, threshold, min_line_len, max_line_gap):
lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
line_img = np.zeros((*img.shape, 3), dtype=np.uint8)
draw_lines(line_img, lines, color=[255, 0, 0], y_topL=430, y_topR=430)
return line_img
# -
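Note that `draw_lines2` and `draw_lines3` compute an *inverse* slope, dx/dy, which makes extending a segment to a target image row a single multiply-add. The extrapolation step in isolation, with hypothetical numbers:

```python
def extend_to_y(x, y, inv_slope, y_target):
    """Extend a line through (x, y) with inverse slope dx/dy to the row y_target."""
    return x + inv_slope * (y_target - y)

# A hypothetical left-lane point at (100, 200) with dx/dy = 0.5,
# extended down to the bottom of a 540-pixel-tall frame:
print(extend_to_y(100, 200, 0.5, 540))  # 270.0
```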
# ## Test Images
#
# Build your pipeline to work on the images in the directory "test_images"
# **You should make sure your pipeline works well on these images before you try the videos.**
import os
image_list = os.listdir("test_images/")
image_list
# ## Build a Lane Finding Pipeline
#
#
# Build the pipeline and run your solution on all test_images. Make copies into the `test_images_output` directory, and you can use the images in your writeup report.
#
# Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
# +
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
# Creating a pipeline for image editing
def process_image(img):
gray = grayscale(img)
kernel_size = 5
blur_gray = gaussian_blur(gray, kernel_size)
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
imgsh = img.shape
# shape for test images is: y,x,RGB = (540, 960, 3)
vertices = np.array([[(0,imgsh[0]),(448,324),(508,324),(imgsh[1],imgsh[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 20 # minimum number of votes (intersections in Hough grid cell)
min_len = 80
max_gap = 100
line_img = hough_linesI(masked_edges, rho, theta, threshold, min_len, max_gap)
result = weighted_img(img, line_img, 0.8, 1.2, 0)
return result
# -
# Process all images and save them in output folder
for image in image_list:
img = cv2.imread('test_images/'+ image).astype('uint8')
img_out = process_image(img)
cv2.imwrite('test_images_output/'+ image[0:-4] + '_output.jpg', img_out)
# ## Test on Videos
#
# You know what's cooler than drawing lanes over images? Drawing lanes over video!
#
# We can test our solution on two provided videos:
#
# `solidWhiteRight.mp4`
#
# `solidYellowLeft.mp4`
#
# **Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, consult the forums for more troubleshooting tips.**
#
# **If you get an error that looks like this:**
# ```
# NeedDownloadError: Need ffmpeg exe.
# You can download it by calling:
# imageio.plugins.ffmpeg.download()
# ```
# **Follow the instructions in the error message and check out [this forum post](https://discussions.udacity.com/t/project-error-of-test-on-videos/274082) for more troubleshooting tips across operating systems.**
# Creating a pipeline for frame editing
def process_video1(img):
gray = grayscale(img)
kernel_size = 5
blur_gray = gaussian_blur(gray, kernel_size)
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
imgsh = img.shape
vertices = np.array([[(10,imgsh[0]),(410,350),(800,380),(imgsh[1],imgsh[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
rho = 2 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 50 # minimum number of votes (intersections in Hough grid cell)
min_len = 100
max_gap = 160
line_img = hough_linesV(masked_edges, rho, theta, threshold, min_len, max_gap)
result = weighted_img(img, line_img, 0.8, 1.2, 0)
return result
# Let's try the one with the solid white lane on the right first ...
# Applying the pipeline on clip1, frame by frame
white_output = 'test_videos_output/solidWhiteRight_output.mp4'
clip1 = VideoFileClip('test_videos/solidWhiteRight.mp4')
file_clip = clip1.fl_image(process_video1)
# %time file_clip.write_videofile(white_output, audio=False)
# Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
# ## Improve the draw_lines() function
#
# **At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".**
#
# **Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.**
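# One way to realize the averaging described above — a hypothetical helper, not part of the original notebook — is to split the Hough segments by slope sign, fit one line per side with `np.polyfit`, and extrapolate both lines to fixed y-bounds (image bottom and top of the region of interest). A sketch, assuming segments in the usual `[x1, y1, x2, y2]` layout:

```python
import numpy as np

def average_lane_lines(segments, y_bottom, y_top):
    """Average Hough segments into one left and one right lane line.

    segments: iterable of (x1, y1, x2, y2) tuples.
    Returns a dict mapping "left"/"right" to an (x1, y1, x2, y2) line
    extrapolated between y_bottom (image bottom) and y_top (top of ROI).
    """
    sides = {"left": [], "right": []}
    for x1, y1, x2, y2 in segments:
        if x2 == x1:  # skip vertical segments (undefined slope)
            continue
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:  # ignore near-horizontal noise
            continue
        # image y grows downward, so a negative slope is the left lane line
        sides["left" if slope < 0 else "right"].append((x1, y1, x2, y2))
    lines = {}
    for side, segs in sides.items():
        if not segs:
            continue
        xs = [p for s in segs for p in (s[0], s[2])]
        ys = [p for s in segs for p in (s[1], s[3])]
        # fit x as a function of y so extrapolation to the y-bounds is direct
        fit = np.polyfit(ys, xs, 1)
        x_bottom, x_top = np.polyval(fit, [y_bottom, y_top])
        lines[side] = (int(round(x_bottom)), y_bottom, int(round(x_top)), y_top)
    return lines
```

# Each returned line could then be drawn with a single `cv2.line` call inside `draw_lines`, replacing the per-segment drawing.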
# Now for the one with the solid yellow lane on the left. This one's more tricky!
# Creating a pipeline for frame editing
def process_video2(img):
gray = grayscale(img)
kernel_size = 5
blur_gray = gaussian_blur(gray, kernel_size)
low_threshold = 50
high_threshold = 200
edges = canny(blur_gray, low_threshold, high_threshold)
imgsh = img.shape
vertices = np.array([[(10,imgsh[0]),(410,350),(800,380),(imgsh[1],imgsh[0])]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/180 # angular resolution in radians of the Hough grid
threshold = 25 # minimum number of votes (intersections in Hough grid cell)
min_len = 65
max_gap = 100
line_img = hough_linesV(masked_edges, rho, theta, threshold, min_len, max_gap)
result = weighted_img(img, line_img, 0.8, 1.2, 0)
return result
# Applying the pipeline on clip2, frame by frame
yellow_output = 'test_videos_output/solidYellowLeft_output.mp4'
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
file_clip = clip2.fl_image(process_video2)
# %time file_clip.write_videofile(yellow_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
# ## Writeup and Submission
#
# If you're satisfied with your video outputs, it's time to make the report writeup in a pdf or markdown file. Once you have this Ipython notebook ready along with the writeup, it's time to submit for review! Here is a [link](https://github.com/udacity/CarND-LaneLines-P1/blob/master/writeup_template.md) to the writeup template file.
#
# ## Optional Challenge
#
# Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
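# One common way to make the pipeline more robust — a hedged sketch, not what the notebook's pipeline actually does — is to mask plausibly-white and plausibly-yellow pixels before edge detection, so shadows and pavement-color changes in the challenge video matter less. The thresholds below are illustrative guesses and would need tuning on real frames; a numpy-only version of the idea:

```python
import numpy as np

def lane_color_mask(rgb):
    """Boolean mask of plausibly-white or plausibly-yellow pixels.

    rgb: H x W x 3 uint8 array; thresholds are illustrative, not tuned.
    """
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    white = (r > 200) & (g > 200) & (b > 200)   # bright in all channels
    yellow = (r > 150) & (g > 120) & (b < 120)  # strong red/green, weak blue
    return white | yellow
```

# In the real pipeline this mask would be applied to the frame (e.g. zeroing non-lane pixels) before the `canny` step.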
def process_video3(img):
gray = grayscale(img)
kernel_size = 5
blur_gray = gaussian_blur(gray, kernel_size)
low_threshold = 50
high_threshold = 150
edges = canny(blur_gray, low_threshold, high_threshold)
imgsh = img.shape
vertices = np.array([[(200,imgsh[0]-55),(660,430),(675,430),(imgsh[1]-175,imgsh[0]-55)]], dtype=np.int32)
masked_edges = region_of_interest(edges, vertices)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi/360 # angular resolution in radians of the Hough grid
threshold = 10 # minimum number of votes (intersections in Hough grid cell)
min_len = 10
max_gap = 30
line_img = hough_linesC(masked_edges, rho, theta, threshold, min_len, max_gap)
result = weighted_img(img, line_img, 0.8, 1.2, 0)
return result
# Applying the pipeline on clip3, frame by frame
challenge_output = 'test_videos_output/challenge_output.mp4'
clip3 = VideoFileClip('test_videos/challenge.mp4')
file_clip = clip3.fl_image(process_video3)
# %time file_clip.write_videofile(challenge_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
|
.ipynb_checkpoints/P1_post-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
from PIL import Image, ImageDraw, ImageFilter
im1 = Image.open('data/src/lena.jpg')
im2 = Image.open('data/src/rocket.jpg').resize(im1.size)
im2.save('data/src/rocket_resize.jpg')
# 
# 
mask = Image.new("L", im1.size, 128)
im = Image.composite(im1, im2, mask)
# im = Image.blend(im1, im2, 0.5)
im.save('data/dst/pillow_composite_solid.jpg', quality=95)
# 
mask = Image.new("L", im1.size, 0)
draw = ImageDraw.Draw(mask)
draw.ellipse((140, 50, 260, 170), fill=255)
im = Image.composite(im1, im2, mask)
im.save('data/dst/pillow_composite_circle.jpg', quality=95)
# 
mask_blur = mask.filter(ImageFilter.GaussianBlur(10))
im = Image.composite(im1, im2, mask_blur)
im.save('data/dst/pillow_composite_circle_blur.jpg', quality=95)
# 
mask = Image.open('data/src/horse.png').convert('L').resize(im1.size)
im = Image.composite(im1, im2, mask)
im.save('data/dst/pillow_composite_horse.jpg', quality=95)
# 
mask = Image.open('data/src/gradation_h.jpg').convert('L').resize(im1.size)
im = Image.composite(im1, im2, mask)
im.save('data/dst/pillow_composite_gradation.jpg', quality=95)
# 
|
notebook/pillow_composite.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Operations
#
# `Operations` are toy datasets we use to illustrate how to build your own models in `carefree-learn`. We will generate some artificial datasets based on basic *operations*, namely `sum`, `prod` and their mixture, to demonstrate the validity of our customized model.
#
# Here are the definitions of the datasets:
#
# $$
# \begin{aligned}
# \mathcal{D}_{\text {sum}} &=\{(\mathbf x,\sum_{i=1}^d x_i)|\mathbf x\in\mathbb{R}^d\} \\
# \mathcal{D}_{\text {prod}}&=\{(\mathbf x,\prod_{i=1}^d x_i)|\mathbf x\in\mathbb{R}^d\} \\
# \mathcal{D}_{\text {mix}} &=\{(\mathbf x,[y_{\text{sum}},y_{\text{prod}}])|\mathbf x\in\mathbb{R}^d\}
# \end{aligned}
# $$
#
# In short, the `sum` dataset simply sums up the features, the `prod` dataset multiplies all the features, and the `mix` dataset stacks the two targets together. Here is the code to generate them:
# +
import torch
import cflearn
import numpy as np
from typing import Any
from typing import Dict
from cflearn.modules.blocks import Linear
# for reproduction
np.random.seed(142857)
torch.manual_seed(142857)
# prepare
dim = 5
num_data = 10000
x = np.random.random([num_data, dim]) * 2.0
y_add = np.sum(x, axis=1, keepdims=True)
y_prod = np.prod(x, axis=1, keepdims=True)
y_mix = np.hstack([y_add, y_prod])
# -
# Since we want to preserve the datasets' properties, we should not apply any pre-processing to them. Fortunately, `carefree-learn` provides a simple configuration for this:
# `reg` represents a regression task
# `use_simplify_data` indicates that `carefree-learn` will do nothing to the input data
kwargs = {"task_type": "reg", "use_simplify_data": True}
# ### The `add` Dataset
#
# It's pretty clear that the `add` dataset could be solved easily with a `linear` model
#
# $$
# \hat y = wx + b,\quad w\in\mathbb{R}^{1\times d},b\in\mathbb{R}^{1\times 1}
# $$
#
# because the *ground truth* of `add` dataset could be represented as `linear` model, where
#
# $$
# w=[1,1,...,1],b=[0]
# $$
#
# Although this is a simple task, solving it with a neural network might actually fail, because the network is likely to overfit the training set with some strange representation. We can demonstrate this with a quick experiment, with the help of `carefree-learn`:
# add
linear = cflearn.make("linear", **kwargs).fit(x, y_add)
fcnn = cflearn.make("fcnn", **kwargs).fit(x, y_add)
# Then we can evaluate the models:
cflearn.evaluate(x, y_add, pipelines=[linear, fcnn])
# As we expected, the `fcnn` (Fully Connected Neural Network) model fails to reach a satisfying result, while the `linear` model approaches the ground truth easily.
#
# We can also check whether the model has *actually* learned the ground truth by checking its parameters ($w$ and $b$):
linear_core = linear.model.heads["linear"].linear
print(f"w: {linear_core.weight.data}")
print(f"b: {linear_core.bias.data}")
# It's not perfect, but we are happy enough😆
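# As a sanity check independent of any neural framework, plain least squares on the same kind of data recovers the ground truth almost exactly — a numpy-only sketch reproducing the `add` setup:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num = 5, 1000
x = rng.random((num, dim)) * 2.0
y = x.sum(axis=1)

# append a bias column and solve min ||Aw - y||^2
A = np.hstack([x, np.ones((num, 1))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

print("w:", w[:-1])  # ~ [1, 1, 1, 1, 1]
print("b:", w[-1])   # ~ 0
```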
# ### The `prod` Dataset
#
# However, when it comes to the `prod` dataset, the `linear` model is likely to *underfit*, because it theoretically cannot represent the formulation:
#
# $$
# y=\prod_{i=1}^{d}x_i
# $$
#
# Neural networks, on the other hand, are able to represent **ANY** function ([Universal Approximation Theorem](https://en.wikipedia.org/wiki/Universal_approximation_theorem)). In this case, the `fcnn` model should be able to outperform the `linear` model:
linear = cflearn.make("linear", **kwargs).fit(x, y_prod)
fcnn = cflearn.make("fcnn", **kwargs).fit(x, y_prod)
# Then we can evaluate the models:
cflearn.evaluate(x, y_prod, pipelines=[linear, fcnn])
# Although `fcnn` outperforms `linear`, the result is still not as satisfying as what we got on the `add` dataset. That's because although `fcnn` has strong approximation power, its representations are basically built from `add` operations between features, with the non-linearities coming from an activation function applied to **EACH** neuron. This means `fcnn` can hardly learn the `prod` operation **ACROSS** features.
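# A classical workaround worth noting (an aside, not carefree-learn's approach): for strictly positive inputs, a product becomes a sum in log-space, $\log\prod_i x_i=\sum_i\log x_i$, so even a linear model fits the `prod` target after log-transforming features and labels. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((1000, 5)) * 2.0 + 0.01  # keep inputs strictly positive
y = x.prod(axis=1)

# linear least squares in log-space: log y = sum_i log x_i
A = np.hstack([np.log(x), np.ones((1000, 1))])
w, *_ = np.linalg.lstsq(A, np.log(y), rcond=None)
print(w)  # weights ~ [1, 1, 1, 1, 1], bias ~ 0
```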
#
# A trivial thought is to manually extract the `prod` features $\tilde x$ from the input data with a new `extractor`:
#
# $$
# \tilde x\triangleq \prod_{i=1}^d x_i
# $$
#
# After which a `linear` model should solve the problem, because the *ground truth* here is simply
#
# $$
# w=[1],b=[0]
# $$
#
# But how could we apply this prior knowledge to our model? Thanks to `carefree-learn`, this is actually quite simple, requiring only a few lines of code:
# +
# register an `extractor` which represents the `prod` operation
@cflearn.register_extractor("prod_extractor")
class ProdExtractor(cflearn.ExtractorBase):
@property
def out_dim(self) -> int:
return 1
def forward(self, net: torch.Tensor) -> torch.Tensor:
return net.prod(dim=1, keepdim=True)
# define the `Config` for this `extractor`
# since `ProdExtractor` don't need any configurations, we can simply return an empty dict here
cflearn.register_config("prod_extractor", "default", config={})
# -
# > If you are interested in how `extractor` actually works in `carefree-learn`, please refer to [pipe](https://carefree0910.me/carefree-learn-doc/docs/design-principles#pipe) and [extractor](https://carefree0910.me/carefree-learn-doc/docs/design-principles#extractor) for more information.
#
# After defining the `extractor`, we need to define a model that leverages it:
# we call this new model `prod`
# we use our new `extractor` followed by traditional `linear` model
cflearn.register_model("prod", pipes=[cflearn.PipeInfo("linear", extractor="prod_extractor")])
# And that's it!
prod = cflearn.make("prod", **kwargs).fit(x, y_prod)
# Then we can evaluate the models:
cflearn.evaluate(x, y_prod, pipelines=[linear, fcnn, prod])
# As we expected, the `prod` model approaches the ground truth easily.
#
# We can also check whether the model has actually learned the ground truth by checking its parameters ($w$ and $b$):
prod_linear = prod.model.heads["linear"].linear
print(f"w: {prod_linear.weight.item():8.6f}, b: {prod_linear.bias.item():8.6f}")
# It's not perfect, but we are happy enough😆
# ### The `mix` Dataset
#
# Now comes the fun part: what if we mix up the `add` and `prod` datasets? Since `linear` is an expert in `add`, `prod` is an expert in `prod`, and `fcnn` is **QUITE** professional in **ALL** datasets (🤣), it is hard to tell which one will shine on the `mix` dataset. So let's run an experiment to obtain an empirical conclusion:
linear = cflearn.make("linear", **kwargs).fit(x, y_mix)
fcnn = cflearn.make("fcnn", **kwargs).fit(x, y_mix)
prod = cflearn.make("prod", **kwargs).fit(x, y_mix)
# Then we can evaluate the models:
cflearn.evaluate(x, y_mix, pipelines=[linear, fcnn, prod])
# Seems that the non-expert in both domains (`fcnn`) outperforms the domain experts (`linear`, `prod`)! But again, this is far from satisfying, because theoretically we can combine the domain experts to build an expert on the `mix` dataset.
#
# Thanks to `carefree-learn`, we can again do so, but this time we'll need some more coding. Recall that we built an expert on the `prod` dataset by defining a novel `extractor`, because we needed to pre-process the input data. On `mix`, however, what we actually need is to combine `linear` and `prod`, which means we need to define a novel `head` this time.
#
# > If you are interested in how `head` actually works in `carefree-learn`, please refer to [pipe](https://carefree0910.me/carefree-learn-doc/docs/design-principles#pipe) and [head](https://carefree0910.me/carefree-learn-doc/docs/design-principles#head) for more information.
#
# Concretely, suppose we already have two models, $f_1$ and $f_2$, that are experts in `add` dataset and `prod` dataset respectively. What we need to do is to combine the first dimension of $f_1(\mathbf x)$ and the second dimension of $f_2(\mathbf x)$ to construct our final outputs:
#
# $$
# \begin{aligned}
# f_1(\mathbf x) \triangleq [\hat y_{11}, \hat y_{12}] \\
# f_2(\mathbf x) \triangleq [\hat y_{21}, \hat y_{22}] \\
# \Rightarrow \tilde f(\mathbf x) \triangleq [\hat y_{11}, \hat y_{22}]
# \end{aligned}
# $$
#
# Since $\hat y_{11}$ can fit `add` dataset perfectly, $\hat y_{22}$ can fit `prod` dataset perfectly, $\tilde f(\mathbf x)$ should be able to fit `mix` dataset perfectly. Let's implement this model to demonstrate it with experiment:
# +
@cflearn.register_head("mixture")
class MixtureHead(cflearn.HeadBase):
def __init__(self, in_dim: int, out_dim: int, target_dim: int):
super().__init__(in_dim, out_dim)
# when `target_dim == 0`, it represents an `add` head (y_11)
# when `target_dim == 1`, it represents a `prod` head (y_22)
self.dim = target_dim
self.linear = Linear(in_dim, 1)
def forward(self, net: torch.Tensor) -> torch.Tensor:
target = self.linear(net)
zeros = torch.zeros_like(target)
tensors = [target, zeros] if self.dim == 0 else [zeros, target]
return torch.cat(tensors, dim=1)
# we need to define two configurations for `add` and `prod` respectively
cflearn.register_head_config("mixture", "add", head_config={"target_dim": 0})
cflearn.register_head_config("mixture", "prod", head_config={"target_dim": 1})
# we use our new `head` to define the new model
# note that we need two `pipe`s, one for `add` and the other for `prod`
cflearn.register_model(
"mixture",
pipes=[
cflearn.PipeInfo("add", extractor="identity", head="mixture", head_config="add"),
cflearn.PipeInfo("prod", extractor="prod_extractor", head="mixture", head_config="prod"),
]
)
mixture = cflearn.make("mixture", **kwargs).fit(x, y_mix)
# -
# Then we can evaluate the models:
cflearn.evaluate(x, y_mix, pipelines=[linear, fcnn, prod, mixture])
# As we expected, the `mixture` model approaches the ground truth easily.
#
# We can also check whether the model has actually learned the ground truth by checking its parameters ($w$ and $b$):
add_linear = mixture.model.heads["add"].linear
prod_linear = mixture.model.heads["prod"].linear
print(f"add w: {add_linear.weight.data}")
print(f"add b: {add_linear.bias.data}")
print(f"prod w: {prod_linear.weight.data}")
print(f"prod b: {prod_linear.bias.data}")
# It's not perfect, but we are happy enough🥳
#
# ### Conclusions
#
# `Operations` are just artificial toy datasets, but they are quite handy for illustrating some basic concepts in `carefree-learn`. We hope this small example helps you quickly walk through some development guides in `carefree-learn`, and helps you leverage it in your own tasks!
|
examples/operations/op.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="ddVHd1ngh2nu"
# #**Business Problem**
# XYZ bank wants a data-driven approach to assist its decision making for providing loans to customers. The bank wants to understand whether a customer is going to repay the loan or not.
#
# #**Data Science Problem**
# Build a classification engine that predicts whether a customer is going to repay the loan or not, based on various features like credit policy, interest rate, installment, revolving balance, etc.
#
# + [markdown] id="Mm2qkJL9ld9O"
# ## ***1. Import Libraries and Dependencies***
# + id="0eiqL4XUeoxl"
import numpy as np
import pandas as pd
pd.set_option('display.max_rows', 800)
pd.set_option('display.max_columns', 500)
import seaborn as sns
import matplotlib.pyplot as plt
# %matplotlib inline
# + id="37tSJsdozqGu"
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.metrics import confusion_matrix, accuracy_score, classification_report
from sklearn.tree import DecisionTreeClassifier
# + [markdown] id="8HXiO8cz3n4H"
# ## ***2. Load Data***
# + colab={"resources": {"http://localhost:8080/nbextensions/google.colab/files.js": {"data": "<KEY>", "ok": true, "headers": [["content-type", "application/javascript"]], "status": 200, "status_text": ""}}, "base_uri": "https://localhost:8080/", "height": 74} id="MUtgr4zL0HXE" outputId="166c893b-75c3-44aa-cbb2-c36f4311832f"
import io
from google.colab import files
uploaded = files.upload()
# + id="gj2Oegrq1cLI"
data = pd.read_csv(io.BytesIO(uploaded['loan_data.csv']))
# + [markdown] id="VG9n0VbN30Jm"
# ## ***3. Understanding Data***
# + colab={"base_uri": "https://localhost:8080/", "height": 435} id="axRJjRQ-1mwQ" outputId="8c83a30f-9f70-4cc8-ae67-0f20c7e54b8f"
data
# + colab={"base_uri": "https://localhost:8080/"} id="uYIQWB6P1oDF" outputId="3bdb070d-892c-493f-cd73-c29bd80e83f6"
data.info()
# + colab={"base_uri": "https://localhost:8080/", "height": 314} id="Nn846-ZE2TNg" outputId="9bc87707-3b81-438c-a700-7dab66bd07e7"
data.describe()
# + colab={"base_uri": "https://localhost:8080/", "height": 682} id="Z3Nx4goD2eCD" outputId="f42463e2-fda5-409e-82f3-56fd1eefacda"
data.head(20)
# + colab={"base_uri": "https://localhost:8080/"} id="C5aDchQq2nhy" outputId="fd35a8f3-9a84-43da-c3b2-73f3a8d6e0d7"
num_col = data.select_dtypes(include = np.number).columns
categ_col = data.select_dtypes(exclude = np.number).columns
print("Numerical Columns: \n", num_col,"\n")
print("Categorical Columns: \n", categ_col)
# + [markdown] id="iezUwVrl3g41"
# ## ***4. Data Preprocessing***
# + id="gIKEYGqR3R5N"
# Converting Categorical Data to Numerical Data using One Hot Encoding
# + id="8NY36n914Mje"
data = pd.get_dummies(data = data, prefix = 'purpose', columns = ['purpose'])
# + colab={"base_uri": "https://localhost:8080/"} id="umXLdOp_4ltK" outputId="aba746bd-0749-4543-db88-22cedebf3a5e"
num_col = data.select_dtypes(include = np.number).columns
categ_col = data.select_dtypes(exclude = np.number).columns
print("Numerical Columns: \n", num_col,"\n")
print("Categorical Columns: \n", categ_col)
# + colab={"base_uri": "https://localhost:8080/"} id="xuSQi2oD4pl8" outputId="c0ea1738-3581-4ffb-d508-19d691d71c36"
# Checking for NA values
print(data.isna().sum())
print(data.shape)
# + [markdown] id="zJnXf8Qj5Jy7"
# ## ***5. Exploratory Data Analysis***
# + colab={"base_uri": "https://localhost:8080/", "height": 279} id="kpitU-hs48Ob" outputId="3a2e173b-c25e-492c-afde-86a966a59945"
# Checking distribution of y - variable, if it is unbalanced or not
sns.countplot(y = data['not.fully.paid'], data = data)
plt.xlabel('Count of each Target class')
plt.ylabel('Target Classes')
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 716} id="hzEcZ8Bw5qMI" outputId="59c65449-745d-43e1-b712-7437e6a45ac1"
# Check distribution of all features.
data.hist(figsize = (15, 12), bins = 15)
plt.title('Feature Distribution')
plt.show()
# + [markdown] id="g6VgxgSK7PYk"
# ## ***6. Model Building***
# + id="qn5CZVL96vpk"
X = data.drop(['not.fully.paid'], axis = 1)
y = data['not.fully.paid']
# + id="HFYpUQBV7jN_"
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 1005)
# + [markdown] id="ClIw-hgdjd01"
# ## **Decision Tree Criterion - Gini**
# + id="orZDhcDw73mz" colab={"base_uri": "https://localhost:8080/"} outputId="b66ddfbb-747c-44b7-fb07-3fa3e5220677"
clf = DecisionTreeClassifier(criterion = 'gini', random_state = 0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Confusion Matrix:\n ",confusion_matrix(y_test, y_pred))
print("\n Accuracy Score:\n ", accuracy_score(y_test, y_pred))
print("\n Classification Report:\n ", classification_report(y_test, y_pred))
# + [markdown] id="Jp_VAtcjmOtm"
# ## **Decision Tree Criterion - Entropy**
# + colab={"base_uri": "https://localhost:8080/"} id="ImV28Ma8l3we" outputId="1d8695cd-7f68-4515-9600-01ce8e876da1"
clf = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print("Confusion Matrix:\n ",confusion_matrix(y_test, y_pred))
print("\n Accuracy Score:\n ", accuracy_score(y_test, y_pred))
print("\n Classification Report:\n ", classification_report(y_test, y_pred))
# + [markdown] id="RivIIKohoDku"
# ## **Handling Class Imbalance**
# + colab={"base_uri": "https://localhost:8080/"} id="Uh3DI99-mf_6" outputId="aeaaaf75-0cab-4b0b-a479-9c1073782b40"
#Handle Class Imbalance
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler()
X_ros, y_ros = ros.fit_resample(X, y)  # fit_resample replaces the deprecated fit_sample
# + id="elEagpYqoovf"
X_train, X_test, y_train, y_test = train_test_split(X_ros, y_ros, test_size = 0.3, random_state = 1005)
# + colab={"base_uri": "https://localhost:8080/"} id="niHBJwtppO9E" outputId="9ed52c66-fb32-4e31-95ae-0350a384ded1"
clf1 = DecisionTreeClassifier(criterion = 'gini', random_state = 0)
clf1.fit(X_train, y_train)
y_pred = clf1.predict(X_test)
print("Confusion Matrix:\n ",confusion_matrix(y_test, y_pred))
print("\n Accuracy Score:\n ", accuracy_score(y_test, y_pred))
print("\n Classification Report:\n ", classification_report(y_test, y_pred))
# + colab={"base_uri": "https://localhost:8080/"} id="L_CJdPktpmI9" outputId="785dc310-3e95-4af4-e3ee-71e270aa4c65"
clf2 = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
clf2.fit(X_train, y_train)
y_pred = clf2.predict(X_test)
print("Confusion Matrix:\n ",confusion_matrix(y_test, y_pred))
print("\n Accuracy Score:\n ", accuracy_score(y_test, y_pred))
print("\n Classification Report:\n ", classification_report(y_test, y_pred))
# + id="-e1RyN4SptQ0"
dtclassifier = DecisionTreeClassifier()
param_grid = {'criterion' : ['gini', 'entropy'],
'max_depth' : [10, 15, 25, 30, 35, 40, 45, 50]}
grid = GridSearchCV(estimator = dtclassifier, param_grid = param_grid, refit = True, verbose = 0)
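# Conceptually, `GridSearchCV` enumerates every combination in `param_grid`, scores each one (by cross-validation), and refits the best. A stdlib-only sketch of that enumeration, with a toy scoring function standing in for cross-validated accuracy:

```python
from itertools import product

param_grid = {"criterion": ["gini", "entropy"],
              "max_depth": [10, 15, 25, 30]}

def toy_score(params):
    # placeholder for cross-validated accuracy; real code would fit a model
    return params["max_depth"] / 100 + (0.01 if params["criterion"] == "entropy" else 0.0)

keys = list(param_grid)
candidates = [dict(zip(keys, values)) for values in product(*param_grid.values())]
best = max(candidates, key=toy_score)
print(best)  # {'criterion': 'entropy', 'max_depth': 30}
```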
# + colab={"base_uri": "https://localhost:8080/"} id="VV9kUtZMrTxz" outputId="7f39b627-22d8-4a6f-c539-2705faab81d8"
grid.fit(X_train, y_train)
# + colab={"base_uri": "https://localhost:8080/"} id="iSCHyxftrZA8" outputId="8fe46067-d418-4687-892b-cf82e98e4409"
grid_pred = grid.predict(X_test)
print("Confusion Matrix:\n ", confusion_matrix(y_test, grid_pred))
print("\n Accuracy Score:\n ", accuracy_score(y_test, grid_pred))
print("\n Classification Report:\n ", classification_report(y_test, grid_pred))
|
Loan_Repay.ipynb
|
# -*- coding: utf-8 -*-
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .jl
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernel_info:
# name: julia-nteract-1.5
# kernelspec:
# argv:
# - C:\Users\snikula\AppData\Local\JuliaPro-1.5.4-1\Julia-1.5.4\bin\julia.exe
# - -i
# - --color=yes
# - C:\Users\snikula\.julia\packages\IJulia\e8kqU\src\kernel.jl
# - '{connection_file}'
# display_name: Julia nteract 1.5.4
# env: {}
# interrupt_mode: message
# language: julia
# name: julia-nteract-1.5
# ---
# + [markdown] nteract={"transient": {"deleting": false}}
# # Alternating Current
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
using Printf
using Dates
@printf "%s\n" Dates.now();
versioninfo(verbose=false);
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-3
#
# Book's answer: a) 4.0 V b) 2.8 V c) 0.020 s d) 50 Hz
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
u0=4
U=u0/sqrt(2)
t=0.02
f=1/t
@printf("The RMS voltage is %.1f V.\n", U)
@printf("The frequency is %.0f Hz.\n",f)
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-4
#
# Book's answer: b) 50 Hz c) 4.0 V and 0.19 A d) 21 Ω
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
u0=5.6
i0=0.27
U=u0/sqrt(2)
I=i0/sqrt(2)
t=0.02
f=1/t
R=U/I
@printf("The frequency is %.0f Hz.\n",f)
@printf("The RMS voltage is %.1f V and the RMS current %.2f A.\n", U,I)
@printf("The resistance of the resistor is %.0f Ω.\n",R)
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-5
#
#
# Book's answer: b) 0.495 A and 0.35 A c) 40 Hz d) -0.24 A, 0.18 A
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
using Plots
i0=0.495
I=i0/sqrt(2)
f=40
@printf("The peak value is %.1f A and the RMS value %.2f A.\n", i0,I)
@printf("The frequency is %.0f Hz.\n",f)
tmax=0.036
x = 0:tmax/100:tmax
fi(x)=i0*sin(2*pi*f*x)
@printf("""I(0.023)=%.2f A, I(0.036)=%.2f A.\n""",
fi(0.023),fi(0.036));
plot(x,fi.(x))
xlabel!("t(s)")
ylabel!("I(A)")
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-6
#
# Book's answer: a) 330 V b) no
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
U=230
u0=sqrt(2)*U
@printf("The peak value is %.0f V.\n", u0)
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-7
#
# Book's answer: a) 540 V b) 27 A
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
N=1200
B=1.2
A=150e-4
omega=1500/60
u0=N*A*B*omega
@printf("The peak voltage is %.0f V.\n", u0)
R=20
i0=u0/R
@printf("The peak current is %.0f A.\n", i0)
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-8
#
# Book's answer: a) 100 Ω b) 190 kJ c) 90 V
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
u0=120
i0=1.2
R=u0/i0
@printf("The resistance is %.0f Ω.\n",R)
w0=i0*u0/2*45*60
@printf("The heat produced in 45 minutes is %.2e J.\n",w0);
ia=0.9
ua=ia*R
@printf("The voltage drop at a current of %.1f A is %.0f V.\n",ia,ua);
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-9 $P=UI,\ I=U/R \rightarrow U=RI,\ P=RI^2$
#
# Book's answer: a) 7.1 A b) 10 A
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
P=1200
R=24
I=sqrt(P/R)
i0=I*sqrt(2)
@printf("The RMS current is %.1f A.\n",I)
@printf("The peak current is %.1f A.\n",i0)
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-10
#
# Book's answer: 92 mA
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
u0=325
i0=u0/2500
I=i0/sqrt(2)
@printf("The RMS current is %.3f A.\n",I)
# + [markdown] nteract={"transient": {"deleting": false}}
# 14-11
#
# Book's answer: b) 120 Ω c) 300 Ω
# + jupyter={"source_hidden": false, "outputs_hidden": false} nteract={"transient": {"deleting": false}}
I=0.015
U=1.8
R=U/I
@printf("The resistance of the resistor is %.0f Ω.\n",R)
U2=4.5
Z=U2/I
@printf("The impedance of the circuit is %.0f Ω.\n",Z)
|
6/14.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python
# language: python
# name: conda-env-python-py
# ---
# ________
# # __Machine Learning Project (using PySpark):__
# ## __Accurately Classify "Injury" variable__
# #### By <NAME>
# <a href="https://www.linkedin.com/in/varungrewal/">LinkedIn</a> | <a href="https://github.com/varungrewal">Github</a>
# ____
# ### Relevant Links:
# <a href="https://drive.google.com/file/d/1l2E1zqmXeG9cYv9l7QVbJC11romEIzjV/view?usp=sharing">Data</a> | <a href="https://github.com/varungrewal/Machine-Learning-using-PySpark-/blob/main/Metadata.pdf">Meta Data</a>
# _____
# ### Tools Used:
# __Tool:__ Jupyter Notebook | __Language:__ Python
# __Packages:__ NumPy | Pandas | PySpark
# _____
# <a id='TOC'></a>
# ### <u>Table of Contents:</u>
# 1. [Introduction](#1)
# 1.1 [Goal](#1.1)
# 2. [Data Description](#2)
# 2.1 [Initialize](#2.1)
# 2.2 [Understanding Raw Data](#2.2)
# 3. [ETL (Data Preparation, Cleaning, Wrangling, Manipulation and Check)](#3)
# 4. [Machine Learning](#4)
# 4.1 [Functions](#4.1)
# 4.2 [PySpark Session and SQL](#4.2)
# 4.3 [Model 1 - Logistic Regression](#4.3)
# 4.4 [Model 2 - Random Forest](#4.4)
# 4.5 [Model 3 - Gradient Boosted Trees](#4.5)
# 4.6 [Model 4 - Linear Support Vector](#4.6)
# 5. [Conclusion](#5)
#
# <p style="color:green"> Note: Select any cell and press TAB to come back to Table of Contents </p>
# _____
# <a id='1'></a>
# ### 1. Introduction
# ____
#
# This is the 3rd project in the series. Please see the links below to learn about the previous projects.
# Project 1: <a href="https://github.com/varungrewal/Data-Analytics-Visualization">Data Analytics and Visualization</a>
# Project 2: <a href="https://github.com/varungrewal/Machine-Learning-using-Scikit-">Machine Learning (using Scikit) </a>
# _____
# <a id='1.1'></a>
# ### 1.1 Goal
# _____
# The goal of this project is to build the following four classification models to accurately classify the "Injury" variable:
# - Logistic Regression
# - Random Forest
# - Gradient Boosted Tree
# - Linear Support Vector
# <a id='2'></a>
# ____
# ### 2. Data Description
# ____
# The raw data under consideration for this project is the "Collision Data" sourced from the Seattle Police Department for the years 2004-2020. The actual dataset is much larger, but for this project I have limited its scope to focus on one variable, "Injury Collision".
#
# Preliminary analysis suggests that the data is mostly clean and complete. However, some cleaning might be required to make it ideal for modeling and analysis. The size of the dataset is approx. 195K rows and 38 columns.
# The dataset is of medium complexity, as there are multiple variables that can potentially impact the severity of a collision. The data is of mixed nature, with integer, float, date and categorical variables present. This means it will require preprocessing and potentially normalization.
#
# Note: Data is missing following important variables:
# - Age
# - Gender
# - Make/Model of the vehicle
# <a id='2.1'></a>
# _____
# ### 2.1 Initialize:
# Import/Load all the required packages and the dataset
# _____
import pandas as pd
import numpy as np
# !pip install pyspark==2.4.5
# !pip install pyspark[sql]
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession, Row
from pyspark.ml.stat import Correlation, Summarizer
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression, RandomForestClassifier, GBTClassifier, LinearSVC
from pyspark.ml.feature import PCA, VectorAssembler
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import MulticlassClassificationEvaluator
# %%capture
# ! pip install seaborn
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
# %matplotlib inline
mpl.style.use(['ggplot'])
# +
path='https://s3.us.cloud-object-storage.appdomain.cloud/cf-courses-data/CognitiveClass/DP0701EN/version-2/Data-Collisions.csv'
df = pd.read_csv(path)
df.head(1)
# -
# <a id='2.2'></a>
# _____
# ### 2.2 Understanding Raw Data
# To get basic understanding (size,shape, etc.) of the dataset
# ____
print('Raw Data Dimensions (Rows/Columns):',df.shape)
print ("Column Names: ")
print("-------------------------------")
df.columns.values
df.index.values
df.info()
# <a id='3'></a>
# _____
# ### 3. ETL (Data Preparation, Cleaning, Wrangling, Manipulation and Check)
# ____
#
# +
# To be consistent, making all column labels as type string
df.columns = list(map(str, df.columns))
# Fixing datatype for DATETIME variables
df["INCDATE"] = pd.to_datetime(df["INCDATE"])
df["INCDTTM"] = pd.to_datetime(df["INCDTTM"])
# Renaming the Severity variables to improve readability
df["SEVERITYDESC"].replace("Property Damage Only Collision", "Property Damage", inplace=True)
df["SEVERITYDESC"].replace("Injury Collision", "Injury", inplace=True)
# Adding needed columns for analysis of Big Picture
df["COMB"] = df['SEVERITYDESC']+"/"+df['COLLISIONTYPE']+"/"+df['JUNCTIONTYPE']
df["COMB"] = df.COMB.astype(str)
df["COMB-COND"] = df['WEATHER']+"/"+df['ROADCOND']+"/"+df['LIGHTCOND']
df["COMB-COND"] = df["COMB-COND"].astype(str)
# Adding needed columns for analysis of DATETIME variables
df['DATE'] = pd.to_datetime(df['INCDTTM'], errors='coerce').dt.floor('D')
df['YEAR'] = pd.DatetimeIndex(df['INCDTTM']).year
df['MONTH'] = pd.DatetimeIndex(df['INCDTTM']).month
df['DAY'] = pd.DatetimeIndex(df['INCDTTM']).day
df['WEEKDAY'] = df['DATE'].dt.day_name()
df['WEEKDAYNUM'] = df['DATE'].dt.dayofweek
df['TIME'] = pd.DatetimeIndex(df['INCDTTM']).time
df['TIME2']=pd.to_datetime(df['INCDTTM']).dt.strftime('%I:%M %p')
df['TIME3']=pd.to_datetime(df['INCDTTM']).dt.strftime('%p')
# Adding needed columns for Business/Finance inspired metrics
bins = [1,3,6,9,12]
quarter = ["Q1","Q2","Q3","Q4"]
df['QUARTER'] = pd.cut(df['MONTH'], bins, labels=quarter,include_lowest=True)
df['QUARTER'] = df.QUARTER.astype(str)
df['YR-QTR'] = df['YEAR'].astype("str")+ "-" + df['QUARTER']
# Adding needed columns for Seasonal effect metrics
bins2 = [1,2,5,8,11,12]
season = ["WINTER","SPRING","SUMMER","FALL","WINTER"]
df['SEASON'] = pd.cut(df['MONTH'], bins2, labels=season,ordered=False, include_lowest=True)
bins3 = [0,1,2,3,4,5,6,7,8,9,10,11,12]
rainfall = [5.2,3.9,3.3,2.0,1.6,1.4,0.6,0.8,1.7,3.3,5.0,5.4]
df['AVGRAINFALL-INCHES'] = pd.cut(df['MONTH'], bins3, labels=rainfall,ordered=False, include_lowest=True)
temp = [45,48,52,56,64,69,72,73,67,59,51,47]
df['AVGTEMP-F'] = pd.cut(df['MONTH'], bins3, labels=temp,ordered=False, include_lowest=True)
daylight = [9,10,12,14,15,16,16,14,13,11,9,9]
df['AVGDAYLIGHT-HRS'] = pd.cut(df['MONTH'], bins3, labels=daylight,ordered=False, include_lowest=True)
df[['AVGRAINFALL-INCHES']] = df[['AVGRAINFALL-INCHES']].astype("float")
df[['AVGTEMP-F']] = df[["AVGTEMP-F"]].astype("int")
df[['AVGDAYLIGHT-HRS']] = df[["AVGDAYLIGHT-HRS"]].astype("int")
# Adding needed columns for analysis of GPS variable
df["GPS"] = round(df['X'],7).astype("str")+ ","+round(df['Y'],7).astype("str")
# Dropping unnecessary columns
df.drop(['OBJECTID','INCKEY','INTKEY','COLDETKEY','REPORTNO','STATUS','SEVERITYCODE.1','INCDTTM','INCDATE','EXCEPTRSNCODE','EXCEPTRSNDESC', 'SDOTCOLNUM', 'SEGLANEKEY', 'CROSSWALKKEY', 'ST_COLCODE'], axis=1, inplace=True)
df.head(1)
# list of columns after changes
df.columns
# To see if dataset has any missing rows
missing_data = df.isnull()
missing_data.head(1)
# To identify and list columns with missing values
#for column in missing_data.columns.values.tolist():
# print(column)
# print (missing_data[column].value_counts())
# print("________________________________________")
# Dropping missing data rows to make sure data is complete
df.dropna(subset=["X"], axis=0, inplace=True)
df.dropna(subset=["COLLISIONTYPE"], axis=0, inplace=True)
df.dropna(subset=["UNDERINFL"], axis=0, inplace=True)
df.dropna(subset=["ROADCOND"], axis=0, inplace=True)
df.dropna(subset=["JUNCTIONTYPE"], axis=0, inplace=True)
df.dropna(subset=["WEATHER"], axis=0, inplace=True)
df.dropna(subset=["LIGHTCOND"], axis=0, inplace=True)
# Drop incomplete data i.e. Year 2020
df.drop(df[df.YEAR > 2019].index, inplace=True)
# Reset index, because we dropped rows
df.reset_index(drop=True, inplace=True)
print('Data Dimensions (Rows/Columns) after cleaning:',df.shape)
df.head(1)
# Steps to prepare data for future analysis
# Converting Y/N to 1/0
df["UNDERINFL"].replace("N", 0, inplace=True)
df["UNDERINFL"].replace("Y", 1, inplace=True)
df["HITPARKEDCAR"].replace("N", 0, inplace=True)
df["HITPARKEDCAR"].replace("Y", 1, inplace=True)
# Filling missing values
df["PEDROWNOTGRNT"].replace(np.nan, 0, inplace=True)
df["PEDROWNOTGRNT"].replace("Y", 1, inplace=True)
df["SPEEDING"].replace(np.nan, 0, inplace=True)
df["SPEEDING"].replace("Y", 1, inplace=True)
df["INATTENTIONIND"].replace(np.nan, 0, inplace=True)
df["INATTENTIONIND"].replace("Y", 1, inplace=True)
# Correcting datatype
df[["UNDERINFL"]] = df[["UNDERINFL"]].astype("int")
df[["PEDROWNOTGRNT"]] = df[["PEDROWNOTGRNT"]].astype("int")
df[["SPEEDING"]] = df[["SPEEDING"]].astype("int")
df[["INATTENTIONIND"]] = df[["INATTENTIONIND"]].astype("int")
df[["HITPARKEDCAR"]] = df[["HITPARKEDCAR"]].astype("int")
df[['YEAR']] = df[['YEAR']].astype("int")
df[['MONTH']] = df[['MONTH']].astype("int")
df[['DAY']] = df[['DAY']].astype("int")
# Adding column for analysis of driver state of mind
df["COMB-MIND"] = df['INATTENTIONIND']+df['UNDERINFL']+df['SPEEDING']
df["COMB-MIND"] = df["COMB-MIND"].astype(int)
df.head(1)
# Check missing data
missing_data = df.isnull()
#for column in missing_data.columns.values.tolist():
# print(column)
# print (missing_data[column].value_counts())
# print("________________________________________")
if missing_data.values.any():
    print("----There is still missing data----")
else:
    print("----There is no missing data----")
# Print unique values and their counts for each column
col_name = df.columns.tolist()
row_num = df.index.tolist()
#for i,x in enumerate(col_name):
# print ("Unique value count of: ", x)
#print ("------------------------------------------")
#print(df[x].value_counts())
# print ("__________________________________________")
# create dummy variable to split SEVERITYDESC
dummy_var = pd.get_dummies(df["SEVERITYDESC"])
dum_list = dummy_var.columns.values.tolist()
dum_list2 = [x.upper() for x in dum_list]
#print(dum_list2)
dummy_var.columns = dum_list2
#dummy_var.head(1)
# create dummy variable to split COLLISIONTYPE
dummy_var1 = pd.get_dummies(df["COLLISIONTYPE"])
dum_list = dummy_var1.columns.values.tolist()
#dummy_var1.head(1)
dum_list2 = [x.upper() for x in dum_list]
#print(dum_list2)
dummy_var1.columns = dum_list2
dummy_var1.rename(columns={'OTHER':'COLLISIONTYPE-OTHER'}, inplace=True)
#dummy_var1.head(1)
# create dummy variable to split ROADCOND
dummy_var2 = pd.get_dummies(df["ROADCOND"])
dum_list = dummy_var2.columns.values.tolist()
#dummy_var2.head(1)
dum_list2 = [x.upper() for x in dum_list]
#print(dum_list2)
dummy_var2.columns = dum_list2
dummy_var2.rename(columns={'OTHER':'ROADCOND-OTHER'}, inplace=True)
dummy_var2.rename(columns={'UNKNOWN':'ROADCOND-UNKNOWN'}, inplace=True)
#dummy_var2.head(1)
# create dummy variable to split LIGHTCOND
dummy_var3 = pd.get_dummies(df["LIGHTCOND"])
dum_list = dummy_var3.columns.values.tolist()
#dummy_var3.head(1)
dum_list2 = [x.upper() for x in dum_list]
#print(dum_list2)
dummy_var3.columns = dum_list2
dummy_var3.rename(columns={'OTHER':'LIGHTCOND-OTHER'}, inplace=True)
dummy_var3.rename(columns={'UNKNOWN':'LIGHTCOND-UNKNOWN'}, inplace=True)
#dummy_var3.head(1)
# create dummy variable to split WEATHER
dummy_var4 = pd.get_dummies(df["WEATHER"])
dum_list = dummy_var4.columns.values.tolist()
#dummy_var3.head(1)
dum_list2 = [x.upper() for x in dum_list]
#print(dum_list2)
dummy_var4.columns = dum_list2
dummy_var4.rename(columns={'OTHER':'WEATHER-OTHER'}, inplace=True)
dummy_var4.rename(columns={'UNKNOWN':'WEATHER-UNKNOWN'}, inplace=True)
#dummy_var4.head(1)
# create dummy variable to split JUNCTIONTYPE
dummy_var5 = pd.get_dummies(df["JUNCTIONTYPE"])
dum_list = dummy_var5.columns.values.tolist()
#dummy_var3.head(1)
dum_list2 = [x.upper() for x in dum_list]
#print(dum_list2)
dummy_var5.columns = dum_list2
dummy_var5.rename(columns={'UNKNOWN':'JUNCTIONTYPE-UNKNOWN'}, inplace=True)
#dummy_var5.head(1)
## create dummy variable to split ADDRTYPE
dummy_var6 = pd.get_dummies(df["ADDRTYPE"])
dum_list = dummy_var6.columns.values.tolist()
#dummy_var3.head(1)
dum_list2 = [x.upper() for x in dum_list]
#print(dum_list2)
dummy_var6.columns = dum_list2
#dummy_var6.head(1)
# merge dummy variables with df_ds (dataframe initialized for the Data Science model)
df_ds = pd.concat([df, dummy_var,dummy_var1,dummy_var2,
dummy_var3,dummy_var4,dummy_var5,dummy_var6], axis=1)
# Dropping unnecessary columns
df_ds.drop(['SEVERITYCODE', 'ADDRTYPE','COLLISIONTYPE',
'JUNCTIONTYPE', 'SDOT_COLDESC','SDOT_COLCODE',
'WEATHER', 'ROADCOND', 'LIGHTCOND','ST_COLDESC'],
axis=1, inplace=True)
df_ds.head(1)
# -
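# The pd.cut month-binning used in the ETL above (months into quarters) can be sanity-checked in isolation; a minimal sketch with a handful of sample months:

```python
import pandas as pd

# Same bin edges and labels as the QUARTER column above
months = pd.Series([1, 2, 4, 7, 10, 12])
bins = [1, 3, 6, 9, 12]
quarter = ["Q1", "Q2", "Q3", "Q4"]

# include_lowest=True ensures January falls into Q1 rather than NaN
q = pd.cut(months, bins, labels=quarter, include_lowest=True)
```

Months 1 and 2 map to Q1, 4 to Q2, 7 to Q3, and 10 and 12 to Q4, matching the quarterly metric intended above.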
# <a id='4'></a>
# _____
# ### 4. Machine Learning
#
# <a id='4.1'></a>
# ____
# #### 4.1 Functions
# ____
# +
def spk_classification(classificationmodel,df_spk_train,df_spk_test):
#prediction_train = None
#prediction_test = None
global pipeline
pipeline = Pipeline(stages=[vectorAssembler,classificationmodel])
model_train = pipeline.fit(df_spk_train)
prediction_train = model_train.transform(df_spk_train)
#model_test = pipeline.fit(df_spk_test)
#prediction_test = model_test.transform(df_spk_test)
prediction_test = model_train.transform(df_spk_test)
spk_classificationevaluation(prediction_train,prediction_test,model_train)
def spk_classificationevaluation(prediction_train,prediction_test,model_train):
if classificationmodel == lr:
print("Model: Logistic Regression","\n--------------------------------------" )
if classificationmodel == rf:
print("Model: Random Forest","\n--------------------------------------" )
if classificationmodel == gbt:
print("Model: Gradient Boosted Tree","\n--------------------------------------" )
if classificationmodel == lsvc:
print("Model: Linear Support Vector Machine","\n--------------------------------------" )
global evaluation
evaluation = MulticlassClassificationEvaluator(
labelCol="INJURY", predictionCol="prediction", metricName="accuracy")
print("Description:", model_train.stages[1])
accuracy_train = evaluation.evaluate(prediction_train)
print_spk_testtrainsetinfo(df_spk_train,df_spk_test)
print("Trainset Accuracy:", round(accuracy_train,3))
print("Trainset Error = %g" % round((1.0 - accuracy_train),3))
accuracy_test = evaluation.evaluate(prediction_test)
print("Testset Accuracy:", round(accuracy_test,3))
print("Test Error = %g" % round((1.0 - accuracy_test),3))
spk_checkmodeloutput(prediction_train,prediction_test)
def spk_checkmodeloutput(prediction_train,prediction_test):
#prediction_test.createOrReplaceTempView("prediction_test")
#print("\nCheck Results: \n--------------------------------------" )
#prediction_test.select("Injury","rawPrediction","probability","prediction").show(2)
#spark.sql("""SELECT Injury, rawPrediction, probability, prediction
#from prediction_test where Injury=0 AND prediction =1""").show(2)
print("\nTrainset Results: " )
prediction_train.groupBy("Injury","prediction").count().show()
print("\nTestset Results: " )
prediction_test.groupBy("Injury","prediction").count().show()
def print_spk_testtrainsetinfo(df_spk_train,df_spk_test):
print("Trainset Size (%):",round(spk_trainsize,2),"\nTestset Size (%):", round(spk_testsize,2))
print("Trainset Count:",df_spk_train.count(),"\nTestset Count:", df_spk_test.count())
# -
def spk_crossvalidation(classificationmodel,pipeline,evaluation,df_spk_train,df_spk_test):
if classificationmodel == rf:
paramGrid = ParamGridBuilder() \
.addGrid(classificationmodel.numTrees, [3, 10]) \
.build()
if classificationmodel == lr:
paramGrid = ParamGridBuilder() \
.addGrid(classificationmodel.regParam, [0.1, 0.01]) \
.addGrid(classificationmodel.elasticNetParam, [0, 1]).build()
if classificationmodel == lsvc:
paramGrid = ParamGridBuilder() \
.addGrid(classificationmodel.regParam, [0.1, 0.01]) \
.addGrid(classificationmodel.maxIter, [3, 10]).build()
if classificationmodel == gbt:
paramGrid = ParamGridBuilder() \
.addGrid(classificationmodel.maxIter, [3, 10]) \
.build()
crossval = CrossValidator(estimator=pipeline, \
estimatorParamMaps=paramGrid,\
evaluator=evaluation, \
numFolds=4) # use 3+ folds in practice
cv_model = crossval.fit(df_spk_train)
cv_prediction = cv_model.transform(df_spk_test)
cv_selected = cv_prediction.select("INJURY","prediction")
print("Cross Validation Results:\n--------------------------------------")
cv_selected.groupBy("Injury","prediction").count().show()
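# CrossValidator above rotates one held-out fold at a time and averages the evaluator's metric over the folds; a minimal numpy sketch of the same index rotation (illustrative only, not the Spark implementation):

```python
import numpy as np

def kfold_indices(n_rows, n_folds, seed=12345):
    """Yield (train_idx, test_idx) pairs, holding out one fold at a time."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_rows)          # shuffle once
    folds = np.array_split(idx, n_folds)   # split into near-equal folds
    for i in range(n_folds):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != i])
        yield train_idx, test_idx

# With numFolds=4 as above, every row is tested exactly once across the folds
splits = list(kfold_indices(100, 4))
```

Each candidate parameter combination from ParamGridBuilder is fit and scored on every such split, and the combination with the best average metric wins.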
# <a id='4.2'></a>
# ____
# #### 4.2 PySpark Session and SQL
# ____
# +
sc = SparkContext.getOrCreate(SparkConf().setMaster("local[*]"))
spark = SparkSession \
.builder \
.getOrCreate()
spark.conf.set("spark.sql.execution.arrow.pyspark.enabled", "true")
# +
df_spark= df_ds[['INATTENTIONIND', 'UNDERINFL',
'PEDROWNOTGRNT', 'SPEEDING', 'HITPARKEDCAR',
'AVGRAINFALL-INCHES','AVGTEMP-F', 'AVGDAYLIGHT-HRS', 'INJURY',
'PROPERTY DAMAGE', 'ANGLES', 'CYCLES', 'HEAD ON', 'LEFT TURN',
'COLLISIONTYPE-OTHER', 'PARKED CAR', 'PEDESTRIAN', 'REAR ENDED',
'RIGHT TURN', 'SIDESWIPE', 'DRY', 'ICE', 'OIL', 'ROADCOND-OTHER',
'SAND/MUD/DIRT', 'SNOW/SLUSH', 'STANDING WATER', 'ROADCOND-UNKNOWN',
'WET', 'DARK - NO STREET LIGHTS', 'DARK - STREET LIGHTS OFF',
'DARK - STREET LIGHTS ON', 'DAWN', 'DAYLIGHT', 'DUSK',
'LIGHTCOND-OTHER', 'LIGHTCOND-UNKNOWN', 'BLOWING SAND/DIRT', 'CLEAR',
'FOG/SMOG/SMOKE', 'WEATHER-OTHER', 'OVERCAST', 'RAINING',
'SEVERE CROSSWIND', 'SLEET/HAIL/FREEZING RAIN', 'SNOWING',
'WEATHER-UNKNOWN', 'AT INTERSECTION (BUT NOT RELATED TO INTERSECTION)',
'AT INTERSECTION (INTERSECTION RELATED)', 'DRIVEWAY JUNCTION',
'MID-BLOCK (BUT INTERSECTION RELATED)',
'MID-BLOCK (NOT RELATED TO INTERSECTION)', 'RAMP JUNCTION',
'JUNCTIONTYPE-UNKNOWN', 'BLOCK', 'INTERSECTION']]
df_spark.columns = ['INATTENTIONIND', 'UNDERINFL',
'PEDROWNOTGRNT', 'SPEEDING', 'HITPARKEDCAR',
'AVGRAINFALL_INCHES','AVGTEMP_F', 'AVGDAYLIGHT_HRS', 'INJURY',
'PROPERTY_DAMAGE', 'ANGLES', 'CYCLES', 'HEAD_ON', 'LEFT_TURN',
'COLLISIONTYPE_OTHER', 'PARKED_CAR', 'PEDESTRIAN', 'REAR_ENDED',
'RIGHT_TURN', 'SIDESWIPE', 'DRY', 'ICE', 'OIL', 'ROADCOND_OTHER',
'SAND_MUD_DIRT', 'SNOW_SLUSH', 'STANDING_WATER', 'ROADCOND_UNKNOWN',
'WET', 'DARK_NO_STREET_LIGHTS', 'DARK_STREET_LIGHTS_OFF',
'DARK_STREET_LIGHTS_ON', 'DAWN', 'DAYLIGHT', 'DUSK',
'LIGHTCOND_OTHER', 'LIGHTCOND_UNKNOWN', 'BLOWING_SAND_DIRT', 'CLEAR',
'FOG_SMOG_SMOKE', 'WEATHER_OTHER', 'OVERCAST', 'RAINING',
'SEVERE_CROSSWIND', 'SLEET_HAIL_FREEZING_RAIN', 'SNOWING',
'WEATHER_UNKNOWN', 'AT_INTERSECTION_BUT_NOT_RELATED_TO_INTERSECTION',
'AT_INTERSECTION_INTERSECTION_RELATED', 'DRIVEWAY_JUNCTION',
'MID_BLOCK_BUT_INTERSECTION_RELATED',
'MID_BLOCK_NOT RELATED_TO_INTERSECTION', 'RAMP_JUNCTION',
'JUNCTIONTYPE_UNKNOWN', 'BLOCK', 'INTERSECTION']
df_spark.columns = list(map(str, df_spark.columns))
df_spk_use = spark.createDataFrame(df_spark)
df_spk_use.createOrReplaceTempView("df_spk_use")
#spark.sql("SELECT * from df_spk_use").show()
# -
#df_spk_use.printSchema()
#df_spk_use.select("Injury","Property_Damage").show(3)
#df_spk_use.filter((df_spk_use["Injury"] > 0) & (df_spk_use["UNDERINFL"] > 0)).show(1)
#df_spk_use.groupBy("Injury","Property_Damage","Clear", "ANGLES").count().show()
#spark.sql("""SELECT SEVERITYDESC, INJURY,PROPERTY_DAMAGE, ANGLES,
# CYCLES, HEAD_ON, LEFT_TURN from df_spk_use""").show(2)
#spark.sql("SELECT INJURY as count from df_spk_use").count()
#spark.sql("SELECT INJURY as cnt from df_spk_use").first().cnt
'''spark.sql("""SELECT count(Injury) as Count,
max(Injury) as Max,
min(Injury) as Min,
round(mean(Injury),3) as Mean,
round(stddev_pop(Injury),3) as Std,
round(skewness(Injury),3) as Skewness,
round(kurtosis(injury),3) as Kurtosis
from df_spk_use""").show()'''
#spark.sql("SELECT round(corr(Injury,Clear),3) as Correlation from df_spk_use").show()
#Injury = spark.sql("SELECT Injury, clear, dry, angles FROM df_spk_use WHERE clear > 0 AND dry < 1")
#Injury.show(3)
#df_spk_use["Injury", "Clear", "Dry"].describe().show()
vectorAssembler = VectorAssembler(inputCols= ['INATTENTIONIND', 'UNDERINFL',
'PEDROWNOTGRNT', 'SPEEDING', 'HITPARKEDCAR',
'AVGRAINFALL_INCHES','AVGTEMP_F', 'AVGDAYLIGHT_HRS','ANGLES',  # label column 'INJURY' removed from features to avoid target leakage
'CYCLES', 'HEAD_ON', 'LEFT_TURN',
'PARKED_CAR', 'PEDESTRIAN', 'REAR_ENDED',
'RIGHT_TURN', 'SIDESWIPE', 'DRY',
'DARK_NO_STREET_LIGHTS', 'DARK_STREET_LIGHTS_OFF',
'DARK_STREET_LIGHTS_ON', 'DAYLIGHT',
'CLEAR','OVERCAST', 'RAINING',
'AT_INTERSECTION_BUT_NOT_RELATED_TO_INTERSECTION',
'AT_INTERSECTION_INTERSECTION_RELATED',
'MID_BLOCK_BUT_INTERSECTION_RELATED',
'MID_BLOCK_NOT RELATED_TO_INTERSECTION', 'RAMP_JUNCTION'],
outputCol="features")
var_topredict = "INJURY"
lr = LogisticRegression(maxIter = 10, regParam = 0.3, elasticNetParam = 0.8,labelCol=var_topredict)
rf = RandomForestClassifier(labelCol= var_topredict, featuresCol="features", numTrees=10)
gbt = GBTClassifier(labelCol=var_topredict, featuresCol="features", maxIter=10)
lsvc = LinearSVC(maxIter=10, regParam=0.1,labelCol=var_topredict)
# <a id='4.3'></a>
# ____
# #### 4.3 Model 1 - Logistic Regression
# _____
classificationmodel = lr
spk_trainsize = 0.75
spk_testsize = 1 - spk_trainsize
df_spk_train, df_spk_test = df_spk_use.randomSplit([spk_trainsize, spk_testsize], seed=12345)
spk_classification(classificationmodel,df_spk_train,df_spk_test)
spk_crossvalidation(classificationmodel,pipeline,evaluation,df_spk_train,df_spk_test)
# <a id='4.4'></a>
# ____
# #### 4.4 Model 2 - Random Forest
# _____
classificationmodel = rf
spk_trainsize = 0.8
spk_testsize = 1 - spk_trainsize
df_spk_train, df_spk_test = df_spk_use.randomSplit([spk_trainsize, spk_testsize], seed=12345)
spk_classification(classificationmodel,df_spk_train,df_spk_test)
spk_crossvalidation(classificationmodel,pipeline,evaluation,df_spk_train,df_spk_test)
# <a id='4.5'></a>
# ____
# #### 4.5 Model 3 - Gradient Boosted Tree
# _____
classificationmodel = gbt
spk_trainsize = 0.85
spk_testsize = 1 - spk_trainsize
df_spk_train, df_spk_test = df_spk_use.randomSplit([spk_trainsize, spk_testsize], seed=12345)
spk_classification(classificationmodel,df_spk_train,df_spk_test)
spk_crossvalidation(classificationmodel,pipeline,evaluation,df_spk_train,df_spk_test)
# <a id='4.6'></a>
# ____
# #### 4.6 Model 4 - Linear Support Vector
# _____
classificationmodel = lsvc
spk_trainsize = 0.82
spk_testsize = 1 - spk_trainsize
df_spk_train, df_spk_test = df_spk_use.randomSplit([spk_trainsize, spk_testsize], seed=12345)
spk_classification(classificationmodel,df_spk_train,df_spk_test)
spk_crossvalidation(classificationmodel,pipeline,evaluation,df_spk_train,df_spk_test)
# <a id='5'></a>
# ____
# ### 5. Conclusion
#
# The accuracy scores across all four models are high. Given the imbalance between injury and property-damage collisions, a complementary metric such as weighted F1 or balanced accuracy would make this conclusion more robust.
#
# ______
#
#
#
# <strong><center>Thank You! :)</center></strong>
# _____
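# As a follow-up to the conclusion: when classes are imbalanced, raw accuracy can look strong while one class is barely predicted. A toy confusion matrix (illustrative numbers, not this project's results) makes the gap visible:

```python
import numpy as np

# Toy confusion matrix: rows = actual class, columns = predicted class
cm = np.array([[85, 5],
               [ 8, 2]])

accuracy = np.trace(cm) / cm.sum()               # overall fraction correct
per_class_recall = np.diag(cm) / cm.sum(axis=1)  # recall for each class
balanced_accuracy = per_class_recall.mean()      # mean of per-class recalls
```

Here accuracy is 0.87 while balanced accuracy is only about 0.57, because the minority class is rarely predicted correctly.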
|
ML using PySpark.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
from collections import defaultdict
import json
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
## import xgboost
import math
import matplotlib
from time import time
import seaborn as sns; sns.set(style="ticks", color_codes=True)
from sklearn.model_selection import train_test_split
from sklearn.ensemble import ExtraTreesRegressor, RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.feature_selection import RFE
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
from sklearn.linear_model import LinearRegression
from sklearn import tree, linear_model
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import explained_variance_score
import sklearn.model_selection as curves  # learning_curve moved here from sklearn.learning_curve
from scipy.stats import pearsonr
import matplotlib.pyplot as plt
import os
from cycler import cycler
from matplotlib import rcParams
import matplotlib.cm as cm
import matplotlib as mpl
# +
dark2_colors = [(0.10588235294117647, 0.6196078431372549, 0.4666666666666667),
(0.8509803921568627, 0.37254901960784315, 0.00784313725490196),
(0.4588235294117647, 0.4392156862745098, 0.7019607843137254),
(0.9058823529411765, 0.1607843137254902, 0.5411764705882353),
(0.4, 0.6509803921568628, 0.11764705882352941),
(0.9019607843137255, 0.6705882352941176, 0.00784313725490196),
(0.6509803921568628, 0.4627450980392157, 0.11372549019607843)]
rcParams['figure.figsize'] = (10, 6)
rcParams['figure.dpi'] = 150
rcParams['axes.prop_cycle'] = cycler('color',dark2_colors)
rcParams['lines.linewidth'] = 2
rcParams['axes.facecolor'] = 'white'
rcParams['font.size'] = 14
rcParams['patch.edgecolor'] = 'white'
rcParams['patch.facecolor'] = dark2_colors[0]
rcParams['font.family'] = 'StixGeneral'
# +
dataset = pd.read_csv("train.csv", names=['Store','Dept','Date','weeklySales','isHoliday'],sep=',', header=0)
features = pd.read_csv("features.csv",sep=',', header=0,
names=['Store','Date','Temperature','Fuel_Price','MarkDown1','MarkDown2','MarkDown3','MarkDown4',
'MarkDown5','CPI','Unemployment','IsHoliday']).drop(columns=['IsHoliday'])
stores = pd.read_csv("stores.csv", names=['Store','Type','Size'],sep=',', header=0)
dataset = dataset.merge(stores, how='left').merge(features, how='left')
data= dataset.iloc[:,[0,1,2,3,4,5,6,7,8,14,15]]
data.head()
# -
features = dataset.iloc[:,[0,1,2,3,6,7,8,14,15]].columns.tolist()
target = dataset.iloc[:,4].name
target1 = dataset.iloc[:,5].name
smaller_frame = df[['Radius', 'Texture', 'Perimeter']]
from pandas.plotting import scatter_matrix  # pandas.tools.plotting was removed
al = scatter_matrix(smaller_frame, alpha=0.8, figsize=(12, 12), diagonal="kde")
for a in al.flatten():
    a.grid(False)
smaller_frame.corr()
|
FPP.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: venv38
# language: python
# name: venv38
# ---
# +
from typing import Dict
import matplotlib.pyplot as plt
import nlp
import numpy as np
import pandas as pd
import torch
import transformers
from captum.attr import (IntegratedGradients, LayerIntegratedGradients,
configure_interpretable_embedding_layer,
remove_interpretable_embedding_layer)
from tqdm.notebook import tqdm
from captum.attr import visualization as viz
from torch.utils.data import TensorDataset
from transformers import (ElectraForSequenceClassification,
ElectraTokenizerFast, EvalPrediction, InputFeatures,
Trainer, TrainingArguments, glue_compute_metrics)
import tensorflow as tf
transformers.__version__
# +
model = ElectraForSequenceClassification.from_pretrained(
"google/electra-small-discriminator", num_labels = 3)
tokenizer = ElectraTokenizerFast.from_pretrained(
"google/electra-small-discriminator", do_lower_case=True)
# +
df = pd.read_csv('./../Naive_Bayes/tweets/allLabeledTweets.csv')
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(df.index.values,
df.label.values,
test_size=0.15,
random_state=42,
stratify=df.label.values)
df['data_type'] = ['not_set']*df.shape[0]
df.loc[X_train, 'data_type'] = 'train'
df.loc[X_val, 'data_type'] = 'val'
df.groupby(['label', 'data_type']).count()
# -
df[df.data_type=='train']['label'].value_counts()
# +
df_val = [df[df.data_type=='val'].message_lowercase, df[df.data_type=='val'].label]
df_train = [df[df.data_type=='train'].message_lowercase, df[df.data_type=='train'].label]
df_train = pd.concat(df_train, axis=1, keys=["message", "label"])
df_0 = df_train[df_train['label']==0]
df_1 = df_train[df_train['label']==1]
df_2 = df_train[df_train['label']==2]
df_0_downsampled = df_0.sample(df_1.shape[0])
df_2_downsampled = df_2.sample(df_1.shape[0])
df_train = pd.concat([df_0_downsampled, df_2_downsampled, df_1])
df_train['label'].value_counts()
# +
# Encode the downsampled, balanced training frame (df_train) built above, so the
# downsampling actually takes effect; labels come from the same frame
encoded_data_train = tokenizer.batch_encode_plus(
    df_train.message.values.tolist(),
    add_special_tokens=True,
    return_attention_mask=True,
    padding='max_length',
    truncation=True,
    max_length=256,
    return_tensors='pt'
)
encoded_data_val = tokenizer.batch_encode_plus(
    df[df.data_type=='val'].message_lowercase.values.tolist(),
    add_special_tokens=True,
    return_attention_mask=True,
    padding='max_length',
    truncation=True,
    max_length=256,
    return_tensors='pt'
)
input_ids_train = encoded_data_train['input_ids']
attention_masks_train = encoded_data_train['attention_mask']
labels_train = torch.tensor(df_train.label.values)
input_ids_val = encoded_data_val['input_ids']
attention_masks_val = encoded_data_val['attention_mask']
labels_val = torch.tensor(df[df.data_type=='val'].label.values)
dataset_train = TensorDataset(input_ids_train, attention_masks_train, labels_train)
dataset_val = TensorDataset(input_ids_val, attention_masks_val, labels_val)
# +
from torch.utils.data import DataLoader, RandomSampler, SequentialSampler
batch_size = 3
dataloader_train = DataLoader(dataset_train,
sampler=RandomSampler(dataset_train),
batch_size=batch_size)
dataloader_validation = DataLoader(dataset_val,
sampler=SequentialSampler(dataset_val),
batch_size=batch_size)
# +
from transformers import AdamW, get_linear_schedule_with_warmup
optimizer = AdamW(model.parameters(),
lr=1e-5,
eps=1e-8)
epochs = 5
scheduler = get_linear_schedule_with_warmup(optimizer,
num_warmup_steps=0,
num_training_steps=len(dataloader_train)*epochs)
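# get_linear_schedule_with_warmup scales the base learning rate by a per-step multiplier; a small sketch of that multiplier (intended to mirror the Transformers formula, with zero warmup as configured above):

```python
def linear_lr_factor(step, num_warmup, num_training):
    """Multiplier applied to the base lr at a given optimizer step."""
    if step < num_warmup:
        return step / max(1, num_warmup)  # linear ramp-up during warmup
    # linear decay from 1.0 down to 0.0 over the remaining steps
    return max(0.0, (num_training - step) / max(1, num_training - num_warmup))

# With num_warmup_steps=0 the rate simply decays linearly from lr to 0
start, halfway = linear_lr_factor(0, 0, 100), linear_lr_factor(50, 0, 100)
```

So with lr=1e-5 and no warmup, the effective rate is 1e-5 at step 0, 5e-6 halfway through training, and 0 at the final step.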
# +
from sklearn.metrics import f1_score
def f1_score_func(preds, labels):
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
return f1_score(labels_flat, preds_flat, average='weighted')
def accuracy_per_class(preds, labels):
label_dict = {0: 0, 1: 1, 2: 2,}
label_dict_inverse = {v: k for k, v in label_dict.items()}
preds_flat = np.argmax(preds, axis=1).flatten()
labels_flat = labels.flatten()
for label in np.unique(labels_flat):
y_preds = preds_flat[labels_flat==label]
y_true = labels_flat[labels_flat==label]
print(f'Class: {label_dict_inverse[label]}')
print(f'Accuracy: {len(y_preds[y_preds==label])}/{len(y_true)}\n')
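# A quick sanity check of f1_score_func on hypothetical logits; the helper is redefined here so the snippet is self-contained:

```python
import numpy as np
from sklearn.metrics import f1_score

def f1_score_func(preds, labels):
    # same logic as the notebook helper: argmax over logits, weighted F1
    preds_flat = np.argmax(preds, axis=1).flatten()
    return f1_score(labels.flatten(), preds_flat, average='weighted')

# Hypothetical logits for 4 tweets over 3 sentiment classes;
# the last row is misclassified (predicted 0, true label 1)
toy_logits = np.array([[2.0, 0.1, 0.1],
                       [0.1, 3.0, 0.2],
                       [0.3, 0.2, 1.5],
                       [1.2, 0.5, 0.1]])
toy_labels = np.array([0, 1, 2, 1])
score = f1_score_func(toy_logits, toy_labels)
```

The weighted average counts each class's F1 in proportion to its support, which is why the single error drags down both the class-0 and class-1 scores here.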
# +
import random
import numpy as np
seed_val = 17
random.seed(seed_val)
np.random.seed(seed_val)
torch.manual_seed(seed_val)
torch.cuda.manual_seed_all(seed_val)
def evaluate(dataloader_val):
model.eval()
loss_val_total = 0
predictions, true_vals = [], []
for batch in dataloader_val:
batch = tuple(b.to(torch.device('cpu')) for b in batch)
inputs = {'input_ids': batch[0], 'attention_mask': batch[1], 'labels': batch[2]}
with torch.no_grad():
outputs = model(**inputs)
loss = outputs[0]
logits = outputs[1]
loss_val_total += loss.item()
logits = logits.detach().cpu().numpy()
label_ids = inputs['labels'].cpu().numpy()
predictions.append(logits)
true_vals.append(label_ids)
loss_val_avg = loss_val_total/len(dataloader_val)
predictions = np.concatenate(predictions, axis=0)
true_vals = np.concatenate(true_vals, axis=0)
return loss_val_avg, predictions, true_vals
# -
for epoch in tqdm(range(1, epochs+1)):
model.train()
loss_train_total = 0
progress_bar = tqdm(dataloader_train, desc='Epoch {:1d}'.format(epoch), leave=False, disable=False)
for batch in progress_bar:
model.zero_grad()
batch = tuple(b.to(torch.device('cpu')) for b in batch)
inputs = {'input_ids': batch[0],
'attention_mask': batch[1],
'labels': batch[2],
}
outputs = model(**inputs)
loss = outputs[0]
loss_train_total += loss.item()
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
progress_bar.set_postfix({'training_loss': '{:.3f}'.format(loss.item())})  # loss is already the mean over the batch
torch.save(model.state_dict(), f'models/finetuned_electra_epoch_{epoch}.model')
tqdm.write(f'\nEpoch {epoch}')
loss_train_avg = loss_train_total/len(dataloader_train)
tqdm.write(f'Training loss: {loss_train_avg}')
val_loss, predictions, true_vals = evaluate(dataloader_validation)
val_f1 = f1_score_func(predictions, true_vals)
tqdm.write(f'Validation loss: {val_loss}')
tqdm.write(f'F1 Score (Weighted): {val_f1}')
# +
model.load_state_dict(torch.load('models/finetuned_electra_epoch_5.model', map_location=torch.device('cpu')))
_, predictions, true_vals = evaluate(dataloader_validation)
accuracy_per_class(predictions, true_vals)
# -
print(true_vals)
print(predictions)
# +
preds_flat = np.argmax(predictions, axis=1).flatten()
labels_flat = true_vals.flatten()
print(preds_flat)
print(labels_flat)
# -
tf.math.confusion_matrix(
labels_flat, preds_flat, num_classes=3, weights=None, dtype=tf.dtypes.int32,
name=None
)
labels_flat.shape
# +
from sklearn.metrics import classification_report, accuracy_score, confusion_matrix
print(classification_report(labels_flat, preds_flat, zero_division=0))
pd.DataFrame(
confusion_matrix(labels_flat, preds_flat),
index = [['actual', 'actual', 'actual'], ['neutral', 'positive', 'negative']],
columns = [['predicted', 'predicted', 'predicted'], ['neutral', 'positive', 'negative']])
# -
|
electra/electra.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="99i5oWpdzFO_" outputId="b3a15605-047a-40a7-abca-7a3c719433ce"
#Retrieving dataset from Google Drive
MODE = "MOUNT"
from google.colab import drive
drive.mount._DEBUG = False
if MODE == "MOUNT":
drive.mount('/content/drive', force_remount=True)
elif MODE == "UNMOUNT":
try:
drive.flush_and_unmount()
except ValueError:
pass
# + id="4NVzH1dY-cYR"
from keras.models import Sequential, Model
from keras.layers import ConvLSTM2D, Dense, Dropout, Activation, Embedding, Flatten, BatchNormalization, Input, Convolution2D, MaxPooling2D, ZeroPadding2D, AveragePooling2D, GlobalAveragePooling2D, LSTM  # 'merge' was removed in Keras 2
from keras.utils import np_utils
from keras.models import model_from_json
from keras import backend as K
from keras.utils.data_utils import get_file
from keras.applications.resnet import ResNet50
#Preprocessing Input to suit the needs of ResNet and avoid the need for Batch normalization
from keras.applications.resnet import preprocess_input
from tensorflow.keras.optimizers import Adam, SGD
from tensorflow.keras.utils import to_categorical
from keras import applications
from keras.preprocessing.image import ImageDataGenerator, load_img
from keras.callbacks import ModelCheckpoint, EarlyStopping
from keras import layers
from keras.models import load_model
import seaborn as sns
from sklearn.metrics import confusion_matrix
import random
import os
import cv2
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
# %matplotlib inline
# + colab={"base_uri": "https://localhost:8080/", "height": 224} id="ESaBIo-BR2V-" outputId="5965a9e8-eb1f-42b8-aca2-ab8d3f84afbf"
PATH_TO_DATA_SET = '/content/drive/My Drive/resized128N/resized128N'
epochs = 10
batch_size = 32
image_size = 128
input_shape = (image_size, image_size, 3)
#Loading training dataset into a dataframe
data_set = os.listdir(PATH_TO_DATA_SET)
categories = []
for filename in data_set:
category = filename.split('.')[0]
categories.append(category)
df = pd.DataFrame({
'filename': data_set,
'category': categories
})
# dance_map = { 'ballet': 0, 'bharatnatyam': 1, 'break': 2, 'flamenco': 3 }
# df['category'] = df['category'].map(dance_map)
unique_vals, unique_vals_count = np.unique(df['category'], return_counts = True)
print(unique_vals, unique_vals_count)
# df['category'] = to_categorical(df['category'], num_classes = 4)
# print(df['category'].unique())
train_df = df
df.head(5)
# + id="PJ4CQcB-TgxO"
#Splitting data into train, test and validation sets
ind = np.random.rand(len(train_df)) < 0.8 #80% of data is training
train_df, rem_df = train_df[ind], train_df[~ind]
train_df = train_df.reset_index()
rem_df = rem_df.reset_index()
ind = np.random.rand(len(rem_df)) < 0.5 #50% of the remaining data is split between validation and test
validate_df, test_df = rem_df[ind], rem_df[~ind]
test_df = test_df.reset_index()
validate_df = validate_df.reset_index()
total_train = train_df.shape[0]
total_validate = validate_df.shape[0]
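# The boolean-mask split above leaves the exact train fraction slightly random rather than exactly 80%; a quick sketch (with a fixed seed, an assumption for reproducibility) shows it lands close to the target:

```python
import numpy as np

np.random.seed(42)                    # fixed seed so the demonstration is reproducible
mask = np.random.rand(10_000) < 0.8   # True for ~80% of rows, as in the split above
train_frac = mask.mean()              # close to, but not exactly, 0.8
```

For an exact split, `train_test_split` from scikit-learn or `DataFrame.sample(frac=0.8)` could be used instead; the mask approach trades exactness for simplicity.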
# + colab={"base_uri": "https://localhost:8080/"} id="KdoAu-6uSpx2" outputId="70244e14-7a81-4c46-a541-0b180bdef4bb"
# preprocess_input already scales/centres inputs for ResNet, so an extra
# rescale=1/255 would double-transform the images and is omitted
train_datagen = ImageDataGenerator(
    dtype='float32',
    preprocessing_function=preprocess_input
)
train_generator = train_datagen.flow_from_dataframe(
    train_df,
    PATH_TO_DATA_SET,
    x_col='filename',
    y_col='category',
    class_mode='categorical',
    target_size=(image_size, image_size),
    batch_size=batch_size
)
validation_datagen = ImageDataGenerator(
    dtype='float32',
    preprocessing_function=preprocess_input
)
validation_generator = validation_datagen.flow_from_dataframe(
    validate_df,
    PATH_TO_DATA_SET,
    x_col='filename',
    y_col='category',
    class_mode='categorical',
    target_size=(image_size, image_size),
    batch_size=batch_size
)
test_datagen = ImageDataGenerator(
    dtype='float32',
    preprocessing_function=preprocess_input
)
test_generator = test_datagen.flow_from_dataframe(
test_df,
PATH_TO_DATA_SET,
x_col='filename',
y_col='category',
class_mode='categorical',
target_size=(image_size, image_size),
batch_size=batch_size
)
# + colab={"base_uri": "https://localhost:8080/"} id="MuoiEmPxRmlp" outputId="e6d5aff8-545d-4ee8-d86e-e100c05332eb"
from keras.layers import Input, Lambda, LSTM, Dense
from keras.models import Sequential
from keras.optimizers import Adam
model_lstm = Sequential()
# Keep only channel 0: (128, 128, 3) -> (128, 128), which the LSTMs read as
# 128 timesteps (image rows) of 128 features (pixel columns)
model_lstm.add(Lambda(lambda x: x[:, :, :, 0], input_shape=input_shape))
model_lstm.add(LSTM(units=256, return_sequences=True))
model_lstm.add(LSTM(units=128, return_sequences=True))
model_lstm.add(LSTM(units=64))
model_lstm.add(Dense(128))
model_lstm.add(Dense(6, activation='sigmoid'))
model_lstm.compile(loss='binary_crossentropy', optimizer=Adam(clipvalue=0.5), metrics=['accuracy'])
model_lstm.summary()
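The Lambda layer in the model above does the shape conversion that lets LSTMs consume images. A standalone numpy sketch of what it produces (the batch here is random stand-in data):

```python
import numpy as np

# A fake batch of 2 RGB images, matching input_shape = (128, 128, 3)
batch = np.random.rand(2, 128, 128, 3).astype('float32')

# The Lambda layer keeps only channel 0: (batch, 128, 128, 3) -> (batch, 128, 128)
single_channel = batch[:, :, :, 0]

# The first LSTM then reads each image as a sequence of 128 timesteps
# (one image row per step), each with 128 features (the pixel columns)
timesteps, features = single_channel.shape[1], single_channel.shape[2]
print(single_channel.shape, timesteps, features)
```

Note this discards the green and blue channels entirely; it is a cheap way to feed 2-D images to recurrent layers rather than a principled design.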
# + colab={"base_uri": "https://localhost:8080/"} id="KqoSYtOpnf9C" outputId="c740bd21-29f8-4119-a3f2-f89a889d842a"
# !pip install visualkeras
# + colab={"base_uri": "https://localhost:8080/", "height": 754} id="OPDU8x1znjvD" outputId="fcc5fe73-3dfe-4e34-aa8c-5fe600af9fc6"
import visualkeras
# visualkeras.layered_view(model_lstm).show() # display using your system viewer
# visualkeras.layered_view(model_lstm, to_file='lstm_model.png').show() # write and show
# visualkeras.layered_view(model_lstm, legend=True)
tf.keras.utils.plot_model(model_lstm, to_file='lstm_model_arch.png', show_shapes=True)
# + colab={"base_uri": "https://localhost:8080/"} id="CwNxCH-RSjY1" outputId="855b6060-21a8-4f15-a4bc-141e98ab2187"
checkpoint = ModelCheckpoint(
'lstm_best_model.hdf5',
monitor='val_loss',
verbose=1,
save_best_only=True,
save_weights_only=False,
mode='auto', period=1)
early = EarlyStopping(
monitor='val_loss',
min_delta=0.1,
patience=5,
verbose=1,
mode='auto')
lstm_history = model_lstm.fit(train_generator,
epochs=epochs,
validation_data=validation_generator,
validation_steps=total_validate//batch_size,
steps_per_epoch=total_train//batch_size,
callbacks = [checkpoint, early])
# + id="aHkaBlibSmuz" colab={"base_uri": "https://localhost:8080/", "height": 333} outputId="e7f87370-cc31-4dad-ded7-2806b5574050"
import matplotlib.pyplot as plt
print(lstm_history.history)
plt.plot(lstm_history.history["accuracy"])
plt.plot(lstm_history.history['val_accuracy'])
plt.plot(lstm_history.history['loss'])
plt.plot(lstm_history.history['val_loss'])
plt.title("Training history")
plt.ylabel("Value")
plt.xlabel("Epoch")
plt.legend(["Accuracy", "Validation Accuracy", "Loss", "Validation Loss"])
plt.show()
# + id="JQTdg6RORSRC"
def plot_confusion_matrix(cm, classes, title, ax):
ax.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
ax.text(j, i, cm[i, j],
horizontalalignment="center",
color="white" if cm[i, j] > cm.max() / 2. else "black")
tick_marks = np.arange(len(classes))
ax.set_xticks(tick_marks), ax.xaxis.set_ticklabels(classes)
ax.set_yticks(tick_marks), ax.yaxis.set_ticklabels(classes)
ax.set_xlabel('Predicted')
ax.set_ylabel('Truth')
ax.set_title(title)
ax.grid(False)
def plot_multiclass_confusion_matrix(y_true, y_pred, label_to_class, save_plot=False):
fig, axes = plt.subplots(int(np.ceil(len(label_to_class) / 3)), 3, figsize=(10, 12))
axes = axes.flatten()
for i, conf_matrix in enumerate(multilabel_confusion_matrix(y_true, y_pred)):
tn, fp, fn, tp = conf_matrix.ravel()
f1 = 2 * tp / (2 * tp + fp + fn + sys.float_info.epsilon)
recall = tp / (tp + fn + sys.float_info.epsilon)
precision = tp / (tp + fp + sys.float_info.epsilon)
plot_confusion_matrix(
np.array([[tp, fn], [fp, tn]]),
classes=['+', '-'],
title=f'Label: {label_to_class[i]}\nf1={f1:.5f}\nrecall={recall:.5f}\nprecision={precision:.5f}',
ax=axes[i]
)
plt.tight_layout()
if save_plot:
plt.savefig('confusion_matrices.png', dpi=50)
# + colab={"base_uri": "https://localhost:8080/", "height": 754} id="CTT92oluQ_sC" outputId="b1eee7a4-2c44-420c-880c-79a9ac7ad651"
import sys
import itertools
from sklearn.metrics import multilabel_confusion_matrix
label_to_class = {v: k for k, v in train_generator.class_indices.items()}
def array_to_labels(onehot_array, label_to_class):
    idx = np.where(onehot_array == 1)[0]
    return [label_to_class[i] for i in idx]
nr_batches = 10
threshold = 0.5
img_iter_val_0, img_iter_val_1 = itertools.tee(validation_generator, 2)
y_true = np.vstack([next(img_iter_val_0)[1] for _ in range(nr_batches)]).astype('int')
y_pred = (model_lstm.predict_generator(img_iter_val_1, steps=nr_batches) > threshold).astype('int')
plot_multiclass_confusion_matrix(y_true, y_pred, label_to_class, save_plot=False)
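`array_to_labels` converts a thresholded multi-hot prediction row back into class names. A self-contained check of that mapping (the `label_to_class` dict here is a stand-in for the one built from `train_generator.class_indices`):

```python
import numpy as np

def array_to_labels(onehot_array, label_to_class):
    # indices of the classes predicted "on", mapped back to their names
    idx = np.where(onehot_array == 1)[0]
    return [label_to_class[i] for i in idx]

# stand-in for the mapping derived from train_generator.class_indices
label_to_class = {0: 'ballet', 1: 'bharatnatyam', 2: 'break', 3: 'flamenco'}

row = np.array([0, 1, 0, 0])
print(array_to_labels(row, label_to_class))  # -> ['bharatnatyam']
```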
# + colab={"base_uri": "https://localhost:8080/", "height": 1000} id="6ZV12LqBwPpB" outputId="edd255e5-6e15-47ea-cb6b-9172cc7e343f"
class_to_label = {0: 'ballet', 1: 'bharatnatyam', 2:'break', 3: 'flamenco', 4: 'square', 5: 'waltz'}
def array_to_labels(onehot_array, label_to_class):
    idx = np.where(onehot_array == 1)[0]
    return [label_to_class[i] for i in idx]
nr_batches = 10
threshold = 0.5
img_iter_test_0, img_iter_test_1 = itertools.tee(test_generator, 2)
y_true = np.vstack([next(img_iter_test_0)[1] for _ in range(nr_batches)]).astype('int')
y_pred = (model_lstm.predict_generator(img_iter_test_1, steps=nr_batches) > threshold).astype('int')
fnames = test_generator.filenames ## fnames is all the filenames/samples used in testing
categories = np.asarray(test_generator.classes)
y_pred = np.squeeze(y_pred)
category_list = []
predicted_list = []
for i in range(len(y_pred)):
if 1 in y_pred[i]:
category_list.append(class_to_label[categories[i]])
predicted_list.append(array_to_labels(y_pred[i], label_to_class)[0])
# print(class_to_label[categories[i]], array_to_labels(y_pred[i], label_to_class)[0])
errors = np.where(np.asarray(predicted_list) != np.asarray(category_list))[0] ## misclassifications done on the test data where y_pred is the predicted values
plt.figure(figsize=(12, 12))
errors = [0, 80, 160]  # hand-picked sample indices to display; overrides the computed misclassification indices above
display_errors = 0
for i in errors[:]:
filename = fnames[i]
category = categories[i]
img = load_img(PATH_TO_DATA_SET + '/'+ filename, target_size=input_shape)
plt.subplot(3, 3, display_errors+1)
plt.imshow(img)
plt.xlabel(filename + '(' + "Actual = {}".format(category_list[i]) + "; Predicted = {}".format(predicted_list[i]) + ')' )
display_errors = display_errors + 1
plt.tight_layout()
plt.show()
plot_multiclass_confusion_matrix(y_true, y_pred, label_to_class, save_plot=True)
dance_classification_lstm.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: SageMath 8.8
# language: sage
# name: sagemath
# ---
sage: R.<x> = PolynomialRing(QQ)
sage: pairs = [(13, 37*11), (13, 41*11), (37, 13*11), (37, 29*11),
....:          (37, 61*11), (37, 109*11), (37, 139*11), (37, 151*11)]
sage: for a, b in pairs:
....:     f = (x^2 - a)*(x^2 - b)
....:     K.<c> = f.splitting_field()
....:     print(K, K.class_number())
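Each run above computes the class number of the splitting field of $(x^2-a)(x^2-b)$ over $\mathbb{Q}$. For distinct non-square $a, b$ with $ab$ also non-square, this splitting field is the degree-4 biquadratic field, sketched here as a reading aid:

```latex
K = \mathbb{Q}(\sqrt{a},\, \sqrt{b}), \qquad [K : \mathbb{Q}] = 4,
\quad \text{with quadratic subfields } \mathbb{Q}(\sqrt{a}),\ \mathbb{Q}(\sqrt{b}),\ \mathbb{Q}(\sqrt{ab}).
```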
q ,k congurent to 1.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/", "height": 191} colab_type="code" executionInfo={"elapsed": 31637, "status": "ok", "timestamp": 1542543682973, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="kauOILHSqXWw" outputId="abce5476-5b42-44c5-c0ec-2eb26f5fcfa7"
# from google.colab import drive
# drive.mount('/content/drive')
# + colab={"base_uri": "https://localhost:8080/", "height": 65} colab_type="code" executionInfo={"elapsed": 2078, "status": "ok", "timestamp": 1542543688169, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="_fzxugFjqc2T" outputId="2783a637-6e63-4f97-f31f-33e1f5f2f9bb"
# !ls
# + [markdown] colab_type="text" id="UauKNtFRqydu"
# ### Import libs
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 2189, "status": "ok", "timestamp": 1542543695321, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="qSbY3Amlqpkh" outputId="90795cb8-8374-4711-a036-e005c4c7b9d2"
from __future__ import division
from __future__ import print_function
from __future__ import absolute_import
import random
import pprint
import sys
import time
import numpy as np
from optparse import OptionParser
import pickle
import math
import cv2
import copy
from matplotlib import pyplot as plt
import tensorflow as tf
import pandas as pd
import os
from sklearn.metrics import average_precision_score
from keras import backend as K
from keras.optimizers import Adam, SGD, RMSprop
from keras.layers import Flatten, Dense, Input, Conv2D, MaxPooling2D, Dropout
from keras.layers import GlobalAveragePooling2D, GlobalMaxPooling2D, TimeDistributed
from keras.engine.topology import get_source_inputs
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.objectives import categorical_crossentropy
from keras.models import Model
from keras.utils import generic_utils
from keras.engine import Layer, InputSpec
from keras import initializers, regularizers
# + [markdown] colab_type="text" id="WrH5i5mmrDWY"
# #### Config setting
# + colab={} colab_type="code" id="DvJm0FFRsyVu"
class Config:
def __init__(self):
# Print the process or not
self.verbose = True
# Name of base network
self.network = 'vgg'
# Setting for data augmentation
self.use_horizontal_flips = False
self.use_vertical_flips = False
self.rot_90 = False
# Anchor box scales
# Note that if im_size is smaller, anchor_box_scales should be scaled
# Original anchor_box_scales in the paper is [128, 256, 512]
self.anchor_box_scales = [64, 128, 256]
# Anchor box ratios
self.anchor_box_ratios = [[1, 1], [1./math.sqrt(2), 2./math.sqrt(2)], [2./math.sqrt(2), 1./math.sqrt(2)]]
# Size to resize the smallest side of the image
# Original setting in paper is 600. Set to 300 in here to save training time
self.im_size = 300
# image channel-wise mean to subtract
self.img_channel_mean = [103.939, 116.779, 123.68]
self.img_scaling_factor = 1.0
# number of ROIs at once
self.num_rois = 4
# stride at the RPN (this depends on the network configuration)
self.rpn_stride = 16
self.balanced_classes = False
# scaling the stdev
self.std_scaling = 4.0
self.classifier_regr_std = [8.0, 8.0, 4.0, 4.0]
# overlaps for RPN
self.rpn_min_overlap = 0.3
self.rpn_max_overlap = 0.7
# overlaps for classifier ROIs
self.classifier_min_overlap = 0.1
self.classifier_max_overlap = 0.5
# placeholder for the class mapping, automatically generated by the parser
self.class_mapping = None
self.model_path = None
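The Config above implies 3 scales x 3 ratios = 9 anchors per feature-map cell; the sqrt(2) ratios are chosen so that each ratio preserves the anchor's area. A quick standalone check of both facts:

```python
import math

# Same values as Config.anchor_box_scales / Config.anchor_box_ratios
anchor_box_scales = [64, 128, 256]
anchor_box_ratios = [[1, 1],
                     [1. / math.sqrt(2), 2. / math.sqrt(2)],
                     [2. / math.sqrt(2), 1. / math.sqrt(2)]]

# Enumerate every (width, height) anchor shape
anchors = [(s * r[0], s * r[1]) for s in anchor_box_scales for r in anchor_box_ratios]
print(len(anchors))  # 9 anchors per feature-map location
```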
# + [markdown] colab_type="text" id="o0bIjlycyR9_"
# #### Parser the data from annotation file
# + colab={} colab_type="code" id="vc89E9uAydTX"
def get_data(input_path):
"""Parse the data from annotation file
Args:
input_path: annotation file path
Returns:
all_data: list(filepath, width, height, list(bboxes))
classes_count: dict{key:class_name, value:count_num}
e.g. {'Car': 2383, 'Mobile phone': 1108, 'Person': 3745}
class_mapping: dict{key:class_name, value: idx}
e.g. {'Car': 0, 'Mobile phone': 1, 'Person': 2}
"""
found_bg = False
all_imgs = {}
classes_count = {}
class_mapping = {}
visualise = True
i = 1
with open(input_path,'r') as f:
print('Parsing annotation files')
for line in f:
# Print process
sys.stdout.write('\r'+'idx=' + str(i))
i += 1
line_split = line.strip().split(',')
# Make sure the info saved in annotation file matching the format (path_filename, x1, y1, x2, y2, class_name)
# Note:
            #   One path_filename might have several classes (class_name)
            #   x1, y1, x2, y2 are pixel values of the original image, not ratio values
# (x1, y1) top left coordinates; (x2, y2) bottom right coordinates
# x1,y1-------------------
# | |
# | |
# | |
# | |
# ---------------------x2,y2
(filename,x1,y1,x2,y2,class_name) = line_split
if class_name not in classes_count:
classes_count[class_name] = 1
else:
classes_count[class_name] += 1
if class_name not in class_mapping:
if class_name == 'bg' and found_bg == False:
print('Found class name with special name bg. Will be treated as a background region (this is usually for hard negative mining).')
found_bg = True
class_mapping[class_name] = len(class_mapping)
if filename not in all_imgs:
all_imgs[filename] = {}
img = cv2.imread(filename)
(rows,cols) = img.shape[:2]
all_imgs[filename]['filepath'] = filename
all_imgs[filename]['width'] = cols
all_imgs[filename]['height'] = rows
all_imgs[filename]['bboxes'] = []
# if np.random.randint(0,6) > 0:
# all_imgs[filename]['imageset'] = 'trainval'
# else:
# all_imgs[filename]['imageset'] = 'test'
all_imgs[filename]['bboxes'].append({'class': class_name, 'x1': int(x1), 'x2': int(x2), 'y1': int(y1), 'y2': int(y2)})
all_data = []
for key in all_imgs:
all_data.append(all_imgs[key])
# make sure the bg class is last in the list
if found_bg:
if class_mapping['bg'] != len(class_mapping) - 1:
key_to_switch = [key for key in class_mapping.keys() if class_mapping[key] == len(class_mapping)-1][0]
val_to_switch = class_mapping['bg']
class_mapping['bg'] = len(class_mapping) - 1
class_mapping[key_to_switch] = val_to_switch
return all_data, classes_count, class_mapping
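`get_data` expects one comma-separated record per line in the annotation file. A minimal standalone illustration of the split it performs (the filename here is a made-up example):

```python
# One annotation record in the expected format:
#   path_filename, x1, y1, x2, y2, class_name
line = 'images/example_0001.jpg,25,40,180,220,Car'  # hypothetical sample line

filename, x1, y1, x2, y2, class_name = line.strip().split(',')
bbox = {'class': class_name, 'x1': int(x1), 'y1': int(y1), 'x2': int(x2), 'y2': int(y2)}
print(bbox)
```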
# + [markdown] colab_type="text" id="oFvqGs4acGWl"
# #### Define ROI Pooling Convolutional Layer
# + colab={} colab_type="code" id="6l32Q85kcMpB"
class RoiPoolingConv(Layer):
'''ROI pooling layer for 2D inputs.
See Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition,
<NAME>, <NAME>, <NAME>, <NAME>
# Arguments
pool_size: int
Size of pooling region to use. pool_size = 7 will result in a 7x7 region.
num_rois: number of regions of interest to be used
# Input shape
list of two 4D tensors [X_img,X_roi] with shape:
X_img:
`(1, rows, cols, channels)`
X_roi:
`(1,num_rois,4)` list of rois, with ordering (x,y,w,h)
# Output shape
3D tensor with shape:
`(1, num_rois, channels, pool_size, pool_size)`
'''
def __init__(self, pool_size, num_rois, **kwargs):
self.dim_ordering = K.image_dim_ordering()
self.pool_size = pool_size
self.num_rois = num_rois
super(RoiPoolingConv, self).__init__(**kwargs)
def build(self, input_shape):
self.nb_channels = input_shape[0][3]
def compute_output_shape(self, input_shape):
return None, self.num_rois, self.pool_size, self.pool_size, self.nb_channels
def call(self, x, mask=None):
assert(len(x) == 2)
# x[0] is image with shape (rows, cols, channels)
img = x[0]
# x[1] is roi with shape (num_rois,4) with ordering (x,y,w,h)
rois = x[1]
input_shape = K.shape(img)
outputs = []
for roi_idx in range(self.num_rois):
x = rois[0, roi_idx, 0]
y = rois[0, roi_idx, 1]
w = rois[0, roi_idx, 2]
h = rois[0, roi_idx, 3]
x = K.cast(x, 'int32')
y = K.cast(y, 'int32')
w = K.cast(w, 'int32')
h = K.cast(h, 'int32')
# Resized roi of the image to pooling size (7x7)
rs = tf.image.resize_images(img[:, y:y+h, x:x+w, :], (self.pool_size, self.pool_size))
outputs.append(rs)
final_output = K.concatenate(outputs, axis=0)
# Reshape to (1, num_rois, pool_size, pool_size, nb_channels)
# Might be (1, 4, 7, 7, 3)
final_output = K.reshape(final_output, (1, self.num_rois, self.pool_size, self.pool_size, self.nb_channels))
        # permute_dimensions is similar to transpose; with the identity
        # ordering (0, 1, 2, 3, 4) this call is effectively a no-op
        final_output = K.permute_dimensions(final_output, (0, 1, 2, 3, 4))
return final_output
def get_config(self):
config = {'pool_size': self.pool_size,
'num_rois': self.num_rois}
base_config = super(RoiPoolingConv, self).get_config()
return dict(list(base_config.items()) + list(config.items()))
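The core of `RoiPoolingConv.call` is: crop an `(x, y, w, h)` region out of the feature map and resize it to a fixed `pool_size x pool_size` grid. A toy numpy stand-in for that step (the layer uses bilinear `tf.image.resize_images`; nearest-neighbour sampling is used here purely for clarity):

```python
import numpy as np

def roi_pool(feature_map, roi, pool_size=2):
    # Crop the ROI, then sample it down to pool_size x pool_size
    x, y, w, h = roi
    crop = feature_map[y:y + h, x:x + w]
    rows = np.arange(pool_size) * h // pool_size  # nearest-neighbour row picks
    cols = np.arange(pool_size) * w // pool_size  # nearest-neighbour col picks
    return crop[np.ix_(rows, cols)]

fmap = np.arange(36).reshape(6, 6)            # a 6x6 single-channel "feature map"
pooled = roi_pool(fmap, roi=(1, 1, 4, 4), pool_size=2)
print(pooled.shape)  # (2, 2): every ROI ends up the same fixed size
```

The point of the fixed output size is that ROIs of arbitrary shape can all feed the same fully connected classifier head.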
# + [markdown] colab_type="text" id="Mf2taA29RFNs"
# #### Vgg-16 model
# + colab={} colab_type="code" id="WaBQfl4XRJY3"
def get_img_output_length(width, height):
def get_output_length(input_length):
return input_length//16
return get_output_length(width), get_output_length(height)
def nn_base(input_tensor=None, trainable=False):
input_shape = (None, None, 3)
if input_tensor is None:
img_input = Input(shape=input_shape)
else:
if not K.is_keras_tensor(input_tensor):
img_input = Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
bn_axis = 3
# Block 1
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv1')(img_input)
x = Conv2D(64, (3, 3), activation='relu', padding='same', name='block1_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block1_pool')(x)
# Block 2
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv1')(x)
x = Conv2D(128, (3, 3), activation='relu', padding='same', name='block2_conv2')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block2_pool')(x)
# Block 3
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv1')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv2')(x)
x = Conv2D(256, (3, 3), activation='relu', padding='same', name='block3_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block3_pool')(x)
# Block 4
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv1')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv2')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block4_conv3')(x)
x = MaxPooling2D((2, 2), strides=(2, 2), name='block4_pool')(x)
# Block 5
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv1')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv2')(x)
x = Conv2D(512, (3, 3), activation='relu', padding='same', name='block5_conv3')(x)
# x = MaxPooling2D((2, 2), strides=(2, 2), name='block5_pool')(x)
return x
# + [markdown] colab_type="text" id="xcOi5MIMVJpU"
# #### RPN layer
# + colab={} colab_type="code" id="gsuV21vpRczQ"
def rpn_layer(base_layers, num_anchors):
"""Create a rpn layer
Step1: Pass through the feature map from base layer to a 3x3 512 channels convolutional layer
Keep the padding 'same' to preserve the feature map's size
Step2: Pass the step1 to two (1,1) convolutional layer to replace the fully connected layer
classification layer: num_anchors (9 in here) channels for 0, 1 sigmoid activation output
regression layer: num_anchors*4 (36 in here) channels for computing the regression of bboxes with linear activation
Args:
base_layers: vgg in here
num_anchors: 9 in here
Returns:
[x_class, x_regr, base_layers]
x_class: classification for whether it's an object
x_regr: bboxes regression
base_layers: vgg in here
"""
x = Conv2D(512, (3, 3), padding='same', activation='relu', kernel_initializer='normal', name='rpn_conv1')(base_layers)
x_class = Conv2D(num_anchors, (1, 1), activation='sigmoid', kernel_initializer='uniform', name='rpn_out_class')(x)
x_regr = Conv2D(num_anchors * 4, (1, 1), activation='linear', kernel_initializer='zero', name='rpn_out_regress')(x)
return [x_class, x_regr, base_layers]
# + [markdown] colab_type="text" id="0fBt9xNFWsKS"
# #### Classifier layer
# + colab={} colab_type="code" id="0PKSPLRLWwMz"
def classifier_layer(base_layers, input_rois, num_rois, nb_classes = 4):
"""Create a classifier layer
Args:
base_layers: vgg
input_rois: `(1,num_rois,4)` list of rois, with ordering (x,y,w,h)
num_rois: number of rois to be processed in one time (4 in here)
Returns:
list(out_class, out_regr)
out_class: classifier layer output
out_regr: regression layer output
"""
input_shape = (num_rois,7,7,512)
pooling_regions = 7
# out_roi_pool.shape = (1, num_rois, channels, pool_size, pool_size)
# num_rois (4) 7x7 roi pooling
out_roi_pool = RoiPoolingConv(pooling_regions, num_rois)([base_layers, input_rois])
    # Flatten the pooled ROIs and connect them to 2 FC layers, each followed by dropout
out = TimeDistributed(Flatten(name='flatten'))(out_roi_pool)
out = TimeDistributed(Dense(4096, activation='relu', name='fc1'))(out)
out = TimeDistributed(Dropout(0.5))(out)
out = TimeDistributed(Dense(4096, activation='relu', name='fc2'))(out)
out = TimeDistributed(Dropout(0.5))(out)
    # There are two output layers:
    # out_class: softmax activation for classifying the object's class name
    # out_regr: linear activation for bbox coordinate regression
out_class = TimeDistributed(Dense(nb_classes, activation='softmax', kernel_initializer='zero'), name='dense_class_{}'.format(nb_classes))(out)
# note: no regression target for bg class
out_regr = TimeDistributed(Dense(4 * (nb_classes-1), activation='linear', kernel_initializer='zero'), name='dense_regress_{}'.format(nb_classes))(out)
return [out_class, out_regr]
# + [markdown] colab_type="text" id="WMev3UMadCzJ"
# #### Calculate IoU (Intersection of Union)
# + colab={} colab_type="code" id="Jy5iIBYgdCJD"
def union(au, bu, area_intersection):
area_a = (au[2] - au[0]) * (au[3] - au[1])
area_b = (bu[2] - bu[0]) * (bu[3] - bu[1])
area_union = area_a + area_b - area_intersection
return area_union
def intersection(ai, bi):
x = max(ai[0], bi[0])
y = max(ai[1], bi[1])
w = min(ai[2], bi[2]) - x
h = min(ai[3], bi[3]) - y
if w < 0 or h < 0:
return 0
return w*h
def iou(a, b):
# a and b should be (x1,y1,x2,y2)
if a[0] >= a[2] or a[1] >= a[3] or b[0] >= b[2] or b[1] >= b[3]:
return 0.0
area_i = intersection(a, b)
area_u = union(a, b, area_i)
return float(area_i) / float(area_u + 1e-6)
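The three helpers above compose into the standard IoU = intersection / union ratio. Restating them verbatim as a standalone sanity check with a hand-computable case:

```python
def intersection(ai, bi):
    # boxes are (x1, y1, x2, y2); overlap width/height can be negative (no overlap)
    x = max(ai[0], bi[0]); y = max(ai[1], bi[1])
    w = min(ai[2], bi[2]) - x; h = min(ai[3], bi[3]) - y
    return 0 if w < 0 or h < 0 else w * h

def union(au, bu, area_intersection):
    area_a = (au[2] - au[0]) * (au[3] - au[1])
    area_b = (bu[2] - bu[0]) * (bu[3] - bu[1])
    return area_a + area_b - area_intersection

def iou(a, b):
    if a[0] >= a[2] or a[1] >= a[3] or b[0] >= b[2] or b[1] >= b[3]:
        return 0.0
    area_i = intersection(a, b)
    return float(area_i) / float(union(a, b, area_i) + 1e-6)

# Two unit-offset 2x2 boxes: overlap area 1, union 4 + 4 - 1 = 7 -> IoU = 1/7
print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 4))
```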
# + [markdown] colab_type="text" id="rcRlzqZudKkd"
# #### Calculate the rpn for all anchors of all images
# + colab={} colab_type="code" id="daPsCZtrdK3S"
def calc_rpn(C, img_data, width, height, resized_width, resized_height, img_length_calc_function):
"""(Important part!) Calculate the rpn for all anchors
If feature map has shape 38x50=1900, there are 1900x9=17100 potential anchors
Args:
C: config
img_data: augmented image data
width: original image width (e.g. 600)
height: original image height (e.g. 800)
resized_width: resized image width according to C.im_size (e.g. 300)
resized_height: resized image height according to C.im_size (e.g. 400)
img_length_calc_function: function to calculate final layer's feature map (of base model) size according to input image size
Returns:
y_rpn_cls: list(num_bboxes, y_is_box_valid + y_rpn_overlap)
y_is_box_valid: 0 or 1 (0 means the box is invalid, 1 means the box is valid)
y_rpn_overlap: 0 or 1 (0 means the box is not an object, 1 means the box is an object)
y_rpn_regr: list(num_bboxes, 4*y_rpn_overlap + y_rpn_regr)
        y_rpn_regr: x1,y1,x2,y2 bounding box coordinates
"""
downscale = float(C.rpn_stride)
anchor_sizes = C.anchor_box_scales # 128, 256, 512
anchor_ratios = C.anchor_box_ratios # 1:1, 1:2*sqrt(2), 2*sqrt(2):1
num_anchors = len(anchor_sizes) * len(anchor_ratios) # 3x3=9
# calculate the output map size based on the network architecture
(output_width, output_height) = img_length_calc_function(resized_width, resized_height)
n_anchratios = len(anchor_ratios) # 3
# initialise empty output objectives
y_rpn_overlap = np.zeros((output_height, output_width, num_anchors))
y_is_box_valid = np.zeros((output_height, output_width, num_anchors))
y_rpn_regr = np.zeros((output_height, output_width, num_anchors * 4))
num_bboxes = len(img_data['bboxes'])
num_anchors_for_bbox = np.zeros(num_bboxes).astype(int)
best_anchor_for_bbox = -1*np.ones((num_bboxes, 4)).astype(int)
best_iou_for_bbox = np.zeros(num_bboxes).astype(np.float32)
best_x_for_bbox = np.zeros((num_bboxes, 4)).astype(int)
best_dx_for_bbox = np.zeros((num_bboxes, 4)).astype(np.float32)
# get the GT box coordinates, and resize to account for image resizing
gta = np.zeros((num_bboxes, 4))
for bbox_num, bbox in enumerate(img_data['bboxes']):
# get the GT box coordinates, and resize to account for image resizing
gta[bbox_num, 0] = bbox['x1'] * (resized_width / float(width))
gta[bbox_num, 1] = bbox['x2'] * (resized_width / float(width))
gta[bbox_num, 2] = bbox['y1'] * (resized_height / float(height))
gta[bbox_num, 3] = bbox['y2'] * (resized_height / float(height))
# rpn ground truth
for anchor_size_idx in range(len(anchor_sizes)):
for anchor_ratio_idx in range(n_anchratios):
anchor_x = anchor_sizes[anchor_size_idx] * anchor_ratios[anchor_ratio_idx][0]
anchor_y = anchor_sizes[anchor_size_idx] * anchor_ratios[anchor_ratio_idx][1]
for ix in range(output_width):
# x-coordinates of the current anchor box
x1_anc = downscale * (ix + 0.5) - anchor_x / 2
x2_anc = downscale * (ix + 0.5) + anchor_x / 2
# ignore boxes that go across image boundaries
if x1_anc < 0 or x2_anc > resized_width:
continue
for jy in range(output_height):
# y-coordinates of the current anchor box
y1_anc = downscale * (jy + 0.5) - anchor_y / 2
y2_anc = downscale * (jy + 0.5) + anchor_y / 2
# ignore boxes that go across image boundaries
if y1_anc < 0 or y2_anc > resized_height:
continue
# bbox_type indicates whether an anchor should be a target
# Initialize with 'negative'
bbox_type = 'neg'
# this is the best IOU for the (x,y) coord and the current anchor
# note that this is different from the best IOU for a GT bbox
best_iou_for_loc = 0.0
for bbox_num in range(num_bboxes):
# get IOU of the current GT box and the current anchor box
curr_iou = iou([gta[bbox_num, 0], gta[bbox_num, 2], gta[bbox_num, 1], gta[bbox_num, 3]], [x1_anc, y1_anc, x2_anc, y2_anc])
# calculate the regression targets if they will be needed
if curr_iou > best_iou_for_bbox[bbox_num] or curr_iou > C.rpn_max_overlap:
cx = (gta[bbox_num, 0] + gta[bbox_num, 1]) / 2.0
cy = (gta[bbox_num, 2] + gta[bbox_num, 3]) / 2.0
cxa = (x1_anc + x2_anc)/2.0
cya = (y1_anc + y2_anc)/2.0
# x,y are the center point of ground-truth bbox
                        # xa,ya are the center point of anchor bbox (xa = downscale * (ix + 0.5); ya = downscale * (jy + 0.5))
                        # w,h are the width and height of ground-truth bbox
                        # wa,ha are the width and height of anchor bbox
# tx = (x - xa) / wa
# ty = (y - ya) / ha
# tw = log(w / wa)
# th = log(h / ha)
tx = (cx - cxa) / (x2_anc - x1_anc)
ty = (cy - cya) / (y2_anc - y1_anc)
tw = np.log((gta[bbox_num, 1] - gta[bbox_num, 0]) / (x2_anc - x1_anc))
th = np.log((gta[bbox_num, 3] - gta[bbox_num, 2]) / (y2_anc - y1_anc))
if img_data['bboxes'][bbox_num]['class'] != 'bg':
# all GT boxes should be mapped to an anchor box, so we keep track of which anchor box was best
if curr_iou > best_iou_for_bbox[bbox_num]:
best_anchor_for_bbox[bbox_num] = [jy, ix, anchor_ratio_idx, anchor_size_idx]
best_iou_for_bbox[bbox_num] = curr_iou
best_x_for_bbox[bbox_num,:] = [x1_anc, x2_anc, y1_anc, y2_anc]
best_dx_for_bbox[bbox_num,:] = [tx, ty, tw, th]
# we set the anchor to positive if the IOU is >0.7 (it does not matter if there was another better box, it just indicates overlap)
if curr_iou > C.rpn_max_overlap:
bbox_type = 'pos'
num_anchors_for_bbox[bbox_num] += 1
# we update the regression layer target if this IOU is the best for the current (x,y) and anchor position
if curr_iou > best_iou_for_loc:
best_iou_for_loc = curr_iou
best_regr = (tx, ty, tw, th)
                        # if the IOU is > 0.3 and < 0.7, it is ambiguous and not included in the objective
if C.rpn_min_overlap < curr_iou < C.rpn_max_overlap:
# gray zone between neg and pos
if bbox_type != 'pos':
bbox_type = 'neutral'
# turn on or off outputs depending on IOUs
if bbox_type == 'neg':
y_is_box_valid[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 1
y_rpn_overlap[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 0
elif bbox_type == 'neutral':
y_is_box_valid[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 0
y_rpn_overlap[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 0
elif bbox_type == 'pos':
y_is_box_valid[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 1
y_rpn_overlap[jy, ix, anchor_ratio_idx + n_anchratios * anchor_size_idx] = 1
start = 4 * (anchor_ratio_idx + n_anchratios * anchor_size_idx)
y_rpn_regr[jy, ix, start:start+4] = best_regr
# we ensure that every bbox has at least one positive RPN region
for idx in range(num_anchors_for_bbox.shape[0]):
if num_anchors_for_bbox[idx] == 0:
# no box with an IOU greater than zero ...
if best_anchor_for_bbox[idx, 0] == -1:
continue
y_is_box_valid[
best_anchor_for_bbox[idx,0], best_anchor_for_bbox[idx,1], best_anchor_for_bbox[idx,2] + n_anchratios *
best_anchor_for_bbox[idx,3]] = 1
y_rpn_overlap[
best_anchor_for_bbox[idx,0], best_anchor_for_bbox[idx,1], best_anchor_for_bbox[idx,2] + n_anchratios *
best_anchor_for_bbox[idx,3]] = 1
start = 4 * (best_anchor_for_bbox[idx,2] + n_anchratios * best_anchor_for_bbox[idx,3])
y_rpn_regr[
best_anchor_for_bbox[idx,0], best_anchor_for_bbox[idx,1], start:start+4] = best_dx_for_bbox[idx, :]
y_rpn_overlap = np.transpose(y_rpn_overlap, (2, 0, 1))
y_rpn_overlap = np.expand_dims(y_rpn_overlap, axis=0)
y_is_box_valid = np.transpose(y_is_box_valid, (2, 0, 1))
y_is_box_valid = np.expand_dims(y_is_box_valid, axis=0)
y_rpn_regr = np.transpose(y_rpn_regr, (2, 0, 1))
y_rpn_regr = np.expand_dims(y_rpn_regr, axis=0)
pos_locs = np.where(np.logical_and(y_rpn_overlap[0, :, :, :] == 1, y_is_box_valid[0, :, :, :] == 1))
neg_locs = np.where(np.logical_and(y_rpn_overlap[0, :, :, :] == 0, y_is_box_valid[0, :, :, :] == 1))
num_pos = len(pos_locs[0])
# one issue is that the RPN has many more negative than positive regions, so we turn off some of the negative
# regions. We also limit it to 256 regions.
    num_regions = 256
    # use integer division: random.sample needs an int sample size under Python 3
    if len(pos_locs[0]) > num_regions // 2:
        val_locs = random.sample(range(len(pos_locs[0])), len(pos_locs[0]) - num_regions // 2)
        y_is_box_valid[0, pos_locs[0][val_locs], pos_locs[1][val_locs], pos_locs[2][val_locs]] = 0
        num_pos = num_regions // 2
    if len(neg_locs[0]) + num_pos > num_regions:
        val_locs = random.sample(range(len(neg_locs[0])), len(neg_locs[0]) - num_pos)
        y_is_box_valid[0, neg_locs[0][val_locs], neg_locs[1][val_locs], neg_locs[2][val_locs]] = 0
y_rpn_cls = np.concatenate([y_is_box_valid, y_rpn_overlap], axis=1)
y_rpn_regr = np.concatenate([np.repeat(y_rpn_overlap, 4, axis=1), y_rpn_regr], axis=1)
return np.copy(y_rpn_cls), np.copy(y_rpn_regr), num_pos
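The 256-region cap above keeps the RPN mini-batch balanced between positive and negative anchors. A standalone sketch of the positive-side sampling step (note that `random.sample` needs an integer count, hence `num_regions // 2`):

```python
import random

random.seed(0)

num_regions = 256
pos = list(range(300))          # pretend we found 300 positive anchors

# too many positives: randomly switch off all but num_regions // 2 of them
surplus = random.sample(range(len(pos)), len(pos) - num_regions // 2)
kept = len(pos) - len(surplus)
print(kept)  # 128 positives survive the cap
```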
# + [markdown] colab_type="text" id="3qGAalfJB8zz"
# #### Get new image size and augment the image
# + colab={} colab_type="code" id="HKhSFbmB2RTo"
def get_new_img_size(width, height, img_min_side=300):
if width <= height:
f = float(img_min_side) / width
resized_height = int(f * height)
resized_width = img_min_side
else:
f = float(img_min_side) / height
resized_width = int(f * width)
resized_height = img_min_side
return resized_width, resized_height
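A quick standalone check of the shortest-side resize rule (the function is re-implemented here so the snippet runs on its own):

```python
def shortest_side_resize(width, height, img_min_side=300):
    # Scale so the shorter side becomes img_min_side, preserving aspect ratio
    if width <= height:
        f = float(img_min_side) / width
        return img_min_side, int(f * height)
    f = float(img_min_side) / height
    return int(f * width), img_min_side

print(shortest_side_resize(800, 600))   # landscape: height is the shorter side
print(shortest_side_resize(300, 1000))  # width is already at the minimum side
```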
def augment(img_data, config, augment=True):
assert 'filepath' in img_data
assert 'bboxes' in img_data
assert 'width' in img_data
assert 'height' in img_data
img_data_aug = copy.deepcopy(img_data)
img = cv2.imread(img_data_aug['filepath'])
if augment:
rows, cols = img.shape[:2]
if config.use_horizontal_flips and np.random.randint(0, 2) == 0:
img = cv2.flip(img, 1)
for bbox in img_data_aug['bboxes']:
x1 = bbox['x1']
x2 = bbox['x2']
bbox['x2'] = cols - x1
bbox['x1'] = cols - x2
if config.use_vertical_flips and np.random.randint(0, 2) == 0:
img = cv2.flip(img, 0)
for bbox in img_data_aug['bboxes']:
y1 = bbox['y1']
y2 = bbox['y2']
bbox['y2'] = rows - y1
bbox['y1'] = rows - y2
if config.rot_90:
angle = np.random.choice([0,90,180,270],1)[0]
if angle == 270:
img = np.transpose(img, (1,0,2))
img = cv2.flip(img, 0)
elif angle == 180:
img = cv2.flip(img, -1)
elif angle == 90:
img = np.transpose(img, (1,0,2))
img = cv2.flip(img, 1)
elif angle == 0:
pass
for bbox in img_data_aug['bboxes']:
x1 = bbox['x1']
x2 = bbox['x2']
y1 = bbox['y1']
y2 = bbox['y2']
if angle == 270:
bbox['x1'] = y1
bbox['x2'] = y2
bbox['y1'] = cols - x2
bbox['y2'] = cols - x1
elif angle == 180:
bbox['x2'] = cols - x1
bbox['x1'] = cols - x2
bbox['y2'] = rows - y1
bbox['y1'] = rows - y2
elif angle == 90:
bbox['x1'] = rows - y2
bbox['x2'] = rows - y1
bbox['y1'] = x1
bbox['y2'] = x2
elif angle == 0:
pass
img_data_aug['width'] = img.shape[1]
img_data_aug['height'] = img.shape[0]
return img_data_aug, img
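The flip branches above remap the box coordinates as well as the pixels. A minimal sketch of the horizontal-flip rule for one box (`hflip_bbox` is a hypothetical helper, not part of the notebook):

```python
def hflip_bbox(x1, x2, cols):
    # Mirror the box's x-extent about the vertical centre of a cols-px-wide image;
    # the left and right edges swap roles, so the new x1 comes from the old x2
    return cols - x2, cols - x1

# a box spanning x in [100, 200] in an 800-px-wide image
print(hflip_bbox(100, 200, 800))  # -> (600, 700), mirrored to the right edge
```

Applying the flip twice returns the original box, which is a handy invariant to test.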
# + [markdown] colab_type="text" id="0712o8CXkyh1"
# #### Generate the ground_truth anchors
# + colab={} colab_type="code" id="TvsEv3RIk0cF"
def get_anchor_gt(all_img_data, C, img_length_calc_function, mode='train'):
""" Yield the ground-truth anchors as Y (labels)
Args:
all_img_data: list(filepath, width, height, list(bboxes))
C: config
img_length_calc_function: function to calculate final layer's feature map (of base model) size according to input image size
mode: 'train' or 'test'; 'train' mode needs augmentation
Returns:
x_img: image data after resizing and scaling (smallest side = 300px)
Y: [y_rpn_cls, y_rpn_regr]
img_data_aug: augmented image data (original image with augmentation)
debug_img: show image for debug
num_pos: show number of positive anchors for debug
"""
while True:
for img_data in all_img_data:
try:
# read in image, and optionally add augmentation
if mode == 'train':
img_data_aug, x_img = augment(img_data, C, augment=True)
else:
img_data_aug, x_img = augment(img_data, C, augment=False)
(width, height) = (img_data_aug['width'], img_data_aug['height'])
(rows, cols, _) = x_img.shape
assert cols == width
assert rows == height
# get image dimensions for resizing
(resized_width, resized_height) = get_new_img_size(width, height, C.im_size)
# resize the image so that the smallest side has length = 300px
x_img = cv2.resize(x_img, (resized_width, resized_height), interpolation=cv2.INTER_CUBIC)
debug_img = x_img.copy()
try:
y_rpn_cls, y_rpn_regr, num_pos = calc_rpn(C, img_data_aug, width, height, resized_width, resized_height, img_length_calc_function)
except Exception:
continue
# Zero-center by mean pixel, and preprocess image
x_img = x_img[:,:, (2, 1, 0)] # BGR -> RGB
x_img = x_img.astype(np.float32)
x_img[:, :, 0] -= C.img_channel_mean[0]
x_img[:, :, 1] -= C.img_channel_mean[1]
x_img[:, :, 2] -= C.img_channel_mean[2]
x_img /= C.img_scaling_factor
x_img = np.transpose(x_img, (2, 0, 1))
x_img = np.expand_dims(x_img, axis=0)
y_rpn_regr[:, y_rpn_regr.shape[1]//2:, :, :] *= C.std_scaling
x_img = np.transpose(x_img, (0, 2, 3, 1))
y_rpn_cls = np.transpose(y_rpn_cls, (0, 2, 3, 1))
y_rpn_regr = np.transpose(y_rpn_regr, (0, 2, 3, 1))
yield np.copy(x_img), [np.copy(y_rpn_cls), np.copy(y_rpn_regr)], img_data_aug, debug_img, num_pos
except Exception as e:
print(e)
continue
# + [markdown] colab_type="text" id="FZAAMEH4uqu9"
# #### Define loss functions for all four outputs
# + colab={} colab_type="code" id="CyLxnL4_uvmr"
lambda_rpn_regr = 1.0
lambda_rpn_class = 1.0
lambda_cls_regr = 1.0
lambda_cls_class = 1.0
epsilon = 1e-4
# + colab={} colab_type="code" id="tvGfH6m3yu0_"
def rpn_loss_regr(num_anchors):
"""Loss function for rpn regression
Args:
num_anchors: number of anchors (9 in here)
Returns:
Smooth L1 loss function
0.5*x*x (if x_abs < 1)
x_abs - 0.5 (otherwise)
"""
def rpn_loss_regr_fixed_num(y_true, y_pred):
# x is the difference between true value and predicted value
x = y_true[:, :, :, 4 * num_anchors:] - y_pred
# absolute value of x
x_abs = K.abs(x)
# If x_abs <= 1.0, x_bool = 1
x_bool = K.cast(K.less_equal(x_abs, 1.0), tf.float32)
return lambda_rpn_regr * K.sum(
y_true[:, :, :, :4 * num_anchors] * (x_bool * (0.5 * x * x) + (1 - x_bool) * (x_abs - 0.5))) / K.sum(epsilon + y_true[:, :, :, :4 * num_anchors])
return rpn_loss_regr_fixed_num
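The piecewise smooth-L1 rule the docstring describes, written out for a single residual so the two branches are easy to check (quadratic near zero, linear in the tails, continuous at |x| = 1):

```python
def smooth_l1(x):
    # 0.5*x^2 if |x| <= 1, else |x| - 0.5
    ax = abs(x)
    return 0.5 * x * x if ax <= 1.0 else ax - 0.5

print(smooth_l1(0.5))  # 0.125 (quadratic branch)
print(smooth_l1(3.0))  # 2.5   (linear branch)
```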
def rpn_loss_cls(num_anchors):
"""Loss function for rpn classification
Args:
num_anchors: number of anchors (9 in here)
y_true[:, :, :, :9]: [0,1,0,0,0,0,0,1,0] means only the second and the eighth boxes are valid (contain a pos or neg anchor) => isValid
y_true[:, :, :, 9:]: [0,1,0,0,0,0,0,0,0] means the second box is pos and eighth box is negative
Returns:
lambda * sum((binary_crossentropy(isValid*y_pred,y_true))) / N
"""
def rpn_loss_cls_fixed_num(y_true, y_pred):
return lambda_rpn_class * K.sum(y_true[:, :, :, :num_anchors] * K.binary_crossentropy(y_pred[:, :, :, :], y_true[:, :, :, num_anchors:])) / K.sum(epsilon + y_true[:, :, :, :num_anchors])
return rpn_loss_cls_fixed_num
def class_loss_regr(num_classes):
"""Loss function for rpn regression
Args:
num_anchors: number of anchors (9 in here)
Returns:
Smooth L1 loss function
0.5*x*x (if x_abs < 1)
x_abs - 0.5 (otherwise)
"""
def class_loss_regr_fixed_num(y_true, y_pred):
x = y_true[:, :, 4*num_classes:] - y_pred
x_abs = K.abs(x)
x_bool = K.cast(K.less_equal(x_abs, 1.0), 'float32')
return lambda_cls_regr * K.sum(y_true[:, :, :4*num_classes] * (x_bool * (0.5 * x * x) + (1 - x_bool) * (x_abs - 0.5))) / K.sum(epsilon + y_true[:, :, :4*num_classes])
return class_loss_regr_fixed_num
def class_loss_cls(y_true, y_pred):
return lambda_cls_class * K.mean(categorical_crossentropy(y_true[0, :, :], y_pred[0, :, :]))
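Both RPN losses above share the same mask-and-normalise pattern: only valid anchors contribute, and the sum is divided by the valid-anchor count (plus epsilon to avoid division by zero). A NumPy sketch of that pattern with a hypothetical `masked_binary_ce` helper:

```python
import numpy as np

def masked_binary_ce(valid, target, pred, eps=1e-4):
    # Per-anchor binary cross-entropy, zeroed out for invalid (neutral) anchors
    ce = -(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    # Normalise by the number of valid anchors, not the total anchor count
    return float(np.sum(valid * ce) / (eps + np.sum(valid)))

valid  = np.array([1, 1, 0, 1])          # the third anchor is neutral -> ignored
target = np.array([1, 0, 1, 0])
pred   = np.array([0.9, 0.1, 0.5, 0.2])
print(masked_binary_ce(valid, target, pred))
```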
# + colab={} colab_type="code" id="5cX0N4VDl4zS"
def non_max_suppression_fast(boxes, probs, overlap_thresh=0.9, max_boxes=300):
# code used from here: http://www.pyimagesearch.com/2015/02/16/faster-non-maximum-suppression-python/
# if there are no boxes, return an empty list
# Process explanation:
# Step 1: Sort the probs list
# Step 2: Find the largest prob 'Last' in the list and save it to the pick list
# Step 3: Calculate the IoU with 'Last' box and other boxes in the list. If the IoU is larger than overlap_threshold, delete the box from list
# Step 4: Repeat step 2 and step 3 until there is no item in the probs list
if len(boxes) == 0:
return []
# grab the coordinates of the bounding boxes
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
np.testing.assert_array_less(x1, x2)
np.testing.assert_array_less(y1, y2)
# if the bounding boxes are integers, convert them to floats --
# this is important since we'll be doing a bunch of divisions
if boxes.dtype.kind == "i":
boxes = boxes.astype("float")
# initialize the list of picked indexes
pick = []
# calculate the areas
area = (x2 - x1) * (y2 - y1)
# sort the bounding boxes
idxs = np.argsort(probs)
# keep looping while some indexes still remain in the indexes
# list
while len(idxs) > 0:
# grab the last index in the indexes list and add the
# index value to the list of picked indexes
last = len(idxs) - 1
i = idxs[last]
pick.append(i)
# find the intersection
xx1_int = np.maximum(x1[i], x1[idxs[:last]])
yy1_int = np.maximum(y1[i], y1[idxs[:last]])
xx2_int = np.minimum(x2[i], x2[idxs[:last]])
yy2_int = np.minimum(y2[i], y2[idxs[:last]])
ww_int = np.maximum(0, xx2_int - xx1_int)
hh_int = np.maximum(0, yy2_int - yy1_int)
area_int = ww_int * hh_int
# find the union
area_union = area[i] + area[idxs[:last]] - area_int
# compute the ratio of overlap
overlap = area_int/(area_union + 1e-6)
# delete all indexes from the index list that overlap 'last' by more than the threshold
idxs = np.delete(idxs, np.concatenate(([last],
np.where(overlap > overlap_thresh)[0])))
if len(pick) >= max_boxes:
break
# return only the bounding boxes that were picked using the integer data type
boxes = boxes[pick].astype("int")
probs = probs[pick]
return boxes, probs
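Greedy NMS as used above, reduced to a toy example (a simplified re-implementation for illustration, not the notebook's function): two near-duplicate boxes collapse to one, while a distant box survives.

```python
import numpy as np

def box_iou(a, b):
    # IoU of two (x1, y1, x2, y2) boxes
    xx1, yy1 = max(a[0], b[0]), max(a[1], b[1])
    xx2, yy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, xx2 - xx1) * max(0, yy2 - yy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / float(area_a + area_b - inter)

def simple_nms(boxes, probs, thresh=0.5):
    # Greedy NMS: keep the highest-scoring box, drop boxes overlapping it
    order = np.argsort(probs)[::-1]
    keep = []
    while len(order) > 0:
        i = order[0]
        keep.append(int(i))
        order = [j for j in order[1:] if box_iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]])
probs = np.array([0.9, 0.8, 0.7])
print(simple_nms(boxes, probs))  # the two near-duplicates collapse to one box
```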
def apply_regr_np(X, T):
"""Apply regression layer to all anchors in one feature map
Args:
X: shape=(4, 18, 25) the current anchor type for all points in the feature map
T: regression layer shape=(4, 18, 25)
Returns:
X: regressed position and size for current anchor
"""
try:
x = X[0, :, :]
y = X[1, :, :]
w = X[2, :, :]
h = X[3, :, :]
tx = T[0, :, :]
ty = T[1, :, :]
tw = T[2, :, :]
th = T[3, :, :]
cx = x + w/2.
cy = y + h/2.
cx1 = tx * w + cx
cy1 = ty * h + cy
w1 = np.exp(tw.astype(np.float64)) * w
h1 = np.exp(th.astype(np.float64)) * h
x1 = cx1 - w1/2.
y1 = cy1 - h1/2.
x1 = np.round(x1)
y1 = np.round(y1)
w1 = np.round(w1)
h1 = np.round(h1)
return np.stack([x1, y1, w1, h1])
except Exception as e:
print(e)
return X
def apply_regr(x, y, w, h, tx, ty, tw, th):
# Apply regression to x, y, w and h
try:
cx = x + w/2.
cy = y + h/2.
cx1 = tx * w + cx
cy1 = ty * h + cy
w1 = math.exp(tw) * w
h1 = math.exp(th) * h
x1 = cx1 - w1/2.
y1 = cy1 - h1/2.
x1 = int(round(x1))
y1 = int(round(y1))
w1 = int(round(w1))
h1 = int(round(h1))
return x1, y1, w1, h1
except ValueError:
return x, y, w, h
except OverflowError:
return x, y, w, h
except Exception as e:
print(e)
return x, y, w, h
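The regression decoding can be sanity-checked in isolation: the deltas are relative to the anchor centre and log-scaled relative to its width/height, so with all four deltas at zero the anchor must come back unchanged. (`decode_box` is a hypothetical helper mirroring `apply_regr` above.)

```python
import math

def decode_box(x, y, w, h, tx, ty, tw, th):
    # Invert the RPN parameterisation: shift the centre by (tx*w, ty*h),
    # scale width/height by exp(tw), exp(th), then return the top-left corner
    cx, cy = x + w / 2.0, y + h / 2.0
    cx1, cy1 = tx * w + cx, ty * h + cy
    w1, h1 = math.exp(tw) * w, math.exp(th) * h
    return cx1 - w1 / 2.0, cy1 - h1 / 2.0, w1, h1

# zero deltas must return the anchor unchanged
print(decode_box(10, 10, 20, 20, 0, 0, 0, 0))
```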
def calc_iou(R, img_data, C, class_mapping):
"""Converts from (x1,y1,x2,y2) to (x,y,w,h) format
Args:
R: bboxes, probs
"""
bboxes = img_data['bboxes']
(width, height) = (img_data['width'], img_data['height'])
# get image dimensions for resizing
(resized_width, resized_height) = get_new_img_size(width, height, C.im_size)
gta = np.zeros((len(bboxes), 4))
for bbox_num, bbox in enumerate(bboxes):
# get the GT box coordinates, and resize to account for image resizing
# gta[bbox_num, 0] = (40 * (600 / 800)) / 16 = int(round(1.875)) = 2 (x in feature map)
gta[bbox_num, 0] = int(round(bbox['x1'] * (resized_width / float(width))/C.rpn_stride))
gta[bbox_num, 1] = int(round(bbox['x2'] * (resized_width / float(width))/C.rpn_stride))
gta[bbox_num, 2] = int(round(bbox['y1'] * (resized_height / float(height))/C.rpn_stride))
gta[bbox_num, 3] = int(round(bbox['y2'] * (resized_height / float(height))/C.rpn_stride))
x_roi = []
y_class_num = []
y_class_regr_coords = []
y_class_regr_label = []
IoUs = [] # for debugging only
# R.shape[0]: number of bboxes (=300 from non_max_suppression)
for ix in range(R.shape[0]):
(x1, y1, x2, y2) = R[ix, :]
x1 = int(round(x1))
y1 = int(round(y1))
x2 = int(round(x2))
y2 = int(round(y2))
best_iou = 0.0
best_bbox = -1
# Iterate through all the ground-truth bboxes to calculate the iou
for bbox_num in range(len(bboxes)):
curr_iou = iou([gta[bbox_num, 0], gta[bbox_num, 2], gta[bbox_num, 1], gta[bbox_num, 3]], [x1, y1, x2, y2])
# Find the corresponding ground-truth bbox_num with the largest iou
if curr_iou > best_iou:
best_iou = curr_iou
best_bbox = bbox_num
if best_iou < C.classifier_min_overlap:
continue
else:
w = x2 - x1
h = y2 - y1
x_roi.append([x1, y1, w, h])
IoUs.append(best_iou)
if C.classifier_min_overlap <= best_iou < C.classifier_max_overlap:
# hard negative example
cls_name = 'bg'
elif C.classifier_max_overlap <= best_iou:
cls_name = bboxes[best_bbox]['class']
cxg = (gta[best_bbox, 0] + gta[best_bbox, 1]) / 2.0
cyg = (gta[best_bbox, 2] + gta[best_bbox, 3]) / 2.0
cx = x1 + w / 2.0
cy = y1 + h / 2.0
tx = (cxg - cx) / float(w)
ty = (cyg - cy) / float(h)
tw = np.log((gta[best_bbox, 1] - gta[best_bbox, 0]) / float(w))
th = np.log((gta[best_bbox, 3] - gta[best_bbox, 2]) / float(h))
else:
print('roi = {}'.format(best_iou))
raise RuntimeError
class_num = class_mapping[cls_name]
class_label = len(class_mapping) * [0]
class_label[class_num] = 1
y_class_num.append(copy.deepcopy(class_label))
coords = [0] * 4 * (len(class_mapping) - 1)
labels = [0] * 4 * (len(class_mapping) - 1)
if cls_name != 'bg':
label_pos = 4 * class_num
sx, sy, sw, sh = C.classifier_regr_std
coords[label_pos:4+label_pos] = [sx*tx, sy*ty, sw*tw, sh*th]
labels[label_pos:4+label_pos] = [1, 1, 1, 1]
y_class_regr_coords.append(copy.deepcopy(coords))
y_class_regr_label.append(copy.deepcopy(labels))
else:
y_class_regr_coords.append(copy.deepcopy(coords))
y_class_regr_label.append(copy.deepcopy(labels))
if len(x_roi) == 0:
return None, None, None, None
# bboxes that iou > C.classifier_min_overlap for all gt bboxes in 300 non_max_suppression bboxes
X = np.array(x_roi)
# one hot code for bboxes from above => x_roi (X)
Y1 = np.array(y_class_num)
# corresponding labels and corresponding gt bboxes
Y2 = np.concatenate([np.array(y_class_regr_label),np.array(y_class_regr_coords)],axis=1)
return np.expand_dims(X, axis=0), np.expand_dims(Y1, axis=0), np.expand_dims(Y2, axis=0), IoUs
# + colab={} colab_type="code" id="vT6X-fqJ1RSl"
def rpn_to_roi(rpn_layer, regr_layer, C, dim_ordering, use_regr=True, max_boxes=300,overlap_thresh=0.9):
"""Convert rpn layer to roi bboxes
Args: (num_anchors = 9)
rpn_layer: output layer for rpn classification
shape (1, feature_map.height, feature_map.width, num_anchors)
Might be (1, 18, 25, 18) if the resized image is 400 wide and 300 high
regr_layer: output layer for rpn regression
shape (1, feature_map.height, feature_map.width, num_anchors * 4)
Might be (1, 18, 25, 72) if the resized image is 400 wide and 300 high
C: config
use_regr: Whether to use bbox regression in the rpn
max_boxes: max bboxes number for non-max-suppression (NMS)
overlap_thresh: If iou in NMS is larger than this threshold, drop the box
Returns:
result: boxes from non-max-suppression (shape=(300, 4))
boxes: coordinates for bboxes (on the feature map)
"""
regr_layer = regr_layer / C.std_scaling
anchor_sizes = C.anchor_box_scales # (3 in here)
anchor_ratios = C.anchor_box_ratios # (3 in here)
assert rpn_layer.shape[0] == 1
(rows, cols) = rpn_layer.shape[1:3]
curr_layer = 0
# A.shape = (4, feature_map.height, feature_map.width, num_anchors)
# Might be (4, 18, 25, 18) if resized image is 400 width and 300
# A is the coordinates for the 9 anchors at every point in the feature map
# => all 18x25x9=4050 anchor coordinates
A = np.zeros((4, rpn_layer.shape[1], rpn_layer.shape[2], rpn_layer.shape[3]))
for anchor_size in anchor_sizes:
for anchor_ratio in anchor_ratios:
# anchor_x = (128 * 1) / 16 = 8 => width of current anchor
# anchor_y = (128 * 2) / 16 = 16 => height of current anchor
anchor_x = (anchor_size * anchor_ratio[0])/C.rpn_stride
anchor_y = (anchor_size * anchor_ratio[1])/C.rpn_stride
# curr_layer: 0~8 (9 anchors)
# the kth anchor at every position in the feature map (9 in total)
regr = regr_layer[0, :, :, 4 * curr_layer:4 * curr_layer + 4] # shape => (18, 25, 4)
regr = np.transpose(regr, (2, 0, 1)) # shape => (4, 18, 25)
# Create 18x25 mesh grid
# For every point in x, there are all the y points and vice versa
# X.shape = (18, 25)
# Y.shape = (18, 25)
X, Y = np.meshgrid(np.arange(cols), np.arange(rows))
# Calculate anchor position and size for each feature map point
A[0, :, :, curr_layer] = X - anchor_x/2 # Top left x coordinate
A[1, :, :, curr_layer] = Y - anchor_y/2 # Top left y coordinate
A[2, :, :, curr_layer] = anchor_x # width of current anchor
A[3, :, :, curr_layer] = anchor_y # height of current anchor
# Apply regression to x, y, w and h if there is rpn regression layer
if use_regr:
A[:, :, :, curr_layer] = apply_regr_np(A[:, :, :, curr_layer], regr)
# Ensure width and height are at least 1
A[2, :, :, curr_layer] = np.maximum(1, A[2, :, :, curr_layer])
A[3, :, :, curr_layer] = np.maximum(1, A[3, :, :, curr_layer])
# Convert (x, y , w, h) to (x1, y1, x2, y2)
# x1, y1 is top left coordinate
# x2, y2 is bottom right coordinate
A[2, :, :, curr_layer] += A[0, :, :, curr_layer]
A[3, :, :, curr_layer] += A[1, :, :, curr_layer]
# Avoid bboxes drawn outside the feature map
A[0, :, :, curr_layer] = np.maximum(0, A[0, :, :, curr_layer])
A[1, :, :, curr_layer] = np.maximum(0, A[1, :, :, curr_layer])
A[2, :, :, curr_layer] = np.minimum(cols-1, A[2, :, :, curr_layer])
A[3, :, :, curr_layer] = np.minimum(rows-1, A[3, :, :, curr_layer])
curr_layer += 1
all_boxes = np.reshape(A.transpose((0, 3, 1, 2)), (4, -1)).transpose((1, 0)) # shape=(4050, 4)
all_probs = rpn_layer.transpose((0, 3, 1, 2)).reshape((-1)) # shape=(4050,)
x1 = all_boxes[:, 0]
y1 = all_boxes[:, 1]
x2 = all_boxes[:, 2]
y2 = all_boxes[:, 3]
# Find the degenerate bboxes (x2 <= x1 or y2 <= y1) and delete them from the list
idxs = np.where((x1 - x2 >= 0) | (y1 - y2 >= 0))
all_boxes = np.delete(all_boxes, idxs, 0)
all_probs = np.delete(all_probs, idxs, 0)
# Apply non_max_suppression
# Only extract the bboxes. Don't need rpn probs in the later process
result = non_max_suppression_fast(all_boxes, all_probs, overlap_thresh=overlap_thresh, max_boxes=max_boxes)[0]
return result
# + [markdown] colab_type="text" id="Kk14GTaNmqoo"
#
#
# ---
#
#
# + [markdown] colab_type="text" id="oNsi6HtyJPSb"
#
#
# ---
#
#
# + [markdown] colab_type="text" id="TVmMqXE5x70U"
# ### Start training
# + colab={} colab_type="code" id="C66bqGuOq7w6"
base_path = 'base_path/'
train_path = 'annotation.txt' # Training data (annotation file)
num_rois = 4 # Number of RoIs to process at once.
# Augmentation flag
horizontal_flips = True # Augment with horizontal flips in training.
vertical_flips = True # Augment with vertical flips in training.
rot_90 = True # Augment with 90 degree rotations in training.
output_weight_path = os.path.join(base_path, 'model/model_frcnn_vgg.hdf5')
record_path = os.path.join(base_path, 'model/record.csv') # Record data (used to save the losses, classification accuracy and mean average precision)
base_weight_path = os.path.join(base_path, 'model/vgg16_weights_tf_dim_ordering_tf_kernels.h5')
config_output_filename = os.path.join(base_path, 'model_vgg_config.pickle')
# + colab={} colab_type="code" id="J3oAmbbEutH0"
# Create the config
C = Config()
C.use_horizontal_flips = horizontal_flips
C.use_vertical_flips = vertical_flips
C.rot_90 = rot_90
C.record_path = record_path
C.model_path = output_weight_path
C.num_rois = num_rois
C.base_net_weights = base_weight_path
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 36108, "status": "ok", "timestamp": 1542545119215, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="yiEaAmb-x-so" outputId="eb280e97-7692-470f-fbf2-e0f1cf29ef31"
#--------------------------------------------------------#
# This step will take some time to load the data        #
#--------------------------------------------------------#
st = time.time()
train_imgs, classes_count, class_mapping = get_data(train_path)
print()
print('Took %0.2f mins to load the data' % ((time.time()-st)/60) )
# + colab={"base_uri": "https://localhost:8080/", "height": 232} colab_type="code" executionInfo={"elapsed": 1523, "status": "error", "timestamp": 1542596406207, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="x-nuSdC56GsK" outputId="7b7d16df-5aa4-4b7a-decf-b0b24eec0758"
if 'bg' not in classes_count:
classes_count['bg'] = 0
class_mapping['bg'] = len(class_mapping)
# e.g.
# classes_count: {'Car': 2383, 'Mobile phone': 1108, 'Person': 3745, 'bg': 0}
# class_mapping: {'Person': 0, 'Car': 1, 'Mobile phone': 2, 'bg': 3}
C.class_mapping = class_mapping
print('Training images per class:')
pprint.pprint(classes_count)
print('Num classes (including bg) = {}'.format(len(classes_count)))
print(class_mapping)
# Save the configuration
with open(config_output_filename, 'wb') as config_f:
pickle.dump(C,config_f)
print('Config has been written to {}, and can be loaded when testing to ensure correct results'.format(config_output_filename))
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 610, "status": "ok", "timestamp": 1542545124369, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="LFlq36Sx4F4O" outputId="e2576f93-290b-48ff-ed8c-fda147fc53df"
# Shuffle the images with seed
random.seed(1)
random.shuffle(train_imgs)
print('Num train samples (images) {}'.format(len(train_imgs)))
# + colab={} colab_type="code" id="GXIV1uXyBo3v"
# Get train data generator which generate X, Y, image_data
data_gen_train = get_anchor_gt(train_imgs, C, get_img_output_length, mode='train')
# + [markdown] colab_type="text" id="y_yM5jkKqM1G"
# #### Explore 'data_gen_train'
#
# data_gen_train is a **generator**, so we get the data by calling **next(data_gen_train)**
# + colab={} colab_type="code" id="nIDnio1UlRHi"
X, Y, image_data, debug_img, debug_num_pos = next(data_gen_train)
# + colab={"base_uri": "https://localhost:8080/", "height": 848} colab_type="code" executionInfo={"elapsed": 1611, "status": "ok", "timestamp": 1542544779625, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="dZXoJ2e3l2Ey" outputId="eb1a0f85-93f4-46cb-9046-77afd48b59b5"
print('Original image: height=%d width=%d'%(image_data['height'], image_data['width']))
print('Resized image: height=%d width=%d C.im_size=%d'%(X.shape[1], X.shape[2], C.im_size))
print('Feature map size: height=%d width=%d C.rpn_stride=%d'%(Y[0].shape[1], Y[0].shape[2], C.rpn_stride))
print(X.shape)
print(str(len(Y))+" includes 'y_rpn_cls' and 'y_rpn_regr'")
print('Shape of y_rpn_cls {}'.format(Y[0].shape))
print('Shape of y_rpn_regr {}'.format(Y[1].shape))
print(image_data)
print('Number of positive anchors for this image: %d' % (debug_num_pos))
if debug_num_pos==0:
gt_x1, gt_x2 = image_data['bboxes'][0]['x1']*(X.shape[2]/image_data['width']), image_data['bboxes'][0]['x2']*(X.shape[2]/image_data['width'])
gt_y1, gt_y2 = image_data['bboxes'][0]['y1']*(X.shape[1]/image_data['height']), image_data['bboxes'][0]['y2']*(X.shape[1]/image_data['height'])
gt_x1, gt_y1, gt_x2, gt_y2 = int(gt_x1), int(gt_y1), int(gt_x2), int(gt_y2)
img = debug_img.copy()
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
color = (0, 255, 0)
cv2.putText(img, 'gt bbox', (gt_x1, gt_y1-5), cv2.FONT_HERSHEY_DUPLEX, 0.7, color, 1)
cv2.rectangle(img, (gt_x1, gt_y1), (gt_x2, gt_y2), color, 2)
cv2.circle(img, (int((gt_x1+gt_x2)/2), int((gt_y1+gt_y2)/2)), 3, color, -1)
plt.grid()
plt.imshow(img)
plt.show()
else:
cls = Y[0][0]
pos_cls = np.where(cls==1)
print(pos_cls)
regr = Y[1][0]
pos_regr = np.where(regr==1)
print(pos_regr)
print('y_rpn_cls for possible pos anchor: {}'.format(cls[pos_cls[0][0],pos_cls[1][0],:]))
print('y_rpn_regr for positive anchor: {}'.format(regr[pos_regr[0][0],pos_regr[1][0],:]))
gt_x1, gt_x2 = image_data['bboxes'][0]['x1']*(X.shape[2]/image_data['width']), image_data['bboxes'][0]['x2']*(X.shape[2]/image_data['width'])
gt_y1, gt_y2 = image_data['bboxes'][0]['y1']*(X.shape[1]/image_data['height']), image_data['bboxes'][0]['y2']*(X.shape[1]/image_data['height'])
gt_x1, gt_y1, gt_x2, gt_y2 = int(gt_x1), int(gt_y1), int(gt_x2), int(gt_y2)
img = debug_img.copy()
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
color = (0, 255, 0)
# cv2.putText(img, 'gt bbox', (gt_x1, gt_y1-5), cv2.FONT_HERSHEY_DUPLEX, 0.7, color, 1)
cv2.rectangle(img, (gt_x1, gt_y1), (gt_x2, gt_y2), color, 2)
cv2.circle(img, (int((gt_x1+gt_x2)/2), int((gt_y1+gt_y2)/2)), 3, color, -1)
# Add text
textLabel = 'gt bbox'
(retval,baseLine) = cv2.getTextSize(textLabel,cv2.FONT_HERSHEY_COMPLEX,0.5,1)
textOrg = (gt_x1, gt_y1+5)
cv2.rectangle(img, (textOrg[0] - 5, textOrg[1]+baseLine - 5), (textOrg[0]+retval[0] + 5, textOrg[1]-retval[1] - 5), (0, 0, 0), 2)
cv2.rectangle(img, (textOrg[0] - 5,textOrg[1]+baseLine - 5), (textOrg[0]+retval[0] + 5, textOrg[1]-retval[1] - 5), (255, 255, 255), -1)
cv2.putText(img, textLabel, textOrg, cv2.FONT_HERSHEY_DUPLEX, 0.5, (0, 0, 0), 1)
# Draw positive anchors according to the y_rpn_regr
for i in range(debug_num_pos):
color = (100+i*(155/4), 0, 100+i*(155/4))
idx = pos_regr[2][i*4]/4
anchor_size = C.anchor_box_scales[int(idx/3)]
anchor_ratio = C.anchor_box_ratios[2-int((idx+1)%3)]
center = (pos_regr[1][i*4]*C.rpn_stride, pos_regr[0][i*4]*C.rpn_stride)
print('Center position of positive anchor: ', center)
cv2.circle(img, center, 3, color, -1)
anc_w, anc_h = anchor_size*anchor_ratio[0], anchor_size*anchor_ratio[1]
cv2.rectangle(img, (center[0]-int(anc_w/2), center[1]-int(anc_h/2)), (center[0]+int(anc_w/2), center[1]+int(anc_h/2)), color, 2)
# cv2.putText(img, 'pos anchor bbox '+str(i+1), (center[0]-int(anc_w/2), center[1]-int(anc_h/2)-5), cv2.FONT_HERSHEY_DUPLEX, 0.5, color, 1)
print('The green bbox is the ground-truth bbox. The others are positive anchors')
plt.figure(figsize=(8,8))
plt.grid()
plt.imshow(img)
plt.show()
# + [markdown] colab_type="text" id="i4XSyIoubCMY"
# #### Build the model
# + colab={} colab_type="code" id="jODipXFDnDJ0"
input_shape_img = (None, None, 3)
img_input = Input(shape=input_shape_img)
roi_input = Input(shape=(None, 4))
# define the base network (VGG here, can be Resnet50, Inception, etc)
shared_layers = nn_base(img_input, trainable=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 68} colab_type="code" executionInfo={"elapsed": 10162, "status": "ok", "timestamp": 1542545147976, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="udTeQMVhfSzw" outputId="11c3bda4-96e2-4658-ab6f-e5bf38519178"
# define the RPN, built on the base layers
num_anchors = len(C.anchor_box_scales) * len(C.anchor_box_ratios) # 9
rpn = rpn_layer(shared_layers, num_anchors)
classifier = classifier_layer(shared_layers, roi_input, C.num_rois, nb_classes=len(classes_count))
model_rpn = Model(img_input, rpn[:2])
model_classifier = Model([img_input, roi_input], classifier)
# this is a model that holds both the RPN and the classifier, used to load/save weights for the models
model_all = Model([img_input, roi_input], rpn[:2] + classifier)
# Because a Google Colab session only runs for a few hours at a time (after which you must reconnect),
# we save the model and reload it to continue training
if not os.path.isfile(C.model_path):
# If this is the beginning of training, load a pre-trained base network such as VGG-16
try:
print('This is the first training run')
print('loading weights from {}'.format(C.base_net_weights))
model_rpn.load_weights(C.base_net_weights, by_name=True)
model_classifier.load_weights(C.base_net_weights, by_name=True)
except:
print('Could not load pretrained model weights. Weights can be found in the keras application folder \
https://github.com/fchollet/keras/tree/master/keras/applications')
# Create the record.csv file to record losses, acc and mAP
record_df = pd.DataFrame(columns=['mean_overlapping_bboxes', 'class_acc', 'loss_rpn_cls', 'loss_rpn_regr', 'loss_class_cls', 'loss_class_regr', 'curr_loss', 'elapsed_time', 'mAP'])
else:
# If this is a continued training, load the trained model from before
print('Continue training based on previous trained model')
print('Loading weights from {}'.format(C.model_path))
model_rpn.load_weights(C.model_path, by_name=True)
model_classifier.load_weights(C.model_path, by_name=True)
# Load the records
record_df = pd.read_csv(record_path)
r_mean_overlapping_bboxes = record_df['mean_overlapping_bboxes']
r_class_acc = record_df['class_acc']
r_loss_rpn_cls = record_df['loss_rpn_cls']
r_loss_rpn_regr = record_df['loss_rpn_regr']
r_loss_class_cls = record_df['loss_class_cls']
r_loss_class_regr = record_df['loss_class_regr']
r_curr_loss = record_df['curr_loss']
r_elapsed_time = record_df['elapsed_time']
r_mAP = record_df['mAP']
print('Already trained %dK batches'% (len(record_df)))
# + colab={} colab_type="code" id="-ULrg0V1soIR"
optimizer = Adam(lr=1e-5)
optimizer_classifier = Adam(lr=1e-5)
model_rpn.compile(optimizer=optimizer, loss=[rpn_loss_cls(num_anchors), rpn_loss_regr(num_anchors)])
model_classifier.compile(optimizer=optimizer_classifier, loss=[class_loss_cls, class_loss_regr(len(classes_count)-1)], metrics={'dense_class_{}'.format(len(classes_count)): 'accuracy'})
model_all.compile(optimizer='sgd', loss='mae')
# + colab={} colab_type="code" id="Qz2BYzL6sqfu"
# Training setting
total_epochs = len(record_df)
r_epochs = len(record_df)
epoch_length = 1000
num_epochs = 40
iter_num = 0
total_epochs += num_epochs
losses = np.zeros((epoch_length, 5))
rpn_accuracy_rpn_monitor = []
rpn_accuracy_for_epoch = []
if len(record_df)==0:
best_loss = np.Inf
else:
best_loss = np.min(r_curr_loss)
# + colab={"base_uri": "https://localhost:8080/", "height": 34} colab_type="code" executionInfo={"elapsed": 614, "status": "ok", "timestamp": 1542544971410, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="JDysEDQA2DUz" outputId="d38a5e1d-b44d-4645-b9f9-16802cc0f552"
print(len(record_df))
# + colab={"base_uri": "https://localhost:8080/", "height": 4916} colab_type="code" executionInfo={"elapsed": 1332624, "status": "ok", "timestamp": 1542188908991, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/-ZH_vkc0a6g0/AAAAAAAAAAI/AAAAAAAAAAc/x8TPjkqmxys/s64/photo.jpg", "userId": "13488364327507272606"}, "user_tz": -480} id="dRXtd5W30DRN" outputId="438803d3-01c9-4818-859b-9a54e0243f93"
start_time = time.time()

for epoch_num in range(num_epochs):

    progbar = generic_utils.Progbar(epoch_length)
    print('Epoch {}/{}'.format(r_epochs + 1, total_epochs))

    r_epochs += 1

    while True:
        try:
            if len(rpn_accuracy_rpn_monitor) == epoch_length and C.verbose:
                mean_overlapping_bboxes = float(sum(rpn_accuracy_rpn_monitor))/len(rpn_accuracy_rpn_monitor)
                rpn_accuracy_rpn_monitor = []
                # print('Average number of overlapping bounding boxes from RPN = {} for {} previous iterations'.format(mean_overlapping_bboxes, epoch_length))
                if mean_overlapping_bboxes == 0:
                    print('RPN is not producing bounding boxes that overlap the ground truth boxes. Check RPN settings or keep training.')

            # Generate X (x_img) and label Y ([y_rpn_cls, y_rpn_regr])
            X, Y, img_data, debug_img, debug_num_pos = next(data_gen_train)

            # Train rpn model and get loss value [_, loss_rpn_cls, loss_rpn_regr]
            loss_rpn = model_rpn.train_on_batch(X, Y)

            # Get predicted rpn from rpn model [rpn_cls, rpn_regr]
            P_rpn = model_rpn.predict_on_batch(X)

            # R: bboxes (shape=(300,4))
            # Convert rpn layer to roi bboxes
            R = rpn_to_roi(P_rpn[0], P_rpn[1], C, K.image_dim_ordering(), use_regr=True, overlap_thresh=0.7, max_boxes=300)

            # note: calc_iou converts from (x1,y1,x2,y2) to (x,y,w,h) format
            # X2: bboxes that iou > C.classifier_min_overlap for all gt bboxes in 300 non_max_suppression bboxes
            # Y1: one hot code for bboxes from above => x_roi (X)
            # Y2: corresponding labels and corresponding gt bboxes
            X2, Y1, Y2, IouS = calc_iou(R, img_data, C, class_mapping)

            # If X2 is None means there are no matching bboxes
            if X2 is None:
                rpn_accuracy_rpn_monitor.append(0)
                rpn_accuracy_for_epoch.append(0)
                continue

            # Find out the positive anchors and negative anchors
            neg_samples = np.where(Y1[0, :, -1] == 1)
            pos_samples = np.where(Y1[0, :, -1] == 0)

            if len(neg_samples) > 0:
                neg_samples = neg_samples[0]
            else:
                neg_samples = []

            if len(pos_samples) > 0:
                pos_samples = pos_samples[0]
            else:
                pos_samples = []

            rpn_accuracy_rpn_monitor.append(len(pos_samples))
            rpn_accuracy_for_epoch.append(len(pos_samples))

            if C.num_rois > 1:
                # If there are at least num_rois//2 positive anchors, randomly choose num_rois//2 pos samples
                if len(pos_samples) < C.num_rois//2:
                    selected_pos_samples = pos_samples.tolist()
                else:
                    selected_pos_samples = np.random.choice(pos_samples, C.num_rois//2, replace=False).tolist()

                # Randomly choose (num_rois - num_pos) neg samples
                try:
                    selected_neg_samples = np.random.choice(neg_samples, C.num_rois - len(selected_pos_samples), replace=False).tolist()
                except ValueError:
                    # Not enough negative samples to draw without replacement, so sample with replacement
                    selected_neg_samples = np.random.choice(neg_samples, C.num_rois - len(selected_pos_samples), replace=True).tolist()

                # Save all the pos and neg samples in sel_samples
                sel_samples = selected_pos_samples + selected_neg_samples
            else:
                # in the extreme case where num_rois = 1, we pick a random pos or neg sample
                selected_pos_samples = pos_samples.tolist()
                selected_neg_samples = neg_samples.tolist()
                if np.random.randint(0, 2):
                    sel_samples = random.choice(neg_samples)
                else:
                    sel_samples = random.choice(pos_samples)

            # training_data: [X, X2[:, sel_samples, :]]
            # labels: [Y1[:, sel_samples, :], Y2[:, sel_samples, :]]
            # X                     => img_data resized image
            # X2[:, sel_samples, :] => num_rois (4 in here) bboxes which contains selected neg and pos
            # Y1[:, sel_samples, :] => one hot encode for num_rois bboxes which contains selected neg and pos
            # Y2[:, sel_samples, :] => labels and gt bboxes for num_rois bboxes which contains selected neg and pos
            loss_class = model_classifier.train_on_batch([X, X2[:, sel_samples, :]], [Y1[:, sel_samples, :], Y2[:, sel_samples, :]])

            losses[iter_num, 0] = loss_rpn[1]
            losses[iter_num, 1] = loss_rpn[2]

            losses[iter_num, 2] = loss_class[1]
            losses[iter_num, 3] = loss_class[2]
            losses[iter_num, 4] = loss_class[3]

            iter_num += 1

            progbar.update(iter_num, [('rpn_cls', np.mean(losses[:iter_num, 0])), ('rpn_regr', np.mean(losses[:iter_num, 1])),
                                      ('final_cls', np.mean(losses[:iter_num, 2])), ('final_regr', np.mean(losses[:iter_num, 3]))])

            if iter_num == epoch_length:
                loss_rpn_cls = np.mean(losses[:, 0])
                loss_rpn_regr = np.mean(losses[:, 1])
                loss_class_cls = np.mean(losses[:, 2])
                loss_class_regr = np.mean(losses[:, 3])
                class_acc = np.mean(losses[:, 4])

                mean_overlapping_bboxes = float(sum(rpn_accuracy_for_epoch)) / len(rpn_accuracy_for_epoch)
                rpn_accuracy_for_epoch = []

                if C.verbose:
                    print('Mean number of bounding boxes from RPN overlapping ground truth boxes: {}'.format(mean_overlapping_bboxes))
                    print('Classifier accuracy for bounding boxes from RPN: {}'.format(class_acc))
                    print('Loss RPN classifier: {}'.format(loss_rpn_cls))
                    print('Loss RPN regression: {}'.format(loss_rpn_regr))
                    print('Loss Detector classifier: {}'.format(loss_class_cls))
                    print('Loss Detector regression: {}'.format(loss_class_regr))
                    print('Total loss: {}'.format(loss_rpn_cls + loss_rpn_regr + loss_class_cls + loss_class_regr))
                    print('Elapsed time: {}'.format(time.time() - start_time))

                elapsed_time = (time.time()-start_time)/60
                curr_loss = loss_rpn_cls + loss_rpn_regr + loss_class_cls + loss_class_regr
                iter_num = 0
                start_time = time.time()

                if curr_loss < best_loss:
                    if C.verbose:
                        print('Total loss decreased from {} to {}, saving weights'.format(best_loss, curr_loss))
                    best_loss = curr_loss
                    model_all.save_weights(C.model_path)

                new_row = {'mean_overlapping_bboxes': round(mean_overlapping_bboxes, 3),
                           'class_acc': round(class_acc, 3),
                           'loss_rpn_cls': round(loss_rpn_cls, 3),
                           'loss_rpn_regr': round(loss_rpn_regr, 3),
                           'loss_class_cls': round(loss_class_cls, 3),
                           'loss_class_regr': round(loss_class_regr, 3),
                           'curr_loss': round(curr_loss, 3),
                           'elapsed_time': round(elapsed_time, 3),
                           'mAP': 0}

                record_df = record_df.append(new_row, ignore_index=True)
                record_df.to_csv(record_path, index=0)

                break

        except Exception as e:
            print('Exception: {}'.format(e))
            continue
print('Training complete, exiting.')
# +
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.plot(np.arange(0, r_epochs), record_df['mean_overlapping_bboxes'], 'r')
plt.title('mean_overlapping_bboxes')
plt.subplot(1,2,2)
plt.plot(np.arange(0, r_epochs), record_df['class_acc'], 'r')
plt.title('class_acc')
plt.show()
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.plot(np.arange(0, r_epochs), record_df['loss_rpn_cls'], 'r')
plt.title('loss_rpn_cls')
plt.subplot(1,2,2)
plt.plot(np.arange(0, r_epochs), record_df['loss_rpn_regr'], 'r')
plt.title('loss_rpn_regr')
plt.show()
plt.figure(figsize=(15,5))
plt.subplot(1,2,1)
plt.plot(np.arange(0, r_epochs), record_df['loss_class_cls'], 'r')
plt.title('loss_class_cls')
plt.subplot(1,2,2)
plt.plot(np.arange(0, r_epochs), record_df['loss_class_regr'], 'r')
plt.title('loss_class_regr')
plt.show()
plt.plot(np.arange(0, r_epochs), record_df['curr_loss'], 'r')
plt.title('total_loss')
plt.show()
# plt.figure(figsize=(15,5))
# plt.subplot(1,2,1)
# plt.plot(np.arange(0, r_epochs), record_df['curr_loss'], 'r')
# plt.title('total_loss')
# plt.subplot(1,2,2)
# plt.plot(np.arange(0, r_epochs), record_df['elapsed_time'], 'r')
# plt.title('elapsed_time')
# plt.show()
# plt.title('loss')
# plt.plot(np.arange(0, r_epochs), record_df['loss_rpn_cls'], 'b')
# plt.plot(np.arange(0, r_epochs), record_df['loss_rpn_regr'], 'g')
# plt.plot(np.arange(0, r_epochs), record_df['loss_class_cls'], 'r')
# plt.plot(np.arange(0, r_epochs), record_df['loss_class_regr'], 'c')
# # plt.plot(np.arange(0, r_epochs), record_df['curr_loss'], 'm')
# plt.show()
MY_frcnn_train_vgg.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Frequency correlation plots for simulated populations
#
# Another attempt at calculating clade frequencies from tip-to-clade mappings without using a full tree.
# +
import altair as alt
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import seaborn as sns
# %matplotlib inline
# -
sns.set_style("white")
plt.style.use("huddlej")
mpl.rcParams['savefig.dpi'] = 200
mpl.rcParams['figure.dpi'] = 200
mpl.rcParams['font.weight'] = 300
mpl.rcParams['axes.labelweight'] = 300
mpl.rcParams['font.size'] = 18
# !pwd
# +
def matthews_correlation_coefficient(tp, tn, fp, fn):
    """Return Matthews correlation coefficient for values from a confusion matrix.

    Implementation is based on the definition from wikipedia:
    https://en.wikipedia.org/wiki/Matthews_correlation_coefficient
    """
    numerator = (tp * tn) - (fp * fn)
    denominator = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

    if denominator == 0:
        denominator = 1

    return float(numerator) / denominator


def get_matthews_correlation_coefficient_for_data_frame(freq_df, return_confusion_matrix=False):
    """Calculate Matthew's correlation coefficient from a given pandas data frame
    with columns for initial, observed, and predicted frequencies.
    """
    observed_growth = (freq_df["observed_frequency"] > freq_df["initial_frequency"])
    predicted_growth = (freq_df["estimated_frequency"] > freq_df["initial_frequency"])
    true_positives = ((observed_growth) & (predicted_growth)).sum()
    false_positives = ((~observed_growth) & (predicted_growth)).sum()

    observed_decline = (freq_df["observed_frequency"] <= freq_df["initial_frequency"])
    predicted_decline = (freq_df["estimated_frequency"] <= freq_df["initial_frequency"])
    true_negatives = ((observed_decline) & (predicted_decline)).sum()
    false_negatives = ((~observed_decline) & (predicted_decline)).sum()

    mcc = matthews_correlation_coefficient(
        true_positives,
        true_negatives,
        false_positives,
        false_negatives
    )

    if return_confusion_matrix:
        confusion_matrix = {
            "tp": true_positives,
            "tn": true_negatives,
            "fp": false_positives,
            "fn": false_negatives
        }
        return mcc, confusion_matrix
    else:
        return mcc
# -
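# As a quick sanity check, the confusion-matrix helper can be exercised on
# hand-built counts (a toy illustration, not part of the analysis below; the
# function is restated here only so the snippet runs on its own):

```python
import numpy as np

def matthews_correlation_coefficient(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    numerator = (tp * tn) - (fp * fn)
    denominator = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denominator == 0:
        denominator = 1
    return float(numerator) / denominator

# A predictor that is always right scores 1; one that is always wrong scores -1.
print(matthews_correlation_coefficient(5, 5, 0, 0))   # 1.0
print(matthews_correlation_coefficient(0, 0, 5, 5))   # -1.0
```

# The `denominator == 0` guard means an all-zero confusion matrix yields 0
# rather than a division error.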
# ## Load data
data_root = "../results/builds/simulated/simulated_sample_3/"
tips = pd.read_csv(
"%s/tip_attributes_with_weighted_distances.tsv" % data_root,
sep="\t",
parse_dates=["timepoint"],
usecols=["strain", "timepoint", "frequency"]
)
first_validation_timepoint = "2023-10-01"
tips = tips.query("timepoint >= '%s'" % first_validation_timepoint).copy()
tips.shape
tips["future_timepoint"] = tips["timepoint"] + pd.DateOffset(months=12)
tips.set_index(["timepoint", "future_timepoint", "strain"], inplace=True)
tips.head(1)
tips_to_clades = pd.read_csv("%s/tips_to_clades.tsv" % data_root, sep="\t", parse_dates=["timepoint"])
tips_to_clades = tips_to_clades.query("timepoint >= '%s'" % first_validation_timepoint).copy()
tips_to_clades = tips_to_clades.rename(columns={"tip": "strain"})
tips_to_clades.set_index(["timepoint", "strain"], inplace=True)
tips_to_clades.head()
tips_to_clades.shape
forecasts = pd.read_csv(
"%s/forecasts.tsv" % data_root,
sep="\t",
parse_dates=["timepoint"],
usecols=["timepoint", "strain", "projected_frequency"]
)
forecasts.set_index(["timepoint", "strain"], inplace=True)
forecasts.head()
full_forecasts = pd.read_csv(
"%s/forecasts.tsv" % data_root,
sep="\t",
parse_dates=["timepoint", "future_timepoint"]
)
full_forecasts = full_forecasts.query("timepoint >= '%s'" % first_validation_timepoint).copy()
# ## Find clades for tips at future timepoint
# Annotate projected frequencies for each tip by timepoint.
tips = tips.join(forecasts, on=["timepoint", "strain"])
tips_with_current_clades = tips.join(
tips_to_clades,
on=["timepoint", "strain"]
).reset_index().rename(columns={
"level_0": "timepoint",
"level_2": "strain"
})
tips_with_current_clades.shape
tips_with_current_clades.head()
current_tips_with_future_clades = tips.join(
tips_to_clades,
on=["future_timepoint", "strain"]
).reset_index().rename(columns={
"level_1": "future_timepoint",
"level_2": "strain"
})
current_tips_with_future_clades.head()
current_tips_with_future_clades.shape
# If we take the closest clade to each tip and sum tip frequencies by timepoint, we should get 100% frequency for each timepoint.
current_tips_with_future_clades.groupby(["timepoint", "future_timepoint", "strain", "frequency"]).first().reset_index().groupby([
"timepoint"
])["frequency"].sum().values
# Get distinct list of clades for tips from the future timepoint (this is different from the list of all possible future clades because it is filtered to just those associated with tips that are alive at the future timepoint).
distinct_clades_for_future_tips = tips_with_current_clades.loc[
:,
["timepoint", "future_timepoint", "clade_membership"]
].drop_duplicates()
distinct_clades_for_future_tips.head()
distinct_clades_for_future_tips.shape
# Merge current tips with future clades with that distinct list and take the closest clade assignment from the future based on the current tip’s depth.
current_tips_with_assigned_clades = current_tips_with_future_clades.merge(
distinct_clades_for_future_tips,
left_on=["future_timepoint", "clade_membership"],
right_on=["timepoint", "clade_membership"],
suffixes=["", "_future"],
copy=False
).sort_values(["timepoint", "strain", "depth"]).groupby([
"timepoint",
"strain"
]).first().reset_index().drop(columns=[
"depth",
"timepoint_future",
"future_timepoint_future"
])
current_tips_with_assigned_clades.head()
current_tips_with_assigned_clades.shape
current_tips_with_assigned_clades[current_tips_with_assigned_clades["strain"] == "sample_5416_3"]
# Get distinct list of clades for tips from the current timepoint.
distinct_clades_for_current_tips = current_tips_with_assigned_clades.loc[
:,
["timepoint", "future_timepoint", "clade_membership"]
].drop_duplicates()
distinct_clades_for_current_tips.head()
distinct_clades_for_current_tips.shape
# Merge future tips with current timepoint’s future clades and take the closest clade assignment from the future.
future_tips_with_assigned_clades = tips_with_current_clades.merge(
distinct_clades_for_current_tips,
left_on=["timepoint", "clade_membership"],
right_on=["future_timepoint", "clade_membership"],
suffixes=["", "_current"],
copy=False
).sort_values(["timepoint", "strain", "depth"]).groupby([
"timepoint",
"strain"
]).first().reset_index().drop(columns=[
"depth",
"timepoint_current",
"future_timepoint_current"
])
future_tips_with_assigned_clades.shape
future_tips_with_assigned_clades.head()
future_tips_with_assigned_clades.query("strain == 'sample_5416_3'")
total_frequencies_for_current_tips = current_tips_with_assigned_clades.groupby(["timepoint"])["frequency"].sum().values
np.allclose(
np.ones_like(total_frequencies_for_current_tips),
total_frequencies_for_current_tips,
1e-4
)
total_frequencies_for_future_tips = future_tips_with_assigned_clades.groupby(["timepoint"])["frequency"].sum().values
np.allclose(
np.ones_like(total_frequencies_for_future_tips),
total_frequencies_for_future_tips,
1e-4
)
future_clades_for_current_timepoints = current_tips_with_assigned_clades.groupby([
"timepoint", "future_timepoint", "clade_membership"
]).aggregate({"frequency": "sum", "projected_frequency": "sum"}).reset_index()
future_clades_for_current_timepoints.head()
future_clades_for_future_timepoints = future_tips_with_assigned_clades.groupby([
"timepoint", "future_timepoint", "clade_membership"
])["frequency"].sum().reset_index()
future_clades_for_future_timepoints.head()
np.allclose(
np.ones_like(future_clades_for_current_timepoints.groupby("timepoint")["frequency"].sum().values),
future_clades_for_current_timepoints.groupby("timepoint")["frequency"].sum().values,
1e-4
)
# Next, find future tips that belong to the same clades as the current tips or which have descended from these clades. Instead of taking every clade assigned to each tip, we want to pick the closest clade to each tip.
merged_clades = future_clades_for_current_timepoints.merge(
future_clades_for_future_timepoints,
how="outer",
left_on=["future_timepoint", "clade_membership"],
right_on=["timepoint", "clade_membership"],
suffixes=["", "_future"]
).drop(columns=["timepoint_future", "future_timepoint_future"]).sort_values([
"timepoint", "future_timepoint", "clade_membership"
]).fillna(0.0)
merged_clades.head()
merged_clades.groupby("timepoint")["frequency"].sum().values
merged_clades.groupby("timepoint")["frequency_future"].sum().values
merged_clades = merged_clades.rename(columns={
"frequency": "initial_frequency",
"projected_frequency": "estimated_frequency",
"frequency_future": "observed_frequency"
}).copy()
merged_clades["observed_growth_rate"] = (
merged_clades["observed_frequency"] / merged_clades["initial_frequency"]
)
merged_clades["estimated_growth_rate"] = (
merged_clades["estimated_frequency"] / merged_clades["initial_frequency"]
)
merged_clades.head()
merged_clades.query("timepoint == '2029-10-01'")
# ## Find and analyze large clades
#
# Find all clades with an initial frequency some minimum value (e.g., >15%).
large_clades = merged_clades.query("initial_frequency > 0.15").copy()
large_clades.head()
large_clades.shape
r, p = pearsonr(
large_clades["observed_growth_rate"],
large_clades["estimated_growth_rate"]
)
r
p
mcc, confusion_matrix = get_matthews_correlation_coefficient_for_data_frame(large_clades, True)
mcc
growth_accuracy = confusion_matrix["tp"] / float(confusion_matrix["tp"] + confusion_matrix["fp"])
growth_accuracy
decline_accuracy = confusion_matrix["tn"] / float(confusion_matrix["tn"] + confusion_matrix["fn"])
decline_accuracy
min_growth_rate = 0
max_growth_rate = large_clades.loc[:, ["observed_growth_rate", "estimated_growth_rate"]].max().max() + 0.2
pseudofrequency = 0.001
# +
large_clades["log_observed_growth_rate"] = (
np.log10((large_clades["observed_frequency"] + pseudofrequency) / (large_clades["initial_frequency"] + pseudofrequency))
)
large_clades["log_estimated_growth_rate"] = (
np.log10((large_clades["estimated_frequency"] + pseudofrequency) / (large_clades["initial_frequency"] + pseudofrequency))
)
# +
upper_limit = np.ceil(large_clades.loc[:, ["observed_growth_rate", "estimated_growth_rate"]].max().max())
log_lower_limit = large_clades.loc[:, ["log_observed_growth_rate", "log_estimated_growth_rate"]].min().min() - 0.1
log_upper_limit = np.ceil(large_clades.loc[:, ["log_observed_growth_rate", "log_estimated_growth_rate"]].max().max()) + 0.1
# -
r, p = pearsonr(
large_clades["log_observed_growth_rate"],
large_clades["log_estimated_growth_rate"]
)
r
p
# +
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.plot(
large_clades["log_observed_growth_rate"],
large_clades["log_estimated_growth_rate"],
"o",
alpha=0.4
)
ax.axhline(color="#cccccc", zorder=-5)
ax.axvline(color="#cccccc", zorder=-5)
if p < 0.001:
    p_value = "$p$ < 0.001"
else:
    p_value = "$p$ = %.3f" % p
ax.text(
0.02,
0.9,
"Growth accuracy = %.2f\nDecline accuracy = %.2f\n$R$ = %.2f\n%s" % (growth_accuracy, decline_accuracy, r, p_value),
fontsize=12,
horizontalalignment="left",
verticalalignment="center",
transform=ax.transAxes
)
ax.set_xlabel("Observed $log_{10}$ growth rate")
ax.set_ylabel("Estimated $log_{10}$ growth rate")
ax.set_title("Validation of best model", fontsize=12)
ticks = np.arange(-6, 4, 1)
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xlim(log_lower_limit, log_upper_limit)
ax.set_ylim(log_lower_limit, log_upper_limit)
ax.set_aspect("equal")
#plt.savefig("../manuscript/figures/validation-of-best-model-for-natural-populations.pdf")
# -
# ## Estimated and observed closest strains per timepoint
#
# Create a figure similar to Figure 2D in Neher et al. 2014 showing the minimum estimated distance to the future and minimum observed distance to the future per timepoint.
sorted_df = full_forecasts.dropna().sort_values(
["timepoint"]
).copy()
sorted_df["timepoint_rank"] = sorted_df.groupby("timepoint")["weighted_distance_to_future"].rank(pct=True)
best_fitness_rank_by_timepoint_df = sorted_df.sort_values(
["timepoint", "fitness"],
ascending=False
).groupby("timepoint")["timepoint_rank"].first().reset_index()
best_fitness_rank_by_timepoint_df.head()
# +
median_best_rank = best_fitness_rank_by_timepoint_df["timepoint_rank"].median()
fig, ax = plt.subplots(1, 1, figsize=(6, 4))
ax.hist(best_fitness_rank_by_timepoint_df["timepoint_rank"], bins=np.arange(0, 1.01, 0.05), label=None)
ax.axvline(
median_best_rank,
color="orange",
label="median = %i%%" % round(median_best_rank * 100, 0)
)
ax.set_xticklabels(['{:3.0f}%'.format(x*100) for x in [0, 0.2, 0.4, 0.6, 0.8, 1.0]])
ax.set_xlim(0, 1)
ax.legend(
frameon=False
)
ax.set_xlabel("Percentile rank of distance for fittest strain")
ax.set_ylabel("Number of timepoints")
# -
# ## Merge validation figures into subpanels of one figure
# +
fig = plt.figure(figsize=(8, 4), facecolor='w')
gs = gridspec.GridSpec(1, 2, width_ratios=[1, 1], wspace=0.1)
#
# Clade growth rate correlations
#
clade_ax = fig.add_subplot(gs[0])
clade_ax.plot(
large_clades["log_observed_growth_rate"],
large_clades["log_estimated_growth_rate"],
"o",
alpha=0.4
)
clade_ax.axhline(color="#cccccc", zorder=-5)
clade_ax.axvline(color="#cccccc", zorder=-5)
if p < 0.001:
    p_value = "$p$ < 0.001"
else:
    p_value = "$p$ = %.3f" % p
clade_ax.text(
0.02,
0.9,
"Growth accuracy = %.2f\nDecline accuracy = %.2f\n$R$ = %.2f\n%s" % (growth_accuracy, decline_accuracy, r, p_value),
fontsize=10,
horizontalalignment="left",
verticalalignment="center",
transform=clade_ax.transAxes
)
clade_ax.set_xlabel("Observed $log_{10}$ growth rate")
clade_ax.set_ylabel("Estimated $log_{10}$ growth rate")
ticks = np.arange(-6, 4, 1)
clade_ax.set_xticks(ticks)
clade_ax.set_yticks(ticks)
clade_ax.set_xlim(log_lower_limit, log_upper_limit)
clade_ax.set_ylim(log_lower_limit, log_upper_limit)
clade_ax.set_aspect("equal")
#
# Estimated closest strain to the future ranking
#
rank_ax = fig.add_subplot(gs[1])
median_best_rank = best_fitness_rank_by_timepoint_df["timepoint_rank"].median()
rank_ax.hist(best_fitness_rank_by_timepoint_df["timepoint_rank"], bins=np.arange(0, 1.01, 0.05), label=None)
rank_ax.axvline(
median_best_rank,
color="orange",
label="median = %i%%" % round(median_best_rank * 100, 0)
)
rank_ax.set_xticklabels(['{:3.0f}%'.format(x*100) for x in [0, 0.2, 0.4, 0.6, 0.8, 1.0]])
rank_ax.set_xlim(0, 1)
rank_ax.legend(
frameon=False
)
rank_ax.set_xlabel("Percentile rank by distance\nfor estimated best strain")
rank_ax.set_ylabel("Number of timepoints")
gs.tight_layout(fig)
plt.savefig("../manuscript/figures/validation-of-best-model-for-simulated-populations.png")
# -
analyses/2019-09-25-frequency-correlations-for-simulated-populations.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Spectral Clustering
#
# The spectrum of a matrix refers to the eigenvalues of the matrix. In spectral clustering, the eigenvalues of a similarity matrix are used to perform dimensionality reduction prior to clustering. In this tutorial we will take the reader through the construction of a similarity matrix, the construction of a Laplacian matrix, the reduction of the Laplacian to its eigenvalues, and how these eigenvalues can be used to cluster the graph.
#
# ### Similarity and Degree is used to construct Laplacian
#
# Consider the case where we have two locations on a map, $l_i, l_j \in L$. We can define a distance measure $d$ such that $\forall l_i, l_j \in L: d(l_i, l_j) \mapsto \{0, 1\}$, depending on whether a road directly connects $l_i$ to $l_j$.
#
# Once a distance measure has been defined a similarity matrix can be constructed as a symmetric matrix $A$, where $A_{ij} \geq 0$ represents whether $i$ and $j$ are neighboring towns.
#
# The general idea of spectral clustering is to use a regular clustering algorithm (such as k-means) on relevant eigenvectors of a Laplacian matrix of $A$. The question to answer next is: how do we construct the Laplacian?
#
# Once we have a similarity matrix $A$ we can construct a degree matrix $D$, such that $D_{ii} = \sum_j A_{ij}$. This produces a diagonal matrix whose entry for a given town $i$ represents the number of towns $j$ that neighbor $i$.
#
# There are many different types of Laplacian, a few of which we shall delve deeper into later in this article. For now, a __simple Laplacian__ matrix can be constructed using the formula
#
# $$L = D - A$$
#
# Elements $L_{ii}$ represent the number of neighboring towns, and the off-diagonal elements $L_{ij}$ (for $i \neq j$) are $-1$ if there is a road between town $i$ and town $j$, and $0$ otherwise.
#
# This leaves us with our Laplacian matrix.
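# As a minimal sketch of the construction above (a hypothetical three-town map, where towns 0–1 and 1–2 are joined by roads but 0–2 is not):

```python
import numpy as np

# Similarity (adjacency) matrix for the three towns
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])

# Degree matrix: D_ii is the number of neighboring towns of town i
D = np.diag(A.sum(axis=1))

# Simple (unnormalised) Laplacian
L = D - A
print(L)
```

# Note that every row and column of `L` sums to zero, which is what guarantees the zero eigenvalue discussed in the eigenvalue section below.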
#
# #### Symmetric normalised Laplacian
#
# The symmetric normalised Laplacian is defined as
#
# $$L^{sym} = D^{-\frac{1}{2}}LD^{-\frac{1}{2}} = I - D^{-\frac{1}{2}}AD^{-\frac{1}{2}}$$
#
# where $L$ is the unnormalised Laplacian, $A$ is the adjacency matrix and $D$ is the degree matrix. The reciprocal square root of $D$, written $D^{-\frac{1}{2}}$, is a diagonal matrix whose entries are the reciprocals of the positive square roots of the diagonal entries of $D$.
#
# One can also write $L^{sym} = SS^*$, where $S$ is the matrix whose rows and columns correspond to vertices and edges respectively, such that each column corresponding to an edge $e = \{u, v\}$ has an entry $\frac{1}{\sqrt{d_u}}$ in the row corresponding to $u$, an entry $-\frac{1}{\sqrt{d_v}}$ in the row corresponding to $v$, and $0$ elsewhere.
#
# The eigenvalues of the normalised Laplacian are non-negative and satisfy $0 = \lambda_0 \leq \dots \leq \lambda_{n-1} \leq 2$. These eigenvalues, known as the spectrum, relate well to other graph invariants for general graphs.
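# A small check of this bound on a toy path graph (assuming no isolated vertices, so that $D^{-\frac{1}{2}}$ is well defined):

```python
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
D = np.diag(A.sum(axis=1))
L = D - A

# D^(-1/2): reciprocals of the square roots of the degrees
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
L_sym = D_inv_sqrt @ L @ D_inv_sqrt

vals = np.linalg.eigvalsh(L_sym)  # L_sym is symmetric, so eigvalsh applies
print(vals)  # ≈ [0, 1, 2] for this path graph — all within [0, 2]
```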
#
# #### Random Walk Normalised Laplacian
# The random walk normalised Laplacian is defined as
#
# $$L^{rw} = D^{-1}L$$
#
# In this case $D^{-1}$ is simply a diagonal matrix whose entries are the reciprocals of the positive diagonal entries of $D$.
#
# For isolated vertices a common choice is to set the corresponding element $L^{rw}_{i,i}$ to $0$. This leads to the nice property that the multiplicity of the eigenvalue $0$ is equal to the number of connected components in the graph.
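# The multiplicity property is easy to verify on a toy graph with two components (two disconnected road segments; an illustrative example, not tied to the map above):

```python
import numpy as np

# Two disconnected edges: towns {0, 1} and {2, 3} -> two connected components
A = np.array([[0, 1, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
D = np.diag(A.sum(axis=1))
L = D - A

# Random walk normalised Laplacian (no isolated vertices here, so D is invertible)
L_rw = np.diag(1.0 / np.diag(D)) @ L

vals = np.linalg.eigvals(L_rw)
n_zero = int(np.sum(np.isclose(vals, 0.0)))
print(n_zero)  # 2 — one zero eigenvalue per connected component
```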
#
#
#
# ### Eigenvectors and Eigenvalues
#
# Once the Laplacian matrix has been constructed, the eigenvalues can be calculated. Let $\{\lambda_0, \dots, \lambda_n\}$ be an ordered set of eigenvalues such that $\lambda_0 \leq \lambda_1 \leq \dots \leq \lambda_n$. As every row and column of the matrix $L$ sums to $0$, we always have $\lambda_0 = 0$, because the constant vector $\vec{v_0} = (1, 1, \dots, 1)$ satisfies $L\vec{v_0} = \vec{0}$.
#
# The second smallest eigenvalue $\lambda_1$ is called the __Fiedler__ value. The Fiedler value is only $> 0$ if $L$ represents a connected graph. The corresponding eigenvector, called the Fiedler vector, approximates the sparsest cut of the graph, and can therefore be used to bi-partition it.
#
# Clustering based on the Fiedler vector means we are clustering on a line in $\mathbb{R}$. This means we have a number of options available to define the clusters; a common choice is k-means.
#
# +
from sklearn.datasets import make_circles
from sklearn.neighbors import kneighbors_graph
import numpy as np
X = [[1,2], [2,3], [2,1], [30,33], [32,30], [31,31]]
A = kneighbors_graph(X, n_neighbors=2).toarray()
# create the graph laplacian
D = np.diag(A.sum(axis=1))
L = D-A
print("A:\n", A)
print("D:\n", D)
print("L:\n", L)
# find the eigenvalues and eigenvectors
vals, vecs = np.linalg.eig(L)
# sort eigenvalues (and their eigenvectors) in ascending order
idx = vals.argsort()
vals = vals[idx]
vecs = vecs[:,idx]
print("Sort order: ", idx)
print("Vals:\n", vals)
print("Vecs:\n", vecs)
# use the sign of the Fiedler vector to assign each point to a cluster
clusters = vecs[:,1] < 0
print("Fiedler Vector:\n", vecs[:,1])
print("Clusters:\n", clusters)
# -
Spectral Clustering.ipynb
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
os.environ['RESULTS_VERS'] = 'l33'
from astropy.table import Table
from astropy.io import ascii
import pandas
import astropy.units as u
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from astropy.modeling import models, fitting
import scipy.stats as stats
from scipy.interpolate import InterpolatedUnivariateSpline, UnivariateSpline
import apogee.tools.read as apread
plt.style.use('nature') # this is one of my plotting styles, so you'll need to comment this out if you're running the NB!
# %matplotlib inline
# %config InlineBackend.figure_format = 'retina'
# -
# Let's see if we can do something about the bug with the Battistini and Bitsch model application to the simulation...
#
# Loading in the data first, scraped from the BB paper:
# +
BBmodel = ascii.read("../sav/Combineddata.csv",data_start=2)
BBmodel.rename_column('\ufeffFe_H', 'Fe_H')
Comp = Table(BBmodel)
Comp
# -
# OK - these are the data from the appropriate figure. We only really care about the $\mathrm{[Fe/H]}$ vs water mass fraction
#
# Now we'll do the polynomial fit...
# +
# Now interpolate between points
model_poly = models.Polynomial1D(degree=3)
fitter_poly = fitting.LinearLSQFitter()
best_fit_poly = fitter_poly(model_poly, BBmodel['Fe_H'],BBmodel['H20'])
print(best_fit_poly)
plt.errorbar(BBmodel['Fe_H'],BBmodel['H20'], fmt='k.')
plt.plot(BBmodel['Fe_H'],best_fit_poly(BBmodel['Fe_H']),color='r', linewidth=3)
newxs = np.linspace(-3,1.,100)
plt.plot(newxs, best_fit_poly(newxs))
plt.ylim(-0.1,1.1)
# -
# This doesn't look ideal at the low $\mathrm{[Fe/H]}$ end, so we can try some other options here... what about a spline?
# +
s = UnivariateSpline(BBmodel['Fe_H'],BBmodel['H20'], k=4)
plt.errorbar(BBmodel['Fe_H'],BBmodel['H20'], fmt='k.')
plt.plot(BBmodel['Fe_H'],s(BBmodel['Fe_H']),color='r', linewidth=3)
newxs = np.linspace(-3,1.,100)
plt.plot(newxs, s(newxs))
plt.ylim(-0.1,1.1)
# -
# Again, not ideal, since the mass fraction can get above 1 (and below 0...). What actually happens in the data we have for stars?
#we'll grab the APOGEE data using jobovy's apogee code (this requires some set-up, ask Ted for details...)
allstar = apread.allStar(main=True, exclude_star_bad=True, exclude_star_warn=True)
# In the BB paper, they looked at C/O abundances in GALAH to see what happens with O as $\mathrm{[Fe/H]}$ increases (more O is locked into CO and CO$_2$ at increasing Fe). We can use this idea to figure out what might be the best course of action at the low Fe end as well.
# +
fig = plt.figure()
fig.set_size_inches(4,4)
#we'll clean up the APOGEE data a bit to only look at stars where the abundances are probably good (red giants)
bad = (allstar['LOGG_ERR'] < 0.1) & (allstar['LOGG'] < 4)& (allstar['LOGG'] > 1) & (allstar['C_FE'] != -9999.9902) & (allstar['O_FE'] != -9999.9902)
plt.scatter(allstar['FE_H'][bad], allstar['C_FE'][bad]-allstar['O_FE'][bad], s=0.1, lw=0., alpha=0.8, rasterized=True)
plt.xlim(-2.,0.7)
plt.ylim(-1,0.5)
def running_percentile(x, y, bins):
    '''quick function to get the running median/percentiles'''
    bin_inds = np.digitize(x, bins)
    values = np.ones((len(bins), 3))*np.nan
    for i in np.unique(bin_inds):
        if i == 0 or i == 15:
            continue
        in_bin = bin_inds == i
        if sum(in_bin) < 10:
            continue
        values[i] = np.percentile(y[in_bin], [16, 50, 84])
    bin_centers = (bins[1:]+bins[:-1])/2.
    return values, bin_centers
bins = np.linspace(-1.5,0.5,15)
medians, bin_centers = running_percentile(allstar['FE_H'][bad], allstar['C_FE'][bad]-allstar['O_FE'][bad], bins)
plt.plot(bins-((bins[1]-bins[0])/2.), medians[:,1], c='Black')
plt.fill_between(bins-((bins[1]-bins[0])/2.), medians[:,0], medians[:,2], color='Black', alpha=0.3)
plt.axvline(-0.4, color='Black', linestyle='dashed')
plt.axvline(0.4, color='Black', linestyle='dashed')
plt.text(-1.5,0.3, 'BB20 [Fe/H] limits')
plt.xlabel(r'$\mathrm{[Fe/H]}$')
plt.ylabel(r'$\mathrm{[C/O]}$')
plt.savefig('../plots/CO_FEH_APOGEEDR16.pdf')
# -
# the APOGEE behaviour matches GALAH quite well in the BB limits, which is reassuring, but the trend changes quite significantly at low $\mathrm{[Fe/H]}$. Since the C/O drops right off, we might assume that the Oxygen available in forming the ISO's at low metallicities is much higher?
#
# Since the accreted dwarfs probably dominate in terms of mass at this metallicity regime, maybe this means that the ISO's accreted in dwarfs would be genuinely disentangle-able from the MW ones...
#
# Just to illustrate that there is a lot of accreted debris down there at low metallicity - you can see that this stands out in the Tinsley diagram as a larger scatter in $\mathrm{[Mg/Fe]}$ (alpha elements) at low $\mathrm{[Fe/H]}$.
plt.scatter(allstar['FE_H'], allstar['MG_FE'], s=0.1, lw=0., color='Black')
plt.xlim(-2,0.7)
plt.ylim(-0.2,0.5)
# In that case, maybe the polynomial is ok. I think the best we can do is to just set the function to be the upper and lower limit:
# +
# Now interpolate between points
def piecewise_poly(x):
    '''this function allows us to set the extrapolation to the limits of the data in x'''
    model_poly = models.Polynomial1D(degree=3)
    fitter_poly = fitting.LinearLSQFitter()
    best_fit_poly = fitter_poly(model_poly, BBmodel['Fe_H'], BBmodel['H20'])
    minx, maxx = np.min(BBmodel['Fe_H']), np.max(BBmodel['Fe_H'])
    minxy = best_fit_poly(minx)
    maxxy = best_fit_poly(maxx)
    if not hasattr(x, '__iter__'):
        if x < minx:
            return minxy
        elif x > maxx:
            return maxxy
        else:
            return best_fit_poly(x)
    else:
        out = np.zeros(len(x))
        out[x < minx] = minxy
        out[x > maxx] = maxxy
        out[(x >= minx) & (x <= maxx)] = best_fit_poly(x[(x >= minx) & (x <= maxx)])
        return out
plt.errorbar(BBmodel['Fe_H'],BBmodel['H20'], fmt='k.')
plt.plot(BBmodel['Fe_H'],piecewise_poly(BBmodel['Fe_H']),color='r', linewidth=3)
newxs = np.linspace(-3,1.,100)
plt.plot(newxs, piecewise_poly(newxs))
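# The clamping above can be written more compactly by clipping the inputs to the fitted range before evaluating: a polynomial held at its boundary value is just the polynomial of the clipped argument. A minimal sketch with a toy polynomial (not the astropy fit above):

```python
import numpy as np

def clamped_eval(poly, x, xmin, xmax):
    """Evaluate poly(x), holding the value constant outside [xmin, xmax]."""
    return poly(np.clip(x, xmin, xmax))

p = np.poly1d([1, 0, 0])  # toy polynomial y = x^2, "fitted" on [0, 2]
print(clamped_eval(p, 5, 0, 2))  # held at the boundary value p(2) = 4
print(clamped_eval(p, np.array([-1., 1., 3.]), 0, 2))
```

# This behaves identically to `piecewise_poly` for scalars and arrays alike, without the branching.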
# +
files = ['GalaxyA_FOF507.dat', 'EAGLE_MW_L0025N0376_REFERENCE_ApogeeRun_30kpc_working.dat', 'GalaxyC_FOF526.dat']
sims = [ascii.read('../sav/%s' % file) for file in files]
# +
fig, ax = plt.subplots(1,3, sharex=True, sharey=True)
fig.set_size_inches(10,2)
for i in range(len(sims)):
ax[i].scatter(sims[i]['fe_h'], sims[i]['mg_h']-sims[i]['fe_h'], s=0.01, color='Black')
ax[i].set_xlabel(r'$\mathrm{[Fe/H]}$')
plt.xlim(-2.5,1.)
plt.ylim(-0.3,0.5)
ax[0].set_ylabel(r'$\mathrm{[Mg/Fe]}$')
# -
fig, ax = plt.subplots(1,3, sharex=True, sharey=True)
fig.set_size_inches(6,2)
for i in range(len(sims)):
ax[i].scatter(sims[i]['x_g'], sims[i]['z_g'], s=0.1, lw=0., color='Black')
ax[i].set_xlabel(r'$x\ \mathrm{[kpc]}$')
ax[0].set_ylabel(r'$z\ \mathrm{[kpc]}$')
# Let's re-make Chris's plot with the fixed limits (note I also do a thing to 'spread out' the probability density where we extrapolate... not sure if this is useful though...)
# +
colors = ['#4477AA', '#BB5566', '#DDAA33']
labels = ['late', 'bi-modal', 'early']
frac_water_rich = []
mean_age = []
for i in range(len(sims)):
water_mass_frac = piecewise_poly(sims[i]['fe_h'])
bins = np.linspace(np.min(water_mass_frac),np.max(water_mass_frac),20)
hist, bins = np.histogram(water_mass_frac, bins=bins, density=True)
old_end = np.copy(bins[-2])
#figure out the correction to get the final bin probability density right...
end_correct = (bins[1]-bins[0])/(1-old_end)
bins[-2] = 1.
hist[-1] *= end_correct
plt.step(bins[:-1], hist, color=colors[i], lw=2., label=labels[i])
frac_water_rich.append(sum(water_mass_frac > 0.4)/len(water_mass_frac))
mean_age.append(np.mean(sims[i]['age']))
plt.xlim(0.,1.)
plt.ylim(0.,3)
plt.legend()
plt.xlabel(r'$\mathrm{H_2O\ mass\ fraction}$')
plt.ylabel(r'$p(\mathrm{H_2O\ mass\ fraction})_i$')
#mark out the regions where we extrapolate
plt.gca().axvspan(bins[-3], 1., alpha=0.1, color='Black')
plt.gca().axvspan(0, bins[0], alpha=0.1, color='Black')
# +
#just for fun, how does it look with the spline?
frac_water_rich = []
mean_age = []
for i in range(len(sims)):
water_mass_frac = s(sims[i]['fe_h'])
bins = np.linspace(-0.1,1.1,50)
plt.hist(water_mass_frac, histtype='step', density=True, bins=bins, lw=2., color=colors[i])
frac_water_rich.append(sum(water_mass_frac > 0.4)/len(water_mass_frac))
mean_age.append(np.mean(sims[i]['age']))
#mark out the regions where we extrapolate
plt.gca().axvspan(np.max(BBmodel['H20']), 1., alpha=0.1, color='Black')
plt.gca().axvspan(0, np.min(BBmodel['H20']), alpha=0.1, color='Black')
plt.xlim(0.,1.)
plt.xlabel(r'$\mathrm{H_2O\ mass\ fraction}$')
plt.ylabel(r'$p(\mathrm{H_2O\ mass\ fraction})_i$')
# -
# If the behaviour is anything like the spline fit (i.e. the water mass fraction goes much higher at low Fe), then we actually still see a peak at ~0.5 for the EAGLE galaxies...
plt.plot(mean_age, frac_water_rich)
#plot the MDF of each galaxy...
for i in range(len(sims)):
plt.hist(sims[i]['fe_h'], range=[-2,1.], histtype='step', density=True, bins=30)
|
py/iso_abundances.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Get Landsat Time Series (lsts) Sample Data
#
# In this notebook we download the *Forest in Colorado, USA (P035-R032)* dataset from [Chris Holden's landsat_stack repository](https://github.com/ceholden/landsat_stack#landsat_stack) and save part of the data as single layer TIFF files.
#
# Specifically, we run through the following steps:
#
# * We download and extract the data.
# * We save the RED, NIR and SWIR1 bands and the FMASK band from the years 2008 to 2013 as single layer TIFF files.
# * We delete the stack.
# * We create a dataframe with all *.tif* files.
#
# The following is a list of all Landsat / Fmask bands we download.
# The ones that are not struck out are the ones we keep.
#
# * <del>Band 1 SR (SR * 10000)</del>
# * <del>Band 2 SR (SR * 10000)</del>
# * Band 3 SR (SR * 10000)
# * Band 4 SR (SR * 10000)
# * Band 5 SR (SR * 10000)
# * <del>Band 7 SR (SR * 10000)</del>
# * <del>Band 6 Thermal Brightness (C * 100)</del>
# * Fmask
# * 0 - clear land
# * 1 - clear water
# * 2 - cloud
# * 3 - snow
# * 4 - shadow
# * 255 - NoData
#
# The dataset is a sample dataset included under the name *lsts* (Landsat time series) in the ``eotools-dataset`` package.
#
# First, download and extract the data.
# ! wget http://ftp-earth.bu.edu/public/ceholden/landsat_stacks/p035r032.tar.bz2
# !tar xf p035r032.tar.bz2
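# The Fmask class codes listed above can be kept in a small lookup table, e.g. (a convenience sketch, not part of the original download workflow):

```python
FMASK_CLASSES = {
    0: "clear land",
    1: "clear water",
    2: "cloud",
    3: "snow",
    4: "shadow",
    255: "NoData",
}

def fmask_label(code):
    """Return the human-readable class name for an Fmask pixel value."""
    return FMASK_CLASSES.get(code, "unknown")
```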
# Imports and the helper functions.
# +
import os
import pandas as pd
from pathlib import Path
import shutil
import subprocess
def save_as_single_layer_file(src_dir, overwrite=False, remove_stack=True):
keep = [2, 3, 4, 7]
band_names = ["b1", "b2", "b3", "b4", "b5", "b7", "b6", "fmask"]
src = list(src_dir.glob("*gtif"))[0]
for bindex, bname in enumerate(band_names):
if bindex not in keep:
continue
        dst_dir = src_dir.parent.parent / src_dir.stem
dst_dir.mkdir(exist_ok=True)
dst = dst_dir / f"{src_dir.stem.split('_')[0]}_{band_names[bindex]}.tif"
if (not dst.exists() or overwrite):
ot = "Byte" if bname == "fmask" else "Int16"
exit_code = subprocess.check_call(
f"gdal_translate -ot {ot} -b {bindex+1} -co COMPRESS=DEFLATE {str(src)} {str(dst)}",
shell=True)
# -
# Save the selected bands and fmask of the selected years as single layer files.
# +
bdir_scenes_single = Path("./p035r032")
bdir_scenes = Path("./p035r032/images")
scene_dirs = list(bdir_scenes.glob("L*"))
counter = 0
for i, sdir in enumerate(scene_dirs):
if int(sdir.stem[9:13]) < 2008:
continue
counter += 1
print(f"{counter} / {len(scene_dirs)} - {sdir}")
save_as_single_layer_file(src_dir=sdir, overwrite=False)
# -
# Delete the *image* directory which contained the complete downloaded data.
shutil.rmtree(bdir_scenes)
# Let's derive the paths and some metadata derived from them, and put the info in a dataframe.
layers_paths = list(Path(bdir_scenes_single).rglob("*.tif"))
layers_df = pd.Series([p.stem for p in layers_paths]).str.split("_", expand=True) \
.rename({0: "sceneid", 1:"band"}, axis=1)
layers_df["date"] = pd.to_datetime(layers_df.sceneid.str[9:16], format="%Y%j")
layers_df["path"] = layers_paths
layers_df = layers_df.sort_values(["date", "band"])
layers_df = layers_df.reset_index(drop=True)
layers_df.head(10)
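# A note on the date parsing above: characters 9:16 of a Landsat scene ID hold the acquisition date as year plus day-of-year, which the `%Y%j` format decodes. For example (with an illustrative, made-up scene ID):

```python
import pandas as pd

sceneid = "LT50350322009162XXX00"  # hypothetical ID; chars 9:16 = "2009162"
date = pd.to_datetime(sceneid[9:16], format="%Y%j")
print(date)  # day 162 of 2009, i.e. 2009-06-11
```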
# Reformat the data such that we can check if some of the scenes have missing bands.
counts_bands_per_sceneid = layers_df[["sceneid", "band", "path"]] \
.pivot_table(index="sceneid", columns="band", aggfunc="count")
display(counts_bands_per_sceneid.head(2))
display(counts_bands_per_sceneid.tail(2))
counts_bands_per_sceneid.apply("sum", axis=0)
# Which is not the case (;
#
# **The End**
|
examples/sampledata/create_lsts_sample_data.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# For Caltech ML homework1
# In this problem, you will create your own target function f and data set D to see how the Perceptron Learning Algorithm works.
# Take d = 2 so you can visualize the problem, and assume X = [−1,1]×[−1,1] with uniform probability of picking each x ∈X.
# In each run, choose a random line in the plane as your target function f
# (do this by taking two random, uniformly distributed points in [−1,1]×[−1,1]
# and taking the line passing through them), where one side of the line maps to +1
# and the other maps to −1. Choose the inputs xn of the data set as random points (uniformly in X),
# and evaluate the target function on each xn to get the corresponding output yn.
# Now, in each run, use the Perceptron Learning Algorithm to find g.
# Start the PLA with the weight vector w being all zeros (consider sign(0) = 0, so all points are initially misclassified),
# and at each iteration have the algorithm choose a point randomly from the set of misclassified points.
# -
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from random import randint
# %matplotlib inline
# +
# Choose a target function
# generate two points to get the target function
target_points = np.random.uniform(low=-1, high=1, size=(2, 2))
coefficients = np.polyfit([target_points[0][0], target_points[1][0]], [target_points[0][1], target_points[1][1]], 1)
polynomial = np.poly1d(coefficients)
x = np.linspace(-1, 1)
f_x = polynomial(x)
# +
# Generate training sample
N = 100 # Sample size
# generate N uniformly distributed data points of dimension 2
data_points = np.random.uniform(low=-1, high=1, size=(N, 2))
y = np.sign([point[1] - polynomial(point[0]) for point in data_points])
df = pd.DataFrame(data=data_points, columns=['x1', 'x2'])
df['y'] = y
# +
# PLA
# initial weight vector w
# so the initial hypothesis is sign(0) = 0
# all the data points are misclassified
w = [float(0), float(0), float(0)]
# record the w in every iteration
w_set = []
# indexes of misclassified points
iteration = 0
selected_misclassified = []
while iteration < 1000:
# get misclassified points
misclassified = []
for index, row in df.iterrows():
hypothesis = np.sign(w[0] + w[1] * row['x1'] + w[2] * row['x2'])
if hypothesis == row['y']:
pass
else:
misclassified.append(index)
# break if there is no misclassified point
if not misclassified:
break
# select a misclassified point
mis_index = misclassified[randint(0, len(misclassified) - 1)]
mis_row = df.iloc[[mis_index]]
selected_misclassified.append(mis_index)
w[0] += mis_row.at[mis_index, 'y']
w[1] += mis_row.at[mis_index, 'y'] * mis_row.at[mis_index, 'x1']
w[2] += mis_row.at[mis_index, 'y'] * mis_row.at[mis_index, 'x2']
w_set.append([w[0], w[1], w[2]])
iteration += 1
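# For reference, the loop above can also be written in a vectorized form (a sketch, not a drop-in replacement for the dataframe version):

```python
import numpy as np

def pla(X, y, max_iter=1000, seed=None):
    """Vectorized Perceptron Learning Algorithm sketch.

    X: (N, 2) inputs, y: labels in {-1, +1}. Returns weights [w0, w1, w2].
    """
    rng = np.random.default_rng(seed)
    Xa = np.hstack([np.ones((len(X), 1)), X])  # prepend a bias column
    w = np.zeros(3)
    for _ in range(max_iter):
        # sign(0) = 0 never equals a +/-1 label, so all points start misclassified
        mis = np.flatnonzero(np.sign(Xa @ w) != y)
        if len(mis) == 0:
            break
        i = rng.choice(mis)   # pick a random misclassified point
        w += y[i] * Xa[i]     # PLA update
    return w
```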
# +
# Visualization
# visualization setup
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 1, 1])
axes.set_xlim(-1, 1)
axes.set_ylim(-1, 1)
# plot target function
axes.plot(x, f_x)
axes.plot(target_points[0][0], target_points[0][1], target_points[1][0], target_points[1][1], marker='o', color='blue')
# plot sample
for index, row in df.iterrows():
if row['y'] > 0:
axes.plot(row['x1'], row['x2'], marker='o', color='green')
elif row['y'] < 0:
axes.plot(row['x1'], row['x2'], marker='o', color='red')
else:
axes.plot(row['x1'], row['x2'], marker='o', color='grey')
# plot all hypothesis functions
for ws in w_set:
hypo_poly = np.poly1d([-ws[1] / ws[2], -ws[0] / ws[2]])
hypo = hypo_poly(x)
if ws == w_set[-1]:
axes.plot(x, hypo, color='red')
else:
axes.plot(x, hypo, color='black')
for index in selected_misclassified:
axes.plot(df.at[index, 'x1'], df.at[index, 'x2'], marker='x')
# -
|
PLA/single run pla.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import seaborn as sns
from glob import glob
# # EC2 experiments
# ## T2-medium instance
# ### Caching with uncompressed files
# #### 1 file 1 repetition
def fig_cached(cache, uncached):
    def task_runtimes(pattern, cached):
        # load all benchmark CSVs matching the glob pattern and pair the
        # *_start / *_end rows to get per-task runtimes in seconds
        df = pd.concat([pd.read_csv(f) for f in glob(pattern)])
        df_start = df[df["action"].str.contains("start")].reset_index()
        df_end = df[df["action"].str.contains("end")].reset_index()
        out = pd.DataFrame({"task": df_start["action"].apply(lambda x: x.split("_")[0])})
        out["task_runtime"] = (df_end["timestamp"] - df_start["timestamp"]) * 10**-9
        out["cached"] = cached
        return out
    df = pd.concat([task_runtimes(cache, True), task_runtimes(uncached, False)])
    ax = sns.barplot(x="task", y="task_runtime", data=df, hue="cached")
fig_cached("../results/ec2-t2medium/conditions-cache/rep-*/benchmark_1i_1f_cache*_sequential*",
"../results/ec2-t2medium/conditions-cache/rep-*/benchmark_1i_1f_nocache*_sequential*")
# #### 1 file 5 repetitions
fig_cached("../results/ec2-t2medium/conditions-cache/rep-*/benchmark_5i_1f_cache*_sequential*",
"../results/ec2-t2medium/conditions-cache/rep-*/benchmark_5i_1f_nocache*_sequential*")
# #### 1 file 10 repetitions
fig_cached("../results/ec2-t2medium/conditions-cache/rep-*/benchmark_10i_1f_cache*_sequential*",
"../results/ec2-t2medium/conditions-cache/rep-*/benchmark_10i_1f_nocache*_sequential*")
# ### I/O Benchmarks with multiple files
def fig_io(benchmarks):
df = pd.concat([pd.read_csv(f) for f in glob(benchmarks)])
df_start = df[df["action"].str.contains("start")].reset_index()
df_end = df[df["action"].str.contains("end")].reset_index()
df_start["task"] = df_start["action"].apply(lambda x: x.split("_")[0])
df_start["start"] = df_start["timestamp"]
df_end["end"] = df_end["timestamp"]
df = pd.concat([df_start, df_end], axis=1)
df["task_runtime"] = (df["end"] - df["start"]) * 10**-9
df = df[["task", "task_runtime"]]
ax = sns.barplot(x="task", y="task_runtime", data=df)
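# The `* 10**-9` factor above converts nanosecond timestamps to seconds; equivalently, pandas can handle the unit conversion directly (a sketch with made-up timestamps):

```python
import pandas as pd

start = pd.Series([1_000_000_000, 3_500_000_000])  # ns since epoch (illustrative)
end = pd.Series([2_000_000_000, 6_000_000_000])
runtime_s = (pd.to_datetime(end, unit="ns") - pd.to_datetime(start, unit="ns")).dt.total_seconds()
print(runtime_s.tolist())  # [1.0, 2.5]
```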
# #### 1 file 5 iterations
fig_io("../results/ec2-t2medium/conditions-dask/rep-*/benchmark_5i_1f_nocache*")
# #### 5 files 5 iterations
fig_io("../results/ec2-t2medium/conditions-dask/rep-*/benchmark_5i_5f_nocache*")
# #### 10 files 5 iterations
fig_io("../results/ec2-t2medium/conditions-dask/rep-*/benchmark_5i_10f_nocache*")
|
notebook/ec2-figures.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Visualizing Frame Dragging in Kerr Spacetime
#
# ### Importing required modules
# +
import numpy as np
from einsteinpy.geodesic import Nulllike
from einsteinpy.plotting import StaticGeodesicPlotter
# -
# ### Setting up the system
# - Initial position & momentum of the test particle
# - Spin of the Kerr Black Hole
# - Other solver parameters
#
# Note that we are working in _M_-units ($G = c = M = 1$). Also, setting the momentum's $\phi$-component to negative implies an initial retrograde trajectory.
position = [2.5, np.pi / 2, 0.]
momentum = [0., 0., -2.]
a = 0.99
steps = 7440 # As close as we can get before the integration becomes highly unstable
delta = 0.0005
omega = 0.01
suppress_warnings = True
# Here, `omega`, the coupling between the Hamiltonian flows, needs to be decreased in order to reduce numerical errors and increase integration stability. Reference: https://arxiv.org/abs/2010.02237.
#
# Also, `suppress_warnings` has been set to `True`, as the error would grow exponentially, very close to the black hole.
# ### Calculating the geodesic
geod = Nulllike(
metric="Kerr",
metric_params=(a,),
position=position,
momentum=momentum,
steps=steps,
delta=delta,
return_cartesian=True,
omega=omega,
suppress_warnings=suppress_warnings
)
# ### Plotting the geodesic in 2D
sgpl = StaticGeodesicPlotter(bh_colors=("red", "blue"))
sgpl.plot2D(geod, coordinates=(1, 2), figsize=(10, 10), color="indigo") # Plot X vs Y
sgpl.show()
# As can be seen in the plot above, the photon's trajectory is reversed due to frame-dragging effects, so that it moves in the direction of the black hole's spin, before eventually falling into the black hole.
#
# Also, the last few steps seem to have a larger `delta`, but that is simply because of huge numerical errors, as the particle has crossed the Event Horizon.
|
docs/source/examples/Visualizing Frame Dragging in Kerr Spacetime.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import azureml.core
from azureml.core import Workspace
ws = Workspace.from_config()
# Get the default datastore
default_ds = ws.get_default_datastore()
default_ds.upload_files(files=['./Data/borrower.csv', './Data/loan.csv'], # Upload the credit risk csv files
target_path='creditrisk-data/', # Put it in a folder path in the datastore
overwrite=True, # Replace existing files of the same name
show_progress=True)
#Create a Tabular dataset from the path on the datastore
from azureml.core import Dataset
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'creditrisk-data/borrower.csv'))
tab_data_set = tab_data_set.register(workspace=ws,
name='BorrowerData',
description='Borrower Data',
tags = {'format':'CSV'},
create_new_version=True)
#Create a Tabular dataset from the path on the datastore
tab_data_set = Dataset.Tabular.from_delimited_files(path=(default_ds, 'creditrisk-data/loan.csv'))
tab_data_set = tab_data_set.register(workspace=ws,
name='LoanData',
description='Loans Data',
tags = {'format':'CSV'},
create_new_version=True)
# +
from azureml.core import Workspace, Dataset, Datastore, ScriptRunConfig, Experiment
from azureml.data.data_reference import DataReference
import os
import azureml.dataprep as dprep
import pandas as pd
import numpy as np
import azureml.core
from azureml.core import Workspace
ws = Workspace.from_config()
borrowerData = Dataset.get_by_name(ws, name='BorrowerData')
loanData = Dataset.get_by_name(ws, name='LoanData')
# +
from azureml.core import Datastore
from azureml.core.compute import AmlCompute, ComputeTarget
datastore = ws.get_default_datastore()
# Create a compute cluster
compute_name = 'cpu-cluster'
if compute_name not in ws.compute_targets:
print('creating a new compute target...')
provisioning_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS2_V2',
min_nodes=0,
max_nodes=1)
compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
compute_target.wait_for_completion(
show_output=True, min_node_count=None, timeout_in_minutes=20)
# Show the result
print(compute_target.get_status().serialize())
compute_target = ws.compute_targets[compute_name]
# +
from azureml.core.runconfig import RunConfiguration
from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
# Create a Python environment for the experiment
creditrisk_env = Environment("creditrisk-pipeline-env")
creditrisk_env.python.user_managed_dependencies = False # Let Azure ML manage dependencies
creditrisk_env.docker.enabled = True # Use a docker container
# Create a set of package dependencies
creditrisk_packages = CondaDependencies.create(conda_packages=['scikit-learn','joblib','pandas','numpy','pip'],
pip_packages=['azureml-defaults','azureml-dataprep[pandas]'])
# Add the dependencies to the environment
creditrisk_env.python.conda_dependencies = creditrisk_packages
# Register the environment
creditrisk_env.register(workspace=ws)
registered_env = Environment.get(ws, 'creditrisk-pipeline-env')
# Create a new runconfig object for the pipeline
aml_run_config = RunConfiguration()
# Use the compute you created above.
aml_run_config.target = compute_target
# Assign the environment to the run configuration
aml_run_config.environment = registered_env
print ("Run configuration created.")
# +
# %%writefile PrepareData.py
from azureml.core import Run
import pandas as pd
import numpy as np
import argparse
import os
parser = argparse.ArgumentParser()
parser.add_argument('--prepared_data', dest='prepared_data', required=True)
args = parser.parse_args()
borrowerData = Run.get_context().input_datasets['BorrowerData']
loanData = Run.get_context().input_datasets['LoanData']
df_borrower = borrowerData.to_pandas_dataframe()
df_loan = loanData.to_pandas_dataframe()
# Join data and do some transformations
df_data = df_borrower.merge(df_loan,on='memberId',how='inner')
df_data.shape
df_data['homeOwnership'] = df_data['homeOwnership'].replace('nan', np.nan).fillna(0)
df_data['isJointApplication'] = df_data['isJointApplication'].replace('nan', np.nan).fillna(0)
drop_cols = ['memberId', 'loanId', 'date','grade','residentialState']
df_data = df_data.drop(drop_cols, axis=1)
df_data['loanStatus'] = np.where(df_data['loanStatus'] == 'Default', 1, 0) # change label column to 0/1
df_data.to_csv(os.path.join(args.prepared_data,"prepared_data.csv"),index=False)
print(f"Wrote prepped data to {args.prepared_data}/prepared_data.csv")
# +
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.steps import PythonScriptStep
prepared_data = OutputFileDatasetConfig(name="prepared_data")
dataprep_step = PythonScriptStep(
name="PrepareData",
script_name="PrepareData.py",
compute_target=compute_target,
runconfig=aml_run_config,
arguments=["--prepared_data", prepared_data],
inputs=[borrowerData.as_named_input('BorrowerData'),loanData.as_named_input('LoanData')],
allow_reuse=True
)
# +
# prepared_data = prepared_data_path.read_delimited_files()
# +
# %%writefile TrainTestDataSplit.py
from azureml.core import Run
import pandas as pd
import numpy as np
import argparse
import os
parser = argparse.ArgumentParser()
parser.add_argument('--prepared_data', dest='prepared_data', required=True)
parser.add_argument('--train_data', dest='train_data', required=True)
parser.add_argument('--test_data', dest='test_data', required=True)
args = parser.parse_args()
df_data = pd.read_csv(args.prepared_data + '/prepared_data.csv')
df_train = df_data.sample(frac=0.8, random_state=200)  # random_state is a seed value
df_test = df_data.drop(df_train.index)  # the remaining 20% becomes the test set
df_train.to_csv(os.path.join(args.train_data, "train_data.csv"), index=False)
df_test.to_csv(os.path.join(args.test_data, "test_data.csv"), index=False)
print(f"Wrote train data to {args.train_data}/train_data.csv")
print(f"Wrote test data to {args.test_data}/test_data.csv")
# +
# test train split the data
train_data = OutputFileDatasetConfig(name="train_data")
test_data = OutputFileDatasetConfig(name="test_data")
test_train_step = PythonScriptStep(name = "TestTrainDataSplit",
script_name ="TrainTestDataSplit.py",
arguments = ["--prepared_data", prepared_data.as_input(),
"--train_data", train_data,
"--test_data", test_data],
outputs = [train_data,test_data],
compute_target = compute_target,
runconfig = aml_run_config,
allow_reuse = True
)
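# As an aside, the `sample`/`drop` split used in the script above is equivalent to scikit-learn's `train_test_split` (a hypothetical sketch, not part of the pipeline):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.DataFrame({"x": range(10), "loanStatus": [0, 1] * 5})
df_train, df_test = train_test_split(df, train_size=0.8, random_state=200)
print(len(df_train), len(df_test))  # 8 2
```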
# +
training_data = train_data.read_delimited_files()
training_data
testing_data = test_data.read_delimited_files()
testing_data
# +
# %%writefile TrainModel.py
from azureml.core import Run
from azureml.core.model import Model
import joblib
import pandas as pd
import numpy as np
import argparse
import os
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
def creditrisk_onehot_encoder(df_data):
catColumns = df_data.select_dtypes(['object']).columns
df_data[catColumns] = df_data[catColumns].fillna(value='Unknown')
df_data = df_data.fillna(df_data.mean())
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_cols= pd.DataFrame(OH_encoder.fit_transform(df_data[catColumns]),columns = list(OH_encoder.get_feature_names(catColumns)))
# Remove categorical columns (will replace with one-hot encoding)
numeric_cols = df_data.drop(catColumns, axis=1)
# Add one-hot encoded columns to numerical features
df_result = pd.concat([numeric_cols, OH_cols], axis=1)
# impute missing numeric values with mean
fill_NaN = SimpleImputer(missing_values=np.nan, strategy='mean')
imputed_df = pd.DataFrame(fill_NaN.fit_transform(df_result))
imputed_df.columns = df_result.columns
imputed_df.index = df_result.index
df_result = imputed_df
return(df_result)
# Get the experiment run context
run = Run.get_context()
parser = argparse.ArgumentParser()
parser.add_argument('--train_data', dest='train_data', required=True)
parser.add_argument('--test_data', dest='test_data', required=True)
parser.add_argument('--metrics_data', dest='metrics_data', required=True)
parser.add_argument('--model_data', dest='model_data', required=True)
args = parser.parse_args()
df_train = pd.read_csv(args.train_data + '/train_data.csv')
df_test = pd.read_csv(args.test_data + '/test_data.csv')
df_train = creditrisk_onehot_encoder(df_train)
df_test = creditrisk_onehot_encoder(df_test)
cols = [col for col in df_train.columns if col not in ["loanStatus"]]
clf = LogisticRegression()
clf.fit(df_train[cols].values, df_train["loanStatus"].values)
print('predicting ...')
y_hat = clf.predict(df_test[cols].astype(int).values)
acc = np.average(y_hat == df_test["loanStatus"].values)
print('Accuracy is', acc)
print("save model")
os.makedirs('models', exist_ok=True)
joblib.dump(value=clf, filename= 'models/creditrisk_model.pkl')
model = Model.register(model_path = 'models/creditrisk_model.pkl',
model_name = 'creditrisk_model',
description = 'creditrisk model',
workspace = run.experiment.workspace,
                       properties={'Accuracy': float(acc)})
modeldata = []
modeldata.append(('models/creditrisk_model.pkl','creditrisk_model'))
df_model = pd.DataFrame(modeldata, columns=('modelfile', 'model_name'))
metricsdata = []
metricsdata.append(('Accuracy',acc))
df_metrics = pd.DataFrame(metricsdata, columns=('Metric', 'Value'))
df_model.to_csv(os.path.join(args.model_data,"model_data.csv"),index=False)
df_metrics.to_csv(os.path.join(args.metrics_data,"metrics_data.csv"),index=False)
print(f"Wrote model data to {args.model_data}/model_data.csv")
print(f"Wrote metrics data to {args.metrics_data}/metrics_data.csv")
# +
# train the model
model_data = OutputFileDatasetConfig(name="model_data")
metrics_data = OutputFileDatasetConfig(name="metrics_data")
train_step = PythonScriptStep(name = "TrainModel",
script_name ="TrainModel.py",
arguments = ["--train_data", train_data.as_input(),
"--test_data", test_data.as_input(),
"--model_data", model_data,
"--metrics_data", metrics_data],
outputs = [model_data,metrics_data],
compute_target = compute_target,
runconfig = aml_run_config,
allow_reuse = True
)
# +
# %%writefile BatchInference.py
from azureml.core import Run
from azureml.core.model import Model
import joblib
import pandas as pd
import numpy as np
import argparse
import os
from sklearn.preprocessing import OneHotEncoder
from sklearn.impute import SimpleImputer
def creditrisk_onehot_encoder(df_data):
catColumns = df_data.select_dtypes(['object']).columns
df_data[catColumns] = df_data[catColumns].fillna(value='Unknown')
df_data = df_data.fillna(df_data.mean())
OH_encoder = OneHotEncoder(handle_unknown='ignore', sparse=False)
OH_cols= pd.DataFrame(OH_encoder.fit_transform(df_data[catColumns]),columns = list(OH_encoder.get_feature_names(catColumns)))
# Remove categorical columns (will replace with one-hot encoding)
numeric_cols = df_data.drop(catColumns, axis=1)
# Add one-hot encoded columns to numerical features
df_result = pd.concat([numeric_cols, OH_cols], axis=1)
# impute missing numeric values with mean
fill_NaN = SimpleImputer(missing_values=np.nan, strategy='mean')
imputed_df = pd.DataFrame(fill_NaN.fit_transform(df_result))
imputed_df.columns = df_result.columns
imputed_df.index = df_result.index
df_result = imputed_df
return(df_result)
parser = argparse.ArgumentParser()
parser.add_argument('--test_data', dest="test_data", type=str, required=True)
parser.add_argument('--model_data', dest="model_data", type=str, required=True)
parser.add_argument('--batchinfer_data', dest='batchinfer_data', required=True)
args = parser.parse_args()
# Get the experiment run context
run = Run.get_context()
df_model = pd.read_csv(args.model_data + '/model_data.csv')
# model_path = Model.get_model_path(model_name = 'best_model_data')
model_name = df_model['model_name'][0]
model_path = Model.get_model_path(model_name=model_name, _workspace=run.experiment.workspace)
model = joblib.load(model_path)
df_test = pd.read_csv(args.test_data + '/test_data.csv')
df_test = creditrisk_onehot_encoder(df_test)
x_test = df_test.drop(['loanStatus'], axis=1)
y_predict = model.predict(x_test)
df_test['Prediction'] = y_predict
df_test.to_csv(os.path.join(args.batchinfer_data,"batchinfer_data.csv"),index=False)
print(f"Wrote prediction data with to {args.batchinfer_data}/batchinfer_data.csv")
# +
from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.steps import PythonScriptStep
batchinfer_data = OutputFileDatasetConfig(name="batchinfer_data").register_on_complete(name="CreditRiskBatchInferenceData",description = 'Batch Inference Data Output')
batchinfer_step = PythonScriptStep(
name="RunBatchInference",
script_name="BatchInference.py",
compute_target=compute_target,
runconfig=aml_run_config,
arguments=["--test_data", test_data.as_input(),"--model_data", model_data.as_input(),"--batchinfer_data", batchinfer_data],
outputs = [batchinfer_data],
allow_reuse=True
)
# +
from azureml.pipeline.core import Pipeline
from azureml.core import Experiment
pipeline = Pipeline(ws, [dataprep_step, test_train_step, train_step,batchinfer_step])
experiment = Experiment(workspace=ws, name='CreditRiskPipeline')
run = experiment.submit(pipeline, show_output=True)
run.wait_for_completion()
|
AMLNotebooks/01_Create_CreditRisk_AML_Pipeline.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Principal Component Analysis - Step by Step
# * Standardize the data (for each of the m observations)
# * Obtain the eigenvectors and eigenvalues from the covariance matrix or the correlation matrix, or even via singular value decomposition.
# * Sort the eigenvalues in descending order and keep the *p* eigenvectors corresponding to the *p* largest eigenvalues, thereby reducing the number of variables in the dataset (p < m)
# * Build the projection matrix W from the p eigenvectors
# * Transform the original dataset X through W to obtain the data in the p-dimensional subspace, which will be Y
import pandas as pd
df = pd.read_csv("../datasets/iris/iris.csv")
df.head()
X = df.iloc[:,0:4].values
y = df.iloc[:,4].values
X[0]
# +
#import plotly.plotly as py # previous version
#from plotly.graph_objs import * # previous version
#import plotly.tools as tls # previous version
import chart_studio.plotly as py
from plotly.graph_objects import *
import chart_studio
import warnings
warnings.filterwarnings('ignore')
# +
#tls.set_credentials_file(username='JuanGabriel', api_key='<KEY>') # previous version; must be changed
# To generate the plots with plotly, you need to create a user account for access to Plotly Cloud.
# Once that user is created, fill in the username and api_key fields below with the credentials
# provided by the application.
chart_studio.tools.set_credentials_file(username='<username>', api_key='<api_key>')
# +
traces = []
legend = {0:True, 1:True, 2:True, 3:True}
colors = {'setosa': 'rgb(255,127,20)',
'versicolor': 'rgb(31, 220, 120)',
'virginica': 'rgb(44, 50, 180)'}
for col in range(4):
for key in colors:
traces.append(Histogram(x=X[y==key, col], opacity = 0.7,
xaxis="x%s"%(col+1), marker=Marker(color=colors[key]),
name = key, showlegend=legend[col]))
legend = {0:False, 1:False, 2:False, 3:False}
data = Data(traces)
layout = Layout(barmode="overlay",
                xaxis=XAxis(domain=[0, 0.25], title="Sepal length (cm)"),
                xaxis2=XAxis(domain=[0.3, 0.5], title="Sepal width (cm)"),
                xaxis3=XAxis(domain=[0.55, 0.75], title="Petal length (cm)"),
                xaxis4=XAxis(domain=[0.8, 1.0], title="Petal width (cm)"),
                yaxis=YAxis(title="Number of samples"),
                title="Distribution of the features of the different Iris flowers")
fig = Figure(data = data, layout = layout)
py.iplot(fig)
# -
from sklearn.preprocessing import StandardScaler
X_std = StandardScaler().fit_transform(X)
# +
traces = []
legend = {0:True, 1:True, 2:True, 3:True}
colors = {'setosa': 'rgb(255,127,20)',
'versicolor': 'rgb(31, 220, 120)',
'virginica': 'rgb(44, 50, 180)'}
for col in range(4):
for key in colors:
traces.append(Histogram(x=X_std[y==key, col], opacity = 0.7,
xaxis="x%s"%(col+1), marker=Marker(color=colors[key]),
name = key, showlegend=legend[col]))
legend = {0:False, 1:False, 2:False, 3:False}
data = Data(traces)
layout = Layout(barmode="overlay",
                xaxis=XAxis(domain=[0, 0.25], title="Sepal length (cm)"),
                xaxis2=XAxis(domain=[0.3, 0.5], title="Sepal width (cm)"),
                xaxis3=XAxis(domain=[0.55, 0.75], title="Petal length (cm)"),
                xaxis4=XAxis(domain=[0.8, 1.0], title="Petal width (cm)"),
                yaxis=YAxis(title="Number of samples"),
                title="Distribution of the features of the different Iris flowers")
fig = Figure(data = data, layout = layout)
py.iplot(fig)
# -
# ### 1 - Compute the eigenvalue and eigenvector decomposition
# ##### a) Using the Covariance Matrix
from IPython.display import display, Math, Latex
display(Math(r'\sigma_{jk} = \frac{1}{n-1}\sum_{i=1}^n (x_{ij} - \overline{x_j})(x_{ik} - \overline{x_k})'))
display(Math(r'\Sigma = \frac{1}{n-1}((X-\overline{x})^T(X-\overline{x}))'))
display(Math(r'\overline{x} = \frac{1}{n}\sum_{i=1}^n x_i\in \mathbb R^m'))
import numpy as np
mean_vect = np.mean(X_std, axis=0)
mean_vect
cov_matrix = (X_std - mean_vect).T.dot((X_std - mean_vect))/(X_std.shape[0]-1)
print("The covariance matrix is \n%s" % cov_matrix)
np.cov(X_std.T)
eig_vals, eig_vectors = np.linalg.eig(cov_matrix)
print("Eigenvalues \n%s" % eig_vals)
print("Eigenvectors \n%s" % eig_vectors)
# ##### b) Using the Correlation Matrix
corr_matrix = np.corrcoef(X_std.T)
corr_matrix
eig_vals_corr, eig_vectors_corr = np.linalg.eig(corr_matrix)
print("Eigenvalues \n%s" % eig_vals_corr)
print("Eigenvectors \n%s" % eig_vectors_corr)
corr_matrix = np.corrcoef(X.T)
corr_matrix
# ##### c) Singular Value Decomposition
u,s,v = np.linalg.svd(X_std.T)
u
s
v
# ### 2 - The Principal Components
for ev in eig_vectors.T:  # the eigenvectors are the columns, so iterate over the transpose
    print("The norm of the eigenvector is: %s" % np.linalg.norm(ev))
eigen_pairs = [(np.abs(eig_vals[i]), eig_vectors[:,i]) for i in range(len(eig_vals))]
eigen_pairs
# Sort the (eigenvalue, eigenvector) pairs by eigenvalue, from largest to smallest;
# sorting by key avoids comparing the eigenvector arrays when two eigenvalues tie
eigen_pairs.sort(key=lambda p: p[0], reverse=True)
eigen_pairs
print("Eigenvalues in descending order:")
for ep in eigen_pairs:
print(ep[0])
total_sum = sum(eig_vals)
var_exp = [(i/total_sum)*100 for i in sorted(eig_vals, reverse=True)]
cum_var_exp = np.cumsum(var_exp)
# +
plot1 = Bar(x=["PC %s" % i for i in range(1, 5)], y=var_exp, showlegend=False)
plot2 = Scatter(x=["PC %s" % i for i in range(1, 5)], y=cum_var_exp, showlegend=True, name="% of cumulative explained variance")
data = Data([plot1, plot2])
layout = Layout(xaxis=XAxis(title="Principal components"),
                yaxis=YAxis(title="Percentage of explained variance"),
                title="Percentage of variance explained by each principal component")
fig = Figure(data = data, layout = layout)
py.iplot(fig)
# -
W = np.hstack((eigen_pairs[0][1].reshape(4,1),
eigen_pairs[1][1].reshape(4,1)))
W
X[0]
# ### 3 - Projecting the variables onto the new vector subspace
display(Math(r'Y = X \cdot W, X \in M(\mathbb R)_{150, 4}, W \in M(\mathbb R)_{4,2}, Y \in M(\mathbb R)_{150, 2}'))
Y = X_std.dot(W)
Y
# +
results = []
for name in ('setosa', 'versicolor', 'virginica'):
result = Scatter(x=Y[y==name,0], y = Y[y==name, 1],
mode = "markers", name=name,
marker=Marker(size = 12, line = Line(color='rgba(220,220,220,0.15)', width=0.5), opacity = 0.8))
results.append(result)
data = Data(results)
layout = Layout(showlegend=True,  # Scene is for 3D plots; for a 2D scatter the axis titles go directly on Layout
                xaxis=XAxis(title="Principal Component 1"),
                yaxis=YAxis(title="Principal Component 2"))
fig = Figure(data=data, layout=layout)
py.iplot(fig)
# -
|
notebooks/T10 - 1 - Analisis de Componentes Principales_Py38.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# author=cxf
# date=2020-8-8
# script for a quick look at the data
# subplot 1: RNCU distribution
import numpy as np
import matplotlib.gridspec as mg
import matplotlib.pyplot as mp
# import warnings filter
from warnings import simplefilter
mp.switch_backend('TkAgg')
# ignore all warnings (no category is given, so this is not limited to FutureWarning)
simplefilter(action='ignore')
###########################################################################
data1 = []
name = []
RNCU_value=[]
# Read RNCU index
with open('machine_X_index.txt', 'r') as fx1:
for line in fx1.readlines():
each_sample = np.array(line[0:-1].split(','))[1:].astype('int32')
data1.append(each_sample)
name.append(np.array(line[0:-1].split(','))[0])
x1 = np.array(data1)
# Read RNCU value
with open('machine_X_values.txt', 'r') as fy1:
for line in fy1.readlines():
each_sample = np.array(line[0:-1].split(','))[1:].astype('int32')
RNCU_value.append(each_sample)
y1 = np.array(RNCU_value)
# subplot2 error rate at different cutoff
data2 = []
with open('error.txt', 'r') as fx2:
for line in fx2.readlines():
each_sample = np.array(line[0:-1].split(','))[1:].astype('f8')
data2.append(each_sample)
x2 = np.array(data2)
# subplot2 number of sites which could be genotypes with 90% homozygotes
data3 = []
with open('a90.txt', 'r') as fx3:
for line in fx3.readlines():
each_sample = np.array(line[0:-1].split(','))[1:].astype('int32')
data3.append(each_sample)
x3 = np.array(data3)
# subplot3 number of sites which could be genotypes with 95% homozygotes
data4 = []
with open('a95.txt', 'r') as fx4:
for line in fx4.readlines():
each_sample = np.array(line[0:-1].split(','))[1:].astype('int32')
data4.append(each_sample)
#print(each_sample)
x4 = np.array(data4)
# subplot4 number of sites which could be genotypes with 99% homozygotes
data5 = []
with open('a99.txt', 'r') as fx5:
for line in fx5.readlines():
each_sample = np.array(line[0:-1].split(','))[1:].astype('int32')
data5.append(each_sample)
x5 = np.array(data5)
# -
# draw pictures
gs = mg.GridSpec(3, 2)
for i in range(1, 192):
# 1
mp.figure(figsize=(10,5))
mp.subplot(gs[0, :2])
mp.grid(ls=':')
mp.title(f'{name[i]}')
mp.plot(x1[i][:40],y1[i][:40], label='RNCU')
ax = mp.gca()
ax.xaxis.set_major_locator(mp.MultipleLocator(1))
mp.legend()
# 3
mp.subplot(gs[1, 0])
mp.grid(ls=':')
mp.plot(np.arange(0, 11), x2[i], label='Error_rate')
ax = mp.gca()
ax.xaxis.set_major_locator(mp.MultipleLocator(1))
mp.legend()
# 4
mp.subplot(gs[1, 1])
mp.grid(ls=':')
mp.plot(np.arange(0, 11), x3[i], label='90% sites')
ax = mp.gca()
ax.xaxis.set_major_locator(mp.MultipleLocator(1))
mp.legend()
# 5
mp.subplot(gs[2, 0])
mp.grid(ls=':')
mp.plot(np.arange(0, 11), x4[i], label='95% sites')
ax = mp.gca()
ax.xaxis.set_major_locator(mp.MultipleLocator(1))
mp.legend()
# 6
mp.subplot(gs[2, 1])
mp.grid(ls=':')
mp.plot(np.arange(0, 11), x5[i], label='99% sites')
ax = mp.gca()
ax.xaxis.set_major_locator(mp.MultipleLocator(1))
mp.legend()
mp.show()
|
model_training/0.prepare_processing/run1/take_look.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # 1.3. Introducing the multidimensional array in NumPy for fast array computations
import random
import numpy as np
n = 1000000
x = [random.random() for _ in range(n)]
y = [random.random() for _ in range(n)]
x[:3], y[:3]
z = [x[i] + y[i] for i in range(n)]
z[:3]
# %timeit [x[i] + y[i] for i in range(n)]
xa = np.array(x)
ya = np.array(y)
xa[:3]
za = xa + ya
za[:3]
# %timeit xa + ya
# %timeit sum(x) # pure Python
# %timeit np.sum(xa) # NumPy
d = [abs(x[i] - y[j])
for i in range(1000)
for j in range(1000)]
d[:3]
da = np.abs(xa[:1000, np.newaxis] - ya[:1000])
da
# %timeit [abs(x[i] - y[j]) \
# for i in range(1000) \
# for j in range(1000)]
# %timeit np.abs(xa[:1000, np.newaxis] - ya[:1000])
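# The speedup in the last pair of timings comes from broadcasting: inserting `np.newaxis`
# turns a length-1000 vector into a (1000, 1) column, and subtracting a (1000,) row from it
# yields the full (1000, 1000) matrix of pairwise differences without a Python-level loop.
# A minimal shape check with tiny arrays:

```python
import numpy as np

a = np.arange(3)             # shape (3,)
b = np.arange(4)             # shape (4,)
diff = a[:, np.newaxis] - b  # (3, 1) - (4,) broadcasts to (3, 4)
print(diff.shape)  # (3, 4)
print(diff[1, 2])  # a[1] - b[2] = 1 - 2 = -1
```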
|
chapter01_basic/03_numpy.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
url = 'C:/Users/juan_/Documents/GitHub/Datasets/drug200.csv'
df = pd.read_csv(url)
df
print(f'The DataFrame has {df.shape[0]} rows and {df.shape[1]} columns')
df.columns
X = df[['Age', 'Sex', 'BP', 'Cholesterol', 'Na_to_K',]].values
X[:,1]
# +
from sklearn import preprocessing
le_sex = preprocessing.LabelEncoder()
le_sex.fit(['F','M'])
X[:,1] = le_sex.transform(X[:,1])
le_BP = preprocessing.LabelEncoder()
le_BP.fit([ 'LOW', 'NORMAL', 'HIGH'])
X[:,2] = le_BP.transform(X[:,2])
le_Chol = preprocessing.LabelEncoder()
le_Chol.fit([ 'NORMAL', 'HIGH'])
X[:,3] = le_Chol.transform(X[:,3])
# -
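# A side note on the encoders above: `LabelEncoder` assigns integer codes by the *sorted*
# order of the fitted classes, not by the order they are listed in `fit`. A pure-Python
# sketch of that mapping (the helper name is illustrative, not part of scikit-learn):

```python
def label_encode(classes, values):
    # scikit-learn's LabelEncoder codes classes in sorted order
    mapping = {c: i for i, c in enumerate(sorted(classes))}
    return [mapping[v] for v in values]

# For BP: sorted(['LOW', 'NORMAL', 'HIGH']) -> ['HIGH', 'LOW', 'NORMAL']
print(label_encode(['LOW', 'NORMAL', 'HIGH'], ['HIGH', 'LOW', 'NORMAL']))  # [0, 1, 2]
```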
y = df[['Drug']].values
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
print(f'Dimensions of train set\nX_train: {X_train.shape}\ny_train: {y_train.shape}')
print(f'\nDimensions of test set\nX_test: {X_test.shape}\ny_test: {y_test.shape}')
Tree = DecisionTreeClassifier(criterion='entropy', max_depth=4)
Tree.fit(X_train,y_train)
predict_Tree = Tree.predict(X_test)
print(predict_Tree[:9])
print(y_test[:9].transpose())
from sklearn import metrics
import matplotlib.pyplot as plt
print(f'Decision Tree Accuracy: {metrics.accuracy_score(y_test,predict_Tree)}')
from io import StringIO
import pydotplus
import matplotlib.image as mpimg
from sklearn import tree
# %matplotlib inline
dot_data = StringIO()
filename = "drugtree.png"
featureNames = df.columns[0:5]
out=tree.export_graphviz(Tree,feature_names=featureNames, out_file=dot_data, class_names= np.unique(y_train), filled=True, special_characters=True,rotate=False)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
graph.write_png(filename)
img = mpimg.imread(filename)
plt.figure(figsize=(100, 200))
plt.imshow(img,interpolation='nearest')
|
Decision_Tree.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + colab={"base_uri": "https://localhost:8080/"} id="gKRLIk_8UAAE" executionInfo={"status": "ok", "timestamp": 1615817510599, "user_tz": -60, "elapsed": 3829, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="a7a84834-e112-449a-b34f-4dae840fd926"
# !pip install aicrowd-cli==0.1
API_KEY = "<KEY>"
# !aicrowd login --api-key $API_KEY
# + colab={"base_uri": "https://localhost:8080/"} id="uaOkMCfbUAdY" executionInfo={"status": "ok", "timestamp": 1615817529573, "user_tz": -60, "elapsed": 22791, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="0f31d865-9fa0-441d-e5b5-88c0ed30b1e0"
# !aicrowd dataset download --challenge rover-classification -j 3
# + id="bE_-odE3UCV2" executionInfo={"status": "ok", "timestamp": 1615817537264, "user_tz": -60, "elapsed": 30470, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
# !rm -rf data
# !mkdir data
# !unzip train.zip -d data/train >/dev/null
# !unzip val.zip -d data/val >/dev/null
# !unzip test.zip -d data/test >/dev/null
# + id="ADEZzRMWUD62" executionInfo={"status": "ok", "timestamp": 1615817538626, "user_tz": -60, "elapsed": 31825, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
import pandas as pd
import os
import re
import tensorflow as tf
# + id="_7MHBCk4UFZx" executionInfo={"status": "ok", "timestamp": 1615817538627, "user_tz": -60, "elapsed": 31820, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
df_train = pd.read_csv("train.csv")
# + id="08p2OrymUGnY" executionInfo={"status": "ok", "timestamp": 1615817538627, "user_tz": -60, "elapsed": 31814, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
df_val = pd.read_csv("val.csv")
# + id="YRYiUmZXUHta" executionInfo={"status": "ok", "timestamp": 1615817538628, "user_tz": -60, "elapsed": 31807, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
df_train['ImageID'] = df_train['ImageID'].astype(str)+".jpg"
df_val['ImageID'] = df_val['ImageID'].astype(str)+".jpg"
# + id="bAswipWfUI3T" executionInfo={"status": "ok", "timestamp": 1615817538628, "user_tz": -60, "elapsed": 31801, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
INPUT_SIZE = 256
# + id="cbIwhuviUKBv" executionInfo={"status": "ok", "timestamp": 1615817538629, "user_tz": -60, "elapsed": 31796, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# + colab={"base_uri": "https://localhost:8080/"} id="kfAsmKriULRF" executionInfo={"status": "ok", "timestamp": 1615817539239, "user_tz": -60, "elapsed": 32397, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="37c2d490-ac7e-47f0-ba39-98aa2d04f4b3"
datagen=ImageDataGenerator(rescale=1./255.)
train_generator=datagen.flow_from_dataframe(
dataframe=df_train,
directory="data/train/",
x_col="ImageID",
y_col="label",
batch_size=32,
seed=42,
shuffle=True,
class_mode="categorical",
target_size=(INPUT_SIZE,INPUT_SIZE))
# + colab={"base_uri": "https://localhost:8080/"} id="7m5HW8SMUMzv" executionInfo={"status": "ok", "timestamp": 1615817539239, "user_tz": -60, "elapsed": 32382, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="e850c502-4071-4ad8-bd9b-8210ce63b195"
val_generator=datagen.flow_from_dataframe(
dataframe=df_val,
directory="data/val/",
x_col="ImageID",
y_col="label",
batch_size=64,
seed=42,
shuffle=True,
class_mode="categorical",
target_size=(INPUT_SIZE,INPUT_SIZE))
# + id="t_5qKxNXUOEs" executionInfo={"status": "ok", "timestamp": 1615817539240, "user_tz": -60, "elapsed": 32369, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation, Conv2D, Flatten, Dropout, MaxPooling2D, BatchNormalization
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras import regularizers, optimizers
import os
import numpy as np
import pandas as pd
# + id="TTw5FO7tUQiX" executionInfo={"status": "ok", "timestamp": 1615817539240, "user_tz": -60, "elapsed": 32359, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
class CustomAugment(object):
def __call__(self, image):
# Random flips and grayscale with some stochasticity
img = self._random_apply(tf.image.flip_left_right, image, p=0.5)
img = self._random_apply(self._color_drop, img, p=0.8)
return img
    def _color_drop(self, x):
        # convert to grayscale, then tile the single channel back to 3 channels
        image = tf.image.rgb_to_grayscale(x)
        image = tf.tile(image, [1, 1, 1, 3])
        return image
def _random_apply(self, func, x, p):
return tf.cond(
tf.less(tf.random.uniform([], minval=0, maxval=1, dtype=tf.float32),
tf.cast(p, tf.float32)),
lambda: func(x),
lambda: x)
# + id="aG75p8ULUamk" executionInfo={"status": "ok", "timestamp": 1615817539506, "user_tz": -60, "elapsed": 32617, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
data_augmentation = tf.keras.Sequential(
[
tf.keras.layers.Lambda(CustomAugment()),
tf.keras.layers.experimental.preprocessing.RandomFlip("horizontal",
input_shape=(INPUT_SIZE,
INPUT_SIZE,
3)),
tf.keras.layers.experimental.preprocessing.RandomRotation(0.1),
tf.keras.layers.experimental.preprocessing.RandomZoom(0.1),
]
)
# + id="1cT3fOlBUSRd" executionInfo={"status": "ok", "timestamp": 1615817543639, "user_tz": -60, "elapsed": 36740, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
model = Sequential()
model.add(data_augmentation)
model.add(tf.keras.applications.ResNet152V2(
include_top=False,
weights="imagenet",
input_shape=(INPUT_SIZE, INPUT_SIZE, 3),
))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(2, activation='softmax'))
model.compile(optimizers.RMSprop(learning_rate=0.0001/10), loss="categorical_crossentropy", metrics=["Recall", "Precision"])
# + id="oNz956tpUbBr" executionInfo={"status": "ok", "timestamp": 1615817543640, "user_tz": -60, "elapsed": 36732, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
model.layers[1].trainable = False
# + id="dyJ2aTuEYpan" executionInfo={"status": "ok", "timestamp": 1615817543640, "user_tz": -60, "elapsed": 36706, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
STEP_SIZE_TRAIN=train_generator.n//train_generator.batch_size
STEP_SIZE_VAL=val_generator.n//val_generator.batch_size
# + colab={"base_uri": "https://localhost:8080/"} id="uMtUXHqtYrWM" executionInfo={"status": "ok", "timestamp": 1615819805873, "user_tz": -60, "elapsed": 2298928, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="94ea284c-0c23-4321-ced3-71ce5de76457"
model.fit(train_generator, validation_data=val_generator, epochs=5)
# + id="L5oOQG5tYuSm" executionInfo={"status": "ok", "timestamp": 1615819805876, "user_tz": -60, "elapsed": 2298877, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
model.layers[1].trainable = True
# + colab={"base_uri": "https://localhost:8080/"} id="9njWjIGvgGir" executionInfo={"status": "ok", "timestamp": 1615819805877, "user_tz": -60, "elapsed": 2298868, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="328ea6cc-1949-4c59-db48-a5fea26a2838"
len(model.layers[1].layers)
# + colab={"base_uri": "https://localhost:8080/"} id="SnDrB-uLllcv" executionInfo={"status": "ok", "timestamp": 1615819806157, "user_tz": -60, "elapsed": 2299125, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="a24001c2-5470-4460-fdb0-7a37b45eed13"
model.summary()
# + id="JbmR6mHlgI27" executionInfo={"status": "ok", "timestamp": 1615819806157, "user_tz": -60, "elapsed": 2299117, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
model.layers[1].trainable = True
for layer in model.layers[1].layers[:100]:
layer.trainable = False
# + colab={"base_uri": "https://localhost:8080/"} id="xEFlCVU6ltxc" executionInfo={"status": "ok", "timestamp": 1615819806158, "user_tz": -60, "elapsed": 2299108, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="00ee0744-c317-443b-dd79-7d04d551b759"
model.summary()
# + id="ELk3Wcb3mAHb" executionInfo={"status": "ok", "timestamp": 1615819806160, "user_tz": -60, "elapsed": 2299100, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
tf.keras.backend.set_value(model.optimizer.learning_rate, 0.0001/100)
# + colab={"base_uri": "https://localhost:8080/"} id="YQG-TE8xlu-Z" executionInfo={"status": "ok", "timestamp": 1615820695062, "user_tz": -60, "elapsed": 3187995, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="c245951e-67f1-4cd0-8c0c-753d56ccb9e7"
model.fit(train_generator, validation_data=val_generator, epochs=2)
# + id="TGi7gS-clyfX" executionInfo={"status": "ok", "timestamp": 1615820695068, "user_tz": -60, "elapsed": 3187992, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
df_test = pd.read_csv("sample_submission.csv",dtype=str)
df_test["ImageID"] = df_test["ImageID"].astype(str)+".jpg"
# + colab={"base_uri": "https://localhost:8080/"} id="veMtR1nImF-4" executionInfo={"status": "ok", "timestamp": 1615820695069, "user_tz": -60, "elapsed": 3187986, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="dc9e35f2-0c80-44b3-bc73-96d25b9b0d45"
test_generator=datagen.flow_from_dataframe(
dataframe=df_test,
directory="data/test/",
x_col="ImageID",
y_col="label",
batch_size=1,
seed=42,
shuffle=False,
class_mode="categorical",
target_size=(INPUT_SIZE,INPUT_SIZE))
# + id="3jS_hAXwmGBa" executionInfo={"status": "ok", "timestamp": 1615820695070, "user_tz": -60, "elapsed": 3187977, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
STEP_SIZE_TEST = test_generator.n//test_generator.batch_size
# + colab={"base_uri": "https://localhost:8080/"} id="KHbbv8bCmGED" executionInfo={"status": "ok", "timestamp": 1615820695071, "user_tz": -60, "elapsed": 3187969, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="25f1fedf-b5b4-4379-ceac-29569903b25c"
STEP_SIZE_TEST
# + colab={"base_uri": "https://localhost:8080/"} id="-XO2EjuumGGE" executionInfo={"status": "ok", "timestamp": 1615820880821, "user_tz": -60, "elapsed": 3373707, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="6132be57-64b0-4850-ca47-563434551cfa"
test_generator.reset()
pred = model.predict(test_generator,
steps=STEP_SIZE_TEST, verbose=1)
# + id="8q-woeyMmGIv" executionInfo={"status": "ok", "timestamp": 1615820880823, "user_tz": -60, "elapsed": 3373700, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
predicted_class_indices = np.argmax(pred,axis=1)
# + id="3HjyRNnvmNq4" executionInfo={"status": "ok", "timestamp": 1615820880831, "user_tz": -60, "elapsed": 3373699, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
labels = (train_generator.class_indices)
labels = dict((v,k) for k,v in labels.items())
predictions = [labels[k] for k in predicted_class_indices]
# + id="xs7wiUwcmNtk" executionInfo={"status": "ok", "timestamp": 1615820880832, "user_tz": -60, "elapsed": 3373692, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
df_test["pred"] = predictions
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="Z9UX1DXcmNv2" executionInfo={"status": "ok", "timestamp": 1615820880841, "user_tz": -60, "elapsed": 3373692, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="2b7d2819-fa1f-465d-ba48-b1f08280765e"
df_test.head()
# + id="sATJkJ1ZmSYO" executionInfo={"status": "ok", "timestamp": 1615820880841, "user_tz": -60, "elapsed": 3373680, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
df_test.drop("label", axis=1, inplace=True)
df_test.rename(columns={"pred": "label"}, inplace=True)
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="0rBwIu_SmSbA" executionInfo={"status": "ok", "timestamp": 1615820880842, "user_tz": -60, "elapsed": 3373672, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="872ca069-83e8-434e-af60-7a9c07b38270"
df_test.head()
# + id="_iXcmKHpmSd8" executionInfo={"status": "ok", "timestamp": 1615820880845, "user_tz": -60, "elapsed": 3373665, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
df_test["ImageID"] = df_test["ImageID"].map(lambda x: re.sub(r"\D", "", str(x)))
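# The `re.sub(r"\D", "", …)` call above deletes every non-digit character, turning a
# filename such as `1024.jpg` back into the bare ID string:

```python
import re

print(re.sub(r"\D", "", "1024.jpg"))  # prints 1024
```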
# + colab={"base_uri": "https://localhost:8080/", "height": 204} id="yN8n2eFGmNyM" executionInfo={"status": "ok", "timestamp": 1615820880845, "user_tz": -60, "elapsed": 3373657, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="5935beae-2a2b-4fff-a462-1daa5889e5d5"
df_test.head()
# + id="v1sqNJaDmbA0" executionInfo={"status": "ok", "timestamp": 1615820880847, "user_tz": -60, "elapsed": 3373648, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}}
df_test.to_csv("data/03_sub.csv", index=False)
# + colab={"base_uri": "https://localhost:8080/"} id="Nq5LR8L1mbDQ" executionInfo={"status": "ok", "timestamp": 1615820884361, "user_tz": -60, "elapsed": 3377154, "user": {"displayName": "<NAME>", "photoUrl": "", "userId": "11235078078976363784"}} outputId="a309ef71-af7f-42b6-cc63-34e89e005f18"
# !aicrowd submission create -c rover-classification -f data/03_sub.csv
|
nbs/colab/03.1_resnet.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Chat Intents Tutorial
# The following provides an example of using the `chatintents` package on a dataset of chat message intents to automatically extract the number of clusters and apply descriptive labels. The associated [Medium blog post](https://towardsdatascience.com/clustering-sentence-embeddings-to-identify-intents-in-short-text-48d22d3bf02e) provides additional detail about the `chatintents` package and this example.
#
# The [bank77 dataset](https://github.com/PolyAI-LDN/task-specific-datasets) is used here, which contains messages representing 77 different intents from users. Four models are compared: Universal Sentence Encoder and three different Sentence Transformer models.
# +
import numpy as np
import pandas as pd
from hyperopt import hp
# for USE model
import tensorflow as tf
import tensorflow_hub as hub
# for Sentence Transformer models
from sentence_transformers import SentenceTransformer
import chatintents
from chatintents import ChatIntents
pd.set_option("display.max_rows", 600)
pd.set_option("display.max_columns", 500)
pd.set_option("max_colwidth", 400)
# -
# ## Load data and pre-trained models
# Load a sample of the bank77 dataset to use as an example. Larger datasets can be used but will take longer to process.
data_sample = pd.read_csv('../data/processed/data_sample.csv')[['text', 'category']]
data_sample.head()
all_intents = list(data_sample['text'])
len(all_intents)
# +
module_url = "https://tfhub.dev/google/universal-sentence-encoder/4"
model_use = hub.load(module_url)
print(f"module {module_url} loaded")
# + tags=[]
model_st1 = SentenceTransformer('all-mpnet-base-v2')
model_st2 = SentenceTransformer('all-MiniLM-L6-v2')
model_st3 = SentenceTransformer('all-distilroberta-v1')
# -
def embed(model, model_type, sentences):
if model_type == 'use':
embeddings = model(sentences)
elif model_type == 'sentence transformer':
embeddings = model.encode(sentences)
return embeddings
# Create document embeddings for the four different models:
embeddings_use = embed(model_use, 'use', all_intents)
embeddings_use.shape
embeddings_st1 = embed(model_st1, 'sentence transformer', all_intents)
embeddings_st1.shape
embeddings_st2 = embed(model_st2, 'sentence transformer', all_intents)
embeddings_st2.shape
embeddings_st3 = embed(model_st3, 'sentence transformer', all_intents)
embeddings_st3.shape
model_use = ChatIntents(embeddings_use, 'use')
model_st1 = ChatIntents(embeddings_st1, 'st1')
model_st2 = ChatIntents(embeddings_st2, 'st2')
model_st3 = ChatIntents(embeddings_st3, 'st3')
# ## Results with user-supplied hyperparameters
clusters_manual = model_st1.generate_clusters(n_neighbors = 15,
n_components = 5,
min_cluster_size = 5,
min_samples = None,
random_state=42)
labels_manual, cost_manual = model_st1.score_clusters(clusters_manual)
print(labels_manual)
print(cost_manual)
# ## Tuning hyperparameters
# ### Random hyperparameter search
# Randomly evaluate 100 hyperparameter combinations within user-supplied ranges.
# +
space = {
"n_neighbors": range(12,16),
"n_components": range(3,7),
"min_cluster_size": range(2,15),
"min_samples": [None]
}
random_st1 = model_st1.random_search(space, 100)
# -
random_st1.head(20)
# As seen above, we could manually inspect the random search results and apply our domain knowledge to decide which model is best. For this problem, we'd expect there to be between 30 and 100 clusters, so the third configuration in the above table seems reasonable (82 clusters and 9.7% of the data is labeled as noise).
# ### Bayesian optimization with Hyperopt
# Rather than selecting the parameters at random, here we use hyperopt to perform a Bayesian search of the hyperparameter space. Note that the hspace dictionary must use hyperopt functions to define the hyperparameter search space.
# +
hspace = {
"n_neighbors": hp.choice('n_neighbors', range(3,16)),
"n_components": hp.choice('n_components', range(3,16)),
"min_cluster_size": hp.choice('min_cluster_size', range(2,16)),
"min_samples": None,
"random_state": 42
}
label_lower = 30
label_upper = 100
max_evals = 100
# -
model_use.bayesian_search(space=hspace,
label_lower=label_lower,
label_upper=label_upper,
max_evals=max_evals)
model_use.best_params
model_use.trials.best_trial
model_st1.bayesian_search(space=hspace,
label_lower=label_lower,
label_upper=label_upper,
max_evals=max_evals)
# Here we see that the bayesian search resulted in a lower loss (0.056) than the previous random search above (0.097 loss).
model_st2.bayesian_search(space=hspace,
label_lower=label_lower,
label_upper=label_upper,
max_evals=max_evals)
model_st3.bayesian_search(space=hspace,
label_lower=label_lower,
label_upper=label_upper,
max_evals=max_evals)
# + [markdown] tags=[]
# ## Visually inspect clusters
# -
# We can visualize the best clusters by using UMAP to reduce the dimensionality down to two dimensions.
model_use.plot_best_clusters()
model_st1.plot_best_clusters()
model_st2.plot_best_clusters()
model_st3.plot_best_clusters()
# ## Apply labels
# Since sentence transformer 1 achieved the lowest loss (after hyperparameter tuning), we'll select that as our best clusters. Then, we can get our final result below of clusters and applied descriptive labels, and then apply those to each document in the original dataset.
# +
# %%time
df_summary, labeled_docs = model_st1.apply_and_summarize_labels(data_sample[['text']])
# -
df_summary.head()
labeled_docs.head()
# For most applications, these above two dataframes would be the final results we'd be trying to obtain.
# + [markdown] tags=[]
# ## Evaluate clustering performance using ground truth labels
# -
# If we know the ground truth labels, then we can evaluate how well our tuned models actually did. In this case we do know the ground truth labels of the bank77 dataset so we can compare the four models evaluated.
# ### Comparing multiple models
# +
models = [model_use, model_st1, model_st2, model_st3]
df_comparison, labeled_docs_all_models = chatintents.evaluate_models(data_sample[['text', 'category']], models)
# -
df_comparison
# In agreement with our conclusion above that sentence transformer 1 would be best since it has the lowest cost, here we see that it does in fact have the highest [Adjusted Rand Index (ARI)](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.adjusted_rand_score.html) and [Normalized Mutual Info (NMI)](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.normalized_mutual_info_score.html). Sentence transformers 2 and 3 swap places in actual performance relative to what their losses would suggest, but the loss measure is still a helpful metric.
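# ARI measures agreement between two labelings of the same items while being invariant to a
# renaming of the labels and corrected for chance. In practice it is computed with
# `sklearn.metrics.adjusted_rand_score`; as a minimal illustration of the quantity itself,
# a pure-Python sketch of the formula (assuming a non-degenerate clustering, so the
# denominator is nonzero):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(a, b):
    """Chance-corrected agreement between two labelings of the same items."""
    n = len(a)
    # pair counts within the contingency table, and within each labeling's clusters
    sum_ij = sum(comb(c, 2) for c in Counter(zip(a, b)).values())
    sum_a = sum(comb(c, 2) for c in Counter(a).values())
    sum_b = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# identical clusterings up to a renaming of the labels score a perfect 1.0
print(adjusted_rand_index([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0
```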
labeled_docs_all_models.sample(5)
labeled_docs_all_models[labeled_docs_all_models['label_st1']==2]
labeled_docs_all_models[labeled_docs_all_models['category']=='card_about_to_expire']
# ### Evaluating labels from a single model to ground truth
chatintents.top_cluster_category(labeled_docs,
data_sample[['text', 'category']],
'text',
df_summary).head(20)
# Most of the extracted labels match the ground truth labels quite well. Understandably, the labels match best when the derived clusters contain a larger percentage of the dominant ground truth category.
|
notebooks/chatintents_tutorial.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# +
import numpy as np
# import some data
from pypge.benchmarks import diffeq
# visualization libraries
import matplotlib.pyplot as plt
# plot the visuals in ipython
# %matplotlib inline
# +
# default parameters to the Simple pendulum, you may change the uncommented ones
params = {
# 'name': "SimplePendulum",
# 'xs_str': ["A", "V"],
'params': {
"M": 1.0, # Mass of pendulum
"R": 1.0 # Length of rod
},
# 'eqn_strs': [
# "V", # dA
# "(-9.8/R)*sin(A)" # dV
# ],
'init_conds': {
"A": 2.0,
"V": 2.0
},
'time_end': 10.0,
'time_step': 0.01,
'noise': 0.1
}
# This returns the params object with more fields for Data and sympy objects
PROB = diffeq.SimplePendulum(params=params)
t_pts = PROB['time_pts']
x_pts = PROB['xs_pts']
p_pts = PROB['xs_pure']
print(PROB['name'])
for i,dx in enumerate(PROB['dxs_str']):
    print(" {:<4s} = {:s}".format(dx, PROB['eqn_strs'][i]))
# With Noise, Plot velocity & angle as a function of time
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(t_pts, x_pts[0], 'g')
ax2.plot(t_pts, x_pts[1])
ax1.set_xlabel('time')
ax1.set_ylabel('angle (green)')
ax2.set_ylabel('velocity (blue)')
plt.show()
# No Noise, Plot velocity & angle as a function of time
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(t_pts, p_pts[0], 'g')
ax2.plot(t_pts, p_pts[1])
ax1.set_xlabel('time')
ax1.set_ylabel('angle (green)')
ax2.set_ylabel('velocity (blue)')
plt.show()
# +
# Since we want a diffeq, we need to calc numerical derivatives
# [explain why eval on numerical derivative data]
dt_pts = np.gradient(t_pts, edge_order=2)
dp_pts = np.gradient(p_pts,dt_pts,edge_order=2)[1]
# But first need to smooth out the "real" data before learning
from scipy.signal import savgol_filter
win_sz = 151
poly_ord = 7
x_pts_sm = savgol_filter(x_pts, win_sz, poly_ord)
plt.plot(t_pts,x_pts[1],'b.', ms=3)
plt.plot(t_pts,x_pts_sm[1], 'r')
plt.show()
## numerical derivatives (first order)
dx_pts_sm = savgol_filter(x_pts, win_sz, poly_ord, deriv=1, delta=t_pts[1])
plt.plot(t_pts,dp_pts[1],'b.', ms=3)
plt.plot(t_pts,dx_pts_sm[1], 'r')
plt.show()
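# As a sanity check of the `deriv=1` usage above, the Savitzky-Golay derivative of a noiseless sine should closely track its analytic derivative (cosine); a minimal sketch with made-up window settings:

```python
import numpy as np
from scipy.signal import savgol_filter

t = np.linspace(0, 2 * np.pi, 1000)
dt = t[1] - t[0]
y = np.sin(t)

# first-order derivative estimate; delta must equal the sample spacing
dy = savgol_filter(y, window_length=51, polyorder=3, deriv=1, delta=dt)

# compare against the analytic derivative, ignoring the window edges
err = np.max(np.abs(dy[50:-50] - np.cos(t)[50:-50]))
print(err)  # small (well below 1e-2)
```

# On noisy data this filter trades a little bias at the edges for a much less noisy derivative than plain finite differences.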
# +
# now let's do some Learning
# we will search for dV, cause that's the interesting one
from pypge.search import PGE
from pypge import expand
# create the PGE estimator
pge = PGE(
system_type = "diffeq",
search_vars = "y",
usable_vars = PROB['xs_str'],
usable_funcs = expand.BASIC_BASE[1:],
pop_count = 3,
peek_count = 9,
max_iter = 4,
workers = 2
)
# A & V are the data values, dV is the y target
pge.fit(x_pts_sm, dx_pts_sm[1])
# +
paretos = pge.get_final_paretos()
finals = [m for front in paretos for m in front]
pge_szs = [m.size() for m in finals]
pge_scr = [m.score for m in finals]
pge_evar = [1.0 - m.evar for m in finals]
pge_szs_f = [m.size() for m in paretos[0]]
pge_scr_f = [m.score for m in paretos[0]]
pge_evar_f = [1.0 - m.evar for m in paretos[0]]
plt.plot(pge_szs, pge_scr, 'b.', pge_szs_f, pge_scr_f, 'ro')
plt.show()
plt.plot(pge_szs, pge_evar, 'b.', pge_szs_f, pge_evar_f, 'ro')
plt.show()
# -
for best_m in paretos[0]:
print(best_m)
y_pred = best_m.predict(best_m, pge.vars, x_pts)
plt.plot(t_pts, dx_pts_sm[1], 'b.', ms=3)
plt.plot(t_pts, y_pred, 'r')
plt.show()
|
notebooks/SimplePendulum.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# +
# code for loading the format for the notebook
import os
# path : store the current path to convert back to it later
path = os.getcwd()
os.chdir( os.path.join('..', 'notebook_format') )
from formats import load_style
load_style()
# +
os.chdir(path)
import numpy as np
import pandas as pd
# 1. magic to print version
# 2. magic so that the notebook will reload external python modules
# %load_ext watermark
# %load_ext autoreload
# %autoreload 2
import keras.backend as K
from keras.datasets import mnist
from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Dense, Activation, Flatten
# %watermark -a 'Ethen' -d -t -v -p numpy,pandas,keras
# -
# # Convolutional Network
# loading the mnist dataset as an example
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0] , 'test samples')
# +
# input image dimensions
img_rows, img_cols = 28, 28
# load training data and do basic data normalization
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# the keras backend supports two different kind of image data format,
# either channel first or channel last, we can detect it and transform
# our raw data accordingly, if it's channel first, we add another dimension
# to represent the depth (RGB color) at the very beginning (it is 1 here because
# mnist is a grey scale image), if it's channel last, we add it at the end
if K.image_data_format() == 'channels_first':
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
# pixel values range from 0 to 255; we can normalize them
# by dividing every value by 255
X_train /= 255
X_test /= 255
print('train shape:', X_train.shape)
# -
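# The channel-ordering logic above can be checked without Keras; a minimal numpy sketch using a fake batch of 4 grayscale 28x28 images (shapes only):

```python
import numpy as np

batch = np.zeros((4, 28, 28))  # fake batch of 4 grayscale 28x28 images

# channels_first: depth axis right after the batch axis
channels_first = batch.reshape(batch.shape[0], 1, 28, 28)
# channels_last: depth axis at the end
channels_last = batch.reshape(batch.shape[0], 28, 28, 1)

print(channels_first.shape)  # (4, 1, 28, 28)
print(channels_last.shape)   # (4, 28, 28, 1)
```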
# one-hot encode the class (target) vectors
n_class = 10
y_train = np_utils.to_categorical(y_train, n_class)
y_test = np_utils.to_categorical(y_test , n_class)
print('y_train shape:', y_train.shape)
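# `np_utils.to_categorical` is just one-hot encoding; the same transform can be sketched in plain numpy by indexing an identity matrix:

```python
import numpy as np

labels = np.array([0, 2, 1, 2])  # toy class labels
n_class = 3
one_hot = np.eye(n_class)[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]
#  [0. 0. 1.]]
```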
# <p>
# <div class="alert alert-danger">
# The following code chunk takes A WHILE if you're running it on a laptop!!
# </div>
# +
model = Sequential()
# apply a 32 3x3 filters for the first convolutional layer
# then we specify the `padding` to be 'same' so we get
# the same width and height for the input (it will automatically do zero-padding),
# the default stride is 1,
# and since this is the first layer we need to specify the input shape of the image
model.add(Conv2D(32, kernel_size = (3, 3), padding = 'same', input_shape = input_shape))
# some activation function after conv layer
model.add(Activation('relu'))
model.add(Conv2D(64, kernel_size = (3, 3), padding = 'same'))
model.add(Activation('relu'))
# pooling layer, we specify the size of the filters for the pooling layer
# the default `stride` is None, which will default to pool_size
model.add(MaxPooling2D(pool_size = (2, 2)))
# before calling the fully-connected layers, we'll have to flatten it
model.add(Flatten())
model.add(Dense(n_class))
model.add(Activation('softmax'))
model.compile(loss = 'categorical_crossentropy',
optimizer = 'adam',
metrics = ['accuracy'])
n_epoch = 12
batch_size = 2056
model.fit(X_train, y_train,
batch_size = batch_size,
epochs = n_epoch,
verbose = 1,
validation_data = (X_test, y_test))
# evaluating the score, categorical cross entropy error and accuracy
score = model.evaluate(X_test, y_test, verbose = 0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
# -
# # Reference
#
# - [Keras Example: mnist_cnn example](https://github.com/fchollet/keras/blob/master/examples/mnist_cnn.py)
|
keras/cnn_image_keras.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: myenv
# language: python
# name: myenv
# ---
# # Quality-Control Modeling & Model Interpretation with Automated Machine Learning (fast remote execution)
#
# We build a quality-control model using sensor data collected from a manufacturing process together with inspection results.
# - Import the Python SDK
# - Connect to the Azure ML service Workspace
# - Create an Experiment
# - Prepare the data
# - Prepare the compute environment
# - Configure automated machine learning
# - Train models and review the results
# - Interpret the model
# ## 1. Prerequisites
# ### Import the Python SDK
# Import the Azure Machine Learning service Python SDK
# +
import logging
from matplotlib import pyplot as plt
import pandas as pd
import os
import azureml.core
from azureml.core.experiment import Experiment
from azureml.core.workspace import Workspace
from azureml.core.dataset import Dataset
from azureml.train.automl import AutoMLConfig
# -
# ### Connect to the Azure ML workspace
# Connect to the Azure Machine Learning service workspace. Authentication against Azure is required.
ws = Workspace.from_config()
print(ws.name, ws.location, ws.resource_group, ws.location, sep = '\t')
# ### Set the experiment name
# choose a name for experiment
experiment_name = 'automl-classif-factoryQC-remote'
experiment=Experiment(ws, experiment_name)
# ### Prepare the training data
dataset = Dataset.get_by_name(ws, name='factory')
dataset.take(5).to_pandas_dataframe()
train_dataset, test_dataset = dataset.random_split(0.8, seed=1234)
X_train = train_dataset.drop_columns(columns=['Quality','ID'])
y_train = train_dataset.keep_columns(columns=['Quality'], validate=True)
X_test = test_dataset.drop_columns(columns=['Quality','ID'])
y_test = test_dataset.keep_columns(columns=['Quality'], validate=True)
# ### Configure the compute environment (Machine Learning Compute)
# A Machine Learning Compute cluster named `cpucluster` must be created in advance
from azureml.core.compute import ComputeTarget
compute_target = ComputeTarget(ws, "cpucluster")
# +
from azureml.core.runconfig import RunConfiguration
from azureml.core.conda_dependencies import CondaDependencies
# create a new RunConfig object
conda_run_config = RunConfiguration(framework="python")
# Set compute target to AmlCompute
conda_run_config.target = compute_target
conda_run_config.environment.docker.enabled = True
cd = CondaDependencies.create(conda_packages=['numpy','py-xgboost<=0.80', 'tensorflow>=1.10.0,<=1.12.0'])
conda_run_config.environment.python.conda_dependencies = cd
# -
# ## 2. Automated Machine Learning
# ### Training configuration
# +
automl_settings = {
"iteration_timeout_minutes": 5,
"iterations": 50,
"n_cross_validations": 3,
"primary_metric": 'AUC_weighted',
"preprocess": True,
"enable_tf" : True,
"enable_voting_ensemble": False,
"enable_stack_ensemble": False
}
automl_config = AutoMLConfig(task = 'classification',
X = X_train,
y = y_train,
run_configuration=conda_run_config,
max_concurrent_iterations = 10,
**automl_settings
)
# -
# ### Run and check the results
remote_run = experiment.submit(automl_config, show_output = True)
# Check the results with the widget
from azureml.widgets import RunDetails
RunDetails(remote_run).show()
# Output detailed logs
remote_run.get_details()
best_run, fitted_model = remote_run.get_output()
best_run
# ### Understanding the model
fitted_model.named_steps['datatransformer'].get_featurization_summary()
# +
from pprint import pprint
def print_model(model, prefix=""):
for step in model.steps:
print(prefix + step[0])
if hasattr(step[1], 'estimators') and hasattr(step[1], 'weights'):
pprint({'estimators': list(
e[0] for e in step[1].estimators), 'weights': step[1].weights})
print()
for estimator in step[1].estimators:
print_model(estimator[1], estimator[0] + ' - ')
else:
pprint(step[1].get_params())
print()
print_model(fitted_model)
# -
# ## 3. Model interpretation
# +
from azureml.train.automl.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations
automl_explainer_setup_obj = automl_setup_model_explanations(fitted_model, X=X_train, X_test=X_test, y=y_train, task='classification')
# -
from azureml.explain.model.mimic.models.lightgbm_model import LGBMExplainableModel
from azureml.explain.model.mimic_wrapper import MimicWrapper
explainer = MimicWrapper(ws, automl_explainer_setup_obj.automl_estimator, LGBMExplainableModel,
init_dataset=automl_explainer_setup_obj.X_transform, run=best_run,
features=automl_explainer_setup_obj.engineered_feature_names,
feature_maps=[automl_explainer_setup_obj.feature_map],
classes=automl_explainer_setup_obj.classes)
raw_explanations = explainer.explain(['local', 'global'], get_raw=True,
raw_feature_names=automl_explainer_setup_obj.raw_feature_names,
eval_dataset=automl_explainer_setup_obj.X_test_transform)
#print(raw_explanations.get_feature_importance_dict())
from azureml.contrib.explain.model.visualize import ExplanationDashboard
ExplanationDashboard(raw_explanations, automl_explainer_setup_obj.automl_pipeline, automl_explainer_setup_obj.X_test_raw)
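# The `MimicWrapper` above explains the AutoML model by training a simpler, interpretable surrogate to mimic its predictions. The same idea can be sketched with plain scikit-learn; the model and data below are stand-ins for illustration, not the AutoML pipeline:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=8, random_state=0)

# "black box" model we want to explain
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# interpretable surrogate trained on the black box's predictions, not on y
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# fidelity: how often the surrogate agrees with the black box
fidelity = np.mean(surrogate.predict(X) == black_box.predict(X))
print(fidelity)  # typically high, though not 1.0 for a shallow tree
```

# A high-fidelity surrogate lets you read off feature importances and decision rules that approximately describe the black-box model's behavior.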
|
Sample/Automated-Machine-Learning/FactoryQC-classification-explainer-remote.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import scipy
import scipy.stats
y = [1,1,2,2,3,3,3,3,3,3,3,4,4,5,6,7,8,8]  # the data sequence to fit
dist = getattr(scipy.stats, 'norm')  # normal distribution
loc,scale = dist.fit(y)  # maximum-likelihood estimates of the distribution parameters
dist.cdf(3,loc,scale)  # P(X <= 3) = 0.34493 (CDF: cumulative distribution function)
# -
dist.pdf(3,loc,scale)  # probability density function
# +
import matplotlib.pyplot as plt
h = plt.hist(y, bins=range(len(y)), color='w')
pdf_fitted = dist.pdf(list(set(y)), loc, scale) * len(y)
plt.plot(pdf_fitted, label='norm')
plt.show()
# -
dir(scipy.stats)  # other distributions available in scipy.stats
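# For the normal distribution, `dist.fit` has a closed-form maximum-likelihood solution: `loc` is the sample mean and `scale` is the population standard deviation (ddof=0). A quick check on the same data:

```python
import numpy as np
import scipy.stats

y = [1, 1, 2, 2, 3, 3, 3, 3, 3, 3, 3, 4, 4, 5, 6, 7, 8, 8]
loc, scale = scipy.stats.norm.fit(y)

print(loc, np.mean(y))    # loc equals the sample mean
print(scale, np.std(y))   # scale equals the population std (ddof=0)
```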
|
scipy/get_list_norm_distribution_probability.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: 'Python 3.7.4 64-bit (''base'': conda)'
# name: python37464bitbaseconda283db1cc85e247fb8163a5e3375dfbda
# ---
# # 2. Modeling and Predicting Fantasy Success
#
# This notebook covers some initial analysis with correlation and feature importance, along with simple one-hot encoding of positions. Then three models (random forest, linear regression with elastic net, and k-nearest neighbors) are compared across various hyperparameter settings using cross-validation on a split of the training data.
#
# When picking parameters, the models are scored on root mean squared error so that a predicted score can be output. However, to quantify how successful the models are, they are compared against a consensus rating from fantasy football writers (raw data from 2016-2019 predicted rankings from [Fantasy Pros](https://www.fantasypros.com/); the data was downloaded from their API, which can be adjusted to look at specific positions, weeks, scoring types, and more: https://partners.fantasypros.com/api/v1/consensus-rankings.php?sport=NFL&year=2019&week=0&id=1054&position=ALL&type=ST&scoring=HALF&filters=7:8:9:285:699&export=csv)
#
# This comparison converts each model's outputted scores into a ranking per position, which is compared to the actual rankings during the season based on points per game. Each model's ranking is then compared against the consensus ranking using mean absolute error, which avoids penalizing outliers too heavily given the turbulent nature of injuries.
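# The ranking comparison described above boils down to a mean absolute error between two rank lists; a minimal sketch with made-up rankings (not real player data):

```python
# actual end-of-season ranks vs. one set of predicted ranks (illustrative values)
actual = [1, 2, 3, 4, 5]
pred = [2, 1, 3, 5, 4]

# mean absolute rank error over the top-n players
mae = sum(abs(a - p) for a, p in zip(actual, pred)) / len(actual)
print(mae)  # 0.8
```

# A lower MAE than the professional consensus on the same top-n slice is what the notebook counts as "beating" the experts.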
# +
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import re
from sklearn.preprocessing import OneHotEncoder
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from sklearn.metrics import mean_squared_error
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import LinearRegression
from sklearn.linear_model import ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import VotingRegressor
# +
x_4 = pd.read_csv("data_output/4_years_x.csv", index_col=0)
y_4 = pd.read_csv("data_output/4_years_y.csv", index_col=0)
x_2 = pd.read_csv("data_output/2_years_x.csv", index_col=0)
y_2 = pd.read_csv("data_output/2_years_y.csv", index_col=0)
x_2020 = pd.read_csv("data_output/2020_x.csv", index_col=0)
# -
# ## 2.1 Correlation Matrix vs. Half PPR
# +
def correlation_hppr(x_, y_):
x1 = x_.iloc[:,4:40]
x2 = x_.iloc[:,40:76]
x3 = x_.iloc[:,76:112]
x4 = x_.iloc[:,112:148]
cor_hppr = pd.DataFrame(list(range(len(x1.columns)+1)), columns=["key"])
cor_hppr["var"] = ["HPPR_G"] + list(x1.columns)
count = 1
for x in [x1, x2, x3, x4]:
year = ("%s-YearBack" % count)
full_x = pd.concat([y_, x], axis=1, sort=False)
cor = full_x.corr()
hppr = cor[["HPPR_G"]]
hppr["key"] = list(range(len(hppr)))
cor_hppr = pd.merge(cor_hppr,hppr, how="inner", on="key")
count += 1
cor_hppr = cor_hppr.set_index("var").drop(columns="key")
cor_hppr.columns = ['Y1', 'Y2', "Y3", "Y4"]
cor_hppr.to_csv("data_output/y1_y4_correlation.csv")
plt.figure(figsize=(5,10))
sns.heatmap(cor_hppr,annot=True)
plt.tight_layout()
correlation_hppr(x_4, y_4)
# -
# ## 2.2 Preprocess X Data
# +
def preprocess_x(train_x, x_2020):
train_x["t"] = 0
x_2020["t"] = 1
dfx = pd.concat([train_x, x_2020])
dfx = dfx.drop(columns=["Player"]).reset_index(drop=True)
ohe = OneHotEncoder(handle_unknown='ignore')
cat_col = dfx[["Position"]]
ohe.fit(cat_col)
feature_col_list = list(ohe.get_feature_names())
ohe_array = ohe.transform(dfx[["Position"]]).toarray()
dfx_ohe = pd.DataFrame(ohe_array, columns=feature_col_list)
dfx= dfx.reset_index(drop=True)
dfx = pd.concat([dfx_ohe, dfx], axis=1, sort=False)
train_x = dfx[dfx["t"] == 0].drop(columns=["t", "Tm", "Position"]).reset_index(drop=True)
x_2020 = dfx[dfx["t"] == 1].drop(columns=["t", "Tm", "Position"]).reset_index(drop=True)
return train_x, x_2020
x_4_adj, x2020_adj = preprocess_x(x_4, x_2020)
x_2_adj, x2020_adj = preprocess_x(x_2, x_2020)
print((list(x_2_adj.columns)))
print(len(list(x2020_adj.columns)))
# -
# ## 2.3 Feature Importance RFR
# +
from sklearn.ensemble import RandomForestRegressor
def feature_importance(x, y):
features = pd.DataFrame()
features['feature'] = x.columns
feat_hue = []
for row in features["feature"]:
end = str(row)[-2:]
if end in ["-1", "-2", "-3", "-4"]:
end = str(row)[-1:] + " Year Ago"
feat_hue.append(end)
else:
feat_hue.append("Position / Year")
features["hue"] = feat_hue
reg = RandomForestRegressor(random_state=0).fit(x, y)
features['importance'] = reg.feature_importances_
features.sort_values(by=['importance'], ascending=False, inplace=True)
return features
features = feature_importance(x_4_adj, y_4)
# -
sns.catplot(x="importance", y="feature", data=features.head(30), kind="bar", orient="h", hue="hue", dodge=False)
# ## 2.4 Model Selection and Hyperparameters
X_train, X_test, y_train, y_test = train_test_split(x_2_adj, y_2, test_size=0.25, random_state=1)
# + tags=["outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend", "outputPrepend"]
def check_models_cv(classifier, X_train, X_test, y_train, y_test):
rscv = RandomizedSearchCV(
classifier[0],
param_distributions=classifier[2],
scoring="neg_root_mean_squared_error",
n_iter=30,
cv=5,
verbose=1,
n_jobs=4,
)
rscv.fit(X_train, y_train)
y_pred = rscv.best_estimator_.predict(X_test)
rmse = mean_squared_error(y_test, y_pred) ** 0.5
clf_summary = [classifier[1], rscv.best_score_, rmse, rscv.best_estimator_]
return clf_summary
# +
RFR_clf = [
RandomForestRegressor(random_state=1),
"RFR",
{
"max_depth": [5, 7, 9, None],
"min_samples_split": range(2, 5),
"n_estimators": range(30, 101, 10),
},
]
RFR_best = check_models_cv(RFR_clf, X_train, X_test, y_train, y_test)
RFR_best
# +
LR_EN = [
ElasticNet(random_state=1),
    "LR_EN",
{
"alpha": [x * 0.1 for x in range(0, 11, 1)],
"l1_ratio": [x * 0.1 for x in range(0, 11, 1)],
},
]
EN_best = check_models_cv(LR_EN, X_train, X_test, y_train, y_test)
EN_best
# +
KNN = [
KNeighborsRegressor(),
    "KNN",
{
"n_neighbors": range(3, 20, 2),
"leaf_size": range(10, 41, 5),
},
]
KNN_best = check_models_cv(KNN, X_train, X_test, y_train, y_test)
KNN_best
# -
models =[("RFR", RFR_best[3]), ("EN", EN_best[3])]
ensemble = VotingRegressor(models)
# ## 2.5 Predict and Compare to Professional Rankings
# +
def clean_names(df):
name_list = []
for name in df["Player"]:
if name == "<NAME>":
name = "<NAME>"
name_adj = str(name).replace("*", "").replace("+", "").replace(".", "")
name_adj = name_adj.strip()
name_adj = re.sub(' +', ' ', name_adj)
name_split = name_adj.split()
for x in name_split:
if x.lower() in ["iii", "ii", "iv", "v", "jr"]:
name_split.remove(x)
name_adj = ' '.join(name_split)
name_list.append(name_adj)
df["Player"] = name_list
return df
def pull_prof_ff_rank(year, pos):
df = pd.read_csv("data_raw/%s%s.csv" % (year, pos), skiprows=2)
df = df.rename(columns = {'Rank':'Player'})
ranks = list(range(1,len(df)+1))
df["Prof_Rk"] = ranks
df = df[["Player", "Prof_Rk"]]
df = clean_names(df)
return df
# +
def score_ranks(yearly_df, pos):
if pos in ["RB", "WR"]:
n = 24
elif pos in ["QB", "TE"]:
n = 12
rk = list(yearly_df["Rk"].head(n))
pred = list(yearly_df["Pred_Rk"].head(n))
prof = list(yearly_df["Prof_Rk"].head(n))
pred_mae = 0
prof_mae = 0
for x in range(n):
pred_mae += abs(rk[x] - pred[x])
prof_mae += abs(rk[x] - prof[x])
return pred_mae/n, prof_mae/n
def predict_year(year, x, x_full, y, model):
full_train = pd.concat([y, x], axis=1, sort=False)
set_2019 = full_train[full_train["Year"] == year]
x_2019 = set_2019.drop(columns=["HPPR_G"])
y_2019 = pd.DataFrame(set_2019["HPPR_G"])
train_no_2019 = full_train[full_train["Year"] != year]
x_no_2019 = train_no_2019.drop(columns=["HPPR_G"])
y_no_2019 = train_no_2019["HPPR_G"]
reg = model.fit(x_no_2019, y_no_2019)
pred_2019_y = list(reg.predict(x_2019))
actual_2019_y = list(y_2019["HPPR_G"])
#print(mean_squared_error(actual_2019_y, pred_2019_y) ** 0.5)
y_2019["Pred_HPPR_G"] = pred_2019_y
full_2019 = x_full[["Player", "Tm", "Position"]]
full_2019 = y_2019.join(full_2019)
full_2019 = full_2019.sort_values(by=['HPPR_G'], ascending=False)
rank_full = pd.DataFrame()
full_scores = []
for pos in ["RB", "WR", "QB", "TE"]:
scores = 0
pos_df = full_2019[(full_2019["Position"] == pos)]
ranks = list(range(1,len(pos_df)+1))
pos_df["Rk"] = ranks
pos_df= pos_df.sort_values(by=['Pred_HPPR_G'], ascending=False)
pos_df["Pred_Rk"] = ranks
prof_df = pull_prof_ff_rank(year, pos)
pos_df = pd.merge(pos_df, prof_df, how="left", on="Player")
pos_df = pos_df.fillna(max(pos_df["Prof_Rk"])+5)
pos_df = pos_df.sort_values(by=['Rk'], ascending=True)
pred_mae, prof_mae = score_ranks(pos_df, pos)
scores = [pos, pred_mae, prof_mae]
full_scores.append(scores)
rank_full= pd.concat([rank_full, pos_df])
full_scores = pd.DataFrame(full_scores, columns=["Pos", "Pred MAE", "Prof MAE"])
return rank_full, full_scores
# +
def pred_vs_prof_yearly(start_year, end_year, x, x_full, y, model):
full_scores = pd.DataFrame()
full_stat = pd.DataFrame()
for year in range(start_year, end_year+1):
full, scores = predict_year(year, x, x_full, y, model)
scores["Year"] = year
full_scores = pd.concat([full_scores, scores])
full["Year"] = year
full_stat = pd.concat([full_stat, full])
return full_stat, full_scores
def pred_vs_prof_summary(start_year, end_year, x, x_full, y, model_list):
full_scores = pd.DataFrame()
full_stats = pd.DataFrame()
for model in model_list:
stat, scores = pred_vs_prof_yearly(start_year, end_year, x, x_full, y, model[0])
scores["Model"] = model[1]
full_scores = pd.concat([full_scores, scores])
stat["Model"] = model[1]
full_stats = pd.concat([full_stats, stat])
full_stats.to_csv("data_output/prof_vs_pred_stats.csv")
full_scores["Diff"] = full_scores["Prof MAE"] - full_scores["Pred MAE"]
full_scores.to_csv("data_output/prof_vs_pred_models.csv")
return full_scores, full_stats
# -
# ## 2.6 Model Results vs. Professional Ranks
# +
model_list = [
(KNN_best[3], "KNN"),
(RFR_best[3], "RFR"),
(EN_best[3], "LR_EN"),
(ensemble, "ESM")
]
full_score, full_stats = pred_vs_prof_summary(2016, 2019, x_2_adj, x_2, y_2, model_list)
full_score
# -
full_score[["Model", "Diff"]].groupby(["Model"]).mean()
full_score[["Year", "Diff"]].groupby(["Year"]).mean()
full_score[["Pos", "Diff"]].groupby(["Pos"]).mean()
full_score[["Model", "Diff", "Pos"]].groupby(["Model", "Pos"]).mean()
# ## 2.7 Predicting Success 2020 Players
# +
def predict_2020(x, y, x2020_adj, x_2020, model_list):
full_2020 = x_2020[["Player", "Tm", "Position"]]
for model in model_list:
reg = model[0].fit(x, y)
pred_2020_y = list(reg.predict(x2020_adj))
full_2020[model[1]] = pred_2020_y
full_2020 = full_2020.sort_values(by="LR_EN", ascending=False)
full_2020["KNN"]= [x[0] for x in full_2020["KNN"]]
full_2020.to_csv("data_output/2020_pred_values.csv")
return full_2020
pred_2020 = predict_2020(x_2_adj, y_2, x2020_adj, x_2020, model_list)
pred_2020
# -
|
ff-2-model_predict_score.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# Draw all defined logic elements, with color and fill
# -
import schemdraw
from schemdraw import logic
schemdraw.use('svg')
# +
def drawElements(elm_list, n=5, dx=1, dy=2, ofst=.8, fname=None, **kwargs):
x, y = 0, 0
d = schemdraw.Drawing(fontsize=12)
for e in elm_list:
element = getattr(logic, e)
A = d.add(element, xy=[(d.unit+1)*x+1,y], toplabel=e, **kwargs)
x = x + dx
if x >= n:
x=0
y=y-dy
return d
elist = ['And', 'Nand', 'Or', 'Nor', 'Xor', 'Xnor', 'Buf', 'Not', 'NotNot', 'Tgate',
'Schmitt', 'SchmittNot', 'SchmittAnd', 'SchmittNand']
# -
display(drawElements(elist, d='right'))
display(drawElements(elist, d='right', fill='yellow'))
display(drawElements(elist, d='right', color='blue'))
d = schemdraw.Drawing()
G = d.add(logic.And())
G.add_label('A', loc='in1')
G.add_label('B', loc='in2')
G.add_label('C', loc='out')
d.draw()
logic.And(inputs=5)
logic.Or(inputs=5)
logic.Or(inputs=5, inputnots=[1, 2, 3, 4, 5])
|
test/test_logic.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # dictionary data structure ==> mapping of key-value pairs
# +
# a dict let you use anything, not just numbers
studentDetails = {"Name" : 'Shahin', "ID" : 1404021, "Address" : "Chittagong"}
print(studentDetails["Name"])
print(studentDetails["ID"])
print(studentDetails["Address"])
print(studentDetails)
# -
studentDetails.items()
studentDetails.keys()
studentDetails.values()
# ### list inside dictionary
dict_item = {'key1' : 'value', 'key2' : [1, 3, 'sos']}
print(dict_item['key2'])
print(dict_item['key2'][2])
# ### nested dictionary
dict_item = {'key1' : 'value', 'keyX' : {'innerkey' : [1, 3, 'sos']}}
print(dict_item['keyX'])
print(dict_item['keyX']['innerkey'])
print(dict_item['keyX']['innerkey'][2])
# # adding items in dictionary
# +
studentDetails["Language"] = "Python"
print(studentDetails)
studentDetails[1] = "Java"
print(studentDetails)
# -
# # changing value in dictionary
studentDetails[1] = "MySQL"
print(studentDetails)
# # delete item from dictionary
del studentDetails["Language"]
print(studentDetails)
# # converting list into dictionary
planetsName = ['Mercury', 'Venus', 'Earth', 'Mars', 'Jupiter', 'Saturn', 'Uranus', 'Neptune']
planet_initial = {planet: planet[0] for planet in planetsName}
planet_length = {planet: len(planet) for planet in planetsName}
print(planet_initial)
print(planet_length)
# # some operations
"Venus" in planet_initial
"Venues" in planet_initial
planet_initial.keys()
planet_initial.values()
for key in planet_initial:
print(f"{key} : {planet_initial[key]}")
for key, value in planet_initial.items():
print(f"{key.rjust(8)} starts with : {value}")
print(" | ".join(sorted(planet_initial.keys())))
print(" | ".join(sorted(planet_initial.values())))
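# One more operation worth knowing: `.get()` looks up a key but returns a default (instead of raising `KeyError`) when the key is missing:

```python
planet_initial = {'Mercury': 'M', 'Venus': 'V', 'Earth': 'E'}

print(planet_initial.get('Venus'))       # V
print(planet_initial.get('Pluto'))       # None
print(planet_initial.get('Pluto', '?'))  # ?
```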
|
basics/P12_dictionary.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] colab_type="text" id="view-in-github"
# <a href="https://colab.research.google.com/github/NeuromatchAcademy/course-content/blob/master/tutorials/W3D5_NetworkCausality/student/W3D5_Tutorial3.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> <a href="https://kaggle.com/kernels/welcome?src=https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/tutorials/W3D5_NetworkCausality/student/W3D5_Tutorial3.ipynb" target="_parent"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open in Kaggle"/></a>
# -
# # Tutorial 3: Simultaneous fitting/regression
# **Week 3, Day 5: Network Causality**
#
# **By Neuromatch Academy**
#
# **Content creators**: <NAME>, <NAME>, <NAME>
#
# **Content reviewers**: <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
# **Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
#
# <p align='center'><img src='https://github.com/NeuromatchAcademy/widgets/blob/master/sponsors.png?raw=True'/></p>
# ---
# # Tutorial objectives
#
# *Estimated timing of tutorial: 20 min*
#
# This is tutorial 3 on our day of examining causality. Below is the high level outline of what we'll cover today, with the sections we will focus on in this notebook in bold:
#
# 1. Master definitions of causality
# 2. Understand that estimating causality is possible
# 3. Learn 4 different methods and understand when they fail
# 1. perturbations
# 2. correlations
# 3. **simultaneous fitting/regression**
# 4. instrumental variables
#
# ### Notebook 3 objectives
#
# In tutorial 2 we explored correlation as an approximation for causation and learned that correlation $\neq$ causation for larger networks. However, computing correlations is a rather simple approach, and you may be wondering: will more sophisticated techniques allow us to better estimate causality? Can't we control for things?
#
# Here we'll use some common advanced (but controversial) methods that estimate causality from observational data. These methods rely on fitting a function to our data directly, instead of trying to use perturbations or correlations. Since we have the full closed-form equation of our system, we can try these methods and see how well they work in estimating causal connectivity when there are no perturbations. Specifically, we will:
#
# - Learn about more advanced (but also controversial) techniques for estimating causality
# - conditional probabilities (**regression**)
# - Explore limitations and failure modes
# - understand the problem of **omitted variable bias**
#
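# The core idea of the regression approach can be sketched before diving into the helper functions: simulate a linear system, then recover its connectivity matrix by regressing each timestep on the previous one. This minimal sketch uses ordinary least squares for simplicity (the tutorial itself uses Lasso), and the system sizes below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 5, 5000

# random connectivity, scaled so the dynamics are stable
A = rng.normal(size=(n, n))
A /= 1.5 * np.linalg.svd(A, compute_uv=False)[0]

# simulate x_{t+1} = A @ x_t + noise
X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = A @ X[t] + 0.1 * rng.normal(size=n)

# regress X[t+1] on X[t]: lstsq solves X[:-1] @ W ~ X[1:], so W.T estimates A
W, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A_hat = W.T

corr = np.corrcoef(A.ravel(), A_hat.ravel())[0, 1]
print(corr)  # close to 1 in this fully observed, linear setting
```

# The failure mode explored later in this notebook appears when some neurons are unobserved: dropping columns of `X` before the regression biases the estimate (omitted variable bias).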
# + cellView="form"
# @title Tutorial slides
# @markdown These are the slides for the videos in all tutorials today
from IPython.display import IFrame
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/gp4m9/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# -
# ---
# # Setup
# +
# Imports
import numpy as np
import matplotlib.pyplot as plt
from sklearn.multioutput import MultiOutputRegressor
from sklearn.linear_model import Lasso
# + cellView="form"
# @title Figure Settings
import ipywidgets as widgets # interactive display
# %config InlineBackend.figure_format = 'retina'
plt.style.use("https://raw.githubusercontent.com/NeuromatchAcademy/course-content/master/nma.mplstyle")
# + cellView="form"
# @title Plotting Functions
def see_neurons(A, ax, ratio_observed=1, arrows=True):
"""
Visualizes the connectivity matrix.
Args:
A (np.ndarray): the connectivity matrix of shape (n_neurons, n_neurons)
ax (plt.axis): the matplotlib axis to display on
Returns:
Nothing, but visualizes A.
"""
n = len(A)
ax.set_aspect('equal')
thetas = np.linspace(0, np.pi * 2, n, endpoint=False)
x, y = np.cos(thetas), np.sin(thetas),
if arrows:
for i in range(n):
for j in range(n):
if A[i, j] > 0:
ax.arrow(x[i], y[i], x[j] - x[i], y[j] - y[i], color='k', head_width=.05,
width = A[i, j] / 25,shape='right', length_includes_head=True,
alpha = .2)
if ratio_observed < 1:
nn = int(n * ratio_observed)
ax.scatter(x[:nn], y[:nn], c='r', s=150, label='Observed')
ax.scatter(x[nn:], y[nn:], c='b', s=150, label='Unobserved')
ax.legend(fontsize=15)
else:
ax.scatter(x, y, c='k', s=150)
ax.axis('off')
def plot_connectivity_matrix(A, ax=None):
"""Plot the (weighted) connectivity matrix A as a heatmap
Args:
A (ndarray): connectivity matrix (n_neurons by n_neurons)
ax: axis on which to display connectivity matrix
"""
if ax is None:
ax = plt.gca()
lim = np.abs(A).max()
ax.imshow(A, vmin=-lim, vmax=lim, cmap="coolwarm")
# + cellView="form"
# @title Helper Functions
def sigmoid(x):
"""
Compute sigmoid nonlinearity element-wise on x.
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with sigmoid nonlinearity applied
"""
return 1 / (1 + np.exp(-x))
def create_connectivity(n_neurons, random_state=42, p=0.9):
"""
Generate our nxn causal connectivity matrix.
Args:
n_neurons (int): the number of neurons in our system.
random_state (int): random seed for reproducibility
Returns:
A (np.ndarray): our 0.1 sparse connectivity matrix
"""
np.random.seed(random_state)
A_0 = np.random.choice([0, 1], size=(n_neurons, n_neurons), p=[p, 1 - p])
# set the timescale of the dynamical system to about 100 steps
_, s_vals, _ = np.linalg.svd(A_0)
A = A_0 / (1.01 * s_vals[0])
# _, s_val_test, _ = np.linalg.svd(A)
# assert s_val_test[0] < 1, "largest singular value >= 1"
return A
def simulate_neurons(A, timesteps, random_state=42):
"""
Simulates a dynamical system for the specified number of neurons and timesteps.
Args:
A (np.array): the connectivity matrix
timesteps (int): the number of timesteps to simulate our system.
random_state (int): random seed for reproducibility
Returns:
    - X has shape (n_neurons, timesteps).
"""
np.random.seed(random_state)
n_neurons = len(A)
X = np.zeros((n_neurons, timesteps))
for t in range(timesteps - 1):
# solution
epsilon = np.random.multivariate_normal(np.zeros(n_neurons), np.eye(n_neurons))
X[:, t + 1] = sigmoid(A.dot(X[:, t]) + epsilon)
assert epsilon.shape == (n_neurons,)
return X
def get_sys_corr(n_neurons, timesteps, random_state=42, neuron_idx=None):
"""
A wrapper function for our correlation calculations between A and R.
Args:
n_neurons (int): the number of neurons in our system.
timesteps (int): the number of timesteps to simulate our system.
random_state (int): seed for reproducibility
neuron_idx (int): optionally provide a neuron idx to slice out
Returns:
A single float correlation value representing the similarity between A and R
"""
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
R = correlation_for_all_neurons(X)
return np.corrcoef(A.flatten(), R.flatten())[0, 1]
def correlation_for_all_neurons(X):
    """Computes the connectivity matrix for all neurons using correlations
Args:
X: the matrix of activities
Returns:
        estimated_connectivity (np.ndarray): estimated connectivity matrix, of shape (n_neurons, n_neurons)
"""
n_neurons = len(X)
S = np.concatenate([X[:, 1:], X[:, :-1]], axis=0)
R = np.corrcoef(S)[:n_neurons, n_neurons:]
return R
# -
# The helper functions defined above are:
# - `sigmoid`: computes sigmoid nonlinearity element-wise on input, from Tutorial 1
# - `create_connectivity`: generates an nxn causal connectivity matrix, from Tutorial 1
# - `simulate_neurons`: simulates a dynamical system for the specified number of neurons and timesteps, from Tutorial 1
# - `get_sys_corr`: a wrapper function for correlation calculations between A and R, from Tutorial 2
# - `correlation_for_all_neurons`: computes the connectivity matrix for all neurons using correlations, from Tutorial 2
# ---
# # Section 1: Regression: recovering connectivity by model fitting
# + cellView="form"
# @title Video 1: Regression approach
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1m54y1q78b", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="Av4LaXZdgDo", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# You may be familiar with the idea that correlation only implies causation when there are no hidden *confounders*. This aligns with our intuition that correlation implies causality only when no alternative variable could explain away the correlation.
#
# **A confounding example**:
# Suppose you observe that people who sleep more do better in school. It's a nice correlation. But what else could explain it? Maybe people who sleep more are richer, don't work a second job, and have time to actually do homework. If you want to ask if sleep *causes* better grades, and want to answer that with correlations, you have to control for all possible confounds.
#
# A confound is any variable that affects both the outcome and your original covariate. In our example, confounds are things that affect both sleep and grades.
#
# **Controlling for a confound**:
# Confounds can be controlled for by adding them as covariates in a regression. But for your coefficients to be causal effects, you need three things:
#
# 1. **All** confounds are included as covariates
# 2. Your regression assumes the same mathematical form as the true relationship between covariates and outcomes (linear, GLM, etc.)
# 3. No covariates are caused *by* both the treatment (original variable) and the outcome. These are [colliders](https://en.wikipedia.org/wiki/Collider_(statistics)); we won't introduce them today (but Google them on your own time! Colliders are very counterintuitive.)
#
# In the real world it is very hard to guarantee these conditions are met. In the brain it's even harder (as we can't measure all neurons). Luckily today we simulated the system ourselves.
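# As a minimal numeric sketch of why controlling matters (all variables here are toy inventions, not our neuron system), suppose a hidden confound `z` drives both `x` and `y`, while `x` has no causal effect on `y` at all. A naive regression of `y` on `x` finds a strong but spurious coefficient; adding `z` as a covariate recovers the true effect of zero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden confound z drives both x and y; x has NO causal effect on y
z = rng.normal(size=n)
x = 2 * z + rng.normal(size=n)
y = 3 * z + rng.normal(size=n)

# Naive regression of y on x alone: a strong but spurious coefficient
naive = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]

# Controlling for z by adding it as a covariate: x's coefficient falls to ~0
controlled = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0][0]

print(f"naive slope: {naive:.2f}, controlled slope: {controlled:.2f}")
```

# Only when `z` is included does the coefficient on `x` approach its true value of zero, which is exactly why condition 1 above demands that *all* confounds be included.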
# + cellView="form"
# @title Video 2: Fitting a GLM
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV16p4y1S7yE", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="GvMj9hRv5Ak", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
#
#
# Recall that in our system each neuron affects every other via:
#
# $$
# \vec{x}_{t+1} = \sigma(A\vec{x}_t + \epsilon_t),
# $$
#
# where $\sigma$ is our sigmoid nonlinearity from before: $\sigma(x) = \frac{1}{1 + e^{-x}}$
#
# Our system is a closed system, too, so there are no omitted variables. The regression coefficients should be the causal effect. Are they?
# We will use a regression approach to estimate the causal influence of all neurons to neuron #1. Specifically, we will use linear regression to determine the $A$ in:
#
# \begin{equation}
# \sigma^{-1}(\vec{x}_{t+1}) = A\vec{x}_t + \epsilon_t ,
# \end{equation}
#
# where $\sigma^{-1}$ is the inverse sigmoid transformation, also sometimes referred to as the **logit** transformation: $\sigma^{-1}(x) = \log(\frac{x}{1-x})$.
#
# Let $W$ be the matrix of $\vec{x}_t$ values, from the first timestep up to the second-to-last timestep $T-2$:
#
# \begin{equation}
# W =
# \begin{bmatrix}
# \mid & \mid & ... & \mid \\
# \vec{x}_0  & \vec{x}_1  & ... & \vec{x}_{T-2} \\
# \mid & \mid & ... & \mid
# \end{bmatrix}_{n \times (T-1)}
# \end{equation}
#
# Let $Y$ be the $\vec{x}_{t+1}$ values for a selected neuron, indexed by $i$, from the second timestep up to the last timestep $T-1$:
#
# \begin{equation}
# Y =
# \begin{bmatrix}
# x_{i,1} & x_{i,2} & ... & x_{i,T-1} \\
# \end{bmatrix}_{1 \times (T-1)}
# \end{equation}
#
# You will then fit the following model:
#
# \begin{equation}
# \sigma^{-1}(Y^T) = W^TV
# \end{equation}
#
# where $V$ is the $n \times 1$ coefficient matrix of this regression, which will be the estimated connectivity matrix between the selected neuron and the rest of the neurons.
#
# **Review**: As you learned in Week 1, the *lasso*, a.k.a. **$L_1$ regularization**, causes the coefficients to be sparse, containing mostly zeros. Think about why we want this here.
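# To see why a sparsity-inducing penalty helps, here is a small sketch (with made-up toy data, unrelated to our simulated neurons) comparing ordinary least squares with the lasso on a problem whose true coefficients are mostly zero:

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n_samples, n_features = 200, 30

# Ground-truth coefficients are sparse: only 3 of 30 are nonzero
true_coef = np.zeros(n_features)
true_coef[[2, 11, 27]] = [1.5, -2.0, 1.0]

X = rng.normal(size=(n_samples, n_features))
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)

ols = LinearRegression(fit_intercept=False).fit(X, y)
lasso = Lasso(alpha=0.01, fit_intercept=False).fit(X, y)

# OLS gives every feature a small nonzero weight; the lasso zeroes most of them
print("exact zeros, OLS:  ", np.sum(ols.coef_ == 0))
print("exact zeros, lasso:", np.sum(lasso.coef_ == 0))
```

# The lasso's exact zeros match our prior that most neuron-to-neuron connections simply don't exist.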
# ## Coding Exercise 1: Use linear regression plus lasso to estimate causal connectivities
#
# You will now create a function that fits the above regression model and returns $V$. We will then call this function to examine how close the regression and correlation estimates come to the true connectivity.
#
# **Code**:
#
# You'll notice that we've transposed both $Y$ and $W$ here and in the code we've already provided below. Why is that?
#
# This is because the machine learning models provided in scikit-learn expect the *rows* of the input data to be the observations, while the *columns* are the variables. We have that inverted in our definitions of $Y$ and $W$, with the timesteps of our system (the observations) as the columns. So we transpose both matrices to make the matrix orientation correct for scikit-learn.
#
#
# - Because of the abstraction provided by scikit-learn, fitting this regression will just be a call to initialize the `Lasso()` estimator and a call to the `fit()` function
# - Use the following hyperparameters for the `Lasso` estimator:
# - `alpha = 0.01`
# - `fit_intercept = False`
# - How do we obtain $V$ from the fitted model?
#
# We will use the helper function `logit`.
#
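# A toy illustration of the transposition described above (the numbers are arbitrary; only the shapes matter):

```python
import numpy as np

# Hypothetical toy activity: 3 neurons over 5 timesteps (values arbitrary)
X = np.arange(15.0).reshape(3, 5)
neuron_idx = 1

W = X[:, :-1].transpose()            # inputs x_t, shape (timesteps-1, n_neurons)
Y = X[[neuron_idx], 1:].transpose()  # outputs x_{t+1}, shape (timesteps-1, 1)

print(W.shape, Y.shape)  # rows are now observations, as scikit-learn expects
```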
# + cellView="form"
# @markdown Execute this cell to enable helper function `logit`
def logit(x):
"""
Applies the logit (inverse sigmoid) transformation
Args:
x (np.ndarray): the numpy data array we want to transform
Returns
(np.ndarray): x with logit nonlinearity applied
"""
return np.log(x/(1-x))
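# As a quick sanity check (a small sketch restating the same definitions), the logit transformation should exactly undo the sigmoid:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def logit(x):
    return np.log(x / (1 - x))

x = np.linspace(-5, 5, 11)
recovered = logit(sigmoid(x))  # applying sigmoid then logit returns x
print(np.allclose(recovered, x))
```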
# + cellView="both"
def get_regression_estimate(X, neuron_idx):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
neuron_idx (int): a neuron index to compute connectivity for
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
if neuron_idx is specified, V is of shape (n_neurons,).
"""
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[[neuron_idx], 1:].transpose()
# Apply inverse sigmoid transformation
Y = logit(Y)
############################################################################
## TODO: Insert your code here to fit a regressor with Lasso. Lasso captures
## our assumption that most connections are precisely 0.
## Fill in function and remove
raise NotImplementedError("Please complete the regression exercise")
############################################################################
# Initialize regression model with no intercept and alpha=0.01
regression = ...
# Fit regression to the data
regression.fit(...)
V = regression.coef_
return V
# Set parameters
n_neurons = 50 # the size of our system
timesteps = 10000 # the number of timesteps to take
random_state = 42
neuron_idx = 1
# Set up system and simulate
A = create_connectivity(n_neurons, random_state)
X = simulate_neurons(A, timesteps)
# Estimate causality with regression
V = get_regression_estimate(X, neuron_idx)
print("Regression: correlation of estimated connectivity with true connectivity: {:.3f}".format(np.corrcoef(A[neuron_idx, :], V)[1, 0]))
print("Lagged correlation of estimated connectivity with true connectivity: {:.3f}".format(get_sys_corr(n_neurons, timesteps, random_state, neuron_idx=neuron_idx)))
# + [markdown] colab_type="text"
# [*Click for solution*](https://github.com/NeuromatchAcademy/course-content/tree/master//tutorials/W3D5_NetworkCausality/solutions/W3D5_Tutorial3_Solution_17d9b1a7.py)
#
#
# -
# You should find that using regression, our estimated connectivity matrix has a correlation of $0.865$ with the true connectivity matrix. With correlation, our estimated connectivity matrix has a correlation of $0.703$ with the true connectivity matrix.
#
# We can see from these numbers that multiple regression is better than simple correlation for estimating connectivity.
# ---
# # Section 2: Partially Observed Systems
#
# *Estimated timing to here from start of tutorial: 10 min*
#
# If we are unable to observe the entire system, **omitted variable bias** becomes a problem. If we don't have access to all the neurons, and therefore can't control for them, can we still estimate the causal effect accurately?
#
#
# + cellView="form"
# @title Video 3: Omitted variable bias
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1ov411i7dc", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="5CCib6CTMac", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# **Video correction**: the labels "connectivity from"/"connectivity to" are swapped in the video but fixed in the figures/demos below
# We first visualize different subsets of the connectivity matrix when we observe 75% of the neurons vs 25%.
#
# Recall the meaning of entries in our connectivity matrix: $A[i,j] = 1$ means a connection **from** neuron $j$ **to** neuron $i$ with strength $1$, matching the "Connectivity from" / "Connectivity to" axis labels in the figures below.
# + cellView="form"
#@markdown Execute this cell to visualize subsets of connectivity matrix
# Run this cell to visualize the subsets of variables we observe
n_neurons = 25
A = create_connectivity(n_neurons)
fig, axs = plt.subplots(2, 2, figsize=(10, 10))
ratio_observed = [0.75, 0.25] # the proportion of neurons observed in our system
for i, ratio in enumerate(ratio_observed):
sel_idx = int(n_neurons * ratio)
offset = np.zeros((n_neurons, n_neurons))
axs[i,1].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[i, 1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
axs[i, 1].set_xlabel("Connectivity from")
axs[i, 1].set_ylabel("Connectivity to")
plt.colorbar(im, ax=axs[i, 1], fraction=0.046, pad=0.04)
see_neurons(A,axs[i, 0],ratio)
plt.suptitle("Visualizing subsets of the connectivity matrix", y = 1.05)
plt.show()
# -
# ### Interactive Demo 3: Regression performance as a function of the number of observed neurons
#
# We will first change the number of observed neurons in the network and inspect the resulting estimates of connectivity in this interactive demo. How does the estimated connectivity differ?
# + cellView="form"
# @markdown Execute this cell to get helper functions `get_regression_estimate_full_connectivity` and `get_regression_corr_full_connectivity`
def get_regression_estimate_full_connectivity(X):
"""
Estimates the connectivity matrix using lasso regression.
Args:
X (np.ndarray): our simulated system of shape (n_neurons, timesteps)
Returns:
V (np.ndarray): estimated connectivity matrix of shape (n_neurons, n_neurons).
"""
n_neurons = X.shape[0]
# Extract Y and W as defined above
W = X[:, :-1].transpose()
Y = X[:, 1:].transpose()
# apply inverse sigmoid transformation
Y = logit(Y)
# fit multioutput regression
reg = MultiOutputRegressor(Lasso(fit_intercept=False,
alpha=0.01, max_iter=250 ), n_jobs=-1)
reg.fit(W, Y)
V = np.zeros((n_neurons, n_neurons))
for i, estimator in enumerate(reg.estimators_):
V[i, :] = estimator.coef_
return V
def get_regression_corr_full_connectivity(n_neurons, A, X, observed_ratio, regression_args):
"""
A wrapper function for our correlation calculations between A and the V estimated
from regression.
Args:
n_neurons (int): number of neurons
A (np.ndarray): connectivity matrix
X (np.ndarray): dynamical system
        observed_ratio (float): the proportion of n_neurons observed, must be between 0 and 1.
regression_args (dict): dictionary of lasso regression arguments and hyperparameters
Returns:
A single float correlation value representing the similarity between A and R
"""
assert (observed_ratio > 0) and (observed_ratio <= 1)
sel_idx = np.clip(int(n_neurons*observed_ratio), 1, n_neurons)
sel_X = X[:sel_idx, :]
sel_A = A[:sel_idx, :sel_idx]
sel_V = get_regression_estimate_full_connectivity(sel_X)
return np.corrcoef(sel_A.flatten(), sel_V.flatten())[1,0], sel_V
# + cellView="form"
# @markdown Execute this cell to enable the demo. The plots will take a few seconds to update after moving the slider.
n_neurons = 50
A = create_connectivity(n_neurons, random_state=42)
X = simulate_neurons(A, 4000, random_state=42)
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
@widgets.interact(n_observed = widgets.IntSlider(min = 5, max = 45, step = 5, continuous_update=False))
def plot_observed(n_observed):
to_neuron = 0
fig, axs = plt.subplots(1, 3, figsize=(15, 5))
sel_idx = n_observed
ratio = (n_observed) / n_neurons
offset = np.zeros((n_neurons, n_neurons))
axs[0].title.set_text("{}% neurons observed".format(int(ratio * 100)))
offset[:sel_idx, :sel_idx] = 1 + A[:sel_idx, :sel_idx]
im = axs[1].imshow(offset, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[1], fraction=0.046, pad=0.04)
see_neurons(A,axs[0], ratio, False)
corr, R = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
#rect = patches.Rectangle((-.5,to_neuron-.5),n_observed,1,linewidth=2,edgecolor='k',facecolor='none')
#axs[1].add_patch(rect)
big_R = np.zeros(A.shape)
big_R[:sel_idx, :sel_idx] = 1 + R
#big_R[to_neuron, :sel_idx] = 1 + R
im = axs[2].imshow(big_R, cmap="coolwarm", vmin=0, vmax=A.max() + 1)
plt.colorbar(im, ax=axs[2],fraction=0.046, pad=0.04)
c = 'w' if n_observed<(n_neurons-3) else 'k'
axs[2].text(0,n_observed+3,"Correlation : {:.2f}".format(corr), color=c, size=15)
#axs[2].axis("off")
axs[1].title.set_text("True connectivity")
axs[1].set_xlabel("Connectivity from")
axs[1].set_ylabel("Connectivity to")
axs[2].title.set_text("Estimated connectivity")
axs[2].set_xlabel("Connectivity from")
#axs[2].set_ylabel("Connectivity to")
# -
# Next, we will inspect a plot of the correlation between true and estimated connectivity matrices vs the percent of neurons observed over multiple trials.
# What is the relationship that you see between performance and the number of neurons observed?
#
# **Note:** the cell below will take about 25-30 seconds to run.
# + cellView="form"
# @markdown Plot correlation vs. subsampling
import warnings
warnings.filterwarnings('ignore')
# we'll simulate many systems for various ratios of observed neurons
n_neurons = 50
timesteps = 5000
ratio_observed = [1, 0.75, 0.5, .25, .12] # the proportion of neurons observed in our system
n_trials = 3 # run it this many times to get variability in our results
reg_args = {
"fit_intercept": False,
"alpha": 0.001
}
corr_data = np.zeros((n_trials, len(ratio_observed)))
for trial in range(n_trials):
A = create_connectivity(n_neurons, random_state=trial)
X = simulate_neurons(A, timesteps)
print("simulating trial {} of {}".format(trial + 1, n_trials))
for j, ratio in enumerate(ratio_observed):
result,_ = get_regression_corr_full_connectivity(n_neurons,
A,
X,
ratio,
reg_args)
corr_data[trial, j] = result
corr_mean = np.nanmean(corr_data, axis=0)
corr_std = np.nanstd(corr_data, axis=0)
plt.plot(np.asarray(ratio_observed) * 100, corr_mean)
plt.fill_between(np.asarray(ratio_observed) * 100,
corr_mean - corr_std,
corr_mean + corr_std,
alpha=.2)
plt.xlim([100, 10])
plt.xlabel("Percent of neurons observed")
plt.ylabel("connectivity matrices correlation")
plt.title("Performance of regression as a function of the number of neurons observed");
# -
# ---
# # Summary
#
# *Estimated timing of tutorial: 20 min*
# + cellView="form"
# @title Video 4: Summary
from ipywidgets import widgets
out2 = widgets.Output()
with out2:
from IPython.display import IFrame
class BiliVideo(IFrame):
def __init__(self, id, page=1, width=400, height=300, **kwargs):
self.id=id
src = 'https://player.bilibili.com/player.html?bvid={0}&page={1}'.format(id, page)
super(BiliVideo, self).__init__(src, width, height, **kwargs)
video = BiliVideo(id="BV1bh411o73r", width=854, height=480, fs=1)
print('Video available at https://www.bilibili.com/video/{0}'.format(video.id))
display(video)
out1 = widgets.Output()
with out1:
from IPython.display import YouTubeVideo
video = YouTubeVideo(id="T1uGf1H31wE", width=854, height=480, fs=1, rel=0)
print('Video available at https://youtube.com/watch?v=' + video.id)
display(video)
out = widgets.Tab([out1, out2])
out.set_title(0, 'Youtube')
out.set_title(1, 'Bilibili')
display(out)
# -
# In this tutorial, we explored:
#
# 1. Using regression for estimating causality
# 2. The problem of omitted variable bias, and how it arises in practice
|
tutorials/W3D5_NetworkCausality/student/W3D5_Tutorial3.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
# #!/usr/bin/env python3
from datetime import datetime as dt
import csv
import time
import json
import emoji
import re
import urllib.request
from emoji.unicode_codes import UNICODE_EMOJI
from collections import Counter
import timeit
start_time = time.time()
print("hello")
def extract_emojis(a_string):
"""Finds all emojis in a string and converts them to unicode
Parameters
----------
a_string: str
string that is searched for emojis
Returns
-------
emojiTextArray: list
contains unicode for each emoji found in a_string
"""
emojiTextArray = []
for c in a_string:
if c in emoji.UNICODE_EMOJI:
emojiTextArray.append(UNICODE_EMOJI[c])
return emojiTextArray
def extract_profanity(a_string):
    """Counts occurrences of each profane word in a string
Parameters
----------
a_string: str
string that is searched for profanity
Returns
-------
profanityArray: list
contains counts for each word found in swearWords
"""
#convert string to lower for consistency
a_string = a_string.lower()
#list of profanity to search string for
swearWords = ['fuck', 'shit', 'bitch', 'dick']
profanityArray = []
for swear in swearWords:
count = a_string.count(swear)
profanityArray.append(count)
return profanityArray
def extract_commentData(comments, limit, subreddit):
"""Extracts comment data from a subreddit containing score, number of emojis, individual and sum profanity counts, total emoji count, and total comments searched
Ensures even ratio of comments containing emojis, profanity, and neither. Receives data from PushShift query extract_all_commentData (max 1000 comments).
Parameters
----------
comments: list
list containing a dictionary for each comment with all data returned by PushShift relating to that comment
limit: int
number of comments to return data on
subreddit: string
subreddit the comments originated from
Returns
-------
commentData: list
list of lists containing individual comment score, emoji count, individual profanity count, and total profanity count
lastTime: int
unix time that last comment read was posted
totalEmojis: list
list containing all unicode of all emojis found in all comments (max 1000 at a time) queried
totalProfanity: list
list of total specific profanity counts in all comments (max 1000 at a time) queried
commentCount: int
total comments queried (max 1000). Will likely be larger than limit, unless subreddit has a very high frequency of emojis
"""
# initialize storage
commentData = []
emojiCount = 0
profanityCount = 0
normalCount = 0
commentCount = 0
totalEmojis = []
totalProfanity = []
# searches each comment in comments
for comment in comments:
# ends if comments to be returned equal or surpass the limit
if len(commentData) >= limit: break
individualCommentData = []
# extracts emojis unicode for each comment
emojis = extract_emojis(comment['body'])
# adds all unicode to a comprehensive list
totalEmojis += emojis
        # extracts profanity counts from comment
profanityArray = extract_profanity(comment['body'])
totalProfanity.append(profanityArray)
#appends comment score, total emojis, specific and total profanity counts, and subreddit to individualCommentData
individualCommentData.append(comment['score'])
individualCommentData.append(len(emojis))
individualCommentData.append(sum(profanityArray))
individualCommentData.append(len(comment['body'].split()))
individualCommentData.append(subreddit)
# appends comments with emojis to the overall comment data. Increments emoji comment count
if len(emojis) > 0:
commentData.append(individualCommentData)
emojiCount += 1
        # appends comments with profanity to the overall comment data if there are fewer profanity comments than emoji comments
        # Increments profanity comment count
if len(emojis) == 0 and sum(profanityArray) > 0 and profanityCount < emojiCount:
commentData.append(individualCommentData)
profanityCount += 1
        # appends comments without emojis or profanity to the overall comment data if there are fewer of them than emoji comments
        # Increments normal comment count
if len(emojis) == 0 and normalCount < emojiCount:
commentData.append(individualCommentData)
normalCount += 1
#increments comment count to summarize total number of comments queried to reach limit
commentCount += 1
lastTime = comment['created_utc']
#sums all profanity for all comments queried
totalProfanity = [sum(i) for i in zip(*totalProfanity)]
return commentData, lastTime, totalEmojis, totalProfanity, commentCount
def extract_all_commentData(after, before, subreddit, limit):
"""Calls extract_commentData on comments queried from PushShift. PushShift has a query limit of 1000 comments. extract_all_commentData will
    continue to call extract_commentData until the desired number of comments is reached.
Parameters
----------
after: int
starting time to query comments from
before: int
        ending time to query comments from
subreddit: string
subreddit to pull comments from
limit:int
limit of comments to retrieve and save
Returns
-------
commentData: list
list of lists containing individual comment score, emoji count, individual profanity count, and total profanity count
lastTime: int
unix time that last comment read was posted
totalEmojis: list
list containing all unicode of all emojis found in all comments (no max) queried
totalProfanity: list
list of total specific profanity counts in all comments (no max) queried
commentCount: int
total comments queried. Will likely be larger than limit, unless subreddit has a very high frequency of emojis
"""
#first line of csv
allCommentData = [['score', 'emojiCount', 'profanityCount', 'wordCount', 'subreddit']]
# initialize storage
totalCommentCount = 0
totalEmojis = []
totalProfanity = [0, 0, 0, 0]
#PushShift query. Pulls 1000 comments at a time
while True:
with urllib.request.urlopen("https://api.pushshift.io/reddit/comment/search/?subreddit=" + subreddit + "&size=1000&after=" + str(after) + "&before=" + str(before)) as url:
pushShiftData = json.loads(url.read().decode())
#breaks loops if limit is reached or query is empty
if limit <= 0 or len(pushShiftData["data"]) <= 0: break
commentData, after, emojis, profanity, commentCount= extract_commentData(pushShiftData["data"], limit, subreddit)
limit -= len(commentData)
allCommentData += commentData
totalEmojis += emojis
totalProfanity = [totalProfanity[i]+profanity[i] for i in range(len(profanity))]
totalCommentCount += commentCount
return allCommentData, totalCommentCount, totalEmojis, totalProfanity
def writeData(after, before, subredditArray, limit):
"""Calls extract_all_commentData on subreddits specified and writes comment data to CSV. Total profanity count, total emoji count, and total comments
for all subreddits are written to 1 JSON.
Parameters
----------
after: int
starting time to query comments from. Passed to extract_all_commentData
before: int
        ending time to query comments from. Passed to extract_all_commentData
    subredditArray: list
        list of subreddits to pull comments from. Passed to extract_all_commentData
limit:int
limit of comments to retrieve and save. Passed to extract_all_commentData
"""
jsonData = []
#loops through subreddit array and extracts data
for subreddit in subredditArray:
allCommentData, commentCount, allEmojis, allProfanity = extract_all_commentData(after, before, subreddit, limit)
# writes comment data to csvs named after the subreddit they came from
if len(subreddit) == 0:
subreddit = 'all'
with open('../analysis/data/' + subreddit + '.csv', 'w') as outcsv:
#configure writer to write standard csv file
writer = csv.writer(outcsv, delimiter=',', quotechar='|', quoting=csv.QUOTE_MINIMAL, lineterminator='\n')
for commentData in allCommentData:
#Write item to outcsv
writer.writerow(commentData)
# counts 3 most common emojis in each subreddit
emojis_to_count = (emoji for emoji in allEmojis)
emojiCounter = Counter(emojis_to_count)
emojiTop3 = dict(emojiCounter.most_common(3))
# counts specific profanity in each subreddit
swearWords = ['f---', 's---', 'b----', 'd---']
profanityCounter = []
for index, swear in enumerate(swearWords):
profanityCounter.append((swear, allProfanity[index]))
# adds emoji and profanity counts to dict
dic = dict(profanityCounter, **emojiTop3)
subredditData = dict({'totalComments':commentCount, 'emojiCount': len(allEmojis), 'profanityCount': sum(allProfanity)}, **dic)
subredditData['subreddit'] = subreddit
# appends subreddit name to dict
jsonData.append(subredditData)
print(subreddit + " finished")
# writes emoji and profanity counts and subreddit name to JSON
with open('../analysis/data/countData.json', 'w') as outjson:
json.dump(jsonData , outjson)
#########################################################################################################################
# subreddits = ['funny', 'changemyview', 'dataisbeautiful', 'nba', 'emojipasta']
subreddits = ['dataisbeautiful', 'funny']
writeData('2019-04-27', '2019-06-28', subreddits, 1500)
print("")
print("--- %s seconds ---" % (time.time() - start_time))
print("")
print('done')
# -
|
projects/redditEmojiAnalysis/webScraping/emojiScraping.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ### Kaggle Competition | Machine Learning from the Titanic Disaster
#
# > The sinking of the Titanic is one of the most infamous shipwrecks in history. On April 15, 1912, during her maiden voyage, the Titanic sank after colliding with an iceberg, killing 1502 of the 2224 passengers and crew. This sensational tragedy shocked the international community and led to better safety regulations for ships.
#
# > One of the reasons the shipwreck led to such loss of life was that there were not enough lifeboats for the passengers and crew. Although there was some element of luck involved in surviving the sinking, some groups of people, such as women, children, and the upper class, were more likely to survive than others.
#
# > In this competition, we ask you to complete the analysis of what sorts of people were likely to survive. In particular, we ask you to apply the tools of machine learning to predict which passengers survived the tragedy.
#
# > This introductory Kaggle competition provides an ideal starting point for people without much experience in data science and machine learning.
#
# ### Goal of this Notebook:
# Show a simple example of an analysis of the Titanic disaster in Python using a full complement of PyData utilities. This is aimed at those looking to get into the field, or those already in it who want to see an example of an analysis done in Python.
#
# #### This Notebook will show basic examples of:
# #### Data Handling
# * Importing data with Pandas
# * Cleaning data
# * Exploring data through visualizations with Matplotlib
#
# #### Data Analysis
# * Supervised machine learning techniques:
#     + Logistic regression model
#     + Plotting results
#     + Support Vector Machine (SVM) using 3 kernels
#     + Basic random forest
#     + Plotting results
#
# #### Evaluation of the Analysis
# * K-fold cross-validation to evaluate results locally
# * Output the results from the IPython Notebook to Kaggle
#
#
#
# #### Required Libraries:
# * [NumPy](http://www.numpy.org/)
# * [IPython](http://ipython.org/)
# * [Pandas](http://pandas.pydata.org/)
# * [SciKit-Learn](http://scikit-learn.org/stable/)
# * [SciPy](http://www.scipy.org/)
# * [StatsModels](http://statsmodels.sourceforge.net/)
# * [Patsy](http://patsy.readthedocs.org/en/latest/)
# * [Matplotlib](http://matplotlib.org/)
import matplotlib.pyplot as plt
# %matplotlib inline
#plt.rcParams['font.sans-serif']=['Microsoft YaHei'] # display CJK labels
#plt.rcParams['axes.unicode_minus']=False # these two lines must be set manually
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.nonparametric.kde import KDEUnivariate
from statsmodels.nonparametric import smoothers_lowess
from pandas import Series, DataFrame
from patsy import dmatrices
from sklearn import datasets, svm
from KaggleAux import predict as ka # see github.com/agconti/kaggleaux for more details
# ### Data Handling
# #### Let's read our data in using pandas:
df = pd.read_csv("data/train.csv")
# Show an overview of our data:
df
# ### Let's take a look:
#
# Above is a summary of our data contained in a Pandas `DataFrame`. Think of a `DataFrame` as a super-charged version of an Excel worksheet inside your Python workflow. As you can see, the summary holds quite a bit of information. First, it lets us know we have 891 observations, or passengers, to analyze here:
#
# Next, it shows us all of the columns in the `DataFrame`. Each column tells us something about each of our observations, like their `name`, `sex` or `age`. These columns are called the features of our dataset.
#
# After each feature it lets us know how many values it contains. While most of our features have complete data on every observation, like the `survived` feature here:
#
#     survived 891 non-null values
#
# some are missing information, like the `age` feature:
#
#     age 714 non-null values
#
# These missing values are represented as `NaN`s.
#
# ### Take care of missing values:
# The features `ticket` and `cabin` have many missing values and so can't add much value to our analysis. To handle this, we will drop them from the DataFrame to preserve the integrity of our dataset.
#
# To do that we'll use this line of code to drop the features entirely:
#
#     df = df.drop(['ticket','cabin'], axis=1)
#
#
# while this line of code removes the `NaN` values from every remaining column/feature:
#
#     df = df.dropna()
#
# Now we have a clean and tidy dataset that is ready for analysis. Because `.dropna()` removes an observation from our data even if it only has one `NaN` in one of its features, it would have removed most of our dataset if we had not dropped the `ticket` and `cabin` features first.
df = df.drop(['Ticket','Cabin'], axis=1)
# Remove NaN values
df = df.dropna()
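# A quick illustration of why the order matters. On a toy frame (made-up values) with one sparse column, calling `.dropna()` alone throws away nearly everything, while dropping the sparse column first keeps most rows:

```python
import pandas as pd

# Toy frame mimicking the Titanic columns: 'Cabin' is mostly missing
toy = pd.DataFrame({
    'Survived': [0, 1, 1],
    'Age': [22.0, None, 26.0],
    'Cabin': [None, None, 'C85'],
})

# dropna() alone keeps only the single fully populated row
print(len(toy.dropna()))                          # 1

# dropping the sparse column first preserves two rows
print(len(toy.drop(['Cabin'], axis=1).dropna()))  # 2
```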
# ### Let's view our data graphically:
# Approach: use plt.subplot2grid to lay out several axes on one figure, draw on them one at a time, and display everything together at the end.
# Initialize a figure, specifying its size (width, height) and dpi (resolution)
fig = plt.figure(figsize=(18,6),dpi=80)
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Several plots share one figure; the first pair of arguments is the grid shape (rows, cols), the second pair is the grid cell this axes occupies
ax1 = plt.subplot2grid((2,3),(0,0))
# Bar chart of those who survived vs those who did not; alpha is the colour saturation. The plot lands on ax1: when no axes is specified, pandas draws on the most recently created one
df.Survived.value_counts().plot(kind='bar', alpha=0.55)
#-ax1.bar(df.Survived.value_counts().index,df.Survived.value_counts().values,alpha=0.5)
# Set the x-axis range
ax1.set_xlim(-1, 2)
plt.ylabel("Count")
# Give our graph a title
plt.title('Distribution of Survival, (1 = Survived)')
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ax2 = plt.subplot2grid((2,3),(1,2))
plt.scatter(df.Survived, df.Age, alpha=0.2)
#-ax2.scatter(df.Survived, df.Age, alpha=0.2)
plt.ylabel("Age")
# Set the grid style
plt.grid(b=True, which='major', axis='y')
#-ax2.grid(color='g', linestyle='--', linewidth=0.1,alpha=0.3)
plt.title("Survival by Age, (1 = Survived)")
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ax3 = plt.subplot2grid((2,3),(0,1))
# barh is a horizontal bar chart
df.Pclass.value_counts().plot(kind="barh", alpha=0.55)
# Set the y-axis range of the horizontal bar chart, i.e. the range of classes
ax3.set_ylim(-1, len(df.Pclass.value_counts()))
plt.title("Class Distribution")
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ax4 = plt.subplot2grid((2,3),(1,0), colspan=2)
# Plot a kernel density estimate of the age of passengers in each class
df.Age[df.Pclass == 1].plot(kind='kde')
df.Age[df.Pclass == 2].plot(kind='kde')
df.Age[df.Pclass == 3].plot(kind='kde')
# Two ways to set the axis label
plt.xlabel('Age')
#-ax4.set_xlabel("Age")
# Two ways to set the subplot title
plt.title("Age Distribution within Classes")
#-ax4.set_title("Age Distribution within Classes")
# sets our legend for our graph.
plt.legend(('1st Class', '2nd Class', '3rd Class'),loc='best')
#-ax4.legend(('1st Class', '2nd Class', '3rd Class'),loc='best')
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ax5 = plt.subplot2grid((2,3),(0,2))
df.Embarked.value_counts().plot(kind='bar', alpha=0.55)
ax5.set_xlim(-1, len(df.Embarked.value_counts()))
# specifies the parameters of our graphs
plt.title("Passengers per Boarding Location")
# Set the spacing between subplots
plt.subplots_adjust(hspace=0.4, wspace=0.2)
# ### Exploratory Visualization:
#
# The point of this competition is to predict if an individual will survive based on the features in the data, like:
#
# * Traveling class (pclass in the data)
# * Sex
# * Age
# * Fare price
#
# Let's see if we can gain a better understanding of who survived and died.
#
#
# First, let's plot a bar graph of those who survived vs those who did not.
# Concise way to initialize a fig object and an ax object together
fig, ax = plt.subplots()
df.Survived.value_counts().plot(kind='barh', color="blue", alpha=.65)
ax.set_ylim(-1, len(df.Survived.value_counts()))
plt.title("Distribution of Survival, (1 = Survived)")
# ### Now let's tease more structure out of the data.
# ### Let's break the previous graph down by gender
# +
# This analysis had a colour problem: a chart like this should only have two colours, but three appeared here, and they were wrong at that.
fig = plt.figure(figsize=(18,6))
#create a plot of two subsets, male and female, of the survived variable.
#After we do that we call value_counts() so it can be easily plotted as a bar graph.
#'barh' is just a horizontal bar graph
df_male = df.Survived[df.Sex == 'male'].value_counts().sort_index()
df_female = df.Survived[df.Sex == 'female'].value_counts().sort_index()
ax1 = fig.add_subplot(121)
df_male.plot(kind='barh',label='Male', alpha=0.55)
df_female.plot(kind='barh', color='#FA2379',label='Female', alpha=0.55)
plt.title("Who Survived? with respect to Gender (raw value counts)"); plt.legend(loc='best')
ax1.set_ylim(-1, 2)
ax1.set_xlim(0, 400)
#adjust graph to display the proportions of survival by gender
ax2 = fig.add_subplot(122)
(df_male/float(df_male.sum())).plot(kind='barh',label='Male', alpha=0.55)
(df_female/float(df_female.sum())).plot(kind='barh', color='#FA2379',label='Female', alpha=0.55)
plt.title("Who Survived proportionally? with respect to Gender"); plt.legend(loc='best')
ax2.set_ylim(-1, 2)
# -
# Clearly, even though more males both died and survived in raw value counts, proportionally more females survived (25%) than males (20%).
#
# #### Great! But let's go a bit deeper:
#
# Can we find more structure using Pclass? Let's break it down by gender and the class passengers were riding in.
fig = plt.figure(figsize=(18,4), dpi=100)
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Here we create additional subsets within the gender subsets
ax1=fig.add_subplot(141)
female_highclass = df.Survived[df.Sex == 'female'][df.Pclass != 3].value_counts()
female_highclass.sort_index(ascending=True).plot(kind='bar', label='Female, High Class', color='#FA2479', alpha=0.65)
# Set the x-axis tick labels
ax1.set_xticklabels(["Died", "Survived"], rotation=0)
ax1.set_xlim(-1, len(female_highclass))
plt.title("Who Survived? with respect to Gender and Class"); plt.legend(loc='best')
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
# Two subplots belonging to the same parent figure can share a y-axis
ax2=fig.add_subplot(142, sharey=ax1)
female_lowclass = df.Survived[df.Sex == 'female'][df.Pclass == 3].value_counts()
female_lowclass.plot(kind='bar', label='Female, Low Class', color='pink', alpha=0.65)
ax2.set_xticklabels(["Died","Survived"], rotation=0)
ax2.set_xlim(-1, len(female_lowclass))
plt.legend(loc='best')
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ax3=fig.add_subplot(143, sharey=ax1)
male_lowclass = df.Survived[df.Sex == 'male'][df.Pclass == 3].value_counts()
male_lowclass.plot(kind='bar', label='Male, Low Class',color='lightblue', alpha=0.65)
ax3.set_xticklabels(["Died","Survived"], rotation=0)
ax3.set_xlim(-1, len(male_lowclass))
plt.legend(loc='best')
# #+++++++++++++++++++++++++++++++++++++++++++++++++++++++++
ax4=fig.add_subplot(144, sharey=ax1)
male_highclass = df.Survived[df.Sex == 'male'][df.Pclass != 3].value_counts()
male_highclass.plot(kind='bar', label='Male, High Class', alpha=0.65, color='steelblue')
ax4.set_xticklabels(["Died","Survived"], rotation=0)
ax4.set_xlim(-1, len(male_highclass))
plt.legend(loc='best')
# Awesome! Now we have a lot more information on who survived and died in the tragedy. With this deeper understanding, we are better equipped to create more insightful models. This is a typical process in interactive data analysis: you start small, understand the most basic relationships, and slowly increase the complexity of your analysis as you learn more and more about the data you are working with. Here is how the process progresses:
# +
fig = plt.figure(figsize=(18,12), dpi=100)
a = 0.65
# Step 1
ax1 = fig.add_subplot(341)
df.Survived.value_counts().plot(kind='bar', color="blue", alpha=a)
ax1.set_xlim(-1, len(df.Survived.value_counts()))
plt.title("Step. 1")
# Step 2
ax2 = fig.add_subplot(345)
df.Survived[df.Sex == 'male'].value_counts().plot(kind='bar',label='Male')
df.Survived[df.Sex == 'female'].value_counts().plot(kind='bar', color='#FA2379',label='Female')
ax2.set_xlim(-1, 2)
plt.title("Step. 2 \nWho Survived? with respect to Gender."); plt.legend(loc='best')
ax3 = fig.add_subplot(346)
(df.Survived[df.Sex == 'male'].value_counts()/float(df.Sex[df.Sex == 'male'].size)).plot(kind='bar',label='Male')
(df.Survived[df.Sex == 'female'].value_counts()/float(df.Sex[df.Sex == 'female'].size)).plot(kind='bar', color='#FA2379',label='Female')
ax3.set_xlim(-1,2)
plt.title("Who Survied proportionally?"); plt.legend(loc='best')
# Step 3
ax4 = fig.add_subplot(349)
female_highclass = df.Survived[df.Sex == 'female'][df.Pclass != 3].value_counts()
female_highclass.plot(kind='bar', label='female highclass', color='#FA2479', alpha=a)
ax4.set_xticklabels(["Survived", "Died"], rotation=0)
ax4.set_xlim(-1, len(female_highclass))
plt.title("Who Survived? with respect to Gender and Class"); plt.legend(loc='best')
ax5 = fig.add_subplot(3,4,10, sharey=ax1)
female_lowclass = df.Survived[df.Sex == 'female'][df.Pclass == 3].value_counts()
female_lowclass.plot(kind='bar', label='female, low class', color='pink', alpha=a)
ax5.set_xticklabels(["Died","Survived"], rotation=0)
ax5.set_xlim(-1, len(female_lowclass))
plt.legend(loc='best')
ax6 = fig.add_subplot(3,4,11, sharey=ax1)
male_lowclass = df.Survived[df.Sex == 'male'][df.Pclass == 3].value_counts()
male_lowclass.plot(kind='bar', label='male, low class',color='lightblue', alpha=a)
ax6.set_xticklabels(["Died","Survived"], rotation=0)
ax6.set_xlim(-1, len(male_lowclass))
plt.legend(loc='best')
ax7 = fig.add_subplot(3,4,12, sharey=ax1)
male_highclass = df.Survived[df.Sex == 'male'][df.Pclass != 3].value_counts()
male_highclass.plot(kind='bar', label='male highclass', alpha=a, color='steelblue')
ax7.set_xticklabels(["Died","Survived"], rotation=0)
ax7.set_xlim(-1, len(male_highclass))
plt.legend(loc='best')
# -
# I've done my best to make the plotting code readable and intuitive, but if you want more detail on how to get started with plotting in matplotlib, look up a tutorial on your own.
|
Titanic-zh.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: py37_football
# language: python
# name: py37_football
# ---
import tyrone_mings as tm
player_page = "https://www.transfermarkt.com/tyrone-mings/profil/spieler/253677"
tm.tm_pull(player_page, transfer_history = True, output = 'pandas')
player_page = "https://www.transfermarkt.com/tyrone-mings/marktwertverlauf/spieler/253677"
tm.tm_pull(player_page, market_value_history = True, output = 'pandas')
|
Scrapers/Tyrone Mings.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # ML Pipeline Preparation
# Follow the instructions below to help you create your ML pipeline.
# ### 1. Import libraries and load data from database.
# - Import Python libraries
# - Load dataset from database with [`read_sql_table`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_sql_table.html)
# - Define feature and target variables X and Y
# import libraries
import pandas as pd
import numpy as np
import sqlite3
from sqlalchemy import create_engine
import nltk
nltk.download(['punkt', 'wordnet'])
nltk.download('stopwords')
import re
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer
# load data from database
engine = create_engine('sqlite:///DisasterResponse.db')
df = pd.read_sql_table('DisasterResponse', engine)
#df.head()
X = df['message']
Y = df.drop(['id', 'message', 'original', 'genre'], axis=1)
df.groupby(df['related']).count()
# ### 2. Write a tokenization function to process your text data
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.multioutput import MultiOutputClassifier
from sklearn.metrics import precision_score, recall_score, f1_score
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
def tokenize(text):
# Define url pattern
url_re = 'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\), ]|(?:%[0-9a-fA-F][0-9a-fA-F]))+'
# Detect and replace urls
detected_urls = re.findall(url_re, text)
for url in detected_urls:
text = text.replace(url, "urlplaceholder")
# tokenize sentences
tokens = word_tokenize(text)
lemmatizer = WordNetLemmatizer()
# save cleaned tokens
clean_tokens = [lemmatizer.lemmatize(tok).lower().strip() for tok in tokens]
# remove stopwords
STOPWORDS = list(set(stopwords.words('english')))
clean_tokens = [token for token in clean_tokens if token not in STOPWORDS]
return clean_tokens
a = tokenize("It is a far, far better thing that I do, than I have ever done; it is a far, far better rest I go to than I have ever known.")
b = CountVectorizer()  # the tokens go to fit_transform, not the constructor
c = b.fit_transform(a)
c
# ### 3. Build a machine learning pipeline
# This machine pipeline should take in the `message` column as input and output classification results on the other 36 categories in the dataset. You may find the [MultiOutputClassifier](http://scikit-learn.org/stable/modules/generated/sklearn.multioutput.MultiOutputClassifier.html) helpful for predicting multiple target variables.
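# To build intuition for what `MultiOutputClassifier` does, it simply fits one independent copy of the base estimator per target column. Below is a minimal pure-Python sketch; `TinyMultiOutput` and `MajorityClass` are illustrative stand-ins, not the sklearn API:

```python
class MajorityClass:
    """Trivial base learner: always predicts the most common label it saw."""
    def fit(self, X, y):
        self.label_ = max(set(y), key=y.count)
        return self
    def predict(self, X):
        return [self.label_ for _ in X]

class TinyMultiOutput:
    """Fits one independent copy of the base learner per target column."""
    def __init__(self, make_clf):
        self.make_clf = make_clf
    def fit(self, X, Y):
        cols = list(zip(*Y))  # transpose: one sequence of labels per column
        self.clfs_ = [self.make_clf().fit(X, list(col)) for col in cols]
        return self
    def predict(self, X):
        return [list(row) for row in zip(*(c.predict(X) for c in self.clfs_))]

X = [[0], [1], [2]]
Y = [[1, 0], [1, 0], [0, 0]]  # two target columns per sample
model = TinyMultiOutput(MajorityClass).fit(X, Y)
print(model.predict([[3]]))  # [[1, 0]]
```

The real pipeline does the same per-column fitting, just with a full `RandomForestClassifier` as the base learner.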
def build_pipeline():
# build NLP pipeline - count words, tf-idf, multiple output classifier
pipeline = Pipeline([
('vec', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(RandomForestClassifier(n_estimators = 100, n_jobs = -1)))
])
return pipeline
# ### 4. Train pipeline
# - Split data into train and test sets
# - Train pipeline
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline = build_pipeline()
pipeline.fit(X_train, y_train)
# ### 5. Test your model
# Report the f1 score, precision and recall for each output category of the dataset. You can do this by iterating through the columns and calling sklearn's `classification_report` on each.
# +
def report(pipeline, X_test, Y_test):
# predict on the X_test
Y_pred = pipeline.predict(X_test)
# build classification report on every column
performances = []
for i in range(len(Y_test.columns)):
performances.append([f1_score(Y_test.iloc[:, i].values, Y_pred[:, i], average='micro'),
precision_score(Y_test.iloc[:, i].values, Y_pred[:, i], average='micro'),
recall_score(Y_test.iloc[:, i].values, Y_pred[:, i], average='micro')])
# build dataframe
performances = pd.DataFrame(performances, columns=['f1 score', 'precision', 'recall'],
index = Y_test.columns)
return performances
report(pipeline, X_test, y_test)
# -
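# The scores collected by `report` can also be computed by hand from confusion counts; a toy sketch for a single binary output column (made-up labels):

```python
# Made-up predictions for one binary output column
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

precision = tp / (tp + fp)  # 2 / 3
recall = tp / (tp + fn)     # 2 / 3
f1 = 2 * precision * recall / (precision + recall)
print(round(precision, 3), round(recall, 3), round(f1, 3))
```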
# ### 6. Improve your model
# Use grid search to find better parameters.
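# Conceptually, grid search just evaluates every combination of the parameter grid and keeps the best one. A stdlib sketch, with a made-up scoring function standing in for the cross-validated score:

```python
from itertools import product

param_grid = {'n_estimators': [10, 50, 100], 'max_depth': [2, 4]}

def score(params):
    # hypothetical stand-in for a cross-validated score
    return -abs(params['n_estimators'] - 50) - params['max_depth']

# every combination of the grid, as a list of parameter dicts
combos = [dict(zip(param_grid, values)) for values in product(*param_grid.values())]
best = max(combos, key=score)
print(best)  # {'n_estimators': 50, 'max_depth': 2}
```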
X_train, X_test, y_train, y_test = train_test_split(X, Y)
pipeline = build_pipeline()
# +
from sklearn.model_selection import GridSearchCV
parameters = {
'clf__estimator__n_estimators':[10,50,100]
}
cv = GridSearchCV(pipeline, param_grid=parameters, n_jobs= -1)
cv.fit(X_train, y_train)
# -
# ### 7. Test your model
# Show the accuracy, precision, and recall of the tuned model.
#
# Since this project focuses on code quality, process, and pipelines, there is no minimum performance metric needed to pass. However, make sure to fine tune your models for accuracy, precision and recall to make your project stand out - especially for your portfolio!
cv.best_params_
report(cv, X_test, y_test)
# ### 8. Try improving your model further. Here are a few ideas:
# * try other machine learning algorithms
# * add other features besides the TF-IDF
pipeline_improved = Pipeline([
('vect', CountVectorizer(tokenizer=tokenize)),
('tfidf', TfidfTransformer()),
('clf', MultiOutputClassifier(AdaBoostClassifier(n_estimators = 100)))
])
pipeline_improved.fit(X_train, y_train)
y_pred_improved = pipeline_improved.predict(X_test)
report(pipeline_improved, X_test, y_test)
# ### 9. Export your model as a pickle file
# +
import pickle
pickle.dump(cv, open('classifier.pkl', 'wb'))
#pickle.dump(pipeline_improved, open('adaboost_model.pkl', 'wb'))
# -
# ### 10. Use this notebook to complete `train.py`
# Use the template file attached in the Resources folder to write a script that runs the steps above to create a database and export a model based on a new dataset specified by the user.
|
ML Pipeline Preparation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <a name = 'top'></a>
# # Hash the Trash notebook
#
# Here everything should run smoothly with just this notebook, running Ganache (with quickstart) and setting the PORT on which Ganache is running (the contract's abi and bytecode have been compiled from Remix and imported here to run everything).
#
# To see the real time version of the blockchain event log catcher:
# 1. Deploy the contract from this notebook [from cell 1 through 6]
# 2. Launch *events_real_time.py* from the terminal and just continue through the notebook (basic prints will appear in the terminal when trashbags lifecycle events get called and they will be saved in events_log.csv)
#
# The various sections of the notebook are:
# * [Functions](#func)
# * [Deploying the contract](#deploy)
# * [Operating with contract's functions and eth accounts](#functions)
# * [TARI](#tari)
# * [Trash lifecycle](#cycle)
# * [Refunds](#refund)
# * [Picking up logs of certain events from the chain](#logs)
# +
# #!pip install -r requirements.txt # if not already run
# +
# imports
import random, time, json
import pandas as pd
from web3 import Web3
from contracts.abi_bytecode import abi, bytecode # saved externally as .py
# to webscrape ETH exchange rate
from bs4 import BeautifulSoup
import requests
from forex_python.converter import CurrencyRates
# -
# Connecting to ganache through opened up PORT
ganache_url = 'HTTP://127.0.0.1:7545' #change here if different
web3 = Web3(Web3.HTTPProvider(ganache_url))
web3.isConnected()
# Compiled abi and bytecode of trash.sol which inherits from agents.sol (plus Ownable and Safemath)
abiRemix = json.loads(abi) # turned into a string after copy-to-clipboard from Remix
bytecodeRemix = bytecode['object'] # it is a dictionary (as copied to clipboard from Remix); we use the object for web3 deployment
# For the purposes of the simulation, we created an Excel file containing relevant information about the actors involved in the trash chain. In particular, the Excel file *example_data.xlsx* is composed of four different sheets:
# * **agents_data**: database containing information about the municipality, citizens, trucks and stations at the beginning of the year. In particular, we assumed that the municipality of Codogno has five citizens, two trucks and two stations.
# * **bags_data**: simulation of trash bags generated by citizens during the year. In a real life application of this project, such data would be collected by sensors placed on trucks.
# * **gps_data**: gps coordinates of the trucks when they stop to drop the garbage collected throughout the day
# * **stations_data**: data collected from the station
# <a name ='func'></a>
# ### Functions used in the notebook
#
# These functions have been used throughout notebook and have been grouped here for easier reading.
# +
def deploy_contract(deployer, _abi, _bytecode):
"""
Deploy the contract using Python and web3.py without relying on Remix (aside from getting the compiled abi and bytecode)
With this function, we are able to deploy the contract just from the compiled abi and bytecode
obtained from Remix. It would be possible to compile the contract from Python as well
using *solc* library (Solidity compiler), but it is much more complicated than
copying the compiled abi and bytecode from Remix.
Parameters
-------------
deployer: eth account
abi, bytecode : compiled contract things
Returns
-------------
contract instance
"""
contract = web3.eth.contract(abi=_abi, bytecode=_bytecode) # compiled contract
tx_hash = contract.constructor().transact({'from': deployer}) # contract constructor call (i.e. deploy)
tx_receipt = web3.eth.waitForTransactionReceipt(tx_hash) # Get receipt when deployed to blockchain
print(f"Deployed contract at address:\n{tx_receipt.contractAddress}") # contract address
# simple yet effective method to pass the contract address to the real time filtering
with open('data/ctr_addr.txt', 'w') as f:
f.write(tx_receipt.contractAddress)
deployed_ctr = web3.eth.contract(address = tx_receipt.contractAddress, abi = _abi) # contract
return deployed_ctr
def create(municip_addr ,data):
'''
Call contract functions to assign addresses (and their characteristics) roles and populate mappings
Parameters
----------------------
municip_addr : eth address of the municipality
data : DataFrame with agents' roles and characteristics; the citizen/truck/station tuples are built from it
Returns
----------------------
nothing on python, the chain grows as these transactions are registered
'''
# Creating list of touples as correct inputs for the contract structs
# Create CITIZENS - (address payable _address, string memory _name, uint _family, uint _house, uint256 _w)
c = [[r.address, " ".join([r.name, r.surname]), int(r.family), int(r.mq), int(r.weight)]
for r in data.itertuples() if r.role == 'citizen']
# Create TRUCKS - (address _address, bool _recycle)
t = [[r.address, r.recycle] for r in data.itertuples() if r.role == 'truck']
# Create STATIONS - (address _address, bool _recycle, int _lat, int _long)
s = [[r.address, r.recycle, int(r.lat), int(r.long)] for r in data.itertuples() if r.role == 'station']
from_dict = {'from': municip_addr}
for i in range(len(c)):
contract.functions.createCitizen(c[i][0], c[i][1], c[i][2], c[i][3], c[i][4]).transact(from_dict)
for j in range(len(t)):
contract.functions.createTruck(t[j][0], t[j][1]).transact(from_dict)
for k in range(len(s)):
contract.functions.createStation(s[k][0], s[k][1], s[k][2], s[k][3]).transact(from_dict)
print('Roles assigned')
# Some functions to show structs statuses as we change them
def get_citizen_info(address):
"""
Call to the contract to pretty print informations on citizen
"""
info = contract.functions.citizens(address).call()
print(f"""
Address eth : {address}
Name : {info[0]}
Family members : {info[1]}
House sq.meters : {info[2]}
Assigned weight/liters : {info[3]}
TARI amount : {info[4]}
Recyclable Waste Tot : {info[5]}
Non Recyclable Waste Tot : {info[6]}
Paid TARI : {info[7]}
Active account : {info[8]}
""")
def get_truck_info(address):
"""Show pretty info on trucks"""
info = contract.functions.trucks(address).call()
print(f"""
Address eth : {address}
Truck number : {info[0]}
Weight transported : {info[1]}
Recyclable Truck : {info[2]}
Active Truck : {info[3]}
""")
def get_station_info(address):
"""Show pretty info for stations"""
info = contract.functions.stations(address).call()
print(f"""
Address eth : {address}
Station nr. : {info[0]}
Weight : {info[1]}
latitude : {info[2]}
longitude : {info[3]}
Recyclable Plant : {info[4]}
Active Plant : {info[5]}
""")
def get_past_logs(filter_list):
"""
Iterates over every filter built and extracts logs from the block specified in the filter to the 'latest'
(function called in the last section)
inputs
---------------
filter_list : filters created
returns
---------------
list containing every event attribute generated from the 'emit' on the contract
"""
events = []
for event_filter in filter_list:
for e in event_filter.get_all_entries(): # get_new_entry() to check only last block
# e is a nested dictionary, like this we bring all elements on the same level
args = dict(e['args'])
args.update(dict(e))
del args['args'] # brought one level above, delete the nest
# args.pop('args', None) # could delete like this too
events.append(args)
return events
# -
# <a name = 'deploy'></a>
#
# ### Deploying the contract
#
# At the beginning of each year, the Municipality (in our example, the Municipality of Codogno) has to deploy the smart contract and invoke the function *setBeginningYear* to set the variable *start* equal to 1st January. This will come in useful for the Municipality when performing some payment checks throughout the year. However, for the purposes of this simulation, we have decided not to include all the time constraints set in the smart contracts. In this way, we can call all the functions without running into any problems. For example, we don't have to wait almost a year before being able to compute and pay the payouts to the citizens.
# Simple database example. The column with ganache accounts is added to later interact with them
data = pd.read_excel('data/example_data.xlsx', sheet_name='agents_data', engine='openpyxl')
data['address'] = web3.eth.accounts
data
# +
# Deploy contract (the municipality is the one deploying in our case). You can check the blocks from Ganache to see
# if it has worked.
municipality = data[data.role == 'municipality']['address'].item()
contract = deploy_contract(deployer = municipality, _abi = abiRemix, _bytecode = bytecodeRemix)
# -
# ##### !! You can now run `events_real_time.py` !!
# <a name = 'functions'></a>
# ## Interacting with the functions
#
# We first perform a simple check to verify whether the actual owner of the contract is the municipality. We then create the agents with the functions defined in the smart contract.
#
# **Reminder:** function(...).**transact**(...) is used for calls to functions that modify the chain, otherwise use function(...).**call**(...) (e.g. for view functions)
# +
# simple check for owner of the contract
owner = contract.functions.owner().call() # get owner from contract function
print(f"owner: {owner}")
print(f"Is it the municipality? {owner == data[data.role == 'municipality']['address'].item()}")
# to remember inputs and all functions
contract.all_functions()
# +
# Populating mappings and checking amount of entities
create(municipality, data)
print(f"n° Citizens: {contract.functions.numberC().call()}, n° Trucks : {contract.functions.numberT().call()}, n° Stations : {contract.functions.numberS().call()}")
# +
# Using exemplar citizen and other roles to do checks along the notebook
dutiful_citizen = data[data.role == 'citizen']['address'].reset_index(drop = True)[0]
get_citizen_info(dutiful_citizen)
ex_truck = data[data.role == 'truck']['address'].reset_index(drop = True)[0]
get_truck_info(ex_truck)
ex_station = data[data.role == 'station']['address'].reset_index(drop = True)[0]
get_station_info(ex_station)
# -
# <a name = 'tari'></a>
# ### Compute TARI
#
# The TARI is here computed by summing two parts:
# 1. The first part is given by the product of the square meters of the property and a fixed fee which depends on the number of people in the household. If there are fewer than four people in the household, the fee *deposit_mq_less4* defined in the smart contract applies, whereas if the household has more than four members, the fee *deposit_mq_more4* applies.
# 2. The second part is variable and depends on the total amount of waste produced by the household the year before. To be more specific, the total weight of waste produced by the household the year before is multiplied by a constant amount of money.
# At the beginning of each year, the Municipality has to call the *TariAmount* function to compute how much TARI each citizen has to pay. The Municipality is then in charge of informing each citizen (e.g. through a text message or an app notification).
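# In plain Python the two-part computation looks roughly like this. The fee constants and the household-size threshold below are illustrative guesses; the real values live in the smart contract:

```python
# Illustrative constants -- NOT the values from the smart contract
DEPOSIT_MQ_LESS4 = 2   # wei per square meter, smaller households
DEPOSIT_MQ_MORE4 = 3   # wei per square meter, larger households
WEI_PER_KG = 5         # wei charged per kg of last year's waste

def tari_amount(family, mq, last_year_weight):
    # fixed part: square meters times the household-size-dependent fee
    fixed = mq * (DEPOSIT_MQ_LESS4 if family < 4 else DEPOSIT_MQ_MORE4)
    # variable part: last year's total waste weight times a constant amount
    variable = last_year_weight * WEI_PER_KG
    return fixed + variable

print(tari_amount(family=2, mq=80, last_year_weight=100))  # 160 + 500 = 660
```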
# +
# Calculate amount of TARI for each citizen (and looping for the others)
for addr in data[data.role == 'citizen']['address']:
contract.functions.TariAmount(addr).transact({'from' : municipality})
get_citizen_info(dutiful_citizen)
# -
# The TARI amount has been calculated and assigned. In particular, the TARI is expressed in wei for each citizen. We can thus now compute its equivalent in euros and save this information.
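# The conversion itself is just a change of units: divide by 10**18 wei per ETH, then multiply by the EUR/ETH exchange rate (the rate below is a made-up example):

```python
WEI_PER_ETH = 10 ** 18

def wei_to_eur(wei, eur_per_eth):
    # wei -> ETH -> EUR
    return wei / WEI_PER_ETH * eur_per_eth

# e.g. 3 * 10**15 wei at an assumed rate of 2000 EUR/ETH
print(wei_to_eur(3 * 10 ** 15, 2000))  # 6.0
```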
# +
# Webscrape currency exchange rates to convert computed TARI in wei to euro
# Save the TARI amount for each citizen
tari_list = [contract.functions.citizens(addr).call()[4]*(10**(-18))
for addr in data[data.role == 'citizen']['address']]
data.loc[data.role == 'citizen', 'TARI_eth'] = tari_list
# Convert to EUR
cmc = requests.get('https://coinmarketcap.com/currencies/ethereum/markets/')
soup = BeautifulSoup(cmc.content, 'html.parser')
data_coinmkt = soup.find('script', type="application/ld+json")
data_coinmkt = json.loads(data_coinmkt.contents[0])
usd_eth = float(data_coinmkt['currentExchangeRate']['price'])  # the scraped price may arrive as a string
c = CurrencyRates()
usd_eur_rate = c.get_rate('USD', 'EUR')
eur_eth = usd_eth * usd_eur_rate
data.loc[data.role == 'citizen', 'TARI_eur'] = data.loc[data.role == 'citizen', 'TARI_eth']*eur_eth
data[['name', 'surname', 'family', 'mq', 'weight', 'role', 'TARI_eth', 'TARI_eur']]
# -
# Each citizen pays their due amount
for addr in data[data.role == 'citizen']['address']:
info = contract.functions.citizens(addr).call() # need the amount from the contract
print(f"Citizen {info[0]} is paying {info[4]} wei --> {round(info[4]*10**(-18)*eur_eth, 2)} Euro")
# payable function, the amount is passed in the dictionary of .transact()
contract.functions.payTari().transact({'from': addr, 'value' : info[4]})
# At this point, all the citizens in the simulation have paid the TARI. To check whether this is actually true, let us print the information stored in the struct of a citizen (i.e. dutiful_citizen). As you can see, the attribute "Paid TARI" is now equal to true, signalling that the citizen Francesca has respected the law and paid the due amount of TARI.
get_citizen_info(dutiful_citizen)
# All the citizens should pay the TARI by the end of January. It would also be possible to add some functionalities to deal with the case of non-compliance with this rule. For example, if a citizen didn't pay the TARI on time, the Municipality could solicit the payment through, for example, a text message or an app notification, and maybe also decrease the reimbursement by a certain amount at the end of the year.
# <a name = 'cycle'></a>
# ## Working with trashbags lifecycles
#
# At this point of the simulation, we use as input the data collected in the Excel sheet *bags_data*, which contains information about the single trash bags generated by the citizens. The trucks will pick such trash bags up and drop them at the appropriate disposal station. In a real life application of this project, the data from the Excel sheet would be collected by sensors placed on trucks.
bags = pd.read_excel('data/example_data.xlsx', sheet_name='bags_data', engine='openpyxl')
bags.head(6)
# ### Pick up trash
#
# The function for the pick up of trash bags is called by the truck. Each truck is equipped with some sensors, which scan and store the information printed on each bin, where citizens must put their trash bags (i.e. the Ethereum address of the citizen who has generated a specific trash bag), and with a scale, which is used to determine the weight of each trash bag.
# +
# The right truck will pick up the various trashbags
for i, name, sur, w, recyclable in bags.itertuples():
# get address of generator
generator_addr = data[data.name == name]['address'].item()
# get correct truck address via subsetting
truck_addr = data[(data['name'] == 'Truck') & (data['recycle'] == recyclable)]['address'].item()
# 'pick' function in contract is called by the correct truck
contract.functions.pick(generator_addr, w, random.randint(0, 10**10)).transact({'from' : truck_addr})
get_citizen_info(dutiful_citizen)
get_truck_info(ex_truck)
# -
# As you can see, waste counters increased for both the citizen (have a look at the attributes *Recyclable Waste Tot*
# and *Non Recyclable Waste Tot*) and for the truck (have a look at the attribute *Weight transported*)
# ### Drop bags at station
#
# The function for the dumping of trash bags at the appropriate disposal station is still called by the trucks. Once a truck has dropped its content at the station, it will result empty, while the station will instead increase its total counter with the received weight.
# The **drop** function requires as input the GPS coordinates of the truck when it gets to the station. For the purposes of this simulation, we are going to extract this information from the Excel sheet *gps_data*.
gps = pd.read_excel('data/example_data.xlsx', sheet_name='gps_data', engine='openpyxl')
gps
get_station_info(ex_station)
# +
# get station name and coords from the gps
for i, name, sur, lat, long in gps.itertuples():
#get station address
station_addr = data[(data.name == name) & (data.surname == sur)]['address'].item()
#get_station_info(station_addr)
# need to get the type of station (recycling/not recycling) to pair the truck
s_info = contract.functions.stations(station_addr).call() # s_info[4] is the type
# pairing truck based on recyclable or not and calling the 'drop' function
correct_truck = data[(data.name == 'Truck') & (data.recycle == s_info[4])]['address'].item()
contract.functions.drop(station_addr, int(lat), int(long)).transact({'from' : correct_truck})
get_station_info(ex_station)
get_truck_info(ex_truck)
# -
# The station's total weight has now increased by an amount equal to the weight of trash carried by the truck. On the other hand, the truck is now empty (the attribute "Weight transported" is now equal to zero).
# ### Received
#
# The **received** function can only be called by the station, and its purpose is to verify that the total amount of trash a station declares to have received so far is consistent with the amount of trash actually dumped at the station by trucks over time. When calling this function, the station has to specify the Ethereum address of the truck that has just arrived at the station (read by sensors placed at the station), the type of waste the station disposes of, and the total weight of trash dropped at the station so far (measured with the help of some scales).
station_sense = pd.read_excel('data/example_data.xlsx', sheet_name='stations_data', engine='openpyxl')
station_sense
for i, name, sur, weight in station_sense.itertuples():
    # get truck address
    truck_addr = data[(data.name == name) & (data.surname == sur)]['address'].item()
    # query the truck first, so we pair it with the correct station type [otherwise we get an error]
    info_tr = contract.functions.trucks(truck_addr).call() # truck type is in info_tr[2]
    st_addr = data[(data.name == 'Disposal Station') & (data.recycle == info_tr[2])]['address'].item()
    print(f"Station: {st_addr}\nFrom Truck: {truck_addr}\n ")
    # received function call
    contract.functions.received(info_tr[2], truck_addr, weight).transact({'from' : st_addr})
# <a name = 'refund'></a>
# ## Refund
# At the end of each year, the municipality calls the "givePayout" function and reimburses the citizens on the basis of their recycling behaviour. In particular, the municipality is only allowed to call this function between 20th and 28th December (given that the municipality deployed the contract on 1st January). For the purposes of this simulation, however, we have commented out the line of code that performs this time check, so that we can call the "givePayout" function without incurring an error.
# citizen address as input to function
for citizen in data[data.role == 'citizen']['address']:
# balance comparison for the refund
before = web3.eth.getBalance(citizen)
contract.functions.givePayout(citizen).transact({'from' : municipality})
after = web3.eth.getBalance(citizen)
print(f"{citizen}\nRefunded wei :{after-before} --> Euro : {round((after-before)*10**(-18)*eur_eth, 2)}\n")
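# The wei-to-euro conversion embedded in the refund message above can be factored into a small helper. The sketch below mirrors that arithmetic; the `eur_eth` EUR-per-ETH rate is assumed to be defined earlier in the notebook, and the rate used in the example call is purely illustrative.

```python
def wei_to_eur(wei_amount, eur_eth):
    """Convert an integer wei amount to euros, rounded to cents.

    1 ETH = 10**18 wei; `eur_eth` is the assumed EUR-per-ETH rate.
    """
    eth_amount = wei_amount / 10**18
    return round(eth_amount * eur_eth, 2)

# Example: a 0.005 ETH refund at an illustrative rate of 1800 EUR/ETH
print(wei_to_eur(5 * 10**15, 1800))  # -> 9.0
```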
# The municipality has the possibility to withdraw some funds from the contract before the end of the year, and thus enjoy some immediate liquidity. However, the municipality is only allowed to withdraw at most 88% of the money in the contract, so as not to compromise its ability to reimburse all the citizens at the end of the year.
# +
#print(municipality) # we have the municipality address stored from before
# the withdraw works just like this. See ganache '0' account, i.e. the account of the municipality
before = web3.eth.getBalance(municipality)
contract.functions.withdraw().transact({'from' : municipality})
after = web3.eth.getBalance(municipality)
print(f"Withdrawn: {(after-before)/(10**18)} ETH (also minus gas costs)")
# Check how much money is still stored in the contract
balance = contract.functions.MunicipalityBalance().call({'from' : municipality})
print('The contract still holds:', balance/(10**18), 'ETH')
# -
# At the end of each year, the municipality, after reimbursing all the citizens, calls the **destroyContract** function and receives all the funds that were still stored in the contract.
# selfdestruct contract by invoking destroyContract function
contract.functions.destroyContract().transact({'from' : municipality})
# <a name = 'logs'></a>
#
# ## Event/Log Filtering
#
# After running the above cells, the blockchain is populated with many logs from different events. Such logs can be extracted with contract-specific filters.
#
# For the sake of the demonstration, we employ just one filter and then show an example of the database that is created. A ***real time*** version that repeatedly checks the last block can instead be found in the file *events_real_time.py*.
# +
# creating logs/event filters to inspect the whole chain (fromBlock = 1)
# filter for the Event 'PickedUp' generated by the truck
pickedUp_filter = contract.events.PickedUp.createFilter(fromBlock = 1)
# received_filter etc.
filters = [pickedUp_filter]
# Run the filters over the chain (from block 1) and collect the logs found (for now only 'PickedUp')
events = get_past_logs(filters)
df = pd.DataFrame(events)
df.head()
# -
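# The `get_past_logs` helper used above is defined earlier in the project. A minimal sketch of what such a helper could look like, assuming each filter is a web3.py event filter exposing `get_all_entries()`; the `_DemoFilter` stub below is purely illustrative, so the snippet can run without a chain.

```python
def get_past_logs(filters):
    """Collect every log entry matched by each filter into one flat list of dicts."""
    rows = []
    for f in filters:
        for entry in f.get_all_entries():
            row = dict(entry['args'])        # event-specific fields, e.g. wasteWeight
            row['event'] = entry['event']    # event name, e.g. 'PickedUp'
            row['blockNumber'] = entry['blockNumber']
            rows.append(row)
    return rows

# Tiny stand-in for a web3 event filter, just to exercise the helper offline
class _DemoFilter:
    def get_all_entries(self):
        return [{'args': {'wasteWeight': 3}, 'event': 'PickedUp', 'blockNumber': 7}]

rows = get_past_logs([_DemoFilter()])
print(rows)  # one flattened dict per matched log entry
```

Each flattened dict can then feed `pd.DataFrame(rows)` directly, as in the cell above.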
# Each line of the dataframe represents a trash bag being picked up by a truck. In particular, for each trash bag being collected, you can get information about:
# * **transporter**: the Ethereum address of the transporter
# * **wasteType**: whether the trash bag contains recyclable or non-recyclable waste
# * **bagId**: the unique id representing the trash bag
# * **generator**: the Ethereum address of the citizen who has generated the trash bag
# * **wasteWeight**: the weight of the trash bag
# * **pickUpTime**: the time when the trash bag was picked up by the truck
# * **event**: the type of event. In this case, we are only focusing on the event "PickedUp"
# * **logIndex**: the log index
# * **transactionIndex**: the transaction index
# * **transactionHash**: the hash of the transaction
# * **address**: the address of the deployed contract
# * **blockHash**: the hash of the block
# * **blockNumber**: the number of the block
#
# (The first six columns characterize the *PickedUp* event as emitted; other events carry different columns)
# [Back to top](#top)
|
HashTheTrash.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Print vs Return in Functions
# ---
#
# In this lesson we will be examining the differences between using a ```print()``` statement and a ```return``` statement
#
# ### Example Function: Average of Integers in a List
# +
def averagePrint1(array):
''' averagePrint1() outputs the total average of all the integers in a list
argument:
-- array : a list of integers
'''
total_sum = sum(array)
length = len(array)
average = total_sum / length
print('The average of the given list of values is:', average)
# end of averagePrint1
def averagePrint2(array):
    ''' averagePrint2() returns the total average of all the integers in a list
    argument:
    -- array : a list of integers
    '''
    total_sum = sum(array)
    length = len(array)
    average = total_sum / length
    return average
# end of averagePrint2
# Start of the actual program
values = [96, 42, 55, 4, 12, 14, 67, 25, 37, 82, 62, 13]
print('Executing averagePrint1():')
averagePrint1(values)
print('Executing averagePrint2():')
averagePrint2(values)
print('-'*25)
print('Setting variables result1:')
result1 = averagePrint1(values)
print('Setting variables result2:')
result2 = averagePrint2(values)
print('-'*25)
print('Variable result1:', result1)
print('Variable result2:', result2)
# -
# ## ```print()``` vs ```return``` in our example
#
# 1) Calling the function without variable assignment:
#
# When we first called ```averagePrint1(values)```, it printed the message with the result.
#
# When we first called ```averagePrint2(values)```, it did not output anything.
#
# - This is the key difference between print() and return: print() writes a message to the console, whereas return hands a value back to the caller without printing anything.
#
# 2) Assigning variables with the result of a function call:
#
# When we output the variable ```result1``` the value is ```None```.
#
# When we output the variable ```result2``` the value is the resulting average.
#
# - This happens because __averagePrint1__ does not return a value; a function without an explicit return gives back ```None```, which is what gets assigned to __result1__
#
# - The function, __averagePrint2__, returns the __average__ variable; therefore, it can be used for variable assignment
#
# __NOTE:__ 9 out of 10 times your functions should and will return a value
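# The practical payoff of ```return``` is composability: a returned value can feed further computation, while a printed message cannot. A short sketch (the ```average``` helper below is simply a return-based variant of __averagePrint2__):

```python
def average(array):
    """Return the mean of a list of numbers."""
    return sum(array) / len(array)

class_a = [96, 42, 55, 4]
class_b = [12, 14, 67, 25]

# Because average() returns a value, its results compose directly:
overall = average([average(class_a), average(class_b)])
print(overall)  # -> 39.375
```

# A print-only version could never be nested this way, because its call expression evaluates to ```None```.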
|
02 Print vs Return.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# + [markdown] colab_type="text" id="c0ES8ZbPaK4_"
# # SIT742: Modern Data Science
# **(Module 04: Exploratory Data Analysis)**
#
#
# ---
# - Materials in this module include resources collected from various open-source online repositories.
# - You are free to use, change and distribute this package.
# - If you found any issue/bug for this document, please submit an issue at [tulip-lab/sit742](https://github.com/tulip-lab/sit742/issues)
#
# Prepared by **SIT742 Teaching Team**
#
# ---
#
#
# ## Session 4B - Matplotlib (Optional)
#
# `matplotlib` is probably the single most used Python package for 2D-graphics. It provides both a very quick way to visualize data from Python and publication-quality figures in many formats. We are going to explore `matplotlib` in interactive mode covering most common cases.
#
# + colab={} colab_type="code" id="OnOM1U8VaK5D"
# %matplotlib inline
# ignore this "magic" command -- it's only necessary to set up this notebook...
# + [markdown] colab_type="text" id="B6ZoIeQ5aK5M"
# ## 1.Introduction to the basics of matplotlib visualizations
#
# Further reading:
#
# http://matplotlib.org/users/pyplot_tutorial.html
#
# http://www.labri.fr/perso/nrougier/teaching/matplotlib/matplotlib.html
# + [markdown] colab_type="text" id="jt3f90bTaK5O"
# ### 1.1.Importing Matplotlib
#
# The popular convention is to import
# the `matplotlib.pyplot` module and alias it to `plt` for easier typing:
#
# + colab={} colab_type="code" id="26mjkbV0aK5R"
import matplotlib.pyplot as plt
# + [markdown] colab_type="text" id="8ABZlU2ZaK5Y"
# ### 1.2.Interactively plotting
#
# Note: the following instructions only apply if you are trying things out in IPython -- which you _should_ be doing when trying out matplotlib.
#
# When testing things out in IPython, if you want to see the chart images pop up as you execute the charting commands, begin your IPython session by [running the `%matplotlib` magic command](https://ipython.org/ipython-doc/3/interactive/magics.html#magic-matplotlib) (however, _don't_ include it in any standalone Python scripts):
#
#
# ```py
# # # %matplotlib
# import matplotlib.pyplot as plt
# ```
#
# ### Getting unstuck out of a stuck iPython prompt
#
# With certain combinations of matplotlib, IPython, and OSX versions, you may run into an error where the prompt just freezes. [This is a known bug](https://github.com/ipython/ipython/issues/9128). Just hit Ctrl-C a couple of times and then Enter to break out of whatever command you were stuck in (you'll have to retype the command).
#
#
#
#
#
# + [markdown] colab_type="text" id="yKwi6MbPaK5a"
# ### 1.3.The simplest plot
#
# The following snippet is all you need to get a chart going in matplotlib. We actually won't be using this convention going forward, but it's worth seeing the minimal amount of code needed to make a graph:
# + colab={} colab_type="code" id="D-ihDCPkaK5b"
xvals = [0, 1, 2, 3]
yvals = [20, 10, 50, -15]
plt.bar(xvals, yvals)
# + [markdown] colab_type="text" id="9VTD0B9gaK5i"
# ### 1.4.Saving the simplest plot to disk
#
# To save to file, use the `savefig()` method:
#
# ```py
# plt.savefig('hello.png')
# ```
#
#
#
#
#
# + colab={} colab_type="code" id="U1D_U4RsI7wC"
# %matplotlib inline
xvals = [0, 1, 2, 3]
yvals = [20, 10, 50, -15]
plt.bar(xvals, yvals)
plt.savefig('hello.png',dpi=100)
# + [markdown] colab_type="text" id="LkEMQMbGI7ZC"
#
# ### 1.5.Removing the active chart (while interactively plotting)
#
# If you are doing these commands in IPython, then a chart window will have popped up with the rendered chart image as soon as you executed the `plt.bar()` method. To clear the space, call the `plt.close()` method:
#
# ```py
# plt.close()
# ```
# + colab={} colab_type="code" id="_CjVgD6iI7IG"
plt.close()
# + [markdown] colab_type="text" id="MYe_Pk83aK5k"
# ## 2.Making "subplots" and using `fig` and `ax`
#
# While the invocation of methods on the global `plt` object will produce charts quick and easy, we'll be following this general convention (note that `plot()` is a method for drawing line charts):
# + colab={} colab_type="code" id="-on6J2eraK5n"
fig, ax = plt.subplots()
ax.plot([1,2,3], [40, 20, 33])
# + [markdown] colab_type="text" id="Z06VUMRRaK5t"
# What's `fig`? What's `ax`? And what exactly is `plt.subplots()` doing? It's not worth explaining in these simple examples, but it's a convention worth getting into the habit of as it allows us to be more flexible in the future. And it's not too hard to memorize.
#
# Here's another example, this time using the `scatter()` chart method:
# + colab={} colab_type="code" id="IncMS5CxaK5w"
fig, ax = plt.subplots()
xvals = [42, 8, 33, 25, 39]
yvals = [30, 22, 42, 9, 16]
ax.scatter(xvals, yvals)
# + [markdown] colab_type="text" id="hP9lOU3LaK52"
# ### 2.1.Saving figures
#
# Using the `fig, ax = plt.subplots()` convention, saving to disk is slightly different: call the `savefig()` method via the `fig` object:
#
# ```py
# fig.savefig('helloagain.jpg')
# ```
#
# + [markdown] colab_type="text" id="OmO08Tt2aK53"
# ### 2.2.Charting multiple data series
#
# To chart more than one series of data on a single set of axes, simply invoke the charting methods of the given axes multiple times:
# + colab={} colab_type="code" id="GURAH5hyaK56"
fig, ax = plt.subplots()
xvals = [0, 1, 2, 3, 4]
y1 = [20, 8, 12, 24, 18]
y2 = [9, 1, 8, 15, 26]
ax.plot(xvals, y1)
ax.plot(xvals, y2)
ax
# + [markdown] colab_type="text" id="nGSdV8rSaK6B"
# Want multiple _types_ of charts on a single set of axes? Just call different types of charts on a single axes:
# + colab={} colab_type="code" id="3kqQOkYjaK6E"
fig, ax = plt.subplots()
xvals = [0, 1, 2, 3, 4]
y1 = [20, 8, 12, 24, 18]
y2 = [9, 1, 8, 15, 26]
ax.scatter(xvals, y1)
ax.plot(xvals, y2)
ax
# + [markdown] colab_type="text" id="9G_LWBooaK6I"
# ### 2.3.The importance of data structure
#
# We've only scratched the surface of Matplotlib's visualization methods, but the main constraint we'll face is having correctly-structured data.
#
# For instance, matplotlib will throw an error if we attempt to chart x-values and y-values in which the relationship is not 1-to-1:
#
#
# ```py
# xvals = [0, 1, 2]
# yvals = [42]
# ax.bar(xvals, yvals)
#
# # ValueError: incompatible sizes: argument 'height' must be length 3 or scalar
# ```
#
#
# And certain data structures don't make sense for certain charts. Here's a valid pie chart:
#
# + colab={} colab_type="code" id="5f9eLINNaK6J"
yvals = [10, 20, 30]
fig, ax = plt.subplots()
ax.pie(yvals)
# + [markdown] colab_type="text" id="151MwrHhaK6P"
# However, the `pie()` call doesn't take in x- and y- parameters -- instead, the second argument is the `explode` value, easier shown than explained:
# + colab={} colab_type="code" id="9amjpcZjaK6Q"
a = [10, 20, 30]
b = [0.2, 2, 1]
fig, ax = plt.subplots()
ax.pie(a, b)
# + [markdown] colab_type="text" id="IVhvSFPSaK6V"
# ### 2.4.Stacked bar charts
#
# Matplotlib offers a variety of ways to arrange multiple-series data. It's worth looking at the logic behind how a stacked bar chart is created.
#
# First, start with a single bar chart:
# + colab={} colab_type="code" id="2sOGha79aK6Y"
xvals = [0, 1, 2, 3, 4]
y1 = [50, 40, 30, 20, 10]
fig, ax = plt.subplots()
ax.bar(xvals, y1)
# + [markdown] colab_type="text" id="lsJWOs9yaK6e"
# What is the structure of data of a stacked bar chart? It's when two data series share the same independent variable (i.e. x-axis).
#
# However, simply calling `bar()` twice creates overlapping bars...which is not quite what we want:
#
# (note that I've added the `color` argument to the second call to make the different charts stand out):
#
# + colab={} colab_type="code" id="cZDrv4bnaK6f"
xvals = [0, 1, 2, 3, 4]
y1 = [50, 40, 30, 20, 10]
y2 = [10, 18, 23, 7, 26]
fig, ax = plt.subplots()
ax.bar(xvals, y1)
ax.bar(xvals, y2, color='orange')
# + [markdown] colab_type="text" id="MIx6oeUmaK6j"
# To get the stacked effect, we need to pass the `bottom` argument to the second call of `bar()`. What do we pass into that argument? The list of y-values from the _first_ call of `bar()`:
# + colab={} colab_type="code" id="qpI6IbSgaK6m"
xvals = [0, 1, 2, 3, 4]
y1 = [50, 40, 30, 20, 10]
y2 = [10, 18, 23, 7, 26]
fig, ax = plt.subplots()
ax.bar(xvals, y1)
ax.bar(xvals, y2, color='orange', bottom=y1)
# + [markdown] colab_type="text" id="zWMufNrNaK6s"
# In effect, we've told the matplotlib plotter that we want to start the `y2` values from where each corresponding `y1` value left off, i.e. stack `y2` on top of `y1`.
#
#
# What happens when the `y1` and `y2` values have _different_ x-values? Something weird...which is why you shouldn't be stacking non-aligning data series:
# + colab={} colab_type="code" id="l2T4HqzmaK6t"
x1 = [0, 1, 2, 3, 4]
y1 = [50, 40, 30, 20, 10]
x2 = [ 10, 11, 12, 13, 14]
y2 = [10, 18, 23, 7, 26]
fig, ax = plt.subplots()
ax.bar(x1, y1)
ax.bar(x2, y2, color='orange', bottom=y1)
# + [markdown] colab_type="text" id="1k7NKxEGaK6y"
# ### 2.5.Plotting categorical data
#
# One more example to show how picky matplotlib is about data structure.
#
# Pretend we have two _things_, e.g. 'apples' and 'oranges', with two corresponding y-values, e.g. `42` and `25`, to represent `42 apples` and `25 oranges`.
#
# Unfortunately, we can't plot the __categories__ of `apples` and `oranges` along the x-axis so easily:
#
#
# ```py
# xvals - ['apples', 'oranges']
# yvals = [42, 25]
# fig, ax = plt.subplots()
# ax.bar(xvals, yvals)
# ```
#
# We get this arcane error:
#
# ```
# ---------------------------------------------------------------------------
# TypeError Traceback (most recent call last)
# <ipython-input-51-368b1dcacfa1> in <module>()
# ----> 1 xvals - ['apples', 'oranges']
# 2 yvals = [42, 25]
# 3 fig, ax = plt.subplots()
# 4 ax.bar(xvals, yvals)
#
# TypeError: unsupported operand type(s) for -: 'list' and 'list'
# ```
#
# (Note that the traceback above is actually raised by the stray `-` -- a typo for `=` -- in the first line, not by matplotlib itself.) The underlying point still stands, though: classic matplotlib deals only with numerical values -- integers, floats, or datetimes -- when plotting a chart. It simply doesn't know where `apples` and `oranges` -- which we refer to as __categorical__ (as opposed to _continuous_) values -- should be positioned along the x-axis.
#
# So we have to hold matplotlib by the hand and tell it:
#
# 1. For the y-values of `42` and `25`, plot them against the x-values of `0` and `1` -- for now.
# 2. Then, label the x-axis with 0 and 1, using `ax.set_xticks()`
# 3. OK, where the `0` and `1` x-axis labels currently exist, replace them with `apples` and `oranges`, respectively, using `ax.set_xticklabels()`
#
#
#
# Here's the code to do that:
#
# + colab={} colab_type="code" id="VouJD4I5aK6z"
# Step 1
xvals = [0, 1]
yvals = [42, 25]
fig, ax = plt.subplots()
ax.bar(xvals, yvals)
# + colab={} colab_type="code" id="QcC9TCZoaK63"
# Step 1 & 2
xvals = [0, 1]
yvals = [42, 25]
fig, ax = plt.subplots()
# note that I specify the `align` argument in the `bar()` call:
ax.bar(xvals, yvals, align='center')
ax.set_xticks(xvals)
# + colab={} colab_type="code" id="S7W5f5d8aK7B"
# Steps 1,2,3
# Step 1 & 2
xlabels = ['apples', 'oranges']
xvals = [0, 1]
yvals = [42, 25]
fig, ax = plt.subplots()
# note that I specify the `align` argument in the `bar()` call:
ax.bar(xvals, yvals, align='center')
ax.set_xticks(xvals)
ax.set_xticklabels(xlabels)
# + [markdown] colab_type="text" id="Mujv71PDaK7G"
# It'd be nice if matplotlib just "knew" how to deal with a set of human-readable labels for a simple bar chart. But just like most parts of Python programming, explicitness over ambiguity is required.
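# That said, matplotlib has supported categorical values natively since version 2.1: passing a list of strings as the x-argument makes it assign positions and tick labels automatically. A minimal sketch (assuming matplotlib >= 2.1; the Agg backend is selected only so the snippet runs without a display):

```python
import matplotlib
matplotlib.use('Agg')  # off-screen backend so this runs headless
import matplotlib.pyplot as plt

# With matplotlib >= 2.1, string categories work directly in bar():
fig, ax = plt.subplots()
bars = ax.bar(['apples', 'oranges'], [42, 25])
```

# On older versions, the explicit `set_xticks()`/`set_xticklabels()` dance above remains the way to go -- and it is still useful whenever you want full control over tick placement.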
# + [markdown] colab_type="text" id="oZ4piN0FaK7I"
# ---
# ## 3.Animation
#
# The easiest way to make a live animation in matplotlib is to use one of the Animation classes.
#
# See the following link for more examples and configurations
#
# https://matplotlib.org/2.0.0/api/animation_api.html
# + colab={} colab_type="code" id="9e1hNNaiaK7J"
#
#Author: <NAME>
#Source code:https://colab.research.google.com/drive/131wXGA8h8d7llSZxZJ6R4e8nz0ih1WPG#scrollTo=5zVG8JcR4CS2
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
# First set up the figure, the axis, and the plot element we want to animate
fig, ax = plt.subplots()
plt.close()
ax.set_xlim(( 0, 2))
ax.set_ylim((-2, 2))
line, = ax.plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
line.set_data([], [])
return (line,)
# animation function. This is called sequentially
def animate(i):
x = np.linspace(0, 2, 1000)
y = np.sin(2 * np.pi * (x - 0.01 * i))
line.set_data(x, y)
return (line,)
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=100, blit=True)
# Note: below is the part which makes it work on Colab
rc('animation', html='jshtml')
anim
# + [markdown] colab_type="text" id="NWjZMk3EP7Y1"
# ---
# ## Exercise
# In addition, the matplotlib official website provides several animation examples that run in Anaconda Jupyter. If you would like to run them on Google Colab, follow the sample above to adapt the code below so it works in the Google Colab Python environment.
#
# [Decay](https://matplotlib.org/gallery/animation/animate_decay.html)
#
# [The Bayes update](https://matplotlib.org/gallery/animation/bayes_update.html)
#
# [The double pendulum problem](https://matplotlib.org/gallery/animation/double_pendulum_sgskip.html)
#
# [Animated histogram](https://matplotlib.org/gallery/animation/random_walk.html)
#
# [Rain simulation](https://matplotlib.org/gallery/animation/rain.html)
#
# [Animated 3D random walk](https://matplotlib.org/gallery/animation/random_walk.html)
#
# [Animated line plot](https://matplotlib.org/gallery/animation/simple_anim.html)
#
# [Oscilloscope](https://matplotlib.org/gallery/animation/strip_chart.html)
#
# [MATPLOTLIB UNCHAINED](https://matplotlib.org/gallery/animation/unchained.html)
|
Jupyter/M09-Optional/SIT742P04B-Matplotlib.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: PyCharm (hello-dockerized-pinb-with-vault)
# language: python
# name: pycharm-80906557
# ---
# + pycharm={"metadata": false, "name": "#%%\n"}
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation, rc
from IPython.display import HTML
# animate over some set of x, y
x = np.linspace(-4, 4, 100)
y = np.sin(x)
# First set up the figure, the axes, and the plot element
fig, ax = plt.subplots()
plt.close()
ax.set_xlim(( -4, 4))
ax.set_ylim((-2, 2))
line1, = ax.plot([], [], lw=2)
line2, = ax.plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
line1.set_data(x, y)
return (line1,)
# animation function: this is called sequentially
def animate(i):
at_x = x[i]
# gradient_line will have the form m*x + b
m = np.cos(at_x)
b = np.sin(at_x) - np.cos(at_x)*at_x
gradient_line = m*x + b
line2.set_data(x, gradient_line)
return (line2,)
anim = animation.FuncAnimation(fig, animate, init_func=init, frames=100, interval=100, blit=True)
rc('animation', html='jshtml')
anim
|
notebooks/hello_matplotlib_animation.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="SNQmXaye3CT-"
# #**Keystroke Dynamics on Mobile Devices Varying with Time**
#
#
#
# + [markdown] id="A6kv4WvFZ7yi"
# ###Importing required libraries
# + id="ETDQL0x1RrrM"
#import required libraries
import os
import pandas as pd
import numpy as np
import datetime
import pytz
import pickle
import seaborn as sns
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
import matplotlib.dates as mdates
# + [markdown] id="4VGheqxLzr7o"
# **Required Functions**
# + id="zA6DROaoTMav"
# pickle functions
#for reading pickle file
def read_pickle(filename, path='/content/drive/My Drive/Practicum/Pickle/'):
sct = datetime.datetime.now()
print("Start Pickle Load time: {0}".format(sct))
with open(path + filename, 'rb') as file:
unpickler = pickle.Unpickler(file)
df = pickle.load(file)
ct = datetime.datetime.now()
print("End Pickle Load time: {0} Duration:{1}".format(ct, ct-sct))
return df
#to write into a pickle file
def write_pickle(df, filename, path='/content/drive/My Drive/Practicum/Pickle/'):
    sct = datetime.datetime.now()
    print("Start Pickle Write time: {0}".format(sct))
    with open(path + filename, 'wb') as file:
        pickle.dump(pd.DataFrame(df), file)
    ct = datetime.datetime.now()
    print("End Pickle Write time: {0} Duration:{1}".format(ct, ct-sct))
# + id="KUWH2VKhNHRN"
#This function checks the row's time zone and converts the UTC timestamp to the corresponding local time in that geographic location
def getLocalDateTime(df) :
time_zone = df['time_zone']
utc_timestamp=df['utc_timestamp']
if (time_zone == 'Europe/Dublin') :
return utc_timestamp.tz_convert('Europe/Dublin')
elif (time_zone == 'Asia/Riyadh') :
return utc_timestamp.tz_convert('Asia/Riyadh')
elif (time_zone == 'Asia/Dubai') :
return utc_timestamp.tz_convert('Asia/Dubai')
elif (time_zone == 'Asia/Kolkata') :
return utc_timestamp.tz_convert('Asia/Kolkata')
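# Since pandas' `tz_convert` accepts any IANA zone name directly, the if/elif chain above could -- assuming every row carries a valid `time_zone` string -- be collapsed into a single call. A hedged sketch, not the notebook's original helper:

```python
import pandas as pd

def get_local_datetime(row):
    """Convert a UTC-aware timestamp to the row's own time zone.

    Assumes row['time_zone'] holds a valid IANA zone name; for the four
    zones handled above it behaves like the if/elif version, but generalises.
    """
    return row['utc_timestamp'].tz_convert(row['time_zone'])

# Example with a single row-like dict (Asia/Kolkata is UTC+05:30):
row = {'utc_timestamp': pd.Timestamp('2021-08-09 12:00', tz='UTC'),
       'time_zone': 'Asia/Kolkata'}
local = get_local_datetime(row)
print(local)  # 12:00 UTC becomes 17:30 local time
```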
# + id="JbSedP3yuUQi"
#function to assign number to different time slot
def getTimeSlotNo(event_time):
if event_time == 'Morning':
return 1
elif event_time == 'Noon':
return 2
elif event_time =='After Noon':
return 3
elif event_time =='Evening':
return 4
elif event_time =='Dinner':
return 5
elif event_time =='Night':
return 6
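# The same mapping can also be written as a dictionary lookup, which is shorter and easier to extend. A sketch (like the if/elif chain above, it yields `None` for unknown slot names):

```python
# Equivalent dict-based mapping (same slot numbers as getTimeSlotNo above)
TIME_SLOT_NUMBERS = {
    'Morning': 1,
    'Noon': 2,
    'After Noon': 3,
    'Evening': 4,
    'Dinner': 5,
    'Night': 6,
}

def get_time_slot_no(event_time):
    # dict.get returns None for unknown slot names, matching the if/elif version
    return TIME_SLOT_NUMBERS.get(event_time)

print(get_time_slot_no('Dinner'))  # -> 5
```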
# + colab={"base_uri": "https://localhost:8080/"} id="Lq-cmXSYazAx" executionInfo={"status": "ok", "timestamp": 1628510569986, "user_tz": -60, "elapsed": 18710, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gge-HcP6kKrFs_mHrpW_PZzERJyif7O7TX1L-NS=s64", "userId": "15765088102150491852"}} outputId="b0d9cf6a-f9c9-4388-8aa8-35ced59bb7cb"
from google.colab import drive
drive.mount("/content/drive", force_remount=True)
# mounting a specific directory on my google drive for data storage and retrieval
os.chdir("/content/drive/My Drive/Practicum/")
# !ls
# + [markdown] id="yBdGQBLT4EhJ"
# **Read Pickle**
# + colab={"base_uri": "https://localhost:8080/", "height": 638} id="UKyOAVLDhnDf" executionInfo={"status": "ok", "timestamp": 1628510578385, "user_tz": -60, "elapsed": 7201, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gge-HcP6kKrFs_mHrpW_PZzERJyif7O7TX1L-NS=s64", "userId": "15765088102150491852"}} outputId="95e7734c-4cbb-4bde-b66d-e4afc2b04e92"
#retrieve the pickle file 'df_event.p'
df_event=read_pickle('df_event_new_multiple_user_26_07_2021.p')
df_event
# + [markdown] id="97PbFgkHZyKP"
# **Visualisation**
# + id="Z5uqf_T_cdcm"
#remove the less active users (only 1 active day each)
df_event=df_event[~df_event.user_name.isin(['user4','user6','user10'])]
# + [markdown] id="eYogcXaYwS6g"
# Multiple Users - Different Time Slots
# + colab={"base_uri": "https://localhost:8080/", "height": 605} id="4sQBiBiM16wH" executionInfo={"status": "ok", "timestamp": 1628510582370, "user_tz": -60, "elapsed": 740, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gge-HcP6kKrFs_mHrpW_PZzERJyif7O7TX1L-NS=s64", "userId": "15765088102150491852"}} outputId="e601bf6e-88af-44d3-c07f-fd8c8110293f"
#add an extra column with the time-slot number, so the x-axis can be plotted in slot order
# the 'getTimeSlotNo' function maps each time-slot name to its slot number
df_event['time_slot_no'] = df_event.time_slot.apply(lambda x:getTimeSlotNo(x))
df_event
# + colab={"base_uri": "https://localhost:8080/"} id="Zmk_OTs3yrtT" executionInfo={"status": "ok", "timestamp": 1628510584808, "user_tz": -60, "elapsed": 18, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gge-HcP6kKrFs_mHrpW_PZzERJyif7O7TX1L-NS=s64", "userId": "15765088102150491852"}} outputId="b6f7136f-8930-47a9-d842-5d37f8ae9374"
df_event.user_name.unique()
# + id="IDrziTgImbU1"
#make different df_event for each users
df_event_u1=df_event[df_event.user_name.isin(['user1'])]
df_event_u2=df_event[df_event.user_name.isin(['user2'])]
df_event_u3=df_event[df_event.user_name.isin(['user3'])]
df_event_u5=df_event[df_event.user_name.isin(['user5'])]
df_event_u7=df_event[df_event.user_name.isin(['user7'])]
df_event_u8=df_event[df_event.user_name.isin(['user8'])]
df_event_u9=df_event[df_event.user_name.isin(['user9'])]
df_event_u11=df_event[df_event.user_name.isin(['user11'])]
df_event_u12=df_event[df_event.user_name.isin(['user12'])]
df_event_u13=df_event[df_event.user_name.isin(['user13'])]
df_event_u14=df_event[df_event.user_name.isin(['user14'])]
df_event_u15=df_event[df_event.user_name.isin(['user15'])]
# + id="2fz5-R8AqRPj"
#group by user name and time slot
#user1
df_event_timeslot_u1=df_event_u1[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user2
df_event_timeslot_u2=df_event_u2[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user3
df_event_timeslot_u3=df_event_u3[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user5
df_event_timeslot_u5=df_event_u5[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user7
df_event_timeslot_u7=df_event_u7[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user8
df_event_timeslot_u8=df_event_u8[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user9
df_event_timeslot_u9=df_event_u9[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user11
df_event_timeslot_u11=df_event_u11[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user12
df_event_timeslot_u12=df_event_u12[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user13
df_event_timeslot_u13=df_event_u13[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user14
df_event_timeslot_u14=df_event_u14[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#user15
df_event_timeslot_u15=df_event_u15[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
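# The twelve near-identical subsetting assignments above could also be generated in one pass with `groupby`. A hypothetical alternative sketch (the notebook keeps the explicit per-user variables because later cells refer to them by name; the tiny `demo` frame below only stands in for the real `df_event`):

```python
import pandas as pd

cols = ['event_date', 'time_slot', 'time_slot_no', 'dwell_time_ms', 'flight_time_ms']

def split_by_user(df_event, cols):
    """Return {user_name: per-user dataframe restricted to `cols`}."""
    return {user: grp[cols] for user, grp in df_event.groupby('user_name')}

# Tiny example frame standing in for the real df_event
demo = pd.DataFrame({
    'user_name': ['user1', 'user1', 'user2'],
    'event_date': ['d1', 'd2', 'd1'],
    'time_slot': ['Morning', 'Noon', 'Night'],
    'time_slot_no': [1, 2, 6],
    'dwell_time_ms': [80, 95, 70],
    'flight_time_ms': [120, 110, 130],
})
per_user = split_by_user(demo, cols)
print(sorted(per_user))        # -> ['user1', 'user2']
print(len(per_user['user1']))  # -> 2
```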
# + colab={"base_uri": "https://localhost:8080/", "height": 958} id="56Blm_ZkCNp3" executionInfo={"status": "ok", "timestamp": 1628510597616, "user_tz": -60, "elapsed": 6359, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.<KEY>", "userId": "15765088102150491852"}} outputId="b2488e22-b9e9-4243-bbe2-971fed599d21"
#Multiple User Analysis in Different Timeslots- Dwell Time Vs Flight Time
fig=plt.figure(figsize=(60,30))
#set a figure title on top
fig.suptitle('Multiple User Analysis - Dwell Time and Flight Time', fontsize = 22,fontweight='bold');
#to plot x axis in user name asc order
# set the spacing between subplots
plt.subplots_adjust(left=0.125,
bottom=0.02,
right=0.9,
top=0.9,
wspace=0.3,
hspace=0.45)
#############User1 #########################################################
plt.subplot(4, 3, 1)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 1-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax1 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u1)
ax1 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u1)
# Puts x-axis labels on an angle
ax1.xaxis.set_tick_params(rotation = 60)
ax1.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax1.set_xlabel('Time Slots',size=18,fontweight='bold')
ax1.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax1.set_xticklabels(['Morning', 'Noon', 'Afternoon','Dinner'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User2 #########################################################
plt.subplot(4,3,2)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 2-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax2 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u2)
ax2 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u2)
# Puts x-axis labels on an angle
ax2.xaxis.set_tick_params(rotation = 60)
ax2.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax2.set_xlabel('Time Slots',size=18,fontweight='bold')
ax2.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax2.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User3 #########################################################
plt.subplot(4, 3,3)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 3-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax3 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u3)
ax3 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u3)
# Puts x-axis labels on an angle
ax3.xaxis.set_tick_params(rotation = 60)
ax3.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax3.set_xlabel('Time Slots',size=18,fontweight='bold')
ax3.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax3.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User5 #########################################################
plt.subplot(4,3,4)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 5-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax4 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u5)
ax4 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u5)
# Puts x-axis labels on an angle
ax4.xaxis.set_tick_params(rotation = 60)
ax4.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax4.set_xlabel('Time Slots',size=18,fontweight='bold')
ax4.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax4.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User7 #########################################################
plt.subplot(4,3,5)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 7-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax5 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u7)
ax5 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u7)
# Puts x-axis labels on an angle
ax5.xaxis.set_tick_params(rotation = 60)
ax5.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax5.set_xlabel('Time Slots',size=18,fontweight='bold')
ax5.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax5.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User 8 #########################################################
plt.subplot(4,3,6)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 8-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax6 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u8)
ax6 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u8)
# Puts x-axis labels on an angle
ax6.xaxis.set_tick_params(rotation = 60)
ax6.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax6.set_xlabel('Time Slots',size=18,fontweight='bold')
ax6.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax6.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User 9 #########################################################
plt.subplot(4,3,7)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 9-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax7 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u9)
ax7 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u9)
# Puts x-axis labels on an angle
ax7.xaxis.set_tick_params(rotation = 60)
ax7.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax7.set_xlabel('Time Slots',size=18,fontweight='bold')
ax7.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax7.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User 11 #########################################################
plt.subplot(4, 3,8)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 11-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax8 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u11)
ax8 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u11)
# Puts x-axis labels on an angle
ax8.xaxis.set_tick_params(rotation = 60)
ax8.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax8.set_xlabel('Time Slots',size=18,fontweight='bold')
ax8.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax8.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User 12 #########################################################
plt.subplot(4,3,9)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 12-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax9 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u12)
ax9 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u12)
# Puts x-axis labels on an angle
ax9.xaxis.set_tick_params(rotation = 60)
ax9.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax9.set_xlabel('Time Slots',size=18,fontweight='bold')
ax9.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax9.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User 13 #########################################################
plt.subplot(4, 3,10)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 13-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax10 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u13)
ax10 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u13)
# Puts x-axis labels on an angle
ax10.xaxis.set_tick_params(rotation = 60)
ax10.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax10.set_xlabel('Time Slots',size=18,fontweight='bold')
ax10.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax10.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User 14 #########################################################
plt.subplot(4, 3,11)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 14-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax11 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u14)
ax11 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u14)
# Puts x-axis labels on an angle
ax11.xaxis.set_tick_params(rotation = 60)
ax11.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax11.set_xlabel('Time Slots',size=18,fontweight='bold')
ax11.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax11.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############User 15 #########################################################
plt.subplot(4,3,12)#plt.subplot(#rows,#columns,Plot no)
plt.title('User 15-Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax11 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_timeslot_u15)
ax12 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_timeslot_u15)
# Puts x-axis labels on an angle
ax12.xaxis.set_tick_params(rotation = 60)
ax12.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax12.set_xlabel('Time Slots',size=18,fontweight='bold')
ax12.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax12.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
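# The twelve per-user blocks above repeat one plotting recipe; the same figure can be built with a loop. A minimal sketch on synthetic stand-in data (the notebook itself would iterate over the `df_event_timeslot_u*` frames and per-user tick labels):

```python
# Loop-based version of the repeated per-user boxplot blocks.
# Synthetic data stands in for dwell_time_ms / flight_time_ms.
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
users = [1, 2, 3, 5]
fig, axes = plt.subplots(2, 2, figsize=(10, 8))
for ax, user in zip(axes.flat, users):
    dwell = rng.gamma(2.0, 50.0, size=200)    # stand-in for dwell_time_ms
    flight = rng.gamma(2.0, 120.0, size=200)  # stand-in for flight_time_ms
    ax.boxplot([dwell, flight])
    ax.set_xticks([1, 2])
    ax.set_xticklabels(["Dwell", "Flight"])
    ax.set_title(f"User {user}", fontweight="bold")
    ax.set_ylabel("Time (ms)")
fig.suptitle("Dwell Time vs Flight Time per user", fontweight="bold")
fig.tight_layout()
print(len(fig.axes))
```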
# + [markdown] id="gN6rm_llS6_U"
# Multiple Users By Timezone - Different Time Slots
# + colab={"base_uri": "https://localhost:8080/"} id="mvLoC2O6x4aE" executionInfo={"status": "ok", "timestamp": 1628432450259, "user_tz": -60, "elapsed": 300, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gge-HcP6kKrFs_mHrpW_PZzERJyif7O7TX1L-NS=s64", "userId": "15765088102150491852"}} outputId="40321cbc-9876-4e09-c31d-cf2b93c887c4"
df_event.country_code.unique()
# + id="rlgHS51ZxlXD"
#make different df_event for each users
df_event_AE=df_event[df_event.country_code.isin(['AE'])]
df_event_IE=df_event[df_event.country_code.isin(['IE'])]
df_event_IN=df_event[df_event.country_code.isin(['IN'])]
df_event_SA=df_event[df_event.country_code.isin(['SA'])]
# + id="K-wE5NDj3fLb" colab={"base_uri": "https://localhost:8080/", "height": 419} executionInfo={"status": "ok", "timestamp": 1628432452716, "user_tz": -60, "elapsed": 408, "user": {"displayName": "<NAME>", "photoUrl": "<KEY>", "userId": "15765088102150491852"}} outputId="0a6fe46e-87ae-4970-8127-54a4dcc91d37"
#group by event date and time slot of each country
#AE
df_event_tz_timeslot_AE=df_event_AE[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#IE
df_event_tz_timeslot_IE=df_event_IE[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#IN
df_event_tz_timeslot_IN=df_event_IN[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
#SA
df_event_tz_timeslot_SA=df_event_SA[['event_date','time_slot','time_slot_no','dwell_time_ms','flight_time_ms']]
df_event_tz_timeslot_SA
# + colab={"base_uri": "https://localhost:8080/", "height": 797} id="FrkxKzQ5Cl8a" executionInfo={"status": "ok", "timestamp": 1628432759827, "user_tz": -60, "elapsed": 2688, "user": {"displayName": "<NAME>", "photoUrl": "https://lh3.googleusercontent.com/a-/AOh14Gge-HcP6kKrFs_mHrpW_PZzERJyif7O7TX1L-NS=s64", "userId": "15765088102150491852"}} outputId="a46e3498-8a15-4811-ef86-18e2ef766279"
#Multiple User Analysis at Different Timezone - Dwell Time Vs Flight Time
fig=plt.figure(figsize=(50,15))
#set a figure title on top
fig.suptitle('Multiple User Analysis at Different Timezone - Dwell Time and Flight Time', fontsize = 22,fontweight='bold');
# set the spacing between subplots
plt.subplots_adjust(left=0.125,
bottom=0.02,
right=0.9,
top=0.9,
wspace=0.3,
hspace=0.45)
#############At UAE #########################################################
plt.subplot(1, 4, 1)#plt.subplot(#rows,#columns,Plot no)
plt.title('UAE- Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax1 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_tz_timeslot_AE)
ax1 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_tz_timeslot_AE)
# Puts x-axis labels on an angle
ax1.xaxis.set_tick_params(rotation = 60)
ax1.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time ')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax1.set_xlabel('Time Slots',size=18,fontweight='bold')
ax1.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax1.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############At Ireland #########################################################
plt.subplot(1, 4, 2)#plt.subplot(#rows,#columns,Plot no)
plt.title('Ireland- Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax2 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_tz_timeslot_IE)
ax2 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_tz_timeslot_IE)
# Puts x-axis labels on an angle
ax2.xaxis.set_tick_params(rotation = 60)
ax2.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time ')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax2.set_xlabel('Time Slots',size=18,fontweight='bold')
ax2.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax2.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############At India #########################################################
plt.subplot(1, 4, 3)#plt.subplot(#rows,#columns,Plot no)
plt.title('India- Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax3 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_tz_timeslot_IN)
ax3 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_tz_timeslot_IN)
# Puts x-axis labels on an angle
ax3.xaxis.set_tick_params(rotation = 60)
ax3.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time ')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax3.set_xlabel('Time Slots',size=18,fontweight='bold')
ax3.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax3.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
#############At Saudi Arabia #########################################################
plt.subplot(1, 4, 4)#plt.subplot(#rows,#columns,Plot no)
plt.title('Saudi Arabia- Dwell Time and Flight Time ', size = 20,fontweight='bold');
ax4 = sns.boxplot(x="time_slot_no", y="dwell_time_ms",color='#f8aef4', data=df_event_tz_timeslot_SA)
ax4 = sns.boxplot(x="time_slot_no", y="flight_time_ms",color='#cddcb4', data=df_event_tz_timeslot_SA)
# Puts x-axis labels on an angle
ax4.xaxis.set_tick_params(rotation = 60)
ax4.set_ylim([0, None])
rose_patch = mpatches.Patch(color='#f8aef4', label='Dwell Time')
green_patch = mpatches.Patch(color='#cddcb4', label='Flight Time ')
plt.legend(handles=[rose_patch, green_patch],loc='upper left',prop=dict(weight='bold',size=15))
ax4.set_xlabel('Time Slots',size=18,fontweight='bold')
ax4.set_ylabel('Time in Milli Second',size=18,fontweight='bold')
#print(plt.xticks())
ax4.set_xticklabels(['Morning', 'Noon', 'Afternoon','Evening','Dinner','Night'],{'fontsize': 16,'fontweight': 'bold'})
plt.yticks(size=16,fontweight='bold')
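# The country-level plots above can be paired with a quick numeric summary. A minimal groupby sketch on synthetic stand-in rows (column names match this notebook's `df_event`):

```python
import pandas as pd

# toy rows standing in for df_event; the real data has many more columns
toy = pd.DataFrame({
    "country_code": ["AE", "AE", "IE", "IE", "IN", "SA"],
    "dwell_time_ms": [120, 140, 100, 110, 90, 130],
    "flight_time_ms": [300, 320, 280, 290, 260, 310],
})
summary = toy.groupby("country_code")[["dwell_time_ms", "flight_time_ms"]].median()
print(summary.loc["AE", "dwell_time_ms"])  # median of 120 and 140 -> 130.0
```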
|
ColabNotebooks/version1.6.4_multipleUserAnalysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# name: python3
# ---
# + [markdown] id="view-in-github" colab_type="text"
# <a href="https://colab.research.google.com/github/ChitraChaudhari/Advanced-Lane-Lines/blob/master/EDA/Matplotlib_Practice.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# + id="9k7n7DRMdxVn"
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# + id="VmlqESsXeQgv"
x=[1,2,3,4,5,6,7]
y=[50,53,52,48,47,49,46]
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="LRX83UBBR6_X" outputId="90bb002c-14a5-4111-fb2c-2c3eb54f9992"
plt.plot(x,y)
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="t-2ieYUHVgT8" outputId="a2e97a8a-c35a-4fdc-d4d8-833878e6b169"
plt.plot(x,y,color='green',linewidth=3,marker='o')
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="lpdM4PXlWZgm" outputId="547ca003-486a-45eb-cfdc-d13b11cd0d62"
plt.xlabel('Day')
plt.ylabel('Temperature')
plt.title('Weather Data')
plt.plot(x,y,color='green',linewidth=3,marker='o')
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="X3yA_fcOW-ly" outputId="6c6fc03a-3d1d-46e5-fee8-002cb8555562"
plt.xlabel('Day')
plt.ylabel('Temperature')
plt.title('Weather Data')
plt.plot(x,y,'--g+')
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="F9E0uq-4XwaI" outputId="1c185c5f-d42e-4b43-d49f-732e1aa92027"
plt.xlabel('Day')
plt.ylabel('Temperature')
plt.title('Weather Data')
plt.plot(x,y,'rD')
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="KTB9blMDYBfh" outputId="b94082ac-3b56-45d7-f70e-bad8504f2303"
plt.xlabel('Day')
plt.ylabel('Temperature')
plt.title('Weather Data')
plt.plot(x,y,'-rD', alpha=0.4,markersize = 7)
# + id="Og4JwHRsaKuL"
days = ['Mon','Tue','Wed','Thu','Fri','Sat','Sun']
max_t=[50,51,52,48,47,49,46]
min_t=[43,42,40,44,33,35,37]
avg_t=[45,48,48,46,40,42,41]
# + colab={"base_uri": "https://localhost:8080/", "height": 295} id="4FSPPVKCau8x" outputId="41ba5732-1d69-4a70-cf35-c9f19b6e2977"
plt.xlabel('Days')
plt.ylabel('Temperature')
plt.title('Weather Data')
plt.plot(days,max_t,label = 'max')
plt.plot(days,min_t,label = 'min')
plt.plot(days,avg_t,label = 'avg')
plt.legend(loc='best',shadow=True, fontsize = 'large')
plt.grid()
# + id="9Khe8wnUcock"
company=['GOOGL','AMZN','MSFT','FB']
revenue=[90,136,89,27]
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="WOVIfFeTddYM" outputId="948b621d-56bd-4222-ae9c-adcaf2879601"
plt.title("US Tech Stock")
plt.xlabel("Company")
plt.ylabel("Revenue")
plt.bar(company,revenue,label='Revenue')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/"} id="yr0c8D8-kK1v" outputId="dc560432-43ad-4fe9-e6a1-d7862b3639fa"
import numpy as np
xpos = np.arange(len(company))
xpos
# + id="Rl2OsKyijkE6"
profit = [40,2,34,12]
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="ChoX0qW2khu3" outputId="2b5d2dad-1815-4db0-d233-6906fe5b7196"
plt.xticks(xpos,company)
plt.ylabel("Revenue")
plt.bar(xpos,revenue,label='Revenue')
plt.bar(xpos,profit,label='Profit')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="Q4HhaQU_lNN2" outputId="66eb2e30-386d-49bc-aea6-cc45383f0938"
plt.xticks(xpos,company)
plt.ylabel("Revenue")
plt.bar(xpos-0.2,revenue,width=0.4,label='Revenue')
plt.bar(xpos+0.2,profit,width=0.4,label='Profit')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 298} id="XfsTrOfUlhrK" outputId="4381c867-20f7-4927-d74a-2a8c1b771431"
plt.yticks(xpos,company)
plt.title("US Tech stocks")
plt.barh(xpos,revenue,label='Revenue')
plt.barh(xpos,profit,label='Profit')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 334} id="LTvv3_RhmSWY" outputId="b905dd45-7177-43e7-a394-2f8202c4ca02"
blood_sugar = [113, 85, 90, 150, 149, 88, 93, 115, 135, 80, 77, 82, 129]
plt.hist(blood_sugar) # by default number of bins is set to 10
# + colab={"base_uri": "https://localhost:8080/", "height": 317} id="c43YbcWdm_NB" outputId="89e34412-6cd2-4a0d-ea74-56427161de47"
# but we want "80-100",'100-125','above 125'
plt.hist(blood_sugar, bins = 3)
# + colab={"base_uri": "https://localhost:8080/", "height": 317} id="J7br3hHSnLqE" outputId="4f060af3-4bf5-4644-b0cd-f62f5ee55d07"
plt.hist(blood_sugar, bins = 3, rwidth = 0.95)
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="RPN5O_fenVTD" outputId="6fb8ccde-5536-4ea8-de8c-43b35a0fd9b6"
plt.hist(blood_sugar, bins=[80,100,125,150], rwidth = 0.95,color='g')
# + colab={"base_uri": "https://localhost:8080/", "height": 282} id="4lPlcN96nvI4" outputId="69e65bcd-b2e5-4425-e62e-db746282f935"
plt.hist(blood_sugar, bins=[80,100,125,150], rwidth = 0.95,color='g',histtype='step')
# + id="KCFGmem4oBhz"
blood_sugar_men = [113, 85, 90, 150, 149, 88, 93, 115, 135, 80, 77, 82, 129]
blood_sugar_women = [67, 98, 89, 120, 133, 150, 84, 69, 89, 79, 120, 112, 100]
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="UaWkQQUOoFce" outputId="8353bb35-f112-45bc-e7cd-6a72ba7d3790"
plt.xlabel("Sugar range")
plt.ylabel("Total no of patients")
plt.title("Blood sugar analysis")
plt.hist([blood_sugar_men,blood_sugar_women], bins=[80,100,125,150], rwidth = 0.95,label=['men','women'])
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 312} id="Pt_5QFKepI5o" outputId="5f40b7f0-74ad-437c-bff0-f46fce2b3278"
plt.ylabel("Sugar range")
plt.xlabel("Total no of patients")
plt.title("Blood sugar analysis")
plt.hist([blood_sugar_men,blood_sugar_women], bins=[80,100,125,150], rwidth = 0.95,label=['men','women'], orientation='horizontal')
plt.legend()
# + id="M0Vg2CW3pj_8"
exp_vals = [1400,600,300,410,250]
exp_labels = ["Home Rent","Food","Phone/Internet Bill","Car ","Other Utilities"]
# + colab={"base_uri": "https://localhost:8080/", "height": 422} id="BvqHwAQLpmeo" outputId="22e1fa62-5523-4ac5-dedd-baad36c9188c"
plt.pie(exp_vals, labels=exp_labels)
# + colab={"base_uri": "https://localhost:8080/", "height": 421} id="dS2M8covp_Ky" outputId="7eee17c6-66be-4f9f-802d-a1418cc17501"
plt.pie(exp_vals, labels=exp_labels,radius=2, autopct="%0.2f%%",explode=[0,0.1,0,0,0],startangle=180)
plt.show()
# + colab={"base_uri": "https://localhost:8080/", "height": 378} id="A5vBLmG0nDGV" outputId="27877dfc-c378-47c1-84ee-6be259d9d14d"
fig = plt.figure()
fig.set_size_inches(10,6)
ax1 = fig.add_subplot(2,2,1)
ax2 = fig.add_subplot(2,2,2)
ax3 = fig.add_subplot(2,2,3)
ax4 = fig.add_subplot(2,2,4)
# + id="dxeLGAlYkmzn"
|
EDA/Matplotlib_Practice.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3 (ipykernel)
# language: python
# name: python3
# ---
# # Automatic peak finding and calibration tools in Becquerel
#
# `Becquerel` contains tools for obtaining a rough first calibration for an uncalibrated `Spectrum`.
# First, some imports:
# %matplotlib inline
import os
import matplotlib.pyplot as plt
import numpy as np
import becquerel as bq
# Also some function definitions:
# +
def plot_spec(spectrum, xmode='channel'):
if xmode == 'channel':
facecolor = 'green'
else:
facecolor = 'blue'
plt.figure()
spectrum.fill_between(xmode=xmode, facecolor=facecolor, alpha=0.4, ax=plt.gca())
spectrum.plot('k-', lw=0.7, xmode=xmode, ax=plt.gca())
if xmode == 'channel':
plt.xlim(0, spectrum.bin_edges_raw.max())
plt.title('Uncalibrated spectrum')
else:
plt.xlim(0, spectrum.bin_centers_kev[-1])
plt.title('Calibrated spectrum')
plt.yscale('log')
plt.ylim(2e-1)
plt.tight_layout()
def plot_calibrator(cal):
cal.peakfinder.spectrum.apply_calibration(cal.cal)
print('fit gain:', cal.gain, 'keV/channel')
print('fit channels:', cal.fit_channels)
plt.figure()
plt.title('Peaks used in fit')
cal.plot()
plt.tight_layout()
plot_spec(cal.peakfinder.spectrum, xmode='channel')
for x, erg in zip(cal.fit_channels, cal.fit_energies):
chan = cal.peakfinder.spectrum.find_bin_index(x, use_kev=False)
y = cal.peakfinder.spectrum.counts_vals[chan-10:chan+10].max() * 1.5
plt.plot([x, x], [1e-1, y], 'r-', alpha=0.5)
plt.text(x, y, '{:.1f} keV'.format(erg))
plot_spec(cal.peakfinder.spectrum, xmode='energy')
for erg in cal.fit_energies:
x = int(erg / cal.gain)
chan = cal.peakfinder.spectrum.find_bin_index(x, use_kev=False)
y = cal.peakfinder.spectrum.counts_vals[chan-15:chan+15].max() * 1.5
plt.plot([erg, erg], [1e-1, y], 'r-', alpha=0.5)
plt.text(erg, y, '{:.1f} keV'.format(erg))
# -
# ## `PeakFilter` classes
#
# Instances of `PeakFilter` classes generate energy-dependent kernels that can be convolved with a spectrum to extract lines from the background continuum. To instantiate a kernel, the FWHM in channels at a specific channel is required, and the kernel scales the FWHM so that it is proportional to the square root of the channel (approximating the energy resolution of a detector).
#
# Here is what a `GaussianPeakFilter` looks like:
# demonstrate energy-dependent kernels
channels = np.arange(1000)
for kernel in [bq.GaussianPeakFilter(1000, 50, 5)]:
plt.figure()
plt.title('{} evaluated at different channels'.format(type(kernel).__name__))
ind = np.arange(1000)
plt.plot([-50, 50], [0, 0], 'k-')
for chan in range(100, 900, 100):
kern = kernel.kernel(chan, np.arange(1001))
plt.plot(ind - chan, kern, '-', lw=1.5, label='Channel {}'.format(chan))
plt.xlim(-50, 50)
plt.xlabel('offset from channel')
plt.ylabel('kernel value')
plt.legend()
plt.tight_layout()
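# The square-root scaling just described can be sketched numerically. This is one plausible quadrature interpolation between the width at channel 0 and the reference width; it is not necessarily becquerel's exact internal formula:

```python
import numpy as np

def fwhm_at(channel, ref_channel=1000, ref_fwhm=50.0, fwhm_at_0=5.0):
    # width**2 grows linearly with channel, so width grows like sqrt(channel)
    return np.sqrt(fwhm_at_0**2 + (ref_fwhm**2 - fwhm_at_0**2) * channel / ref_channel)

print(fwhm_at(0), fwhm_at(1000))  # recovers the two anchor widths: 5.0 50.0
```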
# We will use the `GaussianPeakFilter` from now on.
#
# A kernel can create a matrix that can be multiplied with a spectrum to perform the convolution. Here is what such a matrix could look like:
# +
# display the kernel matrix
kernel = bq.GaussianPeakFilter(1000, 50, 5)
plt.figure()
plt.title('Matrix of GaussianPeakFilter evaluated across entire spectrum')
kernel.plot_matrix(np.arange(1000))
plt.tight_layout()
# -
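# The matrix-times-spectrum idea can be seen with a toy zero-sum Gaussian kernel (illustrative only, not becquerel's implementation): each row integrates to zero, so a flat continuum maps to roughly zero and the lone peak dominates the filtered output.

```python
import numpy as np

n = 200
spectrum = np.full(n, 10.0)
spectrum[100] += 50.0  # a single peak on a flat continuum

rows = []
for c in range(n):
    kern = np.exp(-0.5 * ((np.arange(n) - c) / 3.0) ** 2)
    kern -= kern.mean()  # zero-sum row: a flat background contributes 0
    rows.append(kern)
M = np.array(rows)

filtered = M @ spectrum
print(int(np.argmax(filtered)))  # the peak channel stands out: 100
```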
# ## `PeakFinder` and `AutoCalibrator` classes
#
# The `PeakFinder` class allows one to automatically select peaks that a `PeakFilter` filters out of the spectrum.
#
# The `AutoCalibrator` class takes the peaks found by a `PeakFinder` and finds the most likely energies associated with those peaks.
#
# It is easiest to explain these classes using examples.
# ## Example 1: Calibrating a scintillator spectrum
#
# First we read in a raw spectrum from file (this is a simulated background spectrum for a scintillator):
counts = []
filename = os.path.join(os.path.dirname(bq.__file__), '../tests/samples/sim_spec.csv')
with open(filename, 'r') as f:
for line in f:
tokens = line.strip().split(',')
if len(tokens) == 2:
counts.append(float(tokens[1]))
spec = bq.Spectrum(counts=counts)
spec = spec.combine_bins(4)
spec.bin_edges_raw *= 4
plot_spec(spec)
plt.figure()
plt.plot(spec.bin_centers_raw, spec.counts_vals)
plt.yscale('log')
plt.show()
# To filter this spectrum we will use a kernel with a width of 50 channels at 500 channels, to match the strong line in the center (most likely the K-40 line at 1460 keV):
kernel = bq.GaussianPeakFilter(500, 50, fwhm_at_0=10)
# ### 1.1 `PeakFinder` class
#
# The `PeakFinder` class uses a `PeakFilter` to filter and calibrate the spectrum.
#
# Under the hood, the kernel estimates the SNR of each peak by separating peaks from the background continuum. We can introspect this process using the `PeakFinder` instance:
# +
# show how the kernel estimates the peaks+background and the background
finder = bq.PeakFinder(spec, kernel)
plt.figure()
plt.plot(spec.counts_vals.clip(1e-1), label='Raw spectrum')
plt.plot(finder._peak_plus_bkg.clip(1e-1), label='Peaks+Continuum')
plt.plot(finder._bkg.clip(1e-1), label='Continuum')
plt.plot(finder._signal.clip(1e-1), label='Peaks')
plt.yscale('log')
plt.xlim(0, len(spec))
plt.ylim(3e-1)
plt.xlabel('Channels')
plt.ylabel('Counts')
plt.legend()
plt.tight_layout()
# -
# The kernel applied directly to the spectral count data produces the estimated signal-to-noise (SNR) of each peak.
# plot signal to noise
plt.figure()
plt.title('Kernel applied to spectrum')
finder.plot()
plt.tight_layout()
# ### 1.2 Using `find_peak` to find a specific peak
#
# Use the method `find_peak` to find a specific peak in the spectrum.
#
# Let's try to locate the channel of the tallest peak, right in the middle of the spectrum:
# +
peak_chan = finder.find_peak(500, min_snr=3.)
print(peak_chan)
plt.figure()
plt.title('find_peak')
finder.plot()
plt.xlim(0,1000)
plt.tight_layout()
# -
finder.centroids
# Subsequent calls to `find_peak` will store any new results alongside the earlier ones:
# +
peak_chan = finder.find_peak(900, min_snr=3.)
print(peak_chan)
plt.figure()
plt.title('find_peak')
finder.plot()
plt.tight_layout()
# -
# #### Using `reset` to remove all candidate peaks and calibration data
#
# The list of candidate peaks persists in the `PeakFinder` object, as does any calibration information (covered later).
#
# Calling `reset` clears both:
# +
finder.reset()
plt.figure()
plt.title('after reset')
finder.plot()
plt.tight_layout()
# -
# ### 1.3 Using `find_peaks` to find all peaks above an SNR threshold
#
# Instead of repeatedly calling `find_peak`, one can build up a set of peak candidates using `find_peaks`. The following locates all peaks above channel 50 with an SNR of at least 1:
# +
finder.find_peaks(min_snr=1, xmin=50)
print(finder.centroids)
print(finder.snrs)
plt.figure()
plt.title('find_peaks')
finder.plot()
plt.tight_layout()
# -
# ### 1.4 The `AutoCalibrator.fit` method
#
# The main machinery of auto-calibration is the `fit` method, which matches peak candidates (e.g., the outputs of `find_peaks`) with specific line energies and keeps the best match:
cal = bq.AutoCalibrator(finder)
cal.fit(
[351.93, 609.32, 1460.82, 2614.3],
optional=[295.22, 768.36, 1120.294, 1238.122, 1764.49],
gain_range=[2.5e-2, 4e2],
de_max=200.,
)
plot_calibrator(cal)
# ### 1.5 `AutoCalibrator.fit` with only one peak
#
# A special case of the calibrator is when only one peak has been found and only one energy is given. Use this with caution since there is none of the cross-validation that comes with multiple lines.
cal.peakfinder.reset()
cal.peakfinder.fwhm_tol=(0.5, 1.2)
cal.peakfinder.find_peak(500, min_snr=3.)
cal.fit([1460.82], gain_range=[2.5e-1, 4e1], de_max=50.)
plot_calibrator(cal)
# +
# looks like there may be an off-by-one or bin center vs edge issue in plotting...
# -
# ## Example 2: Calibrating an HPGe spectrum
#
# Let's perform the same calibration steps using an HPGe spectrum. This spectrum will have many more lines to fit.
# read raw HPGe data
filename = os.path.join(os.path.dirname(bq.__file__), '../tests/samples/Mendocino_07-10-13_Acq-10-10-13.Spe')
spec = bq.Spectrum.from_file(filename)
plot_spec(spec)
# We will again use a `GaussianPeakFilter`, but this one must be much narrower to match the detector's resolution. Not surprisingly, many of the peaks in the spectrum have higher SNR values:
# +
# apply the kernel to the data to get SNR
kernel = bq.GaussianPeakFilter(3700, 10, fwhm_at_0=5)
finder = bq.PeakFinder(spec, kernel)
cal = bq.AutoCalibrator(finder)
plt.figure()
plt.title('Kernel applied to spectrum')
cal.peakfinder.plot()
plt.tight_layout()
# +
# find significant peaks
cal.peakfinder.find_peaks(min_snr=15, xmin=400)
print(cal.peakfinder.centroids)
print(cal.peakfinder.snrs)
plt.figure()
plt.title('find_peaks')
cal.peakfinder.plot()
plt.tight_layout()
# -
# perform calibration
cal.fit(
[295.22, 351.93, 511.0, 609.32, 1460.82, 2614.3],
optional=[583.187, 911.20, 1120.294, 1238.122, 1377.67, 1764.49, 2204.06],
gain_range=[0.35, 0.40],
de_max=5.,
)
plot_calibrator(cal)
# ## Example 3: An unusual NaI spectrum
#
# This example shows a real spectrum from a NaI detector with very poor energy resolution and where the dynamic range has cut off the higher energies. Can we still calibrate it?
counts = []
filename = os.path.join(os.path.dirname(bq.__file__), '../tests/samples/nai_detector.csv')
with open(filename, 'r') as f:
for line in f:
tokens = line.strip().split(',')
if len(tokens) == 2:
counts.append(float(tokens[1]))
spec = bq.Spectrum(counts=counts)
plot_spec(spec)
# +
kernel = bq.GaussianPeakFilter(700, 50, 10)
finder = bq.PeakFinder(spec, kernel)
cal = bq.AutoCalibrator(finder)
# find significant peaks
cal.peakfinder.find_peaks(min_snr=3, xmin=100)
print(cal.peakfinder.centroids)
print(cal.peakfinder.snrs)
plt.figure()
plt.title('find_peaks')
cal.peakfinder.plot()
plt.tight_layout()
# -
# perform calibration
cal.fit(
[609.32, 1460.82],
optional=[],
gain_range=[0.1, 5.],
de_max=50.,
)
plot_calibrator(cal)
# That did not work: the calibrator matched the wrong lines. To fix this, we could increase `xmin` to exclude the lower-energy lines, increase `min_snr` to exclude the less significant peaks, or supply additional optional energies. Let's try the same fit with a longer list of prominent background lines:
# perform calibration again, but with more optional energies
cal.fit(
[609.32, 1460.82],
optional=[238.63, 338.32, 351.93, 911.20, 1120.294, 1620.50, 1764.49, 2118.514],
gain_range=[0.1, 5.],
de_max=50.,
)
plot_calibrator(cal)
# Success! The cross-validation used in `AutoCalibrator.fit` was able to find a better match.
# ## Example 4: CsI detector with Ba-133 and Cs-137 sources
#
# This data is from a small detector with Ba-133 and Cs-137 sources near it. We want to use those sources' lines and any strong background lines to calibrate it.
counts = []
filename = os.path.join(os.path.dirname(bq.__file__), '../tests/samples/SGM102432.csv')
with open(filename, 'r') as f:
for line in f:
tokens = line.strip().split(',')
if len(tokens) == 2:
counts.append(float(tokens[1]))
spec = bq.Spectrum(counts=counts)
plot_spec(spec)
# +
kernel = bq.GaussianPeakFilter(2400, 120, 30)
finder = bq.PeakFinder(spec, kernel)
cal = bq.AutoCalibrator(finder)
# find significant peaks
cal.peakfinder.find_peaks(min_snr=3, xmin=200)
print(cal.peakfinder.centroids)
print(cal.peakfinder.snrs)
plt.figure()
plt.title('find_peaks')
cal.peakfinder.plot()
plt.tight_layout()
# -
cal.fit(
[356.0129, 661.657, 1460.82],
optional=[911.20, 1120.294, 1764.49, 2614.3],
gain_range=[0.5, 0.7],
de_max=100.,
)
plot_calibrator(cal)
# This last plot reveals that the 1460 keV peak does not quite line up with the calibration, so this detector probably exhibits a significant nonlinearity and would have to be calibrated with a more sophisticated method.
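# As a hypothetical follow-up (not part of the becquerel calls shown above), one could fit a quadratic channel-to-energy calibration to matched (channel, energy) pairs with NumPy; the channel values below are made up for illustration:

```python
import numpy as np

# hypothetical matched pairs: peak centroids (channels) and known line energies (keV)
channels = np.array([550.0, 1020.0, 2250.0])
energies = np.array([356.0, 661.7, 1460.8])

coeffs = np.polyfit(channels, energies, deg=2)  # quadratic channel -> energy map
cal = np.poly1d(coeffs)
print(cal(1020.0))  # reproduces 661.7 (3 points, degree-2 fit is exact)
```

# With only three points the degree-2 fit interpolates exactly; with more lines the fit becomes a least-squares estimate and the residuals quantify the nonlinearity.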
|
examples/autocal.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pickle as pk
import matplotlib.pylab as plt
import numpy as np
from numpy.linalg import norm
from math import sqrt, exp
# %matplotlib inline
from PyKEP import *
import seaborn as sns
sns.set_style("whitegrid")
from matplotlib import rc
rc('font',**{'family':'sans-serif','sans-serif':['Helvetica']})
## for Palatino and other serif fonts use:
#rc('font',**{'family':'serif','serif':['Palatino']})
rc('text', usetex=True)
# We load the data file obtained from the optimization (in low-thrust) of many legs (GTOC7) for the maximum initial mass:
#
# ast1, ast2, $t_1$, $t_2$, $\Delta V_L$, $m^*$, $m^*_L$, $m^*_D$
#
#
# We want to study how well $m^*_D$ and $m^*_L$ approximate $m^*$
#
a = pk.load(open("slide14.pkl","rb"), encoding='latin1')
print("DATA = ", a)
print("ONE ROW = ", a[0])
# ### Visualizing the data
#
#
plt.rcParams['figure.figsize'] = [6,4]
plt.rcParams['axes.labelsize'] = 14
plt.rcParams['xtick.labelsize'] = 14
plt.rcParams['ytick.labelsize'] = 14
plt.rcParams['legend.fontsize'] = 12
# +
plt.figure()
skip=20
plt.scatter( a[::skip,4], a[::skip,5], marker='x', alpha=1,edgecolor="none", label="$m^*$ - ground truth")
plt.scatter( a[::skip,4], a[::skip,6], marker='.', color='b',alpha=1,edgecolor="none", label="$m_L^*$ - Lambert's approximation")
plt.scatter( a[::skip,4], a[::skip,7], marker='.', color='r',alpha=1,edgecolor="none", label="$m_D^*$ - MIMA")
#plt.hexbin( a[:,5]/a[:,4], a[:,7] / a[:,6], bins='log')
plt.ylabel("Kg")
plt.xlabel("$ \Delta V_L$, [m/s]")
plt.ylim(200,2000)
plt.xlim(1000,6000)
plt.legend()
plt.tight_layout(pad=1)
plt.savefig("slide14.png")
# -
b = a[a[:,5] < 2000]
b = b[b[:,5] > 500]
RMSE = np.sqrt(np.mean((b[:,5] - b[:,7])**2))
RMSE_LAMBERT = np.sqrt(np.mean((b[:,5] - b[:,6])**2))
MAE = np.mean(np.abs((b[:,5] - b[:,7])))
MAE_LAMBERT = np.mean(np.abs((b[:,5] - b[:,6])))
print(RMSE)
print(RMSE_LAMBERT)
print(MAE)
print(MAE_LAMBERT)
b[0]
len(b)
|
fast-approx-ML-slide14.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# # Exploratory Data Analysis
import pandas as pd
dados = pd.read_csv('dados/tips.csv')
dados.head(5)
dados.columns
traducao = {
'total_bill' : 'valor_da_conta',
'tip' : 'gorjeta',
'dessert' : 'sobremesa',
'day' : 'dia_da_semana',
'time' : 'hora_do_dia',
'size' : 'total_de_pessoas'
}
gorjetas = dados.rename(columns = traducao)
gorjetas
gorjetas.sobremesa.value_counts()
sim_nao = {
'No' : 'Não',
'Yes' : 'Sim'
}
gorjetas.sobremesa.replace(sim_nao, inplace = True)
gorjetas
gorjetas.sobremesa.value_counts(normalize = True)
gorjetas.dia_da_semana.unique()
dias = {'Sun' : 'domingo',
'Sat' : 'Sábado',
'Thur' : 'Quinta',
'Fri' : 'Sexta'
}
gorjetas.dia_da_semana.replace(dias, inplace = True)
gorjetas.hora_do_dia.unique()
hora ={
'Dinner' : 'Jantar',
'Lunch' : 'Almoço'
}
gorjetas.hora_do_dia.replace(hora, inplace =True)
gorjetas
import seaborn as sns
sns.__version__
# # Analysis 1: Bill amount and tip
sns.set_style('darkgrid')
ax = sns.scatterplot(x = 'valor_da_conta', y = 'gorjeta', data = gorjetas, hue = 'sobremesa')
ax.set_title('Bill amount vs. tip', loc = 'left', fontsize = 18);
ax.figure.set_size_inches(7,5)
# Visually, the chart suggests that the larger the bill, the larger the tip!
#
print(f'Our dataset contains {gorjetas.shape[0]} rows \n')
print(f'and it has a total of {gorjetas.isna().sum().sum()} null records')
gorjetas.info()
# ### Creating the `porcentagem` (percentage) attribute: the tip divided by the bill amount
gorjetas['porcentagem'] = gorjetas.gorjeta / gorjetas.valor_da_conta
gorjetas.porcentagem = gorjetas.porcentagem.round(2)
gorjetas
ax = sns.scatterplot(x = 'valor_da_conta', y = 'porcentagem', data = gorjetas)
# Visually, the tip is not proportional to the bill amount
# ### Using other seaborn functions, such as `relplot`
ax = sns.relplot(x = 'valor_da_conta', y = 'porcentagem', data = gorjetas, kind = 'line')
ax = sns.lmplot(x = 'valor_da_conta', y = 'porcentagem', data = gorjetas)
ax.set_ylabels('hello')
grafico1 = ax.fig
grafico1.savefig('teste-salvando-gráfico.png')
ax.savefig('teste.png')
ax.fig
# # Analysis 2 - Dessert
selecao = (gorjetas.sobremesa == 'Sim')
gorjetas[selecao].describe()
gorjetas[~selecao].describe()
sns.catplot(x = 'sobremesa', y = 'gorjeta', data = gorjetas)
sns.relplot(x = 'valor_da_conta', y = 'gorjeta', data = gorjetas, hue = 'sobremesa')
import matplotlib.pyplot as plt
sns.relplot(x = 'valor_da_conta', y = 'gorjeta', data = gorjetas, col = 'sobremesa', hue = 'sobremesa')
sns.lmplot(x = 'valor_da_conta', y = 'gorjeta', data = gorjetas, col = 'sobremesa', hue = 'sobremesa')
sns.lmplot(x = 'valor_da_conta', y = 'porcentagem', data = gorjetas, col = 'sobremesa', hue = 'sobremesa')
# Visually, there is a difference in the tips of those who ordered dessert and those who did not
sns.relplot(x = 'valor_da_conta', y = 'gorjeta', data = gorjetas, col = 'sobremesa', hue = 'sobremesa', kind = 'line')
# ## Hypothesis test (tip x dessert)
# ### H <sup>null</sup> -> **the tip distribution is the same in both dessert scenarios**
# ### H <sup>alt</sup> -> **the tip distribution is not the same in both dessert scenarios**
from scipy.stats import ranksums
sobremesa = gorjetas.query("sobremesa == 'Sim'").porcentagem
sem_sobremesa = gorjetas.query("sobremesa == 'Não'").porcentagem
ranksums(sobremesa, sem_sobremesa)
# Since the p-value is above 0.05, we fail to reject the null hypothesis: the data do not show a significant difference in tip percentage between the two dessert groups
# # Analysis 3 - Day of the week
sns.catplot(x = 'dia_da_semana', y = 'valor_da_conta', data = gorjetas)
sns.relplot(x = 'valor_da_conta', y = 'gorjeta', data = gorjetas, hue = 'dia_da_semana')
sns.relplot(x = 'valor_da_conta', y = 'gorjeta', data = gorjetas, hue = 'dia_da_semana', col = 'dia_da_semana')
sns.lmplot(x = 'valor_da_conta', y = 'gorjeta', data = gorjetas, hue = 'dia_da_semana', col = 'dia_da_semana')
sns.relplot(x = 'valor_da_conta', y = 'porcentagem', data = gorjetas, hue = 'dia_da_semana', col = 'dia_da_semana')
sns.lmplot(x = 'valor_da_conta', y = 'porcentagem', data = gorjetas, hue = 'dia_da_semana', col = 'dia_da_semana')
# # Descriptive analysis of the data
# **Overall mean tip and mean tip per day**
round(gorjetas.gorjeta.mean(), 2)
gorjetas.groupby(['dia_da_semana']).mean()
print('Frequency of the days of the week in the dataset')
gorjetas.dia_da_semana.value_counts()
# ### Hypothesis test
# #### H null -> the bill-amount distribution is the same on Saturday and Sunday
# #### H alt -> the bill-amount distribution is not the same on Saturday and Sunday
valor_da_conta_domingo = gorjetas.query("dia_da_semana == 'domingo'").valor_da_conta
valor_da_conta_sabado = gorjetas.query("dia_da_semana == 'Sábado'").valor_da_conta
ranksums(valor_da_conta_domingo, valor_da_conta_sabado)
# # Analysis 4 - Meal time
sns.catplot(x = 'hora_do_dia', y = 'valor_da_conta', data = gorjetas)
sns.catplot(x = 'hora_do_dia', y = 'valor_da_conta', data = gorjetas, kind = 'swarm')
sns.violinplot(x = 'hora_do_dia', y = 'valor_da_conta', data = gorjetas)
sns.boxplot(x = 'hora_do_dia', y = 'valor_da_conta', data = gorjetas)
almoco = gorjetas.query("hora_do_dia == 'Almoço'").valor_da_conta
sns.histplot(almoco)
jantar = gorjetas.query("hora_do_dia == 'Jantar'").valor_da_conta
sns.histplot(jantar)
|
Exploratory data analysis.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 2
# language: python
# name: python2
# ---
# # Ex -
# ### Introduction:
#
# This time you will create a data
#
# Special thanks to: https://github.com/justmarkham for sharing the dataset and materials.
#
# ### Step 1. Import the necessary libraries
# ### Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/justmarkham/DAT8/master/data/chipotle.tsv).
# ### Step 3. Assign it to a variable called
# ### Step 4.
# ### Step 5.
# ### Step 6.
# ### Step 7.
# ### Step 8.
# ### Step 9.
# ### Step 10.
# ### Step 11.
# ### Step 12.
# ### Step 13.
# ### Step 14.
# ### Step 15.
# ### Step 16.
# ### BONUS: Create your own question and answer it.
|
Template/Solutions.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# <!--BOOK_INFORMATION-->
# <img align="left" style="padding-right:10px;" src="figures/PDSH-cover-small.png">
#
# *This notebook contains an excerpt from the [Python Data Science Handbook](http://shop.oreilly.com/product/0636920034919.do) by <NAME>; the content is available [on GitHub](https://github.com/jakevdp/PythonDataScienceHandbook).*
#
# *The text is released under the [CC-BY-NC-ND license](https://creativecommons.org/licenses/by-nc-nd/3.0/us/legalcode), and code is released under the [MIT license](https://opensource.org/licenses/MIT). If you find this content useful, please consider supporting the work by [buying the book](http://shop.oreilly.com/product/0636920034919.do)!*
# <!--NAVIGATION-->
# < [Data Manipulation with Pandas](03.00-Introduction-to-Pandas.ipynb) | [Contents](Index.ipynb) | [Data Indexing and Selection](03.02-Data-Indexing-and-Selection.ipynb) >
#
# <a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.01-Introducing-Pandas-Objects.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
#
# # Introducing Pandas Objects
# At the very basic level, Pandas objects can be thought of as enhanced versions of NumPy structured arrays in which the rows and columns are identified with labels rather than simple integer indices.
# As we will see during the course of this chapter, Pandas provides a host of useful tools, methods, and functionality on top of the basic data structures, but nearly everything that follows will require an understanding of what these structures are.
# Thus, before we go any further, let's introduce these three fundamental Pandas data structures: the ``Series``, ``DataFrame``, and ``Index``.
#
# We will start our code sessions with the standard NumPy and Pandas imports:
# + jupyter={"outputs_hidden": true}
import numpy as np
import pandas as pd
# -
# ## The Pandas Series Object
#
# A Pandas ``Series`` is a one-dimensional array of indexed data.
# It can be created from a list or array as follows:
# + jupyter={"outputs_hidden": false}
data = pd.Series([0.25, 0.5, 0.75, 1.0])
data
# -
# As we see in the output, the ``Series`` wraps both a sequence of values and a sequence of indices, which we can access with the ``values`` and ``index`` attributes.
# The ``values`` are simply a familiar NumPy array:
# + jupyter={"outputs_hidden": false}
data.values
# -
# The ``index`` is an array-like object of type ``pd.Index``, which we'll discuss in more detail momentarily.
# + jupyter={"outputs_hidden": false}
data.index
# -
# Like with a NumPy array, data can be accessed by the associated index via the familiar Python square-bracket notation:
# + jupyter={"outputs_hidden": false}
data[1]
# + jupyter={"outputs_hidden": false}
data[1:3]
# -
# As we will see, though, the Pandas ``Series`` is much more general and flexible than the one-dimensional NumPy array that it emulates.
# ### ``Series`` as generalized NumPy array
# From what we've seen so far, it may look like the ``Series`` object is basically interchangeable with a one-dimensional NumPy array.
# The essential difference is the presence of the index: while the NumPy array has an *implicitly defined* integer index used to access the values, the Pandas ``Series`` has an *explicitly defined* index associated with the values.
#
# This explicit index definition gives the ``Series`` object additional capabilities. For example, the index need not be an integer, but can consist of values of any desired type.
# If we wish, we can use strings as an index:
# + jupyter={"outputs_hidden": false}
data = pd.Series([0.25, 0.5, 0.75, 1.0],
index=['a', 'b', 'c', 'd'])
data
# -
# And the item access works as expected:
# + jupyter={"outputs_hidden": false}
data['b']
# -
# We can even use non-contiguous or non-sequential indices:
# + jupyter={"outputs_hidden": false}
data = pd.Series([0.25, 0.5, 0.75, 1.0],
index=[2, 5, 3, 7])
data
# + jupyter={"outputs_hidden": false}
data[5]
# -
# ### Series as specialized dictionary
#
# In this way, you can think of a Pandas ``Series`` a bit like a specialization of a Python dictionary.
# A dictionary is a structure that maps arbitrary keys to a set of arbitrary values, and a ``Series`` is a structure which maps typed keys to a set of typed values.
# This typing is important: just as the type-specific compiled code behind a NumPy array makes it more efficient than a Python list for certain operations, the type information of a Pandas ``Series`` makes it much more efficient than Python dictionaries for certain operations.
#
# The ``Series``-as-dictionary analogy can be made even more clear by constructing a ``Series`` object directly from a Python dictionary:
# + jupyter={"outputs_hidden": false}
population_dict = {'California': 38332521,
'Texas': 26448193,
'New York': 19651127,
'Florida': 19552860,
'Illinois': 12882135}
population = pd.Series(population_dict)
population
# -
# By default, a ``Series`` will be created where the index is drawn from the sorted keys.
# From here, typical dictionary-style item access can be performed:
# + jupyter={"outputs_hidden": false}
population['California']
# -
# Unlike a dictionary, though, the ``Series`` also supports array-style operations such as slicing:
# + jupyter={"outputs_hidden": false}
population['California':'Illinois']
# -
# We'll discuss some of the quirks of Pandas indexing and slicing in [Data Indexing and Selection](03.02-Data-Indexing-and-Selection.ipynb).
# ### Constructing Series objects
#
# We've already seen a few ways of constructing a Pandas ``Series`` from scratch; all of them are some version of the following:
#
# ```python
# >>> pd.Series(data, index=index)
# ```
#
# where ``index`` is an optional argument, and ``data`` can be one of many entities.
#
# For example, ``data`` can be a list or NumPy array, in which case ``index`` defaults to an integer sequence:
# + jupyter={"outputs_hidden": false}
pd.Series([2, 4, 6])
# -
# ``data`` can be a scalar, which is repeated to fill the specified index:
# + jupyter={"outputs_hidden": false}
pd.Series(5, index=[100, 200, 300])
# -
# ``data`` can be a dictionary, in which ``index`` defaults to the sorted dictionary keys:
# + jupyter={"outputs_hidden": false}
pd.Series({2:'a', 1:'b', 3:'c'})
# -
# In each case, the index can be explicitly set if a different result is preferred:
# + jupyter={"outputs_hidden": false}
pd.Series({2:'a', 1:'b', 3:'c'}, index=[3, 2])
# -
# Notice that in this case, the ``Series`` is populated only with the explicitly identified keys.
# ## The Pandas DataFrame Object
#
# The next fundamental structure in Pandas is the ``DataFrame``.
# Like the ``Series`` object discussed in the previous section, the ``DataFrame`` can be thought of either as a generalization of a NumPy array, or as a specialization of a Python dictionary.
# We'll now take a look at each of these perspectives.
# ### DataFrame as a generalized NumPy array
# If a ``Series`` is an analog of a one-dimensional array with flexible indices, a ``DataFrame`` is an analog of a two-dimensional array with both flexible row indices and flexible column names.
# Just as you might think of a two-dimensional array as an ordered sequence of aligned one-dimensional columns, you can think of a ``DataFrame`` as a sequence of aligned ``Series`` objects.
# Here, by "aligned" we mean that they share the same index.
#
# To demonstrate this, let's first construct a new ``Series`` listing the area of each of the five states discussed in the previous section:
# + jupyter={"outputs_hidden": false}
area_dict = {'California': 423967, 'Texas': 695662, 'New York': 141297,
'Florida': 170312, 'Illinois': 149995}
area = pd.Series(area_dict)
area
# -
# Now that we have this along with the ``population`` Series from before, we can use a dictionary to construct a single two-dimensional object containing this information:
# + jupyter={"outputs_hidden": false}
states = pd.DataFrame({'population': population,
'area': area})
states
# -
# Like the ``Series`` object, the ``DataFrame`` has an ``index`` attribute that gives access to the index labels:
# + jupyter={"outputs_hidden": false}
states.index
# -
# Additionally, the ``DataFrame`` has a ``columns`` attribute, which is an ``Index`` object holding the column labels:
# + jupyter={"outputs_hidden": false}
states.columns
# -
# Thus the ``DataFrame`` can be thought of as a generalization of a two-dimensional NumPy array, where both the rows and columns have a generalized index for accessing the data.
# ### DataFrame as specialized dictionary
#
# Similarly, we can also think of a ``DataFrame`` as a specialization of a dictionary.
# Where a dictionary maps a key to a value, a ``DataFrame`` maps a column name to a ``Series`` of column data.
# For example, asking for the ``'area'`` attribute returns the ``Series`` object containing the areas we saw earlier:
# + jupyter={"outputs_hidden": false}
states['area']
# -
# Notice the potential point of confusion here: in a two-dimensional NumPy array, ``data[0]`` will return the first *row*. For a ``DataFrame``, ``data['col0']`` will return the first *column*.
# Because of this, it is probably better to think about ``DataFrame``s as generalized dictionaries rather than generalized arrays, though both ways of looking at the situation can be useful.
# We'll explore more flexible means of indexing ``DataFrame``s in [Data Indexing and Selection](03.02-Data-Indexing-and-Selection.ipynb).
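# To make the distinction concrete, here is a minimal sketch (not from the book text) contrasting column access with positional row access via ``iloc``, using a small ``states``-style frame:

```python
import pandas as pd

population = pd.Series({'California': 38332521, 'Texas': 26448193})
area = pd.Series({'California': 423967, 'Texas': 695662})
states = pd.DataFrame({'population': population, 'area': area})

col = states['area']      # column access: a Series of areas, indexed by state
row = states.iloc[0]      # positional row access: a Series of that row's fields
print(col['Texas'])       # 695662
print(row['population'])  # 38332521
```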
# ### Constructing DataFrame objects
#
# A Pandas ``DataFrame`` can be constructed in a variety of ways.
# Here we'll give several examples.
# #### From a single Series object
#
# A ``DataFrame`` is a collection of ``Series`` objects, and a single-column ``DataFrame`` can be constructed from a single ``Series``:
# + jupyter={"outputs_hidden": false}
pd.DataFrame(population, columns=['population'])
# -
# #### From a list of dicts
#
# Any list of dictionaries can be made into a ``DataFrame``.
# We'll use a simple list comprehension to create some data:
# + jupyter={"outputs_hidden": false}
data = [{'a': i, 'b': 2 * i}
for i in range(3)]
pd.DataFrame(data)
# -
# Even if some keys in the dictionary are missing, Pandas will fill them in with ``NaN`` (i.e., "not a number") values:
# + jupyter={"outputs_hidden": false}
pd.DataFrame([{'a': 1, 'b': 2}, {'b': 3, 'c': 4}])
# -
# #### From a dictionary of Series objects
#
# As we saw before, a ``DataFrame`` can be constructed from a dictionary of ``Series`` objects as well:
# + jupyter={"outputs_hidden": false}
pd.DataFrame({'population': population,
'area': area})
# -
# #### From a two-dimensional NumPy array
#
# Given a two-dimensional array of data, we can create a ``DataFrame`` with any specified column and index names.
# If omitted, an integer index will be used for each:
# + jupyter={"outputs_hidden": false}
pd.DataFrame(np.random.rand(3, 2),
columns=['foo', 'bar'],
index=['a', 'b', 'c'])
# -
# #### From a NumPy structured array
#
# We covered structured arrays in [Structured Data: NumPy's Structured Arrays](02.09-Structured-Data-NumPy.ipynb).
# A Pandas ``DataFrame`` operates much like a structured array, and can be created directly from one:
# + jupyter={"outputs_hidden": false}
A = np.zeros(3, dtype=[('A', 'i8'), ('B', 'f8')])
A
# + jupyter={"outputs_hidden": false}
pd.DataFrame(A)
# -
# ## The Pandas Index Object
#
# We have seen here that both the ``Series`` and ``DataFrame`` objects contain an explicit *index* that lets you reference and modify data.
# This ``Index`` object is an interesting structure in itself, and it can be thought of either as an *immutable array* or as an *ordered set* (technically a multi-set, as ``Index`` objects may contain repeated values).
# Those views have some interesting consequences in the operations available on ``Index`` objects.
# As a simple example, let's construct an ``Index`` from a list of integers:
# + jupyter={"outputs_hidden": false}
ind = pd.Index([2, 3, 5, 7, 11])
ind
# -
# ### Index as immutable array
#
# The ``Index`` in many ways operates like an array.
# For example, we can use standard Python indexing notation to retrieve values or slices:
# + jupyter={"outputs_hidden": false}
ind[1]
# + jupyter={"outputs_hidden": false}
ind[::2]
# -
# ``Index`` objects also have many of the attributes familiar from NumPy arrays:
# + jupyter={"outputs_hidden": false}
print(ind.size, ind.shape, ind.ndim, ind.dtype)
# -
# One difference between ``Index`` objects and NumPy arrays is that indices are immutable–that is, they cannot be modified via the normal means:
# + jupyter={"outputs_hidden": false}
ind[1] = 0
# -
# This immutability makes it safer to share indices between multiple ``DataFrame``s and arrays, without the potential for side effects from inadvertent index modification.
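# As a small illustration (an added sketch, not from the book text), two ``Series`` objects can safely share one ``Index``; an attempted mutation raises ``TypeError`` rather than silently changing the labels seen by both:

```python
import pandas as pd

ind = pd.Index(['a', 'b', 'c'])
s1 = pd.Series([1, 2, 3], index=ind)
s2 = pd.Series([4.0, 5.0, 6.0], index=ind)

# mutation is refused, so neither Series can be corrupted through the shared index
try:
    ind[0] = 'z'
    mutated = True
except TypeError:
    mutated = False

print(mutated)                    # False
print(s1.index.equals(s2.index))  # True
```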
# ### Index as ordered set
#
# Pandas objects are designed to facilitate operations such as joins across datasets, which depend on many aspects of set arithmetic.
# The ``Index`` object follows many of the conventions used by Python's built-in ``set`` data structure, so that unions, intersections, differences, and other combinations can be computed in a familiar way:
# + jupyter={"outputs_hidden": false}
indA = pd.Index([1, 3, 5, 7, 9])
indB = pd.Index([2, 3, 5, 7, 11])
# + jupyter={"outputs_hidden": false}
indA & indB # intersection
# + jupyter={"outputs_hidden": false}
indA | indB # union
# + jupyter={"outputs_hidden": false}
indA ^ indB # symmetric difference
# -
# These operations may also be accessed via object methods, for example ``indA.intersection(indB)``.
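# A short sketch of those method forms; note that in recent pandas versions the operator spellings above are deprecated for ``Index`` objects, so the methods are the safer choice:

```python
import pandas as pd

indA = pd.Index([1, 3, 5, 7, 9])
indB = pd.Index([2, 3, 5, 7, 11])

# method forms of the set operations shown above
inter = indA.intersection(indB)
union = indA.union(indB)
sym = indA.symmetric_difference(indB)

print(list(inter))  # [3, 5, 7]
print(list(union))  # [1, 2, 3, 5, 7, 9, 11]
print(list(sym))    # [1, 2, 9, 11]
```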
# <!--NAVIGATION-->
# < [Data Manipulation with Pandas](03.00-Introduction-to-Pandas.ipynb) | [Contents](Index.ipynb) | [Data Indexing and Selection](03.02-Data-Indexing-and-Selection.ipynb) >
#
# <a href="https://colab.research.google.com/github/jakevdp/PythonDataScienceHandbook/blob/master/notebooks/03.01-Introducing-Pandas-Objects.ipynb"><img align="left" src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open in Colab" title="Open and Execute in Google Colaboratory"></a>
#
|
PythonDataScienceHandbook-master/notebooks/03.01-Introducing-Pandas-Objects.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# Import Libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import string
# %matplotlib inline
# Load Dataset
data = pd.read_csv('Dataset/All-Pra&Pasca.csv', sep=",")
# Count the tweets in each sentiment class
r_stat = data['Sentiment'].groupby(data['Sentiment']).count()
temp = r_stat.values
# +
# Plotting Pie
def func(pct, allvals):
absolute = int(pct/100.*np.sum(allvals))
return "{:.1f}%\n{:d}".format(pct, absolute)
plt.figure(figsize = (8,8))
plt.pie(temp,explode=(0.1,0),labels=['negative','positive'],shadow=True,colors=['#A3FBFF','#ADFFA3'],
        autopct=lambda pct: func(pct, temp),startangle=90)
plt.title('Sentiment Class Comparison',fontsize=18)
plt.axis('equal')
plt.legend(fontsize=11)
plt.show()
# -
# Load Dataset
data1 = pd.read_csv('Dataset/All-Pra & Pasca ND Clean Angka.csv', sep=",")
# Count the tweets in each sentiment class
r_stat = data1['Sentiment'].groupby(data1['Sentiment']).count()
temp = r_stat.values
# +
# Plotting Pie
def func(pct, allvals):
absolute = int(pct/100.*np.sum(allvals))
return "{:.1f}%\n{:d}".format(pct, absolute)
plt.figure(figsize = (8,8))
plt.pie(temp,explode=(0.1,0),labels=['negative','positive'],shadow=True,colors=['#A3FBFF','#ADFFA3'],
        autopct=lambda pct: func(pct, temp),startangle=90)
plt.title('Sentiment Class Comparison',fontsize=18)
plt.axis('equal')
plt.legend(fontsize=11)
plt.show()
# -
|
.ipynb_checkpoints/Visualize + --checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# ## Sunday, June 13, 2021
# ### BaekJoon - Print in Chunks of Ten (Python)
# ### Problem: https://www.acmicpc.net/problem/11721
# ### Blog: https://somjang.tistory.com/entry/BaekJoon-11721%EB%B2%88-%EC%97%B4-%EA%B0%9C%EC%94%A9-%EB%81%8A%EC%96%B4-%EC%B6%9C%EB%A0%A5%ED%95%98%EA%B8%B0-Python
# ### Solution
input_string = input()
string_length = len(input_string)
for i in range(0, string_length, 10):
print(input_string[i:i+10])
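# For reference, the same chunking can be sketched with the standard library's `textwrap` module (an alternative, not required by the problem; it assumes the input contains no whitespace, which holds here since the problem input is letters only):

```python
import textwrap

def chunks_of_ten(s):
    # break_long_words=True (the default) splits a single long word every 10 chars
    return "\n".join(textwrap.wrap(s, 10))

print(chunks_of_ten("BaekjoonOnlineJudge"))
```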
|
DAY 301 ~ 400/DAY394_[BaekJoon] 열 개씩 끊어 출력하기 (Python).ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python [Root]
# language: python
# name: Python [Root]
# ---
# +
import pandas as pd
import datetime
import bankInfoExtractor as bie
import taggedToTree as ttt
import subprocess
import timestring
# +
def stoinfo(s):
#TBD: Remove hard coding of paths
HOME_PATH = 'set_to_home_path'
PROJECT_PATH = HOME_PATH + 'set_to_where_syntaxnet_is_installed'
#Eg: /home/user/syntaxnet/models/syntaxnet
CWD = PROJECT_PATH
SD_PATH = CWD + '/syntaxnet'
DEMO_CMD = 'demo.sh'
cmd = 'echo "' + s + '" | ' + SD_PATH + '/' + DEMO_CMD + ' 2>/dev/null'
    # check_output returns bytes under Python 3, so decode before splitting
    sno = subprocess.check_output(cmd, shell=True, cwd=CWD).decode()
    sno_st = '\n'.join(sno.split('\n')[2:]).rstrip()
    print(sno_st)
    print("-----")
root = ttt.str_to_tree(sno_st)
# ttt.print_node(root, 0)
# print "--- clean up tree to remove unnecessary nodes---"
root = bie.cleanTree(root)
# ttt.print_node(root, 0)
# print "Extracting (keyword,adj)"
(keyword,adj) = bie.bankKeyWord(root)
# print "keyword: " + keyword
# print "adj: " + adj
# print "Extracting Timeline"
r = bie.timeLine(root)
return (keyword, adj, r)
# -
key, adj, r = stoinfo("average balance from january 2016 to july 2016")
print('key=%s, adj=%s, r=%s' % (key, adj, r))
# +
df = pd.read_csv('citistmt.csv', header=1)
df.columns = ['Date', 'Description', 'Amount','Balance']
df.loc[:,'Balance'] = df.Balance.map(lambda x: ''.join(x.split(',')))
df.loc[:,'Balance'] = df.Balance.astype(float)
df.loc[:,'Date'] = pd.to_datetime(df['Date'])
df.set_index(['Date'], drop=True, inplace=True)
df.head()
# -
df.tail(30)
# +
def get_balance(adj, r, tdf):
"""
adj: avg, min, max, curr
"""
start = datetime.date(r.start.year, r.start.month, r.start.day)
end = datetime.date(r.end.year, r.end.month, r.end.day)
if adj == 'curr':
return tdf.iloc[-1].Balance
if adj == 'avg':
return tdf[start:end].Balance.mean()
def act(key, adj, r, tdf):
if key == 'balance':
if adj == 'curr':
            print("Current Balance: %0.2f" % get_balance(adj, r, tdf))
elif adj == 'avg':
            print("Avg Balance for Timeline: %s: %0.2f" % (r, get_balance(adj, r, tdf)))
def decipher(s, tdf):
key, adj, r = stoinfo(s)
    print('key=%s, adj=%s, r=%s\n-------\n' % (key, adj, r))
act(key,adj,r, tdf)
# -
decipher("latest balance", df)
decipher("average balance from august 2015 to june 2016", df)
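# The 'avg' branch of get_balance relies on pandas DatetimeIndex slicing being inclusive on both endpoints. A self-contained sketch of that behavior with synthetic data (dates and balances are made up):

```python
import datetime

import pandas as pd

# Synthetic statement: one balance per day (illustrative values)
dates = pd.date_range('2016-01-01', periods=6, freq='D')
tdf = pd.DataFrame({'Balance': [100.0, 200.0, 300.0, 400.0, 500.0, 600.0]},
                   index=dates)

# Slicing a DatetimeIndex with date objects includes both endpoints
start = datetime.date(2016, 1, 2)
end = datetime.date(2016, 1, 4)
print(tdf[start:end].Balance.mean())   # (200 + 300 + 400) / 3 = 300.0
```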
r.start.year
td = datetime.date(r.start.year, r.start.month, r.start.day)
start = datetime.date(r.start.year, r.start.month, r.start.day)
end = datetime.date(r.end.year, r.end.month, r.end.day)
start
end
df[start:end].Balance.mean()
|
rough.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import math
import matplotlib.pyplot as plt
# %matplotlib inline
from math import pi
# importing Qiskit
from qiskit import Aer, IBMQ
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
from qiskit.tools.visualization import plot_histogram
# -
IBMQ.load_accounts()
# $$
# QFT: \vert j_1 j_2 j_3 ... j_n \rangle \rightarrow \frac{1}{2^{n/2}} (\vert 0 \rangle + e^{2 \pi i (0.j_n)}\vert 1 \rangle) (\vert 0 \rangle + e^{2 \pi i (0.j_{n-1} j_n)}\vert 1 \rangle) (\vert 0 \rangle + e^{2 \pi i (0.j_{n-2} j_{n-1} j_n)}\vert 1 \rangle) .... (\vert 0 \rangle + e^{2 \pi i (0.j_1 j_2 ... j_n)}\vert 1 \rangle)
# $$
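# The product form above is just the discrete Fourier transform of the computational-basis amplitudes: for input $\vert j \rangle$, the output amplitudes are $e^{2\pi i jk/2^n}/\sqrt{2^n}$. A quick numpy cross-check, with no Qiskit needed (numpy's `ifft` uses the same $+i$ sign convention as the QFT):

```python
import numpy as np

n = 3
N = 2 ** n            # dimension of the n-qubit state space
j = 5                 # computational-basis input state |j>

# Direct QFT amplitudes: exp(2*pi*i*j*k/N) / sqrt(N)
k = np.arange(N)
qft_amps = np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

# Same amplitudes via the (rescaled) inverse DFT of the basis vector
e_j = np.zeros(N)
e_j[j] = 1.0
ifft_amps = np.sqrt(N) * np.fft.ifft(e_j)

print(np.allclose(qft_amps, ifft_amps))   # True
```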
def qft(circuit, reg, n):
circuit.h(reg)
circuit.barrier()
for i in range(n):
k = n-1-i
for j in range(k):
circuit.cu1(math.pi/(2 ** (j+1)) , reg[i+j+1], reg[i])
    # reverse the qubit order: swap qubit i with qubit n-1-i
    for i in range(n // 2):
        circuit.swap(reg[i], reg[n - 1 - i])
# +
qreg = QuantumRegister(3)
creg = ClassicalRegister(3)
circ = QuantumCircuit(qreg,creg)
circ.draw(output = 'mpl')
circ.x(qreg[0])
circ.x(qreg[2])
circ.barrier()
qft(circ, qreg, 3)
circ.measure(qreg, creg)
# -
circ.draw(output = 'mpl')
# +
backend = Aer.get_backend("qasm_simulator")
simulate = execute(circ, backend=backend, shots=99999).result()
results = simulate.get_counts()
plot_histogram(results)
# +
# Use the IBM Quantum Experience
backend = least_busy(IBMQ.backends(simulator=False))
shots = 1024
job_exp = execute(circ, backend=backend, shots=shots)
job_monitor(job_exp)
results = job_exp.result()
plot_histogram(results.get_counts())
# -
|
QFT.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + id="sK8sCW2vpbx7"
import librosa
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# %matplotlib inline
import os
import csv
# Preprocessing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, StandardScaler, MinMaxScaler
# model
from sklearn.linear_model import LogisticRegression, LinearRegression
# from catboost import CatBoostClassifier
# from xgboost import XGBClassifier, XGBRFClassifier
from sklearn.model_selection import KFold,StratifiedKFold
from lightgbm import LGBMRegressor, LGBMClassifier
# from sklearn.neural_network import MLPRegressor, MLPClassifier
from sklearn.metrics import mean_absolute_error as mae, roc_auc_score,accuracy_score
from sklearn.multioutput import MultiOutputClassifier
# Keras
import keras
# save the model to disk
import pickle
# -
# # Read data
# + colab={"base_uri": "https://localhost:8080/", "height": 232} id="c6nqgVmfv19-" outputId="a92d63ff-c728-4f2f-b2cf-84496dc8e29d"
train = pd.read_csv('../data/train_new_feat.csv')
train.head()
# -
# # Data cleaning
# define target labels
LABELS = [' tenderness', ' calmness', ' power',
' joyful_activation', ' tension', ' sadness',
]
#define useless features
USELESS = [' amazement', ' solemnity',' nostalgia',' mother tongue',
'genre',' liked', 'sample_silence',' disliked' ]
# remove unlabelled data to avoid overfitting
train['sum'] = train[LABELS].sum(axis=1)
train = train[train['sum'] > 0]
train = train.drop(columns='sum')
train.head()
# + colab={"base_uri": "https://localhost:8080/", "height": 232} id="iDJlkxwcrbdm" outputId="79f15a26-6b09-4595-c7d3-25dac1c04d78"
# remove useless features
train.drop(USELESS,axis=1,inplace=True)
train.head()
# -
# drop duplicated data to avoid overfitting
train = train.drop_duplicates()
# + id="RTDQoHQmrrRA"
# get train and target
y_train = train[LABELS]
X_train = train.drop(columns=LABELS)
# -
# # Outlier treatment
# + id="Ii8xllVmVg37"
# predict audio features outliers
def predict_audio_feat_outlier(db):
"""
This function globally detects outliers
Parameters:
db (DataFrame): the dataframe containing the values to be cleaned
Returns:
db_tr (DataFrame): the dataframe with detected outliers
"""
db_tr = db.copy()
clf = IForest()
clf.fit(db_tr.drop(columns=['track id',' mood',' age']))
db_tr['is_outlier'] = clf.predict(db_tr.drop(columns=['track id',' mood',' age']))
return db_tr
# -
#treat audio features outliers
def treat_audio_feat_outlier(db):
"""
This function globally treats outliers
Parameters:
db (DataFrame): the dataframe containing the outliers to be cleaned
Returns:
new_db (DataFrame): the dataframe with cleaned outliers
"""
features = db.columns
features = features.drop(['track id',' mood',' gender', ' age','is_outlier'])
mask = db['is_outlier'] == 1
for f in features:
db.loc[mask, f] = db[f].median()
return db.drop(columns=['is_outlier'])
# + id="R5SEcYtNYkiI"
# predict and treat outliers for a given feature
def predict_and_treat_outlier(feat,db):
"""
This function treats outliers for a given feature
Parameters:
db (DataFrame): the dataframe containing the outliers to be cleaned
feat (string): the given feature
Returns:
db (DataFrame): the dataframe with cleaned outliers
"""
X = db[[feat]]
clf = IForest()
clf.fit(X)
db['is_outlier'] = clf.predict(X)
db_cleaned = db.copy()
mask = db_cleaned['is_outlier'] == 1
db_cleaned.loc[mask, feat] = db_cleaned[feat].median()
return db_cleaned.drop(columns=['is_outlier'])
# + colab={"base_uri": "https://localhost:8080/", "height": 232} id="eFGMKvLQWFyq" outputId="bd8cc06e-091a-46bd-bedf-1e58a842ed57"
from pyod.models.iforest import IForest
from pyod.models.knn import KNN
# predict and treat age outliers
X_train_age = predict_and_treat_outlier(' age',X_train) # performed 74% (roc_auc_score = 74%)
X_train_age.head()
# predict audio features outliers
# X_train = predict_audio_feat_outlier(X_train)
# treat audio features outliers
#X_train = treat_audio_feat_outlier(X_train)
# -
X_train_age.columns
X_train.shape
# # Model training
# ## Set the batch size and epochs for the DNN and the LSTM model
# + id="Rt9u_XTz3RKi"
# batch size and epochs for the DNN and the LSTM model
batch_size=128
epochs = 5
X_train = np.expand_dims(X_train.values, axis=-1)
# -
# ## Import all the useful module for the training
from keras import models
from keras import layers
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Dense, Input, LSTM, Embedding, Dropout, Activation,Conv1D,Flatten
from keras.layers import Bidirectional, GlobalMaxPool1D
from keras.models import Model
from keras import initializers, regularizers, constraints, optimizers, layers
from keras.optimizers import Adam
from sklearn.model_selection import train_test_split
# ## DNN model
# The DNN model scored 77% on the MFCC features and 77% with the new features; no improvement was observed.
# After outlier treatment, the model scored 75%.
# Conclusion: the DNN model is not stable and the results are not reproducible.
# +
inp = Input(shape=(35,))
# note: a GlobalMaxPool1D layer here would fail on a rank-2 input, and its output was unused
x = Dense(256, activation='relu')(inp)
x = Dropout(0.5)(x)
x = Dense(256, activation='relu')(x)
x = Dropout(0.5)(x)
x = Dense(9, activation='softmax')(x)
model = Model(inputs=inp, outputs=x)
# -
# ## LSTM model
#
# The LSTM model scored 77% with the MFCC features and 77% with the new features; no improvement was observed.
# After outlier treatment the model's score remained the same;
# thus, there was no improvement even after the training was run for 10 epochs.
inp = Input(shape=(32,1))
x = LSTM(50, return_sequences=True)(inp)
x = GlobalMaxPool1D()(x)
x = Dropout(0.1)(x)
x = Dense(50, activation="relu")(x)
x = Dropout(0.1)(x)
x = Dense(9, activation="softmax")(x)
model = Model(inputs=inp, outputs=x)
# + id="S92TgZ-jswx2"
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# + id="SfZJJ83qh9GS"
model.summary()
# + colab={"base_uri": "https://localhost:8080/"} id="vYc6oIzEs009" outputId="4ba366e6-4bb0-40e5-86c3-129abfb6e486"
print(y_train.shape)
history = model.fit(X_train,y_train,epochs=epochs,batch_size=batch_size,validation_split=0.2)
# + id="ZYPkUXSM3ly9"
import matplotlib.pyplot as plt
plt.style.use('ggplot')
def plot_history(history):
""" This functions plot the model's loss and accuracy on graphs
parameters:
history (model) : the model to evaluate
returns:
-
"""
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
x = range(1, len(acc) + 1)
plt.figure(figsize=(12, 5))
plt.subplot(1, 2, 1)
plt.plot(x, acc, 'b', label='Training acc')
plt.plot(x, val_acc, 'r', label='Validation acc')
plt.title('Training and validation accuracy')
plt.legend()
plt.subplot(1, 2, 2)
plt.plot(x, loss, 'b', label='Training loss')
plt.plot(x, val_loss, 'r', label='Validation loss')
plt.title('Training and validation loss')
plt.legend()
# + colab={"base_uri": "https://localhost:8080/", "height": 337} id="zHJ-RuUN30Iy" outputId="196a5075-63d0-4d04-d7a5-8ed0b0670160"
plot_history(history)
# -
# ## LGBM model
# + colab={"base_uri": "https://localhost:8080/"} id="MdexQvADxibP" outputId="be4553e2-15f9-4c39-f032-359e59253a4c"
# Model Validation
epochs = 5
# KFold Validation
kf = KFold(n_splits=epochs, shuffle=True, random_state=1997) # 30, n_split 3
y_oof = np.zeros([X_train_age.shape[0], len(LABELS)])
i = 0
# Data normalization
SCALE = True
if SCALE:
scaler = MinMaxScaler()
X_train_age[X_train_age.columns.drop(['track id'])] = scaler.fit_transform(
X_train_age.drop(columns=['track id']))
# Model Validation
for tr_idx, val_idx in kf.split(X_train_age, y_train):
X_tr, X_vl = X_train_age.iloc[tr_idx, :], X_train_age.iloc[val_idx, :]
y_tr, y_vl = y_train.iloc[tr_idx, :], y_train.iloc[val_idx, :]
X_tr = X_tr.drop(columns=['track id'] )
X_vl = X_vl.drop(columns=['track id'] )
model = MultiOutputClassifier(LGBMClassifier(n_estimators=10, random_state=47))
model.fit(X_tr, y_tr)
    y_pred = np.zeros((X_vl.shape[0], len(LABELS)))
    # use a separate loop variable so the fold counter i is not clobbered
    for col, proba in enumerate(model.predict_proba(X_vl)):
        y_pred[:, col] = proba[:, 1]
    y_oof[val_idx, :] = y_pred
    i += 1
acc = roc_auc_score(y_vl, y_pred, multi_class='ovr')
print(f"Fold #{i} AUC : {round(acc, 2)}")
metric = roc_auc_score(y_train, y_oof, multi_class='ovr')
print(f"Full AUC : {round(metric, 2)}")
# -
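# The loop above fills an out-of-fold matrix `y_oof` so that every row is scored by a model that never saw it. The index bookkeeping can be sketched without any model, using the fold number as a stand-in prediction (numpy only; sizes are illustrative):

```python
import numpy as np

n_samples, n_splits = 10, 5
rng = np.random.default_rng(1997)
folds = np.array_split(rng.permutation(n_samples), n_splits)  # disjoint validation folds

y_oof = np.full(n_samples, -1.0)
for fold_no, val_idx in enumerate(folds):
    # stand-in for model.predict_proba on the held-out fold
    y_oof[val_idx] = fold_no

# every sample received exactly one out-of-fold "prediction"
print((y_oof >= 0).all())   # True
```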
# ## Model interpretation:
# 1- With the initial features :
# LGBM model scored 79.4%
# 2- After adding the new features :
# LGBM scored 79.55%
#
# 3- After both audio and age features outliers treatment:
# LGBM scored 79.36% ==> the model's performance slightly decreased
#
# 4- After only age feature treatment (Indeed, the 'age' feature is the most important feature of the model:
# LGBM scored 79.57%
#
# ## Observation after testing the model:
# The model is overfitting especially with 3 of the emotion features : amazement, solemnity and nostalgia.
# Thus, we decided to work on 6 of the emotions and remove the 3 others.
# Performance: LGBM scored 74%
# # Save model
# +
from pure_sklearn.map import convert_estimator
filename = '../model_saved/music_emotion_classifier_model.sav'
clf_pure_predict = convert_estimator(model)
pickle.dump(clf_pure_predict, open(filename, 'wb'))
# -
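# The pickle round-trip itself is independent of the model type. A minimal stdlib-only sketch with a stand-in object (the path below is a temp file, not the real model path):

```python
import os
import pickle
import tempfile

# Stand-in for a fitted model object
model_stub = {'labels': ['neg', 'pos'], 'threshold': 0.5}

path = os.path.join(tempfile.mkdtemp(), 'model.sav')
with open(path, 'wb') as fh:
    pickle.dump(model_stub, fh)
with open(path, 'rb') as fh:
    restored = pickle.load(fh)

print(restored == model_stub)   # True
```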
# # Test the model on existing test data
def get_features(y, sr, id):
'''
This function extracts audio features from an audio file.
Parameters:
        id (string): the audio track id
        y (np.ndarray): the audio time series
        sr (int): the sampling rate
Returns:
audio_features (DataFrame): the extracted audio features
'''
# Features to concatenate in the final dictionary
features = {'chroma_sftf': None, 'rolloff': None, 'zero_crossing_rate': None, 'rmse': None,
'flux': None, 'contrast': None, 'flatness': None}
print(id)
# Using librosa to calculate the features
features['chroma_sftf'] = np.mean(librosa.feature.chroma_stft(y=y, sr=sr))
features['rolloff'] = np.mean(librosa.feature.spectral_rolloff(y, sr=sr))
features['zero_crossing_rate'] = np.mean(librosa.feature.zero_crossing_rate(y))
features['rmse'] = np.mean(librosa.feature.rms(y))
features['flux'] = np.mean(librosa.onset.onset_strength(y=y, sr=sr))
features['contrast'] = np.mean(librosa.feature.spectral_contrast(y, sr=sr))
features['flatness'] = np.mean(librosa.feature.spectral_flatness(y))
# MFCC treatment
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
for idx, v_mfcc in enumerate(mfcc):
features['mfcc_{}'.format(idx)] = np.mean(v_mfcc)
features['tempo'] = librosa.beat.tempo(y, sr=sr)[0]
features['track_id'] = id
return features
def read_process_songs(audiofile, debug = True):
"""
This function reads an audio file.
Parameters:
audiofile (string): the audio file path
Returns:
audio_features (DataFrame): the extracted audio features
"""
# Empty array of dicts with the processed features from all files
arr_features = []
# Read the audio file
signal, sr = librosa.load(audiofile,duration=30)
#pre-emphasis before extracting features
signal_filt = librosa.effects.preemphasis(signal)
track_id = audiofile.replace(".wav","")
# Append the result to the data structure
features = get_features(signal_filt,sr,track_id)
arr_features.append(features)
return arr_features
# +
from pydub import AudioSegment
def convert_to_wav(src,dst):
'''
This function converts any mp3 file into wav format
Parameters:
src (string): audio file source (path)
Returns:
dst (string): new source of the converted audio file
'''
    # convert mp3 to wav
sound = AudioSegment.from_mp3(src)
sound.export(dst, format="wav")
return dst
# -
audio_file = f'C:/Music/Retro/Aerosmith - I Don\'t Want to Miss a Thing (Official Music Video).wav'
print(audio_file)
test_data = read_process_songs(audio_file,debug=False)
df_test = pd.DataFrame(test_data)
df_test.columns
df_test[' mood'] = 3
df_test[' gender'] = 1
df_test[' age'] = 23
df_test.head()
df_test.columns
emotion_clf = pickle.load(open(filename, 'rb'))
def predict_proba(test, model):
'''
This function predicts the music genre
Parameters:
test (DataFrame): audio features
model (model): the music emotion identifier
Returns:
y_pred (DataFrame): probability of music emotions
'''
y_pred = np.zeros((test.shape[0],len(LABELS)))
print(test.columns)
df_test = test.drop(columns=['track_id'])
for i, j in enumerate(model.predict_proba(df_test)):
y_pred[:,i] = j[:, 1]
y_pred = pd.DataFrame(y_pred)
y_pred.columns = LABELS
return y_pred
pred = predict_proba(df_test, emotion_clf)
print(pred)
print(LABELS[np.argmax(pred)])
|
notebook/.ipynb_checkpoints/Music_Emotion_Classifier-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# +
import os
#import sys
dir_parent = os.getcwd() #os.path.abspath(os.path.join(os.getcwd(), os.pardir))
#sys.path.append(dir_parent) # add parent directory to path
import time
import json
import mlflow
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from src.utils_beeps import beeps
from sklearn.metrics import roc_auc_score
from sklearn.metrics import precision_score
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.preprocessing import RobustScaler #StandardScaler
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
#from sklearn.decomposition import PCA
from src.utils_general import get_df_parquet
from src.utils_model import get_ls_col
from src.utils_model import plot_pr_curve
from src.utils_model import save_pr_curve
from src.utils_model import save_feature_importance
dir_train = os.path.join(dir_parent, 'data', 'train')
dir_models = os.path.join(dir_parent, 'data', 'models')
dir_images = os.path.join(dir_parent, 'data', 'images')
dir_mlflow = 'file:' + os.sep + os.path.join(dir_parent, 'mlflow')
mlflow.set_tracking_uri(dir_mlflow)
class SklearnModelWrapper(mlflow.pyfunc.PythonModel):
def __init__(self, model):
self.model = model
def predict(self, context, model_input):
return self.model.predict_proba(model_input)[:,1]
class RandomForestClassifierFlow():
def __init__(self, label, params={}, tags={}, n_pca=None):
tags['model'] = 'RandomForestClassifier'
self.model = RandomForestClassifier(**params)
self.label = label
self.params = params
self.tags = tags
self.n_pca = n_pca
def mlflow_run(self, df):
with mlflow.start_run() as run:
run_id = run.info.run_uuid
experiment_id = run.info.experiment_id
# train test split
X = df.drop(columns=[self.label]).copy()
y = df[self.label].copy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)
# pipeline
float_cols = df.select_dtypes(include='float64').columns
preprocessor = ColumnTransformer([
#('MinMaxScaler', MinMaxScaler(), float_cols),
('RobustScaler', RobustScaler(), float_cols),
#('OneHotEncoder', OneHotEncoder(), cat_cols),
], remainder='passthrough')
full_pipe = Pipeline(steps=[
('preprocessor', preprocessor),
#('pca', PCA(n_components = self.n_pca)),
('model', self.model),])
# fit
t_start = time.time()
full_pipe.fit(X_train, y_train)
t_training = time.time() - t_start
# predict
t_start = time.time()
y_test_pred_proba = full_pipe.predict_proba(X_test)
t_prediction = time.time() - t_start
# score
proba_threshold = 0.75
metrics = {
'auroc':roc_auc_score(y_test, y_test_pred_proba[:,1]),
'precision':precision_score(y_test, (y_test_pred_proba[:,1]>proba_threshold)),
't_training':t_training,
't_prediction':t_prediction,
}
# log params, metrics, tags
mlflow.log_params(self.params)
mlflow.log_metrics(metrics)
mlflow.set_tags(self.tags)
# pr curve, feature importance
save_pr_curve(y_test, y_test_pred_proba[:,1])
save_feature_importance(X_test.columns, full_pipe['model'].feature_importances_)
mlflow.log_artifact(dir_images)
# log Model
#mlflow.sklearn.log_model(full_pipe, artifact_path='model')
#wrapped_model = SklearnModelWrapper(full_pipe)
#mlflow.pyfunc.log_model('model', python_model=wrapped_model)
return full_pipe
beeps()
[print(f" '{x}',") for x in os.listdir(dir_train) if x[-8:]=='.parquet'];
# -
# # Prep data
# +
# df_train - Import
ls_f = [
'df_train_20210203_2045.parquet',
'df_train_20210203_2050.parquet',
'df_train_20210203_2128.parquet',
'df_train_20210203_2220.parquet',
'df_train_20210203_2227.parquet',
'df_train_20210203_2250.parquet',
'df_train_20210203_2251.parquet',
'df_train_20210204_2207.parquet',
'df_train_20210204_2214.parquet',
'df_train_20210206_1613.parquet',
'df_train_20210215_1924.parquet',
'df_train_20210301_2220.parquet',
]
df = get_df_parquet(ls_f, dir_train)
# df_train - get dates
df = df[df['datetime'].dt.date.astype('str')>='2020-06-29']
inputs_date_start = df['datetime'].dt.date.astype('str').unique().min()
inputs_date_end = df['datetime'].dt.date.astype('str').unique().max()
print(inputs_date_start, inputs_date_end)
# df_train - Remove outliers and non-relevant data
#(divergence=='bull_reg' or divergence=='bull_hid')\
q = '''
divergence=='bull_reg'\
and prev_close>5\
and abs(sma9_var)<0.02\
and abs(sma180_var)<0.2\
and abs(vwap_var)<0.2\
and abs(spread14_e)<0.02\
and abs(prev_close_var)<0.5\
and abs(prev_floor_var)<0.5\
and abs(prev_ceil_var)<0.5\
and abs(prev1_candle_score)<0.02\
and abs(prev2_candle_score)<0.02\
and abs(prev3_candle_score)<0.02\
and mins_from_start<300\
and valley_interval_mins<200\
and valley_close_score<10\
and abs(day_open_var)<1.5\
and abs(open_from_prev_close_var)<0.4\
and abs(ceil_var)<0.2\
and abs(floor_var)<0.2\
and abs(day_sma9_var)<1\
and abs(day_sma180_var)<2\
and prev_close_var<0
'''
df = df.query(q)
# df_train - Remove unwanted columns
ls_col_remove = [
'sym',
'datetime',
'prev_close',
'divergence',
'profit',
###
#'sma9_var',
#'prev_close_var',
#'ceil_var',
#'prev_ceil_var',
###
#'rsi14',
#'volume14_34_var',
'valley_rsi_score',
'prev1_candle_score',
'prev2_candle_score',
'prev3_candle_score',
'valley_interval_mins',
'floor_var',
]
df = df.drop(columns=ls_col_remove)
ls_col = list(df.drop(columns='is_profit'))
# df-train - Preview
df.info()
# -
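# The multi-line `q` string above works because a backslash at the end of a line inside a triple-quoted literal joins it to the next line, producing one boolean expression, and `df.query` supports `abs()` via numexpr. A toy sketch of the same pattern (column names match two of the filters above; values are made up):

```python
import pandas as pd

# Toy frame with two of the columns filtered above (illustrative values)
toy = pd.DataFrame({'prev_close': [4.0, 6.0, 10.0],
                    'sma9_var':   [0.01, 0.05, -0.015]})

# The trailing backslash joins the two lines into one query expression
q_toy = '''
prev_close>5\
 and abs(sma9_var)<0.02
'''
filtered = toy.query(q_toy)
print(filtered)   # only the row with prev_close=10.0, sma9_var=-0.015 survives
```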
# # Run test
params = {
'criterion': 'entropy',
'max_depth': 1000,
'max_features': 'sqrt',
'min_samples_leaf': 4,
'min_samples_split': 5,
'n_estimators': 600,
###
'n_jobs': -1,
'random_state': 42,
}
params = {
'n_jobs': -1,
'random_state': 42,
}
params = {
'criterion': 'entropy',
'max_depth': 1000,
'max_features': 'log2',
'min_samples_leaf': 8,
'min_samples_split': 5,
'n_estimators': 600,
###
'n_jobs': -1,
'random_state': 42,
}
params = {
'max_depth': 2048,
'max_features': 3,
'min_samples_leaf': 2,
'min_samples_split': 2,
'n_estimators': 600,
###
'n_jobs': -1,
'random_state': 42,
}
tags = {
'inputs_date_start':inputs_date_start,
'inputs_date_end':inputs_date_end,
'df_train files':str(ls_f),
'features':str(ls_col),
'query':q,
'df len':df.shape[0],
'comments':'',
}
label = 'is_profit'
rfcf = RandomForestClassifierFlow(label, params, tags)
full_pipe = rfcf.mlflow_run(df)
beeps()
# # Save model
import pickle
import datetime
timestamp = datetime.datetime.now().strftime("%Y-%m-%d_%H%M")
tup_model = (q, ls_col, full_pipe)
f = os.path.join(dir_models, f'tup_model_{timestamp}.p')
pickle.dump(tup_model, open(f, 'wb'))
f
# # RandomizedSearchCV
# +
import pprint
random_grid = {
'n_estimators': [800], # Number of trees in random forest
'max_depth': [2048, None],
'max_features': [2, 3, 4, 5],
'min_samples_leaf': [1, 2, 4],
'min_samples_split': [2, 4, 8],
}
random_grid = {
'criterion': ['entropy', 'gini'],
'max_depth': [1000, 2000, 3000, 4000],
'max_features': ['auto', 'sqrt','log2', None],
'min_samples_leaf': [4, 6, 8, 12],
'min_samples_split': [5, 7, 10, 14],
'n_estimators': [400, 600, 800]
}
pprint.pprint(random_grid)
# +
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import RandomizedSearchCV
# Use the random grid to search for best hyperparameters
model = RandomForestClassifier()
model_random_cv = RandomizedSearchCV(
estimator = model,
param_distributions = random_grid,
n_iter = 10,
cv = StratifiedKFold(n_splits=5, random_state=42, shuffle=True),
verbose=2,
random_state=42,
n_jobs = -1,
scoring = 'roc_auc',
refit=False,
)
# Run random search
y = df['is_profit'].copy()
X = df.drop(columns=['is_profit']).copy()
model_random_cv.fit(X, y)
# -
# Print results
for tup in sorted(zip(model_random_cv.cv_results_['params'], model_random_cv.cv_results_['mean_test_score']), key = lambda x: x[1], reverse=1):
pprint.pprint(tup[0])
print(round(tup[1], 5))
print()
beeps()
# # Correlation
# correlation matrix
df_corr = df.drop(columns=['is_profit']).corr().round(2)
fig, ax = plt.subplots(figsize=(12,12))
ax = sns.heatmap(df_corr, vmin=-.8, vmax=.8, square=1, annot=True)
# # Boxplots
fig = plt.figure(figsize = (20, 25))
for i, col in enumerate(df.drop(columns='is_profit').columns):
print(i, end=' ')
plt.subplot(((df.shape[1]-1)//4)+1, 4, i+1)
sns.boxplot(x='is_profit', y=col, data=df)
plt.show()
# # Histograms
df_e = df.drop(columns='is_profit').copy()
df_e.hist(bins=50, figsize=(20,15))
plt.show()
# +
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
df_e = df.drop(columns='is_profit').copy()
df_e[df_e.columns] = scaler.fit_transform(df_e[df_e.columns])
df_e.hist(bins=50, figsize=(20,15))
plt.show()
# -
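# StandardScaler's per-column transform above is just (x - mean) / std; a minimal numpy check of that identity:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
z = (x - x.mean()) / x.std()   # what StandardScaler.fit_transform does per column
print(z.mean(), z.std())       # ~0.0, ~1.0
```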
# # Distplots
fig = plt.figure(figsize = (20, 25))
for i, col in enumerate(df.drop(columns='is_profit').columns):
print(i, end=' ')
plt.subplot(int(len(df.columns)/4)+1, 4, i+1)
#sns.displot(data=df, x=col, hue='is_profit', kind="kde", fill=True)
sns.histplot(data=df, x=col, hue='is_profit', element='step')
#plt.legend(loc='best')
plt.show()
df_s = df.skew(axis=0).reset_index(name='skew')
df_s['normal'] = np.where(df_s['skew'].abs()<2, 1, 0)
df_s
# # Kaggle Dataset Prep
# +
# df_train - Import
ls_f = [
'df_train_20201204_1216.parquet',
'df_train_20201204_1219.parquet',
'df_train_20201212_1545.parquet',
'df_train_20201220_1446.parquet',
'df_train_20201226_1503.parquet',
'df_train_20210112_2304.parquet',
'df_train_20210117_1327.parquet',
'df_train_20210124_1204.parquet',
]
df = get_df_parquet(ls_f, dir_train)
# df_train - Remove outliers and non-relevant data
q = '''
divergence=='bull_reg'\
and prev_close>5\
and abs(sma9_var)<0.02\
and abs(sma180_var)<0.2\
and abs(vwap_var)<0.2\
and abs(spread14_e)<0.02\
and abs(prev_close_var)<0.5\
and abs(prev_floor_var)<0.5\
and abs(prev_ceil_var)<0.5\
and abs(prev1_candle_score)<0.02\
and abs(prev2_candle_score)<0.02\
and abs(prev3_candle_score)<0.02\
and mins_from_start<300\
and valley_interval_mins<200\
and valley_close_score<10\
and abs(day_open_var)<1.5\
and abs(open_from_prev_close_var)<0.4\
and abs(ceil_var)<0.2\
and abs(floor_var)<0.2\
'''
df = df.query(q)
# df_train - get dates
df = df[df['datetime'].dt.date.astype('str')>='2020-06-29']
inputs_date_start = df['datetime'].dt.date.astype('str').unique().min()
inputs_date_end = df['datetime'].dt.date.astype('str').unique().max()
print(inputs_date_start, inputs_date_end)
# df_train - Remove unwanted columns
ls_col_remove = [
#'sym',
#'datetime',
'prev_close',
'divergence',
'profit',
]
df = df.drop(columns=ls_col_remove)
ls_col = [
'is_profit',
'sym',
'datetime',
'rsi14',
'sma9_var',
'sma180_var',
'vwap_var',
'spread14_e',
'volume14_34_var',
'prev_close_var',
'prev_floor_var',
'prev_ceil_var',
'prev1_candle_score',
'prev2_candle_score',
'prev3_candle_score',
'mins_from_start',
'valley_interval_mins',
'valley_close_score',
'valley_rsi_score',
'day_open_var',
'open_from_prev_close_var',
'ceil_var',
'floor_var',
]
df = df[ls_col]
# df-train - Preview
df.to_csv('train.csv', index=0)
df.info()
|
notebooks/temp.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
# + [markdown] toc=true
# <h1>Table of Contents<span class="tocSkip"></span></h1>
# <div class="toc"><ul class="toc-item"><li><span><a href="#Question-1" data-toc-modified-id="Question-1-1"><span class="toc-item-num">1 </span>Question 1</a></span><ul class="toc-item"><li><span><a href="#Question-1a-solution" data-toc-modified-id="Question-1a-solution-1.1"><span class="toc-item-num">1.1 </span>Question 1a solution</a></span></li><li><span><a href="#Question-1b-solution" data-toc-modified-id="Question-1b-solution-1.2"><span class="toc-item-num">1.2 </span>Question 1b solution</a></span></li><li><span><a href="#Question-1c-solution:" data-toc-modified-id="Question-1c-solution:-1.3"><span class="toc-item-num">1.3 </span>Question 1c solution:</a></span></li></ul></li><li><span><a href="#Question-2" data-toc-modified-id="Question-2-2"><span class="toc-item-num">2 </span>Question 2</a></span><ul class="toc-item"><li><span><a href="#Question-2a-solution" data-toc-modified-id="Question-2a-solution-2.1"><span class="toc-item-num">2.1 </span>Question 2a solution</a></span></li><li><span><a href="#Question-2b-solution" data-toc-modified-id="Question-2b-solution-2.2"><span class="toc-item-num">2.2 </span>Question 2b solution</a></span></li></ul></li><li><span><a href="#Question-3" data-toc-modified-id="Question-3-3"><span class="toc-item-num">3 </span>Question 3</a></span><ul class="toc-item"><li><span><a href="#Question-3-solution" data-toc-modified-id="Question-3-solution-3.1"><span class="toc-item-num">3.1 </span>Question 3 solution</a></span></li></ul></li><li><span><a href="#Question-4" data-toc-modified-id="Question-4-4"><span class="toc-item-num">4 </span>Question 4</a></span><ul class="toc-item"><li><span><a href="#Question-4-solution" data-toc-modified-id="Question-4-solution-4.1"><span class="toc-item-num">4.1 </span>Question 4 solution</a></span></li></ul></li></ul></div>
# -
import numpy as np
import a301.radiation
# # Question 1
# - (10) A satellite orbiting at an altitude of 36000 km observes the
# surface in a thermal channel with a wavelength range of
# $8\ \mu m < \lambda < 10\ \mu m$.
#
# - Assuming that the atmosphere has density scale height of
# $H_\rho=10$ km and a surface air density of $\rho_{air}=1$
# and that the absorber has mass absorption coefficient of
# $k_\lambda = 3 \times 10^{-2}\ m^2/kg$ at $\lambda=9\ \mu m$
# and a mixing ratio $6 \times 10^{-3}$ kg/kg, find the vertical
# optical thickness $\tau$ and transmittance of the atmosphere
# directly beneath the satellite
#
# - If the surface is black with a temperature of 300 K, and the
# atmosphere has an average temperature of 270 K, find the
#
# - radiance observed by the satellite at 9 $\mu m$
#
# - brightness temperature of the pixel in Kelvin for that
# radiance
#
# - Given a pixel size 2 $km^2$, what is the flux, in $W\,m^{-2}$, reaching
# the satellite in this channel?
#
#
# ## Question 1a solution
#
# Assuming that the atmosphere has density scale height of
# $H_\rho=10$ km and a surface air density of $\rho_{air}=1$
# and that the absorber has mass absorption coefficient of
# $k_\lambda = 3 \times 10^{-2}\ m^2/kg$ at $\lambda=9\ \mu m$
# and a mixing ratio $6 \times 10^{-3}$ kg/kg, find the vertical
# optical thickness $\tau$ and transmittance of the atmosphere
# directly beneath the satellite
# $$\rho_{atm} = \rho_0 \exp \left ( -z/H \right )$$
#
# $$H=10\ km$$
#
# $$\tau = \int_0^{3.6e6}k \rho_0 \exp (-z/H ) r_{mix} dz$$
#
# $$\tau = -H k \exp(-z^\prime/H ) \rho_0 r_{mix} \big \rvert_0^{3.6e6} =0 - (-Hk \rho_0 r_{mix})=H k \rho_0 r_{mix} $$
#
# $$t=\exp(-\tau)$$
H=10000.
k=3.e-2
rho0=1.
rmix=6.e-3
tau = H*k*rho0*rmix
t=np.exp(-tau)
print(f'optical thickness τ={tau} and transmittance t={t:5.2f}')
# ## Question 1b solution
# - If the surface is black with a temperature of 300 K, and the
# atmosphere has an average temperature of 270 K, find the
#
# - radiance observed by the satellite at 9 $\mu m$
#
# - brightness temperature of the pixel in Kelvin for that
# radiance
# $$L_{sat}= B_\lambda(300)\,\exp(-\tau) + (1 - \exp(-\tau))\,B_\lambda(270)$$
t=np.exp(-tau)
e=1 - t
L270=a301.radiation.calc_radiance(9.e-6,270)
L300=a301.radiation.calc_radiance(9.e-6,300)
Lsat = t*L300 + e*L270
print(Lsat)
Tbright=a301.radiation.planck_invert(9.e-6,Lsat)
print(f'radiance is {Lsat*1.e-6:5.2f} W/m^2/microns/sr')
print(f'brightness temperature is {Tbright:5.2f} K')
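# The `calc_radiance` and `planck_invert` helpers come from the course-specific `a301.radiation` module. For readers without it, a minimal Planck-law sketch (standard SI constants; `planck_radiance` is a name introduced here, not part of that module) reproduces the radiance above:

```python
import numpy as np

# Physical constants (SI)
h = 6.62607e-34   # Planck constant, J s
c = 2.99792e8     # speed of light, m/s
kb = 1.38065e-23  # Boltzmann constant, J/K

def planck_radiance(wavelength, temp):
    """Blackbody radiance B_lambda in W/m^2/m/sr."""
    return (2. * h * c**2 / wavelength**5 /
            (np.exp(h * c / (wavelength * kb * temp)) - 1.))

# Radiance at 9 microns for the 300 K surface and 270 K atmosphere
L300 = planck_radiance(9.e-6, 300.)
L270 = planck_radiance(9.e-6, 270.)
tau = 1.8
Lsat = np.exp(-tau) * L300 + (1. - np.exp(-tau)) * L270
print(f'{Lsat * 1.e-6:5.2f} W/m^2/micron/sr')
```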
# ## Question 1c solution:
# - Given a pixel size of 2 $km^2$, what is the flux, in $W\,m^{-2}$, reaching
# the satellite in this channel?
#
#
# $\Delta \omega = A/R^2 = 2/36000^2 = 1.54 \times 10^{-9}$ sr
#
# $E = L \,\Delta \omega \,\Delta \lambda = 6.15\ W\,m^{-2}\,\mu m^{-1}\,sr^{-1} \times 1.54 \times 10^{-9}\ sr \times 2\ \mu m$
#
Eout=6.15*1.54e-9*2
print(f'flux in channel is {Eout:5.2g} W/m^2')
# # Question 2
#
# ## Question 2a solution
#
# - (3) A cone has a spreading angle of 35 degrees between its
# center and its side. What is its subtended solid angle?
#
# $$\omega = \int_0^{2\pi} \int_0^{35^\circ} \sin \theta \,d\theta \,d\phi = 2\pi \left(-\cos \theta \big \rvert_0^{35^\circ}\right) = 2 \pi (1 - \cos 35^\circ)$$
omega = 2*np.pi*(1 - np.cos(35*np.pi/180.))
print(f'solid angle = {omega:5.2f} sr')
#
# ## Question 2b solution
#
# - (3) Assuming that radiance is independent of the distance $d$
# between an instrument and a surface, show that the flux from the
# surface decreases as $1/d^2$
#
# Given the narrow field of view of a pixel, the flux is approximately:
#
# $$E \approx L \Delta \omega$$
#
# where $\Delta \omega = A/d^2$ with A the area of the pixel. Since $L$ is constant, $E \propto 1/d^2$
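# A quick numerical illustration of the $1/d^2$ scaling (the values of $L$ and $A$ are toy numbers, not from the exam): doubling the distance cuts the flux by a factor of four.

```python
# Flux from a pixel of area A at two distances, with constant radiance L
L = 10.0     # radiance, arbitrary units
A = 2.0e6    # pixel area, m^2
for d in (1.0e5, 2.0e5):
    E = L * A / d**2   # E ≈ L Δω with Δω = A/d²
    print(d, E)
```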
# # Question 3
#
# Integrate the Schwarzschild equation for constant temperature
#
# ## Question 3 solution
#
# 1. We know the emission from an infinitesimally thin layer:
#
# $$ dL_{emission} = B_{\lambda} (T_{layer}) de_\lambda = B_{\lambda} (T_{layer}) d\tau_\lambda$$
#
#
# 2. Add the gain from $dL_{emission}$ to the loss from $dL_{absorption}$ to get
#    the **Schwarzschild equation** without scattering:
#
# $$ dL_{\lambda,absorption} + dL_{\lambda,emission} = -L_\lambda\, d\tau_\lambda + B_\lambda (T_{layer})\, d\tau_\lambda $$
#
# 3. We can rewrite the Schwarzschild equation as:
#
# $$ \frac{dL_\lambda}{d\tau_\lambda} = -L_\lambda + B_\lambda (T_{layer})$$
#
# 4. In class I used a change of variables to derive the following: if the temperature $T_{layer}$ (and hence $B_\lambda(T_{layer})$) is constant with height and the radiance arriving at the base of the layer is $L_{\lambda 0} = B_{\lambda}(T_{skin})$ for a black surface with $e_\lambda = 1$, then the total radiance exiting the top of the layer is $L_{\lambda}$, where:
#
# $$ \int_{L_{\lambda 0}}^{L_\lambda} \frac{dL^\prime_\lambda}{L^\prime_\lambda -
# B_\lambda} = - \int_{0}^{\tau_{\lambda T}} d\tau^\prime $$
#
# where the limits of integration run from just above the black surface (where the radiance is
# $L_{\lambda 0}$ and $\tau=0$) to the top of the layer (where the radiance is $L_\lambda$ and the optical thickness is $\tau_{\lambda T}$).
#
# To integrate this, make the change of variables:
#
#
#
#
# \begin{align}
# U^\prime &= L^\prime_\lambda - B_\lambda \\
# dU^\prime &= dL^\prime_\lambda\\
# \frac{dL^\prime_\lambda}{L^\prime_\lambda -
# B_\lambda} &= \frac{dU^\prime}{U^\prime} = d\ln U^\prime
# \end{align}
#
#
#
# where I have made use of the fact that $dB_\lambda = 0$ since the temperature is constant.
#
# This means that we can now solve this by integrating a perfect differential:
#
# $$
# \int_{U_0}^U d\ln U^\prime = \ln \left (\frac{U}{U_0} \right ) = \ln \left (\frac{L_\lambda - B_\lambda}{L_{\lambda 0} - B_\lambda} \right ) = - \tau_{\lambda T} $$
#
# Taking the $\exp$ of both sides:
#
# $$ L_\lambda - B_\lambda = (L_{\lambda 0} - B_\lambda) \exp (-\tau_{\lambda T}) $$
#
#
# or rearranging and recognizing that the transmittance is $\hat{t_\lambda} = \exp(-\tau_{\lambda T} )$:
#
# $$ L_\lambda = L_{\lambda 0} \exp( -\tau_{\lambda T} ) + B_\lambda (T_{layer})(1- \exp( -\tau_{\lambda T} )) $$
#
#
# $$ L_\lambda = L_{\lambda 0} \hat{t}_{\lambda} + B_\lambda (T_{layer})(1- \hat{t}_{\lambda}) $$
#
# $$ L_\lambda = L_{\lambda 0} \hat{t}_{\lambda} + B_\lambda (T_{layer})a_\lambda $$
#
# 5. So, bringing in Kirchhoff's law, the radiance exiting the top of the isothermal layer of optical thickness $\tau_{\lambda T}$ is:
#
# $$ L_\lambda = L_{\lambda 0} \hat{t}_{\lambda} + e_\lambda B_\lambda $$
#
#
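# The closed form can be cross-checked numerically (a sketch, not part of the original solution): integrate $dL/d\tau = -L + B$ with a forward Euler step and compare against $L = L_0 e^{-\tau_T} + B(1 - e^{-\tau_T})$. The values of $L_0$, $B$, and $\tau_T$ below are arbitrary test inputs.

```python
import numpy as np

# Isothermal layer: dL/dtau = -L + B, with L(0) = L0
L0, B, tau_T = 8.0, 5.0, 1.8
n = 200000
dtau = tau_T / n
L = L0
for _ in range(n):
    L += dtau * (-L + B)   # forward Euler step
analytic = L0 * np.exp(-tau_T) + B * (1. - np.exp(-tau_T))
print(L, analytic)
```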
# # Question 4
# - Pyresample (10)
#
# Consider the following code:
#
# from pyresample import SwathDefinition, kd_tree, geometry
# proj_params = get_proj_params(m5_file)
# swath_def = SwathDefinition(lons_5km, lats_5km)
# area_def_lr=swath_def.compute_optimal_bb_area(proj_dict=proj_params)
# area_def_lr.name="ir wv retrieval modis 5 km resolution (lr=low resolution)"
# area_def_lr.area_id='modis_ir_wv'
# area_def_lr.job_id = area_def_lr.area_id
# fill_value=-9999.
# image_wv_ir = kd_tree.resample_nearest(swath_def, wv_ir_scaled.ravel(),
# area_def_lr, radius_of_influence=5000,
# nprocs=2,fill_value=fill_value)
# image_wv_ir[image_wv_ir < -9000]=np.nan
# print(f'\ndump area definition:\n{area_def_lr}\n')
# print((f'\nx and y pixel dimensions in meters:'
# f'\n{area_def_lr.pixel_size_x}\n{area_def_lr.pixel_size_y}\n'))
#
# In the context of this snippet, explain what the following objects
# are (i.e. their type, what some of their attributes are, etc.) and how
# they are used to map a satellite image:
# ## Question 4 solution
#
# - proj\_params
#
# dictionary holding parameters for a map projection that
# can be used by pyproj to map lat/lon to x/y: datum, lat\_0, lon\_0
# name of projection etc.
#
# - swath\_def
#
# object of type pyresample.geometry.SwathDefinition that holds data and
# functions needed to convert modis pixel lat/lon values to x,y -- pass as input
# to kd_tree_resample_nearest
#
# - area\_def\_lr
#
# object of type pyresample.geometry.AreaDefinition that holds x,y array information
# like number of rows, number of columns and image extent in x and y.
#
# - wv\_ir\_scaled.ravel()
#
# water vapor data scaled to units of cm in the column and converted to a 1-dimensional
# vector using the ravel method.
#
# - kd\_tree.resample\_nearest
#
# function that takes water vapor values and sorts them onto an x,y grid based on
# their lat/lon values from the swath\_def object. This is the mapped image.
# !pwd
|
notebooks/midterm_2018_sols.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import cobra
! [ ! -f "yeastGEM.xml" ] && curl -O -L "https://raw.githubusercontent.com/SysBioChalmers/yeast-GEM/master/ModelFiles/xml/yeastGEM.xml"
model = cobra.io.read_sbml_model("yeastGEM.xml")
import os
os.remove("yeastGEM.xml")
# +
def test_production(model, met_id):
# Add exchange reaction
ex_rxn_id = "EX_" + met_id
ex_rxn = cobra.Reaction(ex_rxn_id)
ex_rxn.lower_bound = 0
ex_rxn.upper_bound = +1000
ex_rxn.add_metabolites({model.metabolites.get_by_id(met_id): -1})
model.add_reactions([ex_rxn])
# Test production:
model.objective = ex_rxn_id
solution = cobra.flux_analysis.pfba(model)
#Display fluxes:
for reaction in model.reactions:
flux = solution.fluxes[reaction.id]
formula = reaction.build_reaction_string(use_metabolite_names=True)
if flux > 1e-3:
print(str(flux) + " - " + reaction.id + ": " + formula)
model.reactions.r_4046.lower_bound = 0
test_production(model, "s_1198[c]")
# -
model.reactions.r_1992.lower_bound = 0
test_production(model, "s_1198[c]")
|
yeast-test-production/notebook.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
import matplotlib.pyplot as plt
# %matplotlib inline
df = pd.read_csv("07-hw-animals.csv")
print(df)
print(df.columns.values)
print(df['animal'])
print(df[:3])
print(df)
print(df.sort_values(by='length', ascending=0)[:3])
print(df['animal'])
print(df['animal'].value_counts())
dogs = df[df['animal'] == 'dog']
dogs
df[df['length'] > 40]
df['inches'] = df['length'] * .394
df
cats = df[df['animal'] == 'cat']
cats
dogs = df[df['animal'] == 'dog']
dogs
cats[cats['inches'] > 12]
df[df['inches'] > 12]
df[df['animal'] == 'cat']
cats['length'].mean()
dogs['length'].mean()
df.groupby('animal')['length'].mean()
dogs['length'].hist()
dogs.plot(kind='scatter', x='length', y='inches')
df.plot(kind='barh', x='name', y='length', legend=False)
sortcats = (cats.sort_values(by='length', ascending=0))
sortcats.plot(kind='barh', x='name', y='length', legend=False, sort_columns=False)
cats
import pandas as pd
df = pd.read_excel("richpeople.xlsx")
# What country are most billionaires from? For the top ones, how many billionaires per billion people?
# Who are the top 10 richest billionaires?
# What's the average wealth of a billionaire? Male? Female?
# Who is the poorest billionaire? Who are the top 10 poorest billionaires?
# 'What is relationship to company'? And what are the most common relationships?
# Most common source of wealth? Male vs. female?
# Given the richest person in a country, what % of the GDP is their wealth?
# Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India
# What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?
# How many self made billionaires vs. others?
# How old are billionaires? How old are billionaires self made vs. non self made? or different industries?
# Who are the youngest billionaires? The oldest? Age distribution - maybe make a graph about it?
# Maybe just made a graph about how wealthy they are in general?
# Maybe plot their net worth vs age (scatterplot)
# Make a bar graph of the top 10 or 20 richest
# # How many female billionaires are there compared to male? What industries are they from? What is their average wealth?
# %matplotlib inline
print(df['gender'].value_counts())
df.groupby('gender')['networthusbillion'].mean()
df.groupby('gender')['sourceofwealth'].value_counts()
# # Let's make a graph 'bout it
df.plot(kind='scatter', x='gender', y='networthusbillion')
|
07-notebook-and-data/.ipynb_checkpoints/Homework7-checkpoint.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
def fib(N):
"""
:type N: int
:rtype: int
"""
cache = {}
def recur_fib(N):
if N in cache:
return cache[N]
if N < 2:
result = N
else:
result = recur_fib(N-1) + recur_fib(N-2)
cache[N] = result
return result
return recur_fib(N)
|
day 8 ass 1.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from joblib import dump, load
df1 = pd.read_pickle("projects_16_18_essay_features_only.pkl")
df2 = pd.read_parquet("tokens_single_file_part-00000-5bdd9372-029e-4ce4-869e-12a302797c34-c000.snappy.parquet")
df3 = pd.read_pickle("other_feats.pkl")
df1.head()
df2.head()
df1.shape, df2.shape, df3.shape
df = df1.merge(df3, left_on = 'Project ID', right_on = 'Project ID').merge(df2, left_on = 'Project ID', right_on = 'ProjectID')
df = df[df['Project Current Status'] != 'Live']
df['Project Current Status'].value_counts()
df['Project Current Status Coded'] = df['Project Current Status'].apply(lambda x: int(x == 'Fully Funded'))
df['Project Current Status Coded'].value_counts(normalize=True)
del df['Project Current Status']
del df['ProjectID']
del df['Project ID']
df.shape
df.head()
df_fully_funded = df[df['Project Current Status Coded'] == 1]
df_expired = df[df['Project Current Status Coded'] == 0]
df_fully_funded = resample(df_fully_funded, n_samples=df_expired.shape[0], replace=False, random_state=0)
df_balance = pd.concat([df_fully_funded, df_expired])
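# The class-balancing step above (downsample the fully-funded majority class to the size of the expired minority class) can be illustrated on toy data:

```python
import pandas as pd
from sklearn.utils import resample

# Toy frame: 8 majority-class rows (label 1), 3 minority-class rows (label 0)
toy = pd.DataFrame({'label': [1] * 8 + [0] * 3, 'x': range(11)})
major = toy[toy['label'] == 1]
minor = toy[toy['label'] == 0]
# Draw len(minor) rows from the majority class without replacement
major_down = resample(major, n_samples=len(minor), replace=False, random_state=0)
balanced = pd.concat([major_down, minor])
print(balanced['label'].value_counts())
```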
Y = df_balance['Project Current Status Coded']
X = df_balance
del X['Project Current Status Coded']
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, random_state=42)
X_train.shape, X_test.shape
clf = LogisticRegression(random_state=0).fit(X_train, y_train)
# clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
clf.score(X_test, y_test)
y_test.value_counts(normalize=True)
len(df2.columns)
dump(clf, 'logit.joblib')
clf = load('logit.joblib')
clf.score(X_test, y_test)
|
modeling/logit.ipynb
|
# ---
# jupyter:
# jupytext:
# text_representation:
# extension: .py
# format_name: light
# format_version: '1.5'
# jupytext_version: 1.14.4
# kernelspec:
# display_name: Python 3
# language: python
# name: python3
# ---
import turicreate as tc
from turicreate import SFrameBuilder
import os
from pycocotools.coco import COCO
import numpy as np
import skimage.io as io
import matplotlib.pyplot as plt
import pylab
from pathlib import Path
import random
dataDir='/Volumes/Data/cocodata'
dataVal='val2017'
annVal='{}/annotations/instances_{}.json'.format(dataDir,dataVal)
dataTrain='train2017'
annTrain='{}/annotations/instances_{}.json'.format(dataDir,dataTrain)
# initialize COCO api for instance annotations
cocoVal=COCO(annVal)
cocoTrain=COCO(annTrain)
catsVal = cocoVal.loadCats(cocoVal.getCatIds())
catsTrain = cocoTrain.loadCats(cocoTrain.getCatIds())
# +
def getCatName(catID):
which = next(i for i,x in enumerate(catsTrain) if x['id']==catID)
name = catsTrain[which]["name"]
return name
# +
nms=[cat['name'] for cat in catsVal]
print('COCO categories: \n{}\n'.format(' '.join(nms)))
nms = set([cat['supercategory'] for cat in catsVal])
print('COCO supercategories: \n{}'.format(' '.join(nms)))
# -
# get all images containing given categories, select one at random
imageIdSet=set()
catNms=['person','dog','cat']
catIds = cocoTrain.getCatIds(catNms=catNms);
imgIds = cocoTrain.getImgIds(catIds=catIds);
#print(catIds, imgIds)
#imgIds = coco.getImgIds(imgIds = [324158])
#img = cocoTrain.loadImgs(imgIds[np.random.randint(0,len(imgIds))])[0]
for catId in catIds:
thisImgIds = cocoTrain.getImgIds(catIds=catId)
random.shuffle(thisImgIds)
thisImgIds = thisImgIds[:4000]
#print(len(thisImgIds))
imageIdSet.update(thisImgIds)
print(len(imageIdSet), len(imgIds))
# +
sb = SFrameBuilder([tc.Image,list],column_names=['image','annotations'])
import ipywidgets as widgets
from IPython.display import display
from decimal import Decimal
out = widgets.Output(layout={'border': '1px solid black'})
twoinplaces = Decimal('0.01')
display(out)
for index,imageId in enumerate(imageIdSet):
img = cocoTrain.loadImgs([imageId])
imgName = img[0]['file_name']
imgPath = '%s/images/%s/%s'%(dataDir,dataTrain,imgName)
out.clear_output(wait=True)
with out:
#image.show()
print(f'process image number: {imgName}, progress: {Decimal((index + 1) / len(imageIdSet) * 100).quantize(twoinplaces)}%')
image=tc.Image(imgPath)
annIds = cocoTrain.getAnnIds(imgIds=[imageId])
anns = cocoTrain.loadAnns(annIds)
annotations=[]
for ann in anns:
thisCatId = ann["category_id"]
if thisCatId in catIds:
name = getCatName(thisCatId)
bbox = ann['bbox']
#print(f' bounding box: {bbox}, this cat: {thisCatId}')
#coco use top/left coordinate, turicreate needs center
bboxDict = {'width': bbox[2], 'height': bbox[3], 'x':bbox[0] + bbox[2] / 2, 'y':bbox[1] + bbox[3] / 2}
annotation={'label':name, 'coordinates': bboxDict}
annotations.append(annotation)
sb.append([image,annotations])
sf = sb.close()
#sf['image_with_ground_truth'] = \
# tc.object_detector.util.draw_bounding_boxes(sf['image'], sf['annotations'])
#sf.explore()
# -
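# The COCO-to-turicreate bounding-box conversion used in the loop above (top-left $x, y$, width, height into a center-based dict) is simple enough to verify in isolation; `coco_to_center` is a name introduced here, not a turicreate API:

```python
def coco_to_center(bbox):
    """Convert a COCO [x_topleft, y_topleft, width, height] box to the
    center-based coordinates dict expected by turicreate's object detector."""
    x, y, w, h = bbox
    return {'width': w, 'height': h, 'x': x + w / 2, 'y': y + h / 2}

print(coco_to_center([10, 20, 100, 50]))
# → {'width': 100, 'height': 50, 'x': 60.0, 'y': 45.0}
```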
sf.save('/Volumes/Data/yoloTiny.sframe')
## sframe above is for training
# +
#sf['image_with_ground_truth'] = \
# tc.object_detector.util.draw_bounding_boxes(sf['image'], sf['annotations'])
#sf.explore()
# -
#validation
imageIdSet=set()
catNms=['person','cat','dog']
catIds = cocoVal.getCatIds(catNms=catNms);
imgIds = cocoVal.getImgIds(catIds=catIds);
#print(catIds, imgIds)
#imgIds = coco.getImgIds(imgIds = [324158])
#img = cocoTrain.loadImgs(imgIds[np.random.randint(0,len(imgIds))])[0]
for catId in catIds:
thisImgIds = cocoVal.getImgIds(catIds=catId)
random.shuffle(thisImgIds)
thisImgIds = thisImgIds[:500]
print(len(thisImgIds))
imageIdSet.update(thisImgIds)
# +
sb_val = SFrameBuilder([tc.Image,list],column_names=['image','annotations'])
import ipywidgets as widgets
from IPython.display import display
from decimal import Decimal
out = widgets.Output(layout={'border': '1px solid black'})
twoinplaces = Decimal('0.01')
display(out)
for index,imageId in enumerate(imageIdSet):
img = cocoVal.loadImgs([imageId])
imgName = img[0]['file_name']
imgPath = '%s/images/%s/%s'%(dataDir,dataVal,imgName)
out.clear_output(wait=True)
with out:
#image.show()
print(f'process image number: {imgName}, progress: {Decimal((index + 1) / len(imageIdSet) * 100).quantize(twoinplaces)}%')
image=tc.Image(imgPath)
    annIds = cocoVal.getAnnIds(imgIds=[imageId])
    anns = cocoVal.loadAnns(annIds)
annotations=[]
for ann in anns:
thisCatId = ann["category_id"]
if thisCatId in catIds:
name = getCatName(thisCatId)
bbox = ann['bbox']
#print(f' bounding box: {bbox}, this cat: {thisCatId}')
#coco use top/left coordinate, turicreate needs center
bboxDict = {'width': bbox[2], 'height': bbox[3], 'x':bbox[0] + bbox[2] / 2, 'y':bbox[1] + bbox[3] / 2}
annotation={'label':name, 'coordinates': bboxDict}
annotations.append(annotation)
sb_val.append([image,annotations])
sf = sb_val.close()
# -
sf.save('/Volumes/Data/yoloVal.sframe')
|
coco2SFrame.ipynb
|